Sample records for maximum clique problem

  1. Estimating landscape carrying capacity through maximum clique analysis

    USGS Publications Warehouse

    Donovan, Therese; Warrington, Greg; Schwenk, W. Scott; Dinitz, Jeffrey H.

    2012-01-01

    Habitat suitability (HS) maps are widely used tools in wildlife science and establish a link between wildlife populations and landscape pattern. Although HS maps spatially depict the distribution of optimal resources for a species, they do not reveal the population size a landscape is capable of supporting--information that is often crucial for decision makers and managers. We used a new approach, "maximum clique analysis," to demonstrate how HS maps for territorial species can be used to estimate the carrying capacity, N(k), of a given landscape. We estimated the N(k) of Ovenbirds (Seiurus aurocapillus) and bobcats (Lynx rufus) in a 1153-km² study area in Vermont, USA. These two species were selected to highlight different approaches in building an HS map as well as computational challenges that can arise in a maximum clique analysis. We derived 30-m² HS maps for each species via occupancy modeling (Ovenbird) and by resource utilization modeling (bobcats). For each species, we then identified all pixel locations on the map (points) that had sufficient resources in the surrounding area to maintain a home range (termed a "pseudo-home range"). These locations were converted to a mathematical graph, where any two points were linked if two pseudo-home ranges could exist on the landscape without violating territory boundaries. We used the program Cliquer to find the maximum clique of each graph. The resulting estimates of N(k) = 236 Ovenbirds and N(k) = 42 female bobcats were sensitive to different assumptions and model inputs. Estimates of N(k) via alternative, ad hoc methods were 1.4 to > 30 times greater than the maximum clique estimate, suggesting that the alternative results may be upwardly biased. The maximum clique analysis was computationally intensive but could handle problems with < 1500 total pseudo-home ranges (points). Given present computational constraints, it is best suited for species that occur in clustered distributions (where the problem can be broken into several, smaller problems), or for species with large home ranges relative to grid scale where resampling the points to a coarser resolution can reduce the problem to manageable proportions.
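
    For readers who want to experiment with the idea, the following sketch mimics the pipeline at toy scale: pseudo-home-range centers become vertices, two vertices are joined when the corresponding territories can coexist without overlapping, and the maximum clique size is the carrying-capacity estimate N(k). The coordinates, the home-range radius, and the use of networkx's exact max_weight_clique routine in place of Cliquer are all illustrative assumptions.

```python
import itertools
import math

import networkx as nx

# Hypothetical pseudo-home-range centers (km) and an assumed territory radius.
points = [(0, 0), (3, 0), (0, 3), (3, 3), (1, 1)]
radius = 1.0  # centers closer than 2*radius would violate territory boundaries

G = nx.Graph()
G.add_nodes_from(range(len(points)))
for i, j in itertools.combinations(range(len(points)), 2):
    if math.dist(points[i], points[j]) >= 2 * radius:  # ranges can coexist
        G.add_edge(i, j)

clique, size = nx.max_weight_clique(G, weight=None)  # exact maximum clique
print("N(k) estimate:", size, "at", [points[v] for v in clique])  # -> 4
```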

  2. Finding Maximum Cliques on the D-Wave Quantum Annealer

    DOE PAGES

    Chapuis, Guillaume; Djidjev, Hristo; Hahn, Georg; ...

    2018-05-03

    This work assesses the performance of the D-Wave 2X (DW) quantum annealer for finding a maximum clique in a graph, one of the most fundamental and important NP-hard problems. Because the size of the largest graphs DW can directly solve is quite small (usually around 45 vertices), we also consider decomposition algorithms intended for larger graphs and analyze their performance. For smaller graphs that fit DW, we provide formulations of the maximum clique problem as a quadratic unconstrained binary optimization (QUBO) problem, which is one of the two input types (together with the Ising model) acceptable by the machine, and compare several quantum implementations to current classical algorithms such as simulated annealing, Gurobi, and third-party clique finding heuristics. We further estimate the contributions of the quantum phase of the quantum annealer and the classical post-processing phase typically used to enhance each solution returned by DW. We demonstrate that on random graphs that fit DW, no quantum speedup can be observed compared with the classical algorithms. On the other hand, for instances specifically designed to fit the DW qubit interconnection network well, we observe substantial speed-ups in computing time over classical approaches.
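
    The QUBO in question is the standard maximum-clique one: reward every selected vertex and penalize every selected pair that is not an edge. Below is a minimal sketch solved by exhaustive search on a toy graph rather than on a D-Wave; the graph and the penalty weight M are assumptions for illustration.

```python
import itertools

n = 5
edges = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)}  # toy graph
non_edges = [(i, j) for i, j in itertools.combinations(range(n), 2)
             if (i, j) not in edges]
M = 2  # penalty weight; must exceed the per-vertex reward of 1

def qubo_energy(x):
    # H(x) = -sum_i x_i + M * sum over non-edges (i, j) of x_i * x_j
    return -sum(x) + M * sum(x[i] * x[j] for i, j in non_edges)

best = min(itertools.product([0, 1], repeat=n), key=qubo_energy)
print("maximum clique:", [i for i, bit in enumerate(best) if bit])  # -> [0, 1, 2]
```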

  3. Finding Maximum Cliques on the D-Wave Quantum Annealer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapuis, Guillaume; Djidjev, Hristo; Hahn, Georg

    This work assesses the performance of the D-Wave 2X (DW) quantum annealer for finding a maximum clique in a graph, one of the most fundamental and important NP-hard problems. Because the size of the largest graphs DW can directly solve is quite small (usually around 45 vertices), we also consider decomposition algorithms intended for larger graphs and analyze their performance. For smaller graphs that fit DW, we provide formulations of the maximum clique problem as a quadratic unconstrained binary optimization (QUBO) problem, which is one of the two input types (together with the Ising model) acceptable by the machine, and compare several quantum implementations to current classical algorithms such as simulated annealing, Gurobi, and third-party clique finding heuristics. We further estimate the contributions of the quantum phase of the quantum annealer and the classical post-processing phase typically used to enhance each solution returned by DW. We demonstrate that on random graphs that fit DW, no quantum speedup can be observed compared with the classical algorithms. On the other hand, for instances specifically designed to fit the DW qubit interconnection network well, we observe substantial speed-ups in computing time over classical approaches.

  4. Replicator equations, maximal cliques, and graph isomorphism.

    PubMed

    Pelillo, M

    1999-11-15

    We present a new energy-minimization framework for the graph isomorphism problem that is based on an equivalent maximum clique formulation. The approach is centered around a fundamental result proved by Motzkin and Straus in the mid-1960s, and recently expanded in various ways, which allows us to formulate the maximum clique problem in terms of a standard quadratic program. The attractive feature of this formulation is that a clear one-to-one correspondence exists between the solutions of the quadratic program and those in the original, combinatorial problem. To solve the program we use the so-called replicator equations--a class of straightforward continuous- and discrete-time dynamical systems developed in various branches of theoretical biology. We show how, despite their inherent inability to escape from local solutions, they nevertheless provide experimental results that are competitive with those obtained using more elaborate mean-field annealing heuristics.
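
    As a concrete illustration, the sketch below iterates the discrete-time replicator dynamics on the Motzkin-Straus program, maximize x^T A x over the simplex, where A is the adjacency matrix; at an optimum with value f supported on a clique of size k, f = 1 - 1/k. The five-vertex graph, the perturbed starting point, and the iteration count are illustrative choices, not the paper's benchmarks.

```python
import numpy as np

A = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]:  # max clique {0, 1, 2}
    A[i, j] = A[j, i] = 1.0

x = np.full(5, 1 / 5)  # barycenter start, slightly perturbed to break ties
x += 1e-6 * np.random.default_rng(0).random(5)
x /= x.sum()
for _ in range(2000):
    x = x * (A @ x) / (x @ A @ x)  # replicator update; stays on the simplex

f = x @ A @ x
print("clique size ~", round(1 / (1 - f)))        # Motzkin-Straus: f = 1 - 1/k
print("clique vertices:", np.where(x > 1e-3)[0])  # support of the fixed point
```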

  5. An impatient evolutionary algorithm with probabilistic tabu search for unified solution of some NP-hard problems in graph and set theory via clique finding.

    PubMed

    Guturu, Parthasarathy; Dantu, Ram

    2008-06-01

    Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by researchers in complexity theory and found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) for the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed, yielding a performance that advances the state of the art in EAs for exploring maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also successfully set a new benchmark on one graph instance. On another benchmark suite called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite.

  6. MSClique: Multiple Structure Discovery through the Maximum Weighted Clique Problem.

    PubMed

    Sanroma, Gerard; Penate-Sanchez, Adrian; Alquézar, René; Serratosa, Francesc; Moreno-Noguer, Francesc; Andrade-Cetto, Juan; González Ballester, Miguel Ángel

    2016-01-01

    We present a novel approach for feature correspondence and multiple structure discovery in computer vision. In contrast to existing methods, we exploit the fact that point-sets on the same structure usually lie close to each other, thus forming clusters in the image. Given a pair of input images, we initially extract points of interest and build hierarchical representations by agglomerative clustering. We use the maximum weighted clique problem to find the set of corresponding clusters with the maximum number of inliers, representing the multiple structures at the correct scales. Our method is parameter-free and only needs two sets of points along with their tentative correspondences, thus being extremely easy to use. We demonstrate the effectiveness of our method in multiple-structure fitting experiments on both publicly available and in-house datasets. As shown in the experiments, our approach finds a higher number of structures containing fewer outliers compared to state-of-the-art methods.

  7. Quantum speedup in solving the maximal-clique problem

    NASA Astrophysics Data System (ADS)

    Chang, Weng-Long; Yu, Qi; Li, Zhaokai; Chen, Jiahui; Peng, Xinhua; Feng, Mang

    2018-03-01

    The maximal-clique problem, to find the maximally sized clique in a given graph, is classically an NP-complete computational problem, which has potential applications ranging from electrical engineering, computational chemistry, and bioinformatics to social networks. Here we develop a quantum algorithm to solve the maximal-clique problem for any graph G with n vertices with a quadratic speedup over its classical counterparts, where the time and spatial complexities are reduced to O(√(2^n)) and O(n²), respectively. With respect to oracle-related quantum algorithms for NP-complete problems, we identify our algorithm as optimal. To justify the feasibility of the proposed quantum algorithm, we successfully solve a typical clique problem for a graph G with two vertices and one edge by carrying out a nuclear magnetic resonance experiment involving four qubits.

  8. Bipartite entangled stabilizer mutually unbiased bases as maximum cliques of Cayley graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dam, Wim van; Howard, Mark

    2011-07-15

    We examine the existence and structure of particular sets of mutually unbiased bases (MUBs) in bipartite qudit systems. In contrast to well-known power-of-prime MUB constructions, we restrict ourselves to using maximally entangled stabilizer states as MUB vectors. Consequently, these bipartite entangled stabilizer MUBs (BES MUBs) provide no local information, but are sufficient and minimal for decomposing a wide variety of interesting operators including (mixtures of) Jamiolkowski states, entanglement witnesses, and more. The problem of finding such BES MUBs can be mapped, in a natural way, to that of finding maximum cliques in a family of Cayley graphs. Some relationships with known power-of-prime MUB constructions are discussed, and observables for BES MUBs are given explicitly in terms of Pauli operators.

  9. Bipartite entangled stabilizer mutually unbiased bases as maximum cliques of Cayley graphs

    NASA Astrophysics Data System (ADS)

    van Dam, Wim; Howard, Mark

    2011-07-01

    We examine the existence and structure of particular sets of mutually unbiased bases (MUBs) in bipartite qudit systems. In contrast to well-known power-of-prime MUB constructions, we restrict ourselves to using maximally entangled stabilizer states as MUB vectors. Consequently, these bipartite entangled stabilizer MUBs (BES MUBs) provide no local information, but are sufficient and minimal for decomposing a wide variety of interesting operators including (mixtures of) Jamiołkowski states, entanglement witnesses, and more. The problem of finding such BES MUBs can be mapped, in a natural way, to that of finding maximum cliques in a family of Cayley graphs. Some relationships with known power-of-prime MUB constructions are discussed, and observables for BES MUBs are given explicitly in terms of Pauli operators.

  10. Molecular Sticker Model Stimulation on Silicon for a Maximum Clique Problem

    PubMed Central

    Ning, Jianguo; Li, Yanmei; Yu, Wen

    2015-01-01

    Molecular computers (also called DNA computers), as an alternative to traditional electronic computers, are smaller in size but more energy efficient, and have massive parallel processing capacity. However, DNA computers may not outperform electronic computers owing to their higher error rates and some limitations of the biological laboratory. The stickers model, as a typical DNA-based computer, is computationally complete and universal, and can be viewed as a bit-vertically operating machine. This makes it attractive for silicon implementation. Inspired by the information processing method on the stickers computer, we propose a novel parallel computing model called DEM (DNA Electronic Computing Model) on System-on-a-Programmable-Chip (SOPC) architecture. Except for the significant difference in the computing medium—transistor chips rather than bio-molecules—the DEM works similarly to DNA computers in immense parallel information processing. Additionally, a plasma display panel (PDP) is used to show the change of solutions, and helps us directly see the distribution of assignments. The feasibility of the DEM is tested by applying it to compute a maximum clique problem (MCP) with eight vertices. Owing to the limited computing resources on SOPC architecture, the DEM could solve moderate-size problems in polynomial time. PMID:26075867
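
    A rough classical analogue of that bit-vertical parallelism can be written with integer bitmasks: every candidate vertex subset of an 8-vertex graph is a single machine word, and clique testing reduces to bitwise operations. The graph below is an invented example, not the instance solved in the paper.

```python
adj = [0] * 8  # adj[v] = bitmask of v's neighbours
for u, v in [(0, 1), (0, 2), (1, 2), (2, 3), (0, 3), (1, 3),
             (3, 4), (4, 5), (5, 6), (6, 7)]:
    adj[u] |= 1 << v
    adj[v] |= 1 << u

def is_clique(mask):
    # each chosen vertex must be adjacent to every other chosen vertex
    return all(mask & ~adj[v] == 1 << v for v in range(8) if mask >> v & 1)

best = max((m for m in range(1, 1 << 8) if is_clique(m)),
           key=lambda m: bin(m).count("1"))
print("maximum clique:", [v for v in range(8) if best >> v & 1])  # -> [0, 1, 2, 3]
```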

  11. On the Maximum-Weight Clique Problem.

    DTIC Science & Technology

    1985-06-01

    "hypergeometric distribution", Discrete Math. 25, 285-287. CHVATAL, V. (1983), Linear Programming, W.H. Freeman, New York/San Francisco. COOK, S.A. (1971)... Annals Discrete Math. 21, 325-356. GROTSCHEL, M., L. LOVASZ, and A. SCHRIJVER (1984b), "Relaxations of Vertex Packing", Preprint No. 35... de Grenoble. See also N. Sbihi, "Algorithme de recherche d'un stable de cardinalite maximum dans un graphe sans etoile", Discrete Math. 19 (1980), 53

  12. The Erdős-Rothschild problem on edge-colourings with forbidden monochromatic cliques

    NASA Astrophysics Data System (ADS)

    Pikhurko, Oleg; Staden, Katherine; Yilma, Zelealem B.

    2017-09-01

    Let $\mathbf{k} := (k_1,\dots,k_s)$ be a sequence of natural numbers. For a graph $G$, let $F(G;\mathbf{k})$ denote the number of colourings of the edges of $G$ with colours $1,\dots,s$ such that, for every $c \in \{1,\dots,s\}$, the edges of colour $c$ contain no clique of order $k_c$. Write $F(n;\mathbf{k})$ to denote the maximum of $F(G;\mathbf{k})$ over all graphs $G$ on $n$ vertices. This problem was first considered by Erdős and Rothschild in 1974, but it has been solved only for a very small number of non-trivial cases. We prove that, for every $\mathbf{k}$ and $n$, there is a complete multipartite graph $G$ on $n$ vertices with $F(G;\mathbf{k}) = F(n;\mathbf{k})$. Also, for every $\mathbf{k}$ we construct a finite optimisation problem whose maximum is equal to the limit of $\log_2 F(n;\mathbf{k})/{n\choose 2}$ as $n$ tends to infinity. Our final result is a stability theorem for complete multipartite graphs $G$, describing the asymptotic structure of such $G$ with $F(G;\mathbf{k}) = F(n;\mathbf{k}) \cdot 2^{o(n^2)}$ in terms of solutions to the optimisation problem.
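
    For very small instances, F(G; k) can be checked by brute force. The sketch below counts the 2-colourings of the edges of K_5 with no monochromatic triangle, i.e., s = 2 and k = (3, 3); such colourings exist because the Ramsey number R(3,3) = 6, and the expected count of 12 (each colour class must be a 5-cycle) is stated here as an assumption for this toy check.

```python
import itertools

import networkx as nx

edges = list(nx.complete_graph(5).edges())
k = (3, 3)  # forbid a clique of order 3 in either colour class

def clique_number(H):
    return max((len(c) for c in nx.find_cliques(H)), default=0)

def admissible(colouring):
    for colour in range(len(k)):
        Hc = nx.Graph([e for e, c in zip(edges, colouring) if c == colour])
        if clique_number(Hc) >= k[colour]:
            return False
    return True

F = sum(admissible(col)
        for col in itertools.product(range(len(k)), repeat=len(edges)))
print("F(K_5; (3,3)) =", F)
```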

  13. Maximal clique enumeration with data-parallel primitives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lessley, Brenton; Perciano, Talita; Mathai, Manish

    The enumeration of all maximal cliques in an undirected graph is a fundamental problem arising in several research areas. We consider maximal clique enumeration on shared-memory, multi-core architectures and introduce an approach consisting entirely of data-parallel operations, in an effort to achieve efficient and portable performance across different architectures. We study the performance of the algorithm via experiments varying over benchmark graphs and architectures. Overall, we observe that our algorithm achieves up to a 33-fold speedup and 9-fold speedup over state-of-the-art distributed and serial algorithms, respectively, for graphs with higher ratios of maximal cliques to total cliques. Further, we attain additional speedups on a GPU architecture, demonstrating the portable performance of our data-parallel design.
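
    For reference alongside the data-parallel formulation, here is a compact serial Bron-Kerbosch enumeration with pivoting, the textbook maximal-clique algorithm; it is not the authors' implementation.

```python
def maximal_cliques(adj):
    """Bron-Kerbosch with pivoting; adj maps each vertex to a set of neighbours."""
    out = []

    def expand(R, P, X):
        if not P and not X:
            out.append(R)  # R is maximal: no vertex can extend it
            return
        pivot = max(P | X, key=lambda u: len(P & adj[u]))
        for v in P - adj[pivot]:  # neighbours of the pivot are skipped
            expand(R | {v}, P & adj[v], X & adj[v])
            P = P - {v}
            X = X | {v}

    expand(set(), set(adj), set())
    return out

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
print(maximal_cliques(adj))  # -> [{0, 1, 2}, {2, 3}, {3, 4}]
```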

  14. Clustering Qualitative Data Based on Binary Equivalence Relations: Neighborhood Search Heuristics for the Clique Partitioning Problem

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Kohn, Hans-Friedrich

    2009-01-01

    The clique partitioning problem (CPP) requires the establishment of an equivalence relation for the vertices of a graph such that the sum of the edge costs associated with the relation is minimized. The CPP has important applications for the social sciences because it provides a framework for clustering objects measured on a collection of nominal…

  15. Social Network Centrality and Leadership Status

    PubMed Central

    Lansford, Jennifer E.; Costanzo, Philip R.; Grimes, Christina; Putallaz, Martha; Miller, Shari; Malone, Patrick S.

    2009-01-01

    Seventh-grade students (N = 324) completed social cognitive maps to identify peer groups and peer group leaders, sociometric nominations to describe their peers' behaviors, and questionnaires to assess their own behaviors. Peer group members resembled one another in levels of direct and indirect aggression and substance use; girls' cliques were more behaviorally homogeneous than were boys' cliques. On average, leaders (especially if they were boys) were perceived as engaging in more problem behaviors than were nonleaders. In girls' cliques, peripheral group members were more similar to their group leader on indirect aggression than were girls who were more central to the clique. Peer leaders perceived themselves as being more able to influence peers but did not differ from nonleaders in their perceived susceptibility to peer influence. The findings contribute to our understanding of processes through which influence may occur in adolescent peer groups. PMID:19763241

  16. Multiple Semantic Matching on Augmented N-partite Graph for Object Co-segmentation.

    PubMed

    Wang, Chuan; Zhang, Hua; Yang, Liang; Cao, Xiaochun; Xiong, Hongkai

    2017-09-08

    Recent methods for object co-segmentation focus on discovering a single co-occurring relation of candidate regions representing the foreground of multiple images. However, region extraction based only on low- and middle-level information often occupies a large area of background without the help of semantic context. In addition, seeking a single matching solution very likely leads to discovering only local parts of common objects. To cope with these deficiencies, we present a new object co-segmentation framework, which takes advantage of semantic information and globally explores multiple co-occurring matching cliques based on an N-partite graph structure. To this end, we first propose to incorporate candidate generation with semantic context. Based on the regions extracted from the semantic segmentation of each image, we design a merging mechanism to hierarchically generate candidates with high semantic responses. Secondly, all candidates are taken into consideration to globally formulate multiple maximum weighted matching cliques, which complements the discovery of parts of the common objects induced by a single clique. To facilitate the discovery of multiple matching cliques, an N-partite graph, which inherently excludes intra-links between candidates from the same image, is constructed to separate multiple cliques without additional constraints. Further, we augment the graph with an additional virtual node in each part to handle irrelevant matches when the similarity between two candidates is too small. Finally, with the explored multiple cliques, we statistically compute a pixel-wise co-occurrence map for each image. Experimental results on two benchmark datasets, i.e., the iCoseg and MSRC datasets, achieve desirable performance and demonstrate the effectiveness of our proposed framework.

  17. Heterogeneous quantum computing for satellite constellation optimization: solving the weighted k-clique problem

    NASA Astrophysics Data System (ADS)

    Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III

    2018-04-01

    NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable with brute-force methods, even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. We report experiments on a real-world problem formulated as the weighted k-clique problem. Through this experiment, we provide insight into the state of quantum machine learning.

  18. Structure Matters: The Role of Clique Hierarchy in the Relationship Between Adolescent Social Status and Aggression and Prosociality.

    PubMed

    Pattiselanno, Kim; Dijkstra, Jan Kornelis; Steglich, Christian; Vollebergh, Wilma; Veenstra, René

    2015-12-01

    Peer cliques form an important context for the social development of adolescents. Although clique members are often similar in social status, status differences also exist within cliques. How differences in social status between clique members relate to the behaviors of individual members is rather unknown. This study examined to what extent the relationship of individual social status (i.e., perceived popularity) with aggression and prosocial behavior depends on the level of internal clique hierarchy. The sample consists of 2674 adolescents (49.8% boys), with a mean age of 14.02. We focused specifically on physical and relational aggression, and practical and emotional support, because these behaviors have been shown to be of great importance for social relationships and social standing among adolescents. The internal status hierarchy of cliques was based on the variation in individual social status between clique members (i.e., clique hierarchization) and the structure of status scores within a clique (pyramid shape, inverted pyramid, or equal distribution of social status scores) (i.e., clique status structure). The results showed that differences in aggressive and prosocial behaviors were particularly moderated by clique status structure: aggression was more strongly related to individual social status in (girls') cliques whose status structure reflected an inverted pyramid, with relatively more high-status adolescents within the clique than low-status peers, and prosocial behavior showed a significant relationship with individual social status, again predominantly in inverted-pyramid-structured (boys' and girls') cliques. Furthermore, these effects differed by type of gender clique: the associations were found in same-gender but not mixed-gender cliques. The findings stress the importance of taking internal clique characteristics into account when studying adolescent social status in relation to aggression and prosociality.

  19. Clique Relaxations in Biological and Social Network Analysis Foundations and Algorithms

    DTIC Science & Technology

    2015-10-26

    This project undertakes a study of clique relaxation models arising in biological and social networks. It examines the elementary clique-defining properties implicitly exploited in the available clique relaxation models and proposes a taxonomic framework for them.

  20. Overlapping Modularity at the Critical Point of k-Clique Percolation

    NASA Astrophysics Data System (ADS)

    Tóth, Bálint; Vicsek, Tamás; Palla, Gergely

    2013-05-01

    One of the most remarkable social phenomena is the formation of communities in social networks corresponding to families, friendship circles, work teams, etc. Since people usually belong to several different communities at the same time, the induced overlaps result in an extremely complicated web of the communities themselves. Thus, uncovering the intricate community structure of social networks is a non-trivial task with great potential for practical applications, gaining notable interest in recent years. The Clique Percolation Method (CPM) is one of the earliest overlapping community-finding methods, which was already used in the analysis of several different social networks. In this approach the communities correspond to k-clique percolation clusters, and the general heuristic for setting the parameters of the method is to tune the system just below the critical point of k-clique percolation. However, this rule is based on simple physical principles and its validity was never subject to quantitative analysis. Here we examine the quality of the partitioning in the vicinity of the critical point using recently introduced overlapping modularity measures. According to our results on real social and other networks, the overlapping modularities show a maximum close to the critical point, justifying the original criteria for the optimal parameter settings.
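
    The CPM itself is straightforward to try: networkx ships a k-clique community routine implementing it. The graph and the choice k = 3 below are illustrative.

```python
import networkx as nx
from networkx.algorithms.community import k_clique_communities

# Two triangles sharing an edge form one community; the third triangle
# attaches to it through a single vertex only and stays separate.
G = nx.Graph([(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4), (3, 5), (4, 5)])
for community in k_clique_communities(G, 3):
    print(sorted(community))  # -> [0, 1, 2] and [2, 3, 4, 5]
```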

  1. Research on Some Bus Transport Networks with Random Overlapping Clique Structure

    NASA Astrophysics Data System (ADS)

    Yang, Xu-Hua; Wang, Bo; Wang, Wan-Liang; Sun, You-Xian

    2008-11-01

    Based on an investigation of statistical data from the bus transport networks of three large cities in China, we propose that each bus route is a clique (maximal complete subgraph) and that a bus transport network (BTN) consists of many cliques, which densely connect and overlap with one another. We study the network properties, which include the degree distribution, multiple edges' overlapping time distribution, the distribution of the overlap size between any two overlapping cliques, and the distribution of the number of cliques that a node belongs to. Naturally, the cliques also constitute a network, with the overlapping nodes being their multiple links. We also study its network properties, such as degree distribution, clustering, average path length, and so on. We propose that a BTN exhibits random clique increment and random clique overlap, and that it is a small-world network that is highly clique-clustered and highly clique-overlapped. Finally, we introduce a BTN evolution model, whose simulation results agree well with the statistical laws that emerge in real BTNs.

  2. Compressive Network Analysis

    PubMed Central

    Jiang, Xiaoye; Yao, Yuan; Liu, Han; Guibas, Leonidas

    2014-01-01

    Modern data acquisition routinely produces massive amounts of network data. Though many methods and models have been proposed to analyze such data, the research of network data is largely disconnected from the classical theory of statistical learning and signal processing. In this paper, we present a new framework for modeling network data, which connects two seemingly different areas: network data analysis and compressed sensing. From a nonparametric perspective, we model an observed network using a large dictionary. In particular, we consider the network clique detection problem and show connections between our formulation and a new algebraic tool, namely Radon basis pursuit in homogeneous spaces. Such a connection allows us to identify rigorous recovery conditions for clique detection problems. Though this paper is mainly conceptual, we also develop practical approximation algorithms for solving empirical problems and demonstrate their usefulness on real-world datasets. PMID:25620806

  3. Quantum Clique Gossiping.

    PubMed

    Li, Bo; Li, Shuang; Wu, Junfeng; Qi, Hongsheng

    2018-02-09

    This paper establishes a framework of quantum clique gossiping by introducing local clique operations to networks of interconnected qubits. Cliques, i.e., complete subgraphs, are local structures in complex networks that can be used to accelerate classical gossip algorithms. Based on cyclic permutations, clique gossiping leads to collective multi-party qubit interactions. We show that at reduced states, these cliques have the same acceleration effects as in classical gossip algorithms. For randomized selection of cliques, the improved rate of convergence is precisely characterized. On the other hand, the rate of convergence at the coherent states of the overall quantum network is proven to be decided by the spectrum of a mean-square error evolution matrix. Remarkably, the use of larger quantum cliques does not necessarily increase the speed of the network density aggregation, suggesting that quantum network dynamics is not entirely decided by its classical topology.

  4. Understanding the Scalability of Bayesian Network Inference Using Clique Tree Growth Curves

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole J.

    2010-01-01

    One of the main approaches to performing computation in Bayesian networks (BNs) is clique tree clustering and propagation. The clique tree approach consists of propagation in a clique tree compiled from a Bayesian network, and while it was introduced in the 1980s, there is still a lack of understanding of how clique tree computation time depends on variations in BN size and structure. In this article, we improve this understanding by developing an approach to characterizing clique tree growth as a function of parameters that can be computed in polynomial time from BNs, specifically: (i) the ratio of the number of a BN's non-root nodes to the number of root nodes, and (ii) the expected number of moral edges in their moral graphs. Analytically, we partition the set of cliques in a clique tree into different sets, and introduce a growth curve for the total size of each set. For the special case of bipartite BNs, there are two sets and two growth curves, a mixed clique growth curve and a root clique growth curve. In experiments, where random bipartite BNs generated using the BPART algorithm are studied, we systematically increase the out-degree of the root nodes in bipartite Bayesian networks by increasing the number of leaf nodes. Surprisingly, root clique growth is well-approximated by Gompertz growth curves, an S-shaped family of curves that has previously been used to describe growth processes in biology, medicine, and neuroscience. We believe that this research improves the understanding of the scaling behavior of clique tree clustering for a certain class of Bayesian networks; presents an aid for trade-off studies of clique tree clustering using growth curves; and ultimately provides a foundation for benchmarking and developing improved BN inference and machine learning algorithms.
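
    The two parameters the article computes from a BN can be reproduced with standard graph routines: moralization gives the moral-edge count, and a triangulation heuristic yields the cliques that would populate a clique tree. In this sketch the tiny bipartite BN and the use of networkx's min-fill-in treewidth heuristic as a stand-in for clique-tree compilation are both assumptions.

```python
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_fill_in

# A made-up bipartite BN: 2 root nodes, 3 leaf nodes (non-root/root ratio 1.5).
bn = nx.DiGraph([("r1", "l1"), ("r1", "l2"), ("r2", "l2"), ("r2", "l3")])

moral = nx.moral_graph(bn)  # marry co-parents, then drop edge directions
print("moral edges:", moral.number_of_edges())  # -> 5

width, decomposition = treewidth_min_fill_in(moral)
print("cliques:", list(decomposition.nodes))  # bags of a clique tree
print("largest clique size:", width + 1)      # -> 3 (the {r1, r2, l2} bag)
```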

  5. Predicting disease-related proteins based on clique backbone in protein-protein interaction network.

    PubMed

    Yang, Lei; Zhao, Xudong; Tang, Xianglong

    2014-01-01

    Network biology integrates different kinds of data, including physical or functional networks and disease gene sets, to interpret human disease. A clique (maximal complete subgraph) in a protein-protein interaction network is a topological module and possesses inherent biological significance. A disease-related clique is possibly associated with complex diseases. Fully identifying disease components in a clique is conducive to uncovering disease mechanisms. This paper proposes an approach to predicting disease proteins based on cliques in a protein-protein interaction network. To tolerate false positive and negative interactions in protein networks, extending cliques and scoring predicted disease proteins with gene ontology terms are introduced to the clique-based method. The precision of predicted disease proteins is verified by disease phenotypes and remains steadily above 95%. The predicted disease proteins associated with cliques can partly complement the mapping between genotype and phenotype, and provide clues for understanding the pathogenesis of serious diseases.

  6. Alternative Parameterizations for Cluster Editing

    NASA Astrophysics Data System (ADS)

    Komusiewicz, Christian; Uhlmann, Johannes

    Given an undirected graph G and a nonnegative integer k, the NP-hard Cluster Editing problem asks whether G can be transformed into a disjoint union of cliques by applying at most k edge modifications. In the field of parameterized algorithmics, Cluster Editing has almost exclusively been studied parameterized by the solution size k. Contrastingly, in many real-world instances it can be observed that the parameter k is not really small. This observation motivates our investigation of parameterizations of Cluster Editing different from the solution size k. Our results are as follows. Cluster Editing is fixed-parameter tractable with respect to the parameter "size of a minimum cluster vertex deletion set of G", a typically much smaller parameter than k. Cluster Editing remains NP-hard on graphs with maximum degree six. A restricted but practically relevant version of Cluster Editing is fixed-parameter tractable with respect to the combined parameter "number of clusters in the target graph" and "maximum number of modified edges incident to any vertex in G". Many of our results also transfer to the NP-hard Cluster Deletion problem, where only edge deletions are allowed.

  7. Understanding the Scalability of Bayesian Network Inference using Clique Tree Growth Curves

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole Jakob

    2009-01-01

    Bayesian networks (BNs) are used to represent and efficiently compute with multi-variate probability distributions in a wide range of disciplines. One of the main approaches to perform computation in BNs is clique tree clustering and propagation. In this approach, BN computation consists of propagation in a clique tree compiled from a Bayesian network. There is a lack of understanding of how clique tree computation time, and BN computation time more generally, depends on variations in BN size and structure. On the one hand, complexity results tell us that many interesting BN queries are NP-hard or worse to answer, and it is not hard to find application BNs where the clique tree approach in practice cannot be used. On the other hand, it is well-known that tree-structured BNs can be used to answer probabilistic queries in polynomial time. In this article, we develop an approach to characterizing clique tree growth as a function of parameters that can be computed in polynomial time from BNs, specifically: (i) the ratio of the number of a BN's non-root nodes to the number of root nodes, or (ii) the expected number of moral edges in their moral graphs. Our approach is based on combining analytical and experimental results. Analytically, we partition the set of cliques in a clique tree into different sets, and introduce a growth curve for each set. For the special case of bipartite BNs, we consequently have two growth curves, a mixed clique growth curve and a root clique growth curve. In experiments, we systematically increase the degree of the root nodes in bipartite Bayesian networks, and find that root clique growth is well-approximated by Gompertz growth curves. It is believed that this research improves the understanding of the scaling behavior of clique tree clustering, provides a foundation for benchmarking and developing improved BN inference and machine learning algorithms, and presents an aid for analytical trade-off studies of clique tree clustering using growth curves.

  8. Finite-size scaling of clique percolation on two-dimensional Moore lattices

    NASA Astrophysics Data System (ADS)

    Dong, Jia-Qi; Shen, Zhou; Zhang, Yongwen; Huang, Zi-Gang; Huang, Liang; Chen, Xiaosong

    2018-05-01

    Clique percolation has attracted much attention due to its significance in understanding topological overlap among communities and dynamical instability of structured systems. Rich critical behavior has been observed in clique percolation on Erdős-Rényi (ER) random graphs, but few works have discussed clique percolation on finite dimensional systems. In this paper, we have defined a series of characteristic events, i.e., the historically largest size jumps of the clusters, in the percolating process of adding bonds and developed a new finite-size scaling scheme based on the interval of the characteristic events. Through the finite-size scaling analysis, we have found, interestingly, that, in contrast to the clique percolation on an ER graph where the critical exponents are parameter dependent, the two-dimensional (2D) clique percolation simply shares the same critical exponents with traditional site or bond percolation, independent of the clique percolation parameters. This has been corroborated by bridging two special types of clique percolation to site percolation on 2D lattices. Mechanisms for the difference of the critical behaviors between clique percolation on ER graphs and on 2D lattices are also discussed.

  9. Fixation probability on clique-based graphs

    NASA Astrophysics Data System (ADS)

    Choi, Jeong-Ok; Yu, Unjong

    2018-02-01

    The fixation probability of a mutant in the evolutionary dynamics of the Moran process is calculated by the Monte-Carlo method on a few families of clique-based graphs. It is shown that complete suppression of fixation can be realized with the generalized clique-wheel graph in the limit of small wheel-clique ratio and infinite size. The clique-star family is an amplifier, and the clique-arms graph changes from amplifier to suppressor as the fitness of the mutant increases. We demonstrate that the overall structure of a graph can be more important in determining the fixation probability than the degree or the heat heterogeneity. The dependence of the fixation probability on the position of the first mutant is also discussed.
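
    A bare-bones version of such a Monte-Carlo estimator is sketched below: it runs the Moran birth-death process on an arbitrary graph and estimates the fixation probability of a single random mutant. Testing on a complete graph, where the closed-form Moran result (1 - 1/r)/(1 - 1/r^n) is available, is an illustrative choice.

```python
import random

import networkx as nx

def fixation_probability(G, fitness, trials=10000, seed=1):
    rng = random.Random(seed)
    nodes = list(G.nodes)
    fixed = 0
    for _ in range(trials):
        mutants = {rng.choice(nodes)}  # one random initial mutant
        while 0 < len(mutants) < len(nodes):
            weights = [fitness if v in mutants else 1.0 for v in nodes]
            parent = rng.choices(nodes, weights)[0]  # birth ~ fitness
            victim = rng.choice(list(G[parent]))     # offspring replaces a neighbour
            if parent in mutants:
                mutants.add(victim)
            else:
                mutants.discard(victim)
        fixed += len(mutants) == len(nodes)
    return fixed / trials

# For r = 1.5, n = 6 the Moran formula gives ~0.365; the estimate should agree.
print(fixation_probability(nx.complete_graph(6), fitness=1.5))
```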

  10. Peer Clique Participation of Victimized Children: Characteristics and Implications for Victimization over a School Year

    ERIC Educational Resources Information Center

    Zarbatany, Lynne; Tremblay, Paul F.; Ellis, Wendy E.; Chen, Xinyin; Kinal, Megan; Boyko, Lisa

    2017-01-01

    This study examined aspects of peer clique participation that mitigated victimization by peers over a school year. Participants were 1,033 children age 8-14 years (M_age = 11.81; 444 boys and 589 girls), including 128 (66 boys) victimized children. Cliques (N = 162) and clique participation were assessed using the Social Cognitive…

  11. Early Adolescent Depressive Symptoms: Prediction from Clique Isolation, Loneliness, and Perceived Social Acceptance

    PubMed Central

    Witvliet, Miranda; Brendgen, Mara; van Lier, Pol A. C.; Vitaro, Frank

    2010-01-01

    This study examined whether clique isolation predicted an increase in depressive symptoms and whether this association was mediated by loneliness and perceived social acceptance in 310 children followed from age 11–14 years. Clique isolation was identified through social network analysis, whereas depressive symptoms, loneliness, and perceived social acceptance were assessed using self ratings. While accounting for initial levels of depressive symptoms, peer rejection, and friendlessness at age 11 years, a high probability of being isolated from cliques from age 11 to 13 years predicted depressive symptoms at age 14 years. The link between clique isolation and depressive symptoms was mediated by loneliness, but not by perceived social acceptance. No sex differences were found in the associations between clique isolation and depressive symptoms. These results suggest that clique isolation is a social risk factor for the escalation of depressive symptoms in early adolescence. Implications for research and prevention are discussed. PMID:20499155

  12. Identifying the minor set cover of dense connected bipartite graphs via random matching edge sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Kathleen E.; Humble, Travis S.

    Using quantum annealing to solve an optimization problem requires minor embedding a logic graph into a known hardware graph. In an effort to reduce the complexity of the minor embedding problem, we introduce the minor set cover (MSC) of a known graph G: a subset of graph minors which contain any remaining minor of the graph as a subgraph. Any graph that can be embedded into G will be embeddable into a member of the MSC. Focusing on embedding into the hardware graph of commercially available quantum annealers, we establish the MSC for a particular known virtual hardware, which is a complete bipartite graph. Furthermore, we show that the complete bipartite graph K_{N,N} has a MSC of N minors, from which K_{N+1} is identified as the largest clique minor of K_{N,N}. The case of determining the largest clique minor of hardware with faults is briefly discussed but remains an open question.

  13. Detecting independent and recurrent copy number aberrations using interval graphs.

    PubMed

    Wu, Hsin-Ta; Hajirasouliha, Iman; Raphael, Benjamin J

    2014-06-15

    Somatic copy number aberrations (SCNAs) are frequent in cancer genomes, but many of these are random, passenger events. A common strategy to distinguish functional aberrations from passengers is to identify those aberrations that are recurrent across multiple samples. However, the extensive variability in the length and position of SCNAs makes the problem of identifying recurrent aberrations notoriously difficult. We introduce a combinatorial approach to the problem of identifying independent and recurrent SCNAs, focusing on the key challenge of separating the overlaps in aberrations across individuals into independent events. We derive independent and recurrent SCNAs as maximal cliques in an interval graph constructed from overlaps between aberrations. We efficiently enumerate all such cliques, and derive a dynamic programming algorithm to find an optimal selection of non-overlapping cliques, resulting in a very fast algorithm, which we call RAIG (Recurrent Aberrations from Interval Graphs). We show that RAIG outperforms other methods on simulated data and also performs well on data from three cancer types from The Cancer Genome Atlas (TCGA). In contrast to existing approaches that employ various heuristics to select independent aberrations, RAIG optimizes a well-defined objective function. We show that this allows RAIG to identify rare aberrations that are likely functional, but are obscured by overlaps with larger passenger aberrations. Software is available at http://compbio.cs.brown.edu/software.
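
    The structural fact RAIG builds on, that every maximal clique of an interval graph is the set of intervals covering some right endpoint, makes the enumeration a single left-to-right sweep. A minimal sketch with fabricated intervals follows; it covers only the clique-enumeration step, not RAIG's scoring or dynamic program.

```python
def interval_maximal_cliques(intervals):
    """Enumerate maximal cliques of the interval graph of closed intervals."""
    events = []  # (coordinate, kind, id); kind 0 = start sorts before kind 1 = end
    for idx, (lo, hi) in enumerate(intervals):
        events.append((lo, 0, idx))
        events.append((hi, 1, idx))
    active, cliques, grew = set(), [], False
    for _, kind, idx in sorted(events):
        if kind == 0:
            active.add(idx)
            grew = True
        else:
            if grew:  # the active set is maximal just before a removal
                cliques.append(set(active))
                grew = False
            active.discard(idx)
    return cliques

print(interval_maximal_cliques([(0, 4), (2, 6), (3, 9), (7, 10)]))
# -> [{0, 1, 2}, {2, 3}]
```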

  14. A graph theoretic approach to scene matching

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1991-01-01

    The ability to match two scenes is a fundamental requirement in a variety of computer vision tasks. A graph theoretic approach to inexact scene matching is presented which is useful in dealing with problems due to imperfect image segmentation. A scene is described by a set of graphs, with nodes representing objects and arcs representing relationships between objects. Each node has a set of values representing the relations between pairs of objects, such as angle, adjacency, or distance. With this method of scene representation, the task in scene matching is to match two sets of graphs. Because of segmentation errors, variations in camera angle, illumination, and other conditions, an exact match between the sets of observed and stored graphs is usually not possible. In the developed approach, the problem is represented as an association graph, in which each node represents a possible mapping of an observed region to a stored object, and each arc represents the compatibility of two mappings. Nodes and arcs have weights indicating the merit of a region-object mapping and the degree of compatibility between two mappings. A match between the two graphs corresponds to a clique, or fully connected subgraph, in the association graph. The task is to find the clique that represents the best match. Fuzzy relaxation is used to update the node weights using the contextual information contained in the arcs and neighboring nodes. This simplifies the evaluation of cliques. A method of handling oversegmentation and undersegmentation problems is also presented. The approach is tested with a set of realistic images which exhibit many types of segmentation errors.

  15. Bully Victimization: Selection and Influence Within Adolescent Friendship Networks and Cliques.

    PubMed

    Lodder, Gerine M A; Scholte, Ron H J; Cillessen, Antonius H N; Giletta, Matteo

    2016-01-01

    Adolescents tend to form friendships with similar peers and, in turn, their friends further influence adolescents' behaviors and attitudes. Emerging work has shown that these selection and influence processes also might extend to bully victimization. However, no prior work has examined selection and influence effects involved in bully victimization within cliques, despite theoretical accounts emphasizing the importance of cliques in this regard. This study examined selection and influence processes in adolescence regarding bully victimization both at the level of the entire friendship network and at the level of cliques. We used a two-wave design (5-month interval). Participants were 543 adolescents (50.1% male, M_age = 15.8) in secondary education. Stochastic actor-based models indicated that at the level of the larger friendship network, adolescents tended to select friends with levels of bully victimization similar to their own. In addition, adolescent friends influenced each other in terms of bully victimization over time. Actor-Partner Interdependence models showed that similarities in bully victimization between clique members were not due to selection of clique members. For boys, average clique bully victimization predicted individual bully victimization over time (influence), but not vice versa. No influence was found for girls, indicating that different mechanisms may underlie friend influence on bully victimization for girls and boys. The differences in results at the level of the larger friendship network versus the clique emphasize the importance of taking the type of friendship ties into account in research on selection and influence processes involved in bully victimization.

  16. An Examination of Adolescent Clique Language in a Suburban Secondary School.

    ERIC Educational Resources Information Center

    Leona, Matteo H.

    1978-01-01

    Through a survey and in-depth interviews, three major cliques were identified at a middle income suburban high school near Boston. "Jocks,""motorheads," and "fleabags" were groupings based, respectively, on common interests in sports, cars, and drugs. Each clique is described in terms of appearance, general…

  17. Popularity in the Peer Group and Victimization within Friendship Cliques during Early Adolescence

    ERIC Educational Resources Information Center

    Closson, Leanna M.; Watanabe, Lori

    2018-01-01

    Victimization has been primarily studied within the broader peer group, leaving other potentially important contexts, such as friendship cliques, unexplored. This study examined the role of popularity in identifying protective factors that buffer against victimization within early adolescents' (N = 387) friendship cliques. Previously identified…

  18. Early Adolescent Depressive Symptoms: Prediction from Clique Isolation, Loneliness, and Perceived Social Acceptance

    ERIC Educational Resources Information Center

    Witvliet, Miranda; Brendgen, Mara; van Lier, Pol A. C.; Koot, Hans M.; Vitaro, Frank

    2010-01-01

    This study examined whether clique isolation predicted an increase in depressive symptoms and whether this association was mediated by loneliness and perceived social acceptance in 310 children followed from age 11-14 years. Clique isolation was identified through social network analysis, whereas depressive symptoms, loneliness, and perceived…

  19. Identifying the minor set cover of dense connected bipartite graphs via random matching edge sets

    NASA Astrophysics Data System (ADS)

    Hamilton, Kathleen E.; Humble, Travis S.

    2017-04-01

    Using quantum annealing to solve an optimization problem requires minor embedding a logic graph into a known hardware graph. In an effort to reduce the complexity of the minor embedding problem, we introduce the minor set cover (MSC) of a known graph G: a subset of graph minors which contain any remaining minor of the graph as a subgraph. Any graph that can be embedded into G will be embeddable into a member of the MSC. Focusing on embedding into the hardware graph of commercially available quantum annealers, we establish the MSC for a particular known virtual hardware, which is a complete bipartite graph. We show that the complete bipartite graph K_{N,N} has a MSC of N minors, from which K_{N+1} is identified as the largest clique minor of K_{N,N}. The case of determining the largest clique minor of hardware with faults is briefly discussed but remains an open question.

  20. Identifying the minor set cover of dense connected bipartite graphs via random matching edge sets

    DOE PAGES

    Hamilton, Kathleen E.; Humble, Travis S.

    2017-02-23

    Using quantum annealing to solve an optimization problem requires minor embedding a logic graph into a known hardware graph. In an effort to reduce the complexity of the minor embedding problem, we introduce the minor set cover (MSC) of a known graph G: a subset of graph minors which contain any remaining minor of the graph as a subgraph. Any graph that can be embedded into G will be embeddable into a member of the MSC. Focusing on embedding into the hardware graph of commercially available quantum annealers, we establish the MSC for a particular known virtual hardware, which is a complete bipartite graph. Furthermore, we show that the complete bipartite graph K_{N,N} has a MSC of N minors, from which K_{N+1} is identified as the largest clique minor of K_{N,N}. The case of determining the largest clique minor of hardware with faults is briefly discussed but remains an open question.

  1. Uncovering the overlapping community structure of complex networks by maximal cliques

    NASA Astrophysics Data System (ADS)

    Li, Junqiu; Wang, Xingyuan; Cui, Yaozu

    2014-12-01

    In this paper, a novel algorithm is proposed to detect overlapping communities in unweighted and weighted networks with considerable accuracy. The notions of maximal clique, overlapping vertex, bridge vertex and isolated vertex are introduced. First, all the maximal cliques are extracted by an algorithm based on depth- and breadth-first searching. Then two maximal cliques can be merged into a larger sub-graph according to some given rules. In addition, the proposed algorithm successfully finds overlapping vertices and bridge vertices between communities. Experimental results using some real-world network data show that the performance of the proposed algorithm is satisfactory.

  2. Alternative steady states in ecological networks

    NASA Astrophysics Data System (ADS)

    Fried, Yael; Shnerb, Nadav M.; Kessler, David A.

    2017-07-01

    In many natural situations, one observes a local system with many competing species that is coupled by weak immigration to a regional species pool. The dynamics of such a system is dominated by its stable and uninvadable (SU) states. When the competition matrix is random, the number of SUs depends on the average value and variance of its entries. Here we consider the problem in the limit of weak competition and large variance. Using a yes-no interaction model, we show that the number of SUs corresponds to the number of maximum cliques in an Erdős-Rényi network. The number of SUs grows exponentially with the number of species in this limit, unless the network is completely asymmetric. In the asymmetric limit, the number of SUs is O(1). Numerical simulations suggest that these results are valid for models with a continuous distribution of competition terms.
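
    The correspondence can be probed numerically by enumerating the maximal cliques of an Erdős-Rényi graph and counting those of maximum size, the analogue of the SU states in the yes-no model. The parameters n and p below are arbitrary.

```python
import networkx as nx

G = nx.erdos_renyi_graph(n=30, p=0.5, seed=42)
cliques = list(nx.find_cliques(G))    # all maximal cliques
omega = max(len(c) for c in cliques)  # clique number
print("maximal cliques:", len(cliques))
print(f"maximum cliques (size {omega}):",
      sum(len(c) == omega for c in cliques))
```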

  3. Revealing the ISO/IEC 9126-1 Clique Tree for COTS Software Evaluation

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    2007-01-01

    Previous research has shown that acyclic dependency models, if they exist, can be extracted from software quality standards and that these models can be used to assess software safety and product quality. In the case of commercial off-the-shelf (COTS) software, the extracted dependency model can be used in a probabilistic Bayesian network context for COTS software evaluation. Furthermore, while experts typically employ Bayesian networks to encode domain knowledge, secondary structures (clique trees) from Bayesian network graphs can be used to determine the probabilistic distribution of any software variable (attribute) using any clique that contains that variable. Secondary structures, therefore, provide insight into the fundamental nature of graphical networks. This paper will apply secondary structure calculations to reveal the clique tree of the acyclic dependency model extracted from the ISO/IEC 9126-1 software quality standard. Suggestions will be provided to describe how the clique tree may be exploited to aid efficient transformation of an evaluation model.

  4. Impacts of clustering on interacting epidemics.

    PubMed

    Wang, Bing; Cao, Lang; Suzuki, Hideyuki; Aihara, Kazuyuki

    2012-07-07

    Since community structures in real networks play a major role in epidemic spread, we explore two interacting diseases spreading in networks with community structures. As a network model with community structures, we propose a random clique network model composed of cliques of different orders. We further assume that each disease spreads only through one type of clique; this assumption corresponds to two diseases spreading inside communities and outside them. Considering the relationship between the susceptible-infected-recovered (SIR) model and bond percolation theory, we apply this theory to clique random networks under the assumption that the occupation probability is clique-type dependent, which is consistent with the observation that infection rates inside a community and outside it differ, and obtain a number of statistical properties for this model. Two interacting diseases that compete for the same hosts are also investigated, which leads to a natural generalization to analyzing an arbitrary number of infectious diseases. For two-disease dynamics, the clustering effect is hypersensitive to the cohesiveness and concentration of cliques; this illustrates the impact of clustering and of the composition of subgraphs in networks on epidemic behavior. The analysis of coexistence/bistability regions provides significant insight into the relationship between the network structure and the potential epidemic prevalence.

  5. Tricriticality in the q-neighbor Ising model on a partially duplex clique.

    PubMed

    Chmiel, Anna; Sienkiewicz, Julian; Sznajd-Weron, Katarzyna

    2017-12-01

    We analyze a modified kinetic Ising model, a so-called q-neighbor Ising model, with Metropolis dynamics [Phys. Rev. E 92, 052105 (2015)] on a duplex clique and a partially duplex clique. In the q-neighbor Ising model each spin interacts only with q spins randomly chosen from its whole neighborhood. In the case of a duplex clique the change of a spin is allowed only if both levels simultaneously induce this change. Due to the mean-field-like nature of the model we are able to derive the analytic form of the transition probabilities and solve the corresponding master equation. The existence of the second level dramatically changes the character of the phase transition. In the case of the monoplex clique, the q-neighbor Ising model exhibits a continuous phase transition for q=3, a discontinuous phase transition for q≥4, and no phase transition for q=1 and q=2. In the case of the duplex clique, by contrast, continuous phase transitions are observed for all values of q, even for q=1 and q=2. Subsequently we introduce a partially duplex clique, parametrized by r∈[0,1], which allows us to tune the network from monoplex (r=0) to duplex (r=1). Such a generalized topology, in which a fraction r of all nodes appear on both levels, allows us to obtain the critical value r=r^{*}(q) at which tricriticality (a switch from a continuous to a discontinuous phase transition) appears.
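
    A minimal Monte Carlo sketch of the monoplex q-neighbor Ising dynamics described above, in plain Python; N, q, the temperature temp, and the sweep count are illustrative assumptions:

      import math
      import random

      random.seed(1)
      N, q, temp, sweeps = 200, 4, 1.0, 200
      s = [random.choice((-1, 1)) for _ in range(N)]
      for _ in range(N * sweeps):
          i = random.randrange(N)
          # q neighbors drawn at random from all other spins (a clique)
          nbrs = random.sample([j for j in range(N) if j != i], q)
          dE = 2 * s[i] * sum(s[j] for j in nbrs)   # energy cost of flipping s[i]
          if dE <= 0 or random.random() < math.exp(-dE / temp):
              s[i] = -s[i]                          # Metropolis acceptance
      print(abs(sum(s)) / N)                        # magnetization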

  6. Efficient Deployment of Key Nodes for Optimal Coverage of Industrial Mobile Wireless Networks

    PubMed Central

    Li, Xiaomin; Li, Di; Dong, Zhijie; Hu, Yage; Liu, Chengliang

    2018-01-01

    In recent years, industrial wireless networks (IWNs) have been transformed by the introduction of mobile nodes, and they now offer increased extensibility, mobility, and flexibility. Nevertheless, mobile nodes pose efficiency and reliability challenges. Efficient node deployment and management of channel interference directly affect network system performance, particularly for key node placement in clustered wireless networks. This study analyzes this system model, considering both industrial properties of wireless networks and their mobility. Then, static and mobile node coverage problems are unified and simplified to target coverage problems. We propose a novel strategy for the deployment of clustered heads in grouped industrial mobile wireless networks (IMWNs) based on the improved maximal clique model and the iterative computation of new candidate cluster head positions. The maximal cliques are obtained via a double-layer Tabu search. Each cluster head updates its new position via an improved virtual force while moving with full coverage to find the minimal inter-cluster interference. Finally, we develop a simulation environment. The simulation results, based on a performance comparison, show the efficacy of the proposed strategies and their superiority over current approaches. PMID:29439439

  7. Singing together or apart: The effect of competitive and cooperative singing on social bonding within and between sub-groups of a university Fraternity

    PubMed Central

    Pearce, Eiluned; Launay, Jacques; van Duijn, Max; Rotkirch, Anna; David-Barrett, Tamas; Dunbar, Robin I M

    2016-01-01

    Singing together seems to facilitate social bonding, but it is unclear whether this is true in all contexts. Here we examine the social bonding outcomes of naturalistic singing behaviour in a European university Fraternity composed of exclusive ‘Cliques’: recognised sub-groups of 5-20 friends who adopt a special name and identity. Singing occurs frequently in this Fraternity, both ‘competitively’ (contests between Cliques) and ‘cooperatively’ (multiple Cliques singing together). Both situations were re-created experimentally in order to explore how competitive and cooperative singing affects feelings of closeness towards others. Participants were assigned to teams of four and were asked to sing together with another team either from the same Clique or from a different Clique. Participants (N = 88) felt significantly closer to teams from different Cliques after singing with them compared to before, regardless of whether they cooperated with (singing loudly together) or competed against (trying to sing louder than) the other team. In contrast, participants reported reduced closeness with other teams from their own Clique after competing with them. These results indicate that group singing can increase closeness to less familiar individuals regardless of whether they share a common motivation, but that singing competitively may reduce closeness within a very tight-knit group. PMID:27777494

  8. Visual attention: low-level and high-level viewpoints

    NASA Astrophysics Data System (ADS)

    Stentiford, Fred W. M.

    2012-06-01

    This paper provides a brief outline of the approaches to modeling human visual attention. Bottom-up and top-down mechanisms are described together with some of the problems that they face. It has been suggested in brain science that memory functions by trading measurement precision for associative power; sensory inputs from the environment are never identical on separate occasions, but the associations with memory compensate for the differences. A graphical representation for image similarity is described that relies on the size of maximally associative structures (cliques) found between pairs of images. This is applied to the recognition of movie posters, the location and recognition of characters, and the recognition of faces. The similarity mechanism is shown to model popout effects when constraints are placed on the physical separation of pixels that correspond to nodes in the maximal cliques. The effect extends to modeling human visual behaviour on the Poggendorff illusion.

  9. Coping with Cliques

    MedlinePlus

    ... the outside and know that a clique is bullying or intimidating others, let teachers or counselors know ...

  10. Sociometric Clique Identification. Final Report.

    ERIC Educational Resources Information Center

    Kadushin, Charles

    This report consists of four parts. The first part is a non-technical summary of the basic problem and an attempted solution. The second part is a technical review of the literature and a description of the basic algorithm used in the solution. The third part describes the use of the Sociogram System. The fourth part describes the use of CHAIN, a…

  11. Matching and Vertex Packing: How Hard Are They?

    DTIC Science & Technology

    1991-01-01

    Theory, 29, Ann. Discrete Math., North-Holland, Amsterdam, 1986. [2] M.D. Plummer, Matching theory - a sampler: from Dénes Kőnig to the present... Ser. B, 28, 1980, 284-304. [20] N. Sbihi, Algorithme de recherche d'un stable de cardinalité maximum dans un graphe sans étoile, Discrete Math., 29... cliques and by finite families of graphs, Discrete Math., 49, 1984, 45-59. [92] G. Cornuéjols, D. Hartvigsen and W.R. Pulleyblank, Packing subgraphs in

  12. Markov Dynamics as a Zooming Lens for Multiscale Community Detection: Non Clique-Like Communities and the Field-of-View Limit

    PubMed Central

    Schaub, Michael T.; Delvenne, Jean-Charles; Yaliraki, Sophia N.; Barahona, Mauricio

    2012-01-01

    In recent years, there has been a surge of interest in community detection algorithms for complex networks. A variety of computational heuristics, some with a long history, have been proposed for the identification of communities or, alternatively, of good graph partitions. In most cases, the algorithms maximize a particular objective function, thereby finding the ‘right’ split into communities. Although a thorough comparison of algorithms is still lacking, there has been an effort to design benchmarks, i.e., random graph models with known community structure against which algorithms can be evaluated. However, popular community detection methods and benchmarks normally assume an implicit notion of community based on clique-like subgraphs, a form of community structure that is not always characteristic of real networks. Specifically, networks that emerge from geometric constraints can have natural non clique-like substructures with large effective diameters, which can be interpreted as long-range communities. In this work, we show that long-range communities escape detection by popular methods, which are blinded by a restricted ‘field-of-view’ limit, an intrinsic upper scale on the communities they can detect. The field-of-view limit means that long-range communities tend to be overpartitioned. We show how, by adopting a dynamical perspective towards community detection [1], [2], in which the evolution of a Markov process on the graph is used as a zooming lens over the structure of the network at all scales, one can detect both clique-like and non clique-like communities without imposing an upper scale on the detection. Consequently, the performance of algorithms on inherently low-diameter, clique-like benchmarks may not always be indicative of equally good results in real networks with local, sparser connectivity. We illustrate our ideas with constructive examples and through the analysis of real-world networks from imaging, protein structures and the power grid, where a multiscale structure of non clique-like communities is revealed. PMID:22384178

  13. The Path Resistance Method for Bounding the Smallest Nontrivial Eigenvalue of a Laplacian

    NASA Technical Reports Server (NTRS)

    Guattery, Stephen; Leighton, Tom; Miller, Gary L.

    1997-01-01

    We introduce the path resistance method for lower bounds on the smallest nontrivial eigenvalue of the Laplacian matrix of a graph. The method is based on viewing the graph in terms of electrical circuits; it uses clique embeddings to produce lower bounds on lambda_2 and star embeddings to produce lower bounds on the smallest Rayleigh quotient when there is a zero Dirichlet boundary condition. The method assigns priorities to the paths in the embedding; we show that, for an unweighted tree T, using uniform priorities for a clique embedding produces a lower bound on lambda_2 that is off by at most an O(log diameter(T)) factor. We show that the best bounds this method can produce for clique embeddings are the same as for a related method that uses clique embeddings and edge lengths to produce bounds.
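
    For reference, the quantity being bounded can be computed directly for small graphs; a sketch assuming networkx and numpy, with a path graph as an arbitrary example tree:

      import networkx as nx
      import numpy as np

      T = nx.path_graph(16)                  # an example tree
      A = nx.to_numpy_array(T)
      L = np.diag(A.sum(axis=1)) - A         # graph Laplacian L = D - A
      lam2 = np.linalg.eigvalsh(L)[1]        # eigenvalues ascend; index 1 is lambda_2
      print(lam2)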

  14. A tool for filtering information in complex systems

    NASA Astrophysics Data System (ADS)

    Tumminello, M.; Aste, T.; Di Matteo, T.; Mantegna, R. N.

    2005-07-01

    We introduce a technique to filter out complex data sets by extracting a subgraph of representative links. Such a filtering can be tuned up to any desired level by controlling the genus of the resulting graph. We show that this technique is especially suitable for correlation-based graphs, giving filtered graphs that preserve the hierarchical organization of the minimum spanning tree but containing a larger amount of information in their internal structure. In particular in the case of planar filtered graphs (genus equal to 0), triangular loops and four-element cliques are formed. The application of this filtering procedure to 100 stocks in the U.S. equity markets shows that such loops and cliques have important and significant relationships with the market structure and properties. This paper was submitted directly (Track II) to the PNAS office. Abbreviations: MST, minimum spanning tree; PMFG, Planar Maximally Filtered Graph; r-clique, clique of r elements.
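
    The PMFG itself requires planarity checks, but the minimum-spanning-tree structure that the filtered graphs are said to preserve is easy to sketch, assuming networkx and numpy; random data stand in for stock returns, and d = sqrt(2(1 - rho)) is the usual correlation-to-distance map:

      import networkx as nx
      import numpy as np

      rng = np.random.default_rng(1)
      X = rng.normal(size=(250, 20))        # 250 toy observations of 20 "stocks"
      C = np.corrcoef(X, rowvar=False)      # correlation matrix
      D = np.sqrt(2.0 * (1.0 - C))          # distance matrix
      mst = nx.minimum_spanning_tree(nx.from_numpy_array(D))
      print(mst.number_of_edges())          # a spanning tree: n - 1 = 19 edges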

  15. Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function.

    PubMed

    Reimann, Michael W; Nolte, Max; Scolamiero, Martina; Turner, Katharine; Perin, Rodrigo; Chindemi, Giuseppe; Dłotko, Paweł; Levi, Ran; Hess, Kathryn; Markram, Henry

    2017-01-01

    The lack of a formal link between neural network structure and its emergent function has hampered our understanding of how the brain processes information. We have now come closer to describing such a link by taking the direction of synaptic transmission into account, constructing graphs of a network that reflect the direction of information flow, and analyzing these directed graphs using algebraic topology. Applying this approach to a local network of neurons in the neocortex revealed a remarkably intricate and previously unseen topology of synaptic connectivity. The synaptic network contains an abundance of cliques of neurons bound into cavities that guide the emergence of correlated activity. In response to stimuli, correlated activity binds synaptically connected neurons into functional cliques and cavities that evolve in a stereotypical sequence toward peak complexity. We propose that the brain processes stimuli by forming increasingly complex functional cliques and cavities.

  16. Amino-Acid Network Clique Analysis of Protein Mutation Non-Additive Effects: A Case Study of Lysozyme.

    PubMed

    Ming, Dengming; Chen, Rui; Huang, He

    2018-05-10

    Optimizing amino-acid mutations in enzyme design has been a very challenging task in modern bio-industrial applications. It is well known that many successful designs often hinge on extensive correlations among mutations at different sites within the enzyme; however, the underpinning mechanism for these correlations is far from clear. Here, we present a topology-based model to quantitatively characterize non-additive effects between mutations. The method is based on molecular dynamics simulations and amino-acid network clique analysis. It examines whether the two mutation sites of a double-site mutation fall into a 3-clique structure, and associates this topological property of the mutation sites' spatial distribution with mutation additivity features. We analyzed 13 dual mutations of T4 phage lysozyme and found that the clique-based model successfully distinguishes highly correlated or non-additive double-site mutations from additive ones whose component mutations have less correlation. We also applied the model to the protein Eglin c, whose structural topology is significantly different from that of T4 phage lysozyme, and found that the model can, to some extent, still identify non-additive mutations from additive ones. Our calculations showed that mutation non-additive effects may heavily depend on a structural topology relationship between mutation sites, which can be quantitatively determined using amino-acid network k-cliques. We also showed that double-site mutation correlations can be significantly altered by exerting a third mutation, indicating that more detailed physicochemical interactions should be considered along with the network clique-based model for a better understanding of this elusive mutation-correlation principle.

  17. Communities as cliques

    PubMed Central

    Fried, Yael; Kessler, David A.; Shnerb, Nadav M.

    2016-01-01

    High-diversity species assemblages are very common in nature, and yet the factors allowing for the maintenance of biodiversity remain obscure. The competitive exclusion principle and May’s complexity-diversity puzzle both suggest that a community can support only a small number of species, turning the spotlight on the dynamics of local patches or islands, where stable and uninvadable (SU) subsets of species play a crucial role. Here we map the question of the number of different possible SUs a community can support to the geometric problem of finding maximal cliques of the corresponding graph. This enables us to solve for the number of SUs as a function of the species richness in the regional pool, N, showing that the growth of this number is subexponential in N, contrary to long-standing wisdom. To understand the dynamics under noise we examine the relaxation time to an SU. Symmetric systems relax rapidly, whereas in asymmetric systems the relaxation time grows much faster with N, suggesting an excitable dynamics under noise. PMID:27759102

  18. How can we establish more successful knowledge networks in developing countries? Lessons learnt from knowledge networks in Iran.

    PubMed

    Yazdizadeh, Bahareh; Majdzadeh, Reza; Alami, Ali; Amrolalaei, Sima

    2014-10-29

    Formal knowledge networks are considered among the solutions for strengthening knowledge translation and one of the elements of innovative systems in developing and developed countries. In the year 2000, knowledge networks were established in Iran's health system to organize, lead, empower, and coordinate efforts made by health-related research centers in the country. Since the assessment of a knowledge network is one of the main requirements for its success, the current study was designed with qualitative and quantitative sections to identify the strengths and weaknesses of the established knowledge networks and to assess their efficiency. In the qualitative section, semi-structured, in-depth interviews were held with network directors and secretaries. The interviews were analyzed through the framework approach. To analyze effectiveness, a social network analysis approach was used. That is, by considering the networks' research council members as 'nodes', and the numbers of their joint articles--before and after the networks' establishment--as 'relations or ties', indices of density, clique, and centrality were calculated for each network. In the qualitative section, non-transparency of management, lack of goals, and administrative problems were among the most prevalent issues observed. Currently, the most important challenges facing the networks are the policies related to them and their management. In the quantitative section, we observed that density and clique indices had risen for some networks; however, the centrality index for the same networks was not as high. Consequently, the attribution of the rise in density and clique indices to these networks was not possible. Therefore, consolidating and revising policies relevant to the networks and preparing a guide for establishing and managing networks could prove helpful. To develop knowledge and technology in a country, networks need to solve the problems they face in management and governance. This is the first step towards the realization of true knowledge networks in the health system.

  19. Cooperation, Competition, and the Structure of Student Cliques.

    ERIC Educational Resources Information Center

    Hansell, Stephen; And Others

    Research indicates substantial evidence that, compared with competition, cooperation increases mutual friendliness and contact between individuals. The effects of cooperative and competitive experiences on the structure of student cliques in the classroom were examined. Seven classrooms of fourth-, fifth-, and sixth-grade students (N=117) were…

  20. Influence of Adolescent Social Cliques on Vocational Identity.

    ERIC Educational Resources Information Center

    Johnson, John A.; Cheek, Jonathan M.

    While Holland's (1973) theory of personality types and vocational identity is widely used, the theory does not specify the developmental antecedents of the six personality types. To examine the relationship between membership in adolescent social cliques and vocational identity in early adulthood, four groups of college students (N=192)…

  1. Finding Hierarchical and Overlapping Dense Subgraphs using Nucleus Decompositions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seshadhri, Comandur; Pinar, Ali; Sariyuce, Ahmet Erdem

    Finding dense substructures in a graph is a fundamental graph mining operation, with applications in bioinformatics, social networks, and visualization, to name a few. Yet most standard formulations of this problem (like clique, quasi-clique, k-densest subgraph) are NP-hard. Furthermore, the goal is rarely to find the "true optimum", but to identify many (if not all) dense substructures, understand their distribution in the graph, and ideally determine a hierarchical structure among them. Current dense subgraph finding algorithms usually optimize some objective, and only find a few such subgraphs without providing any hierarchy. It is also not clear how to account for overlaps in dense substructures. We define the nucleus decomposition of a graph, which represents the graph as a forest of nuclei. Each nucleus is a subgraph where smaller cliques are present in many larger cliques. The forest of nuclei is a hierarchy by containment, where the edge density increases as we proceed towards leaf nuclei. Sibling nuclei can have limited intersections, which allows for discovery of overlapping dense subgraphs. With the right parameters, the nucleus decomposition generalizes the classic notions of k-cores and k-trusses. We give provably efficient algorithms for nucleus decompositions, and empirically evaluate their behavior in a variety of real graphs. The tree of nuclei consistently gives a global, hierarchical snapshot of dense substructures, and outputs dense subgraphs of higher quality than other state-of-the-art solutions. Our algorithm can process graphs with tens of millions of edges in less than an hour.
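
    The classic k-core decomposition that nucleus decompositions generalize is available in networkx (an assumption; the karate club graph is simply a convenient built-in example):

      import networkx as nx

      G = nx.karate_club_graph()
      cores = nx.core_number(G)        # vertex -> largest k with the vertex in a k-core
      kmax = max(cores.values())
      print(kmax, sorted(v for v, k in cores.items() if k == kmax))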

  2. The Evolution of Children's Friendship Cliques.

    ERIC Educational Resources Information Center

    Hallinan, Maureen T.

    This paper investigates the formation and evolution of friendship cliques among preadolescent youth in elementary and junior high grades 4 through 8. Two sets of data were collected: the first set consisted of cross sectional data from 51 classes (grades 5 through 8); the second set contained sociometric data collected from 11 classes (grades 4…

  3. Changing Neighborhood and Clique Structure in Two Missouri Communities, 1955-66.

    ERIC Educational Resources Information Center

    Lionberger, Herbert F.; Yeh, Chii-jeng

    A study was conducted of two Missouri communities to investigate neighborhood change between 1956 and 1966 and social cliques as possible emerging replacements for neighborhoods. Ozark, in an economically disadvantaged southern part of the State, has experienced drastic farm changes, from general to dairy farming and later to enterprises more…

  4. Project Trust: Breaking down Barriers between Middle School Children

    ERIC Educational Resources Information Center

    Batiuk, Mary Ellen; Boland, James A.; Wilcox, Norma

    2004-01-01

    This paper analyzes the success of a camp retreat weekend called Project Trust involving middle school students and teachers. The goal of the camp is to break down barriers between cliques identified as active in the school. The camp focuses on building team relationships across clique membership and incorporates elements of peace education and…

  5. Sparse cliques trump scale-free networks in coordination and competition

    PubMed Central

    Gianetto, David A.; Heydari, Babak

    2016-01-01

    Cooperative behavior, a natural, pervasive and yet puzzling phenomenon, can be significantly enhanced by networks. Many studies have shown how global network characteristics affect cooperation; however, it is difficult to understand how this occurs from global factors alone, and low-level network building blocks, or motifs, are necessary. In this work, we systematically alter the structure of scale-free and clique networks and show, through a stochastic evolutionary game theory model, that cooperation on cliques increases linearly with community motif count. We further show that, for reactive stochastic strategies, network modularity improves cooperation in the anti-coordination Snowdrift game and the Prisoner’s Dilemma game but not in the Stag Hunt coordination game. We also confirm the negative effect of the scale-free graph on cooperation when effective payoffs are used. On the flip side, clique graphs are highly cooperative across social environments. Adding cycles to the acyclic scale-free graph increases cooperation when multiple games are considered; however, cycles have the opposite effect on how forgiving agents are when playing the Prisoner’s Dilemma game. PMID:26899456

  6. cWINNOWER algorithm for finding fuzzy DNA motifs

    NASA Technical Reports Server (NTRS)

    Liang, S.; Samanta, M. P.; Biegel, B. A.

    2004-01-01

    The cWINNOWER algorithm detects fuzzy motifs in DNA sequences rich in protein-binding signals. A signal is defined as any short nucleotide pattern having up to d mutations differing from a motif of length l. The algorithm finds such motifs if a clique consisting of a sufficiently large number of mutated copies of the motif (i.e., the signals) is present in the DNA sequence. The cWINNOWER algorithm substantially improves the sensitivity of the winnower method of Pevzner and Sze by imposing a consensus constraint, enabling it to detect much weaker signals. We studied the minimum detectable clique size qc as a function of sequence length N for random sequences. We found that qc increases linearly with N for a fast version of the algorithm based on counting three-member sub-cliques. Imposing consensus constraints reduces qc by a factor of three in this case, which makes the algorithm dramatically more sensitive. Our most sensitive algorithm, which counts four-member sub-cliques, needs a minimum of only 13 signals to detect motifs in a sequence of length N = 12,000 for (l, d) = (15, 4). Copyright Imperial College Press.

  7. Sparse cliques trump scale-free networks in coordination and competition

    NASA Astrophysics Data System (ADS)

    Gianetto, David A.; Heydari, Babak

    2016-02-01

    Cooperative behavior, a natural, pervasive and yet puzzling phenomenon, can be significantly enhanced by networks. Many studies have shown how global network characteristics affect cooperation; however, it is difficult to understand how this occurs from global factors alone, and low-level network building blocks, or motifs, are necessary. In this work, we systematically alter the structure of scale-free and clique networks and show, through a stochastic evolutionary game theory model, that cooperation on cliques increases linearly with community motif count. We further show that, for reactive stochastic strategies, network modularity improves cooperation in the anti-coordination Snowdrift game and the Prisoner’s Dilemma game but not in the Stag Hunt coordination game. We also confirm the negative effect of the scale-free graph on cooperation when effective payoffs are used. On the flip side, clique graphs are highly cooperative across social environments. Adding cycles to the acyclic scale-free graph increases cooperation when multiple games are considered; however, cycles have the opposite effect on how forgiving agents are when playing the Prisoner’s Dilemma game.

  8. Young Children's Cliques: A Study on Processes of Peer Acceptance and Cliques Aggregation

    ERIC Educational Resources Information Center

    Brighi, Antonella; Mazzanti, Chiara; Guarini, Annalisa; Sansavini, Alessandra

    2015-01-01

    A considerable amount of research has examined the link between children's peer acceptance, which refers to the degree of likability within the peer group, social functioning and emotional wellbeing, at a same age and in a long term perspective, pointing out to the contribution of peer acceptance for mental wellbeing. Our study proposes a…

  9. Acceleration of Binding Site Comparisons by Graph Partitioning.

    PubMed

    Krotzky, Timo; Klebe, Gerhard

    2015-08-01

    The comparison of protein binding sites is a prominent task in computational chemistry and has been studied in many different ways. For the automatic detection and comparison of putative binding cavities, the Cavbase system has been developed, which uses a coarse-grained set of pseudocenters to represent the physicochemical properties of a binding site and employs a graph-based procedure to calculate similarities between two binding sites. However, the comparison of two graphs is computationally quite demanding, which makes large-scale studies such as the rapid screening of entire databases hardly feasible. In a recent work, we proposed the method Local Cliques (LC) for the efficient comparison of Cavbase binding sites. It employs a clique heuristic to detect the maximum common subgraph of two binding sites and an extended graph model to additionally compare the shape of individual surface patches. In this study, we present an alternative that further accelerates the LC method by partitioning the binding-site graphs into disjoint components prior to their comparison. The pseudocenter sets are split with regard to their assigned physicochemical type, which leads to seven much smaller graphs than the original one. Applying this approach to the same test scenarios as in the former study results in a significant speed-up without sacrificing accuracy. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. MASTtreedist: visualization of tree space based on maximum agreement subtree.

    PubMed

    Huang, Hong; Li, Yongji

    2013-01-01

    The phylogenetic tree construction process might produce many candidate trees as the "best estimates." As the number of constructed phylogenetic trees grows, the need to efficiently compare their topological or physical structures arises. One of the tree comparison software tools, Mesquite's Tree Set Viz module, allows the rapid and efficient visualization of tree comparison distances using multidimensional scaling (MDS). Tree-distance measures for the topological distance among different trees, such as Robinson-Foulds (RF), have been implemented in Tree Set Viz. New and more sophisticated measures such as the Maximum Agreement Subtree (MAST) can be built on top of Tree Set Viz. MAST can detect the common substructures among trees and provide more precise information on the similarity of the trees, but it is NP-hard and difficult to implement. In this article, we present a practical tree-distance metric: MASTtreedist, a MAST-based comparison metric in Mesquite's Tree Set Viz module. In this metric, efficient optimizations for the maximum weight clique problem are applied. The results suggest that the proposed method can efficiently compute the MAST distances among trees, and that such tree topological differences can be translated into a scatter of points in two-dimensional (2D) space. We also provide a statistical evaluation of the proposed measures with respect to RF using experimental data sets. This new comparison module provides a tree-tree pairwise comparison metric based on the differences in the number of MAST leaves among constructed phylogenetic trees. Such a metric improves the visualization of taxa differences by discriminating small divergences of subtree structures for phylogenetic tree reconstruction.
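
    The maximum weight clique computation mentioned above can be sketched with networkx's max_weight_clique (assuming NetworkX 2.4 or later); the toy compatibility graph and its integer node weights are illustrative only:

      import networkx as nx

      G = nx.Graph()
      G.add_nodes_from([(0, {"weight": 3}), (1, {"weight": 2}),
                        (2, {"weight": 4}), (3, {"weight": 1})])
      G.add_edges_from([(0, 1), (0, 2), (1, 2), (2, 3)])
      clique, weight = nx.max_weight_clique(G, weight="weight")
      print(clique, weight)        # nodes {0, 1, 2}, total weight 9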

  11. Considering Popular Fiction and Library Practices of Recommendation: The Literary Status of "The Clique" and Its Historical Progenitors

    ERIC Educational Resources Information Center

    Pattee, Amy

    2008-01-01

    The Clique, a contemporary popular series for girls, has been criticized in the popular and professional media but includes thematic content similar to some of the more lauded mid-nineteenth-century domestic fiction for girls. By making a formal comparison of this popular series with domestic fiction for girls (much of which is now considered…

  12. The Use of British Nursery Rhymes and Contemporary Technology as Venues for Creating and Expressing Hidden Literacies throughout Time by Children, Adolescents, and Adults

    ERIC Educational Resources Information Center

    Hazlett, Lisa A.

    2009-01-01

    Power and status are captivating, especially the desire for social status and its commensurate authority and security. Cliques, smaller clusters within larger peer groups sharing similar views, behaviors, and attitudes, are a means of attaining societal power. Because cliques are typically composed of the disenfranchised holding views different…

  13. Co-Workers' Perceptions of an Employee with Severe Disabilities: An Analysis of Social Interactions in a Work Setting.

    ERIC Educational Resources Information Center

    Yan, Xiaoyan; And Others

    1993-01-01

    This study explains the methodology of clique analysis and presents a study in which the use of clique analysis demonstrated that an employee with severe disabilities was perceived by co-workers as socially involved in the work setting at levels comparable to others in such areas as greetings and small talk, work-related conversation, and personal…

  14. Flow motifs reveal limitations of the static framework to represent human interactions

    NASA Astrophysics Data System (ADS)

    Rocha, Luis E. C.; Blondel, Vincent D.

    2013-04-01

    Networks are commonly used to define underlying interaction structures where infections, information, or other quantities may spread. Although the standard approach has been to aggregate all links into a static structure, some studies have shown that the time order in which the links are established may alter the dynamics of spreading. In this paper, we study the impact of time ordering on the limits of flow in various empirical temporal networks. By using a random walk dynamics, we estimate the flow on links and convert the original undirected network (temporal and static) into a directed flow network. We then introduce the concept of flow motifs and quantify the divergence in the representativity of motifs when using the temporal and static frameworks. We find that the regularity of contacts and persistence of vertices (common in email communication and face-to-face interactions) result in little difference in the limits of flow between the two frameworks. On the other hand, in the case of communication within a dating site and of a sexual network, the flow between vertices changes significantly in the temporal framework, such that the static approximation poorly represents the structure of contacts. We have also observed that cliques with 3 and 4 vertices containing only low-flow links are more represented than the same cliques with all high-flow links. The representativity of these low-flow cliques is higher in the temporal framework. Our results suggest that the flow between vertices connected in cliques depends on the topological context in which they are placed and on the time sequence in which the links are established. The structure of the clique alone does not completely characterize the potential of flow between the vertices.

  15. Combinatorial Problems of Applied Discrete Mathematics.

    DTIC Science & Technology

    1979-12-01

    ... (30) J. Steiner, Combinatorische Aufgabe, J. Reine Angew. Math. 45 (1853) 181-182. (31) K. Takeuchi, A table of difference sets generating... Assoc. Fr. Av. Sci. 1 (1900) 122-123; 2 (1901) 170-203. (33) R.M. Wilson, Cyclotomy and difference families in elementary Abelian groups, J. Number... the different cliques containing either A or B. Let us first introduce the following notations. If A is a vertex in G, then Γ(A) denotes the set of

  16. Max-margin weight learning for medical knowledge network.

    PubMed

    Jiang, Jingchi; Xie, Jing; Zhao, Chao; Su, Jia; Guan, Yi; Yu, Qiubin

    2018-03-01

    The application of medical knowledge strongly affects the performance of intelligent diagnosis, and the method of learning the weights of medical knowledge plays a substantial role in probabilistic graphical models (PGMs). The purpose of this study is to investigate a discriminative weight-learning method based on a medical knowledge network (MKN). We propose a training model called the maximum margin medical knowledge network (M3KN), which is strictly derived for calculating the weight of medical knowledge. Using the definition of a reasonable margin, the weight learning can be transformed into a margin optimization problem. To solve the optimization problem, we adopt a sequential minimal optimization (SMO) algorithm and the clique property of a Markov network. Ultimately, M3KN not only incorporates the inference ability of PGMs but also deals with high-dimensional logic knowledge. The experimental results indicate that M3KN obtains a higher F-measure score than the maximum likelihood learning algorithm of MKN for both Chinese Electronic Medical Records (CEMRs) and Blood Examination Records (BERs). Furthermore, the proposed approach is clearly superior to some classical machine learning algorithms for medical diagnosis. To adequately manifest the importance of domain knowledge, we numerically verify that the diagnostic accuracy of M3KN gradually improves as the number of learned CEMRs, which contain important medical knowledge, increases. Our experimental results show that the proposed method performs reliably for learning the weights of medical knowledge. M3KN outperforms other existing methods by achieving an F-measure of 0.731 for CEMRs and 0.4538 for BERs. This further illustrates that M3KN can facilitate investigations of intelligent healthcare. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Clique-based data mining for related genes in a biomedical database.

    PubMed

    Matsunaga, Tsutomu; Yonemori, Chikara; Tomita, Etsuji; Muramatsu, Masaaki

    2009-07-01

    Progress in the life sciences cannot be made without integrating biomedical knowledge on numerous genes in order to help formulate hypotheses on the genetic mechanisms behind various biological phenomena, including diseases. There is thus a strong need for a way to automatically and comprehensively search from biomedical databases for related genes, such as genes in the same families and genes encoding components of the same pathways. Here we address the extraction of related genes by searching for densely-connected subgraphs, which are modeled as cliques, in a biomedical relational graph. We constructed a graph whose nodes were gene or disease pages, and edges were the hyperlink connections between those pages in the Online Mendelian Inheritance in Man (OMIM) database. We obtained over 20,000 sets of related genes (called 'gene modules') by enumerating cliques computationally. The modules included genes in the same family, genes for proteins that form a complex, and genes for components of the same signaling pathway. The results of experiments using 'metabolic syndrome'-related gene modules show that the gene modules can be used to get a coherent holistic picture helpful for interpreting relations among genes. We presented a data mining approach extracting related genes by enumerating cliques. The extracted gene sets provide a holistic picture useful for comprehending complex disease mechanisms.
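
    A sketch of the clique enumeration step, assuming networkx; the node names below are hypothetical placeholders, not actual OMIM entries:

      import networkx as nx

      # Toy hyperlink-style relational graph between gene and disease pages.
      G = nx.Graph()
      G.add_edges_from([("geneA", "geneB"), ("geneA", "diseaseX"),
                        ("geneB", "diseaseX"), ("geneB", "geneC"),
                        ("geneC", "diseaseX")])
      # Candidate "modules": cliques with at least three members.
      modules = [c for c in nx.enumerate_all_cliques(G) if len(c) >= 3]
      print(modules)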

  18. Robust Dehaze Algorithm for Degraded Image of CMOS Image Sensors.

    PubMed

    Qu, Chen; Bi, Du-Yan; Sui, Ping; Chao, Ai-Nong; Wang, Yun-Fei

    2017-09-22

    The CMOS (Complementary Metal-Oxide-Semiconductor) sensor is a type of solid-state image sensor widely used in object tracking, object recognition, intelligent navigation, and related fields. However, images captured by outdoor CMOS sensor devices are usually affected by suspended atmospheric particles (such as haze), causing reduced image contrast, color distortion, and other problems. In view of this, we propose a novel dehazing approach based on a locally consistent Markov random field (MRF) framework. The neighboring clique in a traditional MRF is extended to the non-neighboring clique, defined on locally consistent blocks based on two clues: both the atmospheric light and the transmission map satisfy the property of local consistency. In this framework, our model can strengthen the restriction on the whole image while incorporating more sophisticated statistical priors, resulting in more expressive modeling power, effectively addressing inadequate detail recovery and alleviating color distortion. Moreover, the locally consistent MRF framework recovers details while maintaining better dehazing results, which effectively improves the quality of images captured by the CMOS image sensor. Experimental results verified that the proposed method has the combined advantages of detail recovery and color preservation.

  19. Modeling the Webgraph: How Far We Are

    NASA Astrophysics Data System (ADS)

    Donato, Debora; Laura, Luigi; Leonardi, Stefano; Millozzi, Stefano

    The following sections are included: * Introduction * Preliminaries * WebBase * In-degree and out-degree * PageRank * Bipartite cliques * Strongly connected components * Stochastic models of the webgraph * Models of the webgraph * A multi-layer model * Large scale simulation * Algorithmic techniques for generating and measuring webgraphs * Data representation and multifiles * Generating webgraphs * Traversal with two bits for each node * Semi-external breadth first search * Semi-external depth first search * Computation of the SCCs * Computation of the bow-tie regions * Disjoint bipartite cliques * PageRank * Summary and outlook

  20. Listing All Maximal Cliques in Sparse Graphs in Near-optimal Time

    DTIC Science & Technology

    2011-01-01

    [Table fragment: vertex and edge counts for biological networks, including Arabidopsis thaliana, Drosophila melanogaster, Homo sapiens, and Schizosaccharomyces pombe] ... clusters of actors [6,14,28,40] and may be used as features in exponential random graph models for statistical analysis of social networks [17,19,20,44,49... 29. R. Horaud and T. Skordas. Stereo correspondence through feature grouping and maximal cliques. IEEE Trans. Patt. An. Mach. Int. 11(11):1168-1180
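
    Since the report concerns maximal clique listing, a compact Bron-Kerbosch sketch with pivoting may be useful; the degeneracy-ordering refinement it analyzes is omitted for brevity, and adj is a toy adjacency map:

      # Bron-Kerbosch with pivoting: yields every maximal clique exactly once.
      def bron_kerbosch(adj, R=frozenset(), P=None, X=frozenset()):
          if P is None:
              P = frozenset(adj)
          if not P and not X:
              yield R                     # R cannot be extended: maximal
              return
          pivot = max(P | X, key=lambda u: len(adj[u] & P))
          for v in P - adj[pivot]:
              yield from bron_kerbosch(adj, R | {v}, P & adj[v], X & adj[v])
              P = P - {v}
              X = X | {v}

      adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
      print(list(bron_kerbosch(adj)))     # cliques {1, 2, 3} and {3, 4}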

  1. World scientific collaboration in coronary heart disease research.

    PubMed

    Yu, Qi; Shao, Hongfang; He, Peifeng; Duan, Zhiguang

    2013-08-10

    Coronary heart disease (CHD) will continue to impose a heavy burden on countries all over the world. Scientific collaboration has become indispensable for progress in biomedicine. Unfortunately, there is a scarcity of scientific publications about scientific collaboration in CHD research. This study examines collaboration behaviors across multiple collaboration types in CHD research. 294,756 records about CHD were retrieved from Web of Science. Methods such as co-authorship analysis, social network analysis, connected components, cliques, and betweenness centrality were used in this study. Collaborations have increased at the author, institution, and country/region levels in CHD research over the past three decades. The 3000 most collaborative authors, 572 most collaborative institutions, and 52 countries/regions were extracted from their corresponding collaboration networks. 766 cliques were found among the most collaborative authors, and 308 cliques among the most collaborative institutions. Western countries/regions represent the core of the world's collaboration. The United States ranks first in terms of the number of multi-national publications, while Hungary leads in the ranking by proportion of collaborative output. The rate of economic development in the countries/regions also affects multi-national collaboration behavior. Collaborations among countries/regions need to be encouraged in CHD research. The overlapping cliques among the most collaborative authors and institutions are considered the "skeleton" of the collaboration network. Eastern countries/regions should strengthen cooperation with western countries/regions in CHD research. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  2. Visual Odometry Based on Structural Matching of Local Invariant Features Using Stereo Camera Sensor

    PubMed Central

    Núñez, Pedro; Vázquez-Martín, Ricardo; Bandera, Antonio

    2011-01-01

    This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing the so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first one between sets of features associated with the images of the stereo pairs, and the second one between sets of features associated with consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching sessions are conducted by means of a fast matching algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows the scene view to be represented as a graph which emerges from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutive acquired frames. The proposed approach has been tested for mobile robotics navigation purposes in real environments and using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robot fields. PMID:22164016

  3. Incremental k-core decomposition: Algorithms and evaluation

    DOE PAGES

    Sariyuce, Ahmet Erdem; Gedik, Bugra; Jacques-SIlva, Gabriela; ...

    2016-02-01

    A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time. As a result, it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs, at varying scales. Furthermore, for a graph of 16 million vertices, we observe relative throughputs reaching a million times that of the non-incremental algorithms.

  4. Mapping the distribution of packing topologies within protein interiors shows predominant preference for specific packing motifs

    PubMed Central

    2011-01-01

    Background: Mapping protein primary sequences to their three-dimensional folds, referred to as the 'second genetic code', remains an unsolved scientific problem. A crucial part of the problem concerns the geometrical specificity in side chain association leading to densely packed protein cores, a hallmark of correctly folded native structures. Thus, any model of packing within proteins should constitute an indispensable component of protein folding and design. Results: In this study an attempt has been made to find, characterize and classify recurring patterns in the packing of side chain atoms within a protein which sustains its native fold. The interaction of side chain atoms within the protein core has been represented as a contact network based on the surface complementarity and overlap between associating side chain surfaces. Some network topologies definitely appear to be preferred and they have been termed 'packing motifs', analogous to super secondary structures in proteins. Study of the distribution of these motifs reveals the ubiquitous presence of typical smaller graphs, which appear to get linked or coalesce to give larger graphs, reminiscent of the nucleation-condensation model in protein folding. One such frequently occurring motif, also envisaged as the unit of clustering, the three residue clique, was invariably found in regions of dense packing. Finally, topological measures based on surface contact networks appeared to be effective in discriminating sequences native to a specific fold amongst a set of decoys. Conclusions: Out of innumerable topological possibilities, only a finite number of specific packing motifs are actually realized in proteins. This small number of motifs could serve as a basis set in the construction of larger networks. Of these, the triplet clique exhibits distinct preference both in terms of composition and geometry. PMID:21605466

  5. Investigating Patterns of Participation in an Online Support Group for Problem Drinking: a Social Network Analysis.

    PubMed

    Urbanoski, Karen; van Mierlo, Trevor; Cunningham, John

    2017-10-01

    This study contributes to emerging literature on online health networks by modeling communication patterns between members of a moderated online support group for problem drinking. Using social network analysis, we described members' patterns of joint participation in threads, parsing out the role of site moderators, and explored differences in member characteristics by network position. Posts made to the online support group of Alcohol Help Centre during 2013 were structured as a two-mode network of members (n = 205) connected via threads (n = 506). Metrics included degree centrality, clique membership, and tie strength. The network consisted of one component and no cliques of members, although most made few posts and a small number communicated only with the site's moderators. Highly active members were older and tended to have started posting prior to 2013. The distribution of members across threads varied from threads containing posts by one member to others that connected multiple members. Moderators accounted for sizable proportions of the connectivity between both members and threads. After 5 years of operation, the AHC online support group appears to be fairly cohesive and stable, in the sense that there were no isolated subnetworks comprised of specific types of members or devoted to specific topics. Participation and connectedness at the member-level was varied, however, and tended to be low on average. The moderators were among the most central in the network, although there were also members who emerged as central and dedicated contributors to the online discussions across topics. Study findings highlight a number of areas for consideration by online support group developers and managers.

  6. Portfolios in Stochastic Local Search: Efficiently Computing Most Probable Explanations in Bayesian Networks

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole J.; Roth, Dan; Wilkins, David C.

    2001-01-01

    Portfolio methods support the combination of different algorithms and heuristics, including stochastic local search (SLS) heuristics, and have been identified as a promising approach to solve computationally hard problems. While successful in experiments, theoretical foundations and analytical results for portfolio-based SLS heuristics are less developed. This article aims to improve the understanding of the role of portfolios of heuristics in SLS. We emphasize the problem of computing most probable explanations (MPEs) in Bayesian networks (BNs). Algorithmically, we discuss a portfolio-based SLS algorithm for MPE computation, Stochastic Greedy Search (SGS). SGS supports the integration of different initialization operators (or initialization heuristics) and different search operators (greedy and noisy heuristics), thereby enabling new analytical and experimental results. Analytically, we introduce a novel Markov chain model tailored to portfolio-based SLS algorithms including SGS, thereby enabling us to derive expected hitting time results that explain empirical run time results. For a specific BN, we show the benefit of using a homogeneous initialization portfolio. To further illustrate the portfolio approach, we consider novel additive search heuristics for handling determinism in the form of zero entries in conditional probability tables in BNs. Our additive approach adds rather than multiplies probabilities when computing the utility of an explanation. We motivate the additive measure by studying the dramatic impact of zero entries in conditional probability tables on the number of zero-probability explanations, which again complicates the search process. We consider the relationship between MAXSAT and MPE, and show that additive utility (or gain) is a generalization, to the probabilistic setting, of the MAXSAT utility (or gain) used in the celebrated GSAT and WalkSAT algorithms and their descendants. Utilizing our Markov chain framework, we show that expected hitting time is a rational function - i.e., a ratio of two polynomials - of the probability of applying an additive search operator. Experimentally, we report on synthetically generated BNs as well as BNs from applications, and compare SGS's performance to that of Hugin, which performs BN inference by compilation to and propagation in clique trees. On synthetic networks, SGS speeds up computation by approximately two orders of magnitude compared to Hugin. In application networks, our approach is highly competitive in Bayesian networks with a high degree of determinism. In addition to showing that stochastic local search can be competitive with clique tree clustering, our empirical results provide an improved understanding of the circumstances under which portfolio-based SLS outperforms clique tree clustering and vice versa.

  7. The persistence of cliques in the post-communist state. The case of deniability in drug reimbursement policy in Poland.

    PubMed

    Ozierański, Piotr; King, Lawrence

    2016-06-01

    This article explores a key question in political sociology: Can post-communist policy-making be described with classical theories of the Western state or do we need a theory of the specificity of the post-communist state? In so doing, we consider Janine Wedel's clique theory, concerned with informal social actors and processes in post-communist transition. We conducted a case study of drug reimbursement policy in Poland, using 109 stakeholder interviews, official documents and media coverage. Drawing on 'sensitizing concepts' from Wedel's theory, especially the notion of 'deniability', we developed an explanation of why Poland's reimbursement policy combined suboptimal outcomes, procedural irregularities with limited accountability of key stakeholders. We argue that deniability was created through four main mechanisms: (1) blurred boundaries between different types of state authority allowing for the dispersion of blame for controversial policy decisions; (2) bridging different sectors by 'institutional nomads', who often escaped existing conflicts of interest regulations; (3) institutional nomads' 'flexible' methods of influence premised on managing roles and representations; and (4) coordination of resources and influence by elite cliques monopolizing exclusive policy expertise. Overall, the greatest power over drug reimbursement was often associated with lowest accountability. We suggest, therefore, that the clique theory can be generalized from its home domain of explanation in foreign aid and privatizations to more technologically advanced policies in Poland and other post-communist countries. This conclusion is not identical, however, with arguing the uniqueness of the post-communist state. Rather, we show potential for using Wedel's account to analyse policy-making in Western democracies and indicate scope for its possible integration with the classical theories of the state. © London School of Economics and Political Science 2016.

  8. Mathematical properties and bounds on haplotyping populations by pure parsimony.

    PubMed

    Wang, I-Lin; Chang, Chia-Yuan

    2011-06-01

    Although haplotype data can be used to analyze the function of DNA, collecting them requires significant effort, so usually genotype data are collected and the population haplotype inference (PHI) problem is then solved to infer haplotype data from genotype data for a population. This paper investigates the PHI problem based on the pure parsimony criterion (HIPP), which seeks the minimum number of distinct haplotypes that explains a given set of genotype data. We analyze the mathematical structure and properties of the HIPP problem, propose techniques to reduce the given genotype data into an equivalent set of much smaller size, and analyze the relations of genotype data using a compatibility graph. Based on the mathematical properties of the compatibility graph, we propose a maximal clique heuristic to obtain an upper bound, and a new polynomial-sized integer linear programming formulation to obtain a lower bound for the HIPP problem. Copyright © 2011 Elsevier Inc. All rights reserved.
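
    A sketch of the kind of greedy maximal-clique heuristic such a bound can rely on, in plain Python; the adjacency map is a toy compatibility graph, not genotype data:

      # Greedily grow a clique: always add the candidate with the most
      # neighbors among the remaining candidates.
      def greedy_clique(adj):
          clique, candidates = [], set(adj)
          while candidates:
              v = max(candidates, key=lambda u: len(adj[u] & candidates))
              clique.append(v)
              candidates &= adj[v]        # keep only common neighbors
          return clique

      adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2, 4}, 4: {1, 3}}
      print(greedy_clique(adj))           # a maximal clique, e.g. [1, 3, 2]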

  9. Computational social network modeling of terrorist recruitment.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Nina M.; Turnley, Jessica Glicken; Smrcka, Julianne D.

    2004-10-01

    The Seldon terrorist model represents a multi-disciplinary approach to developing organization software for the study of terrorist recruitment and group formation. The need to incorporate aspects of social science added a significant contribution to the vision of the resulting Seldon toolkit. The unique addition of an abstract agent category provided a means for capturing social concepts like cliques, mosques, etc. in a manner that represents their social conceptualization and not simply as physical or economic institutions. This paper provides an overview of the Seldon terrorist model developed to study the formation of cliques, which are used as the major recruitment entity for terrorist organizations.

  10. Phase transitions in the q -voter model with noise on a duplex clique

    NASA Astrophysics Data System (ADS)

    Chmiel, Anna; Sznajd-Weron, Katarzyna

    2015-11-01

    We study a nonlinear q-voter model with stochastic noise, interpreted in the social context as independence, on a duplex network. To study the role of multilevelness in this model we propose three methods of transferring the model from a mono- to a multiplex network. They take into account two criteria: one related to the status of independence (LOCAL vs GLOBAL) and one related to peer pressure (AND vs OR). In order to examine the influence of the presence of more than one level in the social network, we perform simulations on a particularly simple multiplex: a duplex clique, which consists of two fully overlapping complete graphs (cliques). Solving the rate equation numerically and simultaneously conducting Monte Carlo simulations, we provide evidence that even a simple rearrangement into a duplex topology may lead to significant changes in the observed behavior. However, qualitative changes in the phase transitions can be observed for only one of the considered rules: LOCAL&AND. For this rule the phase transition becomes discontinuous for q = 5, whereas for a monoplex such behavior is observed for q = 6. Interestingly, only this rule admits construction of realistic variants of the model, in line with recent social experiments.
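
    As a point of reference for the simulations described above, here is a minimal Monte Carlo sketch of the q-voter model with independence on a single clique (a monoplex baseline only; the duplex rules LOCAL/GLOBAL and AND/OR would add a second clique and a rule for combining the two levels, which is not attempted here; all parameter values are illustrative):

```python
import random

def qvoter_clique_mc(N=100, q=5, p=0.10, steps=50_000, rng=random.Random(7)):
    """Monte Carlo sketch of the q-voter model with independence (noise p)
    on a single complete graph, i.e. a monoplex clique of N agents."""
    spins = [1] * N                          # start from consensus
    for _ in range(steps):
        i = rng.randrange(N)
        if rng.random() < p:                 # independence: act at random
            spins[i] = rng.choice((-1, 1))
        else:                                # conformity: copy a unanimous q-panel
            panel = rng.sample([j for j in range(N) if j != i], q)
            if len({spins[j] for j in panel}) == 1:
                spins[i] = spins[panel[0]]
    return sum(spins) / N                    # magnetization

print(qvoter_clique_mc())
```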

  11. Empirical Study of User Preferences Based on Rating Data of Movies

    PubMed Central

    Zhao, YingSi; Shen, Bo

    2016-01-01

    User preference plays a prominent role in many fields, including electronic commerce, social opinion, and Internet search engines. Particularly in recommender systems, it directly influences the accuracy of the recommendation. Though many methods have been presented, most of these have only focused on how to improve the recommendation results. In this paper, we introduce an empirical study of user preferences based on a set of rating data about movies. We develop a simple statistical method to investigate the characteristics of user preferences. We find that the movies have potential characteristics of closure, which results in the formation of numerous cliques with a power-law size distribution. We also find that a user related to a small clique always has similar opinions on the movies in this clique. Then, we suggest a user preference model, which can eliminate the predictions that are considered to be impracticable. Numerical results show that the model can reflect user preference with remarkable accuracy when data elimination is allowed, and random factors in the rating data make prediction error inevitable. In further research, we will investigate many other rating data sets to examine the universality of our findings. PMID:26735847

  12. Extracting Communities from Complex Networks by the k-Dense Method

    NASA Astrophysics Data System (ADS)

    Saito, Kazumi; Yamada, Takeshi; Kazama, Kazuhiro

    To understand the structural and functional properties of large-scale complex networks, it is crucial to efficiently extract a set of cohesive subnetworks as communities. Several such community extraction methods have been proposed in the literature, including the classical k-core decomposition method and, more recently, the k-clique based community extraction method. The k-core method, although computationally efficient, is often not powerful enough to uncover a detailed community structure, and it produces only coarse-grained and loosely connected communities. The k-clique method, on the other hand, can extract fine-grained and tightly connected communities but requires a substantial amount of computation for large-scale complex networks. In this paper, we present a new notion of a subnetwork called k-dense, and propose an efficient algorithm for extracting k-dense communities. We applied our method to three different types of networks assembled from real data, namely, blog trackbacks, word associations and Wikipedia references, and demonstrated that the k-dense method can extract communities almost as efficiently as the k-core method, while the quality of the extracted communities is comparable to that obtained by the k-clique method.
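
    A sketch of the k-dense condition follows, assuming the definition that every edge of a k-dense subnetwork joins vertices with at least k-2 common neighbors inside it (for k = 3 this reduces to requiring every edge to lie in a triangle). The naive edge-pruning loop below is illustrative only; the published algorithm is considerably more efficient, and networkx is assumed for the graph container:

```python
import networkx as nx

def k_dense_subgraph(G, k):
    """Repeatedly remove edges whose endpoints share fewer than k-2
    common neighbors; the surviving components are k-dense candidates."""
    H = G.copy()
    changed = True
    while changed:
        changed = False
        for u, v in list(H.edges()):
            if len(set(H[u]) & set(H[v])) < k - 2:
                H.remove_edge(u, v)
                changed = True
    H.remove_nodes_from([n for n in list(H) if H.degree(n) == 0])
    return H

G = nx.karate_club_graph()
H = k_dense_subgraph(G, k=4)
print(sorted(nx.connected_components(H), key=len, reverse=True))
```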

  13. Road networks as collections of minimum cost paths

    NASA Astrophysics Data System (ADS)

    Wegner, Jan Dirk; Montoya-Zegarra, Javier Alexander; Schindler, Konrad

    2015-10-01

    We present a probabilistic representation of network structures in images. Our target application is the extraction of urban roads from aerial images. Roads appear as thin, elongated, partially curved structures forming a loopy graph, and this complex layout requires a prior that goes beyond standard smoothness and co-occurrence assumptions. In the proposed model the network is represented as a union of 1D paths connecting distant (super-)pixels. A large set of putative candidate paths is constructed in such a way that they include the true network as much as possible, by searching for minimum cost paths in the foreground (road) likelihood. Selecting the optimal subset of candidate paths is posed as MAP inference in a higher-order conditional random field. Each path forms a higher-order clique with a clique potential that attracts the member nodes of cliques with high cumulative road evidence to the foreground label. That formulation induces a robust P^N Potts model, for which a global MAP solution can be found efficiently with graph cuts. Experiments with two road data sets show that the proposed model significantly improves per-pixel accuracies as well as the overall topological network quality with respect to several baselines.

  14. Empirical Study of User Preferences Based on Rating Data of Movies.

    PubMed

    Zhao, YingSi; Shen, Bo

    2016-01-01

    User preference plays a prominent role in many fields, including electronic commerce, social opinion, and Internet search engines. Particularly in recommender systems, it directly influences the accuracy of the recommendation. Though many methods have been presented, most of these have only focused on how to improve the recommendation results. In this paper, we introduce an empirical study of user preferences based on a set of rating data about movies. We develop a simple statistical method to investigate the characteristics of user preferences. We find that the movies have potential characteristics of closure, which results in the formation of numerous cliques with a power-law size distribution. We also find that a user related to a small clique always has similar opinions on the movies in this clique. Then, we suggest a user preference model, which can eliminate the predictions that are considered to be impracticable. Numerical results show that the model can reflect user preference with remarkable accuracy when data elimination is allowed, and random factors in the rating data make prediction error inevitable. In further research, we will investigate many other rating data sets to examine the universality of our findings.

  15. Community Detection in Complex Networks via Clique Conductance.

    PubMed

    Lu, Zhenqi; Wahlström, Johan; Nehorai, Arye

    2018-04-13

    Network science plays a central role in understanding and modeling complex systems in many areas including physics, sociology, biology, computer science, economics, politics, and neuroscience. One of the most important features of networks is community structure, i.e., clustering of nodes that are locally densely interconnected. Communities reveal the hierarchical organization of nodes, and detecting communities is of great importance in the study of complex systems. Most existing community-detection methods consider low-order connection patterns at the level of individual links. But high-order connection patterns, at the level of small subnetworks, are generally not considered. In this paper, we develop a novel community-detection method based on cliques, i.e., local complete subnetworks. The proposed method overcomes the deficiencies of previous similar community-detection methods by considering the mathematical properties of cliques. We apply the proposed method to computer-generated graphs and real-world network datasets. When applied to networks with known community structure, the proposed method detects the structure with high fidelity and sensitivity. When applied to networks with no a priori information regarding community structure, the proposed method yields insightful results revealing the organization of these complex networks. We also show that the proposed method is guaranteed to detect near-optimal clusters in the bipartition case.

  16. A tool for filtering information in complex systems

    PubMed Central

    Tumminello, M.; Aste, T.; Di Matteo, T.; Mantegna, R. N.

    2005-01-01

    We introduce a technique to filter out complex data sets by extracting a subgraph of representative links. Such a filtering can be tuned up to any desired level by controlling the genus of the resulting graph. We show that this technique is especially suitable for correlation-based graphs, giving filtered graphs that preserve the hierarchical organization of the minimum spanning tree but containing a larger amount of information in their internal structure. In particular in the case of planar filtered graphs (genus equal to 0), triangular loops and four-element cliques are formed. The application of this filtering procedure to 100 stocks in the U.S. equity markets shows that such loops and cliques have important and significant relationships with the market structure and properties. PMID:16027373
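
    The genus-0 case of this filtering can be sketched in a few lines. This is a naive reconstruction, not the authors' implementation: pairs are scanned from strongest to weakest correlation and an edge is kept only if planarity survives, using networkx's planarity test:

```python
import networkx as nx

def planar_filtered_graph(weights):
    """Greedy genus-0 filtering sketch: add edges in decreasing order of
    correlation, dropping any edge that would destroy planarity.
    weights: dict mapping (a, b) -> correlation."""
    G = nx.Graph()
    for (a, b), w in sorted(weights.items(), key=lambda kv: -kv[1]):
        G.add_edge(a, b, weight=w)
        is_planar, _ = nx.check_planarity(G)
        if not is_planar:
            G.remove_edge(a, b)
    return G

w = {("A", "B"): 0.9, ("A", "C"): 0.8, ("B", "C"): 0.7,
     ("C", "D"): 0.6, ("B", "D"): 0.5, ("A", "D"): 0.4}
print(sorted(planar_filtered_graph(w).edges()))
```

    Since a planar graph on n vertices has at most 3(n - 2) edges, the filtered graph retains roughly three times as many links as the n - 1 edges of the minimum spanning tree, which is where the triangular loops and four-element cliques come from.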

  17. A tool for filtering information in complex systems.

    PubMed

    Tumminello, M; Aste, T; Di Matteo, T; Mantegna, R N

    2005-07-26

    We introduce a technique to filter out complex data sets by extracting a subgraph of representative links. Such a filtering can be tuned up to any desired level by controlling the genus of the resulting graph. We show that this technique is especially suitable for correlation-based graphs, giving filtered graphs that preserve the hierarchical organization of the minimum spanning tree but containing a larger amount of information in their internal structure. In particular in the case of planar filtered graphs (genus equal to 0), triangular loops and four-element cliques are formed. The application of this filtering procedure to 100 stocks in the U.S. equity markets shows that such loops and cliques have important and significant relationships with the market structure and properties.

  18. Functional cliques in the amygdala and related brain networks driven by fear assessment acquired during movie viewing.

    PubMed

    Kinreich, Sivan; Intrator, Nathan; Hendler, Talma

    2011-01-01

    One of the greatest challenges involved in studying the brain mechanisms of fear is capturing the individual's unique instantaneous experience. Brain imaging studies to date commonly sacrifice valuable information regarding the individual real-time conscious experience, especially when focusing on elucidating the amygdala's activity. Here, we assumed that by using a minimally intrusive cue along with applying a robust clustering approach to probe the amygdala, it would be possible to rate fear in real time and to derive the related network of activation. During functional magnetic resonance imaging scanning, healthy volunteers viewed two excerpts from horror movies and were periodically auditory cued to rate their instantaneous experience of "I'm scared." Using graph theory and community mathematical concepts, data-driven clustering of the fear-related functional cliques in the amygdala was performed guided by the individually marked periods of heightened fear. Individually tailored functions derived from these amygdala activation cliques were subsequently applied as general linear model predictors to a whole-brain analysis to reveal the correlated networks. Our results suggest that by using a localized robust clustering approach, it is possible to probe activation in the right dorsal amygdala that is directly related to individual real-time emotional experience. Moreover, this fear-evoked amygdala revealed two opposing networks of co-activation and co-deactivation, which correspond to vigilance and rest-related circuits, respectively.

  19. Coming Out

    MedlinePlus

    ... about them. They're afraid they'll face bullying, harassment, discrimination, or even violence. Their families don' ...

  20. On the Parameterized Complexity of Some Optimization Problems Related to Multiple-Interval Graphs

    NASA Astrophysics Data System (ADS)

    Jiang, Minghui

    We show that for any constant t ≥ 2, k-Independent Set and k-Dominating Set in t-track interval graphs are W[1]-hard. This settles an open question recently raised by Fellows, Hermelin, Rosamond, and Vialette. We also give an FPT algorithm for k-Clique in t-interval graphs, parameterized by both k and t, with running time max{t^O(k), 2^O(k log k)} · poly(n), where n is the number of vertices in the graph. This slightly improves the previous FPT algorithm by Fellows, Hermelin, Rosamond, and Vialette. Finally, we use the W[1]-hardness of k-Independent Set in t-track interval graphs to obtain the first parameterized intractability result for a recent bioinformatics problem called Maximal Strip Recovery (MSR). We show that MSR-d is W[1]-hard for any constant d ≥ 4 when the parameter is either the total length of the strips, or the total number of adjacencies in the strips, or the number of strips in the optimal solution.

  1. Data-Driven Packet Loss Estimation for Node Healthy Sensing in Decentralized Cluster.

    PubMed

    Fan, Hangyu; Wang, Huandong; Li, Yong

    2018-01-23

    Decentralized clustering of modern information technology has been widely adopted in various fields in recent years. One of the main reasons is its high availability and failure tolerance, which prevent a single point of failure from bringing down the entire system. Recently, toolkits such as Akka have been widely used to easily build this kind of cluster. However, clusters of this kind, which use Gossip as their membership management protocol and rely on a link-failure detection mechanism, cannot deal with the scenario in which a node stochastically drops packets and corrupts the membership status of the cluster. In this paper, we formulate the problem as evaluating link quality and finding a maximum clique (an NP-complete problem) in the connectivity graph. We then propose an algorithm consisting of two models, driven by data from the application layer, that solve these two problems respectively. Through simulations with statistical data and a real-world product, we demonstrate that our algorithm performs well.
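
    For scale, a maximum clique in a small connectivity graph can be found with a plain Bron-Kerbosch enumeration, sketched below (illustrative only; the paper's contribution is the data-driven link-quality modeling, not this textbook search):

```python
def max_clique(adj):
    """Bron-Kerbosch enumeration (no pivoting), keeping the largest clique
    seen. adj: dict mapping node -> set of neighbors. Exponential in the
    worst case, as expected for an NP-complete problem."""
    best = set()
    def expand(R, P, X):
        nonlocal best
        if not P and not X:
            if len(R) > len(best):
                best = set(R)
            return
        for v in list(P):
            expand(R | {v}, P & adj[v], X & adj[v])
            P.remove(v)
            X.add(v)
    expand(set(), set(adj), set())
    return best

# Toy connectivity graph: nodes 0-3 are mutually reachable, node 4 hangs off 3.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3}}
print(max_clique(adj))  # {0, 1, 2, 3}
```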

  2. Transformation of general binary MRF minimization to the first-order case.

    PubMed

    Ishikawa, Hiroshi

    2011-06-01

    We introduce a transformation of a general higher-order Markov random field with binary labels into a first-order one that has the same minima as the original. Moreover, we formalize a framework for approximately minimizing higher-order multi-label MRF energies that combines the new reduction with the fusion-move and QPBO algorithms. While many computer vision problems today are formulated as energy minimization problems, they have mostly been limited to using first-order energies, which consist of unary and pairwise clique potentials, with a few exceptions that consider triples. This is because of the lack of efficient algorithms to optimize energies with higher-order interactions. Our algorithm challenges this restriction, which limits the representational power of the models, so that higher-order energies can be used to capture the rich statistics of natural scenes. We also show that some minimization methods can be considered special cases of the present framework, and we compare the new method experimentally with other such techniques.

  3. Inferring gene ontologies from pairwise similarity data

    PubMed Central

    Kramer, Michael; Dutkowski, Janusz; Yu, Michael; Bafna, Vineet; Ideker, Trey

    2014-01-01

    Motivation: While the manually curated Gene Ontology (GO) is widely used, inferring a GO directly from -omics data is a compelling new problem. Recognizing that ontologies are a directed acyclic graph (DAG) of terms and hierarchical relations, algorithms are needed that: (i) analyze a full matrix of gene–gene pairwise similarities from -omics data; (ii) infer true hierarchical structure in these data rather than enforcing hierarchy as a computational artifact; and (iii) respect biological pleiotropy, by which a term in the hierarchy can relate to multiple higher-level terms. Methods addressing these requirements are just beginning to emerge; none has been evaluated for GO inference. Methods: We consider two algorithms [Clique Extracted Ontology (CliXO), LocalFitness] that uniquely satisfy these requirements, compared with methods including standard clustering. CliXO is a new approach that finds maximal cliques in a network induced by progressive thresholding of a similarity matrix. We evaluate each method's ability to reconstruct the GO biological process ontology from a similarity matrix based on (a) semantic similarities for GO itself or (b) three -omics datasets for yeast. Results: For task (a) using semantic similarity, CliXO accurately reconstructs GO (>99% precision, recall) and outperforms other approaches (<20% precision, <20% recall). For task (b) using -omics data, CliXO outperforms other methods using two -omics datasets and achieves ∼30% precision and recall using YeastNet v3, similar to an earlier approach (Network Extracted Ontology) and better than LocalFitness or standard clustering (20–25% precision, recall). Conclusion: This study provides an algorithmic foundation for building gene ontologies by capturing hierarchical and pleiotropic structure embedded in biomolecular data. Contact: tideker@ucsd.edu PMID:24932003
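
    The thresholding step at the heart of CliXO can be sketched as follows. This is a simplification that only enumerates maximal cliques per cutoff; the real algorithm also aligns cliques across thresholds into ontology terms. networkx and the toy similarity values are assumptions:

```python
import networkx as nx

def threshold_cliques(sim, thresholds):
    """For each similarity cutoff (highest first), build the graph of gene
    pairs scoring at or above it and enumerate its maximal cliques.
    sim: dict mapping frozenset({g1, g2}) -> similarity score."""
    for t in sorted(thresholds, reverse=True):
        G = nx.Graph(tuple(pair) for pair, s in sim.items() if s >= t)
        yield t, list(nx.find_cliques(G))

sim = {frozenset(p): s for p, s in [(("a", "b"), 0.9), (("b", "c"), 0.8),
                                    (("a", "c"), 0.7), (("c", "d"), 0.3)]}
for t, cliques in threshold_cliques(sim, [0.9, 0.7, 0.3]):
    print(t, cliques)
```

    Lowering the threshold merges small, specific cliques into larger, more general ones, which is what gives the inferred ontology its hierarchical structure.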

  4. Correlation filtering in financial time series (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Aste, T.; Di Matteo, Tiziana; Tumminello, M.; Mantegna, R. N.

    2005-05-01

    We apply a method to filter relevant information from the correlation coefficient matrix by extracting a network of relevant interactions. This method succeeds in generating networks with the same hierarchical structure as the Minimum Spanning Tree but containing a larger number of links, resulting in a richer network topology that allows loops and cliques. In Tumminello et al.,1 we have shown that this method, applied to a financial portfolio of 100 stocks in the USA equity markets, is quite efficient at filtering relevant information about the clustering of the system and its hierarchical structure, both for the whole system and within each cluster. In particular, we have found that triangular loops and 4-element cliques have important and significant relations with the market structure and properties. Here we apply this filtering procedure to the analysis of correlations in two different kinds of interest rate time series (16 Eurodollars and 34 US interest rates).

  5. Structural Transitions in Densifying Networks

    NASA Astrophysics Data System (ADS)

    Lambiotte, R.; Krapivsky, P. L.; Bhat, U.; Redner, S.

    2016-11-01

    We introduce a minimal generative model for densifying networks in which a new node attaches to a randomly selected target node and also to each of its neighbors with probability p. The networks that emerge from this copying mechanism are sparse for p < 1/2 and dense (average degree increasing with the number of nodes N) for p ≥ 1/2. The behavior in the dense regime is especially rich; for example, individual network realizations that are built by copying are disparate and not self-averaging. Further, there is an infinite sequence of structural anomalies at p = 2/3, 3/4, 4/5, etc., where the N dependences of the numbers of triangles (3-cliques), 4-cliques, and so on undergo phase transitions. When linking to second neighbors of the target can occur, the probability that the resulting graph is complete, i.e. all nodes are connected, is nonzero as N → ∞.
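
    A minimal implementation of the copying mechanism makes the densification easy to observe (second-neighbor linking is omitted; parameter values are illustrative):

```python
import random

def copying_graph(N, p, seed=0):
    """Densifying copying model sketch: each new node links to a random
    target and, independently with probability p, to each of the target's
    current neighbors. Returns a node -> set-of-neighbors adjacency dict."""
    rng = random.Random(seed)
    nbrs = {0: {1}, 1: {0}}
    for new in range(2, N):
        target = rng.randrange(new)
        links = {target} | {w for w in nbrs[target] if rng.random() < p}
        nbrs[new] = set(links)
        for w in links:
            nbrs[w].add(new)
    return nbrs

g = copying_graph(10_000, 0.6)
print(sum(len(v) for v in g.values()) / len(g))  # average degree; grows with N for p >= 1/2
```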

  6. Higher-order clustering in networks

    NASA Astrophysics Data System (ADS)

    Yin, Hao; Benson, Austin R.; Leskovec, Jure

    2018-05-01

    A fundamental property of complex networks is the tendency for edges to cluster. The extent of the clustering is typically quantified by the clustering coefficient, which is the probability that a length-2 path is closed, i.e., induces a triangle in the network. However, higher-order cliques beyond triangles are crucial to understanding complex networks, and the clustering behavior with respect to such higher-order network structures is not well understood. Here we introduce higher-order clustering coefficients that measure the closure probability of higher-order network cliques and provide a more comprehensive view of how the edges of complex networks cluster. Our higher-order clustering coefficients are a natural generalization of the traditional clustering coefficient. We derive several properties about higher-order clustering coefficients and analyze them under common random graph models. Finally, we use higher-order clustering coefficients to gain new insights into the structure of real-world networks from several domains.
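
    For reference, the traditional clustering coefficient that these higher-order coefficients generalize can be computed directly; the sketch below counts wedges (length-2 paths) and closed wedges. The higher-order analogue would ask, roughly, whether an l-clique adjacent to an extra edge closes into an (l+1)-clique; only the classical l = 2 case is computed here:

```python
def global_clustering(adj):
    """Traditional global clustering coefficient: the fraction of wedges
    (length-2 paths) that are closed into triangles.
    adj: dict mapping node -> set of neighbors."""
    wedges = closed = 0
    for v, nb in adj.items():
        nb = list(nb)
        for i in range(len(nb)):
            for j in range(i + 1, len(nb)):
                wedges += 1                       # wedge centered at v
                if nb[j] in adj[nb[i]]:
                    closed += 1                   # its endpoints are linked
    return closed / wedges if wedges else 0.0

# One triangle (0-1-2) with a pendant node 3: 3 of 5 wedges are closed.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(global_clustering(adj))  # 0.6
```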

  7. An Amino Acid Code for β-sheet Packing Structure

    PubMed Central

    Joo, Hyun; Tsai, Jerry

    2014-01-01

    To understand the relationship between protein sequence and structure, this work extends the knob-socket model in an investigation of β-sheet packing. Over a comprehensive set of β-sheet folds, the contacts between residues were used to identify packing cliques: sets of residues that all contact each other. These packing cliques were then classified based on size and contact order. From this analysis, the 2 types of 4 residue packing cliques necessary to describe β-sheet packing were characterized. Both occur between 2 adjacent hydrogen bonded β-strands. First, defining the secondary structure packing within β-sheets, the combined socket or XY:HG pocket consists of 4 residues i,i+2 on one strand and j,j+2 on the other. Second, characterizing the tertiary packing between β-sheets, the knob-socket XY:H+B consists of a 3 residue XY:H socket (i,i+2 on one strand and j on the other) packed against a knob B residue (residue k distant in sequence). Depending on the packing depth of the knob B residue, 2 types of knob-sockets are found: side-chain and main-chain sockets. The amino acid composition of the pockets and knob-sockets reveal the sequence specificity of β-sheet packing. For β-sheet formation, the XY:HG pocket clearly shows sequence specificity of amino acids. For tertiary packing, the XY:H+B side-chain and main-chain sockets exhibit distinct amino acid preferences at each position. These relationships define an amino acid code for β-sheet structure and provide an intuitive topological mapping of β-sheet packing. PMID:24668690

  8. Fast determination of structurally cohesive subgroups in large networks

    PubMed Central

    Sinkovits, Robert S.; Moody, James; Oztan, B. Tolga; White, Douglas R.

    2016-01-01

    Structurally cohesive subgroups are a powerful and mathematically rigorous way to characterize network robustness. Their strength lies in the ability to detect strong connections among vertices that not only have no neighbors in common, but that may be distantly separated in the graph. Unfortunately, identifying cohesive subgroups is a computationally intensive problem, which has limited empirical assessments of cohesion to relatively small graphs of at most a few thousand vertices. We describe here an approach that exploits the properties of cliques, k-cores and vertex separators to iteratively reduce the complexity of the graph to the point where standard algorithms can be used to complete the analysis. As a proof of principle, we apply our method to the cohesion analysis of a 29,462-vertex biconnected component extracted from a 128,151-vertex co-authorship data set. PMID:28503215
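
    One of the reductions mentioned above, trimming to the k-core, is easy to sketch: a vertex outside the k-core cannot belong to a subgroup of cohesion k, so such vertices can be deleted before any expensive connectivity analysis (illustrative code only; the paper combines this with clique and vertex-separator arguments):

```python
def k_core_trim(adj, k):
    """Iteratively delete vertices of degree < k, returning the k-core.
    adj: dict mapping node -> set of neighbors (left unmodified)."""
    adj = {v: set(nb) for v, nb in adj.items()}
    queue = [v for v, nb in adj.items() if len(nb) < k]
    while queue:
        v = queue.pop()
        if v not in adj:
            continue                      # already removed
        for u in adj.pop(v):
            adj[u].discard(v)
            if len(adj[u]) < k:
                queue.append(u)
    return adj

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(sorted(k_core_trim(adj, 2)))  # [0, 1, 2]: node 3 cannot be 2-cohesive
```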

  9. Applications of graph theory in protein structure identification

    PubMed Central

    2011-01-01

    There is a growing interest in the identification of proteins on the proteome-wide scale. Among the different kinds of protein structure identification methods, graph-theoretic methods are particularly sharp. Because of their lower cost, greater effectiveness and many other advantages, they have drawn increasing attention from researchers. Specifically, graph-theoretic methods have been widely used in homology identification, side-chain cluster identification, peptide sequencing and so on. This paper reviews several methods for solving protein structure identification problems using graph theory. We mainly introduce classical methods and mathematical models, including homology modeling based on clique finding, identification of side-chain clusters in protein structures based on the graph spectrum, and de novo peptide sequencing via tandem mass spectrometry using the spectrum graph model. In addition, concluding remarks and future priorities for each method are given. PMID:22165974

  10. A heuristic for efficient data distribution management in distributed simulation

    NASA Astrophysics Data System (ADS)

    Gupta, Pankaj; Guha, Ratan K.

    2005-05-01

    In this paper, we propose an algorithm for reducing the complexity of region matching and for efficient multicasting in the data distribution management component of the High Level Architecture (HLA) Run Time Infrastructure (RTI). Current data distribution management (DDM) techniques rely on computing the intersection between subscription and update regions. When a subscription region and an update region of different federates overlap, RTI establishes communication between the publisher and the subscriber. It subsequently routes updates from the publisher to the subscriber. The proposed algorithm computes the update/subscription region matching for dynamic allocation of multicast groups. It provides new multicast routines that exploit the connectivity of the federation by communicating updates regarding interactions and routing information only to those federates that require them. The region-matching problem in DDM reduces to the clique-covering problem via a connection-graph abstraction in which federates represent the vertices and update/subscribe relations represent the edges. We develop an abstract model based on the connection graph for data distribution management. Using this abstract model, we propose a heuristic for solving the region-matching problem of DDM. We also provide a complexity analysis of the proposed heuristic.
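
    The clique-cover view of region matching can be illustrated with a simple greedy heuristic that assigns one multicast group per clique of mutually overlapping federates (a sketch only: minimum clique cover is NP-hard, and the paper's heuristic differs in its details):

```python
def greedy_clique_cover(adj):
    """Greedy clique cover sketch: repeatedly grow a clique from an
    uncovered vertex; each clique gets one multicast group.
    adj: dict mapping federate -> set of overlapping federates."""
    uncovered = set(adj)
    groups = []
    while uncovered:
        v = uncovered.pop()
        clique = {v}
        for u in sorted(uncovered):
            if all(u in adj[w] for w in clique):
                clique.add(u)
        uncovered -= clique
        groups.append(clique)
    return groups

# Toy overlap graph: federates 0-2 mutually overlap; 3 overlaps only 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(greedy_clique_cover(adj))  # e.g. [{0, 1, 2}, {3}]: two multicast groups
```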

  11. An integer programming formulation of the parsimonious loss of heterozygosity problem.

    PubMed

    Catanzaro, Daniele; Labbé, Martine; Halldórsson, Bjarni V

    2013-01-01

    A loss of heterozygosity (LOH) event occurs when, by the laws of Mendelian inheritance, an individual should be heterozygous at a given site but, due to a deletion polymorphism, is not. Deletions play an important role in human disease and their detection could provide fundamental insights for the development of new diagnostics and treatments. In this paper, we investigate the parsimonious loss of heterozygosity problem (PLOHP), i.e., the problem of partitioning suspected polymorphisms from a set of individuals into a minimum number of deletion areas. Specifically, we generalize Halldórsson et al.'s work by providing a more general formulation of the PLOHP and by showing how one can incorporate different recombination rates and prior knowledge about the locations of deletions. Moreover, we show that the PLOHP can be formulated as a specific version of the clique partition problem in a particular class of graphs called undirected catch-point interval graphs, and we prove its general NP-hardness. Finally, we provide a state-of-the-art integer programming (IP) formulation and strengthening valid inequalities to exactly solve real instances of the PLOHP containing up to 9,000 individuals and 3,000 SNPs. Our results give perspectives on the mathematics of the PLOHP and suggest new directions for the development of future efficient exact solution approaches.

  12. Office Politics

    ERIC Educational Resources Information Center

    Storm, Paula; Kelly, Robert; deVries, Susann

    2008-01-01

    People and organizations are inherently political. Library workplace environments have zones of tension and dynamics just like any corporation, often leading to the formation of political camps. These different cliques influence productivity and work-related issues and, at worst, give meetings the feel of the Camp David negotiations. Politics are…

  13. Optimal parallel solution of sparse triangular systems

    NASA Technical Reports Server (NTRS)

    Alvarado, Fernando L.; Schreiber, Robert

    1990-01-01

    A method for the parallel solution of triangular sets of equations is described that is appropriate when there are many right-hand sides. By preprocessing, the method can reduce the number of parallel steps required to solve Lx = b compared with parallel forward- or back-substitution. Applications are to iterative solvers with triangular preconditioners, to structural analysis, and to power systems applications, where there may be many right-hand sides (not all available a priori). The inverse of L is represented as a product of sparse triangular factors. The problem is to find a factored representation of this inverse of L with the smallest number of factors (or partitions), subject to the requirement that no new nonzero elements be created in the formation of these inverse factors. A method from an earlier reference is shown to solve this problem. This method is improved upon by constructing a permutation of the rows and columns of L that preserves triangularity and allows for the best possible such partition. A number of practical examples and algorithmic details are presented. The attainable parallelism is illustrated by means of elimination trees and clique trees.

  14. Data-Driven Packet Loss Estimation for Node Healthy Sensing in Decentralized Cluster

    PubMed Central

    Fan, Hangyu; Wang, Huandong; Li, Yong

    2018-01-01

    Decentralized clustering of modern information technology has been widely adopted in various fields in recent years. One of the main reasons is its high availability and failure tolerance, which prevent a single point of failure from bringing down the entire system. Recently, toolkits such as Akka have been widely used to easily build this kind of cluster. However, clusters of this kind, which use Gossip as their membership management protocol and rely on a link-failure detection mechanism, cannot deal with the scenario in which a node stochastically drops packets and corrupts the membership status of the cluster. In this paper, we formulate the problem as evaluating link quality and finding a maximum clique (an NP-complete problem) in the connectivity graph. We then propose an algorithm consisting of two models, driven by data from the application layer, that solve these two problems respectively. Through simulations with statistical data and a real-world product, we demonstrate that our algorithm performs well. PMID:29360792

  15. 3D Markov Process for Traffic Flow Prediction in Real-Time.

    PubMed

    Ko, Eunjeong; Ahn, Jinyoung; Kim, Eun Yi

    2016-01-25

    Recently, the correct estimation of traffic flow has begun to be considered an essential component of intelligent transportation systems. In this paper, a new statistical method to predict traffic flows using time series analyses and geometric correlations is proposed. The novelty of the proposed method is two-fold: (1) a 3D heat map is designed to describe the traffic conditions between roads, which can effectively represent the correlations between spatially and temporally adjacent traffic states; and (2) the relationship between adjacent roads in the spatiotemporal domain is represented by cliques in an MRF, and the clique parameters are obtained by example-based learning. To assess the validity of the proposed method, it is tested using expressway traffic data provided by the Korean Expressway Corporation, and the performance of the proposed method is compared with existing approaches. The results demonstrate that the proposed method can predict traffic conditions with an accuracy of 85%, and this accuracy can be improved further.

  16. 3D Markov Process for Traffic Flow Prediction in Real-Time

    PubMed Central

    Ko, Eunjeong; Ahn, Jinyoung; Kim, Eun Yi

    2016-01-01

    Recently, the correct estimation of traffic flow has begun to be considered an essential component of intelligent transportation systems. In this paper, a new statistical method to predict traffic flows using time series analyses and geometric correlations is proposed. The novelty of the proposed method is two-fold: (1) a 3D heat map is designed to describe the traffic conditions between roads, which can effectively represent the correlations between spatially and temporally adjacent traffic states; and (2) the relationship between adjacent roads in the spatiotemporal domain is represented by cliques in an MRF, and the clique parameters are obtained by example-based learning. To assess the validity of the proposed method, it is tested using expressway traffic data provided by the Korean Expressway Corporation, and the performance of the proposed method is compared with existing approaches. The results demonstrate that the proposed method can predict traffic conditions with an accuracy of 85%, and this accuracy can be improved further. PMID:26821025

  17. Clique of Functional Hubs Orchestrates Population Bursts in Developmentally Regulated Neural Networks

    PubMed Central

    Luccioli, Stefano; Ben-Jacob, Eshel; Barzilai, Ari; Bonifazi, Paolo; Torcini, Alessandro

    2014-01-01

    It has recently been discovered that single neuron stimulation can impact network dynamics in immature and adult neuronal circuits. Here we report a novel mechanism which can explain in neuronal circuits, at an early stage of development, the peculiar role played by a few specific neurons in promoting/arresting the population activity. For this purpose, we consider a standard neuronal network model, with short-term synaptic plasticity, whose population activity is characterized by bursting behavior. The addition of developmentally inspired constraints and correlations in the distribution of the neuronal connectivities and excitabilities leads to the emergence of functional hub neurons, whose stimulation/deletion is critical for the network activity. Functional hubs form a clique, where a precise sequential activation of the neurons is essential to ignite collective events without any need for a specific topological architecture. Unsupervised time-lagged firings of supra-threshold cells, in connection with coordinated entrainments of near-threshold neurons, are the key ingredients to orchestrate population activity. PMID:25255443

  18. Paradoxical Inequalities: Adolescent Peer Relations in Indian Secondary Schools

    ERIC Educational Resources Information Center

    Milner, Murray, Jr.

    2013-01-01

    Peer relationships in secondary schools in two different cultural areas of India are compared. A general theory of status relations and a specification of the distinctive cultural features of each area are used to explain the observed differences in peer inequality, clique formation, petty deviance, putdowns, fashion consciousness, romantic…

  19. Incorporating Covariates into Stochastic Blockmodels

    ERIC Educational Resources Information Center

    Sweet, Tracy M.

    2015-01-01

    Social networks in education commonly involve some form of grouping, such as friendship cliques or teacher departments, and blockmodels are a type of statistical social network model that accommodate these grouping or blocks by assuming different within-group tie probabilities than between-group tie probabilities. We describe a class of models,…

  20. Teenage Social Relationships: Effect on Social Adjustment

    ERIC Educational Resources Information Center

    Wyche, Yolandria; McGahey, James Todd; Jenkins, Marvin

    2017-01-01

    High school female students often face challenges transitioning to high school. Many possible obstacles exist, but some female students may experience difficulty maintaining interpersonal relationships with their female peers. It is very common for high school settings to contain various types of social cliques. In some…

  1. NEEDED RESEARCH ON DIFFUSION WITHIN EDUCATIONAL ORGANIZATIONS.

    ERIC Educational Resources Information Center

    JAIN, NEMI C.; ROGERS, EVERETT M.

    IN SPITE OF THE VOLUME OF RESEARCH ATTENTION DEVOTED TO THE DIFFUSION OF INNOVATIONS, RELATIVELY LITTLE EMPHASIS HAS BEEN PLACED UPON DIFFUSION WITHIN ORGANIZATIONAL STRUCTURES. METHODOLOGICALLY, RELATIONAL ANALYSIS IN WHICH THE UNIT OF ANALYSIS IS A TWO-PERSON INTERACTING PAIR, A MULTIPLE PERSON COMMUNICATION CHAIN, OR CLIQUES OR SUBSYSTEMS IS…

  2. Discovering protein complexes in protein interaction networks via exploring the weak ties effect

    PubMed Central

    2012-01-01

    Background: Studying protein complexes is very important in biological processes since it helps reveal the structure-functionality relationships in biological networks, and much attention has been paid to accurately predicting protein complexes from the increasing amount of protein-protein interaction (PPI) data. Most of the available algorithms are based on the assumption that dense subgraphs correspond to complexes, failing to take into account the inherent organization within protein complexes and the roles of edges. Thus, there is a critical need to investigate the possibility of discovering protein complexes using the topological information hidden in edges. Results: To provide an investigation of the roles of edges in PPI networks, we show that the edges connecting vertices that are less similar in topology are more significant in maintaining the global connectivity, indicating the weak-ties phenomenon in PPI networks. We further demonstrate that there is a negative relation between weak-tie strength and topological similarity. Using the bridges, a reliable virtual network is constructed, in which each maximal clique corresponds to the core of a complex. By this notion, the detection of protein complexes is transformed into a classic all-clique problem. A novel core-attachment based method is developed, which detects the cores and attachments, respectively. A comprehensive comparison between the existing algorithms and our algorithm has been made by comparing the predicted complexes against benchmark complexes. Conclusions: We proved that the weak-tie effect exists in the PPI network and demonstrated that density alone is insufficient to characterize the topological structure of protein complexes. Furthermore, the experimental results on the yeast PPI network show that the proposed method outperforms the state-of-the-art algorithms. The analysis of the modules detected by the present algorithm suggests that most of them have clear biological significance in the context of complexes, suggesting that the roles of edges are critical in discovering protein complexes. PMID:23046740
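
    The topological similarity underlying the weak-ties observation is commonly measured by neighborhood overlap; here is a minimal sketch using the Jaccard overlap of endpoint neighborhoods (the paper's exact measure and bridge criterion may differ):

```python
def edge_overlap(adj):
    """Yield ((u, v), overlap) for every edge, where overlap is the Jaccard
    similarity of the endpoints' neighborhoods (excluding u and v).
    Low values flag candidate weak ties (bridges)."""
    seen = set()
    for u, nb in adj.items():
        for v in nb:
            if (v, u) in seen:
                continue
            seen.add((u, v))
            shared = adj[u] & adj[v]
            union = (adj[u] | adj[v]) - {u, v}
            yield (u, v), (len(shared) / len(union) if union else 0.0)

# Triangle a-b-c with pendant node d: the edge (c, d) is the weak tie.
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
for edge, ov in edge_overlap(adj):
    print(edge, round(ov, 2))
```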

  3. Initialization and Restart in Stochastic Local Search: Computing a Most Probable Explanation in Bayesian Networks

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole J.; Wilkins, David C.; Roth, Dan

    2010-01-01

    For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. Two key research questions for stochastic local search algorithms are: Which algorithms are effective for initialization? When should the search process be restarted? In the present work we investigate these research questions in the context of approximate computation of most probable explanations (MPEs) in Bayesian networks (BNs). We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using a stochastic local search algorithm known as Stochastic Greedy Search. By carefully optimizing both initialization and restart, we reduce the MPE search time for application BNs by several orders of magnitude compared to using uniform at random initialization without restart. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm used for MPE computation in BNs.

  4. The Evaluation of Classroom Social Structure by Three-Way Multidimensional Scaling of Sociometric Data.

    ERIC Educational Resources Information Center

    Langeheine, Rolf

    1978-01-01

    A three-way multidimensional scaling model is presented as a method for identifying classroom cliques, by simultaneous analysis of three variables (for example, chooser/chosen/criteria). Two scaling models--Carroll and Chang's INDSCAL and Lingoes' PINDIS--are presented and applied to two sets of empirical data. (CP)

  5. Peer Bonds in Urban School Communities: An Exploratory Study

    ERIC Educational Resources Information Center

    Leach, Nicole

    2018-01-01

    The literature identifies three main types of peer associations: cliques, crowds, and dyadic friendships. When schools create learning communities, an additional type of peer association may emerge that is not based on interactions but instead is based on membership in a shared community. The aim of this study is to qualitatively explore the…

  6. Conflicts, Commitments, and Cliques in the University: Moral Seduction as a Threat to Trustee Independence

    ERIC Educational Resources Information Center

    Bastedo, Michael N.

    2009-01-01

    The ability of trustees to make independent judgments in the best interests of the university is a fundamental characteristic of an effective governing board. Trustee independence is increasingly threatened, however, as the university becomes more deeply embedded in government, industry, networks, and the professions. This topic is investigated…

  7. The Structure of Positive Interpersonal Relations in Small Groups.

    ERIC Educational Resources Information Center

    Davis, James A.; Leinhardt, Samuel

    The authors sought to test Homans' proposition that small groups inevitably generate a social structure which combines subgroups (cliques) and a ranking system. They present a graph-theoretical model of such a structure and prove that a necessary and sufficient condition for its existence is the absence of seven particular triad types. Expected…

  8. Clique-Based Neural Associative Memories with Local Coding and Precoding.

    PubMed

    Mofrad, Asieh Abolpour; Parker, Matthew G; Ferdosi, Zahra; Tadayon, Mohammad H

    2016-08-01

    Techniques from coding theory are able to improve the efficiency of neuro-inspired and neural associative memories by imposing structure and constraints on the network. In this letter, the approach is to embed coding techniques into neural associative memory in order to increase their performance in the presence of partial erasures. The motivation comes from recent work by Gripon, Berrou, and coauthors, which revisited Willshaw networks and presented a neural network with interacting neurons partitioned into clusters. The model introduced stores patterns as small-size cliques that can be retrieved in spite of partial error. We focus on improving the success of retrieval by applying two techniques: doing a local coding in each cluster and then applying a precoding step. We use a slightly different decoding scheme, which is appropriate for partial erasures and converges faster. Although the ideas of local coding and precoding are not new, the way we apply them is different. Simulations show an increase in the pattern retrieval capacity for both techniques. Moreover, we use self-dual additive codes over the field [Formula: see text], which have very interesting properties and a simple graph representation.

  9. cWINNOWER Algorithm for Finding Fuzzy DNA Motifs

    NASA Technical Reports Server (NTRS)

    Liang, Shoudan

    2003-01-01

    The cWINNOWER algorithm detects fuzzy motifs in DNA sequences rich in protein-binding signals. A signal is defined as any short nucleotide pattern having up to d mutations differing from a motif of length l. The algorithm finds such motifs if multiple mutated copies of the motif (i.e., the signals) are present in the DNA sequence in sufficient abundance. The cWINNOWER algorithm substantially improves the sensitivity of the winnower method of Pevzner and Sze by imposing a consensus constraint, enabling it to detect much weaker signals. We studied the minimum number of detectable motifs q_c as a function of sequence length N for random sequences. We found that q_c increases linearly with N for a fast version of the algorithm based on counting three-member sub-cliques. Imposing consensus constraints reduces q_c by a factor of three in this case, which makes the algorithm dramatically more sensitive. Our most sensitive algorithm, which counts four-member sub-cliques, needs a minimum of only 13 signals to detect motifs in a sequence of length N = 12000 for (l,d) = (15,4).

  10. Brain Computation Is Organized via Power-of-Two-Based Permutation Logic.

    PubMed

    Xie, Kun; Fox, Grace E; Liu, Jun; Lyu, Cheng; Lee, Jason C; Kuang, Hui; Jacobs, Stephanie; Li, Meng; Liu, Tianming; Song, Sen; Tsien, Joe Z

    2016-01-01

    There is considerable scientific interest in understanding how cell assemblies, the long-presumed computational motif, are organized so that the brain can generate intelligent cognition and flexible behavior. The Theory of Connectivity proposes that the origin of intelligence is rooted in a power-of-two-based permutation logic (N = 2^i − 1), producing a specific-to-general cell-assembly architecture capable of generating specific perceptions and memories, as well as generalized knowledge and flexible actions. We show that this power-of-two-based permutation logic is widely used in cortical and subcortical circuits across animal species and is conserved for the processing of a variety of cognitive modalities including appetitive, emotional and social information. However, modulatory neurons, such as dopaminergic (DA) neurons, use a simpler logic despite their distinct subtypes. Interestingly, this specific-to-general permutation logic remained largely intact even when NMDA receptors, the synaptic switch for learning and memory, were deleted throughout adulthood, suggesting that the logic is developmentally pre-configured. Moreover, this computational logic is implemented in the cortex by combining a random-connectivity strategy in superficial layers 2/3 with nonrandom organizations in deep layers 5/6. The randomness of the layer 2/3 cliques, which preferentially encode specific and low-combinatorial features and project inter-cortically, is ideal for maximizing cross-modality novel pattern extraction, pattern discrimination and pattern categorization using a sparse code, consequently explaining why it requires hippocampal offline consolidation. In contrast, the nonrandomness in layers 5/6, which consists of few specific cliques but a higher proportion of more general cliques projecting mostly to subcortical systems, is ideal for feedback control of motivation, emotion, consciousness and behaviors. These observations suggest that the brain's basic computational algorithm is indeed organized by a power-of-two-based permutation logic. This simple mathematical logic can account for brain computation across the entire evolutionary spectrum, ranging from the simplest neural networks to the most complex.
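
    The combinatorics in the abstract is straightforward to check: i distinct inputs yield N = 2^i − 1 nonempty subsets, one candidate cell assembly per subset (the input labels below are of course illustrative):

```python
from itertools import combinations

def assemblies(inputs):
    """Enumerate all nonempty combinations of inputs: the specific cliques
    (single inputs) through the most general clique (all inputs together)."""
    for r in range(1, len(inputs) + 1):
        yield from combinations(inputs, r)

ins = ["odor", "sound", "touch"]
groups = list(assemblies(ins))
print(len(groups) == 2 ** len(ins) - 1)  # True: 7 assemblies for i = 3
```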

  11. mvp - an open-source preprocessor for cleaning duplicate records and missing values in mass spectrometry data.

    PubMed

    Lee, Geunho; Lee, Hyun Beom; Jung, Byung Hwa; Nam, Hojung

    2017-07-01

    Mass spectrometry (MS) data are used to analyze biological phenomena based on chemical species. However, these data often contain unexpected duplicate records and missing values due to technical or biological factors. These 'dirty data' problems increase the difficulty of performing MS analyses because they lead to performance degradation when statistical or machine-learning tests are applied to the data. Thus, we have developed missing values preprocessor (mvp), an open-source software for preprocessing data that might include duplicate records and missing values. mvp uses the property of MS data in which identical chemical species present the same or similar values for key identifiers, such as the mass-to-charge ratio and intensity signal, and forms cliques via graph theory to process dirty data. We evaluated the validity of the mvp process via quantitative and qualitative analyses and compared the results from a statistical test that analyzed the original and mvp-applied data. This analysis showed that using mvp reduces problems associated with duplicate records and missing values. We also examined the effects of using unprocessed data in statistical tests and examined the improved statistical test results obtained with data preprocessed using mvp.

  12. Click or Clique? Using Educational Technology to Address Students' Anxieties about Peer Evaluation

    ERIC Educational Resources Information Center

    Walker, Ruth; Barwell, Graham

    2009-01-01

    Peer bias is recognised as a primary factor in negative student perceptions of peer assessment strategies. This study trialled the use of classroom response systems, widely known as clickers, in small seminar classes in order to actively engage students in their subject's assessment process while providing the anonymity that would lessen the…

  13. The Use of Natural Supports To Increase Integration in Supported Employment Settings for Youth in Transition. Final Report.

    ERIC Educational Resources Information Center

    Storey, Keith

    This final report briefly describes activities of a project which developed and evaluated specific natural support intervention procedures to increase the social integration of employees with severe disabilities using single-subject, clique analysis, and social validation methodologies. The project resulted in the publication of 6 journal articles…

  14. Eye-Rollers, Risk-Takers, and Turn Sharks: Target Students in a Professional Science Education Program

    ERIC Educational Resources Information Center

    Martin, Sonya N.; Milne, Catherine; Scantlebury, Kathryn

    2006-01-01

    In classrooms from kindergarten to graduate school, researchers have identified target students as students who monopolize material and human resources. Classroom structures that privilege the voice and actions of target students can cause divisive social dynamics that may generate cliques. This study focuses on the emergence of target students,…

  15. Translations on Red Flag No. 12, 5 December 1977

    DTIC Science & Technology

    1978-01-25

    counterrevolutionary nature, the "gang of four," who were but jackals in the same lair as the Lin Piao antiparty clique, panicked. The gang did...said that I was an ambitious man daring to think and act. But the class enemies accused me of "wanting to become a golden phoenix and an official

  16. Cliques and Cohesion in a Clinical Psychology Graduate Cohort: A Longitudinal Social Network Analysis

    ERIC Educational Resources Information Center

    Kunze, Kimberley Annette

    2013-01-01

    To date, no published research has utilized social network analysis (SNA) to analyze graduate cohorts in clinical psychology. The purpose of this research is to determine how issues of likability among students correlate with other measures, such as disclosure, health, spiritual maturity, help in projects, familiarity, and ease of providing…

  17. The Cool vs. The Uncool. Your Middle School Classroom

    ERIC Educational Resources Information Center

    Barnes, Peter

    2005-01-01

    Social cliques start around fourth or fifth grade and get worse through middle school and beyond. The cool vs. the uncool. Nerds, jocks, popular kids and outsiders--students are categorized by their peers and excluded by those different from them. Students who are not part of the "cool" crowd feel isolated and lonely and are often subjected to…

  18. Teenage Behavior: It's Not Biology, Psychology, or Family Values

    ERIC Educational Resources Information Center

    Milner, Murray, Jr.

    2006-01-01

    This article examines the explanations behind these questions: (1) Why do American teenagers behave the way they do?; (2) Why are many obsessed with the brands of clothes they wear, their lunchtime seatmates, the parties they are invited to, the latest popular music, the intrigues of school cliques, and who is hooking up with whom?; (3) Why do…

  19. Liaison Roles in the Communication Structure of a Formal Organization: A Pilot Study.

    ERIC Educational Resources Information Center

    Schwartz, Donald F.

    The purpose of this study was first to map the functional communication structure of a 142-member formal organization, then to analyze that structure to identify work groups (Cliques) and interlinking liaison role persons, and finally to describe certain differences between liaison persons and nonliaison members of the work groups as perceived by…

  20. Obstructions to the realization of distance graphs with large chromatic numbers on spheres of small radii

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kupavskii, A B; Raigorodskii, A M

    2013-10-31

    We investigate in detail some properties of distance graphs constructed on the integer lattice. Such graphs find wide applications in problems of combinatorial geometry, in particular, such graphs were employed to answer Borsuk's question in the negative and to obtain exponential estimates for the chromatic number of the space. This work is devoted to the study of the number of cliques and the chromatic number of such graphs under certain conditions. Constructions of sequences of distance graphs are given, in which the graphs have unit length edges and contain a large number of triangles that lie on a sphere of radius 1/√3 (which is the minimum possible). At the same time, the chromatic numbers of the graphs depend exponentially on their dimension. The results of this work strengthen and generalize some of the results obtained in a series of papers devoted to related issues. Bibliography: 29 titles.

  1. Method for identification of rigid domains and hinge residues in proteins based on exhaustive enumeration.

    PubMed

    Sim, Jaehyun; Sim, Jun; Park, Eunsung; Lee, Julian

    2015-06-01

    Many proteins undergo large-scale motions in which relatively rigid domains move against each other. The identification of rigid domains, as well as of the hinge residues important for their relative movements, is important for various applications including flexible docking simulations. In this work, we develop a method for protein rigid-domain identification based on an exhaustive enumeration of maximal rigid domains, the rigid domains not fully contained within other domains. The computation is performed by mapping the problem to that of finding maximal cliques in a graph. A minimal set of rigid domains is then selected, which covers most of the protein with minimal overlap. In contrast to the results of existing methods that partition a protein into non-overlapping domains using approximate algorithms, the rigid domains obtained from exact enumeration naturally contain overlapping regions, which correspond to the hinges of the inter-domain bending motion. The performance of the algorithm is demonstrated on several proteins. © 2015 Wiley Periodicals, Inc.

  2. Integrated simultaneous analysis of different biomedical data types with exact weighted bi-cluster editing.

    PubMed

    Sun, Peng; Guo, Jiong; Baumbach, Jan

    2012-07-17

    The explosion of biological data has largely influenced the focus of today’s biology research. Integrating and analysing large quantities of data to provide meaningful insights has become the main challenge for biologists and bioinformaticians. One major problem is the combined analysis of data of different types, such as phenotypes and genotypes. Such data are modelled as bi-partite graphs, where nodes correspond to the different data points, mutations and diseases for instance, and weighted edges relate to associations between them. Bi-clustering is a special case of clustering designed for partitioning two different types of data simultaneously. We present a bi-clustering approach that solves the NP-hard weighted bi-cluster editing problem by transforming a given bi-partite graph into a disjoint union of bi-cliques. Here we contribute an exact algorithm that is based on fixed-parameter tractability. We evaluated its performance on artificial graphs first. Afterwards, we applied our Java implementation to genome-wide association study (GWAS) data, aiming to discover new, previously unobserved geno-to-pheno associations. We believe that our results will serve as guidelines for further wet-lab investigations. Generally, our software can be applied to any kind of data that can be modelled as a bi-partite graph. To our knowledge, it is the fastest exact method for the weighted bi-cluster editing problem.
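
    As a toy illustration of the editing target, the sketch below checks whether a bipartite graph already is a disjoint union of bicliques (every connected component complete bipartite) and counts the edits a candidate clustering would cost. This is an unweighted stand-in with unit edit costs, not the paper's fixed-parameter algorithm.

```python
# Sketch: verify the bi-cluster editing target structure and score a
# candidate node partition by deletions + insertions (unit costs assumed).
import networkx as nx

def is_union_of_bicliques(G, left):
    for comp in nx.connected_components(G):
        L, R = comp & left, comp - left
        if G.subgraph(comp).number_of_edges() != len(L) * len(R):
            return False        # component is not a complete bipartite graph
    return True

def editing_cost(G, left, clusters):
    cost = 0
    for u, v in G.edges():
        if not any(u in c and v in c for c in clusters):
            cost += 1                          # inter-cluster edge: delete
    for c in clusters:
        L, R = c & left, c - left
        cost += len(L) * len(R) - G.subgraph(c).number_of_edges()  # insert
    return cost
```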

  3. Integrated simultaneous analysis of different biomedical data types with exact weighted bi-cluster editing.

    PubMed

    Sun, Peng; Guo, Jiong; Baumbach, Jan

    2012-06-01

    The explosion of biological data has largely influenced the focus of today's biology research. Integrating and analysing large quantities of data to provide meaningful insights has become the main challenge for biologists and bioinformaticians. One major problem is the combined analysis of data of different types, such as phenotypes and genotypes. Such data are modelled as bi-partite graphs, where nodes correspond to the different data points, mutations and diseases for instance, and weighted edges relate to associations between them. Bi-clustering is a special case of clustering designed for partitioning two different types of data simultaneously. We present a bi-clustering approach that solves the NP-hard weighted bi-cluster editing problem by transforming a given bi-partite graph into a disjoint union of bi-cliques. Here we contribute an exact algorithm that is based on fixed-parameter tractability. We evaluated its performance on artificial graphs first. Afterwards, we applied our Java implementation to genome-wide association study (GWAS) data, aiming to discover new, previously unobserved geno-to-pheno associations. We believe that our results will serve as guidelines for further wet-lab investigations. Generally, our software can be applied to any kind of data that can be modelled as a bi-partite graph. To our knowledge, it is the fastest exact method for the weighted bi-cluster editing problem.

  4. What Is a "Good" Social Network for a System?: The Flow of Know-How for Organizational Change. Working Paper #48

    ERIC Educational Resources Information Center

    Frank, Kenneth

    2014-01-01

    This study concerns how intra-organizational networks affect the implementation of policies and practices in organizations. In particular, we attend to the role of the informal subgroup or clique in cultivating and distributing locally adapted and integrated knowledge, or know-how. We develop two hypotheses based on the importance of…

  5. Team knowledge representation: a network perspective.

    PubMed

    Espinosa, J Alberto; Clark, Mark A

    2014-03-01

    We propose a network perspective of team knowledge that offers both conceptual and methodological advantages, expanding explanatory value through representation and measurement of component structure and content. Team knowledge has typically been conceptualized and measured with relatively simple aggregates, without fully accounting for differing knowledge configurations among team members. Teams with similar aggregate values of team knowledge may have very different team dynamics depending on how knowledge isolates, cliques, and densities are distributed across the team; which members are the most knowledgeable; who shares knowledge with whom; and how knowledge clusters are distributed. We illustrate our proposed network approach through a sample of 57 teams, including how to compute, analyze, and visually represent team knowledge. Team knowledge network structures (isolation, centrality) are associated with outcomes of, respectively, task coordination, strategy coordination, and the proportion of team knowledge cliques, all after controlling for shared team knowledge. Network analysis helps to represent, measure, and understand the relationship of team knowledge to outcomes of interest to team researchers, members, and managers. Our approach complements existing team knowledge measures. Researchers and managers can apply network concepts and measures to help understand where team knowledge is held within a team and how this relational structure may influence team coordination, cohesion, and performance.

  6. Coloring geographical threshold graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradonjic, Milan; Percus, Allon; Muller, Tobias

    We propose a coloring algorithm for sparse random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). Here, we analyze the GTG coloring algorithm together with the graph's clique number, showing formally that in spite of the differences in structure between GTG and RGG, the asymptotic behavior of the chromatic number is identical: χ = (ln n / ln ln n)(1 + o(1)). Finally, we consider the leading corrections to this expression, again using the coloring algorithm and clique number to provide bounds on the chromatic number. We show that the gap between the lower and upper bound is within C ln n / (ln ln n)², and specify the constant C.
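
    A sketch of the model and of the two bounds the abstract pairs, under illustrative parameters: edges join nodes when (w_u + w_v)/dist^alpha clears a threshold; the clique number lower-bounds the chromatic number and a greedy coloring upper-bounds it. The choices of alpha and theta here are assumptions, not the paper's.

```python
# Sketch of a geographical threshold graph plus clique/coloring bounds on
# the chromatic number chi(G).
import math, random
import networkx as nx

def gtg(n, alpha=2.0, theta=50.0, seed=0):
    rng = random.Random(seed)
    pos = {i: (rng.random(), rng.random()) for i in range(n)}
    w = {i: rng.expovariate(1.0) for i in range(n)}
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(pos[i], pos[j])
            if d > 0 and (w[i] + w[j]) / d**alpha >= theta:
                G.add_edge(i, j)
    return G

G = gtg(300)
omega = max(len(c) for c in nx.find_cliques(G))                 # chi >= omega
colors = 1 + max(nx.greedy_color(G, "largest_first").values())  # chi <= colors
print(omega, colors)
```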

  7. Korean Affairs Report

    DTIC Science & Technology

    1985-06-20

    ...and war rackets and respond to our peace proposal for holding North-South parliamentary talks and announcing a joint declaration of nonaggression... counter the new war provocation maneuvers of the U.S. imperialists and the puppet clique, and more powerfully implement the three revolutions—ideological

  8. Social Circles Detection from Ego Network and Profile Information

    DTIC Science & Technology

    2014-12-19

    The algorithm used to infer k-clique communities is exponential, which makes this technique unfeasible when treating egonets with a large number of users... atic when considering RBMs. This inconvenience was resolved by implementing a sparsity treatment with the RBM algorithm. (ii) The ground truth was

  9. Life in an Unjust Community: A Hollywood View of High School Moral Life

    ERIC Educational Resources Information Center

    Resnick, David

    2008-01-01

    This article analyses the film "Mean girls" (2004) as a window on popular notions of the moral life of American high schools, which straddles Kohlberg's Stage 2 and 3. The film presents loyalty to peer group cliques as a key value, even as it offers an individualist, relativist critique of that loyalty. Gossip is the main transgression in this…

  10. A higher order conditional random field model for simultaneous classification of land cover and land use

    NASA Astrophysics Data System (ADS)

    Albert, Lena; Rottensteiner, Franz; Heipke, Christian

    2017-08-01

    We propose a new approach for the simultaneous classification of land cover and land use considering spatial as well as semantic context. We apply a Conditional Random Field (CRF) consisting of a land cover and a land use layer. In the land cover layer of the CRF, the nodes represent super-pixels; in the land use layer, the nodes correspond to objects from a geospatial database. Intra-layer edges of the CRF model spatial dependencies between neighbouring image sites. All spatially overlapping sites in both layers are connected by inter-layer edges, which leads to higher order cliques modelling the semantic relation between all land cover and land use sites in the clique. A generic formulation of the higher order potential is proposed. In order to enable efficient inference in the two-layer higher order CRF, we propose an iterative inference procedure in which the two classification tasks mutually influence each other. We integrate contextual relations between land cover and land use in the classification process by using contextual features describing the complex dependencies of all nodes in a higher order clique. These features are incorporated in a discriminative classifier, which approximates the higher order potentials during the inference procedure. The approach is designed for input data based on aerial images. Experiments are carried out on two test sites to evaluate the performance of the proposed method. The experiments show that the classification results are improved compared to the results of a non-contextual classifier. For land cover classification, the result is much more homogeneous and the delineation of land cover segments is improved. For the land use classification, an improvement is mainly achieved for land use objects showing non-typical characteristics or similarities to other land use classes. Furthermore, we have shown that the size of the super-pixels has an influence on the level of detail of the classification result, but also on the degree of smoothing induced by the segmentation method, which is especially beneficial for land cover classes covering large, homogeneous areas.

  11. Korean Affairs Report, Number 320.

    DTIC Science & Technology

    1983-11-03

    search of the Academy of Social Sciences, speaks: [Begin recording] Now, the traitorous puppet clique of Chon Tu-hwan, who was greeted with a bomb...the desperate position of a trouble-maker becoming more delinquent after being forsaken by family and neighbors. For this reason, it is a childish ...business of the current house sitting, unless the ruling party "changes its mind." There appears to be consensus among political observers that

  12. The Private Lives of Minerals: Social Network Analysis Applied to Mineralogy and Petrology

    NASA Astrophysics Data System (ADS)

    Hazen, R. M.; Morrison, S. M.; Fox, P. A.; Golden, J. J.; Downs, R. T.; Eleish, A.; Prabhu, A.; Li, C.; Liu, C.

    2016-12-01

    Comprehensive databases of mineral species (rruff.info/ima) and their geographic localities and co-existing mineral assemblages (mindat.org) reveal patterns of mineral association and distribution that mimic social networks, as commonly applied to such varied topics as social media interactions, the spread of disease, terrorism networks, and research collaborations. Applying social network analysis (SNA) to common assemblages of rock-forming igneous and regional metamorphic mineral species, we find patterns of cohesion, segregation, density, and cliques that are similar to those of human social networks. These patterns highlight classic trends in lithologic evolution and are illustrated with sociograms, in which mineral species are the "nodes" and co-existing species form "links." Filters based on chemistry, age, structural group, and other parameters highlight visually both familiar and new aspects of mineralogy and petrology. We quantify sociograms with SNA metrics, including connectivity (based on the frequency of co-occurrence of mineral pairs), homophily (the extent to which co-existing mineral species share compositional and other characteristics), network closure (based on the degree of network interconnectivity), and segmentation (as revealed by isolated "cliques" of mineral species). Exploitation of large and growing mineral data resources with SNA offers promising avenues for discovering previously hidden trends in mineral diversity-distribution systematics, as well as providing new pedagogical approaches to teaching mineralogy and petrology.

  13. The discovery of structural form

    PubMed Central

    Kemp, Charles; Tenenbaum, Joshua B.

    2008-01-01

    Algorithms for finding structure in data have become increasingly important both as tools for scientific data analysis and as models of human learning, yet they suffer from a critical limitation. Scientists discover qualitatively new forms of structure in observed data: For instance, Linnaeus recognized the hierarchical organization of biological species, and Mendeleev recognized the periodic structure of the chemical elements. Analogous insights play a pivotal role in cognitive development: Children discover that object category labels can be organized into hierarchies, friendship networks are organized into cliques, and comparative relations (e.g., “bigger than” or “better than”) respect a transitive order. Standard algorithms, however, can only learn structures of a single form that must be specified in advance: For instance, algorithms for hierarchical clustering create tree structures, whereas algorithms for dimensionality-reduction create low-dimensional spaces. Here, we present a computational model that learns structures of many different forms and that discovers which form is best for a given dataset. The model makes probabilistic inferences over a space of graph grammars representing trees, linear orders, multidimensional spaces, rings, dominance hierarchies, cliques, and other forms and successfully discovers the underlying structure of a variety of physical, biological, and social domains. Our approach brings structure learning methods closer to human abilities and may lead to a deeper computational understanding of cognitive development. PMID:18669663

  14. JPRS Report, East Asia, Korea: Kulloja, No. 2, February 1988.

    DTIC Science & Technology

    1989-03-02

    war to make cadres and party members cherish their loyalty to the party and the leader as a firm faith, by intensifying and developing the work of...South Korean puppet clique are ceaselessly continuing their new war provocation maneuvers. The revolutionary duty confronting us and the prevailing...revolutionary cause led by the leader. During the fatherland liberation war , our people who, having inherited that spirit, acted as human bombs to defend

  15. Robust Inference of Genetic Exchange Communities from Microbial Genomes Using TF-IDF.

    PubMed

    Cong, Yingnan; Chan, Yao-Ban; Phillips, Charles A; Langston, Michael A; Ragan, Mark A

    2017-01-01

    Bacteria and archaea can exchange genetic material across lineages through processes of lateral genetic transfer (LGT). Collectively, these exchange relationships can be modeled as a network and analyzed using concepts from graph theory. In particular, densely connected regions within an LGT network have been defined as genetic exchange communities (GECs). However, it has been problematic to construct networks in which edges solely represent LGT. Here we apply term frequency-inverse document frequency (TF-IDF), an alignment-free method originating from document analysis, to infer regions of lateral origin in bacterial genomes. We examine four empirical datasets of different size (number of genomes) and phyletic breadth, varying a key parameter (word length k) within bounds established in previous work. We map the inferred lateral regions to genes in recipient genomes, and construct networks in which the nodes are groups of genomes, and the edges natively represent LGT. We then extract maximum and maximal cliques (i.e., GECs) from these graphs, and identify nodes that belong to GECs across a wide range of k. Most surviving lateral transfer has happened within these GECs. Using Gene Ontology enrichment tests we demonstrate that biological processes associated with metabolism, regulation and transport are often over-represented among the genes affected by LGT within these communities. These enrichments are largely robust to change of k.
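
    A minimal sketch of the TF-IDF step: treat each genome as a document of overlapping k-mers, so k-mers that are frequent in one genome but rare across the set score highly and flag candidate lateral regions. The sequences here are toy stand-ins, not the paper's datasets.

```python
# Sketch: TF-IDF over k-mer "words" per genome (toy data, k = 4).
from sklearn.feature_extraction.text import TfidfVectorizer

def kmer_doc(seq, k):
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

genomes = {"g1": "ACGTACGTGGCC", "g2": "ACGTACGTACGT", "g3": "TTGGCCAATTGG"}
docs = [kmer_doc(s, 4) for s in genomes.values()]
vec = TfidfVectorizer(analyzer="word", token_pattern=r"\S+", lowercase=False)
X = vec.fit_transform(docs)                 # rows: genomes, columns: k-mers
terms = vec.get_feature_names_out()
row = X.toarray()[0]                        # scores for genome g1
print(sorted(zip(row, terms), reverse=True)[:3])   # most distinctive k-mers
```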

  16. Social Network Analysis in Frontier Capital Markets

    DTIC Science & Technology

    2012-06-01

    developed by Watts and Strogatz measures the extent to which clusters or cliques exist in a network [WS98]. The clustering coefficient of each individual... Clustering Coefficient (Watts-Strogatz) 0.8039 0.8222 0.7227; Total Degree Centralization 0.0618 0.0940 0.0612; Betweenness Centralization 0.0909 0.1256 0.0646; Closeness... Fragmentation 0.6099 0.5304 0.5308; Clustering Coefficient (Watts-Strogatz) 0.5281 0.6607 0.6360; Total Degree Centralization 0.0153 0.0360 0.0171

  17. Beyond the Golden Rule: A Parent's Guide to Preventing and Responding to Prejudice

    ERIC Educational Resources Information Center

    Williams, Dana

    2013-01-01

    Whether one is a parent of a 3-year-old who is curious about why a friend's skin is brown, the parent of a 9-year-old who has been called a slur because of his religion, or the parent of a 15-year-old who snubs those outside of her social clique at school, this book is designed to help teach children to honor the differences in themselves and in…

  18. Systematic review of social network analysis in adolescent cigarette smoking behavior.

    PubMed

    Seo, Dong-Chul; Huang, Yan

    2012-01-01

    Social networks are important in adolescent smoking behavior. Previous research indicates that peer context is a major causal factor of adolescent smoking behavior. To date, however, little is known about the influence of peer group structure on adolescent smoking behavior. Studies that examined adolescent social networks with regard to their cigarette smoking behavior were identified through online and manual literature searches. Ten social network analysis studies involving a total of 28,263 adolescents were included in the final review. Of the 10 reviewed studies, 6 identify clique members, liaisons, and isolates as contributing factors to adolescent cigarette smoking. Significantly higher rates of smoking are noted among isolates than clique members or liaisons in terms of peer network structure. Eight of the reviewed studies indicate that peer selection or influence precedes adolescents' smoking behavior and intent to smoke. Such peer selection or influence accounts for a large portion of similarities among smoking adolescents. Adolescents who are identified as isolates are more likely to smoke and engage in risk-taking behaviors than others in the peer network structure. Given that the vast majority of current adult smokers started their smoking habits during adolescence, adolescent smoking prevention efforts will likely benefit from incorporating social network analytic approaches and focusing the efforts on isolates and other vulnerable adolescents from a peer selection and influence perspective. © 2011, American School Health Association.

  19. Translations on North Korea No. 601

    DTIC Science & Technology

    1978-07-13

    Boosting War Fever"] [Text] On 21 June, with the anniversary of the outbreak of the 24 June war just a few days away, the South Korean puppet clique...conducted the puppet farce of shooting matches between ministerial posts of the puppet adminis- tration, thus boosting war fever. At the war racket site...the puppet "prime minister" led the way in openly inciting war , raving about the "threat of southward aggression," the "nation’s stability" and the

  20. Developing Large-Scale Bayesian Networks by Composition: Fault Diagnosis of Electrical Power Systems in Aircraft and Spacecraft

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole Jakob; Poll, Scott; Kurtoglu, Tolga

    2009-01-01

    In this paper, we investigate the use of Bayesian networks to construct large-scale diagnostic systems. In particular, we consider the development of large-scale Bayesian networks by composition. This compositional approach reflects how (often redundant) subsystems are architected to form systems such as electrical power systems. We develop high-level specifications, Bayesian networks, clique trees, and arithmetic circuits representing 24 different electrical power systems. The largest among these 24 Bayesian networks contains over 1,000 random variables. Another BN represents the real-world electrical power system ADAPT, which is representative of electrical power systems deployed in aerospace vehicles. In addition to demonstrating the scalability of the compositional approach, we briefly report on experimental results from the diagnostic competition DXC, where the ProADAPT team, using techniques discussed here, obtained the highest scores in both Tier 1 (among 9 international competitors) and Tier 2 (among 6 international competitors) of the industrial track. While we consider diagnosis of power systems specifically, we believe this work is relevant to other system health management problems, in particular in dependable systems such as aircraft and spacecraft. (See CASI ID 20100021910 for supplemental data disk.)
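
    A minimal sketch of the clique-tree compilation the abstract mentions, on a hypothetical three-component circuit rather than ADAPT: moralize the directed network, triangulate, enumerate the cliques of the chordal graph, and join them by a maximum-weight spanning tree on separator sizes.

```python
# Sketch: Bayesian network -> moral graph -> chordal completion -> cliques
# -> clique (junction) tree. The toy BN is an assumption, not ADAPT.
import networkx as nx

bn = nx.DiGraph([("battery", "relay"), ("breaker", "relay"), ("relay", "load")])
moral = nx.moral_graph(bn)                        # marry parents, drop arrows
chordal, _ = nx.complete_to_chordal_graph(moral)  # triangulate
cliques = list(nx.chordal_graph_cliques(chordal))
jt = nx.Graph()
jt.add_nodes_from(range(len(cliques)))
for i in range(len(cliques)):
    for j in range(i + 1, len(cliques)):
        sep = len(cliques[i] & cliques[j])        # separator size
        if sep:
            jt.add_edge(i, j, weight=sep)
tree = nx.maximum_spanning_tree(jt)               # junction-tree property
print([set(c) for c in cliques], sorted(tree.edges()))
```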

  1. Network collaboration of organisations for homeless individuals in the Montreal region

    PubMed Central

    Fleury, Marie-Josée; Grenier, Guy; Lesage, Alain; Ma, Nan; Ngui, André Ngamini

    2014-01-01

    Introduction We know little about the intensity and determinants of interorganisational collaboration within the homeless network. This study describes the characteristics and relationships (along with the variables predicting their degree of interorganisational collaboration) of 68 organisations of such a network in Montreal (Quebec, Canada). Theory and methods Data were collected primarily through a self-administered questionnaire. Descriptive analyses were conducted followed by social network and multivariate analyses. Results The Montreal homeless network has a high density (50.5%) and a decentralised structure and maintains a mostly informal collaboration with the public and cross-sectorial sectors. The network density showed more frequent contacts among four types of organisations which could point to the existence of cliques. Four variables predicted interorganisational collaboration: organisation type, number of services offered, volume of referrals and satisfaction with the relationships with public organisations. Conclusions and discussion The Montreal homeless network seems adequate to address non-complex homelessness problems. Considering, however, that most homeless individuals present chronic and complex profiles, it appears necessary to have a more formal and better integrated network of homeless organisations, particularly in the health and social service sectors, in order to improve services. PMID:24520216

  2. Biclustering Protein Complex Interactions with a Biclique Finding Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Chris; Zhang, Anne Ya; Holbrook, Stephen

    2006-12-01

    Biclustering has many applications in text mining, web clickstream mining, and bioinformatics. When data entries are binary, the tightest biclusters become bicliques. We propose a flexible and highly efficient algorithm to compute bicliques. We first generalize the Motzkin-Straus formalism for computing the maximal clique from the L1 constraint to the Lp constraint, which enables us to provide a generalized Motzkin-Straus formalism for computing maximal-edge bicliques. By adjusting parameters, the algorithm can favor biclusters with more rows and fewer columns, or vice versa, thus increasing the flexibility of the targeted biclusters. We then propose an algorithm to solve the generalized Motzkin-Straus optimization problem. The algorithm is provably convergent and has a computational complexity of O(|E|), where |E| is the number of edges. It relies on a matrix-vector multiplication and runs efficiently on most current computer architectures. Using this algorithm, we bicluster the yeast protein complex interaction network. We find that biclustering protein complexes at the protein level does not clearly reflect the functional linkage among protein complexes in many cases, while biclustering at the protein domain level can reveal many underlying linkages. We show several new biologically significant results.
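
    For orientation, the classical (L1) Motzkin-Straus program that the paper generalizes: maximizing x'Ax over the probability simplex attains 1 - 1/omega(G), and replicator dynamics finds a local maximizer whose support indicates a clique. A heuristic sketch, not the paper's Lp biclique algorithm.

```python
# Sketch: replicator dynamics for the L1 Motzkin-Straus program
# max x'Ax s.t. x on the simplex; the optimum value is 1 - 1/omega(G).
import numpy as np

def motzkin_straus(A, iters=2000):
    n = len(A)
    x = np.full(n, 1.0 / n)
    for _ in range(iters):
        Ax = A @ x
        q = x @ Ax
        if q == 0:
            break
        x = x * Ax / q                           # update stays on the simplex
    support = np.where(x > 1.0 / (2 * n))[0]     # heuristic clique read-out
    return support, 1.0 / (1.0 - x @ A @ x)      # estimate of omega(G)

# 4-clique {0,1,2,3} plus a pendant vertex 4 attached to vertex 3:
A = np.zeros((5, 5))
for i in range(4):
    for j in range(4):
        if i != j:
            A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0
print(motzkin_straus(A))          # expect support {0,1,2,3}, omega close to 4
```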

  3. Genome-wide screen identifies a novel prognostic signature for breast cancer survival

    DOE PAGES

    Mao, Xuan Y.; Lee, Matthew J.; Zhu, Jeffrey; ...

    2017-01-21

    Large genomic datasets in combination with clinical data can be used as an unbiased tool to identify genes important in patient survival and discover potential therapeutic targets. We used a genome-wide screen to identify 587 genes significantly and robustly deregulated across four independent breast cancer (BC) datasets compared to normal breast tissue. Gene expression of 381 genes was significantly associated with relapse-free survival (RFS) in BC patients. We used a gene co-expression network approach to visualize the genetic architecture in normal breast and BCs. In normal breast tissue, co-expression cliques were identified enriched for cell cycle, gene transcription, cell adhesion, cytoskeletal organization and metabolism. In contrast, in BC, only two major co-expression cliques were identified enriched for cell cycle-related processes or blood vessel development, cell adhesion and mammary gland development processes. Interestingly, gene expression levels of 7 genes were found to be negatively correlated with many cell cycle related genes, highlighting these genes as potential tumor suppressors and novel therapeutic targets. A forward-conditional Cox regression analysis was used to identify a 12-gene signature associated with RFS. A prognostic scoring system was created based on the 12-gene signature. This scoring system robustly predicted BC patient RFS in 60 sampling test sets and was further validated in TCGA and METABRIC BC data. Our integrated study identified a 12-gene prognostic signature that could guide adjuvant therapy for BC patients and includes novel potential molecular targets for therapy.

  4. Genome-wide screen identifies a novel prognostic signature for breast cancer survival

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mao, Xuan Y.; Lee, Matthew J.; Zhu, Jeffrey

    Large genomic datasets in combination with clinical data can be used as an unbiased tool to identify genes important in patient survival and discover potential therapeutic targets. We used a genome-wide screen to identify 587 genes significantly and robustly deregulated across four independent breast cancer (BC) datasets compared to normal breast tissue. Gene expression of 381 genes was significantly associated with relapse-free survival (RFS) in BC patients. We used a gene co-expression network approach to visualize the genetic architecture in normal breast and BCs. In normal breast tissue, co-expression cliques were identified enriched for cell cycle, gene transcription, cell adhesion, cytoskeletal organization and metabolism. In contrast, in BC, only two major co-expression cliques were identified enriched for cell cycle-related processes or blood vessel development, cell adhesion and mammary gland development processes. Interestingly, gene expression levels of 7 genes were found to be negatively correlated with many cell cycle related genes, highlighting these genes as potential tumor suppressors and novel therapeutic targets. A forward-conditional Cox regression analysis was used to identify a 12-gene signature associated with RFS. A prognostic scoring system was created based on the 12-gene signature. This scoring system robustly predicted BC patient RFS in 60 sampling test sets and was further validated in TCGA and METABRIC BC data. Our integrated study identified a 12-gene prognostic signature that could guide adjuvant therapy for BC patients and includes novel potential molecular targets for therapy.

  5. Markov models for fMRI correlation structure: Is brain functional connectivity small world, or decomposable into networks?

    PubMed

    Varoquaux, G; Gramfort, A; Poline, J B; Thirion, B

    2012-01-01

    Correlations in the signal observed via functional Magnetic Resonance Imaging (fMRI) are expected to reveal the interactions in the underlying neural populations through the hemodynamic response. In particular, they highlight distributed sets of mutually correlated regions that correspond to brain networks related to different cognitive functions. Yet graph-theoretical studies of neural connections give a different picture: that of a highly integrated system with small-world properties, i.e. local clustering combined with short pathways across the complete structure. We examine the conditional independence properties of the fMRI signal, i.e. its Markov structure, to find realistic assumptions on the connectivity structure that are required to explain the observed functional connectivity. In particular, we seek a decomposition of the Markov structure into segregated functional networks using decomposable graphs: a set of strongly-connected and partially overlapping cliques. We introduce a new method to efficiently extract such cliques on a large, strongly-connected graph. We compare methods learning different graph structures from functional connectivity by testing the goodness of fit of the models they learn on new data. We find that summarizing the structure as strongly-connected networks can give a good description only for very large and overlapping networks. These results highlight that Markov models are good tools to identify the structure of brain connectivity from fMRI signals, but for this purpose they must reflect the small-world properties of the underlying neural systems. Copyright © 2012 Elsevier Ltd. All rights reserved.
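
    A minimal sketch of the pipeline's flavor: estimate a sparse Markov (conditional-independence) structure with the graphical lasso, then test whether the resulting graph is decomposable, i.e. chordal, in which case its maximal cliques are directly available. The data and penalty below are stand-ins for fMRI time series, not the paper's method for extracting cliques.

```python
# Sketch: sparse precision matrix -> Markov graph -> decomposability check.
import numpy as np
import networkx as nx
from sklearn.covariance import GraphicalLasso

X = np.random.default_rng(0).standard_normal((200, 10))  # stand-in signals
P = GraphicalLasso(alpha=0.4).fit(X).precision_
G = nx.Graph((i, j) for i in range(10) for j in range(i + 1, 10)
             if abs(P[i, j]) > 1e-6)      # nonzero partial correlations
G.add_nodes_from(range(10))
if nx.is_chordal(G):                      # decomposable Markov model
    print([set(c) for c in nx.chordal_graph_cliques(G)])
else:
    print("not decomposable; a triangulation step would be needed")
```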

  6. Cooperation and Contagion in Web-Based, Networked Public Goods Experiments

    PubMed Central

    Suri, Siddharth; Watts, Duncan J.

    2011-01-01

    A longstanding idea in the literature on human cooperation is that cooperation should be reinforced when conditional cooperators are more likely to interact. In the context of social networks, this idea implies that cooperation should fare better in highly clustered networks such as cliques than in networks with low clustering such as random networks. To test this hypothesis, we conducted a series of web-based experiments, in which 24 individuals played a local public goods game arranged on one of five network topologies that varied between disconnected cliques and a random regular graph. In contrast with previous theoretical work, we found that network topology had no significant effect on average contributions. This result implies either that individuals are not conditional cooperators, or else that cooperation does not benefit from positive reinforcement between connected neighbors. We then tested both of these possibilities in two subsequent series of experiments in which artificial seed players were introduced, making either full or zero contributions. First, we found that although players did generally behave like conditional cooperators, they were as likely to decrease their contributions in response to low contributing neighbors as they were to increase their contributions in response to high contributing neighbors. Second, we found that positive effects of cooperation were contagious only to direct neighbors in the network. In total we report on 113 human subjects experiments, highlighting the speed, flexibility, and cost-effectiveness of web-based experiments over those conducted in physical labs. PMID:21412431
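
    The two extreme topologies of the design are easy to reproduce; a sketch under assumed sizes (four cliques of six players versus a degree-5 random regular graph on 24 nodes) shows the clustering contrast the hypothesis turns on.

```python
# Sketch: disconnected cliques vs. a random regular graph for 24 players.
import networkx as nx

cliques = nx.disjoint_union_all([nx.complete_graph(6) for _ in range(4)])
regular = nx.random_regular_graph(d=5, n=24, seed=1)
print(nx.average_clustering(cliques))   # 1.0: every neighbourhood is a clique
print(nx.average_clustering(regular))   # near 0 in a sparse random graph
```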

  7. Inference of time-delayed gene regulatory networks based on dynamic Bayesian network hybrid learning method

    PubMed Central

    Yu, Bin; Xu, Jia-Meng; Li, Shan; Chen, Cheng; Chen, Rui-Xin; Wang, Lei; Zhang, Yan; Wang, Ming-Hui

    2017-01-01

    Gene regulatory networks (GRNs) research reveals complex life phenomena from the perspective of gene interaction, which is an important research field in systems biology. Traditional Bayesian networks have a high computational complexity, and the network structure scoring model has a single feature. Information-based approaches cannot identify the direction of regulation. In order to make up for the shortcomings of the above methods, this paper presents a novel hybrid learning method (DBNCS) based on dynamic Bayesian network (DBN) to construct the multiple time-delayed GRNs for the first time, combining the comprehensive score (CS) with the DBN model. DBNCS algorithm first uses CMI2NI (conditional mutual inclusive information-based network inference) algorithm for network structure profiles learning, namely the construction of search space. Then the redundant regulations are removed by using the recursive optimization algorithm (RO), thereby reducing the false positive rate. Secondly, the network structure profiles are decomposed into a set of cliques without loss, which can significantly reduce the computational complexity. Finally, DBN model is used to identify the direction of gene regulation within the cliques and search for the optimal network structure. The performance of DBNCS algorithm is evaluated by the benchmark GRN datasets from DREAM challenge as well as the SOS DNA repair network in Escherichia coli, and compared with other state-of-the-art methods. The experimental results show the rationality of the algorithm design and the outstanding performance of the GRNs. PMID:29113310

  8. Cooperation and contagion in web-based, networked public goods experiments.

    PubMed

    Suri, Siddharth; Watts, Duncan J

    2011-03-11

    A longstanding idea in the literature on human cooperation is that cooperation should be reinforced when conditional cooperators are more likely to interact. In the context of social networks, this idea implies that cooperation should fare better in highly clustered networks such as cliques than in networks with low clustering such as random networks. To test this hypothesis, we conducted a series of web-based experiments, in which 24 individuals played a local public goods game arranged on one of five network topologies that varied between disconnected cliques and a random regular graph. In contrast with previous theoretical work, we found that network topology had no significant effect on average contributions. This result implies either that individuals are not conditional cooperators, or else that cooperation does not benefit from positive reinforcement between connected neighbors. We then tested both of these possibilities in two subsequent series of experiments in which artificial seed players were introduced, making either full or zero contributions. First, we found that although players did generally behave like conditional cooperators, they were as likely to decrease their contributions in response to low contributing neighbors as they were to increase their contributions in response to high contributing neighbors. Second, we found that positive effects of cooperation were contagious only to direct neighbors in the network. In total we report on 113 human subjects experiments, highlighting the speed, flexibility, and cost-effectiveness of web-based experiments over those conducted in physical labs.

  9. Inference of time-delayed gene regulatory networks based on dynamic Bayesian network hybrid learning method.

    PubMed

    Yu, Bin; Xu, Jia-Meng; Li, Shan; Chen, Cheng; Chen, Rui-Xin; Wang, Lei; Zhang, Yan; Wang, Ming-Hui

    2017-10-06

    Gene regulatory networks (GRNs) research reveals complex life phenomena from the perspective of gene interaction, which is an important research field in systems biology. Traditional Bayesian networks have a high computational complexity, and the network structure scoring model has a single feature. Information-based approaches cannot identify the direction of regulation. In order to make up for the shortcomings of the above methods, this paper presents a novel hybrid learning method (DBNCS) based on dynamic Bayesian network (DBN) to construct the multiple time-delayed GRNs for the first time, combining the comprehensive score (CS) with the DBN model. DBNCS algorithm first uses CMI2NI (conditional mutual inclusive information-based network inference) algorithm for network structure profiles learning, namely the construction of search space. Then the redundant regulations are removed by using the recursive optimization algorithm (RO), thereby reducing the false positive rate. Secondly, the network structure profiles are decomposed into a set of cliques without loss, which can significantly reduce the computational complexity. Finally, DBN model is used to identify the direction of gene regulation within the cliques and search for the optimal network structure. The performance of DBNCS algorithm is evaluated by the benchmark GRN datasets from DREAM challenge as well as the SOS DNA repair network in Escherichia coli, and compared with other state-of-the-art methods. The experimental results show the rationality of the algorithm design and the outstanding performance of the GRNs.

  10. Robust Inference of Genetic Exchange Communities from Microbial Genomes Using TF-IDF

    PubMed Central

    Cong, Yingnan; Chan, Yao-ban; Phillips, Charles A.; Langston, Michael A.; Ragan, Mark A.

    2017-01-01

    Bacteria and archaea can exchange genetic material across lineages through processes of lateral genetic transfer (LGT). Collectively, these exchange relationships can be modeled as a network and analyzed using concepts from graph theory. In particular, densely connected regions within an LGT network have been defined as genetic exchange communities (GECs). However, it has been problematic to construct networks in which edges solely represent LGT. Here we apply term frequency-inverse document frequency (TF-IDF), an alignment-free method originating from document analysis, to infer regions of lateral origin in bacterial genomes. We examine four empirical datasets of different size (number of genomes) and phyletic breadth, varying a key parameter (word length k) within bounds established in previous work. We map the inferred lateral regions to genes in recipient genomes, and construct networks in which the nodes are groups of genomes, and the edges natively represent LGT. We then extract maximum and maximal cliques (i.e., GECs) from these graphs, and identify nodes that belong to GECs across a wide range of k. Most surviving lateral transfer has happened within these GECs. Using Gene Ontology enrichment tests we demonstrate that biological processes associated with metabolism, regulation and transport are often over-represented among the genes affected by LGT within these communities. These enrichments are largely robust to change of k. PMID:28154557

  11. Seldon v.3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Nina; Ko, Teresa; Shneider, Max

    Seldon is an agent-based social simulation framework that uniquely integrates concepts from a variety of different research areas including psychology, social science, and agent-based modeling. Development has been taking place for a number of years, previously focusing on gang and terrorist recruitment. The toolkit consists of simple agents (individuals) and abstract agents (groups of individuals representing social/institutional concepts) that interact according to exchangeable rule sets (i.e. linear attraction, linear reinforcement). Each agent has a set of customizable attributes that get modified during the interactions. Interactions create relationships between agents, and each agent has a maximum amount of relationship energy that it can expend. As relationships evolve, they form multiple levels of social networks (i.e. acquaintances, friends, cliques) that in turn drive future interactions. Agents can also interact randomly if they are not connected through a network, mimicking the chance interactions that real people have in everyday life. We are currently integrating Seldon with the cognitive framework (also developed at Sandia). Each individual agent has a lightweight cognitive model that is created automatically from textual sources. Cognitive information is exchanged during interactions, and can also be injected into a running simulation. The entire framework has been parallelized to allow for larger simulations in an HPC environment. We have also added more detail to the agents themselves (a "Big Five" personality model) and their interactions (an enhanced relationship model) for a more realistic representation.

  12. Developing Large-Scale Bayesian Networks by Composition: Fault Diagnosis of Electrical Power Systems in Aircraft and Spacecraft

    NASA Technical Reports Server (NTRS)

    Mengshoel, Ole Jakob; Poll, Scott; Kurtoglu, Tolga

    2009-01-01

    This CD contains files that support the talk (see CASI ID 20100021404). There are 24 models that relate to the ADAPT system and 1 Excel worksheet. In the paper an investigation into the use of Bayesian networks to construct large-scale diagnostic systems is described. The high-level specifications, Bayesian networks, clique trees, and arithmetic circuits representing 24 different electrical power systems are described in the talk. The data in the CD are the models of the 24 different power systems.

  13. Network analysis of the COSMOS galaxy field

    NASA Astrophysics Data System (ADS)

    de Regt, R.; Apunevych, S.; von Ferber, C.; Holovatch, Yu; Novosyadlyj, B.

    2018-07-01

    The galaxy data provided by the COSMOS survey for a 1°×1° field of sky are analysed by methods of complex networks. Three galaxy samples (slices) with redshifts ranging within intervals 0.88÷0.91, 0.91÷0.94, and 0.94÷0.97 are studied as two-dimensional projections for the spatial distributions of galaxies. We construct networks and calculate network measures for each sample, in order to analyse the network similarity of different samples, distinguish various topological environments, and find associations between galaxy properties (colour index and stellar mass) and their topological environments. Results indicate a high level of similarity between geometry and topology for different galaxy samples and no clear evidence of evolutionary trends in network measures. The distribution of local clustering coefficient C manifests three modes which allow for discrimination between stand-alone singlets and dumbbells (0 ≤ C ≤ 0.1), intermediately packed (0.1 < C < 0.9) and clique (0.9 ≤ C ≤ 1) like galaxies. Analysing astrophysical properties of galaxies (colour index and stellar masses), we show that distributions are similar in all slices; however, weak evolutionary trends can also be seen across redshift slices. To specify different topological environments, we have extracted selections of galaxies from each sample according to different modes of C distribution. We have found statistically significant associations between evolutionary parameters of galaxies and selections of C: the distribution of stellar mass for galaxies with interim C differs from the corresponding distributions for stand-alone and clique galaxies, and this difference holds for all redshift slices. The colour index realizes somewhat different behaviour.
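
    A sketch of the trimodal split of the local clustering coefficient, on a toy geometric network standing in for a projected galaxy slice (the node count and linking radius are assumptions):

```python
# Sketch: classify nodes by the three modes of local clustering C.
import networkx as nx

def c_modes(G):
    C = nx.clustering(G)
    singlets = [v for v, c in C.items() if c <= 0.1]      # singlets/dumbbells
    interim = [v for v, c in C.items() if 0.1 < c < 0.9]  # intermediately packed
    clique_like = [v for v, c in C.items() if c >= 0.9]   # clique-like
    return singlets, interim, clique_like

G = nx.random_geometric_graph(500, 0.06, seed=2)  # toy 2-D "galaxy slice"
print([len(m) for m in c_modes(G)])
```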

  14. The formation of continuous opinion dynamics based on a gambling mechanism and its sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Zhu, Yueying; Alexandre Wang, Qiuping; Li, Wei; Cai, Xu

    2017-09-01

    The formation of continuous opinion dynamics is investigated based on a virtual gambling mechanism where agents fight for a limited resource. We propose a model with agents holding opinions between -1 and 1. Agents are segregated into two cliques according to the sign of their opinions. Local communication happens only when the opinion distance between corresponding agents is no larger than a pre-defined confidence threshold. Theoretical analysis regarding special cases provides a deep understanding of the roles of both the resource allocation parameter and the confidence threshold in the formation of opinion dynamics. For a sparse network, the evolution of opinion dynamics is negligible in the region of low confidence threshold when mindless agents are absent. Numerical results also imply that, in the presence of economic agents, a high confidence threshold is required for apparent clustering of agents in opinion. Moreover, a consensus state is generated only when the following three conditions are satisfied simultaneously: mindless agents are absent, the resource is concentrated in one clique, and the confidence threshold tends to a critical value (= 1.25 + 2/k_a for k_a > 8/3, where k_a is the average number of friends of individual agents). For a fixed confidence threshold and resource allocation parameter, the most chaotic steady state of the dynamics happens when the fraction of mindless agents is about 0.7. It is also demonstrated that economic agents are more likely to win at gambling, compared to mindless ones. Finally, the importance of the three involved parameters in establishing the uncertainty of the model response is quantified in terms of Latin hypercube sampling-based sensitivity analysis.
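
    Taking the quoted expression at face value, the critical confidence threshold is quick to tabulate; e.g. k_a = 8 gives 1.25 + 2/8 = 1.5, which is admissible since opinion distances here range up to 2.

```python
# Numeric check of the quoted threshold: 1.25 + 2/k_a, valid for k_a > 8/3.
for k_a in (4, 8, 16):
    print(k_a, 1.25 + 2 / k_a)
```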

  15. Visualizing collaborative electronic health record usage for hospitalized patients with heart failure.

    PubMed

    Soulakis, Nicholas D; Carson, Matthew B; Lee, Young Ji; Schneider, Daniel H; Skeehan, Connor T; Scholtens, Denise M

    2015-03-01

    To visualize and describe collaborative electronic health record (EHR) usage for hospitalized patients with heart failure. We identified records of patients with heart failure and all associated healthcare provider record usage through queries of the Northwestern Medicine Enterprise Data Warehouse. We constructed a network by equating access and updates of a patient's EHR to a provider-patient interaction. We then considered shared patient record access as the basis for a second network that we termed the provider collaboration network. We calculated network statistics, the modularity of provider interactions, and provider cliques. We identified 548 patient records accessed by 5,113 healthcare providers in 2012. The provider collaboration network had 1,504 nodes and 83,998 edges. We identified 7 major provider collaboration modules. Average clique size was 87.9 providers. We used a graph database to demonstrate an ad hoc query of our provider-patient network. Our analysis suggests a large number of healthcare providers across a wide variety of professions access records of patients with heart failure during their hospital stay. This shared record access tends to take place not only in a pairwise manner but also among large groups of providers. EHRs encode valuable interactions, implicitly or explicitly, between patients and providers. Network analysis provided strong evidence of multidisciplinary record access of patients with heart failure across teams of 100+ providers. Further investigation may lead to clearer understanding of how record access information can be used to strategically guide care coordination for patients hospitalized for heart failure. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
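
    The construction reads as a bipartite projection; a minimal sketch with hypothetical provider/patient labels: shared record access induces weighted provider-provider edges, on which cliques can then be enumerated.

```python
# Sketch: provider-patient access log -> provider collaboration network.
import networkx as nx
from networkx.algorithms import bipartite

access = [("dr_a", "pt_1"), ("dr_b", "pt_1"), ("rn_c", "pt_1"),
          ("dr_b", "pt_2"), ("rn_c", "pt_2")]
B = nx.Graph(access)
providers = {u for u, _ in access}
collab = bipartite.weighted_projected_graph(B, providers)
print(list(collab.edges(data=True)))     # weight = number of shared patients
print(list(nx.find_cliques(collab)))     # candidate collaboration cliques
```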

  16. Modeling Temporal Variation in Social Network: An Evolutionary Web Graph Approach

    NASA Astrophysics Data System (ADS)

    Mitra, Susanta; Bagchi, Aditya

    A social network is a social structure between actors (individuals, organizations or other social entities) and indicates the ways in which they are connected through various social relationships, such as friendship, kinship, or professional and academic ties. Usually, a social network represents a social community, like a club and its members or a city and its citizens, or a research group communicating over the Internet. In the seventies, Leinhardt [1] first proposed the idea of representing a social community by a digraph. Later, this idea became popular among other research workers, such as network designers, web-service application developers and e-learning modelers, and gave rise to a rapid proliferation of research work in the area of social network analysis. Some of the notable structural properties of a social network are connectedness between actors, reachability between a source and a target actor, reciprocity or pair-wise connection between actors with bi-directional links, centrality of actors or the important actors having high degree or more connections, and finally the division of actors into sub-structures or cliques or strongly-connected components. The cycles present in a social network may even be nested [2, 3]. The formal definition of these structural properties will be provided in Sect. 8.2.1. The division of actors into cliques or sub-groups can be a very important factor for understanding a social structure, particularly the degree of cohesiveness in a community. The number, size, and connections among the sub-groups in a network are useful in understanding how the network, as a whole, is likely to behave.
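
    The structural properties listed above all have one-line counterparts in standard tooling; a sketch on a toy directed network (the edge list is illustrative):

```python
# Sketch: reachability, reciprocity, centrality, cohesive sub-structures.
import networkx as nx

G = nx.DiGraph([("a", "b"), ("b", "a"), ("b", "c"), ("c", "d"), ("d", "b")])
print(nx.has_path(G, "a", "d"))                   # reachability
print(nx.reciprocity(G))                          # pair-wise mutual links
print(nx.degree_centrality(G))                    # important actors
print(list(nx.strongly_connected_components(G)))  # strongly-connected parts
print(list(nx.find_cliques(G.to_undirected())))   # cliques (undirected view)
```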

  17. Network analysis of the COSMOS galaxy field

    NASA Astrophysics Data System (ADS)

    de Regt, R.; Apunevych, S.; Ferber, C. von; Holovatch, Yu; Novosyadlyj, B.

    2018-03-01

    The galaxy data provided by the COSMOS survey for a 1° × 1° field of sky are analysed by methods of complex networks. Three galaxy samples (slices) with redshifts ranging within intervals 0.88÷0.91, 0.91÷0.94 and 0.94÷0.97 are studied as two-dimensional projections for the spatial distributions of galaxies. We construct networks and calculate network measures for each sample, in order to analyse the network similarity of different samples, distinguish various topological environments, and find associations between galaxy properties (colour index and stellar mass) and their topological environments. Results indicate a high level of similarity between geometry and topology for different galaxy samples and no clear evidence of evolutionary trends in network measures. The distribution of local clustering coefficient C manifests three modes which allow for discrimination between stand-alone singlets and dumbbells (0 ≤ C ≤ 0.1), intermediately packed (0.1 < C < 0.9) and clique (0.9 ≤ C ≤ 1) like galaxies. Analysing astrophysical properties of galaxies (colour index and stellar masses), we show that distributions are similar in all slices; however, weak evolutionary trends can also be seen across redshift slices. To specify different topological environments, we have extracted selections of galaxies from each sample according to different modes of C distribution. We have found statistically significant associations between evolutionary parameters of galaxies and selections of C: the distribution of stellar mass for galaxies with interim C differs from the corresponding distributions for stand-alone and clique galaxies, and this difference holds for all redshift slices. The colour index realises somewhat different behaviour.

  18. The Galactic Club or Galactic Cliques? Exploring the limits of interstellar hegemony and the Zoo Hypothesis

    NASA Astrophysics Data System (ADS)

    Forgan, Duncan H.

    2017-10-01

    The Zoo solution to Fermi's Paradox proposes that extraterrestrial intelligences (ETIs) have agreed not to contact the Earth. The strength of this solution depends on the ability of ETIs to come to agreement, and establish/police treaties as part of a so-called `Galactic Club'. These activities are principally limited by the causal connectivity of a civilization to its neighbours at its inception, i.e. whether it comes to prominence being aware of other ETIs and any treaties or agreements in place. If even one civilization is not causally connected to the other members of a treaty, then it is free to operate beyond it and contact the Earth if it wishes, which makes the Zoo solution `soft'. We should therefore consider how likely this scenario is, as this will give us a sense of the Zoo solution's softness, or general validity. We implement a simple toy model of ETIs arising in a Galactic Habitable Zone and calculate the properties of the groups of culturally connected civilizations established therein. We show that for most choices of civilization parameters, the number of culturally connected groups is >1, meaning that the Galaxy is composed of multiple Galactic Cliques rather than a single Galactic Club. We find that, in our models, for a single Galactic Club to establish interstellar hegemony, the number of civilizations must be relatively large, the mean civilization lifetime must be several million years, and the inter-arrival time between civilizations must be a few million years or less.
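
    A sketch of the grouping step of such a toy model: civilizations count as culturally connected when one's broadcast window, delayed by light travel time, overlaps the other's lifetime, and union-find then counts the resulting groups. The geometry, counts, and lifetimes below are assumptions, not the paper's calibration.

```python
# Sketch: count "Galactic Cliques" among randomly arising civilizations.
import random

def overlaps(a1, a2, b1, b2):            # do intervals [a1,a2], [b1,b2] meet?
    return a1 <= b2 and b1 <= a2

def galactic_cliques(n=200, radius=4e4, lifetime=2e6, span=1e9, seed=0):
    rng = random.Random(seed)
    civs = [(rng.uniform(0, span),                      # birth time (years)
             rng.uniform(-radius, radius),              # toy 2-D position (ly)
             rng.uniform(-radius, radius)) for _ in range(n)]
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        ti, xi, yi = civs[i]
        for j in range(i + 1, n):
            tj, xj, yj = civs[j]
            d = ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5  # ly == travel years
            if overlaps(ti + d, ti + lifetime + d, tj, tj + lifetime) or \
               overlaps(tj + d, tj + lifetime + d, ti, ti + lifetime):
                parent[find(i)] = find(j)                 # culturally connected
    return len({find(i) for i in range(n)})

print(galactic_cliques())   # > 1 means cliques rather than a single Club
```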

  19. Role of long- and short-range hydrophobic, hydrophilic and charged residues contact network in protein’s structural organization

    PubMed Central

    2012-01-01

    Background The three-dimensional structure of a protein can be described as a graph where nodes represent residues and the strengths of non-covalent interactions between them are edges. These protein contact networks can be separated into long- and short-range interaction networks depending on the positions of amino acids in the primary structure. Long-range interactions play a distinct role in determining the tertiary structure of a protein, while short-range interactions largely contribute to secondary structure formation. In addition, the physicochemical properties and the linear arrangement of the amino acids of the primary structure determine a protein's three-dimensional structure. Here, we present an extensive analysis of protein contact subnetworks based on the London van der Waals interactions of amino acids at different length scales. We further subdivided those networks into hydrophobic, hydrophilic and charged residue networks and have tried to correlate their influence on the overall topology and organization of a protein. Results The largest connected component (LCC) of long- (LRN), short- (SRN) and all-range (ARN) networks within proteins exhibits a transition behaviour when plotted against different interaction strengths of edges among amino acid nodes. While short-range networks, having chain-like structures, exhibit a highly cooperative transition, long- and all-range networks, which are more similar to each other, have non-chain-like structures and show less cooperativity. Further, the hydrophobic residue subnetworks in long- and all-range networks have transition behaviours similar to the all-residue all-range networks, but the hydrophilic and charged residue networks do not. While the nature of the transitions of the LCCs' sizes is the same in SRNs for thermophiles and mesophiles, there exists a clear difference in LRNs. The presence of larger clusters of interconnected long-range interactions in thermophiles than in mesophiles, even at higher interaction strengths between amino acids, gives extra stability to the tertiary structure of the thermophiles. All the subnetworks at different length scales (ARNs, LRNs and SRNs) show the assortative mixing property of their participating amino acids. While there exists a significantly higher percentage of hydrophobic subclusters over others in ARNs and LRNs, we do not find assortative mixing behaviour of any of the subclusters in SRNs. The clustering coefficient of hydrophobic subclusters in the long-range network is the highest among all types of subnetworks. There exist highly cliquish hydrophobic nodes followed by charged nodes in LRNs and ARNs; on the other hand, we observe the highest dominance of charged-residue cliques in short-range networks. Studies on the perimeters of the cliques also show higher occurrences of hydrophobic and charged residues' cliques. Conclusions The simple framework of protein contact networks and their subnetworks based on the London van der Waals force is able to capture several known properties of protein structure and to unravel several new features. Thermophiles not only have a higher number of long-range interactions; they also have larger clusters of connected residues at higher interaction strengths among amino acids than their mesophilic counterparts. This re-establishes the significant role of long-range hydrophobic clusters in protein folding and stabilization; at the same time, it sheds light on the higher communication ability of hydrophobic subnetworks over the others. The results give an indication of the controlling role of hydrophobic subclusters in determining a protein's folding rate. The occurrence of higher perimeters of hydrophobic and charged cliques implies the role of charged residues as well as hydrophobic residues in stabilizing the distant parts of the primary structure of a protein through London van der Waals interactions. PMID:22720789

  20. Generalized epidemic process on modular networks.

    PubMed

    Chung, Kihong; Baek, Yongjoo; Kim, Daniel; Ha, Meesoon; Jeong, Hawoong

    2014-05-01

    Social reinforcement and modular structure are two salient features observed in the spreading of behavior through social contacts. In order to investigate the interplay between these two features, we study the generalized epidemic process on modular networks with equal-sized finite communities and adjustable modularity. Using the analytical approach originally applied to clique-based random networks, we show that the system exhibits a bond-percolation-type continuous phase transition for weak social reinforcement, whereas a discontinuous phase transition occurs for sufficiently strong social reinforcement. Our findings are numerically verified using finite-size scaling analysis and the crossings of the bimodality coefficient.

  1. Maximum Range of a Projectile Thrown from Constant-Speed Circular Motion

    NASA Astrophysics Data System (ADS)

    Poljak, Nikola

    2016-11-01

    The problem of determining the angle θ at which a point mass launched from ground level with a given speed v_0 will reach a maximum distance is a standard exercise in mechanics. There are many possible ways of solving this problem, leading to the well-known answer of θ = π/4, producing a maximum range of D_max = v_0^2/g, with g being the free-fall acceleration. Conceptually and computationally more difficult problems have been suggested to improve student proficiency in projectile motion, with the most famous example being the Tarzan swing problem. The problem of determining the maximum distance of a point mass thrown from constant-speed circular motion is presented and analyzed in detail in this text. The computational results confirm several conceptually derived conclusions regarding the initial throw position and provide some details on the angles and the way of throwing (underhand or overhand) that produce the maximum throw distance.
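
    A quick numerical check (not part of the record, and with an arbitrary launch speed) confirms the optimum by maximizing the range formula over a grid of angles:

        import numpy as np

        v0, g = 10.0, 9.81  # launch speed (m/s) and free-fall acceleration (m/s^2)
        theta = np.linspace(0.01, np.pi / 2 - 0.01, 10_000)

        # Ground-level range: D(theta) = v0^2 * sin(2*theta) / g
        D = v0**2 * np.sin(2 * theta) / g
        best = theta[np.argmax(D)]
        print(f"optimal angle = {best:.4f} rad (pi/4 = {np.pi / 4:.4f}), D_max = {D.max():.3f} m")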

  2. An annealed chaotic maximum neural network for bipartite subgraph problem.

    PubMed

    Wang, Jiahai; Tang, Zheng; Wang, Ronglong

    2004-04-01

    In this paper, based on the maximum neural network, we propose a new parallel algorithm that can help the maximum neural network escape from local minima by including transient chaotic neurodynamics for the bipartite subgraph problem. The goal of the bipartite subgraph problem, which is an NP-complete problem, is to remove the minimum number of edges in a given graph such that the remaining graph is a bipartite graph. Lee et al. presented a parallel algorithm using the maximum neural model (winner-take-all neuron model) for this NP-complete problem. The maximum neural model always guarantees a valid solution and greatly reduces the search space without the burden of parameter tuning. However, the model has a tendency to converge to a local minimum easily because it is based on the steepest descent method. By adding negative self-feedback to the maximum neural network, we propose a new parallel algorithm that introduces richer and more flexible chaotic dynamics and can prevent the network from getting stuck at local minima. After the chaotic dynamics vanishes, the proposed algorithm is fundamentally governed by gradient descent dynamics and usually converges to a stable equilibrium point. The proposed algorithm has the advantages of both the maximum neural network and chaotic neurodynamics. A large number of instances have been simulated to verify the proposed algorithm. The simulation results show that our algorithm finds optimum or near-optimum solutions for the bipartite subgraph problem, superior to those of the best existing parallel algorithms.
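
    The chaotic maximum neural network itself is not reproduced here; the sketch below is a plain simulated-annealing baseline for the same problem, 2-coloring the vertices and counting monochromatic edges (exactly the edges that must be removed to leave a bipartite graph). The graph and annealing parameters are illustrative:

        import math
        import random

        def bipartite_subgraph_sa(n, edges, steps=50_000, t0=2.0, cooling=0.9999, seed=0):
            """Flip vertex sides to minimize the number of monochromatic edges."""
            rng = random.Random(seed)
            adj = [[] for _ in range(n)]
            for u, v in edges:
                adj[u].append(v)
                adj[v].append(u)
            side = [rng.randint(0, 1) for _ in range(n)]
            cost = sum(side[u] == side[v] for u, v in edges)
            best_cost, best_side = cost, side[:]
            t = t0
            for _ in range(steps):
                v = rng.randrange(n)
                same = sum(side[v] == side[u] for u in adj[v])
                delta = (len(adj[v]) - same) - same  # cost change if v flips sides
                if delta <= 0 or rng.random() < math.exp(-delta / t):
                    side[v] ^= 1
                    cost += delta
                    if cost < best_cost:
                        best_cost, best_side = cost, side[:]
                t *= cooling
            return best_cost, best_side

        # A 5-cycle is an odd cycle: at least one edge must be removed.
        print(bipartite_subgraph_sa(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])[0])  # -> 1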

  3. Constraint Programming to Solve Maximal Density Still Life

    NASA Astrophysics Data System (ADS)

    Chu, Geoffrey; Petrie, Karen Elizabeth; Yorke-Smith, Neil

    The Maximum Density Still Life problem fills a finite Game of Life board with a stable pattern of cells that has as many live cells as possible. Although simple to state, this problem is computationally challenging for all but the smallest board sizes. It is especially difficult to prove that the maximum number of live cells has been found. Various approaches have been employed, the most successful being those based on Constraint Programming (CP). We describe the Maximum Density Still Life problem, introduce the concept of constraint programming, give an overview of how the problem can be modelled and solved with CP, and report the best-known results for the problem.
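
    For intuition only (a brute-force toy, not the CP model discussed here): a still life is a pattern that one Game of Life step leaves unchanged, including the dead border around it, so tiny boards can be verified and searched exhaustively:

        from itertools import product

        import numpy as np

        def is_still_life(board):
            """Check that one Game of Life step leaves the zero-padded board unchanged."""
            padded = np.pad(board, 2)  # dead cells surround the pattern
            nxt = np.zeros_like(padded)
            for i in range(1, padded.shape[0] - 1):
                for j in range(1, padded.shape[1] - 1):
                    n = padded[i - 1:i + 2, j - 1:j + 2].sum() - padded[i, j]
                    nxt[i, j] = 1 if n == 3 or (padded[i, j] and n == 2) else 0
            return np.array_equal(nxt, padded)

        def max_density_still_life(r, c):
            """Exhaustive search over all 2^(r*c) boards; feasible only for tiny boards."""
            best, best_board = -1, None
            for bits in product((0, 1), repeat=r * c):
                board = np.array(bits).reshape(r, c)
                if board.sum() > best and is_still_life(board):
                    best, best_board = int(board.sum()), board
            return best, best_board

        print(max_density_still_life(3, 3))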

  4. Being "chill" with teachers and "frozen" by peers in science: overcoming social and educational barriers in a learning community

    NASA Astrophysics Data System (ADS)

    Kim, Hannah; Scantlebury, Kathryn

    2013-09-01

    This forum discusses the issue of `othering' and how intersectionality is a useful analytical framework for understanding the students' immigrant experiences in, and out of, the science classroom. We use a feminist perspective to discuss Minjung's study because gender is a key aspect of one's identity; other aspects, such as race, religion, socio-economic status, and age, have also assumed significant status in gender studies. Lastly, we examine the supports and barriers that cliques can produce and propose the importance of building a learning community in the science classroom to engage all students.

  5. Interval Graph Limits

    PubMed Central

    Diaconis, Persi; Holmes, Susan; Janson, Svante

    2015-01-01

    We work out a graph limit theory for dense interval graphs. The theory developed departs from the usual description of a graph limit as a symmetric function W (x, y) on the unit square, with x and y uniform on the interval (0, 1). Instead, we fix a W and change the underlying distribution of the coordinates x and y. We find choices such that our limits are continuous. Connections to random interval graphs are given, including some examples. We also show a continuity result for the chromatic number and clique number of interval graphs. Some results on uniqueness of the limit description are given for general graph limits. PMID:26405368
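
    A concrete property in the background here: the clique number of an interval graph equals the maximum number of pairwise-overlapping intervals, which an endpoint sweep computes directly. A small sketch (the intervals are illustrative; touching endpoints are treated as disjoint):

        def interval_clique_number(intervals):
            """Max overlap depth of the intervals = clique number of their interval graph."""
            events = []
            for lo, hi in intervals:
                events.append((lo, 1))   # interval opens
                events.append((hi, -1))  # interval closes
            # At equal coordinates, closings sort first: touching intervals don't overlap.
            events.sort()
            depth = best = 0
            for _, delta in events:
                depth += delta
                best = max(best, depth)
            return best

        print(interval_clique_number([(0, 3), (1, 5), (2, 4), (6, 8)]))  # -> 3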

  6. A distributed-memory approximation algorithm for maximum weight perfect bipartite matching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azad, Ariful; Buluc, Aydin; Li, Xiaoye S.

    We design and implement an efficient parallel approximation algorithm for the problem of maximum weight perfect matching in bipartite graphs, i.e. the problem of finding a set of non-adjacent edges that covers all vertices and has maximum weight. This problem differs from the maximum weight matching problem, for which scalable approximation algorithms are known. It is primarily motivated by finding good pivots in scalable sparse direct solvers before factorization, where sequential implementations of maximum weight perfect matching algorithms, such as those available in MC64, are widely used due to the lack of scalable alternatives. To overcome this limitation, we propose a fully parallel distributed-memory algorithm that first generates a perfect matching and then searches for weight-augmenting cycles of length four in parallel and iteratively augments the matching with a vertex-disjoint set of such cycles. For most practical problems the weights of the perfect matchings generated by our algorithm are very close to the optimum. An efficient implementation of the algorithm scales up to 256 nodes (17,408 cores) on a Cray XC40 supercomputer and can solve instances that are too large to be handled by a single node using the sequential algorithm.
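
    For small dense instances, an exact maximum-weight perfect matching is available through SciPy's Hungarian-style solver; this is a serial exact reference point (with a random weight matrix), not the paper's distributed approximation:

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        rng = np.random.default_rng(0)
        W = rng.random((6, 6))  # W[i, j] = weight of edge between left vertex i and right vertex j

        # Exact maximum-weight perfect matching on the complete bipartite graph.
        rows, cols = linear_sum_assignment(W, maximize=True)
        print(list(zip(rows, cols)), W[rows, cols].sum())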

  7. Phase diagrams for an evolutionary prisoner's dilemma game on two-dimensional lattices

    NASA Astrophysics Data System (ADS)

    Szabó, György; Vukov, Jeromos; Szolnoki, Attila

    2005-10-01

    The effects of payoffs and noise on the maintenance of cooperative behavior are studied in an evolutionary prisoner’s dilemma game with players located on the sites of different two-dimensional lattices. This system exhibits a phase transition from a mixed state of cooperators and defectors to a homogeneous one where only the defectors remain alive. Using Monte Carlo simulations and the generalized mean-field approximations we have determined the phase boundaries (critical points) separating the two phases on the plane of the temperature (noise) and temptation to choose defection. In the zero temperature limit the cooperation can be sustained only for those connectivity structures where three-site clique percolation occurs.

  8. Seeding for pervasively overlapping communities

    NASA Astrophysics Data System (ADS)

    Lee, Conrad; Reid, Fergal; McDaid, Aaron; Hurley, Neil

    2011-06-01

    In some social and biological networks, the majority of nodes belong to multiple communities. It has recently been shown that a number of the algorithms specifically designed to detect overlapping communities do not perform well in such highly overlapping settings. Here, we consider one class of these algorithms, those which optimize a local fitness measure, typically by using a greedy heuristic to expand a seed into a community. We perform synthetic benchmarks which indicate that an appropriate seeding strategy becomes more important as the extent of community overlap increases. We find that distinct cliques provide the best seeds. We find further support for this seeding strategy with benchmarks on a Facebook network and the yeast interactome.
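
    A minimal sketch of the seeding step, assuming networkx: take vertex-disjoint maximal cliques, largest first, as seeds (the greedy local-fitness expansion that would follow is omitted):

        import networkx as nx

        def disjoint_clique_seeds(G, min_size=3):
            """Greedily pick vertex-disjoint maximal cliques (largest first) as seeds."""
            used, seeds = set(), []
            for clique in sorted(nx.find_cliques(G), key=len, reverse=True):
                if len(clique) >= min_size and used.isdisjoint(clique):
                    seeds.append(clique)
                    used.update(clique)
            return seeds

        G = nx.karate_club_graph()
        for seed in disjoint_clique_seeds(G):
            print(sorted(seed))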

  9. Maximum Range of a Projectile Thrown from Constant-Speed Circular Motion

    ERIC Educational Resources Information Center

    Poljak, Nikola

    2016-01-01

    The problem of determining the angle θ at which a point mass launched from ground level with a given speed v[subscript 0] will reach a maximum distance is a standard exercise in mechanics. There are many possible ways of solving this problem, leading to the well-known answer of θ = π/4, producing a maximum range of D[subscript max] = v[superscript…

  10. Brain Computation Is Organized via Power-of-Two-Based Permutation Logic

    PubMed Central

    Xie, Kun; Fox, Grace E.; Liu, Jun; Lyu, Cheng; Lee, Jason C.; Kuang, Hui; Jacobs, Stephanie; Li, Meng; Liu, Tianming; Song, Sen; Tsien, Joe Z.

    2016-01-01

    There is considerable scientific interest in understanding how cell assemblies—the long-presumed computational motif—are organized so that the brain can generate intelligent cognition and flexible behavior. The Theory of Connectivity proposes that the origin of intelligence is rooted in a power-of-two-based permutation logic (N = 2^i − 1), producing specific-to-general cell-assembly architecture capable of generating specific perceptions and memories, as well as generalized knowledge and flexible actions. We show that this power-of-two-based permutation logic is widely used in cortical and subcortical circuits across animal species and is conserved for the processing of a variety of cognitive modalities including appetitive, emotional and social information. However, modulatory neurons, such as dopaminergic (DA) neurons, use a simpler logic despite their distinct subtypes. Interestingly, this specific-to-general permutation logic remained largely intact although NMDA receptors—the synaptic switch for learning and memory—were deleted throughout adulthood, suggesting that the logic is developmentally pre-configured. Moreover, this computational logic is implemented in the cortex via combining a random-connectivity strategy in superficial layers 2/3 with nonrandom organizations in deep layers 5/6. This randomness of layers 2/3 cliques—which preferentially encode specific and low-combinatorial features and project inter-cortically—is ideal for maximizing cross-modality novel pattern-extraction, pattern-discrimination and pattern-categorization using sparse code, consequently explaining why it requires hippocampal offline-consolidation. In contrast, the nonrandomness in layers 5/6—which consists of few specific cliques but a higher portion of more general cliques projecting mostly to subcortical systems—is ideal for feedback-control of motivation, emotion, consciousness and behaviors. These observations suggest that the brain’s basic computational algorithm is indeed organized by the power-of-two-based permutation logic. This simple mathematical logic can account for brain computation across the entire evolutionary spectrum, ranging from the simplest neural networks to the most complex. PMID:27895562
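
    The count N = 2^i − 1 is the number of nonempty subsets of i distinct inputs, one per cell assembly from highly specific (a single input) to fully general (all inputs); a few lines verify this:

        from itertools import combinations

        i = 4
        assemblies = [c for r in range(1, i + 1) for c in combinations(range(i), r)]
        print(len(assemblies), 2**i - 1)  # both are 15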

  11. Systems biological approach to investigate the lack of familial link between Down's Syndrome & Neural Tube Disorders.

    PubMed

    Ragunath, Pk; Abhinand, Pa

    2013-01-01

    Systems Biology involves the study of the interactions of biological systems and ultimately their functions. Down's syndrome (DS) is one of the most common genetic disorders; it is caused by complete, or occasionally partial, triplication of chromosome 21 and is characterized by cognitive and language dysfunction coupled with sensory and neuromotor deficits. Neural Tube Disorders (NTDs) are a group of congenital malformations of the central nervous system and neighboring structures related to defective neural tube closure during the first trimester of pregnancy, usually occurring between days 18-29 of gestation. Several studies in the past have provided considerable evidence that abnormal folate and methyl metabolism are associated with the onset of DS and NTDs. There is a possible common etiological pathway for both NTDs and Down's syndrome, but various research studies over the years have found very little evidence for a familial link between the two disorders. Our research aimed at gene expression profiling of microarray datasets pertaining to the two disorders to identify genes whose expression levels are significantly altered in these conditions. The genes which were 1.5-fold upregulated and had a p-value <0.05 were filtered out, and gene interaction networks were constructed for both NTDs and DS. The top-ranked dense cliques for both disorders were identified, and over-representation analysis was carried out for each of the constituent genes. The comprehensive manual analysis of these genes yields a hypothetical understanding of the lack of familial link between DS and NTDs. There were no genes involved with folic acid present in the dense cliques. Only the CBL and EGFR genes were commonly present, which makes the allelic variants of these genes good candidates for future studies regarding the familial link between DS and NTDs. NTD - Neural Tube Disorders, DS - Down's Syndrome, MTHFR - Methylenetetrahydrofolate reductase, MTRR - 5-methyltetrahydrofolate-homocysteine methyltransferase reductase.

  12. Combinatorial Algorithms for Portfolio Optimization Problems - Case of Risk Moderate Investor

    NASA Astrophysics Data System (ADS)

    Juarna, A.

    2017-03-01

    Portfolio optimization is the problem of finding an optimal combination of n stocks from N ≥ n available stocks that gives maximal aggregate return and minimal aggregate risk. In this paper, N = 43 stocks are taken from the IDX (Indonesia Stock Exchange) group of the 45 most-traded stocks, known as the LQ45, with p = 24 monthly returns for each stock, spanning 2013-2014. The problem is combinatorial, and its algorithm is constructed based on two considerations: a risk-moderate type of investor and a maximum allowed correlation coefficient between every two eligible stocks. The main output from the implementation of the algorithm is a set of curves of three portfolio attributes, i.e., the size, the ratio of return to risk, and the percentage of negative correlation coefficients between chosen stock pairs, as a function of the maximum allowed correlation coefficient between every two stocks. The curves show that the portfolio contains three stocks, with a return-to-risk ratio of 14.57, if the maximum allowed correlation coefficient between every two eligible stocks is negative, and contains 19 stocks, with a maximum return-to-risk ratio of 25.48, when the maximum allowed correlation coefficient is 0.17.
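
    One way to read the correlation constraint (an interpretation for illustration, not the paper's stated procedure): stocks whose pairwise correlations all stay at or below the threshold form a clique in a compatibility graph, so the largest eligible portfolio is a maximum clique. A sketch with hypothetical returns, assuming networkx:

        import networkx as nx
        import numpy as np

        rng = np.random.default_rng(1)
        returns = rng.normal(size=(24, 10))  # 24 monthly returns for 10 hypothetical stocks
        corr = np.corrcoef(returns.T)

        def eligible_portfolio(corr, max_corr):
            """Largest set of stocks whose pairwise correlations are all <= max_corr."""
            n = corr.shape[0]
            G = nx.Graph()
            G.add_nodes_from(range(n))
            G.add_edges_from((i, j) for i in range(n) for j in range(i + 1, n)
                             if corr[i, j] <= max_corr)
            clique, _ = nx.max_weight_clique(G, weight=None)  # maximum-cardinality clique
            return sorted(clique)

        print(eligible_portfolio(corr, max_corr=0.0))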

  13. Computational, Integrative, and Comparative Methods for the Elucidation of Genetic Coexpression Networks

    DOE PAGES

    Baldwin, Nicole E.; Chesler, Elissa J.; Kirov, Stefan; ...

    2005-01-01

    Gene expression microarray data can be used for the assembly of genetic coexpression network graphs. Using mRNA samples obtained from recombinant inbred Mus musculus strains, it is possible to integrate allelic variation with molecular and higher-order phenotypes. The depth of quantitative genetic analysis of microarray data can be vastly enhanced utilizing this mouse resource in combination with powerful computational algorithms, platforms, and data repositories. The resulting network graphs transect many levels of biological scale. This approach is illustrated with the extraction of cliques of putatively co-regulated genes and their annotation using gene ontology analysis and cis-regulatory element discovery. The causal basis for co-regulation is detected through the use of quantitative trait locus mapping.

  14. Extreme values and the level-crossing problem: An application to the Feller process

    NASA Astrophysics Data System (ADS)

    Masoliver, Jaume

    2014-04-01

    We review the question of the extreme values attained by a random process. We relate it to level crossings to one boundary (first-passage problems) as well as to two boundaries (escape problems). The extremes studied are the maximum, the minimum, the maximum absolute value, and the range or span. We specialize in diffusion processes and present detailed results for the Wiener and Feller processes.

  15. Exploration of the Maximum Entropy/Optimal Projection Approach to Control Design Synthesis for Large Space Structures.

    DTIC Science & Technology

    1985-02-01

    Energy Analysis, a branch of dynamic modal analysis developed for analyzing acoustic vibration problems, its present stage of development embodies a...Maximum Entropy Stochastic Modelling and Reduced-Order Design Synthesis is a rigorous new approach to this class of problems. Inspired by Statistical

  16. Forseeable Problems in a System of Maximum Access.

    ERIC Educational Resources Information Center

    Pemberton, John de J., Jr.

    A maximum-access cable television system will eliminate some legal and regulatory problems and introduce others. The operator of a system will no longer be responsible for and in control of what is transmitted over his system. With access unlimited and unrestricted, such regulations of content as the "fairness doctrine" and "equal…

  17. Maximum Principles and Application to the Analysis of An Explicit Time Marching Algorithm

    NASA Technical Reports Server (NTRS)

    LeTallec, Patrick; Tidriri, Moulay D.

    1996-01-01

    In this paper we develop local and global estimates for the solution of convection-diffusion problems. We then study the convergence properties of a Time Marching Algorithm solving Advection-Diffusion problems on two domains using incompatible discretizations. This study is based on a De-Giorgi-Nash maximum principle.

  18. Guaranteed convergence of the Hough transform

    NASA Astrophysics Data System (ADS)

    Soffer, Menashe; Kiryati, Nahum

    1995-01-01

    The straight-line Hough Transform using normal parameterization with a continuous voting kernel is considered. It transforms the collinearity detection problem into a problem of finding the global maximum of a two-dimensional function above a domain in the parameter space. The principle is similar to robust regression using fixed-scale M-estimation. Unlike standard M-estimation procedures, the Hough Transform does not rely on a good initial estimate of the line parameters: the global optimization problem is approached by exhaustive search on a grid that is usually as fine as computationally feasible. The global maximum of a general function above a bounded domain cannot be found by a finite number of function evaluations. Only if sufficient a priori knowledge about the smoothness of the objective function is available can convergence to the global maximum be guaranteed. The extraction of a priori information and its efficient use are the main challenges in real global optimization problems. The global optimization problem in the Hough Transform is essentially how fine the parameter space quantization should be in order not to miss the true maximum. More than thirty years after Hough patented the basic algorithm, the problem is still essentially open. In this paper an attempt is made to identify a priori information on the smoothness of the objective (Hough) function and to introduce sufficient conditions for the convergence of the Hough Transform to the global maximum. An image model with several application-dependent parameters is defined. Edge point location errors as well as background noise are accounted for. Minimal parameter space quantization intervals that guarantee convergence are obtained. Focusing policies for multi-resolution Hough algorithms are developed. Theoretical support for bottom-up processing is provided. Due to the randomness of errors and noise, convergence guarantees are probabilistic.
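
    A minimal sketch of the voting step with uniform grids (the points are synthetic; none of the paper's quantization analysis is reproduced):

        import numpy as np

        def hough_peak(points, theta_bins=180, rho_res=1.0, rho_max=200.0):
            """Vote in (theta, rho) space: (x, y) lies on rho = x*cos(theta) + y*sin(theta)."""
            thetas = np.linspace(0.0, np.pi, theta_bins, endpoint=False)
            n_rho = int(2 * rho_max / rho_res) + 1
            acc = np.zeros((theta_bins, n_rho), dtype=int)
            for x, y in points:
                rho = x * np.cos(thetas) + y * np.sin(thetas)
                bins = np.round((rho + rho_max) / rho_res).astype(int)
                acc[np.arange(theta_bins), bins] += 1  # one vote per theta cell
            t, r = np.unravel_index(acc.argmax(), acc.shape)
            return thetas[t], r * rho_res - rho_max

        # Ten noisy points near the line y = x (theta = 3*pi/4, rho = 0).
        rng = np.random.default_rng(0)
        pts = [(i, i + 0.1 * rng.standard_normal()) for i in range(10)]
        print(hough_peak(pts))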

  19. Gene Regulatory Network Inferences Using a Maximum-Relevance and Maximum-Significance Strategy

    PubMed Central

    Liu, Wei; Zhu, Wen; Liao, Bo; Chen, Xiangtao

    2016-01-01

    Recovering gene regulatory networks from expression data is a challenging problem in systems biology that provides valuable information on the regulatory mechanisms of cells. A number of algorithms based on computational models are currently used to recover network topology. However, most of these algorithms have limitations. For example, many models tend to be complicated because of the “large p, small n” problem. In this paper, we propose a novel regulatory network inference method called the maximum-relevance and maximum-significance network (MRMSn) method, which converts the problem of recovering networks into a problem of how to select the regulator genes for each gene. To solve the latter problem, we present an algorithm that is based on information theory and selects the regulator genes for a specific gene by maximizing the relevance and significance. A first-order incremental search algorithm is used to search for regulator genes. Eventually, a strict constraint is adopted to adjust all of the regulatory relationships according to the obtained regulator genes and thus obtain the complete network structure. We performed our method on five different datasets and compared our method to five state-of-the-art methods for network inference based on information theory. The results confirm the effectiveness of our method. PMID:27829000
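
    A simplified, mRMR-style sketch of greedy regulator selection by mutual information; the paper's relevance/significance criterion differs in detail, and the discretized expression data below are random placeholders:

        import numpy as np
        from sklearn.metrics import mutual_info_score

        def select_regulators(expr, target, k=2):
            """Greedy first-order search: add the gene with the highest mutual information
            with the target, penalized by redundancy with regulators already chosen."""
            chosen = []
            while len(chosen) < k:
                best_gene, best_score = None, -np.inf
                for g in range(expr.shape[1]):
                    if g in chosen:
                        continue
                    relevance = mutual_info_score(expr[:, g], target)
                    redundancy = (np.mean([mutual_info_score(expr[:, g], expr[:, c])
                                           for c in chosen]) if chosen else 0.0)
                    if relevance - redundancy > best_score:
                        best_gene, best_score = g, relevance - redundancy
                chosen.append(best_gene)
            return chosen

        rng = np.random.default_rng(0)
        expr = rng.integers(0, 3, size=(50, 8))  # samples x genes, discretized levels
        print(select_regulators(expr, target=expr[:, 0]))  # gene 0 is picked first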

  20. Mothers' maximum drinks ever consumed in 24 hours predicts mental health problems in adolescent offspring

    PubMed Central

    Malone, Stephen M.; McGue, Matt; Iacono, William G.

    2009-01-01

    Background The maximum number of alcoholic drinks consumed in a single 24-hr period is an alcoholism-related phenotype with both face and empirical validity. It has been associated with severity of withdrawal symptoms and sensitivity to alcoholism, genes implicated in alcohol metabolism, and amplitude of a measure of brain activity associated with externalizing disorders in general. In a previous study we found that the maximum number of drinks fathers had ever consumed in 24 hrs was associated with externalizing behaviors and disorders in preadolescent and adolescent children. The purpose of the present study was to determine whether maternal maximum consumption has similar correlates. Method We examined associations between maternal maximum consumption and alcohol dependence, respectively, and disruptive disorders and substance-related problems in two large independent population-based cohorts of 17-year-old adolescents. Results Maximum consumption was associated with conduct disorder, disruptive disorders in general, early substance use and misuse, and substance disorders in adolescent children regardless of sex. Associations were consistent across cohorts, providing internal replication. They also paralleled our previous findings regarding paternal status. They could not be explained by maternal alcohol dependence, effects of drinking during pregnancy, or paternal maximum consumption. They were not simple artifacts of the fact that maximum consumption is a continuous measure while alcohol dependence is dichotomous. Conclusions Despite deriving from a single question about lifetime behavior, parental maximum consumption appears to reflect vulnerability for mental health problems, especially substance-related ones, more directly than a diagnosis of alcohol dependence. PMID:20085606

  1. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows

    PubMed Central

    Wang, Di; Kleinberg, Robert D.

    2009-01-01

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network. PMID:20161596
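
    On tiny instances, the minimum that the bounds C2, C3, … approach from below can be computed by exhaustive search; a toy sketch with an arbitrary matrix:

        from itertools import product

        import numpy as np

        def qubo_min(Q):
            """Exhaustive minimization of x^T Q x over x in {0, 1}^n (tiny n only)."""
            n = Q.shape[0]
            best_val, best_x = np.inf, None
            for bits in product((0, 1), repeat=n):
                x = np.array(bits)
                val = x @ Q @ x
                if val < best_val:
                    best_val, best_x = val, x
            return best_val, best_x

        Q = np.array([[-1.0, 2.0, 0.0],
                      [0.0, -1.0, 2.0],
                      [0.0, 0.0, -1.0]])
        print(qubo_min(Q))  # minimizer x = (1, 0, 1) with value -2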

  2. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows.

    PubMed

    Wang, Di; Kleinberg, Robert D

    2009-11-28

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C(2), C(3), C(4),…. It is known that C(2) can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing C(k) (k > 2) require solving a linear program. In this paper we prove that C(3) can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}(n), this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network.

  3. Optimizing any-aged management of mixed-species stands: II. effects of decision criteria

    Treesearch

    Robert G. Haight; Robert A. Monserud

    1990-01-01

    The effects of maximum present value and maximum volume objectives on the efficiencies of alternative silvicultural systems are determined by solving any-aged management problems for mixed-conifer stands in the Northern Rocky Mountains. Any-aged management problems are formulated with periodic planting and harvesting controls and without constraints on the stand age or...

  4. DEM interpolation weight calculation modulus based on maximum entropy

    NASA Astrophysics Data System (ADS)

    Chen, Tian-wei; Yang, Xia

    2015-12-01

    There are negative weights in traditional gridded DEM interpolation. In this article, the principle of maximum entropy is used to analyze the model system that depends on the modulus of spatial weights. The negative-weight problem of DEM interpolation is investigated by building a maximum entropy model; by adding nonnegativity and first- and second-order moment constraints, the negative-weight problem is solved. The correctness and accuracy of the method were validated with a genetic algorithm in a MATLAB program. The method is compared with the Yang Chizhong interpolation method and quadratic programming. The comparison shows that the magnitude and scaling of the maximum-entropy weights fit the spatial relations, and that the accuracy is superior to the latter two methods.

  5. Centralities in simplicial complexes. Applications to protein interaction networks.

    PubMed

    Estrada, Ernesto; Ross, Grant J

    2018-02-07

    Complex networks can be used to represent complex systems which originate in the real world. Here we study a transformation of these complex networks into simplicial complexes, where cliques represent the simplices of the complex. We extend the concept of node centrality to that of simplicial centrality and study several mathematical properties of degree, closeness, betweenness, eigenvector, Katz, and subgraph centrality for simplicial complexes. We study the degree distributions of these centralities at the different levels. We also compare and describe the differences between the centralities at the different levels. Using these centralities we study a method for detecting essential proteins in PPI networks of cells and explain the varying abilities of the centrality measures at the different levels in identifying these essential proteins. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. An evaluation of exact methods for the multiple subset maximum cardinality selection problem.

    PubMed

    Brusco, Michael J; Köhn, Hans-Friedrich; Steinley, Douglas

    2016-05-01

    The maximum cardinality subset selection problem requires finding the largest possible subset from a set of objects, such that one or more conditions are satisfied. An important extension of this problem is to extract multiple subsets, where the addition of one more object to a larger subset would always be preferred to increases in the size of one or more smaller subsets. We refer to this as the multiple subset maximum cardinality selection problem (MSMCSP). A recently published branch-and-bound algorithm solves the MSMCSP as a partitioning problem. Unfortunately, the computational requirement associated with the algorithm is often enormous, thus rendering the method infeasible from a practical standpoint. In this paper, we present an alternative approach that successively solves a series of binary integer linear programs to obtain a globally optimal solution to the MSMCSP. Computational comparisons of the methods using published similarity data for 45 food items reveal that the proposed sequential method is computationally far more efficient than the branch-and-bound approach. © 2016 The British Psychological Society.

  7. Post optimization paradigm in maximum 3-satisfiability logic programming

    NASA Astrophysics Data System (ADS)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

    Maximum 3-Satisfiability (MAX-3SAT) is a counterpart of the Boolean satisfiability problem that can be treated as a constraint optimization problem. It deals with the problem of finding the maximum number of satisfied clauses in a particular 3-SAT formula. This paper presents the implementation of an enhanced Hopfield network to accelerate Maximum 3-Satisfiability (MAX-3SAT) logic programming. Four post-optimization techniques are investigated: the Elliot symmetric activation function, the Gaussian activation function, the Wavelet activation function and the hyperbolic tangent activation function. The performance of these post-optimization techniques in accelerating MAX-3SAT logic programming is discussed in terms of the ratio of maximum satisfied clauses, Hamming distance and computation time. Dev-C++ was used as the platform for training, testing and validating our proposed techniques. The results show that the hyperbolic tangent activation function and the Elliot symmetric activation function can be used for MAX-3SAT logic programming.
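
    For reference, the quantity being maximized can be computed exactly by brute force on tiny formulas (this is only the objective, not the Hopfield-network solver studied here):

        from itertools import product

        def max_3sat(clauses, n_vars):
            """Max number of satisfiable clauses; a literal +i / -i means x_i / not x_i."""
            best = 0
            for bits in product((False, True), repeat=n_vars):
                sat = sum(any(bits[abs(l) - 1] == (l > 0) for l in cl) for cl in clauses)
                best = max(best, sat)
            return best

        # (x1 or x2 or x3) and (not x1 or x2 or not x3) and (x1 or not x2 or x3)
        clauses = [(1, 2, 3), (-1, 2, -3), (1, -2, 3)]
        print(max_3sat(clauses, 3))  # -> 3: this toy formula is fully satisfiable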

  8. Integration of ethnic minorities during group-work for vocational teachers-in-training in health studies.

    PubMed

    Goth, Ursula Småland; Bergsli, Oddhild; Johanesen, Else Marie

    2017-01-28

    To determine how to enhance integration of minority students in health education, and thereby improve intercultural communication skills and cultural sensitivity in a sample of health teacher students in Norway. After a group-work intervention and for a period of six months afterwards we followed an "action research" approach and observed 47 health teachers-in-training in their first year at the Oslo and Akershus University College during classroom interactions. Data were qualitative and comprised student self-reports and survey results along with observations from three teachers, the authors of the study. Data were analyzed using a constant comparative approach with opinion categorization and an open coding procedure, with separate analyses performed on observations from minority students, majority students, and teachers. Both ethnic majority and minority students experienced an increase in intercultural knowledge and problem-solving ability after the experience of an early intervention in their first academic year of tertiary education. Students reacted favorably to the intervention and noted in class assessments both the challenges and rewards of overcoming cultural barriers. Teacher observation notes confirmed that early intervention led to an increase in interaction and cross-cultural engagement between minority and majority students compared to previous years' classes without the intervention. Early classroom intervention to promote intercultural engagement can prevent clique formation along majority/minority lines. The method used here, tailored group assignments in ethnically diverse working groups at the very beginning of students' tertiary academic career, can be an effective approach to cultivating attitudes and skills fostering intercultural awareness and sensitivity.

  9. Integration of ethnic minorities during group-work for vocational teachers-in-training in health studies

    PubMed Central

    Bergsli, Oddhild; Johanesen, Else Marie

    2017-01-01

    Objectives To determine how to enhance integration of minority students in health education, and thereby improve intercultural communication skills and cultural sensitivity in a sample of health teacher students in Norway. Methods After a group-work intervention and for a period of six months afterwards we followed an “action research” approach and observed 47 health teachers-in-training in their first year at the Oslo and Akershus University College during classroom interactions. Data were qualitative and comprised student self-reports and survey results along with observations from three teachers, the authors of the study. Data were analyzed using a constant comparative approach with opinion categorization and an open coding procedure, with separate analyses performed on observations from minority students, majority students, and teachers. Results Both ethnic majority and minority students experienced an increase in intercultural knowledge and problem-solving ability after the experience of an early intervention in their first academic year of tertiary education. Students reacted favorably to the intervention and noted in class assessments both the challenges and rewards of overcoming cultural barriers. Teacher observation notes confirmed that early intervention led to an increase in interaction and cross-cultural engagement between minority and majority students compared to previous years’ classes without the intervention. Conclusions Early classroom intervention to promote intercultural engagement can prevent clique formation along majority/minority lines. The method used here, tailored group assignments in ethnically diverse working groups at the very beginning of students’ tertiary academic career, can be an effective approach to cultivating attitudes and skills fostering intercultural awareness and sensitivity. PMID:28132033

  10. Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models

    PubMed Central

    Grün, Sonja; Helias, Moritz

    2017-01-01

    Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations, given only the time-averaged correlations of the neuron activities. This paper provides evidence that the pairwise model, applied to experimental recordings, would produce a bimodal distribution for the population-averaged activity, and for some population sizes the second mode would peak at high activities that, experimentally, would correspond to 90% of the neuron population being active within time windows of a few milliseconds. Several problems are connected with this bimodality: 1. The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds. 2. Boltzmann learning becomes non-ergodic, hence the pairwise maximum-entropy distribution cannot be found: in fact, Boltzmann learning would produce an incorrect distribution; similarly, common variants of mean-field approximations also produce an incorrect distribution. 3. The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data. This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of a macaque monkey. Evidence is then provided that this problem affects typical neural recordings of population sizes of a couple of hundred or more neurons. The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition. To eliminate this problem a modified maximum-entropy model is presented, which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure. This model does not lead to unrealistic bimodalities, can be found with Boltzmann learning, and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition. PMID:28968396
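
    The bimodality can be made exact in the homogeneous special case, where the pairwise maximum-entropy (Ising-like) distribution depends on the binary state only through the population count k; the bias and coupling below are hypothetical values chosen to produce two modes:

        import numpy as np
        from scipy.special import gammaln

        def population_activity_dist(N, h, J):
            """P(s) ~ exp(h*sum(s) + J*sum_{i<j} s_i*s_j) for s_i in {0, 1} gives
            P(k) ~ C(N, k) * exp(h*k + J*k*(k-1)/2) for the population count k."""
            k = np.arange(N + 1)
            log_binom = gammaln(N + 1) - gammaln(k + 1) - gammaln(N - k + 1)
            logp = log_binom + h * k + J * k * (k - 1) / 2
            logp -= logp.max()  # numerical stabilization before exponentiating
            p = np.exp(logp)
            return p / p.sum()

        p = population_activity_dist(N=100, h=-3.0, J=0.06)
        modes = [k for k in range(1, 100) if p[k] > p[k - 1] and p[k] > p[k + 1]]
        print(modes)  # two interior modes: one at low activity, one near 90% active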

  11. Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models.

    PubMed

    Rostami, Vahid; Porta Mana, PierGianLuca; Grün, Sonja; Helias, Moritz

    2017-10-01

    Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations, given only the time-averaged correlations of the neuron activities. This paper provides evidence that the pairwise model, applied to experimental recordings, would produce a bimodal distribution for the population-averaged activity, and for some population sizes the second mode would peak at high activities that, experimentally, would correspond to 90% of the neuron population being active within time windows of a few milliseconds. Several problems are connected with this bimodality: 1. The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds. 2. Boltzmann learning becomes non-ergodic, hence the pairwise maximum-entropy distribution cannot be found: in fact, Boltzmann learning would produce an incorrect distribution; similarly, common variants of mean-field approximations also produce an incorrect distribution. 3. The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data. This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of a macaque monkey. Evidence is then provided that this problem affects typical neural recordings of population sizes of a couple of hundred or more neurons. The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition. To eliminate this problem a modified maximum-entropy model is presented, which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure. This model does not lead to unrealistic bimodalities, can be found with Boltzmann learning, and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition.

  12. Stability in the drinking habits of older problem-drinkers recruited from nontreatment settings.

    PubMed

    Walton, M A; Mudd, S A; Blow, F C; Chermack, S T; Gomberg, E S

    2000-03-01

    Few prospective studies have examined older problem-drinkers not currently in treatment to determine the stability of alcohol problems over time. Seventy-eight currently drinking older adults meeting a diagnosis of alcohol abuse or dependence were recruited via advertising to complete a health interview; 48 were reinterviewed approximately 3 years later. Participants were categorized based on alcohol consumption (risk) and alcohol-related diagnostic symptoms (problem) at baseline and follow-up. At follow-up, few older adults (11.4%) were resolved using both risk and problem criteria. Alcohol risk/problem groups were not significantly stable between baseline and follow-up. Health problems were the most common reason for changing drinking habits. Average and maximum consumption at baseline and follow-up were significant markers of follow-up risk group and follow-up alcohol-related consequences, respectively, with maximum consumption being more robust. The course of alcohol problems among older adults fluctuates over time, and heavy drinking appears to be the best indicator of problem continuation.

  13. Real-Time System Verification by Kappa-Induction

    NASA Technical Reports Server (NTRS)

    Pike, Lee S.

    2005-01-01

    We report the first formal verification of a reintegration protocol for a safety-critical, fault-tolerant, real-time distributed embedded system. A reintegration protocol increases system survivability by allowing a node that has suffered a fault to regain state consistent with the operational nodes. The protocol is verified in the Symbolic Analysis Laboratory (SAL), where bounded model checking and decision procedures are used to verify infinite-state systems by k-induction. The protocol and its environment are modeled as synchronizing timeout automata. Because k-induction is exponential with respect to k, we optimize the formal model to reduce the size of k. Also, the reintegrator's event-triggered behavior is conservatively modeled as time-triggered behavior to further reduce the size of k and to make it invariant to the number of nodes modeled. A corollary is that a clique avoidance property is satisfied.

  14. Emergence of Leadership in Communication

    PubMed Central

    Allahverdyan, Armen E.; Galstyan, Aram

    2016-01-01

    We study a neuro-inspired model that mimics a discussion (or information dissemination) process in a network of agents. During their interaction, agents redistribute activity and network weights, resulting in emergence of leader(s). The model is able to reproduce the basic scenarios of leadership known in nature and society: laissez-faire (irregular activity, weak leadership, sizable inter-follower interaction, autonomous sub-leaders); participative or democratic (strong leadership, but with feedback from followers); and autocratic (no feedback, one-way influence). Several pertinent aspects of these scenarios are found as well—e.g., hidden leadership (a hidden clique of agents driving the official autocratic leader), and successive leadership (two leaders influence followers by turns). We study how these scenarios emerge from inter-agent dynamics and how they depend on behavior rules of agents—in particular, on their inertia against state changes. PMID:27532484

  15. Emergence of Leadership in Communication.

    PubMed

    Allahverdyan, Armen E; Galstyan, Aram

    2016-01-01

    We study a neuro-inspired model that mimics a discussion (or information dissemination) process in a network of agents. During their interaction, agents redistribute activity and network weights, resulting in emergence of leader(s). The model is able to reproduce the basic scenarios of leadership known in nature and society: laissez-faire (irregular activity, weak leadership, sizable inter-follower interaction, autonomous sub-leaders); participative or democratic (strong leadership, but with feedback from followers); and autocratic (no feedback, one-way influence). Several pertinent aspects of these scenarios are found as well, e.g., hidden leadership (a hidden clique of agents driving the official autocratic leader), and successive leadership (two leaders influence followers by turns). We study how these scenarios emerge from inter-agent dynamics and how they depend on behavior rules of agents, in particular, on their inertia against state changes.

  16. Influences of adding negative couplings between cliques of Kuramoto-like oscillators

    NASA Astrophysics Data System (ADS)

    Yang, Li-xin; Lin, Xiao-lin; Jiang, Jun

    2018-06-01

    We study the dynamics in a clustered network of coupled oscillators by considering positive and negative coupling schemes. Second-order oscillators can be interpreted as a model of consumers and generators working in a power network. Numerical results indicate that coupling strategies play an important role in the synchronizability of the clustered power network. It is found that the synchronizability can be enhanced as the positive intragroup connections increase. Meanwhile, when the intragroup interactions are positive and the probability p that two nodes belonging to different clusters are connected is increased, the synchronization performance improves. Besides, when the intragroup connections are negative, the power network is observed to have poor synchronizability as the probability p increases. Our simulation results can help us understand the collective behavior of the power network with positive and negative couplings.
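
    A rough sketch of such a setup (simple Euler integration; sizes, couplings and frequencies are arbitrary, not the paper's exact model): two all-to-all cliques of second-order Kuramoto oscillators joined by a few links whose sign can be flipped:

        import numpy as np

        def order_parameter(K_intra, K_inter, N=10, T=50.0, dt=0.01, seed=0):
            """Integrate theta'' = omega - d*theta' + sum_j A_ij sin(theta_j - theta_i)."""
            rng = np.random.default_rng(seed)
            n = 2 * N
            A = np.zeros((n, n))
            A[:N, :N] = A[N:, N:] = K_intra      # two all-to-all cliques
            for i in range(3):                   # a few inter-clique links
                A[i, N + i] = A[N + i, i] = K_inter
            np.fill_diagonal(A, 0.0)
            theta = rng.uniform(0, 2 * np.pi, n)
            omega = rng.normal(0, 0.5, n)        # natural frequencies (generators/consumers)
            vel = np.zeros(n)
            for _ in range(int(T / dt)):
                coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
                vel += dt * (omega - 0.5 * vel + coupling)
                theta += dt * vel
            return abs(np.exp(1j * theta).mean())  # r in [0, 1]; higher = more synchronous

        print(order_parameter(K_intra=2.0, K_inter=1.0))   # positive inter-clique links
        print(order_parameter(K_intra=2.0, K_inter=-1.0))  # negative inter-clique links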

  17. Better Decomposition Heuristics for the Maximum-Weight Connected Graph Problem Using Betweenness Centrality

    NASA Astrophysics Data System (ADS)

    Yamamoto, Takanori; Bannai, Hideo; Nagasaki, Masao; Miyano, Satoru

    We present new decomposition heuristics for finding the optimal solution for the maximum-weight connected graph problem, which is known to be NP-hard. Previous optimal algorithms for solving the problem decompose the input graph into subgraphs using heuristics based on node degree. We propose new heuristics based on betweenness centrality measures, and show through computational experiments that our new heuristics tend to reduce the number of subgraphs in the decomposition, and therefore could lead to the reduction in computational time for finding the optimal solution. The method is further applied to analysis of biological pathway data.
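
    The flavor of the idea can be sketched with networkx (an illustration of betweenness-guided decomposition, not the authors' algorithm): repeatedly remove the highest-betweenness node until every component is small enough for the exact solver:

        import networkx as nx

        def decompose_by_betweenness(G, max_size=20):
            """Split G into components of size <= max_size by deleting high-betweenness nodes."""
            H = G.copy()
            removed = []
            while max((len(c) for c in nx.connected_components(H)), default=0) > max_size:
                bc = nx.betweenness_centrality(H)
                v = max(bc, key=bc.get)
                removed.append(v)
                H.remove_node(v)
            return [H.subgraph(c).copy() for c in nx.connected_components(H)], removed

        G = nx.barbell_graph(15, 2)  # two 15-cliques joined by a 2-node path
        parts, cut = decompose_by_betweenness(G, max_size=16)
        print([len(p) for p in parts], cut)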

  18. Text Summarization Model based on Maximum Coverage Problem and its Variant

    NASA Astrophysics Data System (ADS)

    Takamura, Hiroya; Okumura, Manabu

    We discuss text summarization in terms of the maximum coverage problem and its variant. To solve the optimization problem, we applied several decoding algorithms, including some never before used in this summarization formulation, such as a greedy algorithm with a performance guarantee, a randomized algorithm, and a branch-and-bound method. We conduct comparative experiments. On the basis of the experimental results, we also augment the summarization model so that it takes into account the relevance to the document cluster. Through experiments, we show that the augmented model is at least comparable to the best-performing method of DUC'04.
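
    A minimal sketch of the greedy decoding algorithm with the (1 - 1/e) performance guarantee, treating sentences as word sets (the actual model scores weighted conceptual units rather than raw words):

        def greedy_summary(sentences, k):
            """Pick k sentences that greedily maximize the number of distinct words covered."""
            sets = [set(s.lower().split()) for s in sentences]
            covered, chosen = set(), []
            for _ in range(k):
                gains = [len(s - covered) for s in sets]
                best = max(range(len(sentences)), key=lambda i: gains[i])
                if gains[best] == 0:
                    break  # nothing new left to cover
                chosen.append(best)
                covered |= sets[best]
            return [sentences[i] for i in chosen]

        docs = ["the cat sat on the mat",
                "the dog sat on the log",
                "cats and dogs play together"]
        print(greedy_summary(docs, k=2))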

  19. Achieving Crossed Strong Barrier Coverage in Wireless Sensor Network.

    PubMed

    Han, Ruisong; Yang, Wei; Zhang, Li

    2018-02-10

    Barrier coverage has been widely used to detect intrusions in wireless sensor networks (WSNs). It can fulfill the monitoring task while extending the lifetime of the network. Though barrier coverage in WSNs has been intensively studied in recent years, previous research failed to consider the problem of intrusion in transversal directions. If an intruder knows the deployment configuration of sensor nodes, then there is a high probability that it may traverse the whole target region from particular directions, without being detected. In this paper, we introduce the concept of crossed barrier coverage that can overcome this defect. We prove that the problem of finding the maximum number of crossed barriers is NP-hard and integer linear programming (ILP) is used to formulate the optimization problem. The branch-and-bound algorithm is adopted to determine the maximum number of crossed barriers. In addition, we also propose a multi-round shortest path algorithm (MSPA) to solve the optimization problem, which works heuristically to guarantee efficiency while maintaining near-optimal solutions. Several conventional algorithms for finding the maximum number of disjoint strong barriers are also modified to solve the crossed barrier problem and for the purpose of comparison. Extensive simulation studies demonstrate the effectiveness of MSPA.

  20. Optimal control and optimal trajectories of regional macroeconomic dynamics based on the Pontryagin maximum principle

    NASA Astrophysics Data System (ADS)

    Bulgakov, V. K.; Strigunov, V. V.

    2009-05-01

    The Pontryagin maximum principle is used to prove a theorem concerning optimal control in regional macroeconomics. A boundary value problem for optimal trajectories of the state and adjoint variables is formulated, and optimal curves are analyzed. An algorithm is proposed for solving the boundary value problem of optimal control. The performance of the algorithm is demonstrated by computing an optimal control and the corresponding optimal trajectories.

  1. The maximum entropy method of moments and Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
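
    A compact sketch of the method on a bounded grid: the maximum entropy density exp(sum_k lam_k x^k) matching given power moments is found by minimizing the convex dual log Z(lam) - lam.mu (the grid, moments and optimizer choice are illustrative):

        import numpy as np
        from scipy.optimize import minimize

        def maxent_density(moments, x):
            """Fit p(x) ~ exp(sum_k lam_k * x**k) so that its first len(moments)
            power moments match; solved through the convex dual problem."""
            K = len(moments)
            dx = x[1] - x[0]
            F = np.vstack([x**k for k in range(1, K + 1)])  # features, shape (K, len(x))
            mu = np.asarray(moments)

            def dual(lam):
                return np.log(np.exp(lam @ F).sum() * dx) - lam @ mu

            res = minimize(dual, np.zeros(K), method="Nelder-Mead")
            p = np.exp(res.x @ F)
            return p / (p.sum() * dx)

        # Toy target: match mean 0 and second moment 1; the result is close to a
        # (truncated) standard Gaussian on the grid.
        x = np.linspace(-5, 5, 2001)
        p = maxent_density([0.0, 1.0], x)
        dx = x[1] - x[0]
        print((x * p).sum() * dx, (x**2 * p).sum() * dx)  # ~0 and ~1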

  2. Tin Cans Revisited.

    ERIC Educational Resources Information Center

    Verderber, Nadine L.

    1992-01-01

    Presents the use of spreadsheets as an alternative method for precalculus students to solve maximum or minimum problems involving surface area and volume. Concludes that students with less technical backgrounds can solve problems normally requiring calculus and suggests sources for additional problems. (MDH)

  3. Flight trajectories with maximum tangential thrust in a central Newtonian field

    NASA Astrophysics Data System (ADS)

    Azizov, A. G.; Korshunova, N. A.

    1983-07-01

    The paper examines the two-dimensional problem of determining the optimal trajectories of a point moving with limited per-second mass consumption in a central Newtonian field. It is shown that one of the cases in which the variational equations in the Mayer formulation can be integrated in quadratures is motion with maximum tangential thrust. Trajectories corresponding to this motion are determined. By way of application, attention is given to the problem of determining the thrust which assures maximum kinetic energy for the point at the moment t = t1, corresponding to the mass consumption M0 − M1, where M0 and M1 are, respectively, the initial and final masses.

  4. A stochastic maximum principle for backward control systems with random default time

    NASA Astrophysics Data System (ADS)

    Shen, Yang; Kuen Siu, Tak

    2013-05-01

    This paper establishes a necessary and sufficient stochastic maximum principle for backward systems, where the state processes are governed by jump-diffusion backward stochastic differential equations with random default time. An application of the sufficient stochastic maximum principle to an optimal investment and capital injection problem in the presence of default risk is discussed.

  5. Canonical Statistical Model for Maximum Expected Immission of Wire Conductor in an Aperture Enclosure

    NASA Technical Reports Server (NTRS)

    Bremner, Paul G.; Vazquez, Gabriel; Christiano, Daniel J.; Trout, Dawn H.

    2016-01-01

    Prediction of the maximum expected electromagnetic pick-up of conductors inside a realistic shielding enclosure is an important canonical problem for system-level EMC design of spacecraft, launch vehicles, aircraft and automobiles. This paper introduces a simple statistical power balance model for prediction of the maximum expected current in a wire conductor inside an aperture enclosure. It calculates both the statistical mean and variance of the immission from the physical design parameters of the problem. Familiar probability density functions can then be used to predict the maximum expected immission for design purposes. The statistical power balance model requires minimal EMC design information and solves orders of magnitude faster than existing numerical models, making it ultimately viable for scaled-up, full system-level modeling. Both experimental test results and full-wave simulation results are used to validate the foundational model.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khosla, D.; Singh, M.

    The estimation of three-dimensional dipole current sources on the cortical surface from the measured magnetoencephalogram (MEG) is a highly underdetermined inverse problem, as there are many "feasible" images which are consistent with the MEG data. Previous approaches to this problem have concentrated on the use of weighted minimum norm inverse methods. While these methods ensure a unique solution, they often produce overly smoothed solutions and exhibit severe sensitivity to noise. In this paper we explore the maximum entropy approach to obtain better solutions to the problem. This estimation technique selects the image, from the possible set of feasible images, which has the maximum entropy permitted by the information available to us. In order to account for the presence of noise in the data, we have also incorporated a noise rejection or likelihood term into our maximum entropy method. This makes our approach mirror a Bayesian maximum a posteriori (MAP) formulation. Additional information from other functional techniques like functional magnetic resonance imaging (fMRI) can be incorporated in the proposed method in the form of a prior bias function to improve solutions. We demonstrate the method with experimental phantom data from a clinical 122-channel MEG system.

  7. Directed network modules

    NASA Astrophysics Data System (ADS)

    Palla, Gergely; Farkas, Illés J.; Pollner, Péter; Derényi, Imre; Vicsek, Tamás

    2007-06-01

    A search technique locating network modules, i.e. internally densely connected groups of nodes in directed networks, is introduced by extending the clique percolation method originally proposed for undirected networks. After giving a suitable definition for directed modules we investigate their percolation transition in the Erdős-Rényi graph both analytically and numerically. We also analyse four real-world directed networks, including Google's own web-pages, an email network, a word association graph and the transcriptional regulatory network of the yeast Saccharomyces cerevisiae. The obtained directed modules are validated by additional information available for the nodes. We find that directed modules of real-world graphs inherently overlap and the investigated networks can be classified into two major groups in terms of the overlaps between the modules. Accordingly, in the word-association network and Google's web-pages, overlaps are likely to contain in-hubs, whereas the modules in the email and transcriptional regulatory network tend to overlap via out-hubs.
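
    The directed extension above builds on the undirected clique percolation method; a minimal sketch of that undirected case, using networkx's built-in k_clique_communities on an invented toy graph, shows the core idea of modules as unions of adjacent k-cliques.

```python
# Minimal sketch of undirected clique percolation (the method the record's
# paper extends): k-clique communities are unions of k-cliques sharing k-1 nodes.
import networkx as nx
from networkx.algorithms.community import k_clique_communities

G = nx.Graph([(1, 2), (1, 3), (2, 3),    # triangle A
              (2, 4), (3, 4),            # triangle B, sharing an edge with A
              (5, 6), (5, 7), (6, 7)])   # an isolated triangle
for community in k_clique_communities(G, 3):
    print(sorted(community))             # e.g. [1, 2, 3, 4] and [5, 6, 7]
```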

  8. How Relations are Built within a SNS World -- Social Network Analysis on Mixi --

    NASA Astrophysics Data System (ADS)

    Matsuo, Yutaka; Yasuda, Yuki

    Our purpose here is to (1) investigate the structure of the personal networks developed on mixi, a Japanese social networking service (SNS), and (2) consider the governing mechanism which guides participants of an SNS to form an aggregate network. Our findings are as follows: the clustering coefficient of the network is as high as 0.33, while the characteristic path length is as low as 5.5. A network among central users (over 300 edges) consists of two cliques, which seems to be very fragile. The community-affiliation network suggests there are several easy-entry communities which later lead users to higher-entry, unique-theme communities. The analysis of connectedness within a community reveals the importance of real-world interaction. Lastly, we depict a probable image of the entire ecology on mixi among users and communities, which contributes broadly to social systems on the Web.

  9. Mapping Engagement in Twitter-Based Support Networks for Adult Smoking Cessation

    PubMed Central

    Pechmann, Cornelia; Wang, Cheng; Pan, Li; Delucchi, Kevin; Prochaska, Judith J.

    2016-01-01

    We examined engagement in novel quit-smoking private social support networks on Twitter, January 2012 to April 2014. We mapped communication patterns within 8 networks of adult smokers (n = 160) with network ties defined by participants’ tweets over 3 time intervals, and examined tie reciprocity, tie strength, in-degree centrality (popularity), 3-person triangles, 4-person cliques, network density, and abstinence status. On average, more than 50% of ties were reciprocated in most networks and most ties were between abstainers and nonabstainers. Tweets formed into more aggregated patterns especially early in the study. Across networks, 35.00% (7 days after the quit date), 49.38% (30 days), and 46.88% (60 days) abstained from smoking. We demonstrated that abstainers and nonabstainers engaged with one another in dyads and small groups. This study preliminarily suggests potential for Twitter as a platform for adult smoking-cessation interventions. PMID:27310342

  10. Inner-City Youth Development Organizations: Strengthening Programs for Adolescent Girls

    PubMed Central

    Hirsch, Barton J.; Roffman, Jennifer G.; Deutsch, Nancy L.; Flynn, Cathy A.; Loder, Tondra L.; Pagano, Maria E.

    2012-01-01

    The challenges of early adolescence are intensified for girls of color who live in disadvantaged urban communities. One response to the needs of these girls comes from the Boys & Girls Clubs of America (BGCA), a youth development organization that has a long-standing presence in inner-city neighborhoods. A gender equity initiative designed to strengthen programming for minority girls at a BGCA affiliate in a major urban center was examined. Drawing on initial qualitative findings, a conceptual framework is presented for understanding the ways in which the clubs can affect urban early adolescent girls’ self-esteem. Several strategic choices confronting this initiative then are considered. The authors emphasize the creation of a “home place” that enables the development of self via organizational responsiveness to girls’ voices, strong bonds between girls and staff, adaptive peer friendship cliques, and the development of programs that fuse the interests of girls and adult staff. PMID:23565020

  11. The network of concepts in written texts

    NASA Astrophysics Data System (ADS)

    Caldeira, S. M. G.; Petit Lobão, T. C.; Andrade, R. F. S.; Neme, A.; Miranda, J. G. V.

    2006-02-01

    Complex network theory is used to investigate the structure of meaningful concepts in written texts of individual authors. Networks have been constructed after a two phase filtering, where words with less meaning contents are eliminated and all remaining words are set to their canonical form, without any number, gender or time flexion. Each sentence in the text is added to the network as a clique. A large number of written texts have been scrutinised, and it is found that texts have small-world as well as scale-free structures. The growth process of these networks has also been investigated, and a universal evolution of network quantifiers have been found among the set of texts written by distinct authors. Further analyses, based on shuffling procedures taken either on the texts or on the constructed networks, provide hints on the role played by the word frequency and sentence length distributions to the network structure.
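
    A minimal sketch of the construction described above: after the two-phase filtering and canonicalisation (replaced here by simple lowercasing), each sentence contributes a clique on its words to the concept network. The sentences are invented and networkx is used for the graph.

```python
# Sketch: build a concept network where every sentence is added as a clique.
import itertools
import networkx as nx

sentences = ["Complex network theory studies structure",
             "Network structure shapes text",
             "Text reveals author structure"]

G = nx.Graph()
for s in sentences:
    words = set(s.lower().split())                      # stand-in for filtering
    G.add_edges_from(itertools.combinations(words, 2))  # the sentence's clique

print(G.number_of_nodes(), G.number_of_edges())
print("average clustering:", nx.average_clustering(G))
```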

  12. Mapping Engagement in Twitter-Based Support Networks for Adult Smoking Cessation.

    PubMed

    Lakon, Cynthia M; Pechmann, Cornelia; Wang, Cheng; Pan, Li; Delucchi, Kevin; Prochaska, Judith J

    2016-08-01

    We examined engagement in novel quit-smoking private social support networks on Twitter, January 2012 to April 2014. We mapped communication patterns within 8 networks of adult smokers (n = 160) with network ties defined by participants' tweets over 3 time intervals, and examined tie reciprocity, tie strength, in-degree centrality (popularity), 3-person triangles, 4-person cliques, network density, and abstinence status. On average, more than 50% of ties were reciprocated in most networks and most ties were between abstainers and nonabstainers. Tweets formed into more aggregated patterns especially early in the study. Across networks, 35.00% (7 days after the quit date), 49.38% (30 days), and 46.88% (60 days) abstained from smoking. We demonstrated that abstainers and nonabstainers engaged with one another in dyads and small groups. This study preliminarily suggests potential for Twitter as a platform for adult smoking-cessation interventions.
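
    The engagement measures listed in this record (tie reciprocity, density, in-degree centrality, small cliques) are standard network statistics; the hedged sketch below computes them with networkx on an invented directed tie network, not the study's data.

```python
# Hedged sketch of the engagement metrics named above, on toy data.
import networkx as nx

D = nx.DiGraph()
for u, v in [("a", "b"), ("a", "c"), ("a", "d"),
             ("b", "c"), ("b", "d"), ("c", "d")]:
    D.add_edge(u, v)
    D.add_edge(v, u)          # mutual ties among a, b, c, d
D.add_edge("a", "e")          # one unreciprocated tie

print("reciprocity:", nx.reciprocity(D))
print("density:", nx.density(D))
print("in-degree centrality:", nx.in_degree_centrality(D))

U = D.to_undirected(reciprocal=True)   # keep only reciprocated ties
print("4-cliques:", [c for c in nx.enumerate_all_cliques(U) if len(c) == 4])
```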

  13. The Thinnest Path Problem

    DTIC Science & Technology

    2016-07-22

    their corresponding transmission powers. At first glance, one may wonder whether the thinnest path problem is simply a shortest path problem with the...nature of the shortest path problem. Another aspect that complicates the problem is the choice of the transmission power at each node (within a maximum...fixed transmission power at each node (in this case, the resulting hypergraph degenerates to a standard graph), the thinnest path problem is NP

  14. Optimal Control for Stochastic Delay Evolution Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Qingxin, E-mail: mqx@hutc.zj.cn; Shen, Yang, E-mail: skyshen87@gmail.com

    2016-08-15

    In this paper, we investigate a class of infinite-dimensional optimal control problems, where the state equation is given by a stochastic delay evolution equation with random coefficients, and the corresponding adjoint equation is given by an anticipated backward stochastic evolution equation. We first prove the continuous dependence theorems for stochastic delay evolution equations and anticipated backward stochastic evolution equations, and show the existence and uniqueness of solutions to anticipated backward stochastic evolution equations. Then we establish necessary and sufficient conditions for optimality of the control problem in the form of Pontryagin’s maximum principles. To illustrate the theoretical results, we apply stochastic maximum principles to study two examples, an infinite-dimensional linear-quadratic control problem with delay and an optimal control of a Dirichlet problem for a stochastic partial differential equation with delay. Further applications of the two examples to a Cauchy problem for a controlled linear stochastic partial differential equation and an optimal harvesting problem are also considered.

  15. A Direct Mapping of Max k-SAT and High Order Parity Checks to a Chimera Graph

    PubMed Central

    Chancellor, N.; Zohren, S.; Warburton, P. A.; Benjamin, S. C.; Roberts, S.

    2016-01-01

    We demonstrate a direct mapping of max k-SAT problems (and weighted max k-SAT) to a Chimera graph, which is the non-planar hardware graph of the devices built by D-Wave Systems Inc. We further show that this mapping can be used to map a similar class of maximum satisfiability problems where the clauses are replaced by parity checks over potentially large numbers of bits. The latter is of specific interest for applications in decoding for communication. We discuss an example in which the decoding of a turbo code, which has been demonstrated to perform near the Shannon limit, can be mapped to a Chimera graph. The weighted max k-SAT problem is the most general class of satisfiability problems, so our result effectively demonstrates how any satisfiability problem may be directly mapped to a Chimera graph. Our methods faithfully reproduce the low energy spectrum of the target problems, and may therefore also be used for maximum entropy inference. PMID:27857179
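
    For orientation, the sketch below shows the generic clause-penalty route from max 2-SAT to a QUBO — the textbook construction, not the paper's specific Chimera embedding. The QUBO energy counts violated clauses, so its minimum is a max 2-SAT optimum; the tiny instance is invented.

```python
# Hedged sketch: standard max 2-SAT -> QUBO clause-penalty construction.
import itertools
from collections import defaultdict

def max2sat_to_qubo(clauses):
    """clauses: pairs of nonzero ints, DIMACS style (-2 means NOT x2)."""
    Q, offset = defaultdict(float), 0.0
    for a, b in clauses:
        # A clause is violated iff both literals are false. Write each "false"
        # indicator as c + s*x (positive literal: 1 - x; negated: x) and expand.
        (ca, sa), (cb, sb) = [(1, -1) if l > 0 else (0, 1) for l in (a, b)]
        i, j = abs(a), abs(b)
        offset += ca * cb
        Q[(i, i)] += sa * cb
        Q[(j, j)] += sb * ca
        Q[tuple(sorted((i, j)))] += sa * sb
    return Q, offset

def energy(Q, offset, x):                      # x: dict var -> 0/1
    return offset + sum(w * x[i] * x[j] for (i, j), w in Q.items())

clauses = [(1, 2), (-1, 2), (1, -2)]           # satisfiable by x1 = x2 = 1
Q, off = max2sat_to_qubo(clauses)
best = min(itertools.product([0, 1], repeat=2),
           key=lambda b: energy(Q, off, {1: b[0], 2: b[1]}))
print(best)                                     # (1, 1): zero violated clauses
```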

  16. Maximum-Entropy Inference with a Programmable Annealer

    PubMed Central

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-01-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise, then finding the ground state maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition. PMID:26936311

  17. User's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1980-01-01

    A user's manual for the FORTRAN IV computer program MMLE3 is presented. MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The theory and use of the program are described. The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user-written, problem-specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program.
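
    In the same spirit (though not MMLE3's actual algorithm), the hedged sketch below fits one parameter of a scalar linear dynamic system from noisy measurements by minimizing a Gaussian negative log-likelihood; the system, noise level, and data are all invented.

```python
# Hedged sketch of maximum likelihood parameter estimation for a toy dynamic
# system x[k+1] = a*x[k] observed through y[k] = x[k] + measurement noise.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
a_true, sigma, n = 0.9, 0.05, 200
x = np.empty(n); x[0] = 1.0
for k in range(n - 1):
    x[k + 1] = a_true * x[k]
y = x + rng.normal(0.0, sigma, n)

def neg_log_likelihood(a):
    x_hat = y[0] * a ** np.arange(n)        # model-predicted state trajectory
    r = y - x_hat                           # output residuals
    return 0.5 * np.sum(r**2) / sigma**2    # Gaussian NLL up to a constant

res = minimize_scalar(neg_log_likelihood, bounds=(0.0, 1.0), method="bounded")
print(res.x)                                # close to a_true = 0.9
```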

  18. An approach to optimal semi-active control of vibration energy harvesting based on MEMS

    NASA Astrophysics Data System (ADS)

    Rojas, Rafael A.; Carcaterra, Antonio

    2018-07-01

    In this paper the energy harvesting problem involving typical MEMS technology is reduced to an optimal control problem, where the objective function is the absorption of the maximum amount of energy in a given time interval from a vibrating environment. The interest here is to identify a physical upper bound for this energy storage. The mathematical tool is a relatively new optimal control technique, Krotov's method, which has not yet been applied to engineering problems except in quantum dynamics. This approach leads to the identification of new maximum bounds on energy harvesting performance. Novel MEMS-based device control configurations for vibration energy harvesting are proposed, with particular emphasis on piezoelectric, electromagnetic and capacitive circuits.

  19. A discriminative model-constrained EM approach to 3D MRI brain tissue classification and intensity non-uniformity correction

    NASA Astrophysics Data System (ADS)

    Wels, Michael; Zheng, Yefeng; Huber, Martin; Hornegger, Joachim; Comaniciu, Dorin

    2011-06-01

    We describe a fully automated method for tissue classification, which is the segmentation into cerebral gray matter (GM), cerebral white matter (WM), and cerebral spinal fluid (CSF), and intensity non-uniformity (INU) correction in brain magnetic resonance imaging (MRI) volumes. It combines supervised MRI modality-specific discriminative modeling and unsupervised statistical expectation maximization (EM) segmentation into an integrated Bayesian framework. While both the parametric observation models and the non-parametrically modeled INUs are estimated via EM during segmentation itself, a Markov random field (MRF) prior model regularizes segmentation and parameter estimation. Firstly, the regularization takes into account knowledge about spatial and appearance-related homogeneity of segments in terms of pairwise clique potentials of adjacent voxels. Secondly and more importantly, patient-specific knowledge about the global spatial distribution of brain tissue is incorporated into the segmentation process via unary clique potentials. They are based on a strong discriminative model provided by a probabilistic boosting tree (PBT) for classifying image voxels. It relies on the surrounding context and alignment-based features derived from a probabilistic anatomical atlas. The context considered is encoded by 3D Haar-like features of reduced INU sensitivity. Alignment is carried out fully automatically by means of an affine registration algorithm minimizing cross-correlation. Both types of features do not immediately use the observed intensities provided by the MRI modality but instead rely on specifically transformed features, which are less sensitive to MRI artifacts. Detailed quantitative evaluations on standard phantom scans and standard real-world data show the accuracy and robustness of the proposed method. They also demonstrate relative superiority in comparison to other state-of-the-art approaches to this kind of computational task: our method achieves average Dice coefficients of 0.93 ± 0.03 (WM) and 0.90 ± 0.05 (GM) on simulated mono-spectral and 0.94 ± 0.02 (WM) and 0.92 ± 0.04 (GM) on simulated multi-spectral data from the BrainWeb repository. The scores are 0.81 ± 0.09 (WM) and 0.82 ± 0.06 (GM) and 0.87 ± 0.05 (WM) and 0.83 ± 0.12 (GM) for the two collections of real-world data sets—consisting of 20 and 18 volumes, respectively—provided by the Internet Brain Segmentation Repository.

  20. A discriminative model-constrained EM approach to 3D MRI brain tissue classification and intensity non-uniformity correction.

    PubMed

    Wels, Michael; Zheng, Yefeng; Huber, Martin; Hornegger, Joachim; Comaniciu, Dorin

    2011-06-07

    We describe a fully automated method for tissue classification, which is the segmentation into cerebral gray matter (GM), cerebral white matter (WM), and cerebral spinal fluid (CSF), and intensity non-uniformity (INU) correction in brain magnetic resonance imaging (MRI) volumes. It combines supervised MRI modality-specific discriminative modeling and unsupervised statistical expectation maximization (EM) segmentation into an integrated Bayesian framework. While both the parametric observation models and the non-parametrically modeled INUs are estimated via EM during segmentation itself, a Markov random field (MRF) prior model regularizes segmentation and parameter estimation. Firstly, the regularization takes into account knowledge about spatial and appearance-related homogeneity of segments in terms of pairwise clique potentials of adjacent voxels. Secondly and more importantly, patient-specific knowledge about the global spatial distribution of brain tissue is incorporated into the segmentation process via unary clique potentials. They are based on a strong discriminative model provided by a probabilistic boosting tree (PBT) for classifying image voxels. It relies on the surrounding context and alignment-based features derived from a probabilistic anatomical atlas. The context considered is encoded by 3D Haar-like features of reduced INU sensitivity. Alignment is carried out fully automatically by means of an affine registration algorithm minimizing cross-correlation. Both types of features do not immediately use the observed intensities provided by the MRI modality but instead rely on specifically transformed features, which are less sensitive to MRI artifacts. Detailed quantitative evaluations on standard phantom scans and standard real-world data show the accuracy and robustness of the proposed method. They also demonstrate relative superiority in comparison to other state-of-the-art approaches to this kind of computational task: our method achieves average Dice coefficients of 0.93 ± 0.03 (WM) and 0.90 ± 0.05 (GM) on simulated mono-spectral and 0.94 ± 0.02 (WM) and 0.92 ± 0.04 (GM) on simulated multi-spectral data from the BrainWeb repository. The scores are 0.81 ± 0.09 (WM) and 0.82 ± 0.06 (GM) and 0.87 ± 0.05 (WM) and 0.83 ± 0.12 (GM) for the two collections of real-world data sets-consisting of 20 and 18 volumes, respectively-provided by the Internet Brain Segmentation Repository.

  1. E-Learning Technologies: Employing Matlab Web Server to Facilitate the Education of Mathematical Programming

    ERIC Educational Resources Information Center

    Karagiannis, P.; Markelis, I.; Paparrizos, K.; Samaras, N.; Sifaleras, A.

    2006-01-01

    This paper presents new web-based educational software (webNetPro) for "Linear Network Programming." It includes many algorithms for "Network Optimization" problems, such as shortest path problems, minimum spanning tree problems, maximum flow problems and other search algorithms. Therefore, webNetPro can assist the teaching process of courses such…

  2. Exploiting Non-sequence Data in Dynamic Model Learning

    DTIC Science & Technology

    2013-10-01

    For our experiments here and in Section 3.5, we implement the proposed algorithms in MATLAB and use the maximum directed spanning tree solver...embarrassingly parallelizable, whereas PM's maximum directed spanning tree procedure is harder to parallelize. In this experiment, our MATLAB...some estimation problems, this approach is able to give unique and consistent estimates while the maximum-likelihood method gets entangled in

  3. ATAC Autocuer Modeling Analysis.

    DTIC Science & Technology

    1981-01-01

    the analysis of the simple rectangular segmentation (1) is based on detection and estimation theory (2). This approach uses the concept of maximum ...continuous wave forms. In order to develop the principles of maximum likelihood, it is convenient to develop the principles for the "classical...the concept of maximum likelihood is significant in that it provides the optimum performance of the detection/estimation problem. With a knowledge of

  4. Guards, Galleries, Fortresses, and the Octoplex

    ERIC Educational Resources Information Center

    Michael, T. S.

    2011-01-01

    The art gallery problem asks for the maximum number of stationary guards required to protect the interior of a polygonal art gallery with "n" walls. This article explores solutions to this problem and several of its variants. In addition, some unsolved problems involving the guarding of geometric objects are presented.

  5. Necessary optimality conditions for infinite dimensional state constrained control problems

    NASA Astrophysics Data System (ADS)

    Frankowska, H.; Marchini, E. M.; Mazzola, M.

    2018-06-01

    This paper is concerned with first order necessary optimality conditions for state constrained control problems in separable Banach spaces. Assuming inward pointing conditions on the constraint, we give a simple proof of Pontryagin maximum principle, relying on infinite dimensional neighboring feasible trajectories theorems proved in [20]. Further, we provide sufficient conditions guaranteeing normality of the maximum principle. We work in the abstract semigroup setting, but nevertheless we apply our results to several concrete models involving controlled PDEs. Pointwise state constraints (as positivity of the solutions) are allowed.

  6. Maximum principle for a stochastic delayed system involving terminal state constraints.

    PubMed

    Wen, Jiaqiang; Shi, Yufeng

    2017-01-01

    We investigate a stochastic optimal control problem where the controlled system is depicted as a stochastic differential delayed equation; however, at the terminal time, the state is constrained in a convex set. We firstly introduce an equivalent backward delayed system depicted as a time-delayed backward stochastic differential equation. Then a stochastic maximum principle is obtained by virtue of Ekeland's variational principle. Finally, applications to a state constrained stochastic delayed linear-quadratic control model and a production-consumption choice problem are studied to illustrate the main obtained result.

  7. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bokanowski, Olivier, E-mail: boka@math.jussieu.fr; Picarelli, Athena, E-mail: athena.picarelli@inria.fr; Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr

    2015-02-15

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.

  8. Application of the maximal covering location problem to habitat reserve site selection: a review

    Treesearch

    Stephanie A. Snyder; Robert G. Haight

    2016-01-01

    The Maximal Covering Location Problem (MCLP) is a classic model from the location science literature which has found wide application. One important application is to a fundamental problem in conservation biology, the Maximum Covering Species Problem (MCSP), which identifies land parcels to protect to maximize the number of species represented in the selected sites. We...
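
    The MCSP is usually solved exactly as an integer program; as a lightweight stand-in, the hedged sketch below uses the classical greedy heuristic for maximum coverage (which carries a (1 - 1/e) approximation guarantee) on invented parcel/species data.

```python
# Hedged sketch: greedy maximum coverage as a stand-in for the exact MCSP.
def greedy_max_coverage(parcel_species, k):
    covered, chosen = set(), []
    for _ in range(k):
        # Pick the parcel adding the most not-yet-covered species.
        best = max(parcel_species, key=lambda p: len(parcel_species[p] - covered))
        chosen.append(best)
        covered |= parcel_species[best]
    return chosen, covered

parcels = {"A": {"owl", "lynx"}, "B": {"owl", "frog", "newt"},
           "C": {"lynx", "bear"}, "D": {"frog"}}
print(greedy_max_coverage(parcels, k=2))   # ['B', 'C'] covers all 5 species
```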

  9. Working on Extremum Problems with the Help of Dynamic Geometry Systems

    ERIC Educational Resources Information Center

    Gortcheva, Iordanka

    2013-01-01

    Two problems from high school mathematics on finding minimum or maximum are discussed. The focus is on students' approaches and difficulties in identifying a correct solution and how dynamic geometry systems can help.

  10. Three-player quantum Kolkata restaurant problem under decoherence

    NASA Astrophysics Data System (ADS)

    Ramzan, M.

    2013-01-01

    The effect of quantum decoherence in a three-player quantum Kolkata restaurant problem is investigated using tripartite entangled qutrit states. Different qutrit channels, such as the amplitude damping, depolarizing, phase damping, trit-phase flip and phase flip channels, are considered to analyze the behaviour of the players' payoffs. It is seen that Alice's payoff is more heavily influenced by the amplitude damping channel than by the depolarizing and flipping channels. However, for higher levels of decoherence, Alice's payoff is strongly affected by depolarizing noise, while the behaviour of the phase damping channel is symmetrical around 50% decoherence. It is also seen that for maximum decoherence (p = 1), the influence of the amplitude damping channel dominates over the depolarizing and flipping channels, whereas the phase damping channel has no effect on Alice's payoff. Therefore, the problem becomes noiseless at maximum decoherence in the case of the phase damping channel. Furthermore, the Nash equilibrium of the problem does not change under decoherence.

  11. Graphs and matroids weighted in a bounded incline algebra.

    PubMed

    Lu, Ling-Xia; Zhang, Bei

    2014-01-01

    Firstly, for a graph weighted in a bounded incline algebra (or called a dioid), a longest path problem (LPP, for short) is presented, which can be considered a unifying approach to the famous shortest path problem, the widest path problem, and the most reliable path problem. The solutions for LPP and related algorithms are given. Secondly, for a matroid weighted in a linear matroid, the maximum independent set problem is studied.
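
    To make the algebraic unification concrete, the sketch below instantiates the dioid as the max-min (widest path) algebra and runs a generalized Bellman-Ford relaxation; swapping the "add"/"multiply" operations recovers the shortest path (min, +) or most reliable path (max, ×) variants. The graph is invented.

```python
# Hedged sketch: path problems over a dioid, here the widest-path (max-min)
# algebra; a Bellman-Ford-style relaxation works because the algebra is bounded.
import math

def widest_paths(edges, source, n):
    width = [0.0] * n                 # the algebra's zero: no path yet
    width[source] = math.inf          # the algebra's unit at the source
    for _ in range(n - 1):            # Bellman-Ford-style rounds
        for u, v, cap in edges:
            # "multiply" along a path = min; "add" over paths = max
            width[v] = max(width[v], min(width[u], cap))
    return width

edges = [(0, 1, 5.0), (1, 2, 3.0), (0, 2, 2.0), (2, 3, 4.0), (1, 3, 1.0)]
print(widest_paths(edges, source=0, n=4))   # widest path 0 -> 3 has width 3.0
```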

  12. Regularized maximum pure-state input-output fidelity of a quantum channel

    NASA Astrophysics Data System (ADS)

    Ernst, Moritz F.; Klesse, Rochus

    2017-12-01

    As a toy model for the capacity problem in quantum information theory we investigate finite and asymptotic regularizations of the maximum pure-state input-output fidelity F(N) of a general quantum channel N. We show that the asymptotic regularization F̃(N) is lower bounded by the maximum output ∞-norm ν∞(N) of the channel. For N being a Pauli channel, we find that both quantities are equal.

  13. Optimal traffic resource allocation and management.

    DOT National Transportation Integrated Search

    2010-05-01

    "In this paper, we address the problem of determining the patrol routes of state troopers for maximum coverage of : highway spots with high frequencies of crashes (hot spots). We develop a mixed integer linear programming model : for this problem und...

  14. The jet engine design that can drastically reduce oxides of nitrogen

    NASA Technical Reports Server (NTRS)

    Ferri, A.; Agnone, A.

    1977-01-01

    The NOx pollution problem of hydrogen fueled turbojets and supersonic combustion ramjets (scramjets) was investigated to determine means of substantially alleviating the problem. Since the NOx reaction rates are much slower than the energy producing reactions, the NOx production depends mainly on the maximum local temperatures in the combustor, and the NOx concentration is far from equilibrium at the end of a typical combustor (L approximately 1 ft). In diffusion flames, as used in present turbojet and scramjet combustor designs, the maximum local temperature occurs at the flame and is equal to the stoichiometric value. In heat conduction flames, by contrast, wherein the flame propagates due to heat conduction away from the flame to the cooler oncoming premixed unburnt gases, the maximum temperature is lower than in the diffusion flame. Hence the corresponding pollution index is also lower.

  15. The iterative thermal emission method: A more implicit modification of IMC

    DOE PAGES

    Long, A. R.; Gentile, N. A.; Palmer, T. S.

    2014-08-19

    For over 40 years, the Implicit Monte Carlo (IMC) method has been used to solve challenging problems in thermal radiative transfer. These problems typically contain regions that are optically thick and diffusive, as a consequence of the high degree of “pseudo-scattering” introduced to model the absorption and reemission of photons from a tightly-coupled, radiating material. IMC has several well-known features that could be improved: a) it can be prohibitively computationally expensive, b) it introduces statistical noise into the material and radiation temperatures, which may be problematic in multiphysics simulations, and c) under certain conditions, solutions can be nonphysical, in that they violate a maximum principle, where IMC-calculated temperatures can be greater than the maximum temperature used to drive the problem.

  16. Parallel 3D-TLM algorithm for simulation of the Earth-ionosphere cavity

    NASA Astrophysics Data System (ADS)

    Toledo-Redondo, Sergio; Salinas, Alfonso; Morente-Molinera, Juan Antonio; Méndez, Antonio; Fornieles, Jesús; Portí, Jorge; Morente, Juan Antonio

    2013-03-01

    A parallel 3D algorithm for solving time-domain electromagnetic problems with arbitrary geometries is presented. The technique employed is the Transmission Line Modeling (TLM) method implemented in Shared Memory (SM) environments. The benchmarking performed reveals that the maximum speedup depends on the memory size of the problem as well as multiple hardware factors, like the disposition of CPUs, cache, or memory. A maximum speedup of 15 has been measured for the largest problem. In certain circumstances of low memory requirements, superlinear speedup is achieved using our algorithm. The method is employed to model the Earth-ionosphere cavity, thus enabling a study of the natural electromagnetic phenomena that occur in it. The algorithm allows complete 3D simulations of the cavity with a resolution of 10 km, within a reasonable timescale.

  17. Perspectives on Inmate Communication and Interpersonal Relations in the Maximum Security Prison.

    ERIC Educational Resources Information Center

    Van Voorhis, Patricia; Meussling, Vonne

    In recent years, scholarly and applied inquiry has addressed the importance of interpersonal communication patterns and problems in maximum security institutions for males. As a result of this research, the number of programs designed to improve the interpersonal effectiveness of prison inmates has increased dramatically. Research suggests that…

  18. Methods for utilizing maximum power from a solar array

    NASA Technical Reports Server (NTRS)

    Decker, D. K.

    1972-01-01

    A preliminary study of maximum power utilization methods was performed for an outer planet spacecraft using an ion thruster propulsion system and a solar array as the primary energy source. The problems which arise from operating the array at or near the maximum power point of its I-V characteristic are discussed. Two closed loop system configurations which use extremum regulators to track the array's maximum power point are presented. Three open loop systems are presented that either: (1) measure the maximum power of each array section and compute the total array power, (2) utilize a reference array to predict the characteristics of the solar array, or (3) utilize impedance measurements to predict the maximum power utilization. The advantages and disadvantages of each system are discussed and recommendations for further development are made.
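
    One classical extremum-regulator scheme of the closed-loop kind mentioned above is perturb-and-observe hill climbing; the hedged sketch below demonstrates it on an invented, crude I-V model (not any array model from the report).

```python
# Hedged sketch: perturb-and-observe tracking of a maximum power point.
def array_power(v):
    i = max(0.0, 5.0 - 0.05 * v**2)      # toy I-V characteristic (invented)
    return v * i

def perturb_and_observe(v=1.0, dv=0.1, steps=200):
    p = array_power(v)
    for _ in range(steps):
        v_new = v + dv
        p_new = array_power(v_new)
        if p_new < p:                    # power dropped: reverse direction
            dv = -dv
        v, p = v_new, p_new
    return v, p

print(perturb_and_observe())             # oscillates near the maximum power point
```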

  19. On Parallel Push-Relabel based Algorithms for Bipartite Maximum Matching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langguth, Johannes; Azad, Md Ariful; Halappanavar, Mahantesh

    2014-07-01

    We study multithreaded push-relabel based algorithms for computing maximum cardinality matching in bipartite graphs. Matching is a fundamental combinatorial (graph) problem with applications in a wide variety of problems in science and engineering. We are motivated by its use in the context of sparse linear solvers for computing maximum transversal of a matrix. We implement and test our algorithms on several multi-socket multicore systems and compare their performance to state-of-the-art augmenting path-based serial and parallel algorithms using a testset comprised of a wide range of real-world instances. Building on several heuristics for enhancing performance, we demonstrate good scaling for the parallel push-relabel algorithm. We show that it is comparable to the best augmenting path-based algorithms for bipartite matching. To the best of our knowledge, this is the first extensive study of multithreaded push-relabel based algorithms. In addition to a direct impact on the applications using matching, the proposed algorithmic techniques can be extended to preflow-push based algorithms for computing maximum flow in graphs.
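
    For comparison with the push-relabel approach studied above, the sketch below uses the augmenting-path (Hopcroft-Karp) matcher shipped with networkx — a different algorithm family, shown here only to make the problem concrete on an invented toy instance.

```python
# Hedged sketch: maximum cardinality bipartite matching via Hopcroft-Karp.
import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
rows, cols = ["r0", "r1", "r2"], ["c0", "c1", "c2"]
B.add_nodes_from(rows, bipartite=0)
B.add_nodes_from(cols, bipartite=1)
B.add_edges_from([("r0", "c0"), ("r0", "c1"), ("r1", "c1"), ("r2", "c2")])

matching = bipartite.maximum_matching(B, top_nodes=rows)
print({u: v for u, v in matching.items() if u in rows})
# e.g. {'r0': 'c0', 'r1': 'c1', 'r2': 'c2'} -- a maximum transversal of a matrix
```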

  20. Analysis of dispatching rules in a stochastic dynamic job shop manufacturing system with sequence-dependent setup times

    NASA Astrophysics Data System (ADS)

    Sharma, Pankaj; Jain, Ajai

    2014-12-01

    Stochastic dynamic job shop scheduling problems with sequence-dependent setup times are among the most difficult classes of scheduling problems. This paper assesses the performance of nine dispatching rules in such a shop from the viewpoint of the makespan, mean flow time, maximum flow time, mean tardiness, maximum tardiness, number of tardy jobs, total setups and mean setup time performance measures. A discrete event simulation model of a stochastic dynamic job shop manufacturing system is developed for investigation purposes. Nine dispatching rules identified from the literature are incorporated in the simulation model. The simulation experiments are conducted under a due date tightness factor of 3, a shop utilization percentage of 90% and setup times less than processing times. Results indicate that the shortest setup time (SIMSET) rule provides the best performance for the mean flow time and number of tardy jobs measures. The job with similar setup and modified earliest due date (JMEDD) rule provides the best performance for the makespan, maximum flow time, mean tardiness, maximum tardiness, total setups and mean setup time measures.

  1. Collision avoidance for aircraft in abort landing

    NASA Astrophysics Data System (ADS)

    Mathwig, Jarret

    We study the collision avoidance between two aircraft flying in the same vertical plane: a host aircraft on a glide path and an intruder aircraft on a horizontal trajectory below that of the host aircraft and heading in the opposite direction. Assuming that the intruder aircraft is uncooperative, the host aircraft executes an optimal abort landing maneuver: it applies maximum thrust setting and maximum angle of attack lifting the flight path over the original path, thereby increasing the timewise minimum distance between the two aircraft and, in this way, avoiding the potential collision. In the presence of weak constraints on the aircraft and/or the environment, the angle of attack must be brought to the maximum value and kept there until the maximin point is reached. On the other hand, in the presence of strong constraints on the aircraft and the environment, desaturation of the angle of attack might have to take place before the maximin point is reached. This thesis includes four parts. In the first part, after an introduction and review of the available literature, we reformulate and solve the one-subarc Chebyshev maximin problem as a two-subarc Bolza-Pontryagin problem in which the avoidance and the recovery maneuvers are treated simultaneously. In the second part, we develop a guidance scheme (gamma guidance) capable of approximating the optimal trajectory in real time. In the third part, we present the algorithms employed to solve the one-subarc and two-subarc problems. In the fourth part, we decompose the two-subarc Bolza-Pontryagin problem into two one-subarc problems: the avoidance problem and the recovery problem, to be solved in sequence; remarkably, for problems where the ratio of total maneuver time to avoidance time is sufficiently large (≥5), this simplified procedure predicts accurately the location of the maximin point as well as the maximin distance.

  2. Competition in a Social Structure

    NASA Astrophysics Data System (ADS)

    Legara, Erika Fille; Longjas, Anthony; Batac, Rene

    Complex adaptive agents develop strategies in the presence of competition. In modern human societies, there is an inherent sense of locality when describing inter-agent dynamics because of its network structure. One then wonders whether the traditional advertising schemes that are globally publicized and target random individuals are as effective in attracting a larger portion of the population as those that take advantage of local neighborhoods, such as "word-of-mouth" marketing schemes. Here, we demonstrate using a differential equation model that schemes targeting local cliques within the network are more successful at gaining a larger share of the population than those that target users randomly at a global scale (e.g., television commercials, print ads, etc.). This suggests that success in the competition is dependent not only on the number of individuals in the population but also on how they are connected in the network. We further show that the model is general in nature by considering examples of competition dynamics, particularly those of business competition and language death.

  3. Network-based study reveals potential infection pathways of hepatitis-C leading to various diseases.

    PubMed

    Mukhopadhyay, Anirban; Maulik, Ujjwal

    2014-01-01

    Protein-protein interaction network-based study of viral pathogenesis has been gaining popularity among computational biologists in recent years. In the present study we attempt to investigate the possible pathways of hepatitis-C virus (HCV) infection by integrating the HCV-human interaction network, human protein interactome and human genetic disease association network. We have proposed quasi-biclique and quasi-clique mining algorithms to integrate these three networks to identify infection gateway host proteins and possible pathways of HCV pathogenesis leading to various diseases. Integrated study of three networks, namely HCV-human interaction network, human protein interaction network, and human proteins-disease association network reveals potential pathways of infection by the HCV that lead to various diseases including cancers. The gateway proteins have been found to be biologically coherent and have high degrees in the human interactome compared to the other virus-targeted proteins. The analyses done in this study provide possible targets for more effective anti-hepatitis-C therapeutic involvement.

  4. Network-Based Study Reveals Potential Infection Pathways of Hepatitis-C Leading to Various Diseases

    PubMed Central

    Mukhopadhyay, Anirban; Maulik, Ujjwal

    2014-01-01

    Protein-protein interaction network-based study of viral pathogenesis has been gaining popularity among computational biologists in recent years. In the present study we attempt to investigate the possible pathways of hepatitis-C virus (HCV) infection by integrating the HCV-human interaction network, human protein interactome and human genetic disease association network. We have proposed quasi-biclique and quasi-clique mining algorithms to integrate these three networks to identify infection gateway host proteins and possible pathways of HCV pathogenesis leading to various diseases. Integrated study of three networks, namely HCV-human interaction network, human protein interaction network, and human proteins-disease association network reveals potential pathways of infection by the HCV that lead to various diseases including cancers. The gateway proteins have been found to be biologically coherent and have high degrees in the human interactome compared to the other virus-targeted proteins. The analyses done in this study provide possible targets for more effective anti-hepatitis-C therapeutic involvement. PMID:24743187

  5. PRO_LIGAND: an approach to de novo molecular design. 2. Design of novel molecules from molecular field analysis (MFA) models and pharmacophores.

    PubMed

    Waszkowycz, B; Clark, D E; Frenkel, D; Li, J; Murray, C W; Robson, B; Westhead, D R

    1994-11-11

    A computational approach for molecular design, PRO_LIGAND, has been developed within the PROMETHEUS molecular design and simulation system in order to provide a unified framework for the de novo generation of diverse molecules which are either similar or complementary to a specified target. In this instance, the target is a pharmacophore derived from a series of active structures either by a novel interpretation of molecular field analysis data or by a pharmacophore-mapping procedure based on clique detection. After a brief introduction to PRO_LIGAND, a detailed description is given of the two pharmacophore generation procedures and their abilities are demonstrated by the elucidation of pharmacophores for steroid binding and ACE inhibition, respectively. As a further indication of its efficacy in aiding the rational drug design process, PRO_LIGAND is then employed to build novel organic molecules to satisfy the physicochemical constraints implied by the pharmacophores.
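
    Clique-detection-based pharmacophore mapping of the general kind mentioned above is often phrased as maximum clique search in a correspondence graph; the hedged sketch below illustrates that generic idea (not PRO_LIGAND's actual procedure) on invented 2D stand-in coordinates.

```python
# Hedged sketch: match two feature sets by maximum clique in a correspondence
# graph whose nodes are pairings and whose edges join distance-compatible pairs.
import itertools
import networkx as nx

def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def match_features(A, B, tol=0.2):
    G = nx.Graph()
    G.add_nodes_from(itertools.product(range(len(A)), range(len(B))))
    for (i, j), (k, l) in itertools.combinations(G.nodes, 2):
        # Two pairings are compatible if they preserve inter-feature distances.
        if i != k and j != l and abs(dist(A[i], A[k]) - dist(B[j], B[l])) < tol:
            G.add_edge((i, j), (k, l))
    return max(nx.find_cliques(G), key=len)   # a maximum clique = best match

A = [(0, 0), (1, 0), (0, 2)]
B = [(5, 5), (6, 5), (5, 7)]                   # A translated by (5, 5)
print(match_features(A, B))                    # e.g. [(0, 0), (1, 1), (2, 2)]
```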

  6. Differences in Friendship Networks and Experiences of Cyberbullying Among Korean and Australian Adolescents.

    PubMed

    Lee, Jee Young; Kwon, Yeji; Yang, Soeun; Park, Sora; Kim, Eun-Mee; Na, Eun-Yeong

    2017-01-01

    Cyberbullying is one of the negative consequences of online social interaction. The digital environment enables adolescents to engage in online social interaction beyond the traditional physical boundaries of families, neighborhoods, and schools. The authors examined how connections to friendship networks in both online and offline settings are related to adolescents' experiences as victims, perpetrators, and bystanders of cyberbullying. A comparative face-to-face survey of adolescents (12-15-year-olds) was conducted in Korea (n = 520) and Australia (n = 401). The results reveal that online networks are partially related to cyberbullying in both countries, showing that the size of social network site networks was significantly correlated with cyberbullying experiences among adolescents in both countries. However, there were cultural differences in the impact of friendship networks on cyberbullying: the size of the online and offline networks has a stronger impact on cyberbullying experiences in Korea than it does in Australia. In particular, the number of friends in cliques was positively related to both bullying and victimization in Korea.

  7. Fractal multi-level organisation of human groups in a virtual world.

    PubMed

    Fuchs, Benedikt; Sornette, Didier; Thurner, Stefan

    2014-10-06

    Humans are fundamentally social. They form societies which consist of hierarchically layered nested groups of various quality, size, and structure. The anthropologic literature has classified these groups as support cliques, sympathy groups, bands, cognitive groups, tribes, linguistic groups, and so on. Anthropologic data show that, on average, each group consists of approximately three subgroups. However, a general understanding of the structural dependence of groups at different layers is largely missing. We extend these early findings to very large, high-precision, internet-based social network data. We analyse the organisational structure of a complete, multi-relational, large social multiplex network of a human society consisting of about 400,000 players of an open-ended massive multiplayer online game, for which we know all of their various group memberships at different layers. Remarkably, the online players' society exhibits the same type of structured hierarchical layers as found in hunter-gatherer societies. Our findings suggest that the hierarchical organisation of human society is deeply nested in human psychology.

  8. Fractal multi-level organisation of human groups in a virtual world

    PubMed Central

    Fuchs, Benedikt; Sornette, Didier; Thurner, Stefan

    2014-01-01

    Humans are fundamentally social. They form societies which consist of hierarchically layered nested groups of various quality, size, and structure. The anthropologic literature has classified these groups as support cliques, sympathy groups, bands, cognitive groups, tribes, linguistic groups, and so on. Anthropologic data show that, on average, each group consists of approximately three subgroups. However, a general understanding of the structural dependence of groups at different layers is largely missing. We extend these early findings to very large, high-precision, internet-based social network data. We analyse the organisational structure of a complete, multi-relational, large social multiplex network of a human society consisting of about 400,000 players of an open-ended massive multiplayer online game, for which we know all of their various group memberships at different layers. Remarkably, the online players' society exhibits the same type of structured hierarchical layers as found in hunter-gatherer societies. Our findings suggest that the hierarchical organisation of human society is deeply nested in human psychology. PMID:25283998

  9. Campaigns and Cliques: Variations in Effectiveness of an Antismoking Campaign as a Function of Adolescent Peer Group Identity

    PubMed Central

    Moran, Meghan Bridgid; Murphy, Sheila T.; Sussman, Steve

    2014-01-01

    Identity-based strategies have been suggested as a way to promote healthy behaviors when traditional approaches fall short. The truth® campaign, designed to reduce smoking in adolescents, is an example of a campaign that uses such a strategy to reach youth described as being outside the mainstream. This article examines the effectiveness of this strategy in promoting antitobacco company beliefs among youth. Survey data from 224 adolescents between 14 and 15 years of age were used to examine whether the truth® campaign was more or less effective at reaching and promoting antitobacco company beliefs among youth who identify with nonmainstream crowds (deviants and counterculture) versus those who identify with mainstream crowds (elites and academics). Analyses revealed that adolescents who identified as deviants and counterculture were more likely to have been persuaded by the truth® campaign. Social identity theory is used as a theoretical framework to understand these effects and to make recommendations for future health campaigns. PMID:23066900

  10. Campaigns and cliques: variations in effectiveness of an antismoking campaign as a function of adolescent peer group identity.

    PubMed

    Moran, Meghan Bridgid; Murphy, Sheila T; Sussman, Steve

    2012-01-01

    Identity-based strategies have been suggested as a way to promote healthy behaviors when traditional approaches fall short. The truth® campaign, designed to reduce smoking in adolescents, is an example of a campaign that uses such a strategy to reach youth described as being outside the mainstream. This article examines the effectiveness of this strategy in promoting antitobacco company beliefs among youth. Survey data from 224 adolescents between 14 and 15 years of age were used to examine whether the truth® campaign was more or less effective at reaching and promoting antitobacco company beliefs among youth who identify with nonmainstream crowds (deviants and counterculture) versus those who identify with mainstream crowds (elites and academics). Analyses revealed that adolescents who identified as deviants and counterculture were more likely to have been persuaded by the truth® campaign. Social identity theory is used as a theoretical framework to understand these effects and to make recommendations for future health campaigns.

  11. Fractal multi-level organisation of human groups in a virtual world

    NASA Astrophysics Data System (ADS)

    Fuchs, Benedikt; Sornette, Didier; Thurner, Stefan

    2014-10-01

    Humans are fundamentally social. They form societies which consist of hierarchically layered nested groups of various quality, size, and structure. The anthropologic literature has classified these groups as support cliques, sympathy groups, bands, cognitive groups, tribes, linguistic groups, and so on. Anthropologic data show that, on average, each group consists of approximately three subgroups. However, a general understanding of the structural dependence of groups at different layers is largely missing. We extend these early findings to very large, high-precision, internet-based social network data. We analyse the organisational structure of a complete, multi-relational, large social multiplex network of a human society consisting of about 400,000 players of an open-ended massive multiplayer online game, for which we know all of their various group memberships at different layers. Remarkably, the online players' society exhibits the same type of structured hierarchical layers as found in hunter-gatherer societies. Our findings suggest that the hierarchical organisation of human society is deeply nested in human psychology.

  12. Rclick: a web server for comparison of RNA 3D structures.

    PubMed

    Nguyen, Minh N; Verma, Chandra

    2015-03-15

    RNA molecules play important roles in key biological processes in the cell and are becoming attractive for developing therapeutic applications. Since the function of RNA depends on its structure and dynamics, comparing and classifying the RNA 3D structures is of crucial importance to molecular biology. In this study, we have developed Rclick, a web server that is capable of superimposing RNA 3D structures by using clique matching and 3D least-squares fitting. Our server Rclick has been benchmarked and compared with other popular servers and methods for RNA structural alignments. In most cases, Rclick alignments were better in terms of structure overlap. Our server also recognizes conformational changes between structures. For this purpose, the server produces complementary alignments to maximize the extent of detectable similarity. Various examples showcase the utility of our web server for comparison of RNA, RNA-protein complexes and RNA-ligand structures.

  13. Optimization Research on Ampacity of Underground High Voltage Cable Based on Interior Point Method

    NASA Astrophysics Data System (ADS)

    Huang, Feng; Li, Jing

    2017-12-01

    The conservative operation method, which takes a unified current-carrying capacity as the maximum load current, cannot make full use of the overall power transmission capacity of the cable and is not the optimal operating state for the cable cluster. In order to improve the transmission capacity of underground cables in a cluster, this paper takes the maximum overall load current as the objective function, with the temperature of every cable held below its maximum permissible temperature as the constraint condition. The interior point method, which is very effective for nonlinear problems, is put forward to solve this extreme value problem and determine the optimal operating current of each loop. The results show that the optimal solutions obtained with the proposed method are able to increase the total load current by about 5%, which greatly improves the economic performance of the cable cluster.
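
    The formulation translates directly into a constrained maximization; the hedged sketch below uses scipy's trust-constr solver as a stand-in for the interior point method, with an invented linear thermal model (self/mutual heating coefficients) for three cables.

```python
# Hedged sketch: maximize total cluster current subject to per-cable
# temperature limits; the thermal model and all numbers are invented.
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

T_amb, T_max = 25.0, 90.0
H = np.array([[0.9, 0.3, 0.2],      # H[i, j]: heating of cable i per unit I_j^2
              [0.3, 0.9, 0.3],
              [0.2, 0.3, 0.9]]) * 1e-3

def temperatures(I):
    return T_amb + H @ (I ** 2)

res = minimize(lambda I: -np.sum(I),                 # maximize total current
               x0=np.full(3, 100.0),
               method="trust-constr",
               bounds=[(0.0, 500.0)] * 3,
               constraints=NonlinearConstraint(temperatures, -np.inf, T_max))
print(res.x, temperatures(res.x))                    # currents at the limit
```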

  14. MAX 1991. The active sun: A plan for pursuing the study of the active sun at the time of the next maximum in solar activity, January 1985

    NASA Technical Reports Server (NTRS)

    Acton, L.

    1989-01-01

    The results of the discussions of a working group for the definition of a program for the forthcoming crest of solar activity, 1990 to 1993, are presented. The MAX '91 program described is intended to achieve important scientific goals within the context of the natural solar variability. The heart of the MAX '91 program is a series of campaigns oriented towards specific scientific problems and taking place in the solar maximum period 1990 to 1993. These campaigns will take advantage of the load-carrying capability of the Space Shuttle to fly instruments with observational capabilities very different from those of the Solar Maximum Mission. Various combinations of instruments appropriate to the specific scientific problem of a given campaign would be flown on a Shuttle sortie mission.

  15. A novel minimum cost maximum power algorithm for future smart home energy management.

    PubMed

    Singaravelan, A; Kowsalya, M

    2017-11-01

    With the latest developments in smart grid technology, energy management systems can be efficiently implemented at consumer premises. In this paper, an energy management system with wireless communication and a smart meter is designed for scheduling electric home appliances efficiently, with the aim of reducing cost and peak demand. For an efficient scheduling scheme, the appliances are classified into two types: uninterruptible and interruptible appliances. The problem formulation was constructed from practical constraints so that the proposed algorithm copes with real-time situations. The formulated problem was identified as a Mixed Integer Linear Programming (MILP) problem and was solved by a step-wise approach. This paper proposes a novel Minimum Cost Maximum Power (MCMP) algorithm to solve the formulated problem. The proposed algorithm was simulated with the input data available in the existing method, and results were compared with the existing method for validation. The comparison shows that the proposed algorithm efficiently reduces the consumer's electricity cost and peak demand to an optimum level, with 100% task completion and without sacrificing consumer comfort.
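
    To make the MILP flavour of such scheduling concrete (this is a generic toy, not the paper's MCMP algorithm), the hedged sketch below schedules one uninterruptible appliance over hourly price slots with PuLP; prices, duration, and load are invented.

```python
# Hedged sketch: MILP scheduling of an uninterruptible appliance to the
# cheapest consecutive-hour window, using PuLP's bundled CBC solver.
import pulp

prices = [0.30, 0.28, 0.12, 0.10, 0.11, 0.25]   # $/kWh per hour slot (invented)
duration, load = 2, 1.5                          # run 2 consecutive hours, 1.5 kW

prob = pulp.LpProblem("appliance_schedule", pulp.LpMinimize)
# start[t] = 1 if the appliance starts at slot t
start = [pulp.LpVariable(f"start_{t}", cat="Binary")
         for t in range(len(prices) - duration + 1)]
prob += pulp.lpSum(start) == 1                   # exactly one start time
prob += pulp.lpSum(s * load * sum(prices[t:t + duration])
                   for t, s in enumerate(start))  # total energy cost (objective)
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([t for t, s in enumerate(start) if s.value() == 1])   # [3]: cheapest window
```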

  16. The analytic solution of the firm's cost-minimization problem with box constraints and the Cobb-Douglas model

    NASA Astrophysics Data System (ADS)

    Bayón, L.; Grau, J. M.; Ruiz, M. M.; Suárez, P. M.

    2012-12-01

    One of the most well-known problems in the field of Microeconomics is the Firm's Cost-Minimization Problem. In this paper we establish the analytical expression for the cost function using the Cobb-Douglas model and considering maximum constraints for the inputs. Moreover we prove that it belongs to the class C1.
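
    For background, the interior case (box constraints not binding) has a classical closed form; the LaTeX sketch below states that standard two-input result under assumed notation (technology A x1^α x2^β, prices w1, w2), while the paper's contribution concerns the piecewise expression when the maximum-input constraints bind.

```latex
% Classical interior solution of the two-input Cobb-Douglas cost-minimization
% problem (box constraints not binding); notation alpha, beta, A assumed here.
\[
  \min_{x_1, x_2 \ge 0} \; w_1 x_1 + w_2 x_2
  \quad \text{s.t.} \quad A x_1^{\alpha} x_2^{\beta} = y,
\]
\[
  C(w_1, w_2, y) = (\alpha + \beta)
  \left[ \frac{y}{A}
         \left( \frac{w_1}{\alpha} \right)^{\alpha}
         \left( \frac{w_2}{\beta} \right)^{\beta}
  \right]^{1/(\alpha + \beta)} .
\]
```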

  17. Combining Experiments and Simulations Using the Maximum Entropy Principle

    PubMed Central

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

    A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete and quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges. PMID:24586124
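
    A minimal sketch of the simulation use case described above, under simplifying assumptions: reweight simulation frames against a single experimental average. The maximum entropy weights take a Boltzmann-like form w_i ∝ exp(λ o_i), and λ is found by root-finding; the observable values and target are invented.

```python
# Hedged sketch: maximum entropy reweighting of simulation frames so that the
# weighted average of one observable matches an experimental target value.
import numpy as np
from scipy.optimize import brentq

obs = np.array([1.0, 2.0, 4.0, 7.0])     # observable per simulation frame
target = 3.0                             # experimental average to reproduce

def reweighted_mean(lam):
    w = np.exp(lam * obs)
    w /= w.sum()
    return w @ obs

lam = brentq(lambda l: reweighted_mean(l) - target, -10.0, 10.0)
w = np.exp(lam * obs); w /= w.sum()
print(lam, w, w @ obs)                    # weights now average to ~3.0
```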

  18. Sequential Monte Carlo for Maximum Weight Subgraphs with Application to Solving Image Jigsaw Puzzles.

    PubMed

    Adluru, Nagesh; Yang, Xingwei; Latecki, Longin Jan

    2015-05-01

    We consider a problem of finding maximum weight subgraphs (MWS) that satisfy hard constraints in a weighted graph. The constraints specify the graph nodes that must belong to the solution as well as mutual exclusions of graph nodes, i.e., pairs of nodes that cannot belong to the same solution. Our main contribution is a novel inference approach for solving this problem in a sequential Monte Carlo (SMC) sampling framework. Usually in an SMC framework there is a natural ordering of the states of the samples. The order typically depends on observations about the states or on the annealing setup used. In many applications (e.g., image jigsaw puzzle problems), all observations (e.g., puzzle pieces) are given at once and it is hard to define a natural ordering. Therefore, we relax the assumption of having ordered observations about states and propose a novel SMC algorithm for obtaining a maximum a posteriori estimate of a high-dimensional posterior distribution. This is achieved by exploring different orders of states and selecting the most informative permutations in each step of the sampling. Our experimental results demonstrate that the proposed inference framework significantly outperforms loopy belief propagation in solving the image jigsaw puzzle problem. In particular, our inference quadruples the accuracy of the puzzle assembly compared to that of loopy belief propagation.

  19. Sequential Monte Carlo for Maximum Weight Subgraphs with Application to Solving Image Jigsaw Puzzles

    PubMed Central

    Adluru, Nagesh; Yang, Xingwei; Latecki, Longin Jan

    2015-01-01

    We consider a problem of finding maximum weight subgraphs (MWS) that satisfy hard constraints in a weighted graph. The constraints specify the graph nodes that must belong to the solution as well as mutual exclusions of graph nodes, i.e., pairs of nodes that cannot belong to the same solution. Our main contribution is a novel inference approach for solving this problem in a sequential Monte Carlo (SMC) sampling framework. Usually in an SMC framework there is a natural ordering of the states of the samples. The order typically depends on observations about the states or on the annealing setup used. In many applications (e.g., image jigsaw puzzle problems), all observations (e.g., puzzle pieces) are given at once and it is hard to define a natural ordering. Therefore, we relax the assumption of having ordered observations about states and propose a novel SMC algorithm for obtaining a maximum a posteriori estimate of a high-dimensional posterior distribution. This is achieved by exploring different orders of states and selecting the most informative permutations in each step of the sampling. Our experimental results demonstrate that the proposed inference framework significantly outperforms loopy belief propagation in solving the image jigsaw puzzle problem. In particular, our inference quadruples the accuracy of the puzzle assembly compared to that of loopy belief propagation. PMID:26052182

  20. Computing the Envelope for Stepwise Constant Resource Allocations

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Clancy, Daniel (Technical Monitor)

    2001-01-01

    Estimating tight resource levels is a fundamental problem in the construction of flexible plans with resource utilization. In this paper we describe an efficient algorithm that builds a resource envelope, the tightest possible such bound. The algorithm is based on transforming the temporal network of resource consuming and producing events into a flow network with nodes equal to the events and edges equal to the necessary predecessor links between events. The incremental solution of a staged maximum flow problem on the network is then used to compute the time of occurrence and the height of each step of the resource envelope profile. The staged algorithm has the same computational complexity as solving a maximum flow problem on the entire flow network. This makes the method computationally feasible for use in the inner loop of search-based scheduling algorithms.

  1. Computing the Envelope for Stepwise-Constant Resource Allocations

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Computing tight resource-level bounds is a fundamental problem in the construction of flexible plans with resource utilization. In this paper we describe an efficient algorithm that builds a resource envelope, the tightest possible such bound. The algorithm is based on transforming the temporal network of resource consuming and producing events into a flow network with nodes equal to the events and edges equal to the necessary predecessor links between events. A staged maximum flow problem on the network is then used to compute the time of occurrence and the height of each step of the resource envelope profile. Each stage has the same computational complexity as solving a maximum flow problem on the entire flow network. This makes the method computationally feasible and promising for use in the inner loop of flexible-time scheduling algorithms.
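
    The primitive underneath both versions of this work is an ordinary maximum-flow computation on the event graph. A minimal illustration with networkx (the event names, capacities, and topology are invented, and networkx's solver stands in for the staged incremental computation the authors describe):

    ```python
    import networkx as nx

    # Toy flow network standing in for the predecessor-linked event graph.
    G = nx.DiGraph()
    G.add_edge("source", "e1", capacity=3)   # production events fed by the source
    G.add_edge("source", "e2", capacity=2)
    G.add_edge("e1", "e3", capacity=2)       # necessary predecessor links
    G.add_edge("e2", "e3", capacity=2)
    G.add_edge("e3", "sink", capacity=4)     # consumption events drain to the sink

    flow_value, flow_dict = nx.maximum_flow(G, "source", "sink")
    print(flow_value, flow_dict)             # the max flow bounds one envelope step
    ```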

  2. Optimal resolution in maximum entropy image reconstruction from projections with multigrid acceleration

    NASA Technical Reports Server (NTRS)

    Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.

    1993-01-01

    We consider the problem of image reconstruction from a finite number of projections over the space L^1(Ω), where Ω is a compact subset of ℝ^2. We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Ω, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and number of cells per projection grow, indicating fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.

  3. Exploiting Bounded Signal Flow for Graph Orientation Based on Cause-Effect Pairs

    NASA Astrophysics Data System (ADS)

    Dorn, Britta; Hüffner, Falk; Krüger, Dominikus; Niedermeier, Rolf; Uhlmann, Johannes

    We consider the following problem: Given an undirected network and a set of sender-receiver pairs, direct all edges such that the maximum number of "signal flows" defined by the pairs can be routed respecting edge directions. This problem has applications in communication networks and in understanding protein interaction based cell regulation mechanisms. Since this problem is NP-hard, research so far concentrated on polynomial-time approximation algorithms and tractable special cases. We take the viewpoint of parameterized algorithmics and examine several parameters related to the maximum signal flow over vertices or edges. We provide several fixed-parameter tractability results, and in one case a sharp complexity dichotomy between a linear-time solvable case and a slightly more general NP-hard case. We examine the value of these parameters for several real-world network instances. For many relevant cases, the NP-hard problem can be solved to optimality. In this way, parameterized analysis yields both deeper insight into the computational complexity and practical solving strategies.

  4. Minimum distance classification in remote sensing

    NASA Technical Reports Server (NTRS)

    Wacker, A. G.; Landgrebe, D. A.

    1972-01-01

    The utilization of minimum distance classification methods in remote sensing problems, such as crop species identification, is considered. Literature concerning both minimum distance classification problems and distance measures is reviewed. Experimental results are presented for several examples. The objective of these examples is to: (a) compare the sample classification accuracy of a minimum distance classifier with the vector classification accuracy of a maximum likelihood classifier, and (b) compare the accuracy of a parametric minimum distance classifier with that of a nonparametric one. Results show the minimum distance classifier performance is 5% to 10% better than that of the maximum likelihood classifier. The nonparametric classifier is only slightly better than the parametric version.
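
    The comparison in (a) is easy to reproduce in miniature: a minimum-distance classifier assigns each sample to the nearest class mean, while a Gaussian maximum-likelihood classifier also models per-class covariance. A sketch on synthetic stand-in data (real multispectral samples would replace the random draws):

    ```python
    import numpy as np
    from sklearn.neighbors import NearestCentroid
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

    # Synthetic stand-in for 4-band pixel samples from three crop classes.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(m, 0.8, size=(200, 4)) for m in (0.0, 2.0, 4.0)])
    y = np.repeat([0, 1, 2], 200)

    md = NearestCentroid().fit(X, y)                # minimum distance to class mean
    ml = QuadraticDiscriminantAnalysis().fit(X, y)  # Gaussian maximum likelihood
    print("min-distance:", md.score(X, y), "max-likelihood:", ml.score(X, y))
    ```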

  5. Quantum-Inspired Maximizer

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2008-01-01

    A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid, the quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for finding the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce the positive function to be maximized as the probability density to which the solution is attracted. Larger values of this function then have a higher probability of appearing. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to a constrained maximum with applications to integer programming and the TSP (Traveling Salesman Problem).

  6. Joint Transmitter and Receiver Power Allocation under Minimax MSE Criterion with Perfect and Imperfect CSI for MC-CDMA Transmissions

    NASA Astrophysics Data System (ADS)

    Kotchasarn, Chirawat; Saengudomlert, Poompat

    We investigate the problem of joint transmitter and receiver power allocation with the minimax mean square error (MSE) criterion for uplink transmissions in a multi-carrier code division multiple access (MC-CDMA) system. The objective of power allocation is to minimize the maximum MSE among all users each of which has limited transmit power. This problem is a nonlinear optimization problem. Using the Lagrange multiplier method, we derive the Karush-Kuhn-Tucker (KKT) conditions which are necessary for a power allocation to be optimal. Numerical results indicate that, compared to the minimum total MSE criterion, the minimax MSE criterion yields a higher total MSE but provides a fairer treatment across the users. The advantages of the minimax MSE criterion are more evident when we consider the bit error rate (BER) estimates. Numerical results show that the minimax MSE criterion yields a lower maximum BER and a lower average BER. We also observe that, with the minimax MSE criterion, some users do not transmit at full power. For comparison, with the minimum total MSE criterion, all users transmit at full power. In addition, we investigate robust joint transmitter and receiver power allocation where the channel state information (CSI) is not perfect. The CSI error is assumed to be unknown but bounded by a deterministic value. This problem is formulated as a semidefinite programming (SDP) problem with bilinear matrix inequality (BMI) constraints. Numerical results show that, with imperfect CSI, the minimax MSE criterion also outperforms the minimum total MSE criterion in terms of the maximum and average BERs.

  7. Analyzing the multiple-target-multiple-agent scenario using optimal assignment algorithms

    NASA Astrophysics Data System (ADS)

    Kwok, Kwan S.; Driessen, Brian J.; Phillips, Cynthia A.; Tovey, Craig A.

    1997-09-01

    This work considers the problem of maximum utilization of a set of mobile robots with limited sensor-range capabilities and limited travel distances. The robots are initially in random positions. A set of robots properly guards or covers a region if every point within the region is within the effective sensor range of at least one vehicle. We wish to move the vehicles into surveillance positions so as to guard or cover a region, while minimizing the maximum distance traveled by any vehicle. This problem can be formulated as an assignment problem, in which we must optimally decide which robot to assign to which slot of a desired matrix of grid points. The cost function is the maximum distance traveled by any robot. Assignment problems can be solved very efficiently. Solutions for one hundred robots took only seconds on a Silicon Graphics Crimson workstation. The initial positions of all the robots can be sampled by a central base station and their newly assigned positions communicated back to the robots. Alternatively, the robots can establish their own coordinate system with the origin fixed at one of the robots and orientation determined by the compass bearing of another robot relative to this robot. This paper presents example solutions to the multiple-target-multiple-agent scenario using a matching algorithm. Two separate cases with one hundred agents in each were analyzed using this method. We have found these mobile robot problems to be a very interesting application of network optimization methods, and we expect this to be a fruitful area for future research.
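
    Minimizing the maximum distance traveled is a bottleneck assignment problem, and one standard way to solve it is to binary-search the bottleneck value and test feasibility with an ordinary assignment solve. A sketch under assumed random positions (scipy's Hungarian-method routine serves as the feasibility oracle; the paper's own matching code is not reproduced):

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Hypothetical instance: random robot and grid-slot positions.
    rng = np.random.default_rng(7)
    robots = rng.uniform(0, 10, (100, 2))
    slots = rng.uniform(0, 10, (100, 2))
    D = np.linalg.norm(robots[:, None, :] - slots[None, :, :], axis=2)

    def feasible(thresh):
        # A perfect matching using only edges <= thresh exists iff the optimal
        # sum-assignment on the 0/1 "forbidden edge" matrix incurs zero penalty.
        penalty = (D > thresh).astype(float)
        rows, cols = linear_sum_assignment(penalty)
        return penalty[rows, cols].sum() == 0, (rows, cols)

    # Binary search over the sorted distinct distances for the minimax value.
    vals = np.unique(D)
    lo, hi = 0, len(vals) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        ok, _ = feasible(vals[mid])
        lo, hi = (lo, mid) if ok else (mid + 1, hi)
    _, (rows, cols) = feasible(vals[lo])
    print("minimax travel distance:", D[rows, cols].max())
    ```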

  8. Exact Maximum-Entropy Estimation with Feynman Diagrams

    NASA Astrophysics Data System (ADS)

    Netser Zernik, Amitai; Schlank, Tomer M.; Tessler, Ran J.

    2018-02-01

    A longstanding open problem in statistics is finding an explicit expression for the probability measure which maximizes entropy with respect to given constraints. In this paper a solution to this problem is found, using perturbative Feynman calculus. The explicit expression is given as a sum over weighted trees.

  9. Mothers' Maximum Drinks Ever Consumed in 24 Hours Predicts Mental Health Problems in Adolescent Offspring

    ERIC Educational Resources Information Center

    Malone, Stephen M.; McGue, Matt; Iacono, William G.

    2010-01-01

    Background: The maximum number of alcoholic drinks consumed in a single 24-hr period is an alcoholism-related phenotype with both face and empirical validity. It has been associated with severity of withdrawal symptoms and sensitivity to alcohol, genes implicated in alcohol metabolism, and amplitude of a measure of brain activity associated with…

  10. Un-Building Blocks: A Model of Reverse Engineering and Applicable Heuristics

    DTIC Science & Technology

    2015-12-01

    The machine does not isolate man from the great problems of nature but plunges him more deeply into them. (Antoine de Saint-Exupéry, Wind...) Reverse engineering is the problem-solving activity that ensues when one takes a...

  11. Polarity related influence maximization in signed social networks.

    PubMed

    Li, Dong; Xu, Zhi-Ming; Chakraborty, Nilanjan; Gupta, Anika; Sycara, Katia; Li, Sheng

    2014-01-01

    Influence maximization in social networks has been widely studied motivated by applications like spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular. Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods.

  12. Polarity Related Influence Maximization in Signed Social Networks

    PubMed Central

    Li, Dong; Xu, Zhi-Ming; Chakraborty, Nilanjan; Gupta, Anika; Sycara, Katia; Li, Sheng

    2014-01-01

    Influence maximization in social networks has been widely studied motivated by applications like spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular. Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods. PMID:25061986
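
    The 1-1/e guarantee invoked above is the classical result for greedily maximizing a monotone submodular set function. The sketch below shows that greedy loop over a plain (unsigned) independent cascade, estimated by Monte Carlo; the polarity-aware IC-P model from the paper is not reproduced, and the graph, propagation probability, and sampling budget are all assumed:

    ```python
    import random
    import networkx as nx

    def ic_spread(G, seeds, p=0.1, runs=200):
        """Monte Carlo estimate of expected spread under independent cascade."""
        total = 0
        for _ in range(runs):
            active, frontier = set(seeds), list(seeds)
            while frontier:
                u = frontier.pop()
                for v in G.successors(u):
                    if v not in active and random.random() < p:
                        active.add(v)
                        frontier.append(v)
            total += len(active)
        return total / runs

    def greedy_seeds(G, k):
        """(1 - 1/e)-approximate greedy for a monotone submodular spread."""
        seeds = set()
        for _ in range(k):
            best = max((v for v in G if v not in seeds),
                       key=lambda v: ic_spread(G, seeds | {v}))
            seeds.add(best)
        return seeds

    G = nx.gnp_random_graph(100, 0.05, seed=3, directed=True)  # toy network
    print(greedy_seeds(G, 3))
    ```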

  13. Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gen, Mitsuo; Lin, Lin; Cheng, Runwei

    Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network. For example, cost and flow measures are both important in networks. As a result, there has been recent interest in solving the Bicriteria Network Optimization Problem, which is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size, so the computational effort required to solve the problem can increase exponentially with the problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach that uses a priority-based chromosome for solving the bicriteria network optimization problem including the maximum flow (MXF) model and the minimum cost flow (MCF) model. The objective is to find the set of Pareto optimal solutions that give the maximum possible flow with minimum cost. This paper also incorporates the Adaptive Weight Approach (AWA), which utilizes useful information from the current population to readjust weights and obtain search pressure toward the positive ideal point. Computer simulations on several difficult-to-solve network design problems show the effectiveness of the proposed method.

  14. The iterative thermal emission method: A more implicit modification of IMC

    NASA Astrophysics Data System (ADS)

    Long, A. R.; Gentile, N. A.; Palmer, T. S.

    2014-11-01

    For over 40 years, the Implicit Monte Carlo (IMC) method has been used to solve challenging problems in thermal radiative transfer. These problems typically contain regions that are optically thick and diffusive, as a consequence of the high degree of "pseudo-scattering" introduced to model the absorption and reemission of photons from a tightly-coupled, radiating material. IMC has several well-known features that could be improved: a) it can be prohibitively computationally expensive, b) it introduces statistical noise into the material and radiation temperatures, which may be problematic in multiphysics simulations, and c) under certain conditions, solutions can be nonphysical, in that they violate a maximum principle, where IMC-calculated temperatures can be greater than the maximum temperature used to drive the problem. We have developed a variant of IMC called iterative thermal emission IMC, which is designed to have a reduced parameter space in which the maximum principle is violated. ITE IMC is a more implicit version of IMC in that it uses the information obtained from a series of IMC photon histories to improve the estimate for the end of time step material temperature during a time step. A better estimate of the end of time step material temperature allows for a more implicit estimate of other temperature-dependent quantities: opacity, heat capacity, Fleck factor (probability that a photon absorbed during a time step is not reemitted) and the Planckian emission source. We have verified the ITE IMC method against 0-D and 1-D analytic solutions and problems from the literature. These results are compared with traditional IMC. We perform an infinite medium stability analysis of ITE IMC and show that it is slightly more numerically stable than traditional IMC. We find that significantly larger time steps can be used with ITE IMC without violating the maximum principle, especially in problems with non-linear material properties. The ITE IMC method does however yield solutions with larger variance because each sub-step uses a different Fleck factor (even at equilibrium).

  15. The iterative thermal emission method: A more implicit modification of IMC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, A.R., E-mail: arlong.ne@tamu.edu; Gentile, N.A.; Palmer, T.S.

    2014-11-15

    For over 40 years, the Implicit Monte Carlo (IMC) method has been used to solve challenging problems in thermal radiative transfer. These problems typically contain regions that are optically thick and diffusive, as a consequence of the high degree of “pseudo-scattering” introduced to model the absorption and reemission of photons from a tightly-coupled, radiating material. IMC has several well-known features that could be improved: a) it can be prohibitively computationally expensive, b) it introduces statistical noise into the material and radiation temperatures, which may be problematic in multiphysics simulations, and c) under certain conditions, solutions can be nonphysical, in that they violate a maximum principle, where IMC-calculated temperatures can be greater than the maximum temperature used to drive the problem. We have developed a variant of IMC called iterative thermal emission IMC, which is designed to have a reduced parameter space in which the maximum principle is violated. ITE IMC is a more implicit version of IMC in that it uses the information obtained from a series of IMC photon histories to improve the estimate for the end of time step material temperature during a time step. A better estimate of the end of time step material temperature allows for a more implicit estimate of other temperature-dependent quantities: opacity, heat capacity, Fleck factor (probability that a photon absorbed during a time step is not reemitted) and the Planckian emission source. We have verified the ITE IMC method against 0-D and 1-D analytic solutions and problems from the literature. These results are compared with traditional IMC. We perform an infinite medium stability analysis of ITE IMC and show that it is slightly more numerically stable than traditional IMC. We find that significantly larger time steps can be used with ITE IMC without violating the maximum principle, especially in problems with non-linear material properties. The ITE IMC method does however yield solutions with larger variance because each sub-step uses a different Fleck factor (even at equilibrium).

  16. Information Retrieval Performance of Probabilistically Generated, Problem-Specific Computerized Provider Order Entry Pick-Lists: A Pilot Study

    PubMed Central

    Rothschild, Adam S.; Lehmann, Harold P.

    2005-01-01

    Objective: The aim of this study was to preliminarily determine the feasibility of probabilistically generating problem-specific computerized provider order entry (CPOE) pick-lists from a database of explicitly linked orders and problems from actual clinical cases. Design: In a pilot retrospective validation, physicians reviewed internal medicine cases consisting of the admission history and physical examination and orders placed using CPOE during the first 24 hours after admission. They created coded problem lists and linked orders from individual cases to the problem for which they were most indicated. Problem-specific order pick-lists were generated by including a given order in a pick-list if the probability of linkage of order and problem (PLOP) equaled or exceeded a specified threshold. PLOP for a given linked order-problem pair was computed as its prevalence among the other cases in the experiment with the given problem. The orders that the reviewer linked to a given problem instance served as the reference standard to evaluate its system-generated pick-list. Measurements: Recall, precision, and length of the pick-lists. Results: Average recall reached a maximum of .67 with a precision of .17 and pick-list length of 31.22 at a PLOP threshold of 0. Average precision reached a maximum of .73 with a recall of .09 and pick-list length of .42 at a PLOP threshold of .9. Recall varied inversely with precision in classic information retrieval behavior. Conclusion: We preliminarily conclude that it is feasible to generate problem-specific CPOE pick-lists probabilistically from a database of explicitly linked orders and problems. Further research is necessary to determine the usefulness of this approach in real-world settings. PMID:15684134
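
    The PLOP statistic is straightforward to compute directly from a database of linked order-problem pairs. A toy sketch with invented cases and order names, using the same leave-one-case-out prevalence the authors describe:

    ```python
    from collections import Counter

    # Toy database: each case maps a coded problem to its linked orders
    # (all names are illustrative, not from the study).
    cases = [
        {"pneumonia": {"blood culture", "ceftriaxone", "cbc"}},
        {"pneumonia": {"blood culture", "azithromycin"}},
        {"pneumonia": {"ceftriaxone", "cbc"}},
        {"chf":       {"furosemide", "bnp"}},
    ]

    def pick_list(problem, threshold, held_out):
        """Orders whose PLOP (prevalence among the *other* cases with the
        given problem) meets or exceeds the threshold."""
        others = [c for i, c in enumerate(cases) if i != held_out and problem in c]
        counts = Counter(o for c in others for o in c[problem])
        n = len(others)
        return {o for o, k in counts.items() if n and k / n >= threshold}

    # Pick-list for case 0's pneumonia, built from the remaining cases.
    print(pick_list("pneumonia", 0.5, held_out=0))
    ```

    Sweeping the threshold from 0 toward 1 reproduces the recall-precision trade-off reported in the results: low thresholds give long, high-recall pick-lists; high thresholds give short, high-precision ones.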

  17. Maximum correntropy square-root cubature Kalman filter with application to SINS/GPS integrated systems.

    PubMed

    Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng

    2018-05-31

    For a nonlinear system, the cubature Kalman filter (CKF) and its square-root version are useful methods for solving state estimation problems, and both can obtain good performance in Gaussian noise. However, their performance often degrades significantly in the face of non-Gaussian noise, particularly when the measurements are contaminated by heavy-tailed impulsive noise. By utilizing the maximum correntropy criterion (MCC) instead of the traditional minimum mean square error (MMSE) criterion to improve robustness, a new square-root nonlinear filter is proposed in this study, named the maximum correntropy square-root cubature Kalman filter (MCSCKF). The new filter not only retains the advantage of the square-root cubature Kalman filter (SCKF), but also exhibits robust performance against heavy-tailed non-Gaussian noise. A judgment condition that avoids numerical problems is also given. The results of two illustrative examples, especially the SINS/GPS integrated systems, demonstrate the desirable performance of the proposed filter.

  18. GreedyMAX-type Algorithms for the Maximum Independent Set Problem

    NASA Astrophysics Data System (ADS)

    Borowiecki, Piotr; Göring, Frank

    A maximum independent set problem for a simple graph G = (V,E) is to find the largest subset of pairwise nonadjacent vertices. The problem is known to be NP-hard and it is also hard to approximate. Within this article we introduce a non-negative integer valued function p defined on the vertex set V(G) and called a potential function of a graph G, while P(G) = max_{v∈V(G)} p(v) is called a potential of G. For any graph, P(G) ≤ Δ(G), where Δ(G) is the maximum degree of G. Moreover, Δ(G) - P(G) may be arbitrarily large. The potential of a vertex lets us get a closer insight into the properties of its neighborhood, which leads to the definition of the family of GreedyMAX-type algorithms having the classical GreedyMAX algorithm as their origin. We establish a lower bound 1/(P + 1) for the performance ratio of GreedyMAX-type algorithms, which favorably compares with the bound 1/(Δ + 1) known to hold for GreedyMAX. The cardinality of an independent set generated by any GreedyMAX-type algorithm is at least ∑_{v∈V(G)} (p(v)+1)^{-1}, which strengthens the bounds of Turán and Caro-Wei stated in terms of vertex degrees.
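
    For reference, the classical GreedyMAX baseline that the paper generalizes can be stated in a few lines: repeatedly delete a maximum-degree vertex until no edges remain; the surviving vertices form an independent set. A minimal sketch (the potential-based vertex choices that define the broader GreedyMAX-type family are not reproduced):

    ```python
    import networkx as nx

    def greedy_max_independent_set(G):
        """Classical GreedyMAX: delete a maximum-degree vertex until the graph
        is edgeless; the remaining vertices are pairwise nonadjacent."""
        H = G.copy()
        while H.number_of_edges() > 0:
            v = max(H.degree, key=lambda nd: nd[1])[0]   # a max-degree vertex
            H.remove_node(v)
        return set(H.nodes)

    G = nx.gnp_random_graph(50, 0.2, seed=5)
    I = greedy_max_independent_set(G)
    assert all(not G.has_edge(u, v) for u in I for v in I if u != v)
    print(len(I))
    ```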

  19. Varied applications of a new maximum-likelihood code with complete covariance capability [FERRET, for data adjustment]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmittroth, F.

    1978-01-01

    Applications of a new data-adjustment code are given. The method is based on a maximum-likelihood extension of generalized least-squares methods that allow complete covariance descriptions for the input data and the final adjusted data evaluations. The maximum-likelihood approach is used with a generalized log-normal distribution that provides a way to treat problems with large uncertainties and that circumvents the problem of negative values that can occur for physically positive quantities. The computer code, FERRET, is written to enable the user to apply it to a large variety of problems by modifying only the input subroutine. The following applications are discussed: a 75-group a priori damage function is adjusted by as much as a factor of two by use of 14 integral measurements in different reactor spectra; reactor spectra and dosimeter cross sections are simultaneously adjusted on the basis of both integral measurements and experimental proton-recoil spectra; measured reaction rates, measured worths, microscopic measurements, and theoretical models are used simultaneously to evaluate dosimeter and fission-product cross sections; applications in the data reduction of neutron cross section measurements and in the evaluation of reactor after-heat are also considered. 6 figures.

  20. Simulation of water-table aquifers using specified saturated thickness

    USGS Publications Warehouse

    Sheets, Rodney A.; Hill, Mary C.; Haitjema, Henk M.; Provost, Alden M.; Masterson, John P.

    2014-01-01

    Simulating groundwater flow in a water-table (unconfined) aquifer can be difficult because the saturated thickness available for flow depends on model-calculated hydraulic heads. It is often possible to realize substantial time savings and still obtain accurate head and flow solutions by specifying an approximate saturated thickness a priori, thus linearizing this aspect of the model. This specified-thickness approximation often relies on the use of the “confined” option in numerical models, which has led to confusion and criticism of the method. This article reviews the theoretical basis for the specified-thickness approximation, derives an error analysis for relatively ideal problems, and illustrates the utility of the approximation with a complex test problem. In the transient version of our complex test problem, the specified-thickness approximation produced maximum errors in computed drawdown of about 4% of initial aquifer saturated thickness even when maximum drawdowns were nearly 20% of initial saturated thickness. In the final steady-state version, the approximation produced maximum errors in computed drawdown of about 20% of initial aquifer saturated thickness (mean errors of about 5%) when maximum drawdowns were about 35% of initial saturated thickness. In early phases of model development, such as during initial model calibration efforts, the specified-thickness approximation can be a very effective tool to facilitate convergence. The reduced execution time and increased stability obtained through the approximation can be especially useful when many model runs are required, such as during inverse model calibration, sensitivity and uncertainty analyses, multimodel analysis, and development of optimal resource management scenarios.

  1. Optimal birth control of age-dependent competitive species III. Overtaking problem

    NASA Astrophysics Data System (ADS)

    He, Ze-Rong; Cheng, Ji-Shu; Zhang, Chun-Guo

    2008-01-01

    A study is made of an overtaking optimal problem for a population system consisting of two competing species, which is controlled by fertilities. The existence of optimal policy is proved and a maximum principle is carefully derived under less restrictive conditions. Weak and strong turnpike properties of optimal trajectories are established.

  2. Design as a Fusion Problem

    DTIC Science & Technology

    2008-07-01

    consider a proof as a composition relative to some system of music or as a painting. From the Bayesian perspective, any sufficiently complex problem has...these types of algorithms are based on maximum entropy analysis. An example is the Bar-Shalom-Campo fusion rule: x̂_f(k|k) = x̂_2(k|k) + (P_22 - P_21) U^{-1} [x̂_1(k|k) - x̂_2(k|k)], where U = P_11 + P_22 - P_12 - P_21.

  3. Bionomic Exploitation of a Ratio-Dependent Predator-Prey System

    ERIC Educational Resources Information Center

    Maiti, Alakes; Patra, Bibek; Samanta, G. P.

    2008-01-01

    The present article deals with the problem of combined harvesting of a Michaelis-Menten-type ratio-dependent predator-prey system. The problem of determining the optimal harvest policy is solved by invoking Pontryagin's Maximum Principle. Dynamic optimization of the harvest policy is studied by taking the combined harvest effort as a dynamic…

  4. OVERVIEW OF TOTAL MAXIMUM DAILY LOAD (TMDL) PROBLEM AND SUPPORTING MODEL DEVELOPMENT

    EPA Science Inventory

    Approximately 18,900 impaired water bodies are on the 303(d) state lists required by the Clean Water Act. Of the 300 types of impairments on the 1996 and 1998 lists, 24% involve sediments, suspended solids, or turbidity. Nutrient problems account for 15% of the listings, and path...

  5. Computational procedure for finite difference solution of one-dimensional heat conduction problems reduces computer time

    NASA Technical Reports Server (NTRS)

    Iida, H. T.

    1966-01-01

    Computational procedure reduces the numerical effort whenever the method of finite differences is used to solve ablation problems for which the surface recession is large relative to the initial slab thickness. The number of numerical operations required for a given maximum space mesh size is reduced.

  6. Optimal recombination in genetic algorithms for flowshop scheduling problems

    NASA Astrophysics Data System (ADS)

    Kovalenko, Julia

    2016-10-01

    The optimal recombination problem consists in finding the best possible offspring as a result of a recombination operator in a genetic algorithm, given two parent solutions. We prove NP-hardness of optimal recombination for various variants of the flowshop scheduling problem with the makespan criterion and the criterion of maximum lateness. An algorithm for solving the optimal recombination problem for permutation flowshop problems is built, using enumeration of perfect matchings in a special bipartite graph. The algorithm is adapted for the classical flowshop scheduling problem and for the no-wait flowshop problem. It is shown that the optimal recombination problem for the permutation flowshop scheduling problem is solvable in polynomial time for almost all pairs of parent solutions as the number of jobs tends to infinity.

  7. Computational Efficiency of the Simplex Embedding Method in Convex Nondifferentiable Optimization

    NASA Astrophysics Data System (ADS)

    Kolosnitsyn, A. V.

    2018-02-01

    The simplex embedding method for solving convex nondifferentiable optimization problems is considered. Modifications of this method are described that shift the cutting plane so as to cut off the maximum number of simplex vertices; these modifications speed up the solution of the problem. A numerical comparison of the efficiency of the proposed modifications, based on the numerical solution of benchmark convex nondifferentiable optimization problems, is presented.

  8. A Geographic Optimization Approach to Coast Guard Ship Basing

    DTIC Science & Technology

    2015-06-01

    information found an optimal result for partitioning. Carlsson applies the travelling salesman problem (tries to find the shortest path to visit a list of... This thesis studies the problem of finding efficient ship base locations, area of operations (AO) among bases, and ship assignments for a coast guard (CG) organization. This problem is faced by many CGs around the world and is motivated by the need to optimize operational outcomes

  9. Semismooth Newton method for gradient constrained minimization problem

    NASA Astrophysics Data System (ADS)

    Anyyeva, Serbiniyaz; Kunisch, Karl

    2012-08-01

    In this paper we treat a gradient-constrained minimization problem, a particular case of which is the elasto-plastic torsion problem. To obtain a numerical approximation of the solution, we developed an algorithm in an infinite-dimensional space framework using the concept of generalized (Newton) differentiation. Regularization was performed to approximate the problem by an unconstrained minimization problem and to make the pointwise maximum function Newton differentiable. Using the semismooth Newton method, a continuation method was developed in function space. For the numerical implementation, the variational equations at the Newton steps are discretized using the finite element method.

  10. The TSP-approach to approximate solving the m-Cycles Cover Problem

    NASA Astrophysics Data System (ADS)

    Gimadi, Edward Kh.; Rykov, Ivan; Tsidulko, Oxana

    2016-10-01

    In the m-Cycles Cover problem it is required to find a collection of m vertex-disjoint cycles that covers all vertices of the graph such that the total weight of edges in the cover is minimum (or maximum). The problem is a generalization of the Traveling Salesman Problem and is strongly NP-hard. We discuss a TSP-approach that gives polynomial-time approximate solutions for this problem: it transforms an approximation TSP algorithm into an approximation m-CCP algorithm. In this paper we present a number of successful transformations with proven performance guarantees for the obtained solutions.

  11. Adaptive bi-level programming for optimal gene knockouts for targeted overproduction under phenotypic constraints

    PubMed Central

    2013-01-01

    Background Optimization procedures to identify gene knockouts for targeted biochemical overproduction have been widely used in modern metabolic engineering. The flux balance analysis (FBA) framework has provided conceptual simplifications for genome-scale dynamic analysis at steady states. Based on FBA, many current optimization methods for targeted bio-productions have been developed under the maximum cell growth assumption. The optimization problem to derive gene knockout strategies has recently been formulated as a bi-level programming problem in OptKnock for maximum targeted bio-productions with maximum growth rates. However, it has been shown that knockout mutants in fact reach the steady states with the minimization of metabolic adjustment (MOMA) from the corresponding wild-type strains instead of having maximal growth rates after genetic or metabolic intervention. In this work, we propose a new bi-level computational framework--MOMAKnock--which can derive robust knockout strategies under the MOMA flux distribution approximation. Methods In this new bi-level optimization framework, we aim to maximize the production of targeted chemicals by identifying candidate knockout genes or reactions under phenotypic constraints approximated by the MOMA assumption. Hence, the targeted chemical production is the primary objective of MOMAKnock while the MOMA assumption is formulated as the inner problem of constraining the knockout metabolic flux to be as close as possible to the steady-state phenotypes of wild-type strains. As this new inner problem becomes a quadratic programming problem, a novel adaptive piecewise linearization algorithm is developed in this paper to obtain the exact optimal solution to this new bi-level integer quadratic programming problem for MOMAKnock. Results Our new MOMAKnock model and the adaptive piecewise linearization solution algorithm are tested with a small E. coli core metabolic network and a large-scale iAF1260 E. coli metabolic network. The derived knockout strategies are compared with those from OptKnock. Our preliminary experimental results show that MOMAKnock can provide improved targeted productions with more robust knockout strategies. PMID:23368729

  12. Adaptive bi-level programming for optimal gene knockouts for targeted overproduction under phenotypic constraints.

    PubMed

    Ren, Shaogang; Zeng, Bo; Qian, Xiaoning

    2013-01-01

    Optimization procedures to identify gene knockouts for targeted biochemical overproduction have been widely used in modern metabolic engineering. The flux balance analysis (FBA) framework has provided conceptual simplifications for genome-scale dynamic analysis at steady states. Based on FBA, many current optimization methods for targeted bio-productions have been developed under the maximum cell growth assumption. The optimization problem to derive gene knockout strategies has recently been formulated as a bi-level programming problem in OptKnock for maximum targeted bio-productions with maximum growth rates. However, it has been shown that knockout mutants in fact reach the steady states with the minimization of metabolic adjustment (MOMA) from the corresponding wild-type strains instead of having maximal growth rates after genetic or metabolic intervention. In this work, we propose a new bi-level computational framework--MOMAKnock--which can derive robust knockout strategies under the MOMA flux distribution approximation. In this new bi-level optimization framework, we aim to maximize the production of targeted chemicals by identifying candidate knockout genes or reactions under phenotypic constraints approximated by the MOMA assumption. Hence, the targeted chemical production is the primary objective of MOMAKnock while the MOMA assumption is formulated as the inner problem of constraining the knockout metabolic flux to be as close as possible to the steady-state phenotypes of wild-type strains. As this new inner problem becomes a quadratic programming problem, a novel adaptive piecewise linearization algorithm is developed in this paper to obtain the exact optimal solution to this new bi-level integer quadratic programming problem for MOMAKnock. Our new MOMAKnock model and the adaptive piecewise linearization solution algorithm are tested with a small E. coli core metabolic network and a large-scale iAF1260 E. coli metabolic network. The derived knockout strategies are compared with those from OptKnock. Our preliminary experimental results show that MOMAKnock can provide improved targeted productions with more robust knockout strategies.
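
    The inner MOMA problem described above is a convex quadratic program: stay as close as possible, in least squares, to the wild-type flux distribution subject to steady state, flux bounds, and the knockout. A self-contained sketch on a toy network (the stoichiometric matrix, bounds, and wild-type fluxes are invented, and cvxpy's QP solver stands in for the paper's adaptive piecewise linearization):

    ```python
    import numpy as np
    import cvxpy as cp

    # Toy stoichiometric matrix: 2 metabolites x 4 reactions (assumed).
    S = np.array([[1, -1,  0,  0],
                  [0,  1, -1, -1]])
    v_wt = np.array([1.0, 1.0, 0.6, 0.4])    # wild-type flux distribution
    knocked_out = 2                           # index of the deleted reaction

    v = cp.Variable(4)
    constraints = [S @ v == 0,                # steady state
                   v >= 0, v <= 10,           # flux bounds
                   v[knocked_out] == 0]       # knockout
    # MOMA: minimal adjustment relative to the wild-type fluxes.
    prob = cp.Problem(cp.Minimize(cp.sum_squares(v - v_wt)), constraints)
    prob.solve()
    print(np.round(v.value, 3))
    ```

    In the full bi-level setting, an outer search over candidate knockouts would wrap this inner solve and score each mutant by its predicted target production.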

  13. A penny shaped crack in a filament-reinforced matrix. 2: The crack problem

    NASA Technical Reports Server (NTRS)

    Pacella, A. H.; Erdogan, F.

    1973-01-01

    The elastostatic interaction problem between a penny-shaped crack and a slender inclusion or filament in an elastic matrix was formulated. For a single filament as well as multiple identical filaments located symmetrically around the crack the problem is shown to reduce to a singular integral equation. The solution of the problem is obtained for various geometries and filament-to-matrix stiffness ratios, and the results relating to the angular variation of the stress intensity factor and the maximum filament stress are presented.

  14. A Computer Solution of the Parking Lot Problem.

    ERIC Educational Resources Information Center

    Rumble, Richard T.

    A computer program has been developed that will accept as inputs the physical description of a portion of land, and the parking design standards to be followed. The program will then give as outputs the numerical and graphical descriptions of the maximum-density parking lot for that portion of land. The problem has been treated as a standard…

  15. Let's Recycle! Lesson Plans for Grades K-6 and 7-12.

    ERIC Educational Resources Information Center

    Environmental Protection Agency, Washington, DC. Solid Waste Management Office.

    The purpose of this guide is to inform students of solid waste problems and disposal options. Lesson plans deal specifically with waste and recycling and include interdisciplinary approaches to these problems. The manual is divided in two sections - K-6 and 7-12. Activities are designed to allow the teacher maximum flexibility, and plans may be…

  16. Total maximum daily loads, sediment budgets, and tracking restoration progress of the north coast watersheds

    Treesearch

    Matthew S. Buffleben

    2012-01-01

    One of the predominate water quality problems for northern coastal California watersheds is the impairment of salmonid habitat. Most of the North Coast watersheds are listed as “impaired” under section 303(d) of Clean Water Act. The Clean Water Act requires states to address impaired waters by developing Total Maximum Daily Loads (TMDLs) or implementing...

  17. Maximum Principle in the Optimal Design of Plates with Stratified Thickness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roubicek, Tomas

    2005-03-15

    An optimal design problem for a plate governed by a linear, elliptic equation with bounded thickness varying only in a single prescribed direction and with unilateral isoperimetrical-type constraints is considered. Using Murat-Tartar's homogenization theory for stratified plates and Young-measure relaxation theory, smoothness of the extended cost and constraint functionals is proved, and then the maximum principle necessary for an optimal relaxed design is derived.

  18. Spurious Latent Class Problem in the Mixed Rasch Model: A Comparison of Three Maximum Likelihood Estimation Methods under Different Ability Distributions

    ERIC Educational Resources Information Center

    Sen, Sedat

    2018-01-01

    Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…

  19. Maximum Entropy/Optimal Projection (MEOP) control design synthesis: Optimal quantification of the major design tradeoffs

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.; Bernstein, D. S.

    1987-01-01

    The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modeling and reduced-order control design methodology for high-order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation including the effect of parameter uncertainties are reviewed. The application of the methodology to several Large Space Structures (LSS) problems of representative complexity is illustrated.

  20. Brain tumor segmentation from multimodal magnetic resonance images via sparse representation.

    PubMed

    Li, Yuhong; Jia, Fucang; Qin, Jing

    2016-10-01

    Accurately segmenting and quantifying brain gliomas from magnetic resonance (MR) images remains a challenging task because of the large spatial and structural variability among brain tumors. To develop a fully automatic and accurate brain tumor segmentation algorithm, we present a probabilistic model of multimodal MR brain tumor segmentation. This model combines sparse representation and the Markov random field (MRF) to solve the spatial and structural variability problem. We formulate the tumor segmentation problem as a multi-classification task by labeling each voxel according to the maximum posterior probability. We estimate the maximum a posteriori (MAP) probability by introducing the sparse representation into a likelihood probability and a MRF into the prior probability. Considering the MAP as an NP-hard problem, we convert the maximum posterior probability estimation into a minimum energy optimization problem and employ graph cuts to find the solution to the MAP estimation. Our method is evaluated using the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013) and obtained Dice coefficient metric values of 0.85, 0.75, and 0.69 on the high-grade Challenge data set, 0.73, 0.56, and 0.54 on the high-grade Challenge LeaderBoard data set, and 0.84, 0.54, and 0.57 on the low-grade Challenge data set for the complete, core, and enhancing regions. The experimental results show that the proposed algorithm is valid and ranks 2nd compared with the state-of-the-art tumor segmentation algorithms in the MICCAI BRATS 2013 challenge.

  1. Artificial neural networks as quantum associative memory

    NASA Astrophysics Data System (ADS)

    Hamilton, Kathleen; Schrock, Jonathan; Imam, Neena; Humble, Travis

    We present results related to the recall accuracy and capacity of Hopfield networks implemented on commercially available quantum annealers. The use of Hopfield networks and artificial neural networks as content-addressable memories offers robust storage and retrieval of classical information; however, implementation of these models using currently available quantum annealers faces several challenges: the limits of precision when setting synaptic weights, the effects of spurious spin-glass states, and minor embedding of densely connected graphs into fixed-connectivity hardware. We consider neural networks which are less than fully connected, and also consider neural networks which contain multiple sparsely connected clusters. We discuss the effect of weak edge dilution on the accuracy of memory recall, and discuss how the multiple-clique structure affects the storage capacity. Our work focuses on storage of patterns which can be embedded into physical hardware containing n < 1000 qubits. This work was supported by the United States Department of Defense and used resources of the Computational Research and Development Programs at Oak Ridge National Laboratory under Contract No. DE-AC0500OR22725 with the U.S. Department of Energy.
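
    For readers unfamiliar with the model, Hebbian storage and asynchronous recall for a small classical Hopfield network fit in a few lines; annealer-specific details such as weight precision and minor embedding are not modeled here, and the pattern sizes are arbitrary:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    patterns = rng.choice([-1, 1], size=(3, 64))   # three stored +/-1 patterns

    # Hebbian weights with no self-coupling.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)

    def recall(state, steps=2000):
        state = state.copy()
        for _ in range(steps):
            i = rng.integers(len(state))           # asynchronous single-spin update
            state[i] = 1 if W[i] @ state >= 0 else -1
        return state

    probe = patterns[0].copy()
    probe[:10] *= -1                               # corrupt 10 of 64 bits
    print("recovered:", np.array_equal(recall(probe), patterns[0]))
    ```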

  2. Graph rigidity, cyclic belief propagation, and point pattern matching.

    PubMed

    McAuley, Julian J; Caetano, Tibério S; Barbosa, Marconi S

    2008-11-01

    A recent paper [1] proposed a provably optimal polynomial time method for performing near-isometric point pattern matching by means of exact probabilistic inference in a chordal graphical model. Its fundamental result is that the chordal graph in question is shown to be globally rigid, implying that exact inference provides the same matching solution as exact inference in a complete graphical model. This implies that the algorithm is optimal when there is no noise in the point patterns. In this paper, we present a new graph that is also globally rigid but has an advantage over the graph proposed in [1]: Its maximal clique size is smaller, rendering inference significantly more efficient. However, this graph is not chordal, and thus, standard Junction Tree algorithms cannot be directly applied. Nevertheless, we show that loopy belief propagation in such a graph converges to the optimal solution. This allows us to retain the optimality guarantee in the noiseless case, while substantially reducing both memory requirements and processing time. Our experimental results show that the accuracy of the proposed solution is indistinguishable from that in [1] when there is noise in the point patterns.

  3. The Evolution of Networks in Extreme and Isolated Environment

    NASA Technical Reports Server (NTRS)

    Johnson, Jeffrey C.; Boster, James S.; Palinkas, Lawrence A.

    2000-01-01

    This article reports on the evolution of network structure as it relates to the formal and informal aspects of social roles in well bounded, isolated groups. Research was conducted at the Amundsen-Scott South Pole Station over a 3-year period. Data was collected on crewmembers' networks of social interaction and personal advice over each of the 8.5-month winters during a time of complete isolation. In addition, data was collected on informal social role structure (e.g., instrumental leadership, expressive leadership). It was hypothesized that development and maintenance of a cohesive group structure was related to the presence of and group consensus on various informal social roles. The study found that core-periphery structures (i.e., reflecting cohesion) in winter-over groups were associated with the presence of critically important informal social roles (e.g., expressive leadership) and high group consensus on such informal roles. On the other hand, the evolution of clique structures (i.e., lack of cohesion) were associated with the absence of critical roles and a lack of consensus on these roles, particularly the critically important role of instrumental leader.

  4. A network analysis of the Chinese stock market

    NASA Astrophysics Data System (ADS)

    Huang, Wei-Qiang; Zhuang, Xin-Tian; Yao, Shuang

    2009-07-01

    In many practically important cases, a massive dataset can be represented as a very large network with certain attributes associated with its vertices and edges. Stock markets generate huge amounts of data, which can be used to construct a network reflecting the market's behavior. In this paper, we use a threshold method to construct China's stock correlation network and then study the network's structural properties and topological stability. We conduct a statistical analysis of this network and show that it follows a power-law model. We also detect components, cliques and independent sets in this network. These analyses allow one to apply a new data mining technique of classifying financial instruments based on stock price data, which provides a deeper insight into the internal structure of the stock market. Moreover, we test the topological stability of this network and find that it displays topological robustness against random vertex failures but is fragile to intentional attacks. Such a network stability property would also be useful for portfolio investment and risk management.
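
    The threshold construction, and the subsequent clique detection, can be sketched end to end with networkx; the returns below are synthetic stand-ins for real price data, and the threshold value is arbitrary:

    ```python
    import numpy as np
    import networkx as nx

    # Synthetic daily returns standing in for 30 stocks over 250 days.
    rng = np.random.default_rng(9)
    returns = rng.normal(0, 0.02, size=(250, 30))
    C = np.corrcoef(returns, rowvar=False)

    theta = 0.1   # correlation threshold (assumed)
    G = nx.Graph((i, j) for i in range(30) for j in range(i + 1, 30)
                 if C[i, j] >= theta)

    # Structural features of the thresholded market graph.
    print("components:", nx.number_connected_components(G))
    print("a maximum clique:", max(nx.find_cliques(G), key=len))
    ```

    Enumerating maximal cliques is exponential in the worst case, which is why empirical market-graph studies typically work with modest vertex counts or sparse thresholds.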

  5. Use of social network analysis and global sensitivity and uncertainty analyses to better understand an influenza outbreak.

    PubMed

    Liu, Jianhua; Jiang, Hongbo; Zhang, Hao; Guo, Chun; Wang, Lei; Yang, Jing; Nie, Shaofa

    2017-06-27

    In the summer of 2014, an influenza A(H3N2) outbreak occurred in Yichang city, Hubei province, China. A retrospective study was conducted to collect and interpret hospital and epidemiological data on it using social network analysis and global sensitivity and uncertainty analyses. Results for degree (χ2=17.6619, P<0.0001) and betweenness (χ2=21.4186, P<0.0001) centrality suggested that the selection of sampling objects differed between traditional epidemiological methods and newer statistical approaches. Clique and network diagrams demonstrated that the outbreak actually consisted of two independent transmission networks. Sensitivity analysis showed that the contact coefficient (k) was the most important factor in the dynamic model. Using uncertainty analysis, we were able to better understand the properties of the outbreak and its variation over space and time. We concluded that the use of newer approaches was significantly more efficient for managing and controlling infectious disease outbreaks, as well as saving time and public health resources, and could be widely applied to similar local outbreaks.

  6. Exploiting mineral data: applications to the diversity, distribution, and social networks of copper minerals

    NASA Astrophysics Data System (ADS)

    Morrison, S. M.; Downs, R. T.; Golden, J. J.; Pires, A.; Fox, P. A.; Ma, X.; Zednik, S.; Eleish, A.; Prabhu, A.; Hummer, D. R.; Liu, C.; Meyer, M.; Ralph, J.; Hystad, G.; Hazen, R. M.

    2016-12-01

    We have developed a comprehensive database of copper (Cu) mineral characteristics. These data include crystallographic, paragenetic, chemical, locality, age, structural complexity, and physical property information for the 689 Cu mineral species approved by the International Mineralogical Association (rruff.info/ima). Synthesis of this large, varied dataset allows for in-depth exploration of statistical trends and visualization techniques. With social network analysis (SNA) and cluster analysis of minerals, we create sociograms and chord diagrams. SNA visualizations illustrate the relationships and connectivity between mineral species, which often form cliques associated with rock type and/or geochemistry. Using mineral ecology statistics, we analyze the mineral-locality frequency distribution and predict the number of missing mineral species, visualized with accumulation curves. By assembling two-dimensional KLEE diagrams of co-existing elements in minerals, we illustrate geochemical trends within a mineral system. To explore mineral age and chemical oxidation state, we create skyline diagrams and compare trends across varying chemistry. These trends illustrate mineral redox changes through geologic time and correlate with significant geologic events, such as the Great Oxidation Event (GOE) or Wilson Cycles.

  7. Structural and chemical orders in Ni64.5Zr35.5 metallic glass by molecular dynamics simulation

    DOE PAGES

    Tang, L.; Wen, T. Q.; Wang, N.; ...

    2018-03-06

    The atomic structure of Ni64.5Zr35.5 metallic glass has been investigated by molecular dynamics (MD) simulations. The calculated structure factors from the MD glassy sample at room temperature agree well with the X-ray diffraction (XRD) and neutron diffraction (ND) experimental data. Using the pairwise cluster alignment and clique analysis methods, we show that there are three types of dominant short-range order (SRO) motifs around Ni atoms in the glass sample of Ni64.5Zr35.5, i.e., Mixed-Icosahedron(ICO)-Cube, Intertwined-Cube and icosahedron-like clusters. Furthermore, chemical order and medium-range order (MRO) analysis show that the Mixed-ICO-Cube and Intertwined-Cube clusters exhibit the characteristics of the crystalline B2 phase. In conclusion, our simulation results suggest that the weak glass-forming ability (GFA) of Ni64.5Zr35.5 can be attributed to the competition between the glass forming ICO SRO and the crystalline Mixed-ICO-Cube and Intertwined-Cube motifs.

  8. Structural and chemical orders in Ni64.5Zr35.5 metallic glass by molecular dynamics simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, L.; Wen, T. Q.; Wang, N.

    The atomic structure of Ni64.5Zr35.5 metallic glass has been investigated by molecular dynamics (MD) simulations. The calculated structure factors from the MD glassy sample at room temperature agree well with the X-ray diffraction (XRD) and neutron diffraction (ND) experimental data. Using the pairwise cluster alignment and clique analysis methods, we show that there are three types of dominant short-range order (SRO) motifs around Ni atoms in the glass sample of Ni64.5Zr35.5, i.e., Mixed-Icosahedron(ICO)-Cube, Intertwined-Cube and icosahedron-like clusters. Furthermore, chemical order and medium-range order (MRO) analysis show that the Mixed-ICO-Cube and Intertwined-Cube clusters exhibit the characteristics of the crystalline B2 phase. In conclusion, our simulation results suggest that the weak glass-forming ability (GFA) of Ni64.5Zr35.5 can be attributed to the competition between the glass forming ICO SRO and the crystalline Mixed-ICO-Cube and Intertwined-Cube motifs.

  9. A contextual approach to social skills assessment in the peer group: who is the best judge?

    PubMed

    Kwon, Kyongboon; Kim, Elizabeth Moorman; Sheridan, Susan M

    2012-09-01

    Using a contextual approach to social skills assessment in the peer group, this study examined the criterion-related validity of contextually relevant social skills and the incremental validity of peers and teachers as judges of children's social skills. Study participants included 342 (180 male and 162 female) students and their classroom teachers (N = 22) from rural communities. As expected, contextually relevant social skills were significantly related to a variety of social status indicators (i.e., likability, peer- and teacher-assessed popularity, reciprocated friendships, clique centrality) and positive school functioning (i.e., school liking and academic competence). Peer-assessed social skills, not teacher-assessed social skills, demonstrated consistent incremental validity in predicting various indicators of social status outcomes; peer- and teacher-assessed social skills alike showed incremental validity in predicting positive school functioning. The relation between contextually relevant social skills and study outcomes did not vary by child gender. Findings are discussed in terms of the significance of peers in the assessment of children's social skills in the peer group as well as the usefulness of a contextual approach to social skills assessment.

  10. A low complexity visualization tool that helps to perform complex systems analysis

    NASA Astrophysics Data System (ADS)

    Beiró, M. G.; Alvarez-Hamelin, J. I.; Busch, J. R.

    2008-12-01

    In this paper, we present an extension of large network visualization (LaNet-vi), a tool to visualize large-scale networks using the k-core decomposition. One of the new features is how vertices compute their angular position. Whereas the previous version did this using shell clusters, this version uses the angular coordinates of vertices in higher k-shells and arranges the highest shell according to a clique decomposition. The time complexity goes from O(n√n) to O(n) under bounds on a heavy-tailed degree distribution. The tool also performs a k-core connectivity analysis, highlighting vertices that are not k-connected; e.g., this property is useful for measuring robustness or quality of service (QoS) capabilities in communication networks. Finally, the current version of LaNet-vi can draw labels and all the edges using transparencies, yielding an accurate visualization. Based on the obtained figure, it is possible to distinguish different sources and types of complex networks at a glance, in a sort of 'network iris-print'.
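
    A brief sketch of the k-core decomposition on which such layouts rest, using networkx rather than LaNet-vi itself (the example graph is a stand-in):

        import networkx as nx

        g = nx.karate_club_graph()
        core = nx.core_number(g)                 # shell index of each vertex
        kmax = max(core.values())
        top_shell = [v for v, k in core.items() if k == kmax]
        # The innermost shell can then be arranged by a clique decomposition:
        cliques = list(nx.find_cliques(g.subgraph(top_shell)))
        print(kmax, top_shell, cliques)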

  11. Life as an emergent phenomenon: studies from a large-scale boid simulation and web data.

    PubMed

    Ikegami, Takashi; Mototake, Yoh-Ichi; Kobori, Shintaro; Oka, Mizuki; Hashimoto, Yasuhiro

    2017-12-28

    A large group with a special structure can become the mother of emergence. We discuss this hypothesis in relation to large-scale boid simulations and web data. In the boid swarm simulations, the nucleation, organization and collapse dynamics were found to be more diverse in larger flocks than in smaller flocks. In the second analysis, large web data, consisting of shared photos with descriptive tags, tended to group together users with similar tendencies, allowing the network to develop a core-periphery structure. We show that the generation rate of novel tags and their usage frequencies are high in the higher-order cliques. In this case, novelty is not considered to arise randomly; rather, it is generated as a result of a large and structured network. We contextualize these results in terms of adjacent possible theory and as a new way to understand collective intelligence. We argue that excessive information and material flow can become a source of innovation. This article is part of the themed issue 'Reconceptualizing the origins of life'.

  12. Towards a Computational Analysis of Status and Leadership Styles on FDA Panels

    NASA Astrophysics Data System (ADS)

    Broniatowski, David A.; Magee, Christopher L.

    Decisions by committees of technical experts are increasingly impacting society. These decision-makers are typically embedded within a web of social relations. Taken as a whole, these relations define an implicit social structure which can influence the decision outcome. Aspects of this structure are founded on interpersonal affinity between parties to the negotiation, on assigned roles, and on the recognition of status characteristics, such as relevant domain expertise. This paper builds upon a methodology aimed at extracting an explicit representation of such social structures using meeting transcripts as a data source. Whereas earlier results demonstrated that the method presented here can identify groups of decision-makers with a contextual affinity (i.e., membership in a given medical specialty or voting clique), we can now extract meaningful status hierarchies and identify differing facilitation styles among committee chairs. Use of this method is demonstrated on U.S. Food and Drug Administration (FDA) advisory panel meeting transcripts; nevertheless, the approach presented here is extensible to other domains and requires only a meeting transcript as input.

  13. An incremental community detection method for social tagging systems using locality-sensitive hashing.

    PubMed

    Wu, Zhenyu; Zou, Ming

    2014-10-01

    An increasing number of users interact, collaborate, and share information through social networks. Unprecedented growth in social networks is generating a significant amount of unstructured social data. From such data, distilling communities where users have common interests and tracking variations of users' interests over time are important research tracks in fields such as opinion mining, trend prediction, and personalized services. However, these tasks are extremely difficult considering the highly dynamic characteristics of the data. Existing community detection methods are time-consuming, making it difficult to process data in real time. In this paper, dynamic unstructured data is modeled as a stream. Tag assignments stream clustering (TASC), an incremental scalable community detection method, is proposed based on locality-sensitive hashing. Both tags and latent interactions among users are incorporated in the method. In our experiments, the social dynamic behaviors of users are first analyzed. The proposed TASC method is then compared with state-of-the-art clustering methods such as StreamKmeans and incremental k-clique; results indicate that TASC can detect communities more efficiently and effectively.
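
    A toy MinHash sketch of the locality-sensitive-hashing idea underlying TASC: users with similar tag sets get similar signatures, so community candidates can be grouped by hashing signatures into buckets. The function and tag sets below are illustrative, not the paper's implementation:

        import hashlib

        def minhash_signature(tags, num_hashes=32):
            """Return a MinHash signature of a set of tags."""
            sig = []
            for seed in range(num_hashes):
                sig.append(min(
                    int(hashlib.sha1(f"{seed}:{t}".encode()).hexdigest(), 16)
                    for t in tags
                ))
            return tuple(sig)

        u1 = minhash_signature({"sunset", "beach", "travel"})
        u2 = minhash_signature({"sunset", "beach", "holiday"})
        # The fraction of matching positions estimates the Jaccard
        # similarity of the two users' tag sets
        print(sum(a == b for a, b in zip(u1, u2)) / len(u1))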

  14. Wildlife governance in the 21st century—Will sustainable use endure?

    USGS Publications Warehouse

    Decker, Daniel J.; Organ, John F.; Forstchen, Ann; Jacobson, Cynthia A.; Siemer, William F.; Smith, Christian A.; Lederle, Patrick E.; Schiavone, Michael V.

    2017-01-01

    In light of the trajectory of wildlife governance in the United States, the future of sustainable use of wildlife is a topic of substantial interest in the wildlife conservation community. We examine sustainable-use principles with respect to “good governance” considerations and public trust administration principles to assess how sustainable use might fare in the 21st century. We conclude that sustainable-use principles are compatible with recently articulated wildlife governance principles and could serve to mitigate the broad value and norm shifts in American society that affect the social acceptability of particular uses. Wildlife governance principles emphasize inclusive discourse among diverse wildlife interests, which could minimize isolated exchanges among cliques of like-minded people pursuing their ambitions without seeking opportunities to share or understand diverse views. Aligning governance practices with wildlife governance principles can help avoid such isolation. In summary, sustainable use of wildlife is likely to endure as long as society 1) believes the long-term sustainability of wildlife is not jeopardized, and 2) accepts the practices associated with such use as legitimate. These are two criteria that need constant attention.

  15. Life as an emergent phenomenon: studies from a large-scale boid simulation and web data

    NASA Astrophysics Data System (ADS)

    Ikegami, Takashi; Mototake, Yoh-ichi; Kobori, Shintaro; Oka, Mizuki; Hashimoto, Yasuhiro

    2017-11-01

    A large group with a special structure can become the mother of emergence. We discuss this hypothesis in relation to large-scale boid simulations and web data. In the boid swarm simulations, the nucleation, organization and collapse dynamics were found to be more diverse in larger flocks than in smaller flocks. In the second analysis, large web data, consisting of shared photos with descriptive tags, tended to group together users with similar tendencies, allowing the network to develop a core-periphery structure. We show that the generation rate of novel tags and their usage frequencies are high in the higher-order cliques. In this case, novelty is not considered to arise randomly; rather, it is generated as a result of a large and structured network. We contextualize these results in terms of adjacent possible theory and as a new way to understand collective intelligence. We argue that excessive information and material flow can become a source of innovation. This article is part of the themed issue 'Reconceptualizing the origins of life'.

  16. Structural and chemical orders in Ni64.5Zr35.5 metallic glass by molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Tang, L.; Wen, T. Q.; Wang, N.; Sun, Y.; Zhang, F.; Yang, Z. J.; Ho, K. M.; Wang, C. Z.

    2018-03-01

    The atomic structure of Ni64.5Zr35.5 metallic glass has been investigated by molecular dynamics (MD) simulations. The calculated structure factors from the MD glassy sample at room temperature agree well with the x-ray diffraction (XRD) and neutron diffraction (ND) experimental data. Using the pairwise cluster alignment and clique analysis methods, we show that there are three types of dominant short-range order (SRO) motifs around Ni atoms in the glass sample of Ni64.5Zr35.5, i.e., mixed-icosahedron(ICO)-cube, intertwined-cube, and icosahedronlike clusters. Furthermore, chemical order and medium-range order (MRO) analysis show that the mixed-ICO-cube and intertwined-cube clusters exhibit the characteristics of the crystalline B2 phase. Our simulation results suggest that the weak glass-forming ability (GFA) of Ni64.5Zr35.5 can be attributed to the competition between the glass forming ICO SRO and the crystalline mixed-ICO-cube and intertwined-cube motifs.

  17. LS-APC v1.0: a tuning-free method for the linear inverse problem and its application to source-term determination

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas

    2016-11-01

    Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as the product of the source-receptor sensitivity (SRS) matrix, obtained from an atmospheric transport model, and the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of the resulting optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where the advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
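
    For contrast, a minimal sketch of the conventional regularized solution of y ≈ Mx that the abstract describes, with the regularization weight `alpha` set by hand; it is exactly this manual tuning that the Bayesian method replaces by estimation from the data (all values below are invented):

        import numpy as np

        rng = np.random.default_rng(0)
        M = rng.normal(size=(40, 20))                   # SRS matrix (toy)
        x_true = np.maximum(rng.normal(size=20), 0.0)   # non-negative source term
        y = M @ x_true + 0.1 * rng.normal(size=40)      # noisy observations

        alpha = 1.0                                     # hand-tuned parameter
        x_hat = np.linalg.solve(M.T @ M + alpha * np.eye(20), M.T @ y)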

  18. Maximum Principle for General Controlled Systems Driven by Fractional Brownian Motions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han Yuecai; Hu Yaozhong; Song Jian, E-mail: jsong2@math.rutgers.edu

    2013-04-15

    We obtain a maximum principle for the stochastic control problem of general controlled stochastic differential systems driven by fractional Brownian motions (of Hurst parameter H > 1/2). This maximum principle specifies a system of equations that the optimal control must satisfy (a necessary condition for the optimal control). This system of equations consists of a backward stochastic differential equation driven by both the fractional Brownian motions and the corresponding underlying standard Brownian motions. In addition to this backward equation, the maximum principle also involves the Malliavin derivatives. Our approach is to use conditioning and Malliavin calculus. To arrive at our maximum principle we need to develop some new results of stochastic analysis of the controlled systems driven by fractional Brownian motions via fractional calculus. Our approach of conditioning and Malliavin calculus is also applied to classical systems driven by standard Brownian motions where the controller has only partial information. As a straightforward consequence, the classical maximum principle is also deduced in this more natural and simpler way.

  19. Sparsity-promoting and edge-preserving maximum a posteriori estimators in non-parametric Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Agapiou, Sergios; Burger, Martin; Dashti, Masoumeh; Helin, Tapio

    2018-04-01

    We consider the inverse problem of recovering an unknown functional parameter u in a separable Banach space from a noisy observation vector y of its image through a known, possibly non-linear map G. We adopt a Bayesian approach to the problem and consider Besov space priors (see Lassas et al., 2009, Inverse Problems Imaging 3, 87-122), which are well known for their edge-preserving and sparsity-promoting properties and have recently attracted wide attention, especially in the medical imaging community. Our key result is to show that in this non-parametric setup the maximum a posteriori (MAP) estimates are characterized by the minimizers of a generalized Onsager-Machlup functional of the posterior. This is done independently for the so-called weak and strong MAP estimates, which, as we show, coincide in our context. In addition, we prove a form of weak consistency for the MAP estimators in the infinitely informative data limit. Our results are remarkable for two reasons: first, the prior distribution is non-Gaussian and does not meet the smoothness conditions required in previous research on non-parametric MAP estimates. Second, the result analytically justifies existing uses of the MAP estimate in finite but high-dimensional discretizations of Bayesian inverse problems with the considered Besov priors.

  20. Optimal Refueling Pattern Search for a CANDU Reactor Using a Genetic Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quang Binh, DO; Gyuhong, ROH; Hangbok, CHOI

    2006-07-01

    This paper presents the results from the application of genetic algorithms to the refueling optimization of a Canada deuterium uranium (CANDU) reactor. This work aims at making a mathematical model of the refueling optimization problem, including the objective function and constraints, and developing a method based on genetic algorithms to solve the problem. The model of the optimization problem and the proposed method comply with the key features of the refueling strategy of the CANDU reactor, which adopts an on-power refueling operation. In this study, a genetic algorithm combined with an elitism strategy was used to automatically search for the refueling patterns. The objective of the optimization was to maximize the discharge burn-up of the refueling bundles, minimize the maximum channel power, or minimize the maximum change in the zone controller unit (ZCU) water levels. A combination of these objectives was also investigated. The constraints include the discharge burn-up, maximum channel power, maximum bundle power, channel power peaking factor and the ZCU water level. A refueling pattern that represents the refueling rate and channels was coded by a one-dimensional binary chromosome, a string of the binary numbers 0 and 1. A computer program was developed in FORTRAN 90 running on an HP 9000 workstation to conduct the search for the optimal refueling patterns for a CANDU reactor at the equilibrium state. The results showed that it was possible to apply genetic algorithms to automatically search for the refueling channels of the CANDU reactor. The optimal refueling patterns were compared with the solutions obtained from the AUTOREFUEL program and the results were consistent with each other. (authors)
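
    A compact sketch of a binary-chromosome genetic algorithm with elitism of the kind the abstract describes, applied to a stand-in fitness function (a real implementation would score discharge burn-up and power constraints via a core simulation):

        import random

        def evolve(fitness, n_bits=20, pop_size=30, gens=50, p_mut=0.02):
            pop = [[random.randint(0, 1) for _ in range(n_bits)]
                   for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=fitness, reverse=True)
                elite = pop[:2]                        # elitism: keep the best
                children = []
                while len(children) < pop_size - len(elite):
                    a, b = random.sample(pop[:10], 2)  # select among the fittest
                    cut = random.randrange(1, n_bits)  # one-point crossover
                    child = a[:cut] + b[cut:]
                    child = [bit ^ (random.random() < p_mut) for bit in child]
                    children.append(child)
                pop = elite + children
            return max(pop, key=fitness)

        # Stand-in fitness: number of selected channels (toy objective)
        print(evolve(sum))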

  1. Should fluid dynamics be included in computer models of RF cardiac ablation by irrigated-tip electrodes?

    PubMed

    González-Suárez, Ana; Pérez, Juan J; Berjano, Enrique

    2018-04-20

    Although accurate modeling of the thermal performance of irrigated-tip electrodes in radiofrequency cardiac ablation requires the solution of a triple coupled problem involving simultaneous electrical conduction, heat transfer, and fluid dynamics, in certain cases it is difficult to combine the software with the expertise necessary to solve these coupled problems, so that reduced models have to be considered. We here focus on a reduced model which avoids the fluid dynamics problem by setting a constant temperature at the electrode tip. Our aim was to compare the reduced and full models in terms of predicting lesion dimensions and the temperatures reached in tissue and blood. The results showed that the reduced model overestimates the lesion surface width by up to 5 mm (i.e. 70%) for any electrode insertion depth and blood flow rate. Likewise, it drastically overestimates the maximum blood temperature by more than 15 °C in all cases. However, the reduced model is able to predict lesion depth reasonably well (within 0.1 mm of the full model), and also the maximum tissue temperature (difference always less than 3 °C). These results were valid throughout the entire ablation time (60 s) and regardless of blood flow rate and electrode insertion depth (ranging from 0.5 to 1.5 mm). The findings suggest that the reduced model is not able to predict either the lesion surface width or the maximum temperature reached in the blood, and so would not be suitable for the study of issues related to blood temperature, such as the incidence of thrombus formation during ablation. However, it could be used to study issues related to maximum tissue temperature, such as the steam pop phenomenon.

  2. Procedures for estimating the frequency of commercial airline flights encountering high cabin ozone levels

    NASA Technical Reports Server (NTRS)

    Holdeman, J. D.

    1979-01-01

    Three analytical problems in estimating the frequency at which commercial airline flights will encounter high cabin ozone levels are formulated and solved: namely, estimating flight-segment mean levels, estimating maximum-per-flight levels, and estimating the maximum average level over a specified flight interval. For each problem, solution procedures are given for different levels of input information - from complete cabin ozone data, which provides a direct solution, to limited ozone information, such as ambient ozone means and standard deviations, with which several assumptions are necessary to obtain the required estimates. Each procedure is illustrated by an example case calculation that uses simultaneous cabin and ambient ozone data obtained by the NASA Global Atmospheric Sampling Program. Critical assumptions are discussed and evaluated, and the several solutions for each problem are compared. Example calculations are also performed to illustrate how variations in latitude, altitude, season, retention ratio, flight duration, and cabin ozone limits affect the estimated probabilities.
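
    A sketch of the limited-information case: given only an ambient ozone mean and standard deviation, a normality assumption, and a fixed cabin retention ratio, the exceedance frequency follows from the normal tail. The numbers are illustrative, not from the NASA data:

        from scipy.stats import norm

        ambient_mean, ambient_sd = 400.0, 150.0   # ppb, assumed
        retention_ratio = 0.7                     # cabin/ambient, assumed
        limit = 250.0                             # cabin ozone limit, ppb

        cabin_mean = retention_ratio * ambient_mean
        cabin_sd = retention_ratio * ambient_sd
        p_exceed = norm.sf(limit, loc=cabin_mean, scale=cabin_sd)
        print(f"Estimated fraction of flights above the limit: {p_exceed:.3f}")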

  3. A Solution to Separation and Multicollinearity in Multiple Logistic Regression

    PubMed Central

    Shen, Jianzhao; Gao, Sujuan

    2010-01-01

    In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27–38) proposed a penalized likelihood estimator for generalized linear models, which was shown to reduce bias and the non-existence problems. Ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither method solves both problems. In this paper, we propose a double penalized maximum likelihood estimator combining Firth’s penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study. PMID:20376286
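
    A hedged numerical sketch of a double-penalized log-likelihood of this kind, combining Firth's Jeffreys-prior term 0.5*log|X'WX| with a ridge term; the data and ridge parameter below are invented, and this is a toy stand-in rather than the authors' estimator:

        import numpy as np
        from scipy.optimize import minimize

        def neg_double_penalized_loglik(beta, X, y, lam):
            eta = X @ beta
            p = 1.0 / (1.0 + np.exp(-eta))
            loglik = np.sum(y * eta - np.log1p(np.exp(eta)))
            W = p * (1.0 - p)                     # Fisher weights
            info = X.T @ (W[:, None] * X)         # Fisher information X'WX
            firth = 0.5 * np.linalg.slogdet(info)[1]
            ridge = 0.5 * lam * beta @ beta
            return -(loglik + firth - ridge)

        rng = np.random.default_rng(1)
        X = rng.normal(size=(50, 3))
        y = (X[:, 0] + 0.5 * rng.normal(size=50) > 0).astype(float)
        fit = minimize(neg_double_penalized_loglik, np.zeros(3), args=(X, y, 1.0))
        print(fit.x)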

  4. A Solution to Separation and Multicollinearity in Multiple Logistic Regression.

    PubMed

    Shen, Jianzhao; Gao, Sujuan

    2008-10-01

    In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27-38) proposed a penalized likelihood estimator for generalized linear models, which was shown to reduce bias and the non-existence problems. Ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither method solves both problems. In this paper, we propose a double penalized maximum likelihood estimator combining Firth's penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study.

  5. Programmer's manual for MMLE3, a general FORTRAN program for maximum likelihood parameter estimation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.

    1981-01-01

    MMLE3 is a maximum likelihood parameter estimation program capable of handling general bilinear dynamic equations of arbitrary order with measurement noise and/or state noise (process noise). The basic MMLE3 program is quite general and, therefore, applicable to a wide variety of problems. The basic program can interact with a set of user-written, problem-specific routines to simplify the use of the program on specific systems. A set of user routines for the aircraft stability and control derivative estimation problem is provided with the program. The implementation of the program on specific computer systems is discussed. The structure of the program is diagrammed, and the function and operation of individual routines are described. Complete listings and reference maps of the routines are included on microfiche as a supplement. Four test cases are discussed; listings of the input cards and program output for the test cases are included on microfiche as a supplement.

  6. Thermal design of composite material high temperature attachments

    NASA Technical Reports Server (NTRS)

    1972-01-01

    An evaluation has been made of the thermal aspects of utilizing advanced filamentary composite materials as primary structures on the shuttle vehicle. The technical objectives of this study are to: (1) establish design concepts for maintaining material temperatures within allowable limits at TPS attachments and/or penetrations applicable to the space shuttle; and (2) verify the thermal design analysis by testing selected concepts. Specific composite materials being evaluated are boron/epoxy, graphite/epoxy, boron/polyimide, and boron/aluminum; graphite/polyimide has been added to this list for property data identification and preliminary evaluation of thermal design problems. The over-temperature problem at the TPS standoff-to-composite-structure attachment is directly related to the TPS maximum surface temperature. To provide a thermally comprehensive evaluation of attachment temperature characteristics, maximum surface temperatures of 900 F, 1200 F, 1800 F, 2500 F and 3000 F are considered in this study. This range of surface temperatures and the high and low maximum temperature capabilities of the selected composite materials will result in a wide range of thermal requirements for composite/TPS standoff attachments.

  7. The Maximum Mass of a Planet

    NASA Astrophysics Data System (ADS)

    Schlaufman, Kevin C.

    2018-06-01

    Giant planet occurrence is a steeply increasing function of FGK dwarf host star metallicity, and this is interpreted as support for the core-accretion model of giant planet formation. On the other hand, the occurrence of low-mass stellar companions to FGK dwarf stars does not appear to depend on stellar metallicity. The mass at which objects no longer prefer metal-rich FGK dwarf host stars can therefore be used to infer the maximum mass of objects that form like planets through core accretion. I'll show that objects more massive than about 10 M_Jup do not orbit metal-rich host stars and that this transition is coincident with a minimum in the occurrence rate of such objects. These facts suggest that the maximum mass of a celestial body formed through core accretion like a planet is less than 10 M_Jup. This observation can be used to infer the properties of protoplanetary disks and reveals that the Type I and Type II disk migration problems---two major issues for the modern model of planet formation---are not problems at all.

  8. On the maximum energy achievable in the first order Fermi acceleration at shocks

    NASA Astrophysics Data System (ADS)

    Grozny, I.; Diamond, P.; Malkov, M.

    2002-11-01

    Astrophysical shocks are considered as the sites of cosmic ray (CR) production. The primary mechanism is the diffusive shock (Fermi) acceleration which operates via multiple shock recrossing by a particle. Its efficiency, the rate of energy gain, and the maximum energy are thus determined by the transport mechanisms (confinement to the shock) of these particles in a turbulent shock environment. The turbulence is believed to be generated by accelerated particles themselves. Moreover, in the most interesting case of efficient acceleration the entire MHD shock structure is dominated by their pressure. This makes this problem one of the challenging strongly nonlinear problems of astrophysics. We suggest a physical model that describes particle acceleration, shock structure and the CR driven turbulence on an equal footing. The key new element in this scheme is nonlinear cascading of the MHD turbulence on self-excited (via modulational and Drury instability) sound-like perturbations which gives rise to a significant enrichment of the long wave part of the MHD spectrum. This is critical for the calculation of the maximum energy.

  9. Stochastic control system parameter identifiability

    NASA Technical Reports Server (NTRS)

    Lee, C. H.; Herget, C. J.

    1975-01-01

    The parameter identification problem for general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian white measurement errors is considered. The system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.

  10. Integrated Network Decompositions and Dynamic Programming for Graph Optimization (INDDGO)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The INDDGO software package offers a set of tools for finding exact solutions to graph optimization problems via tree decompositions and dynamic programming algorithms. Currently the framework offers serial and parallel (distributed memory) algorithms for finding tree decompositions and solving the maximum weighted independent set problem. The parallel dynamic programming algorithm is implemented on top of the MADNESS task-based runtime.
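
    As a flavor of the dynamic programming involved, here is the classic maximum weighted independent set recursion on a tree, the simplest case of the tree-decomposition DP that INDDGO generalizes (the tree and weights are invented):

        def mwis_tree(tree, weights, root):
            """tree: node -> list of children; weights: node -> weight."""
            take, skip = {}, {}

            def solve(v):
                take[v], skip[v] = weights[v], 0
                for c in tree.get(v, []):
                    solve(c)
                    take[v] += skip[c]                  # v taken: children skipped
                    skip[v] += max(take[c], skip[c])    # v skipped: children free

            solve(root)
            return max(take[root], skip[root])

        tree = {"r": ["a", "b"], "a": ["c", "d"]}
        weights = {"r": 3, "a": 4, "b": 2, "c": 5, "d": 1}
        print(mwis_tree(tree, weights, "r"))   # 9, from the set {r, c, d}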

  11. Are Exaggerated Health Complaints Continuous or Categorical? A Taxometric Analysis of the Health Problem Overstatement Scale

    ERIC Educational Resources Information Center

    Walters, Glenn D.; Berry, David T. R.; Lanyon, Richard I.; Murphy, Michael P.

    2009-01-01

    A taxometric analysis of 3 factor scales extracted from the Health Problem Overstatement (HPO) scale of the Psychological Screening Inventory (PSI; R. I. Lanyon, 1970, 1978) was performed on the data from 1,240 forensic and psychiatric patients. Mean above minus below a cut, maximum covariance, and latent-mode factor analyses produced results…

  12. Problem-Based Learning on Students' Critical Thinking Skills in Teaching Business Education in Malaysia: A Literature Review

    ERIC Educational Resources Information Center

    Zabit, Mohd Nazir Md

    2010-01-01

    This review forms the background to explore and to gain empirical support among lecturers to improve the students' critical thinking skills in business education courses in Malaysia, in which the main teaching and learning methodology is Problem-Based Learning (PBL). The PBL educational approach is known to have maximum positive impacts in…

  13. Examination of the Views of High School Teachers and Students with Regard to Discipline Perception and Discipline Problems

    ERIC Educational Resources Information Center

    Sadik, Fatma; Yalcin, Onur

    2018-01-01

    This research is a qualitative study comparatively examining the views of high school teachers and students with regard to discipline perception and discipline problems. The study was carried out at a vocational school during the 2014/2015 school term. Maximum diversity and criterion sampling methods were followed for the formation of the study…

  14. Approximated analytical solution to an Ebola optimal control problem

    NASA Astrophysics Data System (ADS)

    Hincapié-Palacio, Doracelly; Ospina, Juan; Torres, Delfim F. M.

    2016-11-01

    An analytical expression for the optimal control of an Ebola problem is obtained. The analytical solution is found as a first-order approximation to the Pontryagin Maximum Principle via the Euler-Lagrange equation. An implementation of the method is given using the computer algebra system Maple. Our analytical solutions confirm the results recently reported in the literature using numerical methods.
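
    The paper derives its solution symbolically in Maple; an analogous Euler-Lagrange computation in Python's sympy, for an illustrative quadratic Lagrangian rather than the Ebola model itself, looks like this:

        import sympy as sp
        from sympy.calculus.euler import euler_equations

        t = sp.symbols("t")
        u = sp.Function("u")
        # Illustrative quadratic cost: (1/2) u'^2 + (1/2) u^2
        L = sp.Rational(1, 2) * u(t).diff(t) ** 2 + sp.Rational(1, 2) * u(t) ** 2
        print(euler_equations(L, u(t), t))
        # [Eq(u(t) - Derivative(u(t), (t, 2)), 0)]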

  15. Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images.

    PubMed

    Elad, M; Feuer, A

    1997-01-01

    The three main tools in the single image restoration theory are the maximum likelihood (ML) estimator, the maximum a posteriori probability (MAP) estimator, and the set theoretic approach using projection onto convex sets (POCS). This paper utilizes the above known tools to propose a unified methodology toward the more complicated problem of superresolution restoration. In the superresolution restoration problem, an improved resolution image is restored from several geometrically warped, blurred, noisy and downsampled measured images. The superresolution restoration problem is modeled and analyzed from the ML, the MAP, and POCS points of view, yielding a generalization of the known superresolution restoration methods. The proposed restoration approach is general but assumes explicit knowledge of the linear space- and time-variant blur, the (additive Gaussian) noise, the different measured resolutions, and the (smooth) motion characteristics. A hybrid method combining the simplicity of the ML and the incorporation of nonellipsoid constraints is presented, giving improved restoration performance, compared with the ML and the POCS approaches. The hybrid method is shown to converge to the unique optimal solution of a new definition of the optimization problem. Superresolution restoration from motionless measurements is also discussed. Simulations demonstrate the power of the proposed methodology.

  16. Shoulder pain in hemiplegia.

    PubMed

    Andersen, L T

    1985-01-01

    Development of a painful shoulder in the hemiplegic patient is a significant and serious problem, because it can limit the patient's ability to reach his or her maximum functional potential. Several etiologies of shoulder pain have been identified, such as immobilization of the upper extremity, trauma to the joint structures, including brachial plexus injuries, and subluxation of the gleno-humeral joint. A review of the literature explains the basic anatomy and kinesiology of the shoulder complex, the various etiologies of hemiplegic shoulder pain, and the pros and cons of specific treatment techniques. This knowledge is essential for the occupational therapist to evaluate effectively techniques used to treat the patient with hemiplegic shoulder pain. More effective management of this problem will facilitate the patient's ability to reach his or her maximum functional potential.

  17. Positivity results for indefinite sublinear elliptic problems via a continuity argument

    NASA Astrophysics Data System (ADS)

    Kaufmann, U.; Ramos Quoirin, H.; Umezu, K.

    2017-10-01

    We establish a positivity property for a class of semilinear elliptic problems involving indefinite sublinear nonlinearities. Namely, we show that any nontrivial nonnegative solution is positive for a class of problems to which the strong maximum principle does not apply. Our approach is based on a continuity argument combined with variational techniques, the sub- and supersolutions method, and some a priori bounds. Both Dirichlet and Neumann homogeneous boundary conditions are considered. As a byproduct, we deduce some existence and uniqueness results. Finally, as an application, we derive some positivity results for indefinite concave-convex-type problems.

  18. A parallel-machine scheduling problem with two competing agents

    NASA Astrophysics Data System (ADS)

    Lee, Wen-Chiung; Chung, Yu-Hsiang; Wang, Jen-Ya

    2017-06-01

    Scheduling with two competing agents has become popular in recent years. Most of the research has focused on single-machine problems. This article considers a parallel-machine problem, the objective of which is to minimize the total completion time of jobs from the first agent given that the maximum tardiness of jobs from the second agent cannot exceed an upper bound. The NP-hardness of this problem is also examined. A genetic algorithm equipped with local search is proposed to search for the near-optimal solution. Computational experiments are conducted to evaluate the proposed genetic algorithm.

  19. Assessment of voice, speech and communication changes associated with cervical spinal cord injury.

    PubMed

    Johansson, Kerstin; Seiger, Åke; Forsén, Malin; Holmgren Nilsson, Jeanette; Hartelius, Lena; Schalling, Ellika

    2018-02-24

    Respiratory muscle impairment following cervical spinal cord injury (CSCI) may lead to reduced voice function, although the individual variation is large. Voice problems in this population may not always receive attention, since individuals with CSCI face other, more acute and life-threatening issues that demand attention. Currently there is no consensus on the tasks suitable for identifying the specific voice impairments and functional voice changes experienced by individuals with CSCI. To examine which voice/speech tasks identify the specific voice and communication changes associated with CSCI, the habitual and maximum speech performance of a group with CSCI was compared with that of a healthy control group (CG), and the findings were related to respiratory function and to self-reported voice problems. Respiratory, aerodynamic, acoustic and self-reported voice data from 19 individuals (nine women and 10 men, aged 23-59 years, heights 153-192 cm) with CSCI (levels C3-C7) were compared with data from a CG consisting of 19 carefully matched non-injured people (nine women and 10 men, aged 19-59 years, heights 152-187 cm). Despite considerable variability of performance, highly significant differences between the group with CSCI and the CG were found in maximum phonation time, maximum duration of breath phrases, maximum sound pressure level and maximum voice area in voice-range profiles (all p = .000). Subglottal pressure was lower and phonatory stability was reduced in some of the individuals with CSCI, but differences between the groups were not statistically significant. Six of 19 had voice handicap index (VHI) scores above 20 (the cut-off for voice disorder). Individuals with a vital capacity below 50% of that expected for an equivalent reference individual performed significantly worse than participants with more normal vital capacity. Completeness and level of injury seemed to affect vocal function in some individuals. A combination of maximum performance speech tasks, respiratory tasks and self-reported information on voice problems helps to identify individuals with reduced voice function following CSCI. Early identification of individuals with voice changes post-CSCI, and the introduction of appropriate rehabilitation strategies, may help to minimize the development of maladaptive voice behaviours such as vocal strain, which can lead to further impairments and limitations to communication participation.

  20. Maximum Relative Entropy of Coherence: An Operational Coherence Measure.

    PubMed

    Bu, Kaifeng; Singh, Uttam; Fei, Shao-Ming; Pati, Arun Kumar; Wu, Junde

    2017-10-13

    The operational characterization of quantum coherence is the cornerstone in the development of the resource theory of coherence. We introduce a new coherence quantifier based on maximum relative entropy. We prove that the maximum relative entropy of coherence is directly related to the maximum overlap with maximally coherent states under a particular class of operations, which provides an operational interpretation of the maximum relative entropy of coherence. Moreover, we show that, for any coherent state, there are examples of subchannel discrimination problems such that this coherent state allows for a higher probability of successfully discriminating subchannels than that of all incoherent states. This advantage of coherent states in subchannel discrimination can be exactly characterized by the maximum relative entropy of coherence. By introducing a suitable smooth maximum relative entropy of coherence, we prove that the smooth maximum relative entropy of coherence provides a lower bound of one-shot coherence cost, and the maximum relative entropy of coherence is equivalent to the relative entropy of coherence in the asymptotic limit. Similar to the maximum relative entropy of coherence, the minimum relative entropy of coherence has also been investigated. We show that the minimum relative entropy of coherence provides an upper bound of one-shot coherence distillation, and in the asymptotic limit the minimum relative entropy of coherence is equivalent to the relative entropy of coherence.
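
    For reference, the quantifier discussed here is built from the max-relative entropy; since the abstract does not restate the definition, the standard form from the resource-theory literature is given below, where I denotes the set of incoherent states:

        C_{\max}(\rho) = \min_{\sigma \in \mathcal{I}} D_{\max}(\rho \| \sigma),
        \qquad
        D_{\max}(\rho \| \sigma) = \log \min \{ \lambda \ge 0 : \rho \le \lambda \sigma \}.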

  1. The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions

    NASA Astrophysics Data System (ADS)

    Loaiciga, Hugo A.; Mariño, Miguel A.

    1987-01-01

    The contributions of this work are twofold. First, a methodology is developed for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship among the known heads and unknown parameters of the flow equation provides the background for developing criteria for determining the identifiability status of unknown parameters. Under conditions of exact or overidentification it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely, two-stage least squares and three-stage least squares, are applied to a specific groundwater inverse problem and compared with each other and with an ordinary least squares estimator. The three-stage estimator provides the closest approximation to the actual parameter values, but it also shows relatively large standard errors compared to the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and perform hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, i.e., a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and extension of the developed theory to nonlinear cases, are also discussed. A simulation study is used to evaluate the methods developed in this study.
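
    A compact numpy sketch of the two-stage least squares step (the instruments, regressors, and data below are invented placeholders, not the head/parameter matrices of the aquifer problem):

        import numpy as np

        rng = np.random.default_rng(2)
        Z = rng.normal(size=(100, 3))                 # instruments
        X = Z @ rng.normal(size=(3, 2)) + 0.1 * rng.normal(size=(100, 2))
        beta_true = np.array([1.5, -0.7])
        y = X @ beta_true + 0.1 * rng.normal(size=100)

        # Stage 1: project the regressors onto the instrument space
        X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
        # Stage 2: regress y on the projected regressors
        beta_2sls = np.linalg.lstsq(X_hat, y, rcond=None)[0]
        print(beta_2sls)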

  2. Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction

    PubMed Central

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, the performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with the performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms the EM-TV in all aspects, including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835
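
    As a generic illustration of the alternating-projection idea (not the ECT algorithm itself, which projects via proximity operators of the TV-norm and the constraint), here is POCS on two simple convex sets, a box and a hyperplane:

        import numpy as np

        def project_box(x, lo=0.0, hi=1.0):
            return np.clip(x, lo, hi)

        def project_hyperplane(x, a, b):
            # Orthogonal projection onto {x : a.x = b}
            return x - (a @ x - b) / (a @ a) * a

        a, b = np.array([1.0, 2.0, 3.0]), 2.0
        x = np.array([5.0, -4.0, 2.5])
        for _ in range(100):                 # alternate the two projections
            x = project_box(project_hyperplane(x, a, b))
        print(x, a @ x)                      # x lies in the box, a.x is near b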

  3. Approximating Multilinear Monomial Coefficients and Maximum Multilinear Monomials in Multivariate Polynomials

    NASA Astrophysics Data System (ADS)

    Chen, Zhixiang; Fu, Bin

    This paper is our third step towards developing a theory of testing monomials in multivariate polynomials and concentrates on two problems: (1) how to compute the coefficients of multilinear monomials; and (2) how to find a maximum multilinear monomial when the input is a ΠΣΠ polynomial. We first prove that the first problem is #P-hard and then devise an O*(3^n s(n)) upper bound for this problem for any polynomial represented by an arithmetic circuit of size s(n). Later, this upper bound is improved to O*(2^n) for ΠΣΠ polynomials. We then design fully polynomial-time randomized approximation schemes for this problem for ΠΣ polynomials. On the negative side, we prove that, even for ΠΣΠ polynomials with terms of degree ≤ 2, the first problem cannot be approximated at all for any approximation factor ≥ 1, nor "weakly approximated" in a much relaxed setting, unless P = NP. For the second problem, we first give a polynomial-time λ-approximation algorithm for ΠΣΠ polynomials with terms of degree no more than a constant λ ≥ 2. On the inapproximability side, we give an n^((1-ε)/2) lower bound, for any ε > 0, on the approximation factor for ΠΣΠ polynomials. When the degrees of the terms in these polynomials are constrained to ≤ 2, we prove a 1.0476 lower bound, assuming P ≠ NP, and a higher 1.0604 lower bound, assuming the Unique Games Conjecture.

  4. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    NASA Astrophysics Data System (ADS)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-11-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the PAPA. In numerical experiments, the performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with the performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms the EM-TV in all aspects, including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality.

  5. Applying Graph Theory to Problems in Air Traffic Management

    NASA Technical Reports Server (NTRS)

    Farrahi, Amir Hossein; Goldbert, Alan; Bagasol, Leonard Neil; Jung, Jaewoo

    2017-01-01

    Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it is shown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem, are NP-hard, in general, under several simple workload models. Second, using a polynomial-time reduction from maximum independent set in graphs, it is shown that for any fixed ε > 0, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within n^(1-ε) of the optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.
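
    A toy sketch of the reachability formulation mentioned for arrival scheduling: states encode partial schedules, edges encode feasible transitions, and a feasible schedule exists exactly when the goal state is reachable (the state names below are invented):

        import networkx as nx

        g = nx.DiGraph()
        g.add_edges_from([
            ("start", "AC1@slot1"), ("start", "AC1@slot2"),
            ("AC1@slot1", "AC2@slot2"), ("AC1@slot2", "AC2@slot3"),
            ("AC2@slot2", "done"), ("AC2@slot3", "done"),
        ])
        print(nx.has_path(g, "start", "done"))
        print(nx.shortest_path(g, "start", "done"))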

  6. Applying Graph Theory to Problems in Air Traffic Management

    NASA Technical Reports Server (NTRS)

    Farrahi, Amir H.; Goldberg, Alan T.; Bagasol, Leonard N.; Jung, Jaewoo

    2017-01-01

    Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it is shown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem, are NP-hard, in general, under several simple workload models. Second, using a polynomial-time reduction from maximum independent set in graphs, it is shown that for any fixed ε > 0, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within n^(1-ε) of the optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.

  7. The Influence of the Form of a Wooden Beam on Its Stiffness and Strength III : Stresses in Wood Members Subjected to Combined Column and Beam Action

    NASA Technical Reports Server (NTRS)

    Newlin, J A; Trayer, G W

    1925-01-01

    The general purpose in this study was to determine the stresses in a wooden member subjected to combined beam and column action. What may be considered the specific purpose, as it relates more directly to the problem of design, was to determine the particular stress that obtains at maximum load which, for combined loading, does not occur simultaneously with maximum stress.

  8. On Bipartite Graphs, Trees, and Their Partial Vertex Covers.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caskurlu, Bugra; Mkrtchyan, Vahan; Parekh, Ojas D.

    2015-03-01

    Graphs can be used to model risk management in various systems. In particular, Caskurlu et al. in [7] have considered a system with threats, vulnerabilities and assets, which essentially represents a tripartite graph. The goal in this model is to reduce the risk in the system below a predefined threshold level, either by restricting the permissions of users or by encapsulating the system assets. These two strategies correspond to deleting a minimum number of elements corresponding to vulnerabilities and assets, such that the flow between threats and assets is reduced below the predefined threshold level. It can be shown that the main goal in this risk management system can be formulated as a Partial Vertex Cover problem on bipartite graphs. It is well known that the Vertex Cover problem is in P on bipartite graphs; however, the computational complexity of the Partial Vertex Cover problem on bipartite graphs has remained open. In this paper, we establish that the Partial Vertex Cover problem is NP-hard on bipartite graphs, which was also recently independently demonstrated [N. Apollonio and B. Simeone, Discrete Appl. Math., 165 (2014), pp. 37–48; G. Joret and A. Vetta, preprint, arXiv:1211.4853v1 [cs.DS], 2012]. We then identify interesting special cases of bipartite graphs for which the Partial Vertex Cover problem, the closely related Budgeted Maximum Coverage problem, and their weighted extensions can be solved in polynomial time. We also present an 8/9-approximation algorithm for the Budgeted Maximum Coverage problem in the class of bipartite graphs. We show that this matches and resolves the integrality gap of the natural LP relaxation of the problem and improves upon a recent 4/5-approximation.
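
    For orientation, here is the textbook greedy heuristic for Budgeted Maximum Coverage, choosing sets by marginal coverage per unit cost; this illustrates the problem itself, not the paper's 8/9-approximation algorithm, and the instance is invented:

        def greedy_budgeted_coverage(sets, costs, budget):
            covered, chosen, spent = set(), [], 0.0
            while True:
                best, best_ratio = None, 0.0
                for name, elems in sets.items():
                    if name in chosen or spent + costs[name] > budget:
                        continue
                    ratio = len(elems - covered) / costs[name]
                    if ratio > best_ratio:
                        best, best_ratio = name, ratio
                if best is None:
                    return chosen, covered
                chosen.append(best)
                covered |= sets[best]
                spent += costs[best]

        sets = {"S1": {1, 2, 3}, "S2": {3, 4}, "S3": {4, 5, 6, 7}}
        costs = {"S1": 2.0, "S2": 1.0, "S3": 3.0}
        print(greedy_budgeted_coverage(sets, costs, budget=4.0))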

  9. Finite Optimal Stopping Problems: The Seller's Perspective

    ERIC Educational Resources Information Center

    Hemmati, Mehdi; Smith, J. Cole

    2011-01-01

    We consider a version of an optimal stopping problem, in which a customer is presented with a finite set of items, one by one. The customer is aware of the number of items in the finite set and the minimum and maximum possible value of each item, and must purchase exactly one item. When an item is presented to the customer, she or he observes its…

  10. Multi-period equilibrium/near-equilibrium in electricity markets based on locational marginal prices

    NASA Astrophysics Data System (ADS)

    Garcia Bertrand, Raquel

    In this dissertation we propose an equilibrium procedure that coordinates the points of view of all market agents, resulting in an equilibrium that simultaneously maximizes each agent's independent objective and satisfies network constraints. To this end, the activities of the generating companies, consumers and an independent system operator are modeled: (1) The generating companies seek to maximize profits by specifying hourly step functions of productions and minimum selling prices, and bounds on productions. (2) The goals of the consumers are to maximize their economic utilities by specifying hourly step functions of demands and maximum buying prices, and bounds on demands. (3) The independent system operator then clears the market taking into account consistency conditions as well as capacity limits and line losses so as to achieve maximum social welfare. We approach this equilibrium problem using complementarity theory in order to be able to impose constraints on dual variables, i.e., on prices, such as minimum profit conditions for the generating units or maximum cost conditions for the consumers. In this way, given the form of the individual optimization problems, the Karush-Kuhn-Tucker conditions for the generating companies, the consumers and the independent system operator are both necessary and sufficient. The simultaneous solution of all these conditions constitutes a mixed linear complementarity problem. We include minimum profit constraints imposed by the units in the market equilibrium model. These constraints are added as additional constraints to the quadratic programming problem equivalent to the mixed linear complementarity problem described above. For the sake of clarity, the proposed equilibrium or near-equilibrium is first developed for the particular case of a single time period. Afterwards, we consider an equilibrium or near-equilibrium in a multi-period framework. This model embodies binary decisions, i.e., on/off status for the units, and therefore optimality conditions cannot be applied directly. To avoid the limitations caused by binary variables, while retaining the advantages of using optimality conditions, we define the multi-period market equilibrium using Benders decomposition, which computes the binary variables in the master problem and the continuous variables in the subproblem. Finally, we illustrate these market equilibrium concepts through several case studies.

  11. Multiobjective optimization in a pseudometric objective space as applied to a general model of business activities

    NASA Astrophysics Data System (ADS)

    Khachaturov, R. V.

    2016-09-01

    It is shown that finding the equivalence set for solving multiobjective discrete optimization problems is advantageous over finding the set of Pareto optimal decisions. An example of a set of key parameters characterizing the economic efficiency of a commercial firm is proposed, and a mathematical model of its activities is constructed. In contrast to the classical problem of finding the maximum profit for any business, this study deals with a multiobjective optimization problem. A method for solving inverse multiobjective problems in a multidimensional pseudometric space is proposed for finding the best project of the firm's activities. The solution of a particular problem of this type is presented.
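
    For context, the Pareto baseline the record argues against can be computed with a simple dominance filter; the sketch below assumes all objectives are maximized and uses made-up project outcomes.

    ```python
    import numpy as np

    def pareto_front(points):
        """Return the indices of non-dominated rows (maximizing all columns).

        A row is dominated if another row is >= everywhere and > somewhere.
        """
        pts = np.asarray(points, dtype=float)
        keep = []
        for i, p in enumerate(pts):
            dominated = np.any(
                np.all(pts >= p, axis=1) & np.any(pts > p, axis=1)
            )
            if not dominated:
                keep.append(i)
        return keep

    # Hypothetical (profit, market share) outcomes for candidate projects.
    outcomes = [(3.0, 0.2), (2.5, 0.4), (3.0, 0.1), (1.0, 0.5)]
    print(pareto_front(outcomes))  # [0, 1, 3]
    ```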

  12. RAID/C90 Technology Integration

    NASA Technical Reports Server (NTRS)

    Ciotti, Bob; Cooper, D. M. (Technical Monitor)

    1994-01-01

    In March 1993, NAS was the first to connect a Maximum Strategy RAID disk to the C90 using standard Cray provided software. This paper discusses the problems encountered, lessons learned, and performance achieved.

  13. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies

    PubMed Central

    Rukhin, Andrew L.

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed. PMID:26989583

  14. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

    PubMed

    Rukhin, Andrew L

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the method variances are considered known, an upper bound on the between-method variance is obtained. The relationship between likelihood equations and moment-type equations is also discussed.

  15. Unification of field theory and maximum entropy methods for learning probability densities

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
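
    A toy illustration of maximum entropy density estimation (not the paper's Bayesian field theory): fit p(x) proportional to exp(a*x + b*x^2) on a grid by minimizing the convex dual of the moment-matching problem. The data are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    data = rng.normal(loc=1.0, scale=0.5, size=500)   # synthetic sample
    x = np.linspace(-2.0, 4.0, 400)                   # estimation grid
    dx = x[1] - x[0]
    target = np.array([data.mean(), (data**2).mean()])

    def dual(lam):
        """Convex dual of the max-entropy problem with two moment
        constraints: log Z(lam) - lam . target, minimized over lam."""
        logits = lam[0] * x + lam[1] * x**2
        logZ = np.log(np.sum(np.exp(logits - logits.max()) * dx)) + logits.max()
        return logZ - lam @ target

    res = minimize(dual, x0=np.zeros(2), method="BFGS")
    lam = res.x
    p = np.exp(lam[0] * x + lam[1] * x**2)
    p /= np.sum(p * dx)                               # normalized density
    print("fitted moments:", np.sum(x * p * dx), np.sum(x**2 * p * dx))
    ```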

  16. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  17. How Learning Problems Are Managed

    MedlinePlus

    ... Individuals with Disabilities Act is that students with disabilities be educated alongside their nondisabled peers to the maximum extent possible. By that standard, the ideal situation is inclusion: being taught in a regular classroom in the ...

  18. Optimization of fuel-cell tram operation based on two dimension dynamic programming

    NASA Astrophysics Data System (ADS)

    Zhang, Wenbin; Lu, Xuecheng; Zhao, Jingsong; Li, Jianqiu

    2018-02-01

    This paper proposes an optimal control strategy based on the two-dimension dynamic programming (2DDP) algorithm, aimed at minimizing the operation energy consumption of a fuel-cell tram. The energy consumption model with the tram dynamics is first deduced. The optimal control problem is analyzed and the 2DDP strategy is applied to solve it. The optimal tram speed profiles are obtained for each interstation; these consist of three stages: accelerating to the set speed with maximum traction power, dynamically adjusting to maintain a uniform speed, and decelerating to zero speed with maximum braking power at an appropriately chosen time. The optimal control curves of all the interstations are connected by the parking times to form the optimal control method for the whole line. The optimized speed profiles are also simplified for drivers to follow.

  19. Maximum Data Collection Rate Routing Protocol Based on Topology Control for Rechargeable Wireless Sensor Networks

    PubMed Central

    Lin, Haifeng; Bai, Di; Gao, Demin; Liu, Yunfei

    2016-01-01

    In Rechargeable Wireless Sensor Networks (R-WSNs), in order to achieve the maximum data collection rate it is critical that sensors operate at very low duty cycles because of the sporadic availability of energy. A sensor has to stay in a dormant state for most of the time in order to recharge the battery and use the energy prudently. In addition, a sensor cannot always conserve energy if a network is able to harvest excessive energy from the environment, due to its limited storage capacity. Therefore, energy exploitation and energy saving have to be traded off depending on the application scenario. Since a higher, ideally maximum, data collection rate is the ultimate objective of sensor deployment, surplus energy of a node can be utilized for strengthening packet delivery efficiency and improving the data generation rate in R-WSNs. In this work, we propose an algorithm based on data aggregation that computes an upper data generation rate by maximizing it as an optimization problem for the network, formulated as a linear programming problem. Subsequently, a dual problem is constructed by introducing Lagrange multipliers, and subgradient algorithms are used to solve it in a distributed manner. At the same time, a topology control scheme is adopted to improve the network's performance. Through extensive simulation and experiments, we demonstrate that our algorithm is efficient at maximizing the data collection rate in rechargeable wireless sensor networks. PMID:27483282
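
    The Lagrangian-dual-plus-subgradient step described above can be sketched generically. The code relaxes the coupling constraints of a small linear program, min c.x subject to Ax >= b and 0 <= x <= u, and performs projected subgradient ascent with a diminishing step; it is a schematic analogue, not the paper's network model.

    ```python
    import numpy as np

    # Toy LP data: min c.x  s.t.  A x >= b,  0 <= x <= u.
    c = np.array([2.0, 3.0])
    A = np.array([[1.0, 2.0], [2.0, 1.0]])
    b = np.array([3.0, 3.0])
    u = np.array([5.0, 5.0])

    lam = np.zeros(len(b))          # dual multipliers for A x >= b
    for k in range(1, 501):
        # The Lagrangian subproblem separates per coordinate:
        # minimize (c - A^T lam) . x over the box [0, u].
        reduced = c - A.T @ lam
        x = np.where(reduced < 0, u, 0.0)
        # A subgradient of the dual at lam is the violation b - A x.
        g = b - A @ x
        lam = np.maximum(lam + (1.0 / k) * g, 0.0)   # projected ascent

    print("multipliers:", lam, "primal guess:", x)
    ```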

  20. Maximum Data Collection Rate Routing Protocol Based on Topology Control for Rechargeable Wireless Sensor Networks.

    PubMed

    Lin, Haifeng; Bai, Di; Gao, Demin; Liu, Yunfei

    2016-07-30

    In Rechargeable Wireless Sensor Networks (R-WSNs), in order to achieve the maximum data collection rate it is critical that sensors operate at very low duty cycles because of the sporadic availability of energy. A sensor has to stay in a dormant state for most of the time in order to recharge the battery and use the energy prudently. In addition, a sensor cannot always conserve energy if a network is able to harvest excessive energy from the environment, due to its limited storage capacity. Therefore, energy exploitation and energy saving have to be traded off depending on the application scenario. Since a higher, ideally maximum, data collection rate is the ultimate objective of sensor deployment, surplus energy of a node can be utilized for strengthening packet delivery efficiency and improving the data generation rate in R-WSNs. In this work, we propose an algorithm based on data aggregation that computes an upper data generation rate by maximizing it as an optimization problem for the network, formulated as a linear programming problem. Subsequently, a dual problem is constructed by introducing Lagrange multipliers, and subgradient algorithms are used to solve it in a distributed manner. At the same time, a topology control scheme is adopted to improve the network's performance. Through extensive simulation and experiments, we demonstrate that our algorithm is efficient at maximizing the data collection rate in rechargeable wireless sensor networks.

  1. Maximum power point tracker for photovoltaic power plants

    NASA Astrophysics Data System (ADS)

    Arcidiacono, V.; Corsi, S.; Lambri, L.

    The paper describes two different closed-loop control criteria for the maximum power point tracking of the voltage-current characteristic of a photovoltaic generator. The two criteria are discussed and compared, inter alia, with regard to the setting-up problems that they pose. Although a detailed analysis is not embarked upon, the paper also provides some quantitative information on the energy advantages obtained by using electronic maximum power point tracking systems, as compared with the situation in which the point of operation of the photovoltaic generator is not controlled at all. Lastly, the paper presents two high-efficiency MPPT converters for experimental photovoltaic plants of the stand-alone and the grid-interconnected type.
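
    The record does not detail its two control criteria, so as a stand-in here is the most common closed-loop MPPT idea, a perturb-and-observe loop: perturb the operating voltage, keep the direction if power increased, otherwise reverse it. The P-V curve below is a made-up placeholder.

    ```python
    def pv_power(v):
        """Hypothetical single-peak P-V curve with its maximum at v = 17 V."""
        return max(60.0 - 0.5 * (v - 17.0) ** 2, 0.0)

    def perturb_and_observe(v=10.0, step=0.2, iters=200):
        p_prev, direction = pv_power(v), +1
        for _ in range(iters):
            v += direction * step
            p = pv_power(v)
            if p < p_prev:            # power dropped: reverse the perturbation
                direction = -direction
            p_prev = p
        return v, p_prev

    v_mpp, p_mpp = perturb_and_observe()
    print(f"operating point ~ {v_mpp:.2f} V, {p_mpp:.1f} W")
    ```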

  2. Chapman Enskog-maximum entropy method on time-dependent neutron transport equation

    NASA Astrophysics Data System (ADS)

    Abdou, M. A.

    2006-09-01

    The time-dependent neutron transport equation in semi-infinite and infinite media with linear anisotropic and Rayleigh scattering is considered. The problem is solved by means of the flux-limited Chapman-Enskog maximum entropy approach to obtain the solution of the time-dependent neutron transport. The solution gives the neutron distribution density function, which is used to compute numerically the radiant energy density E(x,t), the net flux F(x,t) and the reflectivity Rf. The behaviour of the approximate flux-limited maximum entropy neutron density function is compared with that found by other theories. Numerical calculations of the radiant energy, net flux and reflectivity of the proposed medium are performed at different times and positions.

  3. Wind-influenced projectile motion

    NASA Astrophysics Data System (ADS)

    Bernardo, Reginald Christian; Perico Esguerra, Jose; Day Vallejos, Jazmine; Jerard Canda, Jeff

    2015-03-01

    We solved the wind-influenced projectile motion problem with the same initial and final heights and obtained exact analytical expressions for the shape of the trajectory, range, maximum height, time of flight, time of ascent, and time of descent with the help of the Lambert W function. It turns out that the range and maximum horizontal displacement are not always equal. When launched at a critical angle, the projectile will return to its starting position. It turns out that a launch angle of 90° maximizes the time of flight, time of ascent, time of descent, and maximum height and that the launch angle corresponding to maximum range can be obtained by solving a transcendental equation. Finally, we expressed in a parametric equation the locus of points corresponding to maximum heights for projectiles launched from the ground with the same initial speed in all directions. We used the results to estimate how much a moderate wind can modify a golf ball’s range and suggested other possible applications.
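
    The closed forms in the record come from the Lambert W function; the same wind-plus-linear-drag model can be cross-checked by direct numerical integration. The sketch below uses simple forward-Euler time stepping and invented parameter values.

    ```python
    import numpy as np

    g, k = 9.81, 0.3            # gravity, linear drag coefficient (assumed)
    w = 5.0                     # horizontal wind speed (assumed)
    v0, angle = 30.0, np.radians(45.0)

    def trajectory(dt=1e-4):
        """Integrate a = -k*(v - wind) - g e_y until return to launch height."""
        x, y = 0.0, 0.0
        vx, vy = v0 * np.cos(angle), v0 * np.sin(angle)
        t, y_max = 0.0, 0.0
        while True:
            ax = -k * (vx - w)          # drag is relative to the moving air
            ay = -g - k * vy
            vx += ax * dt; vy += ay * dt
            x += vx * dt; y += vy * dt
            t += dt
            y_max = max(y_max, y)
            if y <= 0.0 and t > dt:
                return x, y_max, t

    dist, h_max, t_flight = trajectory()
    print(f"range ~ {dist:.1f} m, max height ~ {h_max:.1f} m, "
          f"flight time ~ {t_flight:.2f} s")
    ```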

  4. Control strategy of grid-connected photovoltaic generation system based on GMPPT method

    NASA Astrophysics Data System (ADS)

    Wang, Zhongfeng; Zhang, Xuyang; Hu, Bo; Liu, Jun; Li, Ligang; Gu, Yongqiang; Zhou, Bowen

    2018-02-01

    There are multiple local maximum power points when a photovoltaic (PV) array runs under partial shading conditions (PSC). However, the traditional maximum power point tracking (MPPT) algorithm can easily be trapped in local maximum power points (MPPs) and fail to find the global maximum power point (GMPP). To solve this problem, a global maximum power point tracking (GMPPT) method is developed, combining the traditional MPPT method with the particle swarm optimization (PSO) algorithm. Under different operating conditions of the PV cells, different tracking algorithms are used. When the environment changes, the improved PSO algorithm is adopted to realize a global search, and the variable-step incremental conductance (INC) method is adopted to achieve MPPT around the best local position. Based on a simulation model of the PV grid system built in Matlab/Simulink, a comparative analysis of the tracking performance of the proposed control algorithm and the traditional MPPT method under uniform solar conditions and PSC validates the correctness, feasibility and effectiveness of the proposed control strategy.
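
    A minimal PSO loop of the kind combined with INC above; the two-peak power curve stands in for a partially shaded P-V characteristic, and all constants are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def power(v):
        """Hypothetical two-peak P-V curve under partial shading."""
        return 40 * np.exp(-0.5 * (v - 12) ** 2) + 60 * np.exp(-0.05 * (v - 30) ** 2)

    n, iters = 12, 60
    pos = rng.uniform(0.0, 40.0, n)          # particle positions (voltages)
    vel = np.zeros(n)
    best_p = pos.copy()                       # per-particle best positions
    best_g = pos[np.argmax(power(pos))]       # global best position

    for _ in range(iters):
        r1, r2 = rng.random(n), rng.random(n)
        vel = 0.6 * vel + 1.5 * r1 * (best_p - pos) + 1.5 * r2 * (best_g - pos)
        pos = np.clip(pos + vel, 0.0, 40.0)
        improved = power(pos) > power(best_p)
        best_p = np.where(improved, pos, best_p)
        cand = pos[np.argmax(power(pos))]
        if power(cand) > power(best_g):
            best_g = cand

    print(f"global MPP near v = {best_g:.2f}")
    ```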

  5. Geometric versus numerical optimal control of a dissipative spin-(1/2) particle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapert, M.; Sugny, D.; Zhang, Y.

    2010-12-15

    We analyze the saturation of a nuclear magnetic resonance (NMR) signal using optimal magnetic fields. We consider both the problem of minimizing the duration of the control and that of minimizing its energy for a fixed duration. We solve the optimal control problems by using geometric methods and a purely numerical approach, the GRAPE algorithm, the two methods being based on the application of the Pontryagin maximum principle. A very good agreement is obtained between the two results. The optimal solutions for the energy-minimization problem are finally implemented experimentally with available NMR techniques.

  6. On the computational aspects of comminution in discrete element method

    NASA Astrophysics Data System (ADS)

    Chaudry, Mohsin Ali; Wriggers, Peter

    2018-04-01

    In this paper, computational aspects of the crushing/comminution of granular materials are addressed. For crushing, a maximum tensile stress-based criterion is used. The crushing model in the discrete element method (DEM) is prone to problems of mass conservation and reduction of the critical time step. The first problem is addressed by using an iterative scheme which, depending on the geometric voids, recovers the mass of a particle. In addition, a global-local framework for the DEM problem is proposed, which tends to alleviate the locally unstable motion of particles and increases the computational efficiency.

  7. Bayesian approach to inverse statistical mechanics.

    PubMed

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
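
    For the simplest inverse problem mentioned (estimating a temperature), the posterior over the inverse temperature of a tiny Ising chain can be sampled directly, because the partition function of a small system can be enumerated exactly; larger systems require the partition-function estimation discussed in the record. Everything below is a toy setup.

    ```python
    import itertools
    import numpy as np

    rng = np.random.default_rng(2)
    N, beta_true = 8, 0.7

    def energy(s):
        """Nearest-neighbour Ising chain energy, E = -sum_i s_i s_{i+1}."""
        return -sum(s[i] * s[i + 1] for i in range(len(s) - 1))

    states = list(itertools.product([-1, 1], repeat=N))   # 2^8 = 256 states
    energies = np.array([energy(s) for s in states])

    def log_Z(beta):
        return np.logaddexp.reduce(-beta * energies)

    # Draw synthetic data from the exact Boltzmann distribution at beta_true.
    probs = np.exp(-beta_true * energies - log_Z(beta_true))
    data_E = energies[rng.choice(len(states), size=50, p=probs)]

    def log_post(beta):              # flat prior on beta > 0
        if beta <= 0:
            return -np.inf
        return -beta * data_E.sum() - len(data_E) * log_Z(beta)

    # Random-walk Metropolis over beta.
    beta, samples = 1.0, []
    for _ in range(5000):
        prop = beta + 0.1 * rng.standard_normal()
        if np.log(rng.random()) < log_post(prop) - log_post(beta):
            beta = prop
        samples.append(beta)

    print(f"true beta = {beta_true}, posterior mean ~ {np.mean(samples[1000:]):.2f}")
    ```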

  8. Bayesian approach to inverse statistical mechanics

    NASA Astrophysics Data System (ADS)

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  9. Rarity-weighted richness: a simple and reliable alternative to integer programming and heuristic algorithms for minimum set and maximum coverage problems in conservation planning.

    PubMed

    Albuquerque, Fabio; Beier, Paul

    2015-01-01

    Here we report that prioritizing sites in order of rarity-weighted richness (RWR) is a simple, reliable way to identify sites that represent all species in the fewest number of sites (the minimum set problem) or to identify sites that represent the largest number of species within a given number of sites (the maximum coverage problem). We compared the number of species represented in sites prioritized by RWR to the numbers of species represented in sites prioritized by the Zonation software package for 11 datasets in which the size of individual planning units (sites) ranged from <1 ha to 2,500 km2. On average, RWR solutions were more efficient than Zonation solutions. Integer programming remains the only guaranteed way to find an optimal solution, and heuristic algorithms remain superior for conservation prioritizations that consider compactness and multiple near-optimal solutions in addition to species representation. But because RWR can be implemented easily and quickly in R or a spreadsheet, it is an attractive alternative to integer programming or heuristic algorithms in some conservation prioritization contexts.
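
    Rarity-weighted richness itself is a one-liner: each species contributes the reciprocal of the number of sites it occupies, and sites are ranked by the sum of these weights. A minimal sketch with made-up occurrence data:

    ```python
    from collections import defaultdict

    # Hypothetical occurrences: site -> set of species present.
    sites = {
        "s1": {"a", "b", "c"},
        "s2": {"a", "d"},
        "s3": {"d", "e"},
        "s4": {"b"},
    }

    # A species' rarity weight is 1 / (number of sites it occupies).
    site_count = defaultdict(int)
    for species_set in sites.values():
        for sp in species_set:
            site_count[sp] += 1

    rwr = {
        site: sum(1.0 / site_count[sp] for sp in species_set)
        for site, species_set in sites.items()
    }
    ranking = sorted(rwr, key=rwr.get, reverse=True)
    print(ranking)   # sites in priority order
    ```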

  10. Map synchronization in optical communication systems

    NASA Technical Reports Server (NTRS)

    Gagliardi, R. M.; Mohanty, N.

    1973-01-01

    The time synchronization problem in an optical communication system is approached as a problem of estimating the arrival time (delay variable) of a known transmitted field. Maximum a posteriori (MAP) estimation procedures are used to generate optimal estimators, with emphasis placed on their interpretation as a practical system device. Estimation variances are used to aid in the design of the transmitter signals for best synchronization. Extension is made to systems that perform separate acquisition and tracking operations during synchronization. The closely allied problem of maintaining timing during pulse position modulation is also considered. The results have obvious application to optical radar and ranging systems, as well as the time synchronization problem.

  11. Application of parameter estimation to highly unstable aircraft

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Murray, J. E.

    1986-01-01

    This paper discusses the application of parameter estimation to highly unstable aircraft. It includes a discussion of the problems in applying the output error method to such aircraft and demonstrates that the filter error method eliminates these problems. The paper shows that the maximum likelihood estimator with no process noise does not reduce to the output error method when the system is unstable. It also proposes and demonstrates an ad hoc method that is similar in form to the filter error method, but applicable to nonlinear problems. Flight data from the X-29 forward-swept-wing demonstrator is used to illustrate the problems and methods discussed.

  12. Application of parameter estimation to highly unstable aircraft

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Murray, J. E.

    1986-01-01

    The application of parameter estimation to highly unstable aircraft is discussed, including the problems in applying the output error method to such aircraft and a demonstration that the filter error method eliminates these problems. The paper shows that the maximum likelihood estimator with no process noise does not reduce to the output error method when the system is unstable. It also proposes and demonstrates an ad hoc method that is similar in form to the filter error method, but applicable to nonlinear problems. Flight data from the X-29 forward-swept-wing demonstrator is used to illustrate the problems and methods discussed.

  13. The Modified HZ Conjugate Gradient Algorithm for Large-Scale Nonsmooth Optimization.

    PubMed

    Yuan, Gonglin; Sheng, Zhou; Liu, Wenjie

    2016-01-01

    In this paper, the Hager and Zhang (HZ) conjugate gradient (CG) method and a modified HZ (MHZ) CG method are presented for large-scale nonsmooth convex minimization. Under some mild conditions, convergence results for the proposed methods are established. Numerical results show that the presented methods achieve better efficiency on large-scale nonsmooth problems; several problems are tested, with maximum dimension up to 100,000 variables.
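
    For reference, the core Hager-Zhang direction update looks as follows on a smooth test function (the paper's modification for nonsmooth problems is not reproduced, and the Armijo backtracking used here is a simplification of the Wolfe line search usually paired with HZ):

    ```python
    import numpy as np

    def hz_cg(f, grad, x, iters=100, tol=1e-8):
        """Nonlinear CG with the Hager-Zhang beta, on a smooth function."""
        g = grad(x)
        d = -g
        for _ in range(iters):
            if g @ d >= 0:                  # safeguard: keep a descent direction
                d = -g
            t = 1.0                         # backtracking Armijo line search
            while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
                t *= 0.5
            x_new = x + t * d
            g_new = grad(x_new)
            y = g_new - g
            dy = d @ y
            if abs(dy) < 1e-12:             # restart on breakdown
                d = -g_new
            else:
                # Hager-Zhang beta: (y - 2 d |y|^2 / d.y) . g_new / d.y
                beta = ((y - 2.0 * d * (y @ y) / dy) @ g_new) / dy
                d = -g_new + beta * d
            x, g = x_new, g_new
            if np.linalg.norm(g) < tol:
                break
        return x

    # Smooth quadratic test: f(x) = 0.5 x^T A x - b^T x.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    x_star = hz_cg(lambda x: 0.5 * x @ A @ x - b @ x,
                   lambda x: A @ x - b, np.zeros(2))
    print(x_star, np.linalg.solve(A, b))    # the two should agree
    ```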

  14. Distributed Learning, Extremum Seeking, and Model-Free Optimization for the Resilient Coordination of Multi-Agent Adversarial Groups

    DTIC Science & Technology

    2016-09-07

    been demonstrated on maximum power point tracking for photovoltaic arrays and for wind turbines. 3. ES has recently been implemented on the Mars... high-dimensional optimization problems. Extensions and applications of these techniques were developed during the realization of the project. 15... studied problems of dynamic average consensus and a class of unconstrained continuous-time optimization algorithms for the coordination of multiple

  15. Multi-Frame Convolutional Neural Networks for Object Detection in Temporal Data

    DTIC Science & Technology

    2017-03-01

    Given the problem of detecting objects in video, existing neural-network solutions rely on a post-processing step to combine information across frames and strengthen conclusions. This technique has been successful for videos with simple, dominant objects, but it cannot detect objects...

  16. Maximum Margin Clustering of Hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2013-09-01

    In recent decades, large margin methods such as Support Vector Machines (SVMs) have been regarded as the state of the art among supervised learning methods for the classification of hyperspectral data. However, the results of these algorithms depend mainly on the quality and quantity of available training data. To tackle the problems associated with the training data, researchers have put effort into extending the capability of large margin algorithms to unsupervised learning. One recently proposed algorithm is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC objective is a non-convex problem. Most existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as a semi-definite program (SDP), which is computationally very expensive and can only handle small data sets. Moreover, most of these algorithms support only two-class classification and cannot be used for the classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. This algorithm is also extended to multi-class classification and its performance is evaluated. The results show that the algorithm achieves acceptable results for hyperspectral data clustering.

  17. Data preprocessing for determining outer/inner parallelization in the nested loop problem using OpenMP

    NASA Astrophysics Data System (ADS)

    Handhika, T.; Bustamam, A.; Ernastuti; Kerami, D.

    2017-07-01

    Multi-thread programming using OpenMP on a shared-memory architecture with hyperthreading technology allows a resource to be accessed by multiple processors simultaneously. Each processor can execute more than one thread for a certain period of time. However, the speedup depends on the ability of the processor to execute a limited number of threads, which is problematic for sequential algorithms containing a nested loop in which the number of outer loop iterations is greater than the maximum number of threads a processor can execute. The thread distribution techniques found previously can only be applied by high-level programmers. This paper develops a parallelization procedure for low-level programmers dealing with 2-level nested loop problems in which the maximum number of threads that can be executed by a processor is smaller than the number of outer loop iterations. Data preprocessing related to the number of outer and inner loop iterations, the computational time required to execute each iteration, and the maximum number of threads that can be executed by a processor is used as a strategy to determine which parallel region will produce the optimal speedup.

  18. Storage of platelets: effects associated with high platelet content in platelet storage containers.

    PubMed

    Gulliksson, Hans; Sandgren, Per; Sjödin, Agneta; Hultenby, Kjell

    2012-04-01

    A major problem associated with platelet storage containers is that some platelet units show a dramatic fall in pH, especially above certain platelet contents. The aim of this study was a detailed investigation of the different in vitro effects occurring when the maximum storage capacity of a platelet container is exceeded, as compared to normal storage. Buffy coats were combined in large-volume containers to create primary pools to be split into two equal aliquots for the preparation of platelets (450-520×10^9 platelets/unit) in SSP+ for 7-day storage in two containers (test and reference) with different platelet storage capacity (n=8). Exceeding the maximum storage capacity of the test platelet storage container resulted in immediate negative effects on platelet metabolism and energy supply, but also delayed effects on platelet function, activation and disintegration. Our study gives a very clear indication of the effects in different phases associated with exceeding the maximum storage capacity of platelet containers, but throws little additional light on the mechanism initiating those negative effects. The problem appears to be complex, and further studies in different media using different storage containers will be needed to understand the mechanisms involved.

  19. A coherent Ising machine for 2000-node optimization problems

    NASA Astrophysics Data System (ADS)

    Inagaki, Takahiro; Haribara, Yoshitaka; Igarashi, Koji; Sonobe, Tomohiro; Tamate, Shuhei; Honjo, Toshimori; Marandi, Alireza; McMahon, Peter L.; Umeki, Takeshi; Enbutsu, Koji; Tadanaga, Osamu; Takenouchi, Hirokazu; Aihara, Kazuyuki; Kawarabayashi, Ken-ichi; Inoue, Kyo; Utsunomiya, Shoko; Takesue, Hiroki

    2016-11-01

    The analysis and optimization of complex systems can be reduced to mathematical problems collectively known as combinatorial optimization. Many such problems can be mapped onto ground-state search problems of the Ising model, and various artificial spin systems are now emerging as promising approaches. However, physical Ising machines have suffered from limited numbers of spin-spin couplings because of implementations based on localized spins, resulting in severe scalability problems. We report a 2000-spin network with all-to-all spin-spin couplings. Using a measurement and feedback scheme, we coupled time-multiplexed degenerate optical parametric oscillators to implement maximum cut problems on arbitrary graph topologies with up to 2000 nodes. Our coherent Ising machine outperformed simulated annealing in terms of accuracy and computation time for a 2000-node complete graph.
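
    The max-cut-to-Ising mapping used by such machines is compact: with spins s_i in {-1, +1} and antiferromagnetic couplings on the graph edges, an Ising ground state maximizes the cut. A brute-force sketch on a tiny graph:

    ```python
    import itertools

    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # small test graph
    n = 4

    def ising_energy(s):
        """H = -sum_{(i,j)} J_ij s_i s_j with J_ij = -1 on every edge,
        i.e. H = +sum s_i s_j: minimized when many edges are cut."""
        return sum(s[i] * s[j] for i, j in edges)

    def cut_size(s):
        return sum(1 for i, j in edges if s[i] != s[j])

    best = min(itertools.product([-1, 1], repeat=n), key=ising_energy)
    print("ground state:", best, "cut size:", cut_size(best))
    ```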

  20. Estimates of the absolute error and a scheme for an approximate solution to scheduling problems

    NASA Astrophysics Data System (ADS)

    Lazarev, A. A.

    2009-02-01

    An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems of minimizing the maximum lateness on one or several machines and of minimizing the makespan. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given a problem instance, to construct another instance, at the minimum distance from the initial one in the introduced metric, for which an optimal or approximate solution can be found. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered, an instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.

  1. Improved multi-objective ant colony optimization algorithm and its application in complex reasoning

    NASA Astrophysics Data System (ADS)

    Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing

    2013-09-01

    The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning in a complex system is not a simple reasoning decision-making problem. It has become a typical multi-constraint and multi-objective reticulate optimization decision-making problem under many influencing factors and constraints. So far, little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum probability of average fault, maximum average importance, and minimum average complexity of test. Under the constraints of both known symptoms and the causal relationships among different components, a multi-objective optimization mathematical model is set up, taking the cost of fault reasoning as the objective function to be minimized. Since the problem is non-deterministic polynomial-time hard (NP-hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix is set up to constrain the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. Finally, a Pareto optimal set is acquired. Evaluation functions based on the validity and tendency of reasoning paths are defined to refine the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint and multi-objective complex system. Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) can realize reasoning and locate fault positions precisely by solving the multi-objective fault diagnosis model, which provides a new method for multi-constraint and multi-objective fault diagnosis and reasoning of complex systems.

  2. On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method

    PubMed Central

    Roux, Benoît; Weare, Jonathan

    2013-01-01

    An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method. PMID:23464140

  3. Graph traversals, genes, and matroids: An efficient case of the travelling salesman problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gusfield, D.; Stelling, P.; Wang, Lusheng

    1996-12-31

    In this paper the authors consider graph traversal problems that arise from a particular technology for DNA sequencing - sequencing by hybridization (SBH). They first explain the connection of the graph problems to SBH and then focus on the traversal problems. They describe a practical polynomial time solution to the Travelling Salesman Problem in a rich class of directed graphs (including edge weighted binary de Bruijn graphs), and provide a bounded-error approximation algorithm for the maximum weight TSP in a superset of those directed graphs. The authors also establish the existence of a matroid structure defined on the set of Euler and Hamilton paths in the restricted class of graphs. 8 refs., 5 figs.
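
    In the SBH setting each (k-1)-mer becomes a node and each observed k-mer an edge, so reconstructing a sequence amounts to finding an Eulerian path in this de Bruijn graph. A minimal Hierholzer-style sketch on a toy spectrum, with no error handling:

    ```python
    from collections import defaultdict, deque

    def debruijn_euler_path(kmers):
        """Reconstruct a string whose k-mer spectrum is `kmers` by finding
        an Eulerian path in the de Bruijn graph ((k-1)-mers as nodes)."""
        graph = defaultdict(deque)
        out_deg, in_deg = defaultdict(int), defaultdict(int)
        for km in kmers:
            u, v = km[:-1], km[1:]
            graph[u].append(v)
            out_deg[u] += 1
            in_deg[v] += 1
        # Start where out-degree exceeds in-degree (else any node with edges).
        start = next((u for u in graph if out_deg[u] - in_deg[u] == 1),
                     next(iter(graph)))
        # Hierholzer's algorithm.
        stack, path = [start], []
        while stack:
            u = stack[-1]
            if graph[u]:
                stack.append(graph[u].popleft())
            else:
                path.append(stack.pop())
        path.reverse()
        return path[0] + "".join(p[-1] for p in path[1:])

    spectrum = ["ATG", "TGG", "GGC", "GCA", "CAT", "ATG"]  # toy 3-mer spectrum
    print(debruijn_euler_path(spectrum))  # ATGGCATG
    ```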

  4. A Parallel Biological Optimization Algorithm to Solve the Unbalanced Assignment Problem Based on DNA Molecular Computing.

    PubMed

    Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian

    2015-10-23

    The unbalanced assignment problem (UAP) is to optimally resolve the problem of assigning n jobs to m individuals (m < n), such that minimum cost or maximum profit is obtained. It is a vitally important Non-deterministic Polynomial (NP) complete problem in operations management and applied mathematics, having numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We design flexible-length DNA strands representing the different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range in O(mn) time. We extend the application of DNA molecular operations and use their simultaneity to reduce the complexity of the computation.
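
    As a classical baseline for small instances (in contrast to the DNA approach), SciPy's rectangular assignment solver handles m < n directly by leaving some jobs unassigned; note the record's UAP variant may instead require covering all n jobs, which needs a different formulation. The cost matrix is invented.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # cost[i, j] = cost of individual i doing job j (m = 3 people, n = 5 jobs).
    cost = np.array([
        [4.0, 1.0, 3.0, 2.0, 9.0],
        [2.0, 0.0, 5.0, 3.0, 8.0],
        [3.0, 2.0, 2.0, 7.0, 4.0],
    ])

    rows, cols = linear_sum_assignment(cost)   # rectangular matrices supported
    for i, j in zip(rows, cols):
        print(f"individual {i} -> job {j} (cost {cost[i, j]})")
    print("total cost:", cost[rows, cols].sum())
    ```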

  5. Two-phase framework for near-optimal multi-target Lambert rendezvous

    NASA Astrophysics Data System (ADS)

    Bang, Jun; Ahn, Jaemyung

    2018-03-01

    This paper proposes a two-phase framework to obtain a near-optimal solution of multi-target Lambert rendezvous problem. The objective of the problem is to determine the minimum-cost rendezvous sequence and trajectories to visit a given set of targets within a maximum mission duration. The first phase solves a series of single-target rendezvous problems for all departure-arrival object pairs to generate the elementary solutions, which provides candidate rendezvous trajectories. The second phase formulates a variant of traveling salesman problem (TSP) using the elementary solutions prepared in the first phase and determines the final rendezvous sequence and trajectories of the multi-target rendezvous problem. The validity of the proposed optimization framework is demonstrated through an asteroid exploration case study.
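
    The second phase is a TSP variant over precomputed leg costs; for a handful of targets it can even be solved exactly with Held-Karp dynamic programming. The sketch below uses a made-up cost matrix in place of the phase-one Lambert solutions and ignores mission-duration constraints.

    ```python
    INF = float("inf")

    def held_karp_path(cost):
        """Exact min-cost open path from node 0 visiting all nodes once."""
        n = len(cost)
        # dp[(mask, j)] = cheapest way to start at 0, visit `mask`, end at j.
        dp = {(1, 0): 0.0}
        for mask in range(1, 1 << n):
            if not mask & 1:
                continue
            for j in range(n):
                if not mask & (1 << j) or (mask, j) not in dp:
                    continue
                for k in range(n):
                    if mask & (1 << k):
                        continue
                    nxt = (mask | (1 << k), k)
                    cand = dp[(mask, j)] + cost[j][k]
                    if cand < dp.get(nxt, INF):
                        dp[nxt] = cand
        full = (1 << n) - 1
        return min(dp[(full, j)] for j in range(n))

    # Hypothetical delta-v costs between departure object 0 and 4 targets.
    cost = [
        [0, 3, 9, 7, 2],
        [3, 0, 4, 8, 6],
        [9, 4, 0, 5, 1],
        [7, 8, 5, 0, 3],
        [2, 6, 1, 3, 0],
    ]
    print("min total delta-v:", held_karp_path(cost))
    ```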

  6. Women Favour Dyadic Relationships, but Men Prefer Clubs: Cross-Cultural Evidence from Social Networking

    PubMed Central

    David-Barrett, Tamas; Rotkirch, Anna; Carney, James; Behncke Izquierdo, Isabel; Krems, Jaimie A.; Townley, Dylan; McDaniell, Elinor; Byrne-Smith, Anna; Dunbar, Robin I. M.

    2015-01-01

    The ability to create lasting, trust-based friendships makes it possible for humans to form large and coherent groups. The recent literature on the evolution of sociality and on the network dynamics of human societies suggests that large human groups have a layered structure generated by emotionally supported social relationships. There are also gender differences in adult social style which may involve different trade-offs between the quantity and quality of friendships. Although many have suggested that females tend to focus on intimate relations with a few other females, while males build larger, more hierarchical coalitions, the existence of such gender differences is disputed and data from adults is scarce. Here, we present cross-cultural evidence for gender differences in the preference for close friendships. We use a sample of ∼112,000 profile pictures from nine world regions posted on a popular social networking site to show that, in self-selected displays of social relationships, women favour dyadic relations, whereas men favour larger, all-male cliques. These apparently different solutions to quality-quantity trade-offs suggest a universal and fundamental difference in the function of close friendships for the two sexes. PMID:25775258

  7. Locating overlapping dense subgraphs in gene (protein) association networks and predicting novel protein functional groups among these subgraphs

    NASA Astrophysics Data System (ADS)

    Palla, Gergely; Derenyi, Imre; Farkas, Illes J.; Vicsek, Tamas

    2006-03-01

    Most tasks in a cell are performed not by individual proteins, but by functional groups of proteins (either physically interacting with each other or associated in other ways). In gene (protein) association networks these groups show up as sets of densely connected nodes. In the yeast, Saccharomyces cerevisiae, known physically interacting groups of proteins (called protein complexes) strongly overlap: the total number of proteins contained by these complexes by far underestimates the sum of their sizes (2750 vs. 8932). Thus, most functional groups of proteins, both physically interacting and other, are likely to share many of their members with other groups. However, current algorithms searching for dense groups of nodes in networks usually exclude overlaps. With the aim of discovering both novel functions of individual proteins and novel protein functional groups we combine in protein association networks (i) a search for overlapping dense subgraphs based on the Clique Percolation Method (CPM) (Palla, G., et al., Nature 435, 814-818 (2005), http://angel.elte.hu/clustering), which explicitly allows for overlaps among the groups, and (ii) a verification and characterization of the identified groups of nodes (proteins) with the help of standard annotation databases listing known functions.
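
    The CPM idea (communities as unions of adjacent k-cliques, which may overlap) is available in NetworkX as k_clique_communities; a toy sketch:

    ```python
    import networkx as nx
    from networkx.algorithms.community import k_clique_communities

    # Toy association network with two dense groups sharing one node.
    G = nx.Graph()
    G.add_edges_from([
        ("a", "b"), ("a", "c"), ("b", "c"),          # triangle 1
        ("c", "d"), ("b", "d"),                      # adjacent triangle
        ("d", "e"), ("d", "f"), ("e", "f"),          # triangle 2
    ])

    # k-clique communities: unions of k-cliques sharing k-1 nodes.
    for community in k_clique_communities(G, 3):
        print(sorted(community))   # the two communities overlap at "d"
    ```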

  8. A novel approach to analyzing lung cancer mortality disparities: Using the exposome and a graph-theoretical toolchain

    PubMed Central

    Juarez, Paul D; Hood, Darryl B; Rogers, Gary L; Baktash, Suzanne H; Saxton, Arnold M; Matthews-Juarez, Patricia; Im, Wansoo; Cifuentes, Myriam Patricia; Phillips, Charles A; Lichtveld, Maureen Y; Langston, Michael A

    2017-01-01

    Objectives: The aim is to identify exposures associated with lung cancer mortality and mortality disparities by race and gender using an exposome database coupled to a graph theoretical toolchain. Methods: Graph theoretical algorithms were employed to extract paracliques from correlation graphs using associations between 2162 environmental exposures and lung cancer mortality rates in 2067 counties, with clique doubling applied to compute an absolute threshold of significance. Factor analysis and multiple linear regressions were then used to analyze differences in exposures associated with lung cancer mortality and mortality disparities by race and gender. Results: While cigarette consumption was highly correlated with rates of lung cancer mortality for both white men and women, previously unidentified novel exposures were more closely associated with lung cancer mortality and mortality disparities for blacks, particularly black women. Conclusions: Exposures beyond smoking moderate lung cancer mortality and mortality disparities by race and gender. Policy Implications: An exposome approach and database coupled with scalable combinatorial analytics provides a powerful new approach for analyzing relationships between multiple environmental exposures, pathways and health outcomes. An assessment of multiple exposures is needed to appropriately translate research findings into environmental public health practice and policy. PMID:29152601

  9. Hierarchical Higher Order Crf for the Classification of Airborne LIDAR Point Clouds in Urban Areas

    NASA Astrophysics Data System (ADS)

    Niemeyer, J.; Rottensteiner, F.; Soergel, U.; Heipke, C.

    2016-06-01

    We propose a novel hierarchical approach for the classification of airborne 3D lidar points. Spatial and semantic context is incorporated via a two-layer Conditional Random Field (CRF). The first layer operates on a point level and utilises higher order cliques. Segments are generated from the labelling obtained in this way. They are the entities of the second layer, which incorporates larger-scale context. The classification result of the segments is introduced as an energy term for the next iteration of the point-based layer. This framework iterates and mutually propagates context to improve the classification results. Potentially wrong decisions can be revised at later stages. The output is a labelled point cloud as well as segments roughly corresponding to object instances. Moreover, we present two new contextual features for the segment classification: the distance and the orientation of a segment with respect to the closest road. It is shown that the classification benefits from these features. In our experiments the hierarchical framework improves the overall accuracies by 2.3% on a point-based level and by 3.0% on a segment-based level, respectively, compared to a purely point-based classification.

  10. Excavation of attractor modules for nasopharyngeal carcinoma via integrating systemic module inference with attract method.

    PubMed

    Jiang, T; Jiang, C-Y; Shu, J-H; Xu, Y-J

    2017-07-10

    The molecular mechanism of nasopharyngeal carcinoma (NPC) is poorly understood and effective therapeutic approaches are needed. This research aimed to excavate the attractor modules involved in the progression of NPC and provide further understanding of the underlying mechanism of NPC. Based on the gene expression data of NPC, two specific protein-protein interaction networks for NPC and control conditions were re-weighted using the Pearson correlation coefficient. Then, a systematic tracking of candidate modules was conducted on the re-weighted networks via a clique algorithm, and a total of 19 and 38 modules were identified from the NPC and control networks, respectively. Among them, 8 pairs of modules with similar gene composition were selected, and 2 attractor modules were identified via the attract method. Functional analysis indicated that these two attractor modules participate in one common bioprocess of cell division. Based on the strategy of integrating systemic module inference with the attract method, we successfully identified 2 attractor modules. These attractor modules might play important roles in the molecular pathogenesis of NPC by affecting the bioprocess of cell division in a conjunct way. Further research is needed to explore the correlations between cell division and NPC.

  11. Differential network as an indicator of osteoporosis with network entropy.

    PubMed

    Ma, Lili; Du, Hongmei; Chen, Guangdong

    2018-07-01

    Osteoporosis is a common skeletal disorder characterized by a decrease in bone mass and density. The peak bone mass (PBM) is a significant determinant of osteoporosis. To gain insights into the indicating effect of PBM for osteoporosis, this study focused on characterizing the PBM networks and identifying key genes. One biological data set with 12 monocyte low-PBM samples and 11 high-PBM samples was used to construct protein-protein interaction networks (PPINs). Based on clique merging, a module-identification algorithm was used to identify modules from the PPINs. A systematic calculation and comparison were performed to test whether the network entropy can discriminate the low PBM network from the high PBM network. We constructed 32 destination networks with 66 modules divided from the monocyte low and high PBM networks. Among them, network 11 was the only significantly differential one (P<0.05), with 8 nodes and 28 edges. All genes belonged to precursors of osteoclasts, which were related to calcium transport as well as blood monocytes. In conclusion, based on the entropy of PBM PPINs, the differential network appears to be a novel therapeutic indicator for osteoporosis during bone monocyte progression; these findings are helpful in disclosing the pathogenetic mechanisms of osteoporosis.

  12. Sentiment Diffusion of Public Opinions about Hot Events: Based on Complex Network

    PubMed Central

    Hao, Xiaoqing; An, Haizhong; Zhang, Lijia; Li, Huajiao; Wei, Guannan

    2015-01-01

    To study the sentiment diffusion of online public opinions about hot events, we collected people’s posts through web data mining techniques. We calculated the sentiment value of each post based on a sentiment dictionary. Next, we divided those posts into five different orientations of sentiments: strongly positive (P), weakly positive (p), neutral (o), weakly negative (n), and strongly negative (N). These sentiments are combined into modes through coarse graining. We constructed sentiment mode complex network of online public opinions (SMCOP) with modes as nodes and the conversion relation in chronological order between different types of modes as edges. We calculated the strength, k-plex clique, clustering coefficient and betweenness centrality of the SMCOP. The results show that the strength distribution obeys power law. Most posts’ sentiments are weakly positive and neutral, whereas few are strongly negative. There are weakly positive subgroups and neutral subgroups with ppppp and ooooo as the core mode, respectively. Few modes have larger betweenness centrality values and most modes convert to each other with these higher betweenness centrality modes as mediums. Therefore, the relevant person or institutes can take measures to lead people’s sentiments regarding online hot events according to the sentiment diffusion mechanism. PMID:26462230

  13. Clique size and network characteristics in hyperlink cinema. Constraints of evolved psychology.

    PubMed

    Krems, Jaimie Arona; Dunbar, R I M

    2013-12-01

    Hyperlink cinema is an emergent film genre that seeks to push the boundaries of the medium in order to mirror contemporary life in the globalized community. Films in the genre thus create an interacting network across space and time in such a way as to suggest that people's lives can intersect on scales that would not have been possible without modern technologies of travel and communication. This allows us to test the hypothesis that new kinds of media might permit us to break through the natural cognitive constraints that limit the number and quality of social relationships we can manage in the conventional face-to-face world. We used network analysis to test this hypothesis with data from 12 hyperlink films, using 10 motion pictures from a more conventional film genre as a control. We found few differences between hyperlink cinema films and the control genre, and few differences between hyperlink cinema films and either the real world or classical drama (e.g., Shakespeare's plays). Conversation group size seems to be especially resilient to alteration. It seems that, despite many efficiency advantages, modern media are unable to circumvent the constraints imposed by our evolved psychology.

  14. Correlations between Community Structure and Link Formation in Complex Networks

    PubMed Central

    Liu, Zhen; He, Jia-Lin; Kapoor, Komal; Srivastava, Jaideep

    2013-01-01

    Background: Links in complex networks commonly represent specific ties between pairs of nodes, such as protein-protein interactions in biological networks or friendships in social networks. However, understanding the mechanism of link formation in complex networks is a long-standing challenge for network analysis and data mining. Methodology/Principal Findings: Links in complex networks have a tendency to cluster locally and form so-called communities. This widely existing phenomenon reflects some underlying mechanism of link formation. To study the correlations between community structure and link formation, we present a general computational framework including a theory for network partitioning and link probability estimation. Our approach enables us to accurately identify missing links in partially observed networks in an efficient way. The links having high connection likelihoods in the communities reveal that links are formed preferentially to create cliques and accordingly promote the clustering level of the communities. The experimental results verify that such a mechanism can be well captured by our approach. Conclusions/Significance: Our findings provide a new insight into understanding how links are created in the communities. The computational framework opens a wide range of possibilities to develop new approaches and applications, such as community detection and missing link prediction. PMID:24039818

  15. Multiple-hopping trajectories near a rotating asteroid

    NASA Astrophysics Data System (ADS)

    Shen, Hong-Xin; Zhang, Tian-Jiao; Li, Zhao; Li, Heng-Nian

    2017-03-01

    We present a study of the transfer orbits connecting landing points of irregularly shaped asteroids. The landing points do not touch the surface of the asteroids and are chosen several meters above the surface. The ant colony optimization technique is used to calculate the multiple-hopping trajectories near an arbitrary irregular asteroid. This new method has three steps: (1) the search for a maximal clique of candidate target landing points; (2) leg optimization connecting all landing point pairs; and (3) hopping sequence optimization. In particular, this method is applied to asteroids 433 Eros and 216 Kleopatra. We impose a critical constraint on the target landing points to allow for extensive exploration of the asteroid: the relative distance between all the arrived target positions should be larger than a minimum allowed value. Ant colony optimization is applied to find the set and sequence of targets, and the differential evolution algorithm is used to solve for the hopping orbits. The minimum velocity-increment tours of hopping trajectories connecting all the landing positions are obtained by ant colony optimization. The results from different-sized asteroids indicate that the cost of the minimum velocity-increment tour depends on the size of the asteroid.
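
    Step (1) above is a direct clique computation: candidate points are nodes, two points are joined when their separation meets the minimum-distance constraint, and a maximum clique is then a largest mutually well-separated set of landing points. A sketch with random points, using NetworkX's maximal-clique enumeration (fine for small instances):

    ```python
    import itertools
    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(3)
    points = rng.uniform(0.0, 10.0, size=(20, 2))   # candidate landing points
    d_min = 3.0                                      # minimum allowed separation

    G = nx.Graph()
    G.add_nodes_from(range(len(points)))
    for i, j in itertools.combinations(range(len(points)), 2):
        if np.linalg.norm(points[i] - points[j]) >= d_min:
            G.add_edge(i, j)

    # A maximum clique = a largest set of pairwise well-separated points.
    best = max(nx.find_cliques(G), key=len)
    print(f"{len(best)} mutually separated points:", sorted(best))
    ```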

  16. Women favour dyadic relationships, but men prefer clubs: cross-cultural evidence from social networking.

    PubMed

    David-Barrett, Tamas; Rotkirch, Anna; Carney, James; Behncke Izquierdo, Isabel; Krems, Jaimie A; Townley, Dylan; McDaniell, Elinor; Byrne-Smith, Anna; Dunbar, Robin I M

    2015-01-01

    The ability to create lasting, trust-based friendships makes it possible for humans to form large and coherent groups. The recent literature on the evolution of sociality and on the network dynamics of human societies suggests that large human groups have a layered structure generated by emotionally supported social relationships. There are also gender differences in adult social style which may involve different trade-offs between the quantity and quality of friendships. Although many have suggested that females tend to focus on intimate relations with a few other females, while males build larger, more hierarchical coalitions, the existence of such gender differences is disputed and data from adults is scarce. Here, we present cross-cultural evidence for gender differences in the preference for close friendships. We use a sample of ∼112,000 profile pictures from nine world regions posted on a popular social networking site to show that, in self-selected displays of social relationships, women favour dyadic relations, whereas men favour larger, all-male cliques. These apparently different solutions to quality-quantity trade-offs suggest a universal and fundamental difference in the function of close friendships for the two sexes.

  17. Power and Efficiency.

    ERIC Educational Resources Information Center

    Boyd, James N.

    1991-01-01

    Presents a mathematical problem that, when examined and generalized, develops the relationships between power and efficiency in energy transfer. Offers four examples of simple electrical and mechanical systems to illustrate the principle that maximum power occurs at 50 percent efficiency. (MDH)
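
    The underlying calculation is the standard maximum power transfer derivation (a textbook argument consistent with the abstract, not text from the article): for a source of EMF V and internal resistance R_s driving a load R_L,

      P_L = \frac{V^2 R_L}{(R_s + R_L)^2}, \qquad \frac{dP_L}{dR_L} = 0 \;\Rightarrow\; R_L = R_s,

    and at that operating point the efficiency is \eta = R_L / (R_s + R_L) = 1/2, i.e. maximum power is delivered at exactly 50 percent efficiency.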

  18. Ultraviolet Spectrometer and Polarimeter (UVSP) software development and hardware tests for the solar maximum mission

    NASA Technical Reports Server (NTRS)

    1984-01-01

    An analysis of UVSP wavelength drive hardware, problems, and recovery procedures; radiative power loss from solar plasmas; and correlations between observed UV brightness and inferred photospheric currents are given.

  19. Maximum margin multiple instance clustering with applications to image and text clustering.

    PubMed

    Zhang, Dan; Wang, Fei; Si, Luo; Li, Tao

    2011-05-01

    In multiple instance learning problems, patterns are often given as bags, and each bag consists of some instances. Most existing research in the area focuses on multiple instance classification and multiple instance regression, while very limited work has been conducted on multiple instance clustering (MIC). This paper formulates a novel framework, maximum margin multiple instance clustering (M(3)IC), for MIC. However, it is impractical to directly solve the optimization problem of M(3)IC. Therefore, M(3)IC is relaxed in this paper to enable an efficient optimization solution with a combination of the constrained concave-convex procedure and the cutting plane method. Furthermore, this paper presents some important properties of the proposed method and discusses its relationship with some other related methods. An extensive set of empirical results is shown to demonstrate the advantages of the proposed method against existing research in both effectiveness and efficiency.

  20. Numerical Experimentation with Maximum Likelihood Identification in Static Distributed Systems

    NASA Technical Reports Server (NTRS)

    Scheid, R. E., Jr.; Rodriguez, G.

    1985-01-01

    Many important issues in the control of large space structures are intimately related to the fundamental problem of parameter identification. One might also ask how well this identification process can be carried out in the presence of noisy data, since no sensor system is perfect. With these considerations in mind, the algorithms herein are designed to treat both uncertainties in the modeling and uncertainties in the data. The analytical aspects of maximum likelihood identification are considered in some detail in another paper. The questions relevant to the implementation of these schemes are dealt with here, particularly as they apply to models of large space structures. The emphasis is on the influence of the infinite-dimensional character of the problem on finite-dimensional implementations of the algorithms. Areas of current and future analysis that involve the interplay between error analysis and possible truncations of the state and parameter spaces are highlighted.

  1. Hybrid genetic algorithm in the Hopfield network for maximum 2-satisfiability problem

    NASA Astrophysics Data System (ADS)

    Kasihmuddin, Mohd Shareduwan Mohd; Sathasivam, Saratha; Mansor, Mohd. Asyraf

    2017-08-01

    Heuristic methods are designed to find optimal solutions more quickly than classical methods, which can be too complex to apply. In this study, a hybrid approach that utilizes a Hopfield network and a genetic algorithm for solving the maximum 2-satisfiability problem (MAX-2SAT) is proposed. The Hopfield neural network is used to minimize logical inconsistency in interpretations of logic clauses or programs. The genetic algorithm (GA) exploits recombination and reproduction to produce better solutions. Simulations with and without the genetic algorithm were examined using Microsoft Visual C++ 2013 Express. The performance of both search techniques on MAX-2SAT was evaluated based on the global minima ratio, the ratio of satisfied clauses, and computation time. The results obtained from the computer simulation demonstrate the effectiveness and acceleration features of the genetic algorithm for MAX-2SAT in the Hopfield network.
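
    To make the MAX-2SAT objective concrete, a brute-force reference solver is sketched below (purely illustrative; it is not the Hopfield/GA hybrid studied in the paper):

      from itertools import product

      # A clause is a pair of literals; literal +k means variable k is
      # true, -k means variable k is false.
      def satisfied(clauses, assign):
          def lit(l):  # truth value of one literal under the assignment
              return assign[abs(l) - 1] if l > 0 else not assign[abs(l) - 1]
          return sum(lit(a) or lit(b) for a, b in clauses)

      def max2sat_bruteforce(clauses, n_vars):
          best = max(product([False, True], repeat=n_vars),
                     key=lambda a: satisfied(clauses, a))
          return best, satisfied(clauses, best)

      clauses = [(1, 2), (-1, 2), (1, -2), (-1, -2)]  # at most 3 satisfiable
      assign, n_sat = max2sat_bruteforce(clauses, 2)
      print(assign, n_sat, n_sat / len(clauses))  # ratio of satisfied clauses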

  2. EFFECTS OF LASER RADIATION ON MATTER: Maximum depth of keyhole melting of metals by a laser beam

    NASA Astrophysics Data System (ADS)

    Pinsker, V. A.; Cherepanov, G. P.

    1990-11-01

    A calculation is reported of the maximum depth and diameter of a narrow crater formed in a stationary metal target exposed to high-power cw CO2 laser radiation. The energy needed for erosion of a unit volume is assumed to be constant and the energy losses experienced by the beam in the vapor-gas channel are ignored. The heat losses in the metal are allowed for by an analytic solution of the three-dimensional boundary-value heat-conduction problem of the temperature field in the vicinity of a thin but long crater with a constant temperature on its surface. An approximate solution of this problem by a method proposed earlier by one of the present authors was tested on a computer. The dimensions of the thin crater were found to be very different from those obtained earlier subject to a less rigorous allowance for the heat losses.

  3. Installation of Multiple Application X-ray Imaging Undulator Microscope (MAXIMUM) at ALS: Final report, 8/15/95-8/15/96

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-12-31

    MAXIMUM is short for Multiple Application X-ray IMaging Undulator Microscope, a project started in 1988 by our group at the Synchrotron Radiation Center of the University of Wisconsin-Madison. It is a scanning x-ray photoemission microscope that uses a multilayer-coated Schwarzschild objective as the focusing element. It was designed primarily for materials science studies of lateral variations in surface chemistry. Suitable problems include: lateral inhomogeneities in Schottky barrier formation, heterojunction formation, patterned samples and devices, insulating samples. Any system which has interesting properties that are not uniform as a function of spatial dimension can potentially be studied with MAXIMUM. 6 figs., 3 tabs.

  4. The use of a numerical method to justify the criteria for the maximum settlement of the tank foundation

    NASA Astrophysics Data System (ADS)

    Tarasenko, Alexander; Chepur, Petr; Gruchenkova, Alesya

    2017-11-01

    The article examines the problem of assessing the permissible values of uneven settlement for a vertical steel tank base and foundation. A numerical experiment was performed using a finite element model of the tank. The model took into account the geometric shape of the structure and its additional stiffening elements that affect the stress-strain state of the tank. An equation was obtained that allowed determining the maximum possible deformation of the bottom outer contour during uneven settlement. Depending on the length of the uneven settlement zone, the values of the permissible settlement of the tank base were determined. The article proposes new values of the maximum permissible tank settlement with additional stiffening elements.

  5. MAP Estimators for Piecewise Continuous Inversion

    DTIC Science & Technology

    2016-08-08

    MAP estimators for piecewise continuous inversion. M. M. Dunlop and A. M. Stuart, Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK. Published 8 August 2016. Abstract: We study the inverse problem of estimating a field u_a from data comprising a finite set of nonlinear functionals of u_a... It is then natural to study maximum a posteriori (MAP) estimators. Recently (Dashti et al 2013 Inverse Problems 29 095017) it has been shown that MAP

  6. New displacement-based methods for optimal truss topology design

    NASA Technical Reports Server (NTRS)

    Bendsoe, Martin P.; Ben-Tal, Aharon; Haftka, Raphael T.

    1991-01-01

    Two alternate methods for maximum stiffness truss topology design are presented. The ground structure approach is used, and the problem is formulated in terms of displacements and bar areas. This large, nonconvex optimization problem can be solved by a simultaneous analysis and design approach. Alternatively, an equivalent, unconstrained, and convex problem in the displacements only can be formulated, and this problem can be solved by a nonsmooth, steepest descent algorithm. In both methods, the explicit solving of the equilibrium equations and the assembly of the global stiffness matrix are circumvented. A large number of examples have been studied, showing the attractive features of topology design as well as exposing interesting features of optimal topologies.

  7. A Modified Artificial Bee Colony Algorithm for p-Center Problems

    PubMed Central

    Yurtkuran, Alkın

    2014-01-01

    The objective of the p-center problem is to locate p centers on a network such that the maximum of the distances from each node to its nearest center is minimized. The artificial bee colony (ABC) algorithm is a swarm-based meta-heuristic that mimics the foraging behavior of honey bee colonies. This study proposes a modified ABC algorithm that benefits from a variety of search strategies to balance exploration and exploitation. Moreover, random key-based coding schemes are used to solve the p-center problem effectively. The proposed algorithm is compared to state-of-the-art techniques using different benchmark problems, and computational results reveal that the proposed approach is very efficient. PMID:24616648
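
    For contrast, the p-center objective can be made concrete with the classical farthest-first (Gonzalez) heuristic, a simple 2-approximation; this sketch does not reproduce the paper's ABC algorithm:

      def p_center_greedy(dist, p):
          n = len(dist)
          centers = [0]                       # arbitrary first center
          while len(centers) < p:
              # next center: the node farthest from its nearest center
              far = max(range(n),
                        key=lambda v: min(dist[v][c] for c in centers))
              centers.append(far)
          # p-center objective: worst node-to-nearest-center distance
          objective = max(min(dist[v][c] for c in centers) for v in range(n))
          return centers, objective

      # toy symmetric distance matrix on 4 nodes
      D = [[0, 2, 9, 8],
           [2, 0, 7, 6],
           [9, 7, 0, 3],
           [8, 6, 3, 0]]
      print(p_center_greedy(D, p=2))  # ([0, 2], 3)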

  8. Comparison and Enumeration of Chemical Graphs

    PubMed Central

    Akutsu, Tatsuya; Nagamochi, Hiroshi

    2013-01-01

    Chemical compounds are usually represented as graph structured data in computers. In this review article, we overview several graph classes relevant to chemical compounds and the computational complexities of several fundamental problems for these graph classes. In particular, we consider the following problems: determining whether two chemical graphs are identical, determining whether one input chemical graph is a part of the other input chemical graph, finding a maximum common part of two input graphs, finding a reaction atom mapping, enumerating possible chemical graphs, and enumerating stereoisomers. We also discuss the relationship between the fifth problem and kernel functions for chemical compounds. PMID:24688697

  9. National Space Transportation Systems Program mission report

    NASA Technical Reports Server (NTRS)

    Collins, M. A., Jr.; Aldrich, A. D.; Lunney, G. S.

    1984-01-01

    The STS 41-C National Space Transportation Systems Program Mission Report contains a summary of the major activities and accomplishments of the eleventh Shuttle flight and fifth flight of the OV-099 vehicle, Challenger. Also summarized are the significant problems that occurred during STS 41-C, and a problem tracking list that is a complete list of all problems that occurred during the flight. The major objectives of flight STS 41-C were to successfully deploy the LDEF (long duration exposure facility) and retrieve, repair and redeploy the SMM (Solar Maximum Mission) spacecraft, and perform functions of IMAX and Cinema 360 cameras.

  10. Dynamical Networks Characterization of Space Weather Events

    NASA Astrophysics Data System (ADS)

    Orr, L.; Chapman, S. C.; Dods, J.; Gjerloev, J. W.

    2017-12-01

    Space weather can cause disturbances to satellite systems, impacting navigation technology and telecommunications; it can cause power loss and aviation disruption. A central aspect of the earth's magnetospheric response to space weather events is large-scale, rapid change in ionospheric current patterns. Space weather is highly dynamic and there are still many controversies about how the current system evolves in time. The recent SuperMAG initiative collates ground-based vector magnetic field time series from over 200 magnetometers with 1-minute temporal resolution. In principle this combined dataset is an ideal candidate for quantification using dynamical networks. Network properties and parameters allow us to characterize the time dynamics of the full spatiotemporal pattern of the ionospheric current system. However, applying network methodologies to physical data presents new challenges. We establish whether a given pair of magnetometers are connected in the network by calculating their canonical cross correlation. The magnetometers are connected if their cross correlation exceeds a threshold. In our physical time series this threshold needs to be both station specific, as it varies with (non-linear) individual station sensitivity and location, and able to vary with season, which affects ground conductivity. Additionally, the earth rotates and therefore the ground stations move significantly on the timescales of geomagnetic disturbances. The magnetometers are non-uniformly spatially distributed. We will present new methodology which addresses these problems and in particular achieves dynamic normalization of the physical time series in order to form the network. Correlated disturbances across the magnetometers capture transient currents. Once the dynamical network has been obtained [1][2] from the full magnetometer data set, it can be used to directly identify detailed inferred transient ionospheric current patterns and track their dynamics. We will show our first results that use network properties such as cliques and clustering coefficients to map these highly dynamic changes in ionospheric current patterns. [1] Dods et al, J. Geophys. Res. 120, doi:10.1002/2015JA02 (2015). [2] Dods et al, J. Geophys. Res. 122, doi:10.1002/2016JA02 (2017).
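
    A simplified sketch of the network construction follows: plain Pearson correlation with one global threshold stands in for the canonical cross correlation and the station-specific, seasonally varying thresholds described above (numpy, networkx, and the synthetic series are assumptions):

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(0)
      base = rng.standard_normal(500)                # shared disturbance
      series = [base + 0.8 * rng.standard_normal(500) for _ in range(6)]
      series += [rng.standard_normal(500) for _ in range(4)]  # quiet stations

      corr = np.corrcoef(series)
      G = nx.Graph()
      G.add_nodes_from(range(len(series)))
      for i in range(len(series)):
          for j in range(i + 1, len(series)):
              if corr[i, j] > 0.4:                   # correlation threshold
                  G.add_edge(i, j)

      print(nx.average_clustering(G))
      print(max(nx.find_cliques(G), key=len))        # largest correlated cluster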

  11. PUZZLE - A program for computer-aided design of printed circuit artwork

    NASA Technical Reports Server (NTRS)

    Harrell, D. A. W.; Zane, R.

    1971-01-01

    Program assists in solving spacing problems encountered in printed circuit /PC/ design. It is intended to have maximum use for two-sided PC boards carrying integrated circuits, and also aids design of discrete component circuits.

  12. Application of the Bootstrap Methods in Factor Analysis.

    ERIC Educational Resources Information Center

    Ichikawa, Masanori; Konishi, Sadanori

    1995-01-01

    A Monte Carlo experiment was conducted to investigate the performance of bootstrap methods in normal theory maximum likelihood factor analysis when the distributional assumption was satisfied or unsatisfied. Problems arising with the use of bootstrap methods are highlighted. (SLD)

  13. Minimum fuel coplanar aeroassisted orbital transfer using collocation and nonlinear programming

    NASA Technical Reports Server (NTRS)

    Shi, Yun Yuan; Young, D. H.

    1991-01-01

    The fuel optimal control problem arising in coplanar orbital transfer employing aeroassisted technology is addressed. The mission involves the transfer from high energy orbit (HEO) to low energy orbit (LEO) without plane change. The basic approach here is to employ a combination of propulsive maneuvers in space and aerodynamic maneuvers in the atmosphere. The basic sequence of events for the coplanar aeroassisted HEO to LEO orbit transfer consists of three phases. In the first phase, the transfer begins with a deorbit impulse at HEO which injects the vehicle into an elliptic transfer orbit with perigee inside the atmosphere. In the second phase, the vehicle is optimally controlled by lift and drag modulation to satisfy heating constraints and to exit the atmosphere with the desired flight path angle and velocity so that the apogee of the exit orbit is the altitude of the desired LEO. Finally, a second impulse is required to circularize the orbit at LEO. The performance index is the maximum final mass. Simulation results show that the coplanar aerocapture is quite different from the case where orbital plane changes are made inside the atmosphere. In the latter case, the vehicle has to penetrate deeper into the atmosphere to perform the desired orbital plane change. For the coplanar case, the vehicle needs only to penetrate the atmosphere deep enough to reduce the exit velocity so the vehicle can be captured at the desired LEO. The peak heating rates are lower and the entry corridor is wider. From the thermal protection point of view, the coplanar transfer may be desirable. Parametric studies also show that the maximum peak heating rates and the entry corridor width are functions of the maximum lift coefficient. The problem is solved using a direct optimization technique which uses piecewise polynomial representation for the states and controls and collocation to represent the differential equations. This converts the optimal control problem into a nonlinear programming problem which is solved numerically by using a modified version of NPSOL. Solutions were obtained for the described problem for cases with and without heating constraints. The method appears to be more robust than other optimization methods. In addition, the method can handle complex dynamical constraints.

  14. Entropic Inference

    NASA Astrophysics Data System (ADS)

    Caticha, Ariel

    2011-03-01

    In this tutorial we review the essential arguments behind entropic inference. We focus on the epistemological notion of information and its relation to the Bayesian beliefs of rational agents. The problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of maximum relative entropy (ME) includes as special cases both MaxEnt and Bayes' rule, and therefore unifies the two themes of these workshops, the maximum entropy and Bayesian methods, into a single general inference scheme.
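
    In symbols (standard notation paraphrasing the tutorial, not a verbatim excerpt), the update selects, among all distributions p compatible with the new information, the one maximizing the relative entropy with respect to the prior q:

      S[p \mid q] = -\int dx\, p(x) \ln \frac{p(x)}{q(x)},

    maximized subject to the constraints; a uniform prior q recovers MaxEnt, while constraints carrying data recover Bayes' rule.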

  15. Maximum performance of solar heat engines: Discussion of thermodynamic availability and other second law considerations and their implications

    NASA Astrophysics Data System (ADS)

    Boehm, R. F.

    1985-09-01

    A review of thermodynamic principles is given in an effort to see if these concepts may indicate possibilities for improvements in solar central receiver power plants. Aspects related to rate limitations in cycles, thermodynamic availability of solar radiation, and sink temperature considerations are noted. It appears that considerably higher instantaneous plant efficiencies are possible by raising the maximum temperature and lowering the minimum temperature of the cycles. Of course, many practical engineering problems will have to be solved to realize the promised benefits.
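
    The bound behind that observation is the Carnot limit (a standard thermodynamic result, not a number computed in the report):

      \eta_{\max} = 1 - \frac{T_{\min}}{T_{\max}},

    so raising the maximum cycle temperature and lowering the minimum (sink) temperature both raise the ceiling on instantaneous plant efficiency.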

  16. Optimization of the interplanetary trajectories of spacecraft with a solar electric propulsion power plant of minimal power

    NASA Astrophysics Data System (ADS)

    Ivanyukhin, A. V.; Petukhov, V. G.

    2016-12-01

    The problem of optimizing the interplanetary trajectories of a spacecraft (SC) with a solar electric propulsion system (SEPS) is examined. The permissible minimum of the power plant's output required for a successful flight is investigated. Permissible ranges of thrust and exhaust velocity are analyzed for the given range of flight time and final mass of the spacecraft. The optimization is performed according to Pontryagin's maximum principle, and the continuation method is used to reduce the boundary-value problem of the maximum principle to a Cauchy problem and to study the dependence of the solution on the parameters. This combination yields a robust algorithm that reduces the problem of trajectory optimization to the numerical integration of differential equations by the continuation method.

  17. A Parallel Biological Optimization Algorithm to Solve the Unbalanced Assignment Problem Based on DNA Molecular Computing

    PubMed Central

    Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian

    2015-01-01

    The unbalanced assignment problem (UAP) is to optimally resolve the problem of assigning n jobs to m individuals (m < n), such that the minimum cost or maximum profit is obtained. It is a vitally important NP-complete problem in operations management and applied mathematics, with numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We reasonably design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range and O(mn) time. We extend the application of DNA molecular operations and exploit their inherent parallelism to reduce the complexity of the computation. PMID:26512650
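
    For comparison, the same problem can be solved classically by a common reduction: replicate each individual enough times that every job can be assigned, then run the Hungarian method. The scipy call and toy costs below are assumptions; this sketch does not reproduce the DNA algorithm:

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      def solve_uap(cost):                  # cost: m x n, m < n, minimize
          m, n = cost.shape
          reps = -(-n // m)                 # ceil(n / m) copies per individual
          big = np.tile(cost, (reps, 1))    # replicated cost matrix
          rows, cols = linear_sum_assignment(big)
          return [(r % m, c) for r, c in zip(rows, cols)]  # (individual, job)

      cost = np.array([[4., 1., 3., 5.],
                       [2., 0., 5., 1.]])   # 2 individuals, 4 jobs
      print(solve_uap(cost))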

  18. Deterministic physical systems under uncertain initial conditions: the case of maximum entropy applied to projectile motion

    NASA Astrophysics Data System (ADS)

    Montecinos, Alejandra; Davis, Sergio; Peralta, Joaquín

    2018-07-01

    The kinematics and dynamics of deterministic physical systems have been a foundation of our understanding of the world since Galileo and Newton. For real systems, however, uncertainty is largely present via external forces such as friction or lack of precise knowledge about the initial conditions of the system. In this work we focus on the latter case and describe the use of inference methodologies in solving the statistical properties of classical systems subject to uncertain initial conditions. In particular we describe the application of the formalism of maximum entropy (MaxEnt) inference to the problem of projectile motion, given information about the average horizontal range over many realizations. By using MaxEnt we can invert the problem and use the provided information on the average range to reduce the original uncertainty in the initial conditions. Also, additional insight into the initial condition's probabilities, and the projectile path distribution itself, can be achieved based on the value of the average horizontal range. The wide applicability of this procedure, as well as its ease of use, reveals a useful tool with which to revisit a large number of physics problems, from classrooms to frontier research.
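
    Schematically (the standard MaxEnt form of such a constraint, not the paper's full derivation): with a flat prior over launch conditions and the single constraint that the average horizontal range equal a given value, the updated distribution takes the Gibbs-like form

      p(v_0, \theta) \propto \exp\big(-\lambda\, R(v_0, \theta)\big), \qquad R(v_0, \theta) = \frac{v_0^2 \sin 2\theta}{g},

    with the multiplier \lambda fixed by requiring \langle R \rangle = \bar{R}.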

  19. A binary genetic programing model for teleconnection identification between global sea surface temperature and local maximum monthly rainfall events

    NASA Astrophysics Data System (ADS)

    Danandeh Mehr, Ali; Nourani, Vahid; Hrnjica, Bahrudin; Molajou, Amir

    2017-12-01

    The effectiveness of genetic programming (GP) for solving regression problems in hydrology has been recognized in recent studies. However, its capability to solve classification problems has not been sufficiently explored so far. This study develops and applies a novel classification-forecasting model, namely Binary GP (BGP), for teleconnection studies between sea surface temperature (SST) variations and maximum monthly rainfall (MMR) events. The BGP integrates certain types of data pre-processing and post-processing methods with conventional GP engine to enhance its ability to solve both regression and classification problems simultaneously. The model was trained and tested using SST series of Black Sea, Mediterranean Sea, and Red Sea as potential predictors as well as classified MMR events at two locations in Iran as predictand. Skill of the model was measured in regard to different rainfall thresholds and SST lags and compared to that of the hybrid decision tree-association rule (DTAR) model available in the literature. The results indicated that the proposed model can identify potential teleconnection signals of surrounding seas beneficial to long-term forecasting of the occurrence of the classified MMR events.

  20. Identification of mutated driver pathways in cancer using a multi-objective optimization model.

    PubMed

    Zheng, Chun-Hou; Yang, Wu; Chong, Yan-Wen; Xia, Jun-Feng

    2016-05-01

    New-generation high-throughput technologies, including next-generation sequencing, have been extensively applied to solve biological problems. As a result, large cancer genomics projects such as the Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium are producing large amounts of rich and diverse data in multiple cancer types. The identification of mutated driver genes and driver pathways from these data is a significant challenge. Genome aberrations in cancer cells can be divided into two types: random 'passenger mutations' and functional 'driver mutations'. In this paper, we introduce a Multi-objective Optimization model based on a Genetic Algorithm (MOGA) to solve the maximum weight submatrix problem, which can be employed to identify driver genes and driver pathways promoting cancer proliferation. The maximum weight submatrix problem, defined to find mutated driver pathways, is based on two specific properties, i.e., high coverage and high exclusivity. The multi-objective optimization model can adjust the trade-off between high coverage and high exclusivity. We propose an integrative model combining gene expression data and mutation data to improve the performance of the MOGA algorithm in a biological context. Copyright © 2016 Elsevier Ltd. All rights reserved.
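
    One standard scalar weight from the maximum-weight-submatrix literature balances coverage against exclusivity as W(M) = 2|Gamma(M)| - sum_g |Gamma(g)|, where Gamma(g) is the set of patients in which gene g is mutated. The sketch below is illustrative only, since the MOGA paper treats coverage and exclusivity as two separate objectives:

      # Weight of a candidate gene set: coverage rewarded, overlap penalized.
      def submatrix_weight(mutations, gene_set):
          covered = set().union(*(mutations[g] for g in gene_set))  # Gamma(M)
          total = sum(len(mutations[g]) for g in gene_set)  # counts overlap
          return 2 * len(covered) - total

      mutations = {                     # gene -> patients with a mutation
          "TP53": {1, 2, 3, 4},
          "MDM2": {5, 6},
          "EGFR": {1, 2, 3},            # overlaps TP53: low exclusivity
      }
      print(submatrix_weight(mutations, ["TP53", "MDM2"]))  # 12 - 6 = 6
      print(submatrix_weight(mutations, ["TP53", "EGFR"]))  # 8 - 7 = 1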

  1. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.

  2. Large-scale optimization-based non-negative computational framework for diffusion equations: Parallel implementation and performance studies

    DOE PAGES

    Chang, Justin; Karra, Satish; Nakshatrala, Kalyana B.

    2016-07-26

    It is well-known that the standard Galerkin formulation, which is often the formulation of choice under the finite element method for solving self-adjoint diffusion equations, does not meet maximum principles and the non-negative constraint for anisotropic diffusion equations. Recently, optimization-based methodologies that satisfy maximum principles and the non-negative constraint for steady-state and transient diffusion-type equations have been proposed. To date, these methodologies have been tested only on small-scale academic problems. The purpose of this paper is to systematically study the performance of the non-negative methodology in the context of high performance computing (HPC). PETSc and TAO libraries are, respectively, used for the parallel environment and optimization solvers. For large-scale problems, it is important for computational scientists to understand the computational performance of current algorithms available in these scientific libraries. The numerical experiments are conducted on the state-of-the-art HPC systems, and a single-core performance model is used to better characterize the efficiency of the solvers. Furthermore, our studies indicate that the proposed non-negative computational framework for diffusion-type equations exhibits excellent strong scaling for real-world large-scale problems.

  3. A multiple maximum scatter difference discriminant criterion for facial feature extraction.

    PubMed

    Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei

    2007-12-01

    The maximum scatter difference (MSD) discriminant criterion is a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address this problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart, the multiple MSD (MMSD) discriminant criterion, for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark database FERET show that the MMSD outperforms state-of-the-art facial feature-extraction methods such as the null space method, direct linear discriminant analysis (LDA), eigenface, Fisherface, and complete LDA.
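
    In the usual notation, with S_b and S_w the between-class and within-class scatter matrices, the scatter-difference idea replaces the generalized Rayleigh quotient by a difference (the standard MSD form with a balance constant C; a paraphrase, not an excerpt):

      J(w) = w^{T} S_b\, w - C\, w^{T} S_w\, w \quad \text{instead of} \quad J(w) = \frac{w^{T} S_b\, w}{w^{T} S_w\, w},

    so maximizing J never requires inverting S_w, which is typically singular in small-sample-size problems.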

  4. AGARD Manual on Aeroelasticity in Axial-Flow Turbomachines. Volume 2. Structural Dynamics and Aeroelasticity,

    DTIC Science & Technology

    1988-06-01

    [Figure 12: Prediction of response due to second-stage vane, plotted against vane overall total-static expansion ratio.] ...assessment methods, written by Armstrong. The problem of lifetime prediction is reviewed by Labourdette, who also summarizes ONERA's research in... applicable to single blades and bladed assemblies. The blade fatigue problem and its assessment methods, and lifetime prediction, are considered. Aeroelastic

  5. On Computing Breakpoint Distances for Genomes with Duplicate Genes.

    PubMed

    Shao, Mingfu; Moret, Bernard M E

    2017-06-01

    A fundamental problem in comparative genomics is to compute the distance between two genomes in terms of their higher-level organization (given by genes or syntenic blocks). For two genomes without duplicate genes, we can easily define (and almost always efficiently compute) a variety of distance measures, but the problem is NP-hard under most models when genomes contain duplicate genes. To tackle duplicate genes, three formulations (exemplar, maximum matching, and any matching) have been proposed, all of which aim to build a matching between homologous genes so as to minimize some distance measure. Of the many distance measures, the breakpoint distance (the number of nonconserved adjacencies) was the first to be studied and remains of significant interest because of its simplicity and model-free property. The three breakpoint distance problems corresponding to the three formulations have been widely studied. Although we provided a solution for the exemplar problem last year that runs very fast on full genomes, computing optimal solutions for the other two problems has remained challenging. In this article, we describe very fast, exact algorithms for these two problems. Our algorithms rely on a compact integer linear program that we further simplify by developing an algorithm to remove variables, based on new results on the structure of adjacencies and matchings. Through extensive experiments using both simulations and biological data sets, we show that our algorithms run very fast (in seconds) on mammalian genomes and scale well beyond. We also apply these algorithms (as well as the classic orthology tool MSOAR) to create orthology assignments, then compare their quality in terms of both accuracy and coverage. We find that our algorithm for the "any matching" formulation significantly outperforms other methods in terms of accuracy while achieving nearly maximum coverage.
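
    The duplicate-free case, to which the matching formulations above reduce once a matching is fixed, is easy to compute; a minimal sketch (unsigned toy genomes, not the paper's algorithm):

      # An adjacency is an unordered pair of consecutive genes; a breakpoint
      # is an adjacency of genome A that is absent from genome B.
      def adjacencies(genome):
          return {frozenset(pair) for pair in zip(genome, genome[1:])}

      def breakpoint_distance(a, b):
          return len(adjacencies(a) - adjacencies(b))

      A = [1, 2, 3, 4, 5]
      B = [1, 3, 2, 4, 5]
      print(breakpoint_distance(A, B))  # 2: (1,2) and (3,4) not adjacent in B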

  6. Use of typical moisture : density curves.

    DOT National Transportation Integrated Search

    1965-05-01

    One of the many problems associated with compaction control on any construction project is the time-consuming task of obtaining the maximum density and optimum moisture content of soils, both in the laboratory and in the field. In addition to the time ele...

  7. Interplanetary Trajectories, Encke Method (ITEM)

    NASA Technical Reports Server (NTRS)

    Whitlock, F. H.; Wolfe, H.; Lefton, L.; Levine, N.

    1972-01-01

    A modified program has been developed using an improved variation of the Encke method which avoids accumulation of round-off errors and avoids numerical ambiguities arising from near-circular orbits of low inclination. A variety of interplanetary trajectory problems can be computed with maximum accuracy and efficiency.

  8. Maximum parsimony, substitution model, and probability phylogenetic trees.

    PubMed

    Weng, J F; Thomas, D A; Mareels, I

    2011-01-01

    The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM), and Maximum Likelihood (ML), of which the MP method is the best studied and most popular. In the MP method the optimization criterion is the number of substitutions of the nucleotides, computed from the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it counts only the substitutions observable at the current time; all the unobservable substitutions that actually occurred in the evolutionary history are omitted. In order to take into account the unobservable substitutions, substitution models have been established and are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees, and the trees reconstructed in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.
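
    The substitution count that classical MP minimizes can be computed on a fixed tree by Fitch's algorithm; a minimal sketch for one aligned site (binary toy tree, not taken from the paper):

      # A tree is a leaf state ('A', 'C', 'G', or 'T') or a (left, right)
      # pair; each union event in the recursion costs one substitution.
      def fitch(tree):
          if isinstance(tree, str):          # leaf: its nucleotide, zero cost
              return {tree}, 0
          (ls, lc), (rs, rc) = fitch(tree[0]), fitch(tree[1])
          common = ls & rs
          if common:                         # intersection: no substitution
              return common, lc + rc
          return ls | rs, lc + rc + 1        # disjoint: one substitution

      tree = (("A", "C"), ("C", "G"))        # one aligned site
      print(fitch(tree)[1])                  # 2 substitutions required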

  9. Superfast maximum-likelihood reconstruction for quantum tomography

    NASA Astrophysics Data System (ADS)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.

  10. A maximum power point prediction method for group control of photovoltaic water pumping systems based on parameter identification

    NASA Astrophysics Data System (ADS)

    Chen, B.; Su, J. H.; Guo, L.; Chen, J.

    2017-06-01

    This paper puts forward a maximum power estimation method based on a photovoltaic array (PVA) model to solve optimization problems in the group control of photovoltaic water pumping systems (PVWPS) at the maximum power point (MPP). The method uses an improved genetic algorithm (GA) to estimate and identify the model parameters from multiple P-V characteristic curves of the PVA model, and then corrects the identification results by the least squares method. On this basis, the irradiation level and operating temperature under any condition can be estimated, so an accurate PVA model is established and disturbance-free MPP estimation is achieved. The simulation adopts the proposed GA to determine the parameters, and the results verify the accuracy and practicability of the method.

  11. A case study of analyzing 11th graders’ problem solving ability on heat and temperature topic

    NASA Astrophysics Data System (ADS)

    Yulianawati, D.; Muslim; Hasanah, L.; Samsudin, A.

    2018-05-01

    Problem-solving ability must be acquired by students through the process of physics learning so that physics concepts become meaningful. Accordingly, this research aims to describe that ability. Metacognition contributes to students' success in solving physics problems. The research was conducted with 37 eleventh-grade science students (30 women and 7 men) from a secondary school in Bandung. The research method was a single case study with an embedded research design. The instrument was the Heat and Temperature Problem Solving Ability Test (HT-PSAT), which consists of twelve questions drawn from three context problems. The result shows that the average test score is 8.27 out of a maximum of 36. In conclusion, the eleventh graders' problem-solving ability is still below expectations. The findings imply a need to create learning situations that develop students' problem-solving ability.

  12. The maximum vector-angular margin classifier and its fast training on large datasets using a core vector machine.

    PubMed

    Hu, Wenjun; Chung, Fu-Lai; Wang, Shitong

    2012-03-01

    Although pattern classification has been extensively studied in the past decades, how to effectively carry out the corresponding training on large datasets is a problem that still requires particular attention. Many kernelized classification methods, such as SVM and SVDD, can be formulated as quadratic programming (QP) problems, but computing the associated kernel matrices requires O(n^2) (or even up to O(n^3)) computational complexity, where n is the size of the training patterns, which heavily limits the applicability of these methods to large datasets. In this paper, a new classification method called the maximum vector-angular margin classifier (MAMC) is first proposed based on the vector-angular margin to find an optimal vector c in the pattern feature space, and all the testing patterns can be classified in terms of the maximum vector-angular margin ρ between the vector c and all the training data points. Accordingly, it is proved that the kernelized MAMC can be equivalently formulated as the kernelized Minimum Enclosing Ball (MEB) problem, which leads to a distinctive merit of MAMC, i.e., it has the flexibility of controlling the sum of support vectors like v-SVC and may be extended to a maximum vector-angular margin core vector machine (MAMCVM) by connecting the core vector machine (CVM) method with MAMC, such that fast training on large datasets can be effectively achieved. Experimental results on artificial and real datasets are provided to validate the power of the proposed methods. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Maximum caliber inference of nonequilibrium processes

    NASA Astrophysics Data System (ADS)

    Otten, Moritz; Stock, Gerhard

    2010-07-01

    Thirty years ago, Jaynes suggested a general theoretical approach to nonequilibrium statistical mechanics, called maximum caliber (MaxCal) [Annu. Rev. Phys. Chem. 31, 579 (1980)]. MaxCal is a variational principle for dynamics in the same spirit that maximum entropy is a variational principle for equilibrium statistical mechanics. Motivated by the success of maximum entropy inference methods for equilibrium problems, in this work the MaxCal formulation is applied to the inference of nonequilibrium processes. That is, given some time-dependent observables of a dynamical process, one constructs a model that reproduces these input data and moreover, predicts the underlying dynamics of the system. For example, the observables could be some time-resolved measurements of the folding of a protein, which are described by a few-state model of the free energy landscape of the system. MaxCal then calculates the probabilities of an ensemble of trajectories such that on average the data are reproduced. From this probability distribution, any dynamical quantity of the system can be calculated, including population probabilities, fluxes, or waiting time distributions. After briefly reviewing the formalism, the practical numerical implementation of MaxCal in the case of an inference problem is discussed. Adopting various few-state models of increasing complexity, it is demonstrated that the MaxCal principle indeed works as a practical method of inference: The scheme is fairly robust and yields correct results as long as the input data are sufficient. As the method is unbiased and general, it can deal with any kind of time dependency such as oscillatory transients and multitime decays.

  14. Task Performance with List-Mode Data

    NASA Astrophysics Data System (ADS)

    Caucci, Luca

    This dissertation investigates the application of list-mode data to detection, estimation, and image reconstruction problems, with an emphasis on emission tomography in medical imaging. We begin by introducing a theoretical framework for list-mode data and we use it to define two observers that operate on list-mode data. These observers are applied to the problem of detecting a signal (known in shape and location) buried in a random lumpy background. We then consider maximum-likelihood methods for the estimation of numerical parameters from list-mode data, and we characterize the performance of these estimators via the so-called Fisher information matrix. Reconstruction from PET list-mode data is then considered. In a process we called "double maximum-likelihood" reconstruction, we consider a simple PET imaging system and we use maximum-likelihood methods to first estimate a parameter vector for each pair of gamma-ray photons that is detected by the hardware. The collection of these parameter vectors forms a list, which is then fed to another maximum-likelihood algorithm for volumetric reconstruction over a grid of voxels. Efficient parallel implementation of the algorithms discussed above is then presented. In this work, we take advantage of two low-cost, mass-produced computing platforms that have recently appeared on the market, and we provide some details on implementing our algorithms on these devices. We conclude this dissertation work by elaborating on a possible application of list-mode data to X-ray digital mammography. We argue that today's CMOS detectors and computing platforms have become fast enough to make X-ray digital mammography list-mode data acquisition and processing feasible.

  15. The patient-zero problem with noisy observations

    NASA Astrophysics Data System (ADS)

    Altarelli, Fabrizio; Braunstein, Alfredo; Dall'Asta, Luca; Ingrosso, Alessandro; Zecchina, Riccardo

    2014-10-01

    A belief propagation approach has been recently proposed for the patient-zero problem in SIR epidemics. The patient-zero problem consists of finding the initial source of an epidemic outbreak given observations at a later time. In this work, we study a more difficult but related inference problem, in which observations are noisy and there is confusion between observed states. In addition to studying the patient-zero problem, we also tackle the problem of completing and correcting the observations to possibly find undiscovered infected individuals and false test results. Moreover, we devise a set of equations, based on the variational expression of the Bethe free energy, to find the patient-zero along with maximum-likelihood epidemic parameters. We show, by means of simulated epidemics, that this method is able to infer details on the past history of an epidemic outbreak based solely on the topology of the contact network and a single snapshot of partial and noisy observations.

  16. Methodes entropiques appliquees au probleme inverse en magnetoencephalographie

    NASA Astrophysics Data System (ADS)

    Lapalme, Ervig

    2005-07-01

    This thesis is devoted to biomagnetic source localization using magnetoencephalography. This problem is known to have an infinite number of solutions, so methods are required to take into account anatomical and functional information on the solution. The work presented in this thesis uses the maximum entropy on the mean method to constrain the solution. This method originates from statistical mechanics and information theory. The thesis is divided into two main parts containing three chapters each. The first part reviews the magnetoencephalographic inverse problem: the theory needed to understand its context and the hypotheses for simplifying the problem. In the last chapter of this first part, the maximum entropy on the mean method is presented: its origins are explained, as well as how it is applied to our problem. The second part is the original work of this thesis, presenting three articles: one already published and two others submitted for publication. In the first article, a biomagnetic source model is developed and applied in a theoretical context, demonstrating the efficiency of the method. In the second article, we go one step further towards a realistic modeling of cerebral activation. The main priors are estimated using the magnetoencephalographic data. This method proved to be very efficient in realistic simulations. In the third article, the previous method is extended to deal with time signals, thus exploiting the excellent time resolution offered by magnetoencephalography. Compared with our previous work, the temporal method is applied to real magnetoencephalographic data from a somatotopy experiment, and the results agree with previous physiological knowledge of this kind of cognitive process.

  17. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
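
    The successive-approximations procedure with step-size 1 is what is now known as the EM algorithm for normal mixtures; a minimal one-dimensional, two-component sketch on toy data (not the paper's experiments):

      import numpy as np

      rng = np.random.default_rng(1)
      x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 200)])

      w, mu, sig = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
      for _ in range(100):
          # E-step: posterior responsibility of component 1 for each point
          p1 = w * np.exp(-0.5 * ((x - mu[0]) / sig[0]) ** 2) / sig[0]
          p2 = (1 - w) * np.exp(-0.5 * ((x - mu[1]) / sig[1]) ** 2) / sig[1]
          r = p1 / (p1 + p2)
          # M-step: reweighted mixing proportion, means, and variances
          w = r.mean()
          mu = np.array([np.average(x, weights=r),
                         np.average(x, weights=1 - r)])
          sig = np.sqrt(np.array([np.average((x - mu[0])**2, weights=r),
                                  np.average((x - mu[1])**2, weights=1 - r)]))
      print(w, mu, sig)  # approaches the generating values 0.6, (-2, 3), (1, 1)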

  18. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  19. A Novel Technique for Maximum Power Point Tracking of a Photovoltaic Based on Sensing of Array Current Using Adaptive Neuro-Fuzzy Inference System (ANFIS)

    NASA Astrophysics Data System (ADS)

    El-Zoghby, Helmy M.; Bendary, Ahmed F.

    2016-10-01

    Maximum Power Point Tracking (MPPT) is now a widely used method for increasing photovoltaic (PV) efficiency. Conventional MPPT methods have many problems concerning accuracy, flexibility, and efficiency. The MPP depends on the PV temperature and solar irradiation, which vary randomly. In this paper, an artificial-intelligence-based controller is presented through the implementation of an Adaptive Neuro-Fuzzy Inference System (ANFIS) to obtain maximum power from a PV array. The ANFIS inputs are the temperature and cell current, and the output is the optimal voltage at maximum power. During operation, the trained ANFIS senses the PV current using a suitable sensor and also senses the temperature to determine the optimal operating voltage that corresponds to the current at the MPP. This voltage is used to control the boost converter duty cycle. The MATLAB simulation results show the effectiveness of the ANFIS with sensing of the PV current in obtaining MPPT from the PV.

  20. Bayesian image reconstruction for improving detection performance of muon tomography.

    PubMed

    Wang, Guobao; Schultz, Larry J; Qi, Jinyi

    2009-05-01

    Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.

  1. Parents: Avoid Kids Foot Problems with the Right Shoes

    MedlinePlus

    ... pain, Achilles tendonitis, and even ankle sprains and stress fractures. Children with flat feet need shoes with a wide toe box, maximum arch support, and shock absorption. The best shoes to buy are oxford, lace-up shoes ...

  2. Attic Inlet Technology Update

    USDA-ARS?s Scientific Manuscript database

    Attic inlets are a popular addition for new construction and energy saving retrofits. Proper management of attic inlets is necessary to get maximum benefits from the system and reduce the likelihood of moisture-related problems in the structure. Solar energy levels were determined for the continen...

  3. ESTIMATING PROPORTION OF AREA OCCUPIED UNDER COMPLEX SURVEY DESIGNS

    EPA Science Inventory

    Estimating proportion of sites occupied, or proportion of area occupied (PAO) is a common problem in environmental studies. Typically, field surveys do not ensure that occupancy of a site is made with perfect detection. Maximum likelihood estimation of site occupancy rates when...

  4. On the quirks of maximum parsimony and likelihood on phylogenetic networks.

    PubMed

    Bryant, Christopher; Fischer, Mareike; Linz, Simone; Semple, Charles

    2017-03-21

    Maximum parsimony is one of the most frequently-discussed tree reconstruction methods in phylogenetic estimation. However, in recent years it has become more and more apparent that phylogenetic trees are often not sufficient to describe evolution accurately. For instance, processes like hybridization or lateral gene transfer that are commonplace in many groups of organisms and result in mosaic patterns of relationships cannot be represented by a single phylogenetic tree. This is why phylogenetic networks, which can display such events, are attracting more and more interest in phylogenetic research. It is therefore necessary to extend concepts like maximum parsimony from phylogenetic trees to networks. Several suggestions for possible extensions can be found in recent literature, for instance the softwired and the hardwired parsimony concepts. In this paper, we analyze the so-called big parsimony problem under these two concepts, i.e. we investigate maximum parsimony networks and analyze their properties. In particular, we show that finding a softwired maximum parsimony network is possible in polynomial time. We also show that the set of maximum parsimony networks for the hardwired definition always contains at least one phylogenetic tree. Lastly, we investigate some parallels of parsimony to different likelihood concepts on phylogenetic networks. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Multiplicative noise removal via a learned dictionary.

    PubMed

    Huang, Yu-Mei; Moisan, Lionel; Ng, Michael K; Zeng, Tieyong

    2012-11-01

    Multiplicative noise removal is a challenging image processing problem, and most existing methods are based on the maximum a posteriori formulation and the logarithmic transformation of multiplicative denoising problems into additive denoising problems. Sparse representations of images have been shown to be efficient approaches for image recovery. Following this idea, in this paper, we propose to learn a dictionary from the logarithmically transformed image and then to use it in a variational model built for noise removal. Extensive experimental results suggest that, in terms of visual quality, peak signal-to-noise ratio, and mean absolute deviation error, the proposed algorithm outperforms state-of-the-art methods.
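
    The transformation the method builds on is simple to state (the standard multiplicative noise model, with f the observed image, u the clean image, and n the noise):

      f = u \cdot n \quad \Longrightarrow \quad \log f = \log u + \log n,

    so a dictionary learned on \log f can be used with additive-denoising machinery, and exponentiating the result recovers the estimate of u.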

  6. Decision feedback equalizer for holographic data storage.

    PubMed

    Kim, Kyuhwan; Kim, Seung Hun; Koo, Gyogwon; Seo, Min Seok; Kim, Sang Woo

    2018-05-20

    Holographic data storage (HDS) has attracted much attention as a next-generation storage medium. Because HDS suffers from two-dimensional (2D) inter-symbol interference (ISI), the partial-response maximum-likelihood (PRML) method has been studied to reduce 2D ISI. However, the PRML method has various drawbacks. To solve the problems, we propose a modified decision feedback equalizer (DFE) for HDS. To prevent the error propagation problem, which is a typical problem in DFEs, we also propose a reliability factor for HDS. Various simulations were executed to analyze the performance of the proposed methods. The proposed methods showed fast processing speed after training, superior bit error rate performance, and consistency.
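
    The feedback half of a DFE is simple to sketch in one dimension: past hard decisions are convolved with the channel's postcursor taps and subtracted before slicing. A toy sketch (the channel taps, noise level, and BPSK alphabet are illustrative; the paper's equalizer is two-dimensional and adds a reliability factor):

        import numpy as np

        rng = np.random.default_rng(0)
        h = np.array([1.0, 0.5, 0.2])             # toy channel: main tap + postcursors
        bits = rng.integers(0, 2, 1000) * 2 - 1   # BPSK symbols in {-1, +1}
        rx = np.convolve(bits, h)[: len(bits)] + 0.1 * rng.standard_normal(len(bits))

        decided = np.zeros(len(bits))
        for n in range(len(bits)):
            # subtract postcursor ISI reconstructed from past decisions, then slice
            isi = sum(h[k] * decided[n - k] for k in range(1, len(h)) if n - k >= 0)
            decided[n] = 1.0 if rx[n] - isi >= 0 else -1.0

        print("bit errors:", int(np.sum(decided != bits)))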

  7. Escape rates over potential barriers: variational principles and the Hamilton-Jacobi equation

    NASA Astrophysics Data System (ADS)

    Cortés, Emilio; Espinosa, Francisco

    We describe a rigorous formalism to study some extrema statistics problems, like maximum probability events or escape rate processes, by taking into account that the Hamilton-Jacobi equation completes, in a natural way, the required set of boundary conditions of the Euler-Lagrange equation for this kind of variational problem. We apply this approach to a one-dimensional stochastic process, driven by colored noise, for a double-parabola potential, where we have one stable and one unstable steady state.

  8. Advancements in medicine from aerospace research

    NASA Technical Reports Server (NTRS)

    Wooten, F. T.

    1971-01-01

    NASA has taken the lead in implementing the concept of technology utilization, and the Technology Utilization Program is a first vital step toward ensuring that society gains maximum benefit from the costs of technology. Experience has shown that this active approach to technology transfer is unique and well received in the medical profession when appropriate problems are tackled. The problem-solving approach is a useful one at the precise time when medicine is recognizing the need for new technology.

  9. Establishment of a center of excellence for applied mathematical and statistical research

    NASA Technical Reports Server (NTRS)

    Woodward, W. A.; Gray, H. L.

    1983-01-01

    The state of the art was assessed with regard to efforts in support of the crop production estimation problem, and alternative generic proportion estimation techniques were investigated. Topics covered include modeling the greenness profile (Badhwar's model), parameter estimation using mixture models such as CLASSY, and minimum distance estimation as an alternative to maximum likelihood estimation. Approaches to the problem of obtaining proportion estimates when the underlying distributions are asymmetric are examined, including the properties of the Weibull distribution.

  10. Approximation of the ruin probability using the scaled Laplace transform inversion

    PubMed Central

    Mnatsakanov, Robert M.; Sarkisian, Khachatur; Hakobyan, Artak

    2015-01-01

    The problem of recovering the ruin probability in the classical risk model based on the scaled Laplace transform inversion is studied. It is shown how to overcome the problem of evaluating the ruin probability at large values of an initial surplus process. Comparisons of the proposed approximations with those based on Laplace transform inversion using a fixed Talbot algorithm, as well as those using the Trefethen–Weideman–Schmelzer and maximum entropy methods, are presented via a simulation study. PMID:26752796

  11. Genetic-evolution-based optimization methods for engineering design

    NASA Technical Reports Server (NTRS)

    Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.

    1990-01-01

    This paper presents the applicability of a biological model, based on genetic evolution, for engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. A two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.
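
    The three operators named above fit in a few lines; a bare-bones sketch on a continuous two-variable problem (the quadratic objective, softmax selection, blend crossover, and Gaussian mutation are illustrative choices, not the paper's formulation):

        import numpy as np

        rng = np.random.default_rng(0)

        def fitness(x):                       # maximize: peak at x = (0.5, 0.5)
            return -np.sum((x - 0.5) ** 2, axis=1)

        pop = rng.random((40, 2))             # 40 designs, 2 continuous variables
        for gen in range(100):
            f = fitness(pop)
            probs = np.exp(f - f.max()); probs /= probs.sum()      # softmax selection weights
            parents = pop[rng.choice(len(pop), size=(len(pop), 2), p=probs)]
            alpha = rng.random((len(pop), 1))                      # blend crossover
            pop = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]
            pop += 0.02 * rng.standard_normal(pop.shape)           # Gaussian mutation
            pop = np.clip(pop, 0.0, 1.0)

        print("best design:", pop[np.argmax(fitness(pop))])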

  12. Optimal control of lift/drag ratios on a rotating cylinder

    NASA Technical Reports Server (NTRS)

    Ou, Yuh-Roung; Burns, John A.

    1992-01-01

    We present the numerical solution to a problem of maximizing the lift to drag ratio by rotating a circular cylinder in a two-dimensional viscous incompressible flow. This problem is viewed as a test case for the newly developing theoretical and computational methods for control of fluid dynamic systems. We show that the time averaged lift to drag ratio for a fixed finite-time interval achieves its maximum value at an optimal rotation rate that depends on the time interval.

  13. Computational structures for robotic computations

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chang, P. R.

    1987-01-01

    The computational problem of inverse kinematics and inverse dynamics of robot manipulators by taking advantage of parallelism and pipelining architectures is discussed. For the computation of inverse kinematic position solution, a maximum pipelined CORDIC architecture has been designed based on a functional decomposition of the closed-form joint equations. For the inverse dynamics computation, an efficient p-fold parallel algorithm to overcome the recurrence problem of the Newton-Euler equations of motion to achieve the time lower bound of O(log_2 n) has also been developed.
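
    CORDIC reduces each rotation stage to shifts, adds, and one table lookup, which is what makes a deeply pipelined architecture natural; a minimal rotation-mode sketch in software (floating point here, whereas a hardware pipeline would use fixed-point shifts):

        import math

        # Rotation-mode CORDIC: computes (cos a, sin a) from shift-and-add iterations.
        def cordic(angle, iterations=32):
            atans = [math.atan(2.0 ** -i) for i in range(iterations)]
            k = 1.0
            for i in range(iterations):                 # aggregate gain 1/K ~ 0.6073
                k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
            x, y, z = k, 0.0, angle                     # pre-scale by 1/K
            for i in range(iterations):
                d = 1.0 if z >= 0 else -1.0             # rotate toward residual angle
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * atans[i]
            return x, y                                 # (cos angle, sin angle)

        print(cordic(math.pi / 6))                      # ~ (0.8660, 0.5000)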

  14. Derivation of Hamilton's equations of motion for mechanical systems with constraints on the basis of Pontriagin's maximum principle

    NASA Astrophysics Data System (ADS)

    Kovalev, A. M.

    The problem of the motion of a mechanical system with constraints conforming to Hamilton's principle is stated as an optimum control problem, with equations of motion obtained on the basis of Pontriagin's principle. A Hamiltonian function in Rodrigues-Hamilton parameters for a gyrostat in a potential force field is obtained as an example. Equations describing the motion of a skate on a sloping surface and the motion of a disk on a horizontal plane are examined.

  15. Participant-Predicted, Observed, and Calculated Peak Blood Alcohol Levels: A Gender-Specific Analysis

    PubMed Central

    Van Tassel, W.E.; Manser, M.P.

    2000-01-01

    In recent years there has been a push by federal and state governments to lower the maximum blood alcohol level at which drivers are considered intoxicated. Many states have lowered the maximum blood alcohol level to .08%. This paper offers insight into drinkers’ ability to predict their level of impairment prior to consuming a given amount of alcohol. It addresses the problem of drinkers not knowing how many drinks they can consume before becoming legally impaired. Results indicate males and females differ in their ability to predict impairment levels. PMID:11558094
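
    For the "calculated" peak levels in such comparisons, the textbook relation is the Widmark formula; a sketch with commonly quoted constants (the study's actual calculation method and constants are not given in this record):

        # Widmark: peak BAC (%) = alcohol dose / (r * body weight), minus elimination.
        def widmark_bac(drinks, grams_per_drink=14.0, weight_kg=75.0,
                        r=0.68, beta_per_hr=0.015, hours=0.0):
            """r: Widmark factor (~0.68 men, ~0.55 women); beta: elimination rate (%/h)."""
            bac = drinks * grams_per_drink / (r * weight_kg * 1000.0) * 100.0
            return max(0.0, bac - beta_per_hr * hours)

        # Roughly how many standard drinks reach the .08% limit for a 75-kg male?
        n = 1
        while widmark_bac(n) < 0.08:
            n += 1
        print(n, "drinks ->", round(widmark_bac(n), 3), "% BAC")   # -> 3 drinks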

  16. MEM application to IRAS CPC images

    NASA Technical Reports Server (NTRS)

    Marston, A. P.

    1994-01-01

    A method for applying the Maximum Entropy Method (MEM) to Chopped Photometric Channel (CPC) IRAS additional observations is illustrated. The original CPC data suffered from problems with repeatability, which MEM is able to cope with by using a noise image produced from the results of separate data scans of objects. The process produces images of small areas of sky with circular Gaussian beams of approximately 30 arcsec full width at half maximum resolution at 50 and 100 microns. Comparison is made to previous reconstructions made in the far-infrared, as well as to morphologies of objects at other wavelengths. Some projects with this dataset are discussed.

  17. On the Achievable Throughput Over TVWS Sensor Networks

    PubMed Central

    Caleffi, Marcello; Cacciapuoti, Angela Sara

    2016-01-01

    In this letter, we study the throughput achievable by an unlicensed sensor network operating over TV white space spectrum in the presence of coexistence interference. We first analytically derive the achievable throughput as a function of the channel ordering. Then, we show that deriving the maximum expected throughput through exhaustive search is computationally infeasible. Finally, we derive a computationally efficient algorithm with polynomial-time complexity to compute the channel set maximizing the expected throughput and, stemming from this, we derive a closed-form expression of the maximum expected throughput. Numerical simulations validate the theoretical analysis. PMID:27043565

  18. Analysis of interface crack branching

    NASA Technical Reports Server (NTRS)

    Ballarini, R.; Mukai, D. J.; Miller, G. R.

    1989-01-01

    A solution is presented for the problem of a finite length crack branching off the interface between two bonded dissimilar isotropic materials. Results are presented in terms of the ratio of the energy release rate of a branched interface crack to the energy release rate of a straight interface crack with the same total length. It is found that this ratio reaches a maximum when the interface crack branches into the softer material. Longer branches tend to have smaller maximum energy release rate ratio angles, indicating that, all else being equal, a branch crack will tend to turn back parallel to the interface as it grows.

  19. Twenty-five years of maximum-entropy principle

    NASA Astrophysics Data System (ADS)

    Kapur, J. N.

    1983-04-01

    The strengths and weaknesses of the maximum entropy principle (MEP) are examined and some challenging problems that remain outstanding at the end of the first quarter century of the principle are discussed. The original formalism of the MEP is presented and its relationship to statistical mechanics is set forth. The use of MEP for characterizing statistical distributions, in statistical inference, nonlinear spectral analysis, transportation models, population density models, models for brand-switching in marketing and vote-switching in elections is discussed. Its application to finance, insurance, image reconstruction, pattern recognition, operations research and engineering, biology and medicine, and nonparametric density estimation is considered.
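
    The original formalism reduces to one construction worth sketching: among distributions with a prescribed expectation, entropy is maximized by an exponential family with a single Lagrange multiplier, here solved numerically for Jaynes' die example:

        import numpy as np
        from scipy.optimize import brentq

        # MEP with a mean constraint: p_i proportional to exp(-lam * x_i), with one
        # scalar equation fixing the multiplier lam.
        x = np.arange(1, 7)          # outcomes of a die
        target_mean = 4.5            # constraint: E[x] = 4.5 (the classic example)

        def mean_minus_target(lam):
            w = np.exp(-lam * x)
            return (x * w).sum() / w.sum() - target_mean

        lam = brentq(mean_minus_target, -5.0, 5.0)
        p = np.exp(-lam * x); p /= p.sum()
        print("maximum-entropy probabilities:", np.round(p, 4))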

  20. A classical Perron method for existence of smooth solutions to boundary value and obstacle problems for degenerate-elliptic operators via holomorphic maps

    NASA Astrophysics Data System (ADS)

    Feehan, Paul M. N.

    2017-09-01

    We prove existence of solutions to boundary value problems and obstacle problems for degenerate-elliptic, linear, second-order partial differential operators with partial Dirichlet boundary conditions using a new version of the Perron method. The elliptic operators considered have a degeneracy along a portion of the domain boundary which is similar to the degeneracy of a model linear operator identified by Daskalopoulos and Hamilton [9] in their study of the porous medium equation or the degeneracy of the Heston operator [21] in mathematical finance. Existence of a solution to the partial Dirichlet problem on a half-ball, where the operator becomes degenerate on the flat boundary and a Dirichlet condition is only imposed on the spherical boundary, provides the key additional ingredient required for our Perron method. Surprisingly, proving existence of a solution to this partial Dirichlet problem with "mixed" boundary conditions on a half-ball is more challenging than one might expect. Due to the difficulty in developing a global Schauder estimate and due to compatibility conditions arising where the "degenerate" and "non-degenerate" boundaries touch, one cannot directly apply the continuity or approximate solution methods. However, in dimension two, there is a holomorphic map from the half-disk onto the infinite strip in the complex plane and one can extend this definition to higher dimensions to give a diffeomorphism from the half-ball onto the infinite "slab". The solution to the partial Dirichlet problem on the half-ball can thus be converted to a partial Dirichlet problem on the slab, albeit for an operator which now has exponentially growing coefficients. The required Schauder regularity theory and existence of a solution to the partial Dirichlet problem on the slab can nevertheless be obtained using previous work of the author and C. Pop [16]. Our Perron method relies on weak and strong maximum principles for degenerate-elliptic operators, concepts of continuous subsolutions and supersolutions for boundary value and obstacle problems for degenerate-elliptic operators, and maximum and comparison principle estimates previously developed by the author [13].

  1. LANDSCAPE-LEVEL INDICATORS IN SMALL GEORGIA WATERSHEDS

    EPA Science Inventory

    Landscape level indicators in small watersheds can be used as a screening tool to guide in-situ monitoring to confirm stream condition problems, aid listing of impaired waters under Section 303(d) of the Clean Water Act and total maximum daily load (TMDL) development, and provide...

  2. A Limitation of the Applicability of Interval Shift Analysis to Program Evaluation

    ERIC Educational Resources Information Center

    Hardy, Roy

    1975-01-01

    Interval Shift Analysis (ISA) is an adaptation of the linear programming model used to determine maximum benefits or minimal losses in quantifiable economics problems. ISA is applied to pre- and posttest score distributions for 43 classes of second graders. (RC)
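
    The linear programming machinery underlying ISA can be sketched generically with scipy (the objective and constraints below are illustrative placeholders, not the ISA formulation itself):

        from scipy.optimize import linprog

        # Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
        # linprog minimizes, so the objective is negated.
        res = linprog(c=[-3, -2],
                      A_ub=[[1, 1], [1, 3]],
                      b_ub=[4, 6],
                      bounds=[(0, None), (0, None)],
                      method="highs")
        print("optimum:", -res.fun, "at", res.x)   # -> 12.0 at [4, 0]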

  3. WATERSHED CENTRAL: AN INTEGRATED WATERSHED ASSESSMENT AND MANAGEMENT WEBSITE

    EPA Science Inventory

    The Clean Water Act (CWA) requires that States develop pollution reduction targets for impaired or threatened waters often referred to as total maximum daily loads (TMDLs). These are waters that do not meet state water quality standards or will have impending problems meeting th...

  4. Information and Entropy

    NASA Astrophysics Data System (ADS)

    Caticha, Ariel

    2007-11-01

    What is information? Is it physical? We argue that in a Bayesian theory the notion of information must be defined in terms of its effects on the beliefs of rational agents. Information is whatever constrains rational beliefs and therefore it is the force that induces us to change our minds. This problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME), which is designed for updating from arbitrary priors given information in the form of arbitrary constraints, includes as special cases both MaxEnt (which allows arbitrary constraints) and Bayes' rule (which allows arbitrary priors). Thus, ME unifies the two themes of these workshops—the Maximum Entropy and the Bayesian methods—into a single general inference scheme that allows us to handle problems that lie beyond the reach of either of the two methods separately. I conclude with a couple of simple illustrative examples.
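
    The ME update has a closed form up to one Lagrange multiplier: the posterior is the prior tilted exponentially by the constraint function. A sketch with a non-uniform prior, which plain MaxEnt could not accommodate (the prior and constraint values are illustrative):

        import numpy as np
        from scipy.optimize import brentq

        # Minimize relative entropy to the prior q subject to a new expectation
        # constraint: the update is p_i proportional to q_i * exp(-lam * x_i).
        x = np.arange(1, 7)
        q = np.array([0.3, 0.2, 0.2, 0.1, 0.1, 0.1])   # non-uniform prior
        target = 4.0                                   # new information: E[x] = 4.0

        def constraint_gap(lam):
            w = q * np.exp(-lam * x)
            return (x * w).sum() / w.sum() - target

        lam = brentq(constraint_gap, -5.0, 5.0)
        p = q * np.exp(-lam * x); p /= p.sum()
        print("updated distribution:", np.round(p, 4))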

  5. Gender recognition from vocal source

    NASA Astrophysics Data System (ADS)

    Sorokin, V. N.; Makarov, I. S.

    2008-07-01

    The efficiency of automatic recognition of male and female voices based on solving the inverse problem for glottis area dynamics and for the waveform of the glottal airflow volume velocity pulse is studied. The inverse problem is regularized through the use of analytical models of the voice excitation pulse and of the dynamics of the glottis area, as well as the model of one-dimensional glottal airflow. Parameters of these models and spectral parameters of the volume velocity pulse are considered. The following parameters are found to be most promising: the instant of maximum glottis area, the maximum derivative of the area, the slope of the spectrum of the glottal airflow volume velocity pulse, the amplitude ratios of harmonics of this spectrum, and the pitch. On the plane of the first two principal components in the space of these parameters, an almost twofold decrease in the classification error relative to that for the pitch alone is attained. The male voice recognition probability is found to be 94.7%, and the female voice recognition probability is 95.9%.

  6. SOME APPLICATIONS OF SEISMIC SOURCE MECHANISM STUDIES TO ASSESSING UNDERGROUND HAZARD.

    USGS Publications Warehouse

    McGarr, A.; ,

    1984-01-01

    Various measures of the seismic source mechanism of mine tremors, such as magnitude, moment, stress drop, apparent stress, and seismic efficiency, can be related directly to several aspects of the problem of determining the underground hazard arising from strong ground motion of large seismic events. First, the relation between the sum of seismic moments of tremors and the volume of stope closure caused by mining during a given period can be used in conjunction with magnitude-frequency statistics and an empirical relation between moment and magnitude to estimate the maximum possible size of tremor for a given mining situation. Second, it is shown that the 'energy release rate,' a commonly used parameter for predicting underground seismic hazard, may be misleading in that the importance of overburden stress, or depth, is overstated. Third, results involving the relation between peak velocity and magnitude, magnitude-frequency statistics, and the maximum possible magnitude are applied to the problem of estimating the frequency at which design limits of certain underground support equipment are likely to be exceeded.

  7. Algorithms and Complexity Results for Genome Mapping Problems.

    PubMed

    Rajaraman, Ashok; Zanetti, Joao Paulo Pereira; Manuch, Jan; Chauve, Cedric

    2017-01-01

    Genome mapping algorithms aim at computing an ordering of a set of genomic markers based on local ordering information such as adjacencies and intervals of markers. In most genome mapping models, markers are assumed to occur uniquely in the resulting map. We introduce algorithmic questions that consider repeats, i.e., markers that can have several occurrences in the resulting map. We show that, provided with an upper bound on the copy number of repeated markers and with intervals that span full repeat copies, called repeat spanning intervals, the problem of deciding if a set of adjacencies and repeat spanning intervals admits a genome representation is tractable if the target genome can contain linear and/or circular chromosomal fragments. We also show that extracting a maximum cardinality or weight subset of repeat spanning intervals given a set of adjacencies that admits a genome realization is NP-hard but fixed-parameter tractable in the maximum copy number and the number of adjacent repeats, and tractable if intervals contain a single repeated marker.

  8. Sequence-Based Prediction of RNA-Binding Proteins Using Random Forest with Minimum Redundancy Maximum Relevance Feature Selection.

    PubMed

    Ma, Xin; Guo, Jing; Sun, Xiao

    2015-01-01

    The prediction of RNA-binding proteins is one of the most challenging problems in computational biology. Although some studies have investigated this problem, the accuracy of prediction is still not sufficient. In this study, a highly accurate method was developed to predict RNA-binding proteins from amino acid sequences using random forests with the minimum redundancy maximum relevance (mRMR) method, followed by incremental feature selection (IFS). We incorporated conjoint triad features and three novel features: binding propensity (BP), nonbinding propensity (NBP), and evolutionary information combined with physicochemical properties (EIPP). The results showed that these novel features play important roles in improving the performance of the predictor. Using the mRMR-IFS method, our predictor achieved the best performance (86.62% accuracy and 0.737 Matthews correlation coefficient). The high prediction accuracy suggests that our method can be a useful approach to identify RNA-binding proteins from sequence information.
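
    A compact sketch of the mRMR-plus-IFS pipeline on synthetic data, with greedy mRMR scored as relevance minus mean redundancy (the data, feature counts, and estimator settings are illustrative, not the paper's setup):

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import mutual_info_classif, mutual_info_regression
        from sklearn.model_selection import cross_val_score

        X, y = make_classification(n_samples=400, n_features=30, n_informative=6,
                                   random_state=0)
        relevance = mutual_info_classif(X, y, random_state=0)

        # Greedy mRMR: add the feature maximizing relevance minus mean redundancy
        # with the already-selected features.
        selected = [int(np.argmax(relevance))]
        while len(selected) < 10:
            best, best_score = None, -np.inf
            for j in range(X.shape[1]):
                if j in selected:
                    continue
                redundancy = np.mean([mutual_info_regression(X[:, [j]], X[:, s],
                                                             random_state=0)[0]
                                      for s in selected])
                if relevance[j] - redundancy > best_score:
                    best, best_score = j, relevance[j] - redundancy
            selected.append(best)

        # Incremental feature selection: evaluate each prefix with a random forest.
        for k in range(1, len(selected) + 1):
            acc = cross_val_score(RandomForestClassifier(random_state=0),
                                  X[:, selected[:k]], y, cv=5).mean()
            print(f"top {k:2d} features: CV accuracy {acc:.3f}")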

  9. Classifier utility modeling and analysis of hypersonic inlet start/unstart considering training data costs

    NASA Astrophysics Data System (ADS)

    Chang, Juntao; Hu, Qinghua; Yu, Daren; Bao, Wen

    2011-11-01

    Start/unstart detection is one of the most important issues for hypersonic inlets and is also the foundation of protection control for scramjets. Inlet start/unstart detection can be treated as a standard pattern classification problem, and training sample costs have to be considered in classifier modeling, since CFD numerical simulations and wind tunnel experiments of hypersonic inlets both cost time and money. To address this, the CFD simulation of the inlet is studied as a first step, and the simulation results provide the training data for pattern classification of hypersonic inlet start/unstart. Then classifier modeling technology and maximum classifier utility theory are introduced to analyze the effect of training data cost on classifier utility. In conclusion, it is useful to introduce support vector machine algorithms to acquire the classifier model of hypersonic inlet start/unstart, and the minimum total cost of the hypersonic inlet start/unstart classifier can be obtained from maximum classifier utility theory.

  10. Induced subgraph searching for geometric model fitting

    NASA Astrophysics Data System (ADS)

    Xiao, Fan; Xiao, Guobao; Yan, Yan; Wang, Xing; Wang, Hanzi

    2017-11-01

    In this paper, we propose a novel model fitting method based on graphs to fit and segment multiple-structure data. In the graph constructed on the data, each model instance is represented as an induced subgraph. Following the idea of pursuing the maximum consensus, the multiple geometric model fitting problem is formulated as searching for a set of induced subgraphs including the maximum union set of vertices. After the generation and refinement of the induced subgraphs that represent the model hypotheses, the searching process is conducted on the "qualified" subgraphs. Multiple model instances can be simultaneously estimated by solving a converted problem. Then, we introduce an energy evaluation function to determine the number of model instances in the data. The proposed method is able to effectively estimate the number and the parameters of model instances in data severely corrupted by outliers and noise. Experimental results on synthetic data and real images validate the favorable performance of the proposed method compared with several state-of-the-art fitting methods.

  11. Optimal placement of FACTS devices using optimization techniques: A review

    NASA Astrophysics Data System (ADS)

    Gaur, Dipesh; Mathew, Lini

    2018-03-01

    Modern power systems face overloading problems, especially in transmission networks that operate at their maximum limits. Such networks tend to become unstable and prone to collapse under disturbances. Flexible AC Transmission System (FACTS) devices provide solutions to problems like line overloading, voltage stability, losses, and power flow, and can play an important role in improving the static and dynamic performance of a power system. FACTS devices require a high initial investment; therefore, their location, type, and rating are vital and should be optimized for maximum benefit. In this paper, different optimization methods like Particle Swarm Optimization (PSO), Genetic Algorithm (GA) etc. are discussed and compared for determining the optimal location, type, and rating of devices. FACTS devices such as the Thyristor Controlled Series Compensator (TCSC), Static Var Compensator (SVC) and Static Synchronous Compensator (STATCOM) are considered here. The effects of these FACTS controllers on different IEEE bus network parameters like generation cost, active power loss, and voltage stability are analyzed and compared among the devices.

  12. The choice of the energy embedding law in the design of heavy ionic fusion cylindrical targets

    NASA Astrophysics Data System (ADS)

    Dolgoleva, GV; Zykova, A. I.

    2017-10-01

    The paper considers the numerical design of heavy ion fusion (FIHIF) targets, one of the branches of controlled thermonuclear fusion (CTF). One of the important tasks in target design for controlled thermonuclear fusion is the selection of the energy embedding whereby it is possible to obtain “burning” (the presence of thermonuclear reactions) of the working DT region. The work is devoted to the rapid ignition of FIHIF targets by means of an additional short-term energy contribution to DT fuel that has already been compressed by a much longer main energy embedding. This problem has been fairly well studied for laser targets, but it is new for heavy ion fusion targets. Increasing the maximum pulse is technically very difficult and expensive on modern FIHIF installations. The work shows that the additional energy embedding (an “igniting” pulse) reduces the requirements on the maximum pulse. The purpose of this work is to investigate the effect of the ignition pulse on the FIHIF target parameters.

  13. Maximum likelihood techniques applied to quasi-elastic light scattering

    NASA Technical Reports Server (NTRS)

    Edwards, Robert V.

    1992-01-01

    An automatic procedure is needed for reliably estimating the quality of particle size measurements from QELS (Quasi-Elastic Light Scattering). Obtaining the measurement itself, before any error estimates can be made, is a problem because it comes from a very indirect measurement of a signal derived from the motion of particles in the system and requires the solution of an inverse problem. The eigenvalue structure of the transform that generates the signal is such that an arbitrarily small amount of noise can obliterate parts of any practical inversion spectrum. This project uses maximum likelihood estimation (MLE) as a framework to generate a theory and a functioning set of software to oversee the measurement process and extract the particle size information, while at the same time providing error estimates for those measurements. The theory involved verifying a correct form of the covariance matrix for the noise on the measurement and then estimating particle size parameters using a modified histogram approach.

  14. Estimating a Logistic Discrimination Function When One of the Training Samples Is Subject to Misclassification: A Maximum Likelihood Approach.

    PubMed

    Nagelkerke, Nico; Fidler, Vaclav

    2015-01-01

    The problem of discrimination and classification is central to much of epidemiology. Here we consider the estimation of a logistic regression/discrimination function from training samples, when one of the training samples is subject to misclassification or mislabeling, e.g., diseased individuals are incorrectly classified/labeled as healthy controls. We show that this leads to a zero-inflated binomial model with a defective logistic regression or discrimination function, whose parameters can be estimated using standard statistical methods such as maximum likelihood. These parameters can be used to estimate the probability of true group membership among those, possibly erroneously, classified as controls. Two examples are analyzed and discussed. A simulation study explores the properties of the maximum likelihood parameter estimates and the estimates of the number of mislabeled observations.
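
    The idea reduces to a mixture likelihood in which a mislabeling rate is fitted jointly with the regression coefficients; a generic sketch (a simplified mislabeled-logistic likelihood, not necessarily the paper's exact zero-inflated formulation):

        import numpy as np
        from scipy.optimize import minimize

        # True cases appear among the "controls" with probability eps, so
        # P(labeled case | x) = (1 - eps) * p(x); everything is fit by ML.
        rng = np.random.default_rng(0)
        n = 2000
        x = rng.standard_normal(n)
        p_true = 1 / (1 + np.exp(-(0.5 + 1.5 * x)))          # true disease probability
        disease = rng.random(n) < p_true
        label = disease & (rng.random(n) > 0.3)              # 30% of cases mislabeled

        def neg_log_lik(theta):
            b0, b1, logit_eps = theta
            eps = 1 / (1 + np.exp(-logit_eps))               # mislabeling probability
            p = 1 / (1 + np.exp(-(b0 + b1 * x)))
            p_label1 = (1 - eps) * p                         # labeled case: true, not mislabeled
            lik = np.where(label, p_label1, 1 - p_label1)
            return -np.log(lik).sum()

        fit = minimize(neg_log_lik, x0=[0.0, 1.0, 0.0], method="Nelder-Mead")
        print("b0, b1:", fit.x[:2], "eps:", 1 / (1 + np.exp(-fit.x[2])))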

  15. An application of the Krylov-FSP-SSA method to parameter fitting with maximum likelihood

    NASA Astrophysics Data System (ADS)

    Dinh, Khanh N.; Sidje, Roger B.

    2017-12-01

    Monte Carlo methods such as the stochastic simulation algorithm (SSA) have traditionally been employed in gene regulation problems. However, there has been increasing interest to directly obtain the probability distribution of the molecules involved by solving the chemical master equation (CME). This requires addressing the curse of dimensionality that is inherent in most gene regulation problems. The finite state projection (FSP) seeks to address the challenge and there have been variants that further reduce the size of the projection or that accelerate the resulting matrix exponential. The Krylov-FSP-SSA variant has proved numerically efficient by combining, on one hand, the SSA to adaptively drive the FSP, and on the other hand, adaptive Krylov techniques to evaluate the matrix exponential. Here we apply this Krylov-FSP-SSA to a mutual inhibitory gene network synthetically engineered in Saccharomyces cerevisiae, in which bimodality arises. We show numerically that the approach can efficiently approximate the transient probability distribution, and this has important implications for parameter fitting, where the CME has to be solved for many different parameter sets. The fitting scheme amounts to an optimization problem of finding the parameter set so that the transient probability distributions fit the observations with maximum likelihood. We compare five optimization schemes for this difficult problem, thereby providing further insights into this approach of parameter estimation that is often applied to models in systems biology where there is a need to calibrate free parameters. Work supported by NSF grant DMS-1320849.

  16. Compressed Secret Key Agreement: Maximizing Multivariate Mutual Information per Bit

    NASA Astrophysics Data System (ADS)

    Chan, Chung

    2017-10-01

    The multiterminal secret key agreement problem by public discussion is formulated with an additional source compression step where, prior to the public discussion phase, users independently compress their private sources to filter out strongly correlated components for generating a common secret key. The objective is to maximize the achievable key rate as a function of the joint entropy of the compressed sources. Since the maximum achievable key rate captures the total amount of information mutual to the compressed sources, an optimal compression scheme essentially maximizes the multivariate mutual information per bit of randomness of the private sources, and can therefore be viewed more generally as a dimension reduction technique. Single-letter lower and upper bounds on the maximum achievable key rate are derived for the general source model, and an explicit polynomial-time computable formula is obtained for the pairwise independent network model. In particular, the converse results and the upper bounds are obtained from those of the related secret key agreement problem with rate-limited discussion. A precise duality is shown for the two-user case with one-way discussion, and such duality is extended to obtain the desired converse results in the multi-user case. In addition to posing new challenges in information processing and dimension reduction, the compressed secret key agreement problem helps shed new light on resolving the difficult problem of secret key agreement with rate-limited discussion, by offering a more structured achieving scheme and some simpler conjectures to prove.

  17. Optimum data analysis procedures for Titan 4 and Space Shuttle payload acoustic measurements during lift-off

    NASA Technical Reports Server (NTRS)

    Piersol, Allan G.

    1991-01-01

    Analytical expressions have been derived to describe the mean square error in the estimation of the maximum rms value computed from a step-wise (or running) time average of a nonstationary random signal. These analytical expressions have been applied to the problem of selecting the optimum averaging times that will minimize the total mean square errors in estimates of the maximum sound pressure levels measured inside the Titan IV payload fairing (PLF) and the Space Shuttle payload bay (PLB) during lift-off. Based on evaluations of typical Titan IV and Space Shuttle launch data, it has been determined that the optimum averaging times for computing the maximum levels are (1) T_o = 1.14 sec for the maximum overall level and T_oi = 4.88 f_i^(-0.2) sec for the maximum 1/3 octave band levels inside the Titan IV PLF, and (2) T_o = 1.65 sec for the maximum overall level and T_oi = 7.10 f_i^(-0.2) sec for the maximum 1/3 octave band levels inside the Space Shuttle PLB, where f_i is the 1/3 octave band center frequency. However, the results for both vehicles indicate that the total rms error in the maximum level estimates will be within 25 percent of the minimum error for all averaging times within plus or minus 50 percent of the optimum averaging time, so a precise selection of the exact optimum averaging time is not critical. Based on these results, linear averaging times (T) are recommended for computing the maximum sound pressure level during lift-off.
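
    Evaluating the quoted frequency-dependent optima is a one-line computation; a small sketch (the band center frequencies are chosen arbitrarily):

        # Optimum 1/3-octave-band averaging times from the abstract: T = coeff * f^(-0.2) s,
        # with coeff = 4.88 (Titan IV PLF) and 7.10 (Shuttle PLB).
        def t_opt(f_center_hz, coeff):
            return coeff * f_center_hz ** -0.2

        for f in (31.5, 125.0, 1000.0):
            print(f"{f:7.1f} Hz: Titan {t_opt(f, 4.88):.2f} s, Shuttle {t_opt(f, 7.10):.2f} s")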

  18. A Message Passing Approach to Side Chain Positioning with Applications in Protein Docking Refinement *

    PubMed Central

    Moghadasi, Mohammad; Kozakov, Dima; Mamonov, Artem B.; Vakili, Pirooz; Vajda, Sandor; Paschalidis, Ioannis Ch.

    2013-01-01

    We introduce a message-passing algorithm to solve the Side Chain Positioning (SCP) problem. SCP is a crucial component of protein docking refinement, which is a key step of an important class of problems in computational structural biology called protein docking. We model SCP as a combinatorial optimization problem and formulate it as a Maximum Weighted Independent Set (MWIS) problem. We then employ a modified and convergent belief-propagation algorithm to solve a relaxation of MWIS and develop randomized estimation heuristics that use the relaxed solution to obtain an effective MWIS feasible solution. Using a benchmark set of protein complexes we demonstrate that our approach leads to more accurate docking predictions compared to a baseline algorithm that does not solve the SCP. PMID:23515575
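
    For contrast with the belief-propagation relaxation described above, the simplest feasible-solution baseline for MWIS is the classic weight-over-degree greedy heuristic; a minimal sketch on a toy graph:

        # Greedy MWIS: repeatedly take the vertex maximizing weight/(degree + 1)
        # among surviving vertices, then delete it and its neighborhood.
        def greedy_mwis(weights, adj):
            """weights: dict v -> w; adj: dict v -> set of neighbors."""
            alive = set(weights)
            chosen = []
            while alive:
                v = max(alive, key=lambda u: weights[u] / (len(adj[u] & alive) + 1))
                chosen.append(v)
                alive -= adj[v] | {v}
            return chosen

        adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
        weights = {1: 2.0, 2: 3.0, 3: 1.0, 4: 2.5}
        print(greedy_mwis(weights, adj))        # -> [4, 2] (total weight 5.5)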

  19. Efficient bounding schemes for the two-center hybrid flow shop scheduling problem with removal times.

    PubMed

    Hidri, Lotfi; Gharbi, Anis; Louly, Mohamed Aly

    2014-01-01

    We focus on the two-center hybrid flow shop scheduling problem with identical parallel machines and removal times. The job removal time is the required duration to remove it from a machine after its processing. The objective is to minimize the maximum completion time (makespan). A heuristic and a lower bound are proposed for this NP-Hard problem. These procedures are based on the optimal solution of the parallel machine scheduling problem with release dates and delivery times. The heuristic is composed of two phases. The first one is a constructive phase in which an initial feasible solution is provided, while the second phase is an improvement one. Intensive computational experiments have been conducted to confirm the good performance of the proposed procedures.
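
    A constructive list-scheduling baseline for the two-stage setting is easy to sketch (assuming a job moves to the second stage only after its removal completes, and that the makespan includes the final removal; an illustrative baseline, not the paper's two-phase heuristic or lower bound):

        import heapq

        def makespan(jobs, m1=2, m2=2):
            """jobs: list of (p1, r1, p2, r2) processing/removal times per stage."""
            stage1 = [0.0] * m1                     # machine-free times, stage 1
            stage2 = [0.0] * m2
            heapq.heapify(stage1)
            heapq.heapify(stage2)
            cmax = 0.0
            # longest-total-work-first dispatch to the earliest-free machine
            for p1, r1, p2, r2 in sorted(jobs, key=lambda j: -(j[0] + j[2])):
                ready = heapq.heappop(stage1) + p1 + r1    # leaves stage 1 after removal
                heapq.heappush(stage1, ready)
                done = max(heapq.heappop(stage2), ready) + p2 + r2
                heapq.heappush(stage2, done)
                cmax = max(cmax, done)
            return cmax

        print(makespan([(3, 1, 2, 1), (2, 1, 4, 2), (4, 2, 1, 1)]))   # -> 11.0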
