HeNCE: A Heterogeneous Network Computing Environment
Beguelin, Adam; Dongarra, Jack J.; Geist, George Al; ...
1994-01-01
Network computing seeks to utilize the aggregate resources of many networked computers to solve a single problem. In so doing it is often possible to obtain supercomputer performance from an inexpensive local area network. The drawback is that network computing is complicated and error prone when done by hand, especially if the computers have different operating systems and data formats and are thus heterogeneous. The heterogeneous network computing environment (HeNCE) is an integrated graphical environment for creating and running parallel programs over a heterogeneous collection of computers. It is built on a lower level package called parallel virtual machine (PVM). The HeNCE philosophy of parallel programming is to have the programmer graphically specify the parallelism of a computation and to automate, as much as possible, the tasks of writing, compiling, executing, debugging, and tracing the network computation. Key to HeNCE is a graphical language based on directed graphs that describe the parallelism and data dependencies of an application. Nodes in the graphs represent conventional Fortran or C subroutines and the arcs represent data and control flow. This article describes the present state of HeNCE, its capabilities, limitations, and areas of future research.
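The core HeNCE idea, a directed graph whose nodes are subroutines and whose arcs are dependencies, can be illustrated with a minimal sketch. This is not HeNCE's actual implementation; the graph, node names, and task callables below are hypothetical stand-ins for graphically specified subroutines.

```python
# Minimal sketch (not the HeNCE code): run a task graph in an order that
# respects its data/control dependencies, via Kahn's topological sort.

def topological_order(graph):
    """graph maps node -> list of successor nodes (arcs of the DAG)."""
    indegree = {n: 0 for n in graph}
    for succs in graph.values():
        for s in succs:
            indegree[s] += 1
    ready = [n for n, d in indegree.items() if d == 0]
    order = []
    while ready:
        n = ready.pop()
        order.append(n)
        for s in graph[n]:
            indegree[s] -= 1
            if indegree[s] == 0:
                ready.append(s)
    if len(order) != len(graph):
        raise ValueError("dependency graph contains a cycle")
    return order

def run(graph, tasks):
    """tasks maps node -> callable taking the dict of earlier results."""
    results = {}
    for node in topological_order(graph):
        results[node] = tasks[node](results)
    return results
```

A real system would launch the ready nodes concurrently on different hosts rather than sequentially; the ordering constraint is the same.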
Automation of multi-agent control for complex dynamic systems in heterogeneous computational network
NASA Astrophysics Data System (ADS)
Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan
2017-01-01
The rapid progress of high-performance computing entails new challenges related to solving large scientific problems for various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue. Control systems that operate in networks are especially affected by this issue. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automating problem solving. The advantages of the proposed approach are demonstrated on an example: the parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automated multi-agent control of the systems in parallel mode at various degrees of detail.
Message Efficient Checkpointing and Rollback Recovery in Heterogeneous Mobile Networks
NASA Astrophysics Data System (ADS)
Jaggi, Parmeet Kaur; Singh, Awadhesh Kumar
2016-06-01
Heterogeneous networks provide an appealing way of expanding the computing capability of mobile networks by combining infrastructure-less mobile ad-hoc networks with infrastructure-based cellular mobile networks. The nodes in such a network range from low-power nodes to macro base stations and thus vary greatly in their capabilities, such as computation power and battery power. The nodes are susceptible to different types of transient and permanent failures, and therefore the algorithms designed for such networks need to be fault-tolerant. The article presents a checkpointing algorithm for the rollback recovery of mobile hosts in a heterogeneous mobile network. Checkpointing is a well-established approach to providing fault tolerance in static and cellular mobile distributed systems. However, the use of checkpointing for fault tolerance in a heterogeneous environment remains to be explored. The proposed protocol is based on the results of zigzag paths and zigzag cycles by Netzer-Xu. Considering the heterogeneity prevalent in the network, an uncoordinated checkpointing technique is employed. Yet, useless checkpoints are avoided without causing a high message overhead.
NASA Astrophysics Data System (ADS)
Niño, Alfonso; Muñoz-Caro, Camelia; Reyes, Sebastián
2015-11-01
The last decade witnessed a great development of the structural and dynamic study of complex systems described as a network of elements. Therefore, systems can be described as a set of, possibly, heterogeneous entities or agents (the network nodes) interacting in, possibly, different ways (defining the network edges). In this context, it is of practical interest to model and handle not only static and homogeneous networks but also dynamic, heterogeneous ones. Depending on the size and type of the problem, these networks may require different computational approaches involving sequential, parallel or distributed systems, with or without the use of disk-based data structures. In this work, we develop an Application Programming Interface (APINetworks) for the modeling and treatment of general networks in arbitrary computational environments. To minimize dependency between components, we decouple the network structure from its function using different packages for grouping sets of related tasks. The structural package, the one in charge of building and handling the network structure, is the core element of the system. In this work, we focus on this structural component of the API. We apply an object-oriented approach that makes use of inheritance and polymorphism. In this way, we can model static and dynamic networks with heterogeneous elements in the nodes and heterogeneous interactions in the edges. In addition, this approach permits a unified treatment of different computational environments. Tests performed on a C++11 version of the structural package show that, on current standard computers, the system can handle, in main memory, directed and undirected linear networks formed by tens of millions of nodes and edges. Our results compare favorably to those of existing tools.
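The inheritance-and-polymorphism design described above can be sketched briefly. This is an illustrative toy, not the APINetworks C++11 code; all class names here are hypothetical.

```python
# Sketch of an object-oriented network structure in which heterogeneous
# node and edge types coexist through inheritance and polymorphism.

class Node:
    def __init__(self, node_id):
        self.node_id = node_id

class AgentNode(Node):              # one possible heterogeneous node type
    def __init__(self, node_id, state):
        super().__init__(node_id)
        self.state = state

class Edge:
    def __init__(self, src, dst):
        self.src, self.dst = src, dst

class WeightedEdge(Edge):           # one possible heterogeneous edge type
    def __init__(self, src, dst, weight):
        super().__init__(src, dst)
        self.weight = weight

class Network:
    """Handles any mix of Node/Edge subclasses through the base interface."""
    def __init__(self, directed=False):
        self.directed = directed
        self.nodes = {}
        self.adjacency = {}         # node_id -> list of incident Edge objects

    def add_node(self, node):
        self.nodes[node.node_id] = node
        self.adjacency.setdefault(node.node_id, [])

    def add_edge(self, edge):
        self.adjacency[edge.src].append(edge)
        if not self.directed:
            self.adjacency[edge.dst].append(edge)

    def degree(self, node_id):
        return len(self.adjacency[node_id])
```

Because the container only depends on the `Node`/`Edge` base interfaces, adding a new heterogeneous node or edge type requires no change to `Network` itself, which is the decoupling the abstract describes.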
Network Coding on Heterogeneous Multi-Core Processors for Wireless Sensor Networks
Kim, Deokho; Park, Karam; Ro, Won W.
2011-01-01
While network coding is well known for its efficiency and usefulness in wireless sensor networks, the excessive costs associated with decoding computation and complexity still hinder its adoption into practical use. On the other hand, high-performance microprocessors with heterogeneous multi-cores would be used as processing nodes of the wireless sensor networks in the near future. To this end, this paper introduces an efficient network coding algorithm developed for heterogeneous multi-core processors. The proposed idea is fully tested on one of the currently available heterogeneous multi-core processors referred to as the Cell Broadband Engine. PMID:22164053
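The decoding cost that motivates this work comes from Gaussian elimination over a finite field. The paper targets the Cell Broadband Engine; the sketch below only illustrates the encode/decode arithmetic of random linear network coding, simplified to GF(2), where addition is XOR.

```python
import random

# Random linear network coding over GF(2) (illustrative, not the paper's
# optimized algorithm): coded packets are random XOR combinations, and
# decoding is Gaussian elimination on the coefficient vectors.

def encode(packets, n_coded, rng):
    """Return (coefficient_vectors, coded_packets) for k source packets."""
    k, length = len(packets), len(packets[0])
    coeffs, coded = [], []
    for _ in range(n_coded):
        c = [rng.randint(0, 1) for _ in range(k)]
        p = [0] * length
        for i, bit in enumerate(c):
            if bit:
                p = [a ^ b for a, b in zip(p, packets[i])]
        coeffs.append(c)
        coded.append(p)
    return coeffs, coded

def decode(coeffs, coded, k):
    """Gaussian elimination over GF(2); returns the k source packets,
    or None if the received coefficients do not have full rank."""
    rows = [c[:] + p[:] for c, p in zip(coeffs, coded)]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][col]), None)
        if pivot is None:
            return None             # not enough innovative packets
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[col])]
    return [rows[i][k:] for i in range(k)]
```

The elimination loop is the expensive part (cubic in the generation size), which is why offloading it to heterogeneous cores is attractive.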
Heterogeneity in Health Care Computing Environments
Sengupta, Soumitra
1989-01-01
This paper discusses issues of heterogeneity in computer systems, networks, databases, and presentation techniques, and the problems it creates in developing integrated medical information systems. The need for institutional, comprehensive goals is emphasized. Using the Columbia-Presbyterian Medical Center's computing environment as the case study, various steps to solve the heterogeneity problem are presented.
Heterogeneous concurrent computing with exportable services
NASA Technical Reports Server (NTRS)
Sunderam, Vaidy
1995-01-01
Heterogeneous concurrent computing, based on the traditional process-oriented model, is approaching its functionality and performance limits. An alternative paradigm, based on the concept of services, supporting data driven computation, and built on a lightweight process infrastructure, is proposed to enhance the functional capabilities and the operational efficiency of heterogeneous network-based concurrent computing. TPVM is an experimental prototype system supporting exportable services, thread-based computation, and remote memory operations that is built as an extension of and an enhancement to the PVM concurrent computing system. TPVM offers a significantly different computing paradigm for network-based computing, while maintaining a close resemblance to the conventional PVM model in the interest of compatibility and ease of transition. Preliminary experiences have demonstrated that the TPVM framework presents a natural yet powerful concurrent programming interface, while being capable of delivering performance improvements of up to thirty percent.
DNET: A communications facility for distributed heterogeneous computing
NASA Technical Reports Server (NTRS)
Tole, John; Nagappan, S.; Clayton, J.; Ruotolo, P.; Williamson, C.; Solow, H.
1989-01-01
This document describes DNET, a heterogeneous data communications networking facility. DNET allows programs operating on hosts on dissimilar networks to communicate with one another without concern for computer hardware, network protocol, or operating system differences. The overall DNET network is defined as the collection of host machines/networks on which the DNET software is operating. Each underlying network is considered a DNET 'domain'. Data communications service is provided between any two processes on any two hosts on any of the networks (domains) that may be reached via DNET. DNET provides protocol transparent, reliable, streaming data transmission between hosts (restricted initially to DECnet and TCP/IP networks). DNET also provides variable length datagram service with optional return receipts.
Bertalan, Tom; Wu, Yan; Laing, Carlo; Gear, C. William; Kevrekidis, Ioannis G.
2017-01-01
Finding accurate reduced descriptions for large, complex, dynamically evolving networks is a crucial enabler to their simulation, analysis, and ultimately design. Here, we propose and illustrate a systematic and powerful approach to obtaining good collective coarse-grained observables—variables successfully summarizing the detailed state of such networks. Finding such variables can naturally lead to successful reduced dynamic models for the networks. The main premise enabling our approach is the assumption that the behavior of a node in the network depends (after a short initial transient) on the node identity: a set of descriptors that quantify the node properties, whether intrinsic (e.g., parameters in the node evolution equations) or structural (imparted to the node by its connectivity in the particular network structure). The approach creates a natural link with modeling and “computational enabling technology” developed in the context of Uncertainty Quantification. In our case, however, we will not focus on ensembles of different realizations of a problem, each with parameters randomly selected from a distribution. We will instead study many coupled heterogeneous units, each characterized by randomly assigned (heterogeneous) parameter value(s). One could then coin the term Heterogeneity Quantification for this approach, which we illustrate through a model dynamic network consisting of coupled oscillators with one intrinsic heterogeneity (oscillator individual frequency) and one structural heterogeneity (oscillator degree in the undirected network). The computational implementation of the approach, its shortcomings and possible extensions are also discussed. PMID:28659781
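The model network the paper coarse-grains, coupled oscillators with one intrinsic heterogeneity (frequency) and one structural heterogeneity (degree), can be illustrated with a small Kuramoto-style simulation. This is a generic toy, not the paper's code; the parameter values are arbitrary choices for illustration.

```python
import math, random

# Toy heterogeneous oscillator network: each node has its own intrinsic
# frequency (intrinsic heterogeneity) and its own set of neighbors, and
# hence degree (structural heterogeneity).

def simulate_kuramoto(adjacency, omega, coupling=1.0, dt=0.01, steps=1000, seed=0):
    """Euler-integrate d theta_i/dt = omega_i + K * sum_j sin(theta_j - theta_i)."""
    rng = random.Random(seed)
    n = len(adjacency)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        dtheta = []
        for i in range(n):
            s = sum(math.sin(theta[j] - theta[i]) for j in adjacency[i])
            dtheta.append(omega[i] + coupling * s)
        theta = [(t + dt * d) % (2 * math.pi) for t, d in zip(theta, dtheta)]
    return theta

def order_parameter(theta):
    """|mean of exp(i*theta)|: 1 = full synchrony, near 0 = incoherence."""
    n = len(theta)
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)
```

A coarse-grained observable in the paper's sense would be a function of node descriptors (frequency, degree) summarizing the phases; the order parameter above is the simplest such collective variable.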
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2016-06-01
Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology, environment monitoring, and sensor networks to power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, which require increasingly computationally demanding methods for analysis and control design as the network size and the complexity of node dynamics/interactions grow. It is therefore a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with MATLAB toolboxes. The stabilisability of each node dynamic is a sufficient assumption to design a globally stabilising distributed control. The proposed approach improves some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in computational requirements in the case of weakly heterogeneous MASs, a common scenario in real applications where the network nodes and links are affected by parameter uncertainties.
One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computational workload of solving LMIs may be shared among processors located at the network nodes, thus improving the scalability of the approach with respect to network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
NASA Astrophysics Data System (ADS)
Liu, Zonghua; Lai, Ying-Cheng; Ye, Nong
2003-03-01
We consider the entire spectrum of architectures of general networks, ranging from being heterogeneous (scale-free) to homogeneous (random), and investigate the infection dynamics by using a three-state epidemiological model that does not involve the mechanism of self-recovery. This model is relevant to realistic situations such as the propagation of a flu virus or information over a social network. Our heuristic analysis and computations indicate that (1) regardless of the network architecture, there exists a substantial fraction of nodes that can never be infected and (2) heterogeneous networks are relatively more robust against spreads of infection as compared with homogeneous networks. We have also considered the problem of immunization for preventing wide spread of infection, with the result that targeted immunization is effective for heterogeneous networks.
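A minimal simulation of a three-state (susceptible / infected / refractory) spreading process without self-recovery conveys the flavor of the model studied above. The rates, graph, and update rule below are illustrative choices, not the paper's exact model.

```python
import random

# Toy three-state spreading process: S can become I via infected
# neighbors; I can become R (refractory, permanently inactive); there is
# no self-recovery back to S. Nodes still in state S at the end were
# never infected.

def spread(adjacency, p_infect, p_refractory, seed=0, steps=200):
    rng = random.Random(seed)
    n = len(adjacency)
    state = ["S"] * n
    state[0] = "I"                      # a single initially infected node
    for _ in range(steps):
        new = state[:]
        for i in range(n):
            if state[i] == "I":
                for j in adjacency[i]:
                    if state[j] == "S" and rng.random() < p_infect:
                        new[j] = "I"
                if rng.random() < p_refractory:
                    new[i] = "R"
        state = new
    return state

def never_infected_fraction(state):
    return state.count("S") / len(state)
```

Running this on graphs with heterogeneous versus homogeneous degree distributions is how one would probe, in miniature, the paper's observation that a substantial fraction of nodes can escape infection.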
NASA Astrophysics Data System (ADS)
Khan, Akhtar Nawaz
2017-11-01
Currently, analytical models are used to compute approximate blocking probabilities in opaque and all-optical WDM networks with homogeneous link capacities. Existing analytical models can also be extended to opaque WDM networking with heterogeneous link capacities, owing to wavelength conversion at each switch node. However, existing analytical models cannot be utilized for all-optical WDM networking with a heterogeneous structure of link capacities, due to the wavelength continuity constraint and unequal numbers of wavelength channels on different links. In this work, a mathematical model is extended for computing approximate network blocking probabilities in heterogeneous all-optical WDM networks, in which path blocking is dominated by the link along the path with the fewest wavelength channels. A wavelength assignment scheme is also proposed for dynamic traffic, termed last-fit-first wavelength assignment, in which the wavelength channel with the maximum index is assigned first to a lightpath request. Due to the heterogeneous structure of link capacities and the wavelength continuity constraint, the wavelength channels with maximum indexes are utilized for minimum-hop routes. Similarly, the wavelength channels with minimum indexes are utilized for multi-hop routes between source and destination pairs. The proposed scheme has lower blocking probability values compared to the existing heuristic for wavelength assignment. Finally, numerical results are computed in different network scenarios and are approximately equal to the values obtained from simulations.
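The last-fit-first rule itself is compact enough to state in a few lines. This sketch assumes a simple representation (a set of free wavelength indexes per link) that is my own illustrative choice, not the paper's formulation.

```python
# Last-fit-first wavelength assignment under the wavelength continuity
# constraint: along the candidate path, choose the HIGHEST-indexed
# wavelength free on every link, leaving low indexes available for
# longer multi-hop lightpaths.

def last_fit_first(path_links, link_free):
    """path_links: list of link ids along the route.
    link_free: link id -> set of free wavelength indexes on that link.
    Returns the chosen wavelength index, or None if the request blocks."""
    common = set.intersection(*(link_free[l] for l in path_links))
    return max(common) if common else None
```

A first-fit heuristic would return `min(common)` instead; the abstract's point is that on heterogeneous link capacities, taking the maximum index concentrates short routes on high indexes and lowers overall blocking.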
Methodologies and systems for heterogeneous concurrent computing
NASA Technical Reports Server (NTRS)
Sunderam, V. S.
1994-01-01
Heterogeneous concurrent computing is gaining increasing acceptance as an alternative or complementary paradigm to multiprocessor-based parallel processing as well as to conventional supercomputing. While algorithmic and programming aspects of heterogeneous concurrent computing are similar to their parallel processing counterparts, system issues, partitioning and scheduling, and performance aspects are significantly different. In this paper, we discuss critical design and implementation issues in heterogeneous concurrent computing, and describe techniques for enhancing its effectiveness. In particular, we highlight the system level infrastructures that are required, aspects of parallel algorithm development that most affect performance, system capabilities and limitations, and tools and methodologies for effective computing in heterogeneous networked environments. We also present recent developments and experiences in the context of the PVM system and comment on ongoing and future work.
NASA Technical Reports Server (NTRS)
Engelberg, N.; Shaw, C., III
1984-01-01
The design of a uniform command language to be used in a local area network of heterogeneous, autonomous nodes is considered. After examining the major characteristics of such a network, and after considering the profile of a scientist using the computers on the net as an investigative aid, a set of reasonable requirements for the command language are derived. Taking into account the possible inefficiencies in implementing a guest-layered network operating system and command language on a heterogeneous net, the authors examine command language naming, process/procedure invocation, parameter acquisition, help and response facilities, and other features found in single-node command languages, and conclude that some features may extend simply to the network case, others extend after some restrictions are imposed, and still others require modifications. In addition, it is noted that some requirements considered reasonable (user accounting reports, for example) demand further study before they can be efficiently implemented on a network of the sort described.
Heterogeneous Distributed Computing for Computational Aerosciences
NASA Technical Reports Server (NTRS)
Sunderam, Vaidy S.
1998-01-01
The research supported under this award focuses on heterogeneous distributed computing for high-performance applications, with particular emphasis on computational aerosciences. The overall goal of this project was to investigate issues in, and develop solutions for, the efficient execution of computational aeroscience codes in heterogeneous concurrent computing environments. In particular, we worked in the context of the PVM [1] system and, subsequent to detailed conversion efforts and performance benchmarking, devised novel techniques to increase the efficacy of heterogeneous networked environments for computational aerosciences. Our work has been based upon the NAS Parallel Benchmark suite, but has also recently expanded in scope to include the NAS I/O benchmarks as specified in the NHT-1 document. In this report we summarize our research accomplishments under the auspices of the grant.
Mouse Driven Window Graphics for Network Teaching.
ERIC Educational Resources Information Center
Makinson, G. J.; And Others
Computer enhanced teaching of computational mathematics on a network system driving graphics terminals is being redeveloped for a mouse-driven, high resolution, windowed environment of a UNIX work station. Preservation of the features of networked access by heterogeneous terminals is provided by the use of the X Window environment. A demonstrator…
Jambusaria, Ankit; Klomp, Jeff; Hong, Zhigang; Rafii, Shahin; Dai, Yang; Malik, Asrar B; Rehman, Jalees
2018-06-07
The heterogeneity of cells across tissue types represents a major challenge for studying biological mechanisms as well as for therapeutic targeting of distinct tissues. Computational prediction of tissue-specific gene regulatory networks may provide important insights into the mechanisms underlying the cellular heterogeneity of cells in distinct organs and tissues. Using three pathway analysis techniques, namely gene set enrichment analysis (GSEA), parametric analysis of gene set enrichment (PGSEA), and our novel model (HeteroPath), which assesses heterogeneously upregulated and downregulated genes within the context of pathways, we generated distinct tissue-specific gene regulatory networks. We analyzed gene expression data derived from freshly isolated heart, brain, and lung endothelial cells and populations of neurons in the hippocampus, cingulate cortex, and amygdala. In both datasets, we found that HeteroPath segregated the distinct cellular populations by identifying regulatory pathways that were not identified by GSEA or PGSEA. Using simulated datasets, HeteroPath demonstrated robustness comparable to that of existing gene set enrichment methods. Furthermore, we generated tissue-specific gene regulatory networks involved in vascular heterogeneity and neuronal heterogeneity by performing motif enrichment of the heterogeneous genes identified by HeteroPath and linking the enriched motifs to regulatory transcription factors in the ENCODE database. HeteroPath assesses contextual bidirectional gene expression within pathways and thus allows for transcriptomic assessment of cellular heterogeneity. Unraveling tissue-specific heterogeneity of gene expression can lead to a better understanding of the molecular underpinnings of tissue-specific phenotypes.
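HeteroPath itself is described only at a high level here; the generic arithmetic underlying over-representation-style pathway analyses can be sketched as a hypergeometric tail probability for the overlap between a pathway and a gene list. This is a standard statistic, not HeteroPath's specific bidirectional score.

```python
from math import comb

# Hypergeometric over-representation p-value: probability of seeing at
# least `overlap` pathway genes when `selected_genes` genes are drawn at
# random from a pool of `total_genes` containing `pathway_genes` members.

def enrichment_p_value(total_genes, pathway_genes, selected_genes, overlap):
    p = 0.0
    upper = min(pathway_genes, selected_genes)
    for k in range(overlap, upper + 1):
        p += (comb(pathway_genes, k)
              * comb(total_genes - pathway_genes, selected_genes - k)
              / comb(total_genes, selected_genes))
    return p
```

A small p-value indicates the pathway is enriched in the selected (e.g. heterogeneously expressed) genes beyond chance; methods like GSEA replace the simple overlap count with a rank-based running statistic.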
An Overview of MSHN: The Management System for Heterogeneous Networks
1999-04-01
An Overview of MSHN: The Management System for Heterogeneous Networks. Debra A. Hensgen, Taylor Kidd, David St. John, Matthew C. Schnaidt, Howard...
Real-time video streaming in mobile cloud over heterogeneous wireless networks
NASA Astrophysics Data System (ADS)
Abdallah-Saleh, Saleh; Wang, Qi; Grecos, Christos
2012-06-01
Recently, the concept of Mobile Cloud Computing (MCC) has been proposed to offload the resource requirements in computational capabilities, storage and security from mobile devices into the cloud. Internet video applications such as real-time streaming are expected to be ubiquitously deployed and supported over the cloud for mobile users, who typically encounter a range of wireless networks of diverse radio access technologies during their roaming. However, real-time video streaming for mobile cloud users across heterogeneous wireless networks presents multiple challenges. The network-layer quality of service (QoS) provision to support high-quality mobile video delivery in this demanding scenario remains an open research question, and this in turn affects the application-level visual quality and impedes mobile users' perceived quality of experience (QoE). In this paper, we devise a framework to support real-time video streaming in this new mobile video networking paradigm and evaluate the performance of the proposed framework empirically through a lab-based yet realistic testing platform. One particular issue we focus on is the effect of users' mobility on the QoS of video streaming over the cloud. We design and implement a hybrid platform comprising a test-bed and an emulator, on which our concepts of mobile cloud computing, video streaming and heterogeneous wireless networks are implemented and integrated to allow the testing of our framework. As representative heterogeneous wireless networks, the popular WLAN (Wi-Fi) and MAN (WiMAX) networks are incorporated in order to evaluate the effects of handovers between these different radio access technologies. The H.264/AVC (Advanced Video Coding) standard is employed for real-time video streaming from a server to mobile users (client nodes) in the networks. Mobility support is introduced to enable a continuous streaming experience for a mobile user across the heterogeneous wireless network.
Real-time video stream packets are captured for analytical purposes on the mobile user node. Experimental results are obtained and analysed. Future work is identified towards further improvement of the current design and implementation. With this new mobile video networking concept and paradigm implemented and evaluated, results and observations obtained from this study would form the basis of a more in-depth, comprehensive understanding of various challenges and opportunities in supporting high-quality real-time video streaming in mobile cloud over heterogeneous wireless networks.
Faithful qubit transmission in a quantum communication network with heterogeneous channels
NASA Astrophysics Data System (ADS)
Chen, Na; Zhang, Lin Xi; Pei, Chang Xing
2018-04-01
Quantum communication networks enable long-distance qubit transmission and distributed quantum computation. In this paper, a quantum communication network with heterogeneous quantum channels is constructed. A faithful qubit transmission scheme is presented. Detailed calculations and performance analyses show that even in a low-quality quantum channel with serious decoherence, only a modest number of locally prepared target qubits is required to achieve near-deterministic qubit transmission.
Spiking network simulation code for petascale computers.
Kunkel, Susanne; Schmidt, Maximilian; Eppler, Jochen M; Plesser, Hans E; Masumoto, Gen; Igarashi, Jun; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus; Helias, Moritz
2014-01-01
Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.
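The storage scheme described above, each synapse consuming memory only on the compute node that harbors its target neuron, can be illustrated with a toy model. This sketch (round-robin neuron placement, flat per-source lists) is a simplification for illustration, not the paper's metaprogrammed data structure.

```python
# Toy distributed synapse storage: each compute node keeps only the
# synapses whose TARGET neuron it hosts, so a source neuron's target
# list is scattered across nodes, and on any one node a given source
# typically has very few entries (the "double collapse" the paper exploits).

class ComputeNode:
    def __init__(self, node_id, n_nodes):
        self.node_id = node_id
        self.n_nodes = n_nodes
        self.incoming = {}              # source neuron -> [(target, weight), ...]

    def hosts(self, neuron):
        return neuron % self.n_nodes == self.node_id   # round-robin placement

    def add_synapse(self, source, target, weight):
        if self.hosts(target):          # store only locally targeted synapses
            self.incoming.setdefault(source, []).append((target, weight))

def build_machine(n_nodes, synapses):
    nodes = [ComputeNode(i, n_nodes) for i in range(n_nodes)]
    for s, t, w in synapses:
        for node in nodes:
            node.add_synapse(s, t, w)
    return nodes
```

At petascale, with ~100,000 nodes and ~10,000 targets per neuron, most nodes hold zero or one synapse per source, which is why a data structure specialized for that sparse case pays off.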
Efficient Use of Distributed Systems for Scientific Applications
NASA Technical Reports Server (NTRS)
Taylor, Valerie; Chen, Jian; Canfield, Thomas; Richard, Jacques
2000-01-01
Distributed computing has been regarded as the future of high performance computing. Nationwide high speed networks such as vBNS are becoming widely available to interconnect high-speed computers, virtual environments, scientific instruments and large data sets. One of the major issues to be addressed with distributed systems is the development of computational tools that facilitate the efficient execution of parallel applications on such systems. These tools must exploit the heterogeneous resources (networks and compute nodes) in distributed systems. This paper presents a tool, called PART, which addresses this issue for mesh partitioning. PART takes advantage of the following heterogeneous system features: (1) processor speed; (2) number of processors; (3) local network performance; and (4) wide area network performance. Further, different finite element applications under consideration may have different computational complexities, different communication patterns, and different element types, which also must be taken into consideration when partitioning. PART uses parallel simulated annealing to partition the domain, taking into consideration network and processor heterogeneity. The results of using PART for an explicit finite element application executing on two IBM SPs (located at Argonne National Laboratory and the San Diego Supercomputer Center) indicate an increase in efficiency by up to 36% as compared to METIS, a widely used mesh partitioning tool. The input to METIS was modified to take into consideration heterogeneous processor performance; METIS does not take into consideration heterogeneous networks. The execution times for these applications were reduced by up to 30% as compared to METIS. These results are given in Figure 1 for four irregular meshes with number of elements ranging from 30,269 elements for the Barth5 mesh to 11,451 elements for the Barth4 mesh. 
Future work with PART entails using the tool with an integrated application requiring distributed systems. In particular, this application, illustrated in the document, entails an integration of finite element and fluid dynamic simulations to address the cooling of turbine blades in a gas turbine engine design. It is not uncommon to encounter high-temperature, film-cooled turbine airfoils with millions of degrees of freedom. This results from the complexity of the various components of the airfoils, which require fine-grained meshing for accuracy. Additional information is contained in the original.
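PART's central cost trade-off, balancing each partition's load weighted by its processor's speed against communication across cut edges, can be sketched with a serial toy version of simulated annealing. This is purely illustrative: PART itself anneals in parallel and uses far richer models of computation and wide-area network cost, and every name and parameter below (`partition_cost`, the `comm` penalty, the cooling schedule) is an assumption, not PART's API.

```python
import math
import random

def partition_cost(assign, work, speed, edges, comm=0.05):
    # Estimated makespan: slowest partition's compute time (work / speed)
    # plus a uniform penalty for every edge cut between partitions.
    load = [0.0] * len(speed)
    for elem, part in enumerate(assign):
        load[part] += work[elem] / speed[part]
    cut = sum(1 for u, v in edges if assign[u] != assign[v])
    return max(load) + comm * cut

def sa_partition(work, speed, edges, steps=20000, t0=1.0, seed=1):
    # Simulated annealing: move one element at a time, Metropolis acceptance.
    rng = random.Random(seed)
    n, p = len(work), len(speed)
    assign = [rng.randrange(p) for _ in range(n)]
    cost = partition_cost(assign, work, speed, edges)
    best, best_cost = assign[:], cost
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-9  # linear cooling
        elem, new_part = rng.randrange(n), rng.randrange(p)
        old_part = assign[elem]
        if new_part == old_part:
            continue
        assign[elem] = new_part
        new_cost = partition_cost(assign, work, speed, edges)
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = assign[:], cost
        else:
            assign[elem] = old_part  # reject the move
    return best, best_cost
```

On a small mesh, partitions owned by faster processors should end up holding proportionally more elements, which is the behavior the heterogeneity-aware cost model is meant to produce.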
Ranking network of a captive rhesus macaque society: a sophisticated corporative kingdom.
Fushing, Hsieh; McAssey, Michael P; Beisner, Brianne; McCowan, Brenda
2011-03-15
We develop a three-step computing approach to explore a hierarchical ranking network for a society of captive rhesus macaques. The computed network is sufficiently informative to address the question: Is the ranking network for a rhesus macaque society more like a kingdom or a corporation? The three steps are devised to deal with the tremendous challenges stemming from the transitivity of dominance as a necessary constraint on the ranking relations among all individual macaques, and from the very high sampling heterogeneity in the behavioral conflict data. The first step simultaneously infers the ranking potentials among all network members, which requires accommodation of the heterogeneous measurement error inherent in behavioral data. Our second step estimates the social rank for all individuals by minimizing the network-wide errors in the ranking potentials. The third step provides a way to compute confidence bounds for selected empirical features in the social ranking. We apply this approach to two sets of conflict data pertaining to two captive societies of adult rhesus macaques. The resultant ranking network for each society is found to be a sophisticated mixture of both a kingdom and a corporation. For validation purposes, we also reanalyze conflict data from twenty longhorn sheep and demonstrate that our three-step approach is capable of correctly computing a ranking network by eliminating all ranking error.
Object-oriented Tools for Distributed Computing
NASA Technical Reports Server (NTRS)
Adler, Richard M.
1993-01-01
Distributed computing systems are proliferating, owing to the availability of powerful, affordable microcomputers and inexpensive communication networks. A critical problem in developing such systems is getting application programs to interact with one another across a computer network. Remote interprogram connectivity is particularly challenging across heterogeneous environments, where applications run on different kinds of computers and operating systems. NetWorks! (trademark) is an innovative software product that provides an object-oriented messaging solution to these problems. This paper describes the design and functionality of NetWorks! and illustrates how it is being used to build complex distributed applications for NASA and in the commercial sector.
Sweeney, Yann; Hellgren Kotaleski, Jeanette; Hennig, Matthias H.
2015-01-01
Gaseous neurotransmitters such as nitric oxide (NO) provide a unique and often overlooked mechanism for neurons to communicate through diffusion within a network, independent of synaptic connectivity. NO provides homeostatic control of intrinsic excitability. Here we conduct a theoretical investigation of the distinguishing roles of NO-mediated diffusive homeostasis in comparison with canonical non-diffusive homeostasis in cortical networks. We find that both forms of homeostasis provide a robust mechanism for maintaining stable activity following perturbations. However, the resulting networks differ, with diffusive homeostasis maintaining substantial heterogeneity in activity levels of individual neurons, a feature disrupted in networks with non-diffusive homeostasis. This results in networks capable of representing input heterogeneity, and linearly responding over a broader range of inputs than those undergoing non-diffusive homeostasis. We further show that these properties are preserved when homeostatic and Hebbian plasticity are combined. These results suggest a mechanism for dynamically maintaining neural heterogeneity, and expose computational advantages of non-local homeostatic processes. PMID:26158556
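The contrast drawn in this abstract, a shared diffusive signal versus purely local control, can be caricatured with threshold-linear rate units in a few lines. This is a deliberately crude sketch of the idea, not the paper's spiking-network model; the inputs, adaptation rate, and target rate below are arbitrary assumptions.

```python
def simulate(inputs, diffusive, dt=0.05, steps=4000, target=1.0):
    # Threshold-linear rate units; each unit's firing threshold adapts so
    # that either its own rate (local, non-diffusive rule) or the
    # population mean rate (shared, NO-like diffusive rule) reaches target.
    thetas = [0.0] * len(inputs)
    for _ in range(steps):
        rates = [max(0.0, I - th) for I, th in zip(inputs, thetas)]
        if diffusive:
            err = sum(rates) / len(rates) - target  # one shared signal
            thetas = [th + dt * err for th in thetas]
        else:
            thetas = [th + dt * (r - target) for th, r in zip(thetas, rates)]
    return [max(0.0, I - th) for I, th in zip(inputs, thetas)]
```

With heterogeneous inputs, the non-diffusive rule drives every rate to the target, while the diffusive rule pins only the mean, preserving the rate heterogeneity the abstract describes.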
ERIC Educational Resources Information Center
Crane, Earl Newell
2013-01-01
The research problem that inspired this effort is the challenge of managing the security of systems in large-scale heterogeneous networked environments. Human intervention is slow and limited: humans operate at much slower speeds than networked computer communications and there are few humans associated with each network. Enabling each node in the…
Research of G3-PLC net self-organization processes in the NS-3 modeling framework
NASA Astrophysics Data System (ADS)
Pospelova, Irina; Chebotayev, Pavel; Klimenko, Aleksey; Myakochin, Yuri; Polyakov, Igor; Shelupanov, Alexander; Zykov, Dmitriy
2017-11-01
When modern infocommunication networks are designed, a combination of several data transfer channels is widely used to improve the quality and robustness of communication. Communication systems based on more than one data transfer channel are called heterogeneous communication systems. For the design of a heterogeneous network, mesh technology is an effective solution: it ensures message delivery to the destination under unpredictable interference conditions in each channel. One of the high-priority problems is therefore the choice of a routing protocol when mesh networks are designed. An important design stage for any computer network is modeling. Modeling allows us to evaluate several design variants and to compute the necessary functional specifications for each of them, which reduces the cost of the physical realization of a network. In this article, research on dynamic routing in the NS-3 simulation modeling framework is presented. The article contains an evaluation of the applicability of simulation modeling to the problem of heterogeneous network design. The modeling results may subsequently be used for the physical realization of such networks.
Coarse-Grained Clustering Dynamics of Heterogeneously Coupled Neurons.
Moon, Sung Joon; Cook, Katherine A; Rajendran, Karthikeyan; Kevrekidis, Ioannis G; Cisternas, Jaime; Laing, Carlo R
2015-12-01
The formation of oscillating phase clusters in a network of identical Hodgkin-Huxley neurons is studied, along with their dynamic behavior. The neurons are synaptically coupled in an all-to-all manner, yet the characteristic time of the synaptic coupling is heterogeneous across the connections. In a network of N neurons where this heterogeneity is characterized by a prescribed random variable, the oscillatory single-cluster state can transition, through [Formula: see text] (possibly perturbed) period-doubling and subsequent bifurcations, to a variety of multiple-cluster states. The clustering dynamic behavior is computationally studied both at the detailed and the coarse-grained levels, and a numerical approach that enables studying the coarse-grained dynamics in a network of arbitrarily large size is suggested. Among the cluster states formed, double clusters, composed of sub-networks of nearly equal size, are seen to be stable; interestingly, the heterogeneity parameter in each of the double-cluster components tends to be consistent with the random variable over the entire network: given a double-cluster state, permuting the dynamical variables of the neurons can lead to a combinatorially large number of different, yet similar "fine" states that appear practically identical at the coarse-grained level. For weak heterogeneity we find that correlations rapidly develop, within each cluster, between a neuron's "identity" (its own value of the heterogeneity parameter) and its dynamical state. For single- and double-cluster states we demonstrate an effective coarse-graining approach that uses the Polynomial Chaos expansion to succinctly describe the dynamics by these quickly established "identity-state" correlations. This coarse-graining approach is utilized, within the equation-free framework, to perform efficient computations of the neuron ensemble dynamics.
NASA Technical Reports Server (NTRS)
Townsend, James C.; Weston, Robert P.; Eidson, Thomas M.
1993-01-01
The Framework for Interdisciplinary Design Optimization (FIDO) is a general programming environment for automating the distribution of complex computing tasks over a networked system of heterogeneous computers. For example, instead of manually passing a complex design problem between its diverse specialty disciplines, the FIDO system provides for automatic interactions between the discipline tasks and facilitates their communications. The FIDO system networks all the computers involved into a distributed heterogeneous computing system, so they have access to centralized data and can work on their parts of the total computation simultaneously in parallel whenever possible. Thus, each computational task can be done by the most appropriate computer. Results can be viewed as they are produced and variables changed manually for steering the process. The software is modular in order to ease migration to new problems: different codes can be substituted for each of the current code modules with little or no effect on the others. The potential for commercial use of FIDO rests in the capability it provides for automatically coordinating diverse computations on a networked system of workstations and computers. For example, FIDO could provide the coordination required for the design of vehicles or electronics or for modeling complex systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaskey, Alexander J.
There is a lack of state-of-the-art quantum computing simulation software that scales on heterogeneous systems like Titan. The Tensor Network Quantum Virtual Machine (TNQVM) provides a quantum simulator that uses a distributed network of GPUs to simulate quantum circuits, leveraging recent results from tensor network theory.
RSTensorFlow: GPU Enabled TensorFlow for Deep Learning on Commodity Android Devices
Alzantot, Moustafa; Wang, Yingnan; Ren, Zhengshuang; Srivastava, Mani B.
2018-01-01
Mobile devices have become an essential part of our daily lives. By virtue of both their increasing computing power and the recent progress made in AI, mobile devices have evolved to act as intelligent assistants in many tasks rather than a mere way of making phone calls. However, popular and commonly used tools and frameworks for machine intelligence still lack the ability to make proper use of the available heterogeneous computing resources on mobile devices. In this paper, we study the benefits of utilizing the heterogeneous (CPU and GPU) computing resources available on commodity Android devices while running deep learning models. We leverage the heterogeneous computing framework RenderScript to accelerate the execution of deep learning models on commodity Android devices. Our system is implemented as an extension to the popular open-source framework TensorFlow. By integrating our acceleration framework tightly into TensorFlow, machine learning engineers can now easily benefit from the heterogeneous computing resources on mobile devices without the need for any extra tools. We evaluate our system on different Android phone models to study the trade-offs of running different neural network operations on the GPU. We also compare the performance of running different model architectures, such as convolutional and recurrent neural networks, on the CPU only versus using heterogeneous computing resources. Our results show that the GPUs on these phones offer substantial performance gains for matrix multiplication; models that involve multiplication of large matrices can therefore run much faster (approximately 3 times faster in our experiments) with GPU support. PMID:29629431
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2018-05-01
The pinning/leader control problem concerns the design of a leader or pinning controller that guides a complex network to a desired trajectory or target (synchronisation or consensus). For a time-invariant complex network, this includes the design of the pinning controller gain and the choice of how many nodes to pin. Usually, the fewer the pinned nodes, the larger the pinning gain required to achieve network synchronisation. On the other hand, realistic application scenarios of complex networks are characterised by switching topologies and time-varying node coupling strengths and link weights, which make the pinning/leader control problem hard to solve. Additionally, the system dynamics at the nodes can be heterogeneous. In this paper, we derive robust stabilisation conditions for time-varying heterogeneous complex networks with jointly connected topologies whose coupling strengths and link weights are affected by time-varying uncertainties. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, we formulate computationally undemanding stabilisability conditions to design a pinning/leader control gain for robust network synchronisation. The effectiveness of the proposed approach is shown by several design examples applied to a paradigmatic, well-known complex network composed of heterogeneous Chua's circuits.
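Stripped of the switching topologies and uncertainties treated in the paper, the basic pinning mechanism is easy to simulate: diffusively couple the nodes and add a feedback term on the pinned subset only. The sketch below (fixed ring topology, scalar node states, arbitrary gain) is a minimal illustration of the mechanism, not the paper's LMI-based design.

```python
def simulate_pinning(adj, pinned, gain, target, x0, dt=0.01, steps=20000):
    # Euler integration of  dx_i/dt = sum_j a_ij (x_j - x_i) + u_i,
    # where u_i = gain * (target - x_i) only for pinned nodes.
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        nxt = []
        for i in range(n):
            coupling = sum(adj[i][j] * (x[j] - x[i]) for j in range(n))
            pin = gain * (target - x[i]) if i in pinned else 0.0
            nxt.append(x[i] + dt * (coupling + pin))
        x = nxt
    return x
```

Pinning a single node of the ring is enough to drag the entire network to the target state, which is the basic trade-off (few pinned nodes versus required gain) that the abstract mentions.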
NASA Astrophysics Data System (ADS)
Pfeil, Thomas; Jordan, Jakob; Tetzlaff, Tom; Grübl, Andreas; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz
2016-04-01
High-level brain function, such as memory, classification, or reasoning, can be realized by means of recurrent networks of simplified model neurons. Analog neuromorphic hardware constitutes a fast and energy-efficient substrate for the implementation of such neural computing architectures in technical applications and neuroscientific research. The functional performance of neural networks is often critically dependent on the level of correlations in the neural activity. In finite networks, correlations are typically inevitable due to shared presynaptic input. Recent theoretical studies have shown that inhibitory feedback, abundant in biological neural networks, can actively suppress these shared-input correlations and thereby enable neurons to fire nearly independently. For networks of spiking neurons, the decorrelating effect of inhibitory feedback has so far been explicitly demonstrated only for homogeneous networks of neurons with linear subthreshold dynamics. Theory, however, suggests that the effect is a general phenomenon, present in any system with sufficient inhibitory feedback, irrespective of the details of the network structure or the neuronal and synaptic properties. Here, we investigate the effect of network heterogeneity on correlations in sparse, random networks of inhibitory neurons with nonlinear, conductance-based synapses. Emulations of these networks on the analog neuromorphic-hardware system Spikey allow us to test the efficiency of decorrelation by inhibitory feedback in the presence of hardware-specific heterogeneities. The configurability of the hardware substrate enables us to modulate the extent of heterogeneity in a systematic manner. We selectively study the effects of shared input and recurrent connections on correlations in membrane potentials and spike trains. 
Our results confirm that shared-input correlations are actively suppressed by inhibitory feedback also in highly heterogeneous networks exhibiting broad, heavy-tailed firing-rate distributions. In line with former studies, cell heterogeneities reduce shared-input correlations. Overall, however, correlations in the recurrent system can increase with the level of heterogeneity as a consequence of diminished effective negative feedback.
Provably Secure Heterogeneous Access Control Scheme for Wireless Body Area Network.
Omala, Anyembe Andrew; Mbandu, Angolo Shem; Mutiria, Kamenyi Domenic; Jin, Chunhua; Li, Fagen
2018-04-28
Wireless body area network (WBAN) provides a medium through which physiological information can be harvested and transmitted to an application provider (AP) in real time. Integrating WBAN into a heterogeneous Internet of Things (IoT) ecosystem would enable an AP to monitor patients from anywhere at any time. However, the IoT roadmap of interconnected 'Things' still faces many challenges. One of the challenges in healthcare is the security and privacy of medical data streamed from heterogeneously networked devices. In this paper, we first propose a heterogeneous signcryption scheme in which the sender is in a certificateless cryptographic (CLC) environment while the receiver is in an identity-based cryptographic (IBC) environment. We then use this scheme to design a heterogeneous access control protocol. A formal security proof of indistinguishability against adaptive chosen ciphertext attack and unforgeability against adaptive chosen message attack in the random oracle model is presented. In comparison with some existing access control schemes, our scheme has lower computation and communication costs.
Experiments and Analysis on a Computer Interface to an Information-Retrieval Network.
ERIC Educational Resources Information Center
Marcus, Richard S.; Reintjes, J. Francis
A primary goal of this project was to develop an interface that would provide direct access for inexperienced users to existing online bibliographic information retrieval networks. The experiment tested the concept of a virtual-system mode of access to a network of heterogeneous interactive retrieval systems and databases. An experimental…
Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware.
Rast, Alexander; Galluppi, Francesco; Davies, Sergio; Plana, Luis; Patterson, Cameron; Sharp, Thomas; Lester, David; Furber, Steve
2011-11-01
Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience. Copyright © 2011 Elsevier Ltd. All rights reserved.
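For reference, the Izhikevich model mentioned above reduces to two coupled ODEs with a reset rule, small enough to sketch in plain Python. This uses simple forward-Euler integration of the published Izhikevich (2003) equations; the regular-spiking parameters and input current are conventional textbook choices, not SpiNNaker's fixed-point on-chip implementation.

```python
def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5, steps=2000):
    # Izhikevich simple model: v' = 0.04 v^2 + 5 v + 140 - u + I,
    # u' = a (b v - u); on v >= 30 mV, record a spike and reset v=c, u+=d.
    v, u = -65.0, b * -65.0
    spikes = []
    for step in range(steps):
        if v >= 30.0:  # spike cutoff and reset
            spikes.append(step * dt)
            v, u = c, u + d
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
    return spikes  # spike times in ms
```

With I = 10 this regular-spiking parameter set fires tonically; with I = 0 it settles to rest without spiking.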
Scaling Laws for Heterogeneous Wireless Networks
2009-09-01
Mi, Shichao; Han, Hui; Chen, Cailian; Yan, Jian; Guan, Xinping
2016-02-19
Heterogeneous wireless sensor networks (HWSNs) can accomplish more tasks and prolong the network lifetime. However, they are vulnerable to attacks from the environment or from malicious nodes. This paper is concerned with the design of a secure consensus scheme for HWSNs consisting of two types of nodes: sensor nodes (SNs), which have more computation power, and relay nodes (RNs), which have low power and can only forward information for sensor nodes. To address the security issues of distributed estimation in HWSNs, we exploit this heterogeneity of responsibilities between the two types of nodes and propose a parameter adjusted-based consensus scheme (PACS) to mitigate the effect of a malicious node. Finally, the convergence property is proven to be guaranteed, and simulation results validate the effectiveness and efficiency of PACS.
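The PACS details are specific to the paper's estimation model, but the general flavor of consensus that survives a malicious node can be shown with the classic W-MSR rule, a different and simpler scheme substituted here purely for illustration: each node discards the f most extreme neighbour values on either side of its own before averaging.

```python
def wmsr_step(x, neighbors, f=1):
    # W-MSR round: each node removes up to f neighbor values strictly
    # above its own (the largest ones) and up to f strictly below
    # (the smallest ones), then averages the rest with its own value.
    new = []
    for i, xi in enumerate(x):
        vals = [x[j] for j in neighbors[i]]
        above = sorted((v for v in vals if v > xi), reverse=True)[:f]
        below = sorted(v for v in vals if v < xi)[:f]
        kept = vals[:]
        for v in above + below:
            kept.remove(v)
        new.append((xi + sum(kept)) / (1 + len(kept)))
    return new
```

On a complete graph with one node stuck broadcasting an extreme outlier, the normal nodes still agree on a value inside the range of their own initial states, i.e. the malicious value never drags the consensus.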
Application-oriented integrated control center (AICC) for heterogeneous optical networks
NASA Astrophysics Data System (ADS)
Zhao, Yongli; Zhang, Jie; Cao, Xuping; Wang, Dajiang; Wu, Koubo; Cai, Yinxiang; Gu, Wanyi
2011-12-01
Various broadband services, such as data center applications and cloud computing, are consuming the bandwidth resources of optical networks. Although the available bandwidth keeps increasing with the development of transmission technologies, challenges remain for future optical networks, and the relationship between the upper application layer and the lower network resource layer needs further research. To improve the efficiency of network resource usage and the capability of service provisioning, heterogeneous optical network resources can be abstracted as unified Application Programming Interfaces (APIs) that are opened to various upper-layer applications through the Application-oriented Integrated Control Center (AICC) proposed in this paper. A novel OpenFlow-based unified control architecture is proposed for the optimization of cross-layer resources. Numerical results from simulation experiments show the good performance of the AICC.
Discovering network behind infectious disease outbreak
NASA Astrophysics Data System (ADS)
Maeno, Yoshiharu
2010-11-01
Stochasticity and spatial heterogeneity have recently attracted great interest in the study of the spread of infectious diseases. The presented method solves an inverse problem to discover the effectively decisive topology of a heterogeneous network and to reveal the transmission parameters that govern the stochastic spread over the network, from a dataset on an infectious disease outbreak in its early growth phase. Populations in a combination of epidemiological compartment models and a meta-population network model are described by stochastic differential equations. Probability density functions are derived from the equations and used for maximum likelihood estimation of the topology and parameters. The method is tested with computationally synthesized datasets and with the WHO dataset on the SARS outbreak.
High-throughput Bayesian Network Learning using Heterogeneous Multicore Computers
Linderman, Michael D.; Athalye, Vivek; Meng, Teresa H.; Asadi, Narges Bani; Bruggner, Robert; Nolan, Garry P.
2017-01-01
Aberrant intracellular signaling plays an important role in many diseases. The causal structure of signal transduction networks can be modeled as Bayesian Networks (BNs) and computationally learned from experimental data. However, learning the structure of BNs is an NP-hard problem that, even with fast heuristics, is too time consuming for large, clinically important networks (20–50 nodes). In this paper, we present a novel graphics processing unit (GPU)-accelerated implementation of a Markov chain Monte Carlo-based algorithm for learning BNs that is up to 7.5-fold faster than current general-purpose processor (GPP)-based implementations. The GPU-based implementation is just one of several implementations within the larger application, each optimized for a different input or machine configuration. We describe the methodology we use to build an extensible application, assembled from these variants, that can target a broad range of heterogeneous systems, e.g., GPUs and multicore GPPs. Specifically, we show how we use the Merge programming model to efficiently integrate, test, and intelligently select among the different potential implementations. PMID:28819655
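The serial skeleton that such a GPU implementation accelerates, Metropolis sampling over DAG structures scored by BIC, fits in a short sketch. Everything below (the score, the single-edge toggle proposal, the toy data in the usage) is a generic reconstruction of this class of algorithm, not the paper's implementation.

```python
import math
import random

def family_bic(child, parents, data, r=2):
    # BIC score of one node given its parent set, for discrete data:
    # max log-likelihood minus 0.5 * log(N) * (number of free parameters).
    counts = {}
    for row in data:
        key = tuple(row[p] for p in parents)
        counts.setdefault(key, [0] * r)[row[child]] += 1
    ll = 0.0
    for cnt in counts.values():
        tot = sum(cnt)
        ll += sum(c * math.log(c / tot) for c in cnt if c)
    return ll - 0.5 * math.log(len(data)) * (r ** len(parents)) * (r - 1)

def bic_score(parents, data):
    return sum(family_bic(i, sorted(parents[i]), data) for i in parents)

def has_cycle(parents):
    # DFS for a directed cycle in the parent-set representation.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in parents}

    def visit(v):
        color[v] = GRAY
        for p in parents[v]:
            if color[p] == GRAY or (color[p] == WHITE and visit(p)):
                return True
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and visit(v) for v in parents)

def mcmc_structure(data, n_vars, iters=2000, seed=7):
    # Metropolis over DAGs: propose toggling one edge, keep the best seen.
    rng = random.Random(seed)
    parents = {i: set() for i in range(n_vars)}
    score = bic_score(parents, data)
    best = {i: set(s) for i, s in parents.items()}
    best_score = score
    for _ in range(iters):
        u, v = rng.sample(range(n_vars), 2)  # propose toggling edge u -> v
        if u in parents[v]:
            parents[v].discard(u)
        else:
            parents[v].add(u)
            if has_cycle(parents):
                parents[v].discard(u)
                continue
        new_score = bic_score(parents, data)
        if new_score >= score or rng.random() < math.exp(new_score - score):
            score = new_score
            if score > best_score:
                best = {i: set(s) for i, s in parents.items()}
                best_score = score
        else:  # reject: undo the toggle
            if u in parents[v]:
                parents[v].discard(u)
            else:
                parents[v].add(u)
    return best, best_score
```

On synthetic data where one variable is a noisy copy of another and a third is independent, the chain quickly discovers the dependent pair, since adding that edge raises the BIC score by far more than its parameter penalty.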
Additional Security Considerations for Grid Management
NASA Technical Reports Server (NTRS)
Eidson, Thomas M.
2003-01-01
The use of Grid computing environments is growing in popularity. A Grid computing environment is primarily a wide area network that encompasses multiple local area networks, where some of the local area networks are managed by different organizations. A Grid computing environment also includes common interfaces for distributed computing software so that the heterogeneous set of machines that make up the Grid can be used more easily. The other key feature of a Grid is that the distributed computing software includes appropriate security technology. The focus of most Grid software is on the security involved with application execution, file transfers, and other remote computing procedures. However, there are other important security issues related to the management of a Grid and the users who use that Grid. This note discusses these additional security issues and makes several suggestions as to how they can be managed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venkata, Manjunath Gorentla; Aderholdt, William F
Pre-exascale systems are expected to have a significant amount of hierarchical and heterogeneous on-node memory, and this trend in system architecture is expected to continue into the exascale era. Along with hierarchical-heterogeneous memory, such a system typically has a high-performing network and a compute accelerator. This system architecture is effective not only for running traditional High Performance Computing (HPC) applications (Big-Compute), but also for running data-intensive HPC applications and Big-Data applications. As a consequence, there is a growing desire to have a single system serve the needs of both Big-Compute and Big-Data applications. Though the system architecture supports the convergence of Big-Compute and Big-Data, the programming models and software layers have yet to evolve to support either hierarchical-heterogeneous memory systems or this convergence. We propose a programming abstraction to address this problem. The programming abstraction is implemented as a software library and runs on pre-exascale and exascale systems supporting current and emerging system architectures. Using distributed data structures as a central concept, it provides (1) a simple, usable, and portable abstraction for hierarchical-heterogeneous memory and (2) a unified programming abstraction for Big-Compute and Big-Data applications.
Behavior of susceptible-infected-susceptible epidemics on heterogeneous networks with saturation
NASA Astrophysics Data System (ADS)
Joo, Jaewook; Lebowitz, Joel L.
2004-06-01
We investigate saturation effects in susceptible-infected-susceptible models of the spread of epidemics in heterogeneous populations. The structure of interactions in the population is represented by networks with connectivity distribution P(k), including scale-free (SF) networks with power law distributions P(k) ~ k^-γ. Considering cases where the transmission of infection between nodes depends on their connectivity, we introduce a saturation function C(k) which reduces the infection transmission rate λ across an edge going from a node with high connectivity k. A mean-field approximation with the neglect of degree-degree correlations then leads to a finite threshold λc > 0 for SF networks with 2 < γ ≤ 3. We also find, in this approximation, the fraction of infected individuals among those with degree k for λ close to λc. We investigate via computer simulation the contact process on a heterogeneous regular lattice and compare the results with those obtained from mean-field theory with and without neglect of degree-degree correlations.
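In the standard degree-based mean-field picture (neglecting degree-degree correlations), linearizing around the disease-free state gives a threshold estimate of the form λc = ⟨k⟩ / ⟨k² C(k)⟩: with C(k) = 1 this is the classic ⟨k⟩/⟨k²⟩ result that vanishes as the degree cutoff grows for 2 < γ ≤ 3, while a C(k) decaying like 1/k keeps the second moment, and hence the threshold, finite. The snippet below evaluates this estimate numerically; both the exact form of the threshold and the saturation function C(k) = k_sat/(k_sat + k) are illustrative assumptions, not necessarily the paper's expressions.

```python
def mf_threshold(gamma, kmax, C=lambda k: 1.0, kmin=2):
    # lambda_c = <k> / <k^2 C(k)> for P(k) proportional to k^(-gamma)
    # on degrees kmin..kmax; the P(k) normalization cancels in the ratio.
    ks = range(kmin, kmax + 1)
    pk = [k ** -gamma for k in ks]  # unnormalized P(k)
    mean_k = sum(k * p for k, p in zip(ks, pk))
    mean_k2C = sum(C(k) * k * k * p for k, p in zip(ks, pk))
    return mean_k / mean_k2C
```

Raising the cutoff kmax collapses the unsaturated threshold toward zero, while the saturated threshold barely moves, which is the qualitative effect the abstract reports.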
Cooperation prevails when individuals adjust their social ties.
Santos, Francisco C; Pacheco, Jorge M; Lenaerts, Tom
2006-10-20
Conventional evolutionary game theory predicts that natural selection favours the selfish and strong even though cooperative interactions thrive at all levels of organization in living systems. Recent investigations demonstrated that a limiting factor for the evolution of cooperative interactions is the way in which they are organized, cooperators becoming evolutionarily competitive whenever individuals are constrained to interact with few others along the edges of networks with low average connectivity. Despite this insight, the conundrum of cooperation remains since recent empirical data shows that real networks exhibit typically high average connectivity and associated single-to-broad-scale heterogeneity. Here, a computational model is constructed in which individuals are able to self-organize both their strategy and their social ties throughout evolution, based exclusively on their self-interest. We show that the entangled evolution of individual strategy and network structure constitutes a key mechanism for the sustainability of cooperation in social networks. For a given average connectivity of the population, there is a critical value for the ratio W between the time scales associated with the evolution of strategy and of structure above which cooperators wipe out defectors. Moreover, the emerging social networks exhibit an overall heterogeneity that accounts very well for the diversity of patterns recently found in acquired data on social networks. Finally, heterogeneity is found to become maximal when W reaches its critical value. These results show that simple topological dynamics reflecting the individual capacity for self-organization of social ties can produce realistic networks of high average connectivity with associated single-to-broad-scale heterogeneity. 
On the other hand, they show that cooperation cannot evolve as a result of "social viscosity" alone in heterogeneous networks with high average connectivity, requiring the additional mechanism of topological co-evolution to ensure the survival of cooperative behaviour.
Cascade heterogeneous face sketch-photo synthesis via dual-scale Markov Network
NASA Astrophysics Data System (ADS)
Yao, Saisai; Chen, Zhenxue; Jia, Yunyi; Liu, Chengyun
2018-03-01
Heterogeneous face sketch-photo synthesis is an important and challenging task in computer vision that has been widely applied in law enforcement and digital entertainment. Since different scales yield different synthesis results, this paper proposes a cascade sketch-photo synthesis method via a dual-scale Markov Network. Firstly, a Markov Network at the larger scale is used to synthesise the initial sketches, and the local vertical and horizontal neighbour search (LVHNS) method is used to search for the neighbour patches of test patches in the training set. Then, the initial sketches and test photos are jointly entered into the smaller-scale Markov Network. Finally, the fine sketches are obtained after the cascade synthesis process. Extensive experimental results on various databases demonstrate the superiority of the proposed method compared with several state-of-the-art methods.
Campus-Wide Computing: Early Results Using Legion at the University of Virginia
2006-01-01
Ubiquitous virtual private network: a solution for WSN seamless integration.
Villa, David; Moya, Francisco; Villanueva, Félix Jesús; Aceña, Óscar; López, Juan Carlos
2014-01-06
Sensor networks are becoming an essential part of ubiquitous systems and applications. However, there are no well-defined protocols or mechanisms to access the sensor network from the enterprise information system. We consider this issue as a heterogeneous network interconnection problem, and as a result, the same concepts may be applied. Specifically, we propose the use of object-oriented middlewares to provide a virtual private network in which all involved elements (sensor nodes or computer applications) will be able to communicate as if all of them were in a single and uniform network.
Tools for Administration of a UNIX-Based Network
NASA Technical Reports Server (NTRS)
LeClaire, Stephen; Farrar, Edward
2004-01-01
Several computer programs have been developed to enable efficient administration of a large, heterogeneous, UNIX-based computing and communication network that includes a variety of computers connected to a variety of subnetworks. One program provides secure software tools for administrators to create, modify, lock, and delete accounts of specific users. This program also provides tools for users to change their UNIX passwords and log-in shells. These tools check for errors. Another program comprises a client and a server component that, together, provide a secure mechanism to create, modify, and query quota levels on a network file system (NFS) mounted by use of the VERITAS File System software. The client software resides on an internal secure computer with a secure Web interface; one can gain access to the client software from any authorized computer capable of running web-browser software. The server software resides on a UNIX computer configured with the VERITAS software system. Directories where VERITAS quotas are applied are NFS-mounted. Another program is a Web-based, client/server Internet Protocol (IP) address tool that facilitates maintenance and lookup of information about IP addresses for a network of computers.
An approach for heterogeneous and loosely coupled geospatial data distributed computing
NASA Astrophysics Data System (ADS)
Chen, Bin; Huang, Fengru; Fang, Yu; Huang, Zhou; Lin, Hui
2010-07-01
Most GIS (Geographic Information System) applications tend to have heterogeneous and autonomous geospatial information resources, and the availability of these local resources is unpredictable and dynamic under a distributed computing environment. In order to use these local resources together to solve larger geospatial information processing problems that are related to an overall situation, in this paper, with the support of peer-to-peer computing technologies, we propose a geospatial data distributed computing mechanism that involves loosely coupled geospatial resource directories and a concept termed the Equivalent Distributed Program of global geospatial queries to solve geospatial distributed computing problems under heterogeneous GIS environments. First, we present a geospatial query processing schema for distributed computing, as well as a method for equivalently transforming a global geospatial query into distributed local queries at the SQL (Structured Query Language) level, to solve the coordination problem among heterogeneous resources. Second, peer-to-peer technologies are used to maintain a loosely coupled network environment that consists of autonomous geospatial information resources, to achieve decentralized and consistent synchronization among global geospatial resource directories, and to carry out distributed transaction management of local queries. Finally, based on the developed prototype system, example applications of simple and complex geospatial data distributed queries are presented to illustrate the procedure of global geospatial information processing.
Documentary of MFENET, a national computer network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shuttleworth, B.O.
1977-06-01
The national Magnetic Fusion Energy Computer Network (MFENET) is a newly operational star network of geographically separated heterogeneous hosts and a communications subnetwork of PDP-11 processors. Host processors interfaced to the subnetwork currently include a CDC 7600 at the Central Computer Center (CCC) and several DECsystem-10's at User Service Centers (USC's). The network was funded by a U.S. government agency (ERDA) to provide in an economical manner the needed computational resources to magnetic confinement fusion researchers. Phase I operation of MFENET distributed the processing power of the CDC 7600 among the USC's through the provision of file transport between any two hosts and remote job entry to the 7600. Extending the capabilities of Phase I, MFENET Phase II provided interactive terminal access to the CDC 7600 from the USC's. A file management system is maintained at the CCC for all network users. The history and development of MFENET are discussed, with emphasis on the protocols used to link the host computers and the USC software. Comparisons are made of MFENET versus ARPANET (Advanced Research Projects Agency Computer Network) and DECNET (Digital Distributed Network Architecture). DECNET and MFENET host-to-host, host-to-CCP, and link protocols are discussed in detail. The USC--CCP interface is described briefly. 43 figures, 2 tables.
Heterogeneous fractionation profiles of meta-analytic coactivation networks.
Laird, Angela R; Riedel, Michael C; Okoe, Mershack; Jianu, Radu; Ray, Kimberly L; Eickhoff, Simon B; Smith, Stephen M; Fox, Peter T; Sutherland, Matthew T
2017-04-01
Computational cognitive neuroimaging approaches can be leveraged to characterize the hierarchical organization of distributed, functionally specialized networks in the human brain. To this end, we performed large-scale mining across the BrainMap database of coordinate-based activation locations from over 10,000 task-based experiments. Meta-analytic coactivation networks were identified by jointly applying independent component analysis (ICA) and meta-analytic connectivity modeling (MACM) across a wide range of model orders (i.e., d=20-300). We then iteratively computed pairwise correlation coefficients for consecutive model orders to compare spatial network topologies, ultimately yielding fractionation profiles delineating how "parent" functional brain systems decompose into constituent "child" sub-networks. Fractionation profiles differed dramatically across canonical networks: some exhibited complex and extensive fractionation into a large number of sub-networks across the full range of model orders, whereas others exhibited little to no decomposition as model order increased. Hierarchical clustering was applied to evaluate this heterogeneity, yielding three distinct groups of network fractionation profiles: high, moderate, and low fractionation. BrainMap-based functional decoding of resultant coactivation networks revealed a multi-domain association regardless of fractionation complexity. Rather than emphasize a cognitive-motor-perceptual gradient, these outcomes suggest the importance of inter-lobar connectivity in functional brain organization. We conclude that high fractionation networks are complex and comprised of many constituent sub-networks reflecting long-range, inter-lobar connectivity, particularly in fronto-parietal regions. In contrast, low fractionation networks may reflect persistent and stable networks that are more internally coherent and exhibit reduced inter-lobar communication. Copyright © 2017 Elsevier Inc. All rights reserved.
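The fractionation-profile computation described above (correlating component maps at consecutive model orders and counting how many "child" maps match each "parent") can be sketched as follows; the 0.5 matching threshold and the toy data layout are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def fractionation_profile(maps_by_order):
    """For each pair of consecutive model orders, correlate every
    component map at the lower order ("parent") with every map at the
    higher order ("child"), and count, per parent, the children whose
    spatial correlation exceeds a match threshold."""
    orders = sorted(maps_by_order)
    profile = {}
    for lo, hi in zip(orders, orders[1:]):
        parents = np.asarray(maps_by_order[lo], float)
        children = np.asarray(maps_by_order[hi], float)
        # rows 0..len(parents)-1 are parents, the remaining rows children
        r = np.corrcoef(np.vstack([parents, children]))
        r = r[:len(parents), len(parents):]          # parent x child correlations
        profile[(lo, hi)] = (r > 0.5).sum(axis=1)    # matched children per parent
    return profile
```

A parent whose count grows quickly with model order corresponds to a "high fractionation" network in the sense used above; a count that stays near one indicates little decomposition.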
Application Portable Parallel Library
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Blech, Richard A.; Quealy, Angela; Townsend, Scott
1995-01-01
The Application Portable Parallel Library (APPL) computer program is a subroutine-based message-passing software library intended to provide a consistent interface to the variety of multiprocessor computers on the market today. It minimizes the effort needed to move an application program from one computer to another: the user develops the application program once and can then easily move it from the parallel computer on which it was created to another parallel computer ("parallel computer" here also includes a heterogeneous collection of networked computers). APPL is written in the C language, with one FORTRAN 77 subroutine for UNIX-based computers, and is callable from application programs written in C or FORTRAN 77.
Kumar, Pardeep; Ylianttila, Mika; Gurtov, Andrei; Lee, Sang-Gon; Lee, Hoon-Jae
2014-01-01
Robust security is highly coveted in real wireless sensor network (WSN) applications, since wireless sensors sense critical data from the application environment. This article presents an efficient and adaptive mutual authentication framework that suits real heterogeneous WSN-based applications (such as smart homes, industrial environments, smart grids, and healthcare monitoring). The proposed framework offers: (i) key initialization; (ii) secure network (cluster) formation (i.e., mutual authentication and dynamic key establishment); (iii) key revocation; and (iv) new node addition into the network. The correctness of the proposed scheme is formally verified. An extensive analysis shows that the proposed scheme provides message confidentiality, mutual authentication and dynamic session key establishment, node privacy, and message freshness. Moreover, the preliminary study also reveals that the proposed framework is secure against popular types of attacks, such as impersonation attacks, man-in-the-middle attacks, replay attacks, and information-leakage attacks. As a result, we believe the proposed framework achieves efficiency at reasonable computation and communication costs and can be a safeguard to real heterogeneous WSN applications. PMID:24521942
Zhang, Xiaotian; Yin, Jian; Zhang, Xu
2018-03-02
Increasing evidence suggests that dysregulation of microRNAs (miRNAs) may lead to a variety of diseases; therefore, identifying disease-related miRNAs is a crucial problem. Currently, many computational approaches have been proposed to predict binary miRNA-disease associations. In this study, in order to predict underlying miRNA-disease association types, a semi-supervised model, the network-based label propagation algorithm for inferring multiple types of miRNA-disease associations (NLPMMDA), is proposed, using mutual information derived from the heterogeneous network. The NLPMMDA method integrates disease semantic similarity, miRNA functional similarity, and Gaussian interaction profile kernel similarity information of miRNAs and diseases to construct a heterogeneous network. NLPMMDA is a semi-supervised model that does not require verified negative samples. Leave-one-out cross validation (LOOCV) was implemented for four known types of miRNA-disease associations and demonstrated the reliable performance of our method. Moreover, case studies of lung cancer and breast cancer confirmed the effective performance of NLPMMDA in predicting novel miRNA-disease associations and their association types.
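A minimal sketch of label propagation on a similarity network, in the common F ← αSF + (1−α)Y formulation with a symmetrically normalized similarity matrix; the normalization and the α value are assumptions for illustration, not necessarily NLPMMDA's exact update:

```python
import numpy as np

def label_propagation(W, Y, alpha=0.5, tol=1e-9, max_iter=1000):
    """Propagate association labels Y over a similarity network W.
    W: (n, n) symmetric non-negative similarity matrix.
    Y: (n, c) initial label/association-type scores.
    Iterates F <- alpha * S @ F + (1 - alpha) * Y, where
    S = D^{-1/2} W D^{-1/2} is the symmetrically normalized similarity."""
    d = W.sum(axis=1)
    d[d == 0] = 1.0  # guard against isolated nodes
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    F = Y.astype(float).copy()
    for _ in range(max_iter):
        F_new = alpha * S @ F + (1 - alpha) * Y
        if np.abs(F_new - F).max() < tol:
            return F_new
        F = F_new
    return F
```

For α < 1 this iteration converges to the closed form (1−α)(I − αS)⁻¹Y, so the iterative and direct solutions can be cross-checked.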
Dome: Distributed Object Migration Environment
1994-05-01
AD-A281 134. Dome: Distributed object migration environment. Adam Beguelin, Erik Seligman, Michael Starkey. May 1994. CMU-CS-94-153, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213. Abstract (fragment): Dome... Linda [4], Isis [2], and Express [6] allow a programmer to treat a heterogeneous network of computers as a parallel machine. These tools allow the
HERA: A New Platform for Embedding Agents in Heterogeneous Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Alonso, Ricardo S.; de Paz, Juan F.; García, Óscar; Gil, Óscar; González, Angélica
Ambient Intelligence (AmI) based systems require the development of innovative solutions that integrate distributed intelligent systems with context-aware technologies. In this sense, Multi-Agent Systems (MAS) and Wireless Sensor Networks (WSN) are two key technologies for developing distributed systems based on AmI scenarios. This paper presents the new HERA (Hardware-Embedded Reactive Agents) platform, which allows the use of dynamic and self-adaptable heterogeneous WSNs in which agents are directly embedded in the wireless nodes. This approach facilitates the inclusion of context-aware capabilities in AmI systems to gather data from their surrounding environments, achieving a higher level of ubiquitous and pervasive computing.
Graph Partitioning for Parallel Applications in Heterogeneous Grid Environments
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Kumar, Shailendra; Das, Sajal K.; Biegel, Bryan (Technical Monitor)
2002-01-01
The problem of partitioning irregular graphs and meshes for parallel computations on homogeneous systems has been extensively studied. However, these partitioning schemes fail when the target system architecture exhibits heterogeneity in resource characteristics. With the emergence of technologies such as the Grid, it is imperative to study the partitioning problem taking into consideration the differing capabilities of such distributed heterogeneous systems. In our model, the heterogeneous system consists of processors with varying processing power and an underlying non-uniform communication network. We present in this paper a novel multilevel partitioning scheme for irregular graphs and meshes that takes into account issues pertinent to Grid computing environments. Our partitioning algorithm, called MiniMax, generates and maps partitions onto a heterogeneous system with the objective of minimizing the maximum execution time of the parallel distributed application. For the experimental performance study, we have considered both a realistic mesh problem from NASA as well as synthetic workloads. Simulation results demonstrate that MiniMax generates high quality partitions for various classes of applications targeted for parallel execution in a distributed heterogeneous environment.
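The mapping objective, minimizing the maximum per-processor finish time when processors have unequal speeds, can be illustrated with a simple greedy heuristic; this is a sketch of the objective only, not the multilevel MiniMax algorithm itself:

```python
def map_partitions(work, speeds):
    """Greedy heuristic for the MiniMax-style objective: assign
    partition workloads `work` to processors with relative `speeds`,
    placing the largest remaining partition on whichever processor
    would finish it earliest. Returns (assignment, makespan)."""
    loads = [0.0] * len(speeds)           # current finish time per processor
    assignment = [None] * len(work)
    for j in sorted(range(len(work)), key=lambda i: -work[i]):
        # processor that minimizes its finish time after taking partition j
        p = min(range(len(speeds)), key=lambda q: loads[q] + work[j] / speeds[q])
        loads[p] += work[j] / speeds[p]
        assignment[j] = p
    return assignment, max(loads)
```

On heterogeneous systems the cost of a partition depends on where it lands (work divided by processor speed), which is what distinguishes this objective from homogeneous load balancing.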
Using RDF to Model the Structure and Process of Systems
NASA Astrophysics Data System (ADS)
Rodriguez, Marko A.; Watkins, Jennifer H.; Bollen, Johan; Gershenson, Carlos
Many systems can be described in terms of networks of discrete elements and their various relationships to one another. A semantic network, or multi-relational network, is a directed labeled graph consisting of a heterogeneous set of entities connected by a heterogeneous set of relationships. Semantic networks serve as a promising general-purpose modeling substrate for complex systems. Various standardized formats and tools are now available to support practical, large-scale semantic network models. First, the Resource Description Framework (RDF) offers a standardized semantic network data model that can be further formalized by ontology modeling languages such as RDF Schema (RDFS) and the Web Ontology Language (OWL). Second, the recent introduction of highly performant triple-stores (i.e. semantic network databases) allows semantic network models on the order of 10^9 edges to be efficiently stored and manipulated. RDF and its related technologies are currently used extensively in the domains of computer science, digital library science, and the biological sciences. This article will provide an introduction to RDF/RDFS/OWL and an examination of its suitability to model discrete element complex systems.
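The triple-pattern matching at the heart of RDF-style querying can be sketched without any library; the example resources and predicates below are invented for illustration:

```python
def query(triples, s=None, p=None, o=None):
    """Match (subject, predicate, object) patterns against a set of
    RDF-style triples; None acts as a wildcard, as in a basic
    triple-pattern query."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# A toy semantic network: heterogeneous entities, heterogeneous relationships.
triples = {
    ("marko", "worksAt", "LANL"),
    ("marko", "knows", "johan"),
    ("johan", "worksAt", "LANL"),
}
```

Real triple-stores implement the same pattern-matching semantics with indexes over subject, predicate, and object, which is what makes billion-edge models practical.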
Completing sparse and disconnected protein-protein network by deep learning.
Huang, Lei; Liao, Li; Wu, Cathy H
2018-03-22
Protein-protein interaction (PPI) prediction remains a central task in systems biology for achieving a better and holistic understanding of cellular and intracellular processes. Recently, an increasing number of computational methods have shifted from pair-wise prediction to network-level prediction. Many of the existing network-level methods predict PPIs under the assumption that the training network should be connected. However, this assumption greatly affects the prediction power and limits the application area, because the current gold-standard PPI networks are usually very sparse and disconnected. Therefore, how to effectively predict PPIs based on a training network that is sparse and disconnected remains a challenge. In this work, we developed a novel PPI prediction method based on a deep learning neural network and the regularized Laplacian kernel. We use a neural network with an autoencoder-like architecture to implicitly simulate the evolutionary processes of a PPI network. Neurons of the output layer correspond to proteins and are labeled with values (1 for interaction and 0 otherwise) from the adjacency matrix of a sparse disconnected training PPI network. Unlike an autoencoder, neurons at the input layer are given all-zero input, reflecting an assumption of no a priori knowledge about PPIs, and hidden layers of smaller sizes mimic the ancient interactome at different times during evolution. After the training step, an evolved PPI network whose rows are outputs of the neural network can be obtained. We then predict PPIs by applying the regularized Laplacian kernel to the transition matrix that is built upon the evolved PPI network. The results from cross-validation experiments show that the PPI prediction accuracies for yeast data and human data, measured as AUC, are increased by up to 8.4% and 14.9%, respectively, as compared to the baseline.
Moreover, the evolved PPI network can also help us leverage complementary information from the disconnected training network and multiple heterogeneous data sources. Tested by the yeast data with six heterogeneous feature kernels, the results show our method can further improve the prediction performance by up to 2%, which is very close to an upper bound that is obtained by an Approximate Bayesian Computation based sampling method. The proposed evolution deep neural network, coupled with regularized Laplacian kernel, is an effective tool in completing sparse and disconnected PPI networks and in facilitating integration of heterogeneous data sources.
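The regularized Laplacian kernel used in the scoring step has a standard closed form, K = (I + αL)⁻¹ with L = D − A. A minimal sketch (the α value is an illustrative assumption, and here the kernel is applied directly to an adjacency matrix rather than to the paper's evolved transition matrix):

```python
import numpy as np

def regularized_laplacian_kernel(A, alpha=0.1):
    """Regularized Laplacian kernel K = (I + alpha * L)^(-1), where
    L = D - A is the graph Laplacian of adjacency matrix A.
    Entry K[i, j] serves as an interaction score for nodes i and j;
    directly connected nodes receive higher scores than distant ones."""
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.inv(np.eye(len(A)) + alpha * L)
```
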
Merlin - Massively parallel heterogeneous computing
NASA Technical Reports Server (NTRS)
Wittie, Larry; Maples, Creve
1989-01-01
Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.
NASA Technical Reports Server (NTRS)
Moorhead, Robert J., II; Smith, Wayne
1992-01-01
This mid-year report presents design concepts for the communication network for the Advanced Solid Rocket Motor (ASRM) facility being built at Yellow Creek near Iuka, MS. The overall network is to include heterogeneous computers, to use various protocols, and to have different bandwidths. Performance considerations must be given to the potential network applications in the network environment. The performance evaluation of X window applications was given the major emphasis in this report. A simulation study using Bones will be included later. This mid-year report has three parts: part 1 is an investigation of X window traffic using TCP/IP over Ethernet networks; part 2 is a survey study of performance concepts of X window applications with Macintosh computers; and the last part is a tutorial on DECnet protocols. The results of this report should be useful in the design and operation of the ASRM communication network.
Infectious disease transmission and contact networks in wildlife and livestock.
Craft, Meggan E
2015-05-26
The use of social and contact networks to answer basic and applied questions about infectious disease transmission in wildlife and livestock is receiving increased attention. Through social network analysis, we understand that wild animal and livestock populations, including farmed fish and poultry, often have a heterogeneous contact structure owing to social structure or trade networks. Network modelling is a flexible tool used to capture the heterogeneous contacts of a population in order to test hypotheses about the mechanisms of disease transmission, simulate and predict disease spread, and test disease control strategies. This review highlights how to use animal contact data, including social networks, for network modelling, and emphasizes that researchers should have a pathogen of interest in mind before collecting or using contact data. This paper describes the rising popularity of network approaches for understanding transmission dynamics in wild animal and livestock populations; discusses the common mismatch between contact networks as measured in animal behaviour and relevant parasites to match those networks; and highlights knowledge gaps in how to collect and analyse contact data. Opportunities for the future include increased attention to experiments, pathogen genetic markers and novel computational tools. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
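A minimal example of the kind of network transmission model the review discusses: a discrete-time SIR sketch in which each infected animal transmits independently to each susceptible contact with probability beta and then recovers. The update rules are illustrative assumptions, not a model from the review:

```python
import random

def sir_on_network(adj, seed_node, beta, steps, rng=None):
    """Discrete-time SIR on a contact network given as adjacency lists.
    Each step, every infected node transmits to each susceptible
    neighbour independently with probability beta, then recovers.
    Returns the set of nodes that were ever infected."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    susceptible = set(adj) - {seed_node}
    infected = {seed_node}
    recovered = set()
    for _ in range(steps):
        new_infected = set()
        for i in infected:
            for j in adj[i]:
                if j in susceptible and rng.random() < beta:
                    new_infected.add(j)
        susceptible -= new_infected
        recovered |= infected
        infected = new_infected
        if not infected:
            break
    return recovered | infected
```

Replacing `adj` with an empirically measured contact network (e.g. from proximity loggers or trade records) is exactly the step where the mismatch between measured contacts and pathogen-relevant contacts matters.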
Large epidemic thresholds emerge in heterogeneous networks of heterogeneous nodes
NASA Astrophysics Data System (ADS)
Yang, Hui; Tang, Ming; Gross, Thilo
2015-08-01
One of the famous results of network science states that networks with heterogeneous connectivity are more susceptible to epidemic spreading than their more homogeneous counterparts. In particular, in networks of identical nodes it has been shown that network heterogeneity, i.e. a broad degree distribution, can lower the epidemic threshold at which epidemics can invade the system. Network heterogeneity can thus allow diseases with lower transmission probabilities to persist and spread. However, it has been pointed out that networks in which the properties of nodes are intrinsically heterogeneous can be very resilient to disease spreading. Heterogeneity in structure can enhance or diminish the resilience of networks with heterogeneous nodes, depending on the correlations between the topological and intrinsic properties. Here, we consider a plausible scenario where people have intrinsic differences in susceptibility and adapt their social network structure to the presence of the disease. We show that the resilience of networks with heterogeneous connectivity can surpass those of networks with homogeneous connectivity. For epidemiology, this implies that network heterogeneity should not be studied in isolation, it is instead the heterogeneity of infection risk that determines the likelihood of outbreaks.
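One common way to quantify how intrinsic susceptibilities interact with topology is a linearized threshold estimate, λ_c = 1/ρ(SA), where ρ is the spectral radius of the adjacency matrix scaled by node susceptibilities. This is a sketch of that general idea, not the authors' adaptive-network model:

```python
import numpy as np

def epidemic_threshold(A, susceptibility):
    """Linearized SIS-type threshold estimate on a static network:
    lambda_c = 1 / rho(S A), where S = diag(susceptibility) scales
    each node's infection rate and rho is the spectral radius.
    Lower intrinsic susceptibility raises the threshold."""
    M = np.diag(susceptibility) @ A
    return 1.0 / max(abs(np.linalg.eigvals(M)))
```

Because the threshold depends on the product of susceptibilities and topology, correlations between a node's degree and its intrinsic susceptibility can either raise or lower λ_c, which is the interplay the abstract describes.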
NASA Astrophysics Data System (ADS)
Skaggs, Todd H.
2011-10-01
Critical path analysis (CPA) is a method for estimating macroscopic transport coefficients of heterogeneous materials that are highly disordered at the micro-scale. Developed originally to model conduction in semiconductors, numerous researchers have noted that CPA might also have relevance to flow and transport processes in porous media. However, the results of several numerical investigations of critical path analysis on pore network models raise questions about the applicability of CPA to porous media. Among other things, these studies found that (i) in well-connected 3D networks, CPA predictions were inaccurate and became worse when heterogeneity was increased; and (ii) CPA could not fully explain the transport properties of 2D networks. To better understand the applicability of CPA to porous media, we made numerical computations of permeability and electrical conductivity on 2D and 3D networks with differing pore-size distributions and geometries. A new CPA model for the relationship between the permeability and electrical conductivity was found to be in good agreement with numerical data, and to be a significant improvement over a classical CPA model. In sufficiently disordered 3D networks, the new CPA prediction was within ±20% of the true value, and was nearly optimal in terms of minimizing the squared prediction errors across differing network configurations. The agreement of CPA predictions with 2D network computations was similarly good, although 2D networks are in general not well-suited for evaluating CPA. Numerical transport coefficients derived for regular 3D networks of slit-shaped pores were found to be in better agreement with experimental data from rock samples than were coefficients derived for networks of cylindrical pores.
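The core CPA idea, that transport in a highly disordered network is controlled by the critical (bottleneck) conductance on the best percolating path, can be sketched numerically. The 2D bond lattice and lognormal conductances below are illustrative assumptions, not the pore geometries used in the study:

```python
# Hedged CPA sketch: find the largest conductance threshold g_c such that
# bonds with conductance >= g_c still percolate across a 2D lattice.
import random

def percolates(n, open_bonds):
    """Union-find check for a left-right spanning cluster in an n x n grid."""
    parent = list(range(n * n + 2))
    LEFT, RIGHT = n * n, n * n + 1  # virtual boundary nodes
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    for r in range(n):
        union(r * n, LEFT)           # first column touches left boundary
        union(r * n + n - 1, RIGHT)  # last column touches right boundary
    for a, b in open_bonds:
        union(a, b)
    return find(LEFT) == find(RIGHT)

def cpa_critical_conductance(n, bond_g):
    """Scan thresholds from high to low; return the first that percolates."""
    for g in sorted(bond_g.values(), reverse=True):
        if percolates(n, {e for e, ge in bond_g.items() if ge >= g}):
            return g
    return min(bond_g.values())

random.seed(1)
n = 10
bonds = {}
for r in range(n):
    for c in range(n):
        s = r * n + c
        if c + 1 < n: bonds[(s, s + 1)] = random.lognormvariate(0, 2)
        if r + 1 < n: bonds[(s, s + n)] = random.lognormvariate(0, 2)
g_c = cpa_critical_conductance(n, bonds)
assert min(bonds.values()) <= g_c <= max(bonds.values())
```

In CPA the macroscopic permeability or conductivity is then estimated as proportional to g_c, up to a geometry-dependent prefactor that the study's improved model refines.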
Robust sequential working memory recall in heterogeneous cognitive networks
Rabinovich, Mikhail I.; Sokolov, Yury; Kozma, Robert
2014-01-01
Psychiatric disorders are often caused by partial heterogeneous disinhibition in cognitive networks, controlling sequential and spatial working memory (SWM). Such dynamic connectivity changes suggest that the normal relationship between the neuronal components within the network deteriorates. As a result, competitive network dynamics is qualitatively altered. This dynamics defines the robust recall of the sequential information from memory and, thus, the SWM capacity. To understand pathological and non-pathological bifurcations of the sequential memory dynamics, here we investigate the model of recurrent inhibitory-excitatory networks with heterogeneous inhibition. We consider the ensemble of units with all-to-all inhibitory connections, in which the connection strengths are monotonically distributed at some interval. Based on computer experiments and studying the Lyapunov exponents, we observed and analyzed the new phenomenon—clustered sequential dynamics. The results are interpreted in the context of the winnerless competition principle. Accordingly, clustered sequential dynamics is represented in the phase space of the model by two weakly interacting quasi-attractors. One of them is similar to the sequential heteroclinic chain—the regular image of SWM, while the other is a quasi-chaotic attractor. Coexistence of these quasi-attractors means that the recall of the normal information sequence is intermittently interrupted by episodes with chaotic dynamics. We indicate potential dynamic ways for augmenting damaged working memory and other cognitive functions. PMID:25452717
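The winnerless-competition dynamics underlying this abstract can be illustrated with a minimal generalized Lotka-Volterra network with asymmetric all-to-all inhibition, whose activity passes sequentially from unit to unit, the dynamical image of sequential recall. The three-unit system and all parameter values below are toy assumptions, not the paper's model:

```python
# Toy winnerless competition: asymmetric inhibition (weak on the successor,
# strong on the predecessor) produces a heteroclinic cycle with sequential
# switching of the "winning" unit.

def simulate(T=200.0, dt=0.01, eps=1e-4):
    rho = [[1.0, 1.5, 0.5],   # inhibition matrix (illustrative values)
           [0.5, 1.0, 1.5],
           [1.5, 0.5, 1.0]]
    a = [0.6, 0.2, 0.1]       # initial activities
    winners = []
    for _ in range(int(T / dt)):
        total = [sum(rho[i][j] * a[j] for j in range(3)) for i in range(3)]
        # Euler step of da_i/dt = a_i (1 - sum_j rho_ij a_j) + eps
        a = [max(0.0, a[i] + dt * (a[i] * (1.0 - total[i]) + eps))
             for i in range(3)]
        winners.append(max(range(3), key=lambda i: a[i]))
    return a, winners

a, winners = simulate()
switches = sum(1 for w0, w1 in zip(winners, winners[1:]) if w0 != w1)
assert all(0.0 <= x <= 1.2 for x in a)
assert switches >= 2  # activity passes sequentially between units
```

The small additive term eps plays the role of noise or external input keeping the trajectory moving along the heteroclinic chain; making the inhibition strengths heterogeneous, as in the paper, is what splits this regular sequence into coexisting quasi-attractors.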
Job Scheduling in a Heterogeneous Grid Environment
NASA Technical Reports Server (NTRS)
Shan, Hong-Zhang; Smith, Warren; Oliker, Leonid; Biswas, Rupak
2004-01-01
Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.
Mobility-Aware Caching and Computation Offloading in 5G Ultra-Dense Cellular Networks
Chen, Min; Hao, Yixue; Qiu, Meikang; Song, Jeungeun; Wu, Di; Humar, Iztok
2016-01-01
Recent trends show that Internet traffic is increasingly dominated by content, which is accompanied by the exponential growth of traffic. To cope with this phenomenon, network caching is introduced to utilize the storage capacity of diverse network devices. In this paper, we first summarize four basic caching placement strategies, i.e., local caching, Device-to-Device (D2D) caching, Small cell Base Station (SBS) caching and Macrocell Base Station (MBS) caching. However, studies show that so far, much of the research has ignored the impact of user mobility. Therefore, taking the effect of user mobility into consideration, we propose a joint mobility-aware caching and SBS density placement scheme (MS caching). In addition, differences and relationships between caching and computation offloading are discussed. We present the design of a hybrid computation offloading scheme and support it with experimental results, which demonstrate improved performance in terms of energy cost. Finally, we discuss the design of an incentive mechanism by considering network dynamics, differentiated users' quality of experience (QoE) and the heterogeneity of mobile terminals in terms of caching and computing capabilities. PMID:27347975
Law of Large Numbers: The Theory, Applications and Technology-Based Education
ERIC Educational Resources Information Center
Dinov, Ivo D.; Christou, Nicolas; Gould, Robert
2009-01-01
Modern approaches for technology-based blended education utilize a variety of recently developed novel pedagogical, computational and network resources. Such attempts employ technology to deliver integrated, dynamically-linked, interactive-content and heterogeneous learning environments, which may improve student comprehension and information…
NMESys: An expert system for network fault detection
NASA Technical Reports Server (NTRS)
Nelson, Peter C.; Warpinski, Janet
1991-01-01
The problem of network management is becoming an increasingly difficult and challenging task. It is very common today to find heterogeneous networks consisting of many different types of computers, operating systems, and protocols. The complexity of implementing a network with this many components is difficult enough, while the maintenance of such a network is an even larger problem. A prototype network management expert system, NMESys, was implemented in the C Language Integrated Production System (CLIPS). NMESys concentrates on solving some of the critical problems encountered in managing a large network. The major goal of NMESys is to provide a network operator with an expert system tool to quickly and accurately detect hard and potential failures and to minimize or eliminate user downtime in a large network.
Luo, Jiawei; Xiao, Qiu
2017-02-01
MicroRNAs (miRNAs) play a critical role by regulating their targets at the post-transcriptional level. Identification of potential miRNA-disease associations will aid in deciphering the pathogenesis of human polygenic diseases. Several computational models have been developed to uncover novel miRNA-disease associations based on the predicted target genes. However, due to the insufficient number of experimentally validated miRNA-target interactions as well as the relatively high false-positive and false-negative rates of predicted target genes, it is still challenging for these prediction models to obtain remarkable performance. The purpose of this study is to prioritize miRNA candidates for diseases. We first construct a heterogeneous network, which consists of a disease similarity network, a miRNA functional similarity network and a known miRNA-disease association network. Then, an unbalanced bi-random walk-based algorithm on the heterogeneous network (BRWH) is adopted to discover potential associations by exploiting bipartite subgraphs. Based on 5-fold cross validation, the proposed network-based method achieves AUC values ranging from 0.782 to 0.907 for the 22 human diseases and an average AUC of approximately 0.846. The experiments indicated that BRWH can achieve better performance than several popular methods. In addition, case studies of some common diseases further demonstrated the superior performance of our proposed method on prioritizing disease-related miRNA candidates. Copyright © 2017 Elsevier Inc. All rights reserved.
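The general shape of a bi-random walk on such a heterogeneous network can be sketched as alternating walks on the disease-similarity and miRNA-similarity sides of the bipartite association matrix, with possibly unequal step counts (the "unbalanced" part). The matrix sizes, parameter values, and exact update rule below are illustrative assumptions, not the BRWH specification:

```python
# Hedged bi-random walk sketch on a miRNA-disease heterogeneous network.
import numpy as np

def normalize(S):
    """Row-normalize a similarity matrix into a transition matrix."""
    rows = S.sum(axis=1, keepdims=True)
    return S / np.where(rows == 0, 1, rows)

def bi_random_walk(SD, SM, A, alpha=0.8, left_steps=2, right_steps=2):
    """SD: disease similarity, SM: miRNA similarity, A: known associations
    (diseases x miRNAs). Unequal step counts make the walk unbalanced."""
    WD, WM = normalize(SD), normalize(SM)
    A0 = A / max(A.sum(), 1.0)
    R = A0.copy()
    for t in range(max(left_steps, right_steps)):
        RD = alpha * WD @ R + (1 - alpha) * A0 if t < left_steps else R
        RM = alpha * R @ WM.T + (1 - alpha) * A0 if t < right_steps else R
        R = (RD + RM) / 2.0  # average the two walks
    return R

rng = np.random.default_rng(0)
SD = rng.random((4, 4)); SD = (SD + SD.T) / 2   # toy disease similarities
SM = rng.random((5, 5)); SM = (SM + SM.T) / 2   # toy miRNA similarities
A = np.zeros((4, 5)); A[0, 1] = A[2, 3] = 1.0   # two known associations
scores = bi_random_walk(SD, SM, A)
assert scores.shape == (4, 5) and float(scores.min()) >= 0.0
```

High-scoring zero entries of A are then the prioritized candidate associations, which is how such methods are evaluated under cross validation.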
Automatic inference of multicellular regulatory networks using informative priors.
Sun, Xiaoyun; Hong, Pengyu
2009-01-01
To fully understand the mechanisms governing animal development, computational models and algorithms are needed to enable quantitative studies of the underlying regulatory networks. We developed a mathematical model based on dynamic Bayesian networks to model multicellular regulatory networks that govern cell differentiation processes. A machine-learning method was developed to automatically infer such a model from heterogeneous data. We show that the model inference procedure can be greatly improved by incorporating interaction data across species. The proposed approach was applied to C. elegans vulval induction to reconstruct a model capable of simulating C. elegans vulval induction under 73 different genetic conditions.
Moradi, Saber; Qiao, Ning; Stefanini, Fabio; Indiveri, Giacomo
2018-02-01
Neuromorphic computing systems comprise networks of neurons that use asynchronous events for both computation and communication. This type of representation offers several advantages in terms of bandwidth and power consumption in neuromorphic electronic systems. However, managing the traffic of asynchronous events in large scale systems is a daunting task, both in terms of circuit complexity and memory requirements. Here, we present a novel routing methodology that employs both hierarchical and mesh routing strategies and combines heterogeneous memory structures for minimizing both memory requirements and latency, while maximizing programming flexibility to support a wide range of event-based neural network architectures, through parameter configuration. We validated the proposed scheme in a prototype multicore neuromorphic processor chip that employs hybrid analog/digital circuits for emulating synapse and neuron dynamics together with asynchronous digital circuits for managing the address-event traffic. We present a theoretical analysis of the proposed connectivity scheme, describe the methods and circuits used to implement such scheme, and characterize the prototype chip. Finally, we demonstrate the use of the neuromorphic processor with a convolutional neural network for the real-time classification of visual symbols being flashed to a dynamic vision sensor (DVS) at high speed.
The epidemic threshold theorem with social and contact heterogeneity
NASA Astrophysics Data System (ADS)
Hincapié Palacio, Doracelly; Ospina Giraldo, Juan; Gómez Arias, Rubén Darío
2008-03-01
The threshold theorem of an epidemic SIR model was compared when infectious and susceptible individuals have homogeneous mixing and heterogeneous social status and when individuals of random networks have contact heterogeneity. In particular, the effect of vaccination in such models is considered when individuals or nodes are subject to impoverishment, vaccination, and loss of immunity. An equilibrium analysis and local stability analysis of small perturbations about the equilibrium values were implemented using computer algebra. Numerical simulations were executed in order to describe the dynamics of disease transmission and changes in the basic reproductive rate. The implications of these results are examined in light of threats to global public health security.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, J.P.; Bangs, A.L.; Butler, P.L.
Hetero Helix is a programming environment which simulates shared memory on a heterogeneous network of distributed-memory computers. The machines in the network may vary with respect to their native operating systems and internal representation of numbers. Hetero Helix presents a simple programming model to developers, and also considers the needs of designers, system integrators, and maintainers. The key software technology underlying Hetero Helix is the use of a "compiler" which analyzes the data structures in shared memory and automatically generates code which translates data representations from the format native to each machine into a common format, and vice versa. The design of Hetero Helix was motivated in particular by the requirements of robotics applications. Hetero Helix has been used successfully in an integration effort involving 27 CPUs in a heterogeneous network and a body of software totaling roughly 100,000 lines of code. 25 refs., 6 figs.
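The "common format" idea behind such generated translation code can be illustrated with a canonical big-endian wire encoding, much like XDR: each host marshals its native record layout into one agreed representation and unmarshals on receipt. The record layout here is an invented example, not Hetero Helix's actual format:

```python
# Hedged sketch of native-to-canonical data translation for heterogeneous
# hosts: a fixed big-endian wire format independent of host byte order.
import struct

# Canonical format: big-endian int32, float64, and a fixed 8-byte name.
WIRE = ">id8s"

def to_wire(count, value, name):
    """Marshal a (count, value, name) record into the canonical format."""
    return struct.pack(WIRE, count, value, name.encode().ljust(8, b"\0"))

def from_wire(buf):
    """Unmarshal a canonical-format buffer back into native Python values."""
    count, value, raw = struct.unpack(WIRE, buf)
    return count, value, raw.rstrip(b"\0").decode()

buf = to_wire(7, 3.25, "sensor")
assert len(buf) == struct.calcsize(WIRE)          # 4 + 8 + 8 = 20 bytes
assert from_wire(buf) == (7, 3.25, "sensor")      # round-trips exactly
```

A generated translator of the kind the abstract describes would emit one such pack/unpack pair per shared-memory data structure, per host architecture.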
Multiplex congruence network of natural numbers.
Yan, Xiao-Yong; Wang, Wen-Xu; Chen, Guan-Rong; Shi, Ding-Hua
2016-03-31
Congruence theory has many applications in physical, social, biological and technological systems. Congruence arithmetic has been a fundamental tool for data security and computer algebra. However, much less attention was devoted to the topological features of congruence relations among natural numbers. Here, we explore the congruence relations in the setting of a multiplex network and unveil some unique and outstanding properties of the multiplex congruence network. Analytical results show that every layer therein is a sparse and heterogeneous subnetwork with a scale-free topology. Counterintuitively, every layer has an extremely strong controllability in spite of its scale-free structure that is usually difficult to control. Another amazing feature is that the controllability is robust against targeted attacks to critical nodes but vulnerable to random failures, which also differs from ordinary scale-free networks. The multi-chain structure with a small number of chain roots arising from each layer accounts for the strong controllability and the abnormal feature. The multiplex congruence network offers a graphical solution to the simultaneous congruences problem, which may have implication in cryptography based on simultaneous congruences. Our work also gains insight into the design of networks integrating advantages of both heterogeneous and homogeneous networks without inheriting their limitations.
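The simultaneous-congruences problem that the multiplex network is said to solve graphically is, in its standard arithmetic form, the Chinese Remainder Theorem for pairwise coprime moduli. A minimal constructive solver:

```python
# Chinese Remainder Theorem: solve x = r_i (mod m_i) for pairwise coprime
# moduli, building the solution incrementally.

def crt(residues, moduli):
    x, M = 0, 1
    for r, m in zip(residues, moduli):
        # Choose t with M*t = (r - x) mod m, via the inverse of M mod m.
        t = ((r - x) * pow(M, -1, m)) % m
        x += M * t
        M *= m
    return x % M

# Sun Tzu's classic instance: x = 2 (mod 3), 3 (mod 5), 2 (mod 7) -> 23.
assert crt([2, 3, 2], [3, 5, 7]) == 23
```

The solution is unique modulo the product of the moduli, which is why cryptographic schemes built on simultaneous congruences, mentioned in the abstract, can rely on it. (The three-argument `pow` modular inverse requires Python 3.8+.)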
Challenges of CAC in Heterogeneous Wireless Cognitive Networks
NASA Astrophysics Data System (ADS)
Wang, Jiazheng; Fu, Xiuhua
Call admission control (CAC) is an effective mechanism for ensuring the QoS of wireless networks. The vision of next generation wireless networks has led to the development of new CAC algorithms specifically designed for heterogeneous wireless cognitive networks. However, there will be a number of challenges created by dynamic spectrum access and scheduling techniques associated with the cognitive systems. In this paper, for the first time, we recommend that CAC policies should distinguish between primary users and secondary users. A classification of CAC policies in cognitive network contexts is proposed. Although there has been some research under the umbrella of joint CAC and cross-layer optimization for wireless networks, the advent of cognitive networks adds some additional problems. We present conceptual models for joint CAC and cross-layer optimization, respectively. Also, the benefit of cognition can only be realized fully if application requirements and traffic flow contexts are determined or inferred in order to know what modes of operation and spectrum bands to use at each point in time. A process model of cognition-involved, per-flow CAC is presented. Because there may be a number of parameters on different levels affecting a CAC decision, and the conditions for accepting or rejecting a call must be computed quickly and frequently, simplicity and practicability are particularly important for designing a feasible CAC algorithm. In short, a more thorough understanding of CAC in heterogeneous wireless cognitive networks may help one to design better CAC algorithms.
NASA Astrophysics Data System (ADS)
Marcus, Kelvin
2014-06-01
The U.S. Army Research Laboratory (ARL) has built a "Network Science Research Lab" to support research that aims to improve their ability to analyze, predict, design, and govern complex systems that interweave the social/cognitive, information, and communication network genres. Researchers at ARL and the Network Science Collaborative Technology Alliance (NS-CTA), a collaborative research alliance funded by ARL, conducted experimentation to determine if automated network monitoring tools and task-aware agents deployed within an emulated tactical wireless network could potentially increase the retrieval of relevant data from heterogeneous distributed information nodes. ARL and NS-CTA required the capability to perform this experimentation over clusters of heterogeneous nodes with emulated wireless tactical networks where each node could contain different operating systems, application sets, and physical hardware attributes. Researchers utilized the Dynamically Allocated Virtual Clustering Management System (DAVC) to address each of the infrastructure support requirements necessary in conducting their experimentation. The DAVC is an experimentation infrastructure that provides the means to dynamically create, deploy, and manage virtual clusters of heterogeneous nodes within a cloud computing environment based upon resource utilization such as CPU load, available RAM and hard disk space. The DAVC uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex private networks. Clusters created by the DAVC system can be utilized for software development, experimentation, and integration with existing hardware and software. The goal of this paper is to explore how ARL and the NS-CTA leveraged the DAVC to create, deploy and manage multiple experimentation clusters to support their experimentation goals.
NEXUS - Resilient Intelligent Middleware
NASA Astrophysics Data System (ADS)
Kaveh, N.; Hercock, R. Ghanea
Service-oriented computing, a composition of distributed-object, component-based, and Web-based computing concepts, is becoming the widespread choice for developing dynamic heterogeneous software assets available as services across a network. One of the major strengths of service-oriented technologies is the high abstraction layer and large granularity level at which software assets are viewed compared to traditional object-oriented technologies. Collaboration through encapsulated and separately defined service interfaces creates a service-oriented environment, whereby multiple services can be linked together through their interfaces to compose a functional system. This approach enables better integration of legacy and non-legacy services, via wrapper interfaces, and allows for service composition at a more abstract level, especially in cases such as vertical market stacks. The heterogeneous nature of service-oriented technologies and the granularity of their software components make them a suitable computing model in the pervasive domain.
Towards a Framework for Evolvable Network Design
NASA Astrophysics Data System (ADS)
Hassan, Hoda; Eltarras, Ramy; Eltoweissy, Mohamed
The layered Internet architecture that had long guided network design and protocol engineering was an “interconnection architecture” defining a framework for interconnecting networks rather than a model for generic network structuring and engineering. We claim that the approach of abstracting the network in terms of an internetwork hinders the thorough understanding of the network's salient characteristics and emergent behavior, impeding the design evolution required to address extreme scale, heterogeneity, and complexity. This paper reports on our work in progress that aims to: 1) investigate the problem space in terms of the factors and decisions that influenced the design and development of computer networks; 2) sketch the core principles for designing complex computer networks; and 3) propose a model and related framework for building evolvable, adaptable and self-organizing networks. We will adopt a bottom-up strategy primarily focusing on the building unit of the network model, which we call the “network cell”. The model is inspired by natural complex systems. A network cell is intrinsically capable of specialization, adaptation and evolution. Subsequently, we propose CellNet, a framework for evolvable network design. We outline scenarios for using the CellNet framework to enhance the legacy Internet protocol stack.
Xue, Ling; Scoglio, Caterina
2013-05-01
A wide range of infectious diseases are both vertically and horizontally transmitted. Such diseases are spatially transmitted via multiple species in heterogeneous environments, typically described by complex metapopulation models. The reproduction number, R0, is a critical metric predicting whether the disease can invade the metapopulation system. This paper presents the reproduction number for a generic disease vertically and horizontally transmitted among multiple species in heterogeneous networks, where nodes are locations, and links reflect outgoing or incoming movement flows. The metapopulation model for vertically and horizontally transmitted diseases is gradually formulated from two-species, two-node network models. We derived an explicit expression of R0, which is the spectral radius of a matrix reduced in size with respect to the original next generation matrix. The reproduction number is shown to be a function of vertical and horizontal transmission parameters, and the lower bound is the reproduction number for horizontal transmission. As an application, the reproduction number and its bounds for the Rift Valley fever zoonosis, in which livestock, mosquitoes, and humans are the involved species, are derived. By computing the reproduction number for different scenarios through numerical simulations, we found the reproduction number is affected by livestock movement rates only when parameters are heterogeneous across nodes. To summarize, our study contributes the reproduction number for vertically and horizontally transmitted diseases in heterogeneous networks. This explicit expression is easily adaptable to specific infectious diseases, affording insights into disease evolution. Copyright © 2013 Elsevier Inc. All rights reserved.
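The computation at the heart of this abstract, R0 as the spectral radius of a next-generation matrix, can be sketched with power iteration. The 2x2 matrices below (two interacting groups, with and without vertical-transmission terms on the diagonal) are invented examples, not parameters from the paper:

```python
# R0 as the spectral radius of a next-generation matrix, by power iteration.

def spectral_radius(K, iters=500, shift=0.5):
    """Dominant eigenvalue of a nonnegative matrix. A diagonal shift avoids
    non-convergence for periodic (bipartite-like) transmission structures;
    the shift is removed before returning."""
    n = len(K)
    M = [[K[i][j] + (shift if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    v = [1.0] * n
    lam = shift
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam - shift

K_horizontal = [[0.0, 1.2], [0.8, 0.0]]      # horizontal routes only
K_with_vertical = [[0.3, 1.2], [0.8, 0.2]]   # vertical terms on diagonal
R0_h = spectral_radius(K_horizontal)
R0_hv = spectral_radius(K_with_vertical)
assert abs(R0_h - 0.96 ** 0.5) < 1e-6   # analytically sqrt(1.2 * 0.8)
assert R0_hv > R0_h                      # vertical transmission raises R0
```

The second assertion mirrors the abstract's result that the horizontal-only reproduction number is a lower bound on the full R0.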
The future of PanDA in ATLAS distributed computing
NASA Astrophysics Data System (ADS)
De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.
2015-12-01
Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.
Apply network coding for H.264/SVC multicasting
NASA Astrophysics Data System (ADS)
Wang, Hui; Kuo, C.-C. Jay
2008-08-01
In a packet erasure network environment, video streaming benefits from error control in two ways to achieve graceful degradation. The first approach is application-level (or link-level) forward error correction (FEC) to provide erasure protection. The second error control approach is error concealment at the decoder end to compensate for lost packets. A large amount of research work has been done in the above two areas. More recently, network coding (NC) techniques have been proposed for efficient data multicast over networks. It was shown in our previous work that multicast video streaming benefits from NC through throughput improvement. An algebraic model is given to analyze the performance in this work. By exploiting the linear combination of video packets along nodes in a network and the SVC video format, the system achieves path diversity automatically and enables efficient video delivery to heterogeneous receivers over packet erasure channels. The application of network coding can protect video packets against the erasure network environment. However, the rank deficiency problem of random linear network coding makes error concealment inefficient. It is shown by computer simulation that the proposed NC video multicast scheme enables heterogeneous receivers to receive video according to their capacity constraints. However, special design is needed to improve video transmission performance when applying network coding.
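The rank deficiency problem mentioned above can be made concrete with random linear network coding over GF(2): a receiver can decode a generation of k source packets only once its collected coefficient vectors reach rank k, and randomly chosen combinations are occasionally linearly dependent. This is a generic sketch of the mechanism, not the paper's H.264/SVC scheme:

```python
# Random linear network coding over GF(2): collect random XOR combinations
# of k source packets until the coefficient matrix has full rank.
import random

def rank_gf2(rows):
    """Rank over GF(2) by Gaussian elimination; rows are bit lists."""
    rows = [r[:] for r in rows]
    if not rows:
        return 0
    rank, ncols = 0, len(rows[0])
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

random.seed(7)
k = 4            # source packets in one coding generation
received = []    # coefficient vectors of coded packets seen so far
while rank_gf2(received) < k:
    # Each coded packet carries a random XOR combination of the k sources.
    received.append([random.randint(0, 1) for _ in range(k)])
assert rank_gf2(received) == k   # full rank: the receiver can now decode
```

Until full rank is reached, none of the original packets can be recovered individually, which is why rank deficiency interacts badly with per-packet error concealment at the decoder.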
NASA Astrophysics Data System (ADS)
Zhang, Xiao-Jie; Shang, Cheng; Liu, Zhi-Pan
2017-10-01
Heterogeneous catalytic reactions on surfaces and interfaces are renowned for ample intermediate adsorbates and complex reaction networks. The common practice to reveal the reaction mechanism is via theoretical computation, which locates all likely transition states based on a pre-guessed reaction mechanism. Here we develop a new theoretical method, namely the stochastic surface walking (SSW)-Cat method, to resolve the lowest energy reaction pathway of heterogeneous catalytic reactions, which combines our recently developed SSW global structure optimization and SSW reaction sampling. The SSW-Cat method is automated and massively parallel, taking a rough reaction pattern as input to guide the reaction search. We present the detailed algorithm, discuss the key features, and demonstrate the efficiency in a model catalytic reaction, the water-gas shift reaction on Cu(111) (CO + H2O → CO2 + H2). The SSW-Cat simulation shows that water dissociation is the rate-determining step and formic acid (HCOOH) is the kinetically favorable product, instead of the observed final products, CO2 and H2. It implies that CO2 and H2 are secondary products from further decomposition of HCOOH at high temperatures. Being a general purpose tool for reaction prediction, the SSW-Cat method may be utilized for rational catalyst design via large-scale computations.
Firing rate dynamics in recurrent spiking neural networks with intrinsic and network heterogeneity.
Ly, Cheng
2015-12-01
Heterogeneity of neural attributes has recently gained a lot of attention and is increasingly recognized as a crucial feature in neural processing. Despite its importance, this physiological feature has traditionally been neglected in theoretical studies of cortical neural networks. Thus, much remains unknown about the consequences of cellular and circuit heterogeneity in spiking neural networks. In particular, combining network or synaptic heterogeneity and intrinsic heterogeneity has yet to be considered systematically despite the fact that both are known to exist and likely have significant roles in neural network dynamics. In a canonical recurrent spiking neural network model, we study how these two forms of heterogeneity lead to different distributions of excitatory firing rates. To analytically characterize how these types of heterogeneities affect the network, we employ a dimension reduction method that relies on a combination of Monte Carlo simulations and probability density function equations. We find that the relationship between intrinsic and network heterogeneity has a strong effect on the overall level of heterogeneity of the firing rates. Specifically, this relationship can lead to amplification or attenuation of firing rate heterogeneity, and these effects depend on whether the recurrent network is firing asynchronously or rhythmically. These observations are captured with the aforementioned reduction method, and furthermore simpler analytic descriptions based on this dimension reduction method are developed. The final analytic descriptions provide compact and descriptive formulas for how the relationship between intrinsic and network heterogeneity determines the firing rate heterogeneity dynamics in various settings.
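The amplification-versus-attenuation effect described above can be illustrated with a deliberately simple toy, not the paper's spiking model: a threshold-linear rate description in which each cell's intrinsic threshold (intrinsic heterogeneity) is paired with its synaptic in-degree (network heterogeneity) with either sign of correlation:

```python
# Toy illustration: the correlation between intrinsic excitability and
# in-degree amplifies or attenuates firing-rate heterogeneity.

def rates(thresholds, in_degrees, drive=1.0, gain=1.0, syn=0.05):
    """Threshold-linear rates: r_i = gain * [drive + syn*k_i - th_i]+."""
    return [max(0.0, gain * (drive + syn * k - th))
            for th, k in zip(thresholds, in_degrees)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

thresholds = [0.2, 0.4, 0.6, 0.8]   # intrinsic heterogeneity
degrees    = [4, 8, 12, 16]         # network heterogeneity

# Excitable (low-threshold) cells receive the most input: amplification.
amplified = rates(thresholds, degrees[::-1])
# High thresholds paired with high in-degree: attenuation (cancellation).
attenuated = rates(thresholds, degrees)

assert variance(amplified) > variance(attenuated)
```

In the paper this interplay is analyzed in a recurrent spiking network with Monte Carlo and probability-density methods; the toy only shows why the sign of the intrinsic-network correlation matters at all.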
Overview of the LINCS architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fletcher, J.G.; Watson, R.W.
1982-01-13
Computing at the Lawrence Livermore National Laboratory (LLNL) has evolved over the past 15 years into a computer-network-based resource-sharing environment. The increasing use of low-cost, high-performance micro-, mini-, and midi-computers and commercially available local networking systems will accelerate this trend. Further, even the large-scale computer systems, on which much of LLNL's scientific computing depends, are evolving into multiprocessor systems. It is our belief that the most cost-effective use of this environment will depend on the development of application systems structured into cooperating concurrent program modules (processes) distributed appropriately over different nodes of the environment. A node is defined as one or more processors with a local (shared) high-speed memory. Given the latter view, the environment can be characterized as consisting of: multiple nodes communicating over noisy channels with arbitrary delays and throughput, heterogeneous base resources and information encodings, no single administration controlling all resources, distributed system state, and no uniform time base. The system design problem is how to turn the heterogeneous base hardware/firmware/software resources of this environment into a coherent set of resources that facilitates development of cost-effective, reliable, and human-engineered applications. We believe the answer lies in developing a layered, communication-oriented distributed system architecture; layered and modular to support ease of understanding, reconfiguration, extensibility, and hiding of implementation or nonessential local details; communication-oriented because communication is a central feature of the environment. The Livermore Interactive Network Communication System (LINCS) is a hierarchical architecture designed to meet the above needs. While it has characteristics in common with other architectures, it differs in several respects.
Hébert-Dufresne, Laurent; Grochow, Joshua A; Allard, Antoine
2016-08-18
We introduce a network statistic that measures structural properties at the micro-, meso-, and macroscopic scales, while still being easy to compute and interpretable at a glance. Our statistic, the onion spectrum, is based on the onion decomposition, which refines the k-core decomposition, a standard network fingerprinting method. The onion spectrum is exactly as easy to compute as the k-cores: It is based on the stages at which each vertex gets removed from a graph in the standard algorithm for computing the k-cores. Yet, the onion spectrum reveals much more information about a network, and at multiple scales; for example, it can be used to quantify node heterogeneity, degree correlations, centrality, and tree- or lattice-likeness. Furthermore, unlike the k-core decomposition, the combined degree-onion spectrum immediately gives a clear local picture of the network around each node which allows the detection of interesting subgraphs whose topological structure differs from the global network organization. This local description can also be leveraged to easily generate samples from the ensemble of networks with a given joint degree-onion distribution. We demonstrate the utility of the onion spectrum for understanding both static and dynamic properties on several standard graph models and on many real-world networks.
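The peeling procedure summarized above can be sketched in a few lines. This is a minimal pure-Python reading of the onion decomposition as described in the abstract (repeatedly remove every vertex whose current degree is at most the current core value, one removal sweep per layer), not the authors' implementation; the adjacency-dict representation is an assumed convention.

```python
def onion_decomposition(adj):
    """Return (core, layer): the k-core number and onion layer of each vertex.

    adj maps each vertex to the set of its neighbours (undirected graph).
    Vertices removed in the same peeling sweep share an onion layer; the
    core value k in force at removal time is the vertex's core number.
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    degree = {v: len(adj[v]) for v in adj}
    core, layer = {}, {}
    current_layer = 0
    while degree:
        k = min(degree.values())                      # current core value
        sweep = [v for v, d in degree.items() if d <= k]
        while sweep:                                  # one sweep = one onion layer
            current_layer += 1
            for v in sweep:
                core[v], layer[v] = k, current_layer
                for u in adj[v]:
                    if u in adj:                      # neighbour still present
                        adj[u].discard(v)
                        degree[u] -= 1
                del adj[v], degree[v]
            sweep = [v for v, d in degree.items() if d <= k]
    return core, layer
```

For a triangle with one pendant vertex, the pendant peels off in layer 1 with core number 1, while the triangle survives to form layer 2 with core number 2, illustrating how the spectrum separates tree-like fringes from denser cores.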
Contagion on complex networks with persuasion
NASA Astrophysics Data System (ADS)
Huang, Wei-Min; Zhang, Li-Jie; Xu, Xin-Jian; Fu, Xinchu
2016-03-01
The threshold model has been widely adopted as a classic model for studying contagion processes on social networks. We consider asymmetric individual interactions in social networks and introduce a persuasion mechanism into the threshold model. Specifically, we study a combination of adoption and persuasion in cascading processes on complex networks. It is found that with the introduction of the persuasion mechanism, the system may become more vulnerable to global cascades, and the effects of persuasion tend to be more significant in heterogeneous networks than in homogeneous networks: a comparison between heterogeneous and homogeneous networks shows that under weak persuasion, heterogeneous networks tend to be more robust against random shocks than homogeneous networks, whereas under strong persuasion, homogeneous networks are more stable. Finally, we study the effects of adoption and persuasion threshold heterogeneity on systemic stability. Though both heterogeneities give rise to global cascades, adoption heterogeneity has an overwhelmingly stronger impact than persuasion heterogeneity when the network connectivity is sufficiently dense.
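One simple way such a persuasion mechanism could be grafted onto the classic threshold model is to let each active neighbour lower a node's effective adoption threshold. The sketch below is an illustrative stand-in for the paper's asymmetric persuasion rule, not its exact formulation; `phi` is the baseline adoption threshold and `psi` the assumed persuasion strength per active neighbour.

```python
def cascade(adj, phi, psi, seeds):
    """Threshold cascade with a toy persuasion term.

    adj   : dict node -> set of neighbours
    phi   : baseline adoption threshold (fraction of active neighbours)
    psi   : assumed persuasion strength; each active neighbour lowers a
            node's effective threshold by psi (an illustrative rule, not
            the paper's exact asymmetric mechanism)
    seeds : initially active nodes
    Returns the set of active nodes once the cascade has settled.
    """
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v in active or not adj[v]:
                continue
            a = sum(1 for u in adj[v] if u in active)
            if a == 0:
                continue
            effective = max(0.0, phi - psi * a)   # persuasion erodes the threshold
            if a / len(adj[v]) >= effective:
                active.add(v)
                changed = True
    return active
```

On a five-node path with `phi = 0.6`, a single seed fails to trigger a cascade without persuasion, but activates the whole network once `psi = 0.3`, mirroring the abstract's point that persuasion makes the system more vulnerable to global cascades.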
Modeling and analyzing malware propagation in social networks with heterogeneous infection rates
NASA Astrophysics Data System (ADS)
Jia, Peng; Liu, Jiayong; Fang, Yong; Liu, Liang; Liu, Luping
2018-10-01
With the rapid development of social networks, attackers have begun trying to spread malware more widely by exploiting various kinds of social networks. Thus, studying malware epidemic dynamics in these networks has become a popular subject in the literature. Most previous works focus on the effects of factors such as network topology and user behavior on malware propagation. Some researchers have tried to analyze the heterogeneity of infection rates, but the common shortcoming of their work is that the factors they consider as affecting this heterogeneity are not comprehensive enough. In this paper, focusing on the effects of heterogeneous infection rates, we propose a novel model called HSID (heterogeneous-susceptible-infectious-dormant model) to characterize virus propagation in social networks, in which a connection factor is introduced to evaluate the heterogeneous relationships between nodes, and a resistance factor is introduced to represent a node's mutable resistance. We analyze how the key parameters in the two factors affect the heterogeneity and then perform simulations to explore the effects in three real-world social networks. The results indicate that heterogeneous relationships can lead to wider diffusion in directed networks, and heterogeneous security awareness can lead to wider diffusion in both directed and undirected networks; heterogeneous relationships can restrain the outbreak of malware, but heterogeneous initial security awareness increases its probability; furthermore, resistance that increases with the number of infections leads to the disappearance of malware in social networks.
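The effect of heterogeneous, per-edge infection rates can be illustrated with a small stochastic simulation. This is a generic SI-style sketch, not the HSID model itself: the way the connection and resistance factors combine into an infection probability below is an illustrative assumption, not the paper's formula.

```python
import random

def simulate_si(adj, conn, resist, steps, seeds, rng_seed=0):
    """Discrete-time SI spread with heterogeneous per-contact rates.

    adj    : dict node -> set of out-neighbours (directed graph)
    conn   : dict (u, v) -> connection factor in [0, 1]
    resist : dict node -> resistance factor in [0, 1]
    The per-contact infection probability is taken to be
    conn[(u, v)] * (1 - resist[v]), an illustrative composition of the
    two heterogeneity factors. Returns the infected set after `steps`.
    """
    rng = random.Random(rng_seed)
    infected = set(seeds)
    for _ in range(steps):
        new = set()
        for u in infected:
            for v in adj[u]:
                if v in infected or v in new:
                    continue
                p = conn.get((u, v), 1.0) * (1.0 - resist.get(v, 0.0))
                if rng.random() < p:
                    new.add(v)
        infected |= new
    return infected
```

Setting a node's resistance to 1.0 blocks every contact into it, so spread halts at that node regardless of connection strength, a toy version of the resistance factor suppressing diffusion.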
Stability and stabilisation of a class of networked dynamic systems
NASA Astrophysics Data System (ADS)
Liu, H. B.; Wang, D. Q.
2018-04-01
We investigate the stability and stabilisation of a linear time-invariant networked heterogeneous system with arbitrarily connected subsystems. A new linear-matrix-inequality-based necessary and sufficient condition for stability is derived, based on which the stabilisation result is provided. The obtained conditions efficiently utilise the block-diagonal structure of the system parameter matrices and the sparseness of the subsystem connection matrix. Moreover, a sufficient condition dependent only on each individual subsystem is also presented for the stabilisation of large-scale networked systems. Numerical simulations show that these conditions are computationally effective in the analysis and synthesis of large-scale networked systems.
Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi
2011-11-01
Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks.
Thin client performance for remote 3-D image display.
Lai, Albert; Nieh, Jason; Laine, Andrew; Starren, Justin
2003-01-01
Several trends in biomedical computing are converging in a way that will require new approaches to telehealth image display. Image viewing is becoming an "anytime, anywhere" activity. In addition, organizations are beginning to recognize that healthcare providers are highly mobile and that optimal care requires providing information wherever the provider and patient are. Thin-client computing is one way to support image viewing in this complex environment. However, little is known about the behavior of thin-client systems in supporting image transfer in modern heterogeneous networks. Our results show that thin clients can deliver acceptable performance over conditions commonly seen in wireless networks if newer protocols optimized for these conditions are used.
Wu, Jibing; Meng, Qinggang; Deng, Su; Huang, Hongbin; Wu, Yahui; Badii, Atta
2017-01-01
Heterogeneous information networks (e.g. bibliographic networks and social media networks) that consist of multiple interconnected objects are ubiquitous. Clustering analysis is an effective method for understanding the semantic information and interpretable structure of heterogeneous information networks, and it has attracted the attention of many researchers in recent years. However, most studies assume that heterogeneous information networks follow some simple schema, such as a bi-typed network or star network schema, and they can only cluster one type of object in the network at a time. In this paper, a novel clustering framework is proposed based on sparse tensor factorization for heterogeneous information networks, which can cluster multiple types of objects simultaneously in a single pass without any network schema information. The types of objects and the relations between them in the heterogeneous information network are modeled as a sparse tensor. The clustering issue is modeled as an optimization problem, which is similar to the well-known Tucker decomposition. Then, an Alternating Least Squares (ALS) algorithm and a feasible initialization method are proposed to solve the optimization problem. Based on the tensor factorization, we simultaneously partition the different types of objects into different clusters. The experimental results on both synthetic and real-world datasets demonstrate that our proposed clustering framework, STFClus, can model heterogeneous information networks efficiently and can outperform state-of-the-art clustering algorithms as a generally applicable single-pass clustering method for heterogeneous networks that is network-schema agnostic. PMID:28245222
Applying a cloud computing approach to storage architectures for spacecraft
NASA Astrophysics Data System (ADS)
Baldor, Sue A.; Quiroz, Carlos; Wood, Paul
As sensor technologies, processor speeds, and memory densities increase, spacecraft command, control, processing, and data storage systems have grown in complexity to take advantage of these improvements and expand the possible missions of spacecraft. Spacecraft systems engineers are increasingly looking for novel ways to address this growth in complexity and mitigate associated risks. Looking to conventional computing, many solutions have been executed to solve both the problem of complexity and heterogeneity in systems. In particular, the cloud-based paradigm provides a solution for distributing applications and storage capabilities across multiple platforms. In this paper, we propose utilizing a cloud-like architecture to provide a scalable mechanism for providing mass storage in spacecraft networks that can be reused on multiple spacecraft systems. By presenting a consistent interface to applications and devices that request data to be stored, complex systems designed by multiple organizations may be more readily integrated. Behind the abstraction, the cloud storage capability would manage wear-leveling, power consumption, and other attributes related to the physical memory devices, critical components in any mass storage solution for spacecraft. Our approach employs SpaceWire networks and SpaceWire-capable devices, although the concept could easily be extended to non-heterogeneous networks consisting of multiple spacecraft and potentially the ground segment.
Mixed-mode oscillations and population bursting in the pre-Bötzinger complex
Bacak, Bartholomew J; Kim, Taegyo; Smith, Jeffrey C; Rubin, Jonathan E; Rybak, Ilya A
2016-01-01
This study focuses on computational and theoretical investigations of neuronal activity arising in the pre-Bötzinger complex (pre-BötC), a medullary region generating the inspiratory phase of breathing in mammals. A progressive increase of neuronal excitability in medullary slices containing the pre-BötC produces mixed-mode oscillations (MMOs) characterized by large amplitude population bursts alternating with a series of small amplitude bursts. Using two different computational models, we demonstrate that MMOs emerge within a heterogeneous excitatory neural network because of progressive neuronal recruitment and synchronization. The MMO pattern depends on the distributed neuronal excitability, the density and weights of network interconnections, and the cellular properties underlying endogenous bursting. Critically, the latter should provide a reduction of spiking frequency within neuronal bursts with increasing burst frequency and a dependence of the after-burst recovery period on burst amplitude. Our study highlights a novel mechanism by which heterogeneity naturally leads to complex dynamics in rhythmic neuronal populations. DOI: http://dx.doi.org/10.7554/eLife.13403.001 PMID:26974345
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radtke, M.A.
This paper chronicles the activity at Wisconsin Public Service Corporation (WPSC) that resulted in the complete migration of a traditional, late-1970s-vintage Energy Management System (EMS). The new environment includes networked microcomputers, minicomputers, and the corporate mainframe, and provides on-line access to employees outside the energy control center and to some WPSC customers. In the late 1980s, WPSC was forecasting an EMS computer upgrade or replacement to address both capacity and technology needs. Reasoning that access to diverse computing resources would best position the company to accommodate the uncertain needs of the energy industry in the 1990s, WPSC chose to investigate an in-place migration to a network of computers able to support heterogeneous hardware and operating systems. The system was developed in a modular fashion, with individual modules being deployed as soon as they were completed. The functional and technical specification was continuously enhanced as operating experience was gained from each operational module. With the migration off the original EMS computers complete, the networked system, called DEMAXX (Distributed Energy Management Architecture with eXtensive eXpandability), has exceeded expectations in the areas of cost, performance, flexibility, and reliability.
Liu, Zhiming; Luo, Jiawei
2017-08-01
Associating protein complexes with human inherited diseases is critical for a better understanding of biological processes and the functional mechanisms of disease. Many protein complexes have been identified and functionally annotated by computational and purification methods, but the particular roles they play in causing disease have not yet been well determined. In this study, we present a novel method to identify associations between protein complexes and diseases. First, we construct a disease-protein heterogeneous network based on data integration and Laplacian normalization. Second, we apply a random walk with restart on heterogeneous network (RWRH) algorithm on this network to quantify the strength of the association between proteins and the query disease. Third, we sum the scores of member proteins to obtain a summary score for each candidate protein complex, and then rank all candidate protein complexes according to their scores. In a series of leave-one-out cross-validation experiments, we found that our method achieves high performance and demonstrates robustness with respect to the parameters and the network structure. We tested our approach with breast cancer and selected the top 20 highly ranked protein complexes; 17 of them are supported by evidence connecting them to breast cancer. Our proposed method is effective in identifying disease-related protein complexes based on data integration and Laplacian normalization.
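The random-walk step behind RWRH can be sketched as a simple power iteration. The generic walker below works on a single weighted network and omits the inter-network jump probability that RWRH uses to move between the disease and protein subnetworks; the graph, weights, and restart value in the test are illustrative, not the paper's data.

```python
def rwr(adj, restart, p0, tol=1e-10, max_iter=10000):
    """Random walk with restart by power iteration.

    adj     : dict node -> dict neighbour -> edge weight
    restart : restart probability (gamma), chance of jumping back to seeds
    p0      : dict node -> seed (restart) distribution, summing to 1
    Returns the stationary visiting probability of each node.
    """
    nodes = list(adj)
    out = {u: sum(adj[u].values()) for u in nodes}   # out-strength for normalisation
    p = dict(p0)
    for _ in range(max_iter):
        q = {v: restart * p0.get(v, 0.0) for v in nodes}
        for u in nodes:
            if out[u] == 0:
                continue
            share = (1.0 - restart) * p.get(u, 0.0) / out[u]
            for v, w in adj[u].items():
                q[v] += share * w                    # walker moves along edges
        if max(abs(q[v] - p.get(v, 0.0)) for v in nodes) < tol:
            return q
        p = q
    return p
```

A summary score for each candidate complex would then be the sum of the stationary probabilities of its member proteins, as in the ranking step described above.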
Colizza, Vittoria; Barrat, Alain; Barthélemy, Marc; Vespignani, Alessandro
2006-02-14
The systematic study of large-scale networks has unveiled the ubiquitous presence of connectivity patterns characterized by large-scale heterogeneities and unbounded statistical fluctuations. These features dramatically affect the behavior of diffusion processes occurring on networks, determining the ensuing statistical properties of their evolution pattern and dynamics. In this article, we present a stochastic computational framework for the forecast of global epidemics that considers the complete worldwide air travel infrastructure complemented with census population data. We address two basic issues in global epidemic modeling: (i) we study the role of the large-scale properties of the airline transportation network in determining the global diffusion pattern of emerging diseases; and (ii) we evaluate the reliability of forecasts and outbreak scenarios with respect to the intrinsic stochasticity of disease transmission and traffic flows. To address these issues we define a set of quantitative measures able to characterize the level of heterogeneity and predictability of the epidemic pattern. These measures may be used for the analysis of containment policies and epidemic risk assessment.
Methods for biological data integration: perspectives and challenges
Gligorijević, Vladimir; Pržulj, Nataša
2015-01-01
Rapid technological advances have led to the production of different types of biological data and enabled construction of complex networks with various types of interactions between diverse biological entities. Standard network data analysis methods were shown to be limited in dealing with such heterogeneous networked data and consequently, new methods for integrative data analyses have been proposed. The integrative methods can collectively mine multiple types of biological data and produce more holistic, systems-level biological insights. We survey recent methods for collective mining (integration) of various types of networked biological data. We compare different state-of-the-art methods for data integration and highlight their advantages and disadvantages in addressing important biological problems. We identify the important computational challenges of these methods and provide a general guideline for which methods are suited for specific biological problems, or specific data types. Moreover, we propose that recent non-negative matrix factorization-based approaches may become the integration methodology of choice, as they are well suited and accurate in dealing with heterogeneous data and have many opportunities for further development. PMID:26490630
Co-percolation to tune conductive behaviour in dynamical metallic nanowire networks.
Fairfield, J A; Rocha, C G; O'Callaghan, C; Ferreira, M S; Boland, J J
2016-11-03
Nanowire networks act as self-healing smart materials, whose sheet resistance can be tuned via an externally applied voltage stimulus. This memristive response occurs due to modification of junction resistances to form a connectivity path across the lowest-barrier junctions in the network. While most network studies have been performed on expensive noble-metal nanowires like silver, networks of inexpensive nickel nanowires with a nickel oxide coating can also demonstrate resistive switching, a common feature of metal oxides with filamentary conduction. However, networks made solely from nickel nanowires have high operation voltages, which prohibits large-scale material applications. Here we show, using both experiment and simulation, that a heterogeneous network of nickel and silver nanowires allows optimization of the activation voltage, as well as tuning of the conduction behaviour to be either resistive switching, memristive, or a combination of both. Small percentages of silver nanowires, below the percolation threshold, induce these changes in electrical behaviour, even for low area coverage and hence very transparent films. Silver nanowires act as current concentrators, amplifying conductivity locally, as shown in our computational dynamical activation framework for networks of junctions. These results demonstrate that a heterogeneous nanowire network can act as a cost-effective adaptive material with minimal use of noble-metal nanowires, without losing the memristive behaviour that is essential for smart sensing and neuromorphic applications.
A system for distributed intrusion detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snapp, S.R.; Brentano, J.; Dias, G.V.
1991-01-01
The study of providing security in computer networks is a rapidly growing area of interest, because the network is the medium over which most attacks or intrusions on computer systems are launched. One approach to this problem is the intrusion-detection concept, whose basic premise is that abandoning the existing, huge infrastructure of possibly insecure computer and network systems is impossible, and that replacing them with totally secure systems may not be feasible or cost-effective. Previous work on intrusion-detection systems was performed on stand-alone hosts and on a broadcast local area network (LAN) environment. The focus of our present research is to extend our network intrusion-detection concept from the LAN environment to arbitrarily wider areas, with the network topology being arbitrary as well. The generalized distributed environment is heterogeneous, i.e., the network nodes can be hosts or servers from different vendors, or some of them could be LAN managers, like our previous work, a network security monitor (NSM), as well. The proposed architecture for this distributed intrusion-detection system consists of the following components: a host manager in each host; a LAN manager for monitoring each LAN in the system; and a central manager, placed at a single secure location, which receives reports from the various host and LAN managers, processes and correlates them, and detects intrusions. 11 refs., 2 figs.
Epidemic processes in complex networks
NASA Astrophysics Data System (ADS)
Pastor-Satorras, Romualdo; Castellano, Claudio; Van Mieghem, Piet; Vespignani, Alessandro
2015-07-01
In recent years the research community has accumulated overwhelming evidence for the emergence of complex and heterogeneous connectivity patterns in a wide range of biological and sociotechnical systems. The complex properties of real-world networks have a profound impact on the behavior of equilibrium and nonequilibrium phenomena occurring in various systems, and the study of epidemic spreading is central to our understanding of the unfolding of dynamical processes in complex networks. The theoretical analysis of epidemic spreading in heterogeneous networks requires the development of novel analytical frameworks, and it has produced results of conceptual and practical relevance. A coherent and comprehensive review of the vast research activity concerning epidemic processes is presented, detailing the successful theoretical approaches as well as making their limits and assumptions clear. Physicists, mathematicians, epidemiologists, and computer and social scientists share a common interest in studying epidemic spreading and rely on similar models for the description of the diffusion of pathogens, knowledge, and innovation. For this reason, while focusing on the main results and the paradigmatic models in infectious disease modeling, the major results concerning generalized social contagion processes are also presented. Finally, the research activity at the forefront of the study of epidemic spreading in coevolving, coupled, and time-varying networks is reported.
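A classic result reviewed in this line of work is the degree-based mean-field epidemic threshold for SIS dynamics on uncorrelated (annealed) networks, λ_c = ⟨k⟩/⟨k²⟩, which shrinks toward zero as degree fluctuations grow. A quick numeric illustration with made-up degree sequences:

```python
def sis_threshold(degrees):
    """Degree-based mean-field SIS epidemic threshold <k>/<k^2> for an
    uncorrelated (annealed) network with the given degree sequence."""
    n = len(degrees)
    k1 = sum(degrees) / n                 # mean degree <k>
    k2 = sum(d * d for d in degrees) / n  # second moment <k^2>
    return k1 / k2

homogeneous = [4] * 1000                  # every node has degree 4
heterogeneous = [4] * 990 + [400] * 10    # same size, plus a few large hubs
print(sis_threshold(homogeneous))         # 0.25
print(sis_threshold(heterogeneous))       # far smaller: hubs ease spreading
```

The hubs barely change the mean degree but inflate ⟨k²⟩ enormously, which is the mechanism behind the vanishing epidemic threshold in heavy-tailed networks discussed above.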
Dynamic Load-Balancing for Distributed Heterogeneous Computing of Parallel CFD Problems
NASA Technical Reports Server (NTRS)
Ecer, A.; Chien, Y. P.; Boenisch, T.; Akay, H. U.
2000-01-01
The developed methodology is aimed at improving the efficiency of executing block-structured algorithms on parallel, distributed, heterogeneous computers. The basic approach of these algorithms is to divide the flow domain into many sub-domains called blocks and solve the governing equations over these blocks. The dynamic load-balancing problem is defined as the efficient distribution of the blocks among the available processors over a period of several hours of computation. In environments with computers of different architectures, operating systems, CPU speeds, memory sizes, loads, and network speeds, balancing the loads and managing the communication between processors becomes crucial. Load-balancing software tools for mutually dependent parallel processes have been created to efficiently utilize an advanced computing environment and algorithms. These tools are dynamic in nature because of changes in the computing environment during execution. More recently, these tools were extended to a second operating system: NT. In this paper, the problems associated with this application will be discussed. Also, the developed algorithms were combined with the load-sharing capability of LSF to efficiently utilize workstation clusters for parallel computing. Finally, results will be presented on running a NASA-based code, ADPAC, to demonstrate the developed tools for dynamic load balancing.
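The block-distribution step can be illustrated with a greedy longest-processing-time heuristic over processors of unequal speed. The cost model below (block work divided by processor speed) is a deliberate simplification: the tools described above also weigh memory, network speed, and background load, and rebalance dynamically during the run.

```python
import heapq

def assign_blocks(block_work, proc_speed):
    """Greedy LPT assignment of CFD blocks to heterogeneous processors.

    block_work : dict block -> work units (e.g. cells in the block)
    proc_speed : dict processor -> relative speed
    Returns (assignment, makespan): a block -> processor map and the
    estimated finish time of the most loaded processor.
    """
    # min-heap of (current finish time, processor name)
    heap = [(0.0, p) for p in proc_speed]
    heapq.heapify(heap)
    assignment = {}
    # place the largest blocks first (longest-processing-time rule)
    for blk in sorted(block_work, key=block_work.get, reverse=True):
        t, p = heapq.heappop(heap)        # earliest-finishing processor
        assignment[blk] = p
        heapq.heappush(heap, (t + block_work[blk] / proc_speed[p], p))
    makespan = max(t for t, _ in heap)
    return assignment, makespan
```

Balancing estimated elapsed time rather than block counts is what makes the heuristic sensible for heterogeneous machines: a fast processor simply finishes earlier and so absorbs more blocks.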
Development and implementation of a PACS network and resource manager
NASA Astrophysics Data System (ADS)
Stewart, Brent K.; Taira, Ricky K.; Dwyer, Samuel J., III; Huang, H. K.
1992-07-01
Clinical acceptance of PACS is predicated upon maximum uptime. Upon component failure, detection, diagnosis, reconfiguration and repair must occur immediately. Our current PACS network is large, heterogeneous, complex and geographically widespread. The overwhelming number of network devices, computers and software processes involved in a departmental or inter-institutional PACS makes development of tools for network and resource management critical. The authors have developed and implemented a comprehensive solution (PACS Network-Resource Manager) using the OSI Network Management Framework with network element agents that respond to queries and commands from network management stations. Managed resources include: communication protocol layers for Ethernet, FDDI and UltraNet; network devices; computer and operating system resources; and application, database and network services. The Network-Resource Manager is currently being used for warning, fault, security violation and configuration modification event notification. Analysis, automation and control applications have been added so that PACS resources can be dynamically reconfigured and so that users are notified when active involvement is required. Custom data and error logging have been implemented that allow statistics for each PACS subsystem to be charted for performance data. The Network-Resource Manager allows our departmental PACS system to be monitored continuously and thoroughly, with a minimal amount of personal involvement and time.
El-Sayed, Hesham; Sankar, Sharmi; Daraghmi, Yousef-Awwad; Tiwari, Prayag; Rattagan, Ekarat; Mohanty, Manoranjan; Puthal, Deepak; Prasad, Mukesh
2018-05-24
Heterogeneous vehicular networks (HETVNETs) evolve from vehicular ad hoc networks (VANETs), which allow vehicles to always be connected so as to obtain safety services within intelligent transportation systems (ITSs). The services and data provided by HETVNETs should be neither interrupted nor delayed. Therefore, Quality of Service (QoS) improvement of HETVNETs is one of the topics attracting the attention of researchers and the manufacturing community. Several methodologies and frameworks have been devised by researchers to address QoS-prediction service issues. In this paper, to improve QoS, we evaluate various traffic characteristics of HETVNETs and propose a new supervised learning model to capture knowledge on all possible traffic patterns. This model is a refinement of support vector machine (SVM) kernels with a radial basis function (RBF). The proposed model produces better results than SVMs, and outperforms other prediction methods used in a traffic context, as it has lower computational complexity and higher prediction accuracy.
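The flavor of RBF-kernel prediction can be illustrated with a plain kernel-weighted (Nadaraya-Watson) estimator; this is not the authors' refined SVM model, and the (traffic load, delay) samples and `gamma` value are invented for illustration:

```python
import math

def rbf_predict(x, samples, gamma=1.0):
    """Kernel-weighted (Nadaraya-Watson) estimate: each training point
    (xi, yi) votes with weight exp(-gamma * (x - xi)^2)."""
    weighted = [(math.exp(-gamma * (x - xi) ** 2), yi) for xi, yi in samples]
    total = sum(w for w, _ in weighted)
    return sum(w * y for w, y in weighted) / total

# Invented samples: (normalized traffic load, observed delay in ms).
history = [(0.1, 12.0), (0.5, 20.0), (0.9, 45.0)]
estimate = rbf_predict(0.5, history, gamma=50.0)
print(round(estimate, 1))   # → 20.0
```

The RBF kernel lets nearby traffic conditions dominate the prediction, which is the same locality property that makes it a common default kernel for SVMs.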
Semantic integration of data on transcriptional regulation
Baitaluk, Michael; Ponomarenko, Julia
2010-01-01
Motivation: Experimental and predicted data concerning gene transcriptional regulation are distributed among many heterogeneous sources. However, there are no resources to integrate these data automatically or to provide a ‘one-stop shop’ experience for users seeking information essential for deciphering and modeling gene regulatory networks. Results: IntegromeDB, a semantic graph-based ‘deep-web’ data integration system that automatically captures, integrates and manages publicly available data concerning transcriptional regulation, as well as other relevant biological information, is proposed in this article. The problems associated with data integration are addressed by ontology-driven data mapping, multiple data annotation and heterogeneous data querying, also enabling integration of the user's data. IntegromeDB integrates over 100 experimental and computational data sources relating to genomics, transcriptomics, genetics, and functional and interaction data concerning gene transcriptional regulation in eukaryotes and prokaryotes. Availability: IntegromeDB is accessible through the integrated research environment BiologicalNetworks at http://www.BiologicalNetworks.org Contact: baitaluk@sdsc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20427517
Challenge Paper: Validation of Forensic Techniques for Criminal Prosecution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erbacher, Robert F.; Endicott-Popovsky, Barbara E.; Frincke, Deborah A.
2007-04-10
As in many domains, there is increasing agreement in the user and research community that digital forensics analysts would benefit from the extension, development and application of advanced techniques in performing large-scale and heterogeneous data analysis. Modern digital forensics analysis of cyber-crimes and cyber-enabled crimes often requires scrutiny of massive amounts of data. For example, a case involving network compromise across multiple enterprises might require forensic analysis of numerous sets of network logs and computer hard drives, potentially involving hundreds of gigabytes of heterogeneous data, or even terabytes or petabytes. Also, the goal of forensic analysis is not only to determine whether the illicit activity being considered is taking place, but also to identify the source of the activity and the full extent of the compromise or impact on the local network. Even after this analysis, there remains the challenge of using the results in subsequent criminal and civil processes.
A global distributed storage architecture
NASA Technical Reports Server (NTRS)
Lionikis, Nemo M.; Shields, Michael F.
1996-01-01
NSA architects and planners have come to realize that to gain the maximum benefit from, and keep pace with, emerging technologies, we must move to a radically different computing architecture. The compute complex of the future will be a distributed heterogeneous environment, where, to a much greater extent than today, network-based services are invoked to obtain resources. Among the rewards of implementing the services-based view are that it insulates the user from much of the complexity of our multi-platform, networked, computer and storage environment and hides its diverse underlying implementation details. In this paper, we will describe one of the fundamental services being built in our envisioned infrastructure: a global, distributed archive with near-real-time access characteristics. Our approach for adapting mass storage services to this infrastructure will become clear as the service is discussed.
DAI-CLIPS: Distributed, Asynchronous, Interacting CLIPS
NASA Technical Reports Server (NTRS)
Gagne, Denis; Garant, Alain
1994-01-01
DAI-CLIPS is a distributed computational environment within which each CLIPS instance is an active, independent computational entity with the ability to communicate freely with other instances. Furthermore, new instances can be created, and others can be deleted or can modify their expertise, all dynamically, in an asynchronous and independent fashion during execution. The participating instances are distributed over a network of heterogeneous processors, taking full advantage of the available processing power. We present the general framework encompassing DAI-CLIPS and discuss some of its advantages and potential applications.
Spagnolo, Daniel M; Gyanchandani, Rekha; Al-Kofahi, Yousef; Stern, Andrew M; Lezon, Timothy R; Gough, Albert; Meyer, Dan E; Ginty, Fiona; Sarachan, Brion; Fine, Jeffrey; Lee, Adrian V; Taylor, D Lansing; Chennubhotla, S Chakra
2016-01-01
Measures of spatial intratumor heterogeneity are potentially important diagnostic biomarkers for cancer progression, proliferation, and response to therapy. Spatial relationships among cells including cancer and stromal cells in the tumor microenvironment (TME) are key contributors to heterogeneity. We demonstrate how to quantify spatial heterogeneity from immunofluorescence pathology samples, using a set of 3 basic breast cancer biomarkers as a test case. We learn a set of dominant biomarker intensity patterns and map the spatial distribution of the biomarker patterns with a network. We then describe the pairwise association statistics for each pattern within the network using pointwise mutual information (PMI) and visually represent heterogeneity with a two-dimensional map. We found a salient set of 8 biomarker patterns to describe cellular phenotypes from a tissue microarray cohort containing 4 different breast cancer subtypes. After computing PMI for each pair of biomarker patterns in each patient and tumor replicate, we visualize the interactions that contribute to the resulting association statistics. Then, we demonstrate the potential for using PMI as a diagnostic biomarker, by comparing PMI maps and heterogeneity scores from patients across the 4 different cancer subtypes. Estrogen receptor positive invasive lobular carcinoma patient, AL13-6, exhibited the highest heterogeneity score among those tested, while estrogen receptor negative invasive ductal carcinoma patient, AL13-14, exhibited the lowest heterogeneity score. This paper presents an approach for describing intratumor heterogeneity, in a quantitative fashion (via PMI), which departs from the purely qualitative approaches currently used in the clinic. PMI is generalizable to highly multiplexed/hyperplexed immunofluorescence images, as well as spatial data from complementary in situ methods including FISSEQ and CyTOF, sampling many different components within the TME. 
We hypothesize that PMI will uncover key spatial interactions in the TME that contribute to disease proliferation and progression.
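The pairwise association statistic used above, pointwise mutual information, has a one-line definition; the probabilities in this sketch are illustrative, not values from the study:

```python
import math

def pmi(p_xy, p_x, p_y):
    """Pointwise mutual information: pmi(x, y) = log2(p(x, y) / (p(x) p(y))).
    Positive values mean the two patterns co-occur more often than chance."""
    return math.log2(p_xy / (p_x * p_y))

print(pmi(0.25, 0.5, 0.5))        # independent patterns → 0.0
print(pmi(0.40, 0.5, 0.5) > 0)    # positively associated patterns → True
```

In the paper's setting, p(x) and p(y) would be the marginal frequencies of two biomarker intensity patterns across cells, and p(x, y) their frequency of spatial co-occurrence in the network map.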
A distributed data base management facility for the CAD/CAM environment
NASA Technical Reports Server (NTRS)
Balza, R. M.; Beaudet, R. W.; Johnson, H. R.
1984-01-01
Current/PAD research in the area of distributed data base management considers facilities for supporting CAD/CAM data management in a heterogeneous network of computers encompassing multiple data base managers supporting a variety of data models. These facilities include coordinated execution of multiple DBMSs to provide for administration of and access to data distributed across them.
Gogoshin, Grigoriy; Boerwinkle, Eric; Rodin, Andrei S
2017-01-01
Bayesian network (BN) reconstruction is a prototypical systems biology data analysis approach that has been successfully used to reverse engineer and model networks reflecting different layers of biological organization (ranging from genetic to epigenetic to cellular pathway to metabolomic). It is especially relevant in the context of modern (ongoing and prospective) studies that generate heterogeneous high-throughput omics datasets. However, there are both theoretical and practical obstacles to the seamless application of BN modeling to such big data, including computational inefficiency of optimal BN structure search algorithms, ambiguity in data discretization, mixing data types, imputation and validation, and, in general, limited scalability in both reconstruction and visualization of BNs. To overcome these and other obstacles, we present BNOmics, an improved algorithm and software toolkit for inferring and analyzing BNs from omics datasets. BNOmics aims at comprehensive systems biology-type data exploration, including both generating new biological hypotheses and testing and validating existing ones. Novel aspects of the algorithm center around increasing scalability and applicability to varying data types (with different explicit and implicit distributional assumptions) within the same analysis framework. An output and visualization interface to widely available graph-rendering software is also included. Three diverse applications are detailed. BNOmics was originally developed in the context of genetic epidemiology data and is being continuously optimized to keep pace with the ever-increasing inflow of available large-scale omics datasets. As such, software scalability and usability on less than exotic computer hardware are a priority, as is the applicability of the algorithm and software to heterogeneous datasets containing many data types: single-nucleotide polymorphisms and other genetic/epigenetic/transcriptome variables, metabolite levels, epidemiological variables, endpoints, phenotypes, etc. PMID:27681505
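Score-based BN structure search of the kind BNOmics accelerates typically rests on a decomposable local score; the abstract does not give BNOmics' scoring details, so the following is a generic BIC local score for discrete data, with a toy dataset:

```python
import math
from collections import Counter

def bic_local(data, child, parents):
    """Generic BIC local score for one node given a candidate parent set;
    data is a list of dicts mapping variable names to discrete values."""
    n = len(data)
    joint = Counter(tuple(row[p] for p in parents) + (row[child],) for row in data)
    marg = Counter(tuple(row[p] for p in parents) for row in data)
    # Maximized log-likelihood of the child's conditional distribution.
    loglik = sum(c * math.log(c / marg[key[:-1]]) for key, c in joint.items())
    child_states = len({row[child] for row in data})
    n_params = (child_states - 1) * len(marg)
    return loglik - 0.5 * math.log(n) * n_params

# Toy data: expression "e" perfectly tracks genotype "g".
rows = [{"g": 0, "e": 0}, {"g": 0, "e": 0}, {"g": 1, "e": 1}, {"g": 1, "e": 1}]
print(bic_local(rows, "e", ["g"]) > bic_local(rows, "e", []))   # → True
```

Because the score decomposes per node, a hill-climbing search can evaluate single-edge changes locally, which is what makes structure search tractable at all; the computational inefficiency the abstract mentions comes from the combinatorial number of such moves.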
Chaisangmongkon, Warasinee; Swaminathan, Sruthi K.; Freedman, David J.; Wang, Xiao-Jing
2017-01-01
Decision making involves dynamic interplay between internal judgements and external perception, which has been investigated in delayed match-to-category (DMC) experiments. Our analysis of neural recordings shows that, during DMC tasks, LIP and PFC neurons demonstrate mixed, time-varying, and heterogeneous selectivity, but previous theoretical work has not established the link between these neural characteristics and population-level computations. We trained a recurrent network model to perform DMC tasks and found that the model can remarkably reproduce key features of neuronal selectivity at the single-neuron and population levels. Analysis of the trained networks elucidates that robust transient trajectories of the neural population are the key driver of sequential categorical decisions. The directions of trajectories are governed by the network's self-organized connectivity, defining a 'neural landscape' consisting of a task-tailored arrangement of slow states and dynamical tunnels. With this model, we can identify functionally relevant circuit motifs and generalize the framework to solve other categorization tasks. PMID:28334612
A cloud-based data network approach for translational cancer research.
Xing, Wei; Tsoumakos, Dimitrios; Ghanem, Moustafa
2015-01-01
We develop a new model and associated technology for constructing and managing self-organizing data to support translational cancer research studies. We employ a semantic content network approach to address the challenges of managing cancer research data. Such data is heterogeneous, large, decentralized, growing and continually being updated. Moreover, the data originates from different information sources that may be partially overlapping, creating redundancies as well as contradictions and inconsistencies. Building on the advantages of elasticity of cloud computing, we deploy the cancer data networks on top of the CELAR Cloud platform to enable more effective processing and analysis of Big cancer data.
Identifying novel genes and chemicals related to nasopharyngeal cancer in a heterogeneous network.
Li, Zhandong; An, Lifeng; Li, Hao; Wang, ShaoPeng; Zhou, You; Yuan, Fei; Li, Lin
2016-05-05
Nasopharyngeal cancer or nasopharyngeal carcinoma (NPC) is the most common cancer originating in the nasopharynx. The factors that induce nasopharyngeal cancer are still not clear. Additional information about the chemicals or genes related to nasopharyngeal cancer will promote a better understanding of the pathogenesis of this cancer and the factors that induce it. Thus, a computational method, NPC-RGCP, was proposed in this study to identify possibly relevant chemicals and genes based on the presently known chemicals and genes related to nasopharyngeal cancer. To extensively utilize the functional associations between proteins and chemicals, a heterogeneous network was constructed based on interactions of proteins and chemicals. NPC-RGCP comprises two stages: a searching stage and a screening stage. The former finds new candidate genes and chemicals in the heterogeneous network, while the latter screens out false discoveries and selects the core genes and chemicals. As a result, five putative genes, CXCR3, IRF1, CDK1, GSTP1, and CDH2, and seven putative chemicals, iron, propionic acid, dimethyl sulfoxide, isopropanol, erythrose 4-phosphate, β-D-Fructose 6-phosphate, and flavin adenine dinucleotide, were identified by NPC-RGCP. Extensive analyses provided confirmation that the putative genes and chemicals have significant associations with nasopharyngeal cancer. PMID:27149165
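The search-then-screen idea can be sketched on a toy heterogeneous graph; the expansion rule (direct neighbors of the seed set) and screening rule (a minimum number of links to seeds) are simplifications of NPC-RGCP, and the gene/chemical names are hypothetical:

```python
def expand_and_screen(graph, seeds, min_links=2):
    """Two-stage sketch: (1) collect direct neighbors of the seed set in a
    heterogeneous gene/chemical graph, (2) keep only candidates linked to
    at least `min_links` seeds, discarding likely false discoveries."""
    candidates = {}
    for s in seeds:
        for nb in graph.get(s, []):
            if nb not in seeds:
                candidates[nb] = candidates.get(nb, 0) + 1
    return {c for c, k in candidates.items() if k >= min_links}

# Hypothetical mixed gene/chemical interaction graph (adjacency lists).
g = {"geneA": ["chemX", "geneC"], "geneB": ["chemX"], "geneC": ["geneA"]}
print(expand_and_screen(g, {"geneA", "geneB"}))   # → {'chemX'}
```

Here `chemX` survives screening because it touches both seeds, while `geneC`, linked to only one, is dropped; NPC-RGCP's actual screening criteria are richer than this single threshold.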
Lücker, Adrien; Secomb, Timothy W.; Weber, Bruno; Jenny, Patrick
2018-01-01
Capillary dysfunction impairs oxygen supply to parenchymal cells and often occurs in Alzheimer's disease, diabetes and aging. Disturbed capillary flow patterns have been shown to limit the efficacy of oxygen extraction and can be quantified using capillary transit time heterogeneity (CTH). However, the transit time of red blood cells (RBCs) through the microvasculature is not a direct measure of their capacity for oxygen delivery. Here we examine the relation between CTH and capillary outflow saturation heterogeneity (COSH), which is the heterogeneity of blood oxygen content at the venous end of capillaries. Models for the evolution of hemoglobin saturation heterogeneity (HSH) in capillary networks were developed and validated using a computational model with moving RBCs. Two representative situations were selected: a Krogh cylinder geometry with heterogeneous hemoglobin saturation (HS) at the inflow, and a parallel array of four capillaries. The heterogeneity of HS after converging capillary bifurcations was found to exponentially decrease with a time scale of 0.15–0.21 s due to diffusive interaction between RBCs. Similarly, the HS difference between parallel capillaries also drops exponentially with a time scale of 0.12–0.19 s. These decay times are substantially smaller than measured RBC transit times and only weakly depend on the distance between microvessels. This work shows that diffusive interaction strongly reduces COSH on a small spatial scale. Therefore, we conclude that CTH influences COSH yet does not determine it. The second part of this study will focus on simulations in microvascular networks from the rodent cerebral cortex. Actual estimates of COSH and CTH will then be given. PMID:29755365
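The reported decay time scales (0.15-0.21 s after converging bifurcations, 0.12-0.19 s between parallel capillaries) are the constants of exponential decays; such a time scale can be recovered from sampled heterogeneity values by a log-linear least-squares fit, as in this sketch with synthetic data:

```python
import math

def decay_time_scale(times, values):
    """Least-squares slope of ln(value) vs. time; for value ~ exp(-t / tau)
    the fitted slope is -1/tau, so the time scale is tau = -1/slope."""
    logs = [math.log(v) for v in values]
    n = len(times)
    t_mean = sum(times) / n
    l_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs))
             / sum((t - t_mean) ** 2 for t in times))
    return -1.0 / slope

# Synthetic samples of exp(-t / 0.18), mimicking a 0.18 s decay.
ts = [0.0, 0.1, 0.2, 0.3]
vs = [math.exp(-t / 0.18) for t in ts]
print(round(decay_time_scale(ts, vs), 2))   # → 0.18
```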
Statistical mechanics of a cat's cradle
NASA Astrophysics Data System (ADS)
Shen, Tongye; Wolynes, Peter G.
2006-11-01
It is believed that, much like a cat's cradle, the cytoskeleton can be thought of as a network of strings under tension. We show that both regular and random bond-disordered networks having bonds that buckle upon compression exhibit a variety of phase transitions as a function of temperature and extension. The results of self-consistent phonon calculations for the regular networks agree very well with computer simulations at finite temperature. The analytic theory also yields a rigidity onset (mechanical percolation) and the fraction of extended bonds for random networks. There is very good agreement with the simulations by Delaney et al (2005 Europhys. Lett. 72 990). The mean field theory reveals a nontranslationally invariant phase with self-generated heterogeneity of tautness, representing 'antiferroelasticity'.
ERIC Educational Resources Information Center
Mercado, Eduardo, III; Church, Barbara A.
2016-01-01
Children with autism spectrum disorder (ASD) sometimes have difficulties learning categories. Past computational work suggests that such deficits may result from atypical representations in cortical maps. Here we use neural networks to show that idiosyncratic transformations of inputs can result in the formation of feature maps that impair…
Spatially correlated heterogeneous aspirations to enhance network reciprocity
NASA Astrophysics Data System (ADS)
Tanimoto, Jun; Nakata, Makoto; Hagishima, Aya; Ikegaya, Naoki
2012-02-01
Perc and Wang demonstrated that aspiring to be the fittest under conditions of pairwise strategy updating enhances network reciprocity in structured populations playing 2×2 Prisoner's Dilemma games (Z. Wang, M. Perc, Aspiring to the fittest and promotion of cooperation in the Prisoner's Dilemma game, Physical Review E 82 (2010) 021115; M. Perc, Z. Wang, Heterogeneous aspiration promotes cooperation in the Prisoner's Dilemma game, PLoS ONE 5 (12) (2010) e15117). Through numerical simulations, this paper shows that network reciprocity is even greater if heterogeneous aspirations are imposed. We also suggest why heterogeneous aspiration fosters network reciprocity: it distributes strategy-updating speed among agents in a manner that fortifies the initially allocated cooperators' clusters against invasion. This finding prompted us to extend the usual heterogeneous aspiration setting to heterogeneous network topologies. We find that a negative correlation between degree and aspiration level does extend cooperation among heterogeneously structured agents.
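Aspiration-driven updating is commonly modeled with a Fermi rule in which the probability of abandoning a strategy grows with the gap between aspiration and payoff; the exact update used in the cited papers may differ in detail, so treat this as a generic sketch with invented payoff values:

```python
import math

def switch_probability(payoff, aspiration, K=0.1):
    """Fermi rule for aspiration-driven updating: the probability of
    abandoning the current strategy is high when payoff < aspiration.
    K is the noise (selection intensity) parameter."""
    return 1.0 / (1.0 + math.exp((payoff - aspiration) / K))

# A dissatisfied agent (payoff 0.2, aspiration 0.8) almost surely switches.
print(switch_probability(0.2, 0.8) > 0.99)   # → True
# A satisfied agent (payoff 0.8, aspiration 0.2) almost surely keeps its strategy.
print(switch_probability(0.8, 0.2) < 0.01)   # → True
```

Heterogeneous aspirations amount to giving each agent its own `aspiration` value, which spreads updating speed across the population, the mechanism the paper credits for protecting cooperator clusters.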
Information Power Grid Posters
NASA Technical Reports Server (NTRS)
Vaziri, Arsi
2003-01-01
This document is a summary of the accomplishments of the Information Power Grid (IPG). Grids are an emerging technology that provides seamless and uniform access to the geographically dispersed computational, data storage, networking, instrument, and software resources needed for solving large-scale scientific and engineering problems. The goal of the NASA IPG is to use NASA's remotely located computing and data system resources to build distributed systems that can address problems that are too large or complex for a single site. The accomplishments outlined in this poster presentation are: access to distributed data, IPG heterogeneous computing, integration of a large-scale computing node into a distributed environment, remote access to high-data-rate instruments, and an exploratory grid environment.
Synchrony-induced modes of oscillation of a neural field model
NASA Astrophysics Data System (ADS)
Esnaola-Acebes, Jose M.; Roxin, Alex; Avitabile, Daniele; Montbrió, Ernest
2017-11-01
We investigate the modes of oscillation of heterogeneous ring networks of quadratic integrate-and-fire (QIF) neurons with nonlocal, space-dependent coupling. Perturbations of the equilibrium state with a particular wave number produce transient standing waves with a specific temporal frequency, analogously to those in a tense string. In the neuronal network, the equilibrium corresponds to a spatially homogeneous, asynchronous state. Perturbations of this state excite the network's oscillatory modes, which reflect the interplay of episodes of synchronous spiking with the excitatory-inhibitory spatial interactions. In the thermodynamic limit, an exact low-dimensional neural field model describing the macroscopic dynamics of the network is derived. This allows us to obtain formulas for the Turing eigenvalues of the spatially homogeneous state and hence to obtain its stability boundary. We find that the frequency of each Turing mode depends on the corresponding Fourier coefficient of the synaptic pattern of connectivity. The decay rate instead is identical for all oscillation modes as a consequence of the heterogeneity-induced desynchronization of the neurons. Finally, we numerically compute the spectrum of spatially inhomogeneous solutions branching from the Turing bifurcation, showing that similar oscillatory modes operate in neural bump states and are maintained away from onset.
Track classification within wireless sensor network
NASA Astrophysics Data System (ADS)
Doumerc, Robin; Pannetier, Benjamin; Moras, Julien; Dezert, Jean; Canevet, Loic
2017-05-01
In this paper, we present our study on track classification, taking into account environmental information and target estimated states. The tracker uses several motion models adapted to different target dynamics (pedestrian, ground vehicle, and SUAV, i.e., small unmanned aerial vehicle) and works in a centralized architecture. The main idea is to explore both the classification given by heterogeneous sensors and the classification obtained with our fusion module. The fusion module, presented in this paper, assigns a class to each track according to track location, velocity, and associated uncertainty. To model the likelihood of each class, a fuzzy approach is used, considering constraints on the target's capability to move in the environment. Then the evidential reasoning approach based on Dempster-Shafer Theory (DST) is used to perform a time integration of this classifier output. The fusion rules are tested and compared on real data obtained with our wireless sensor network. In order to handle realistic ground target tracking scenarios, we use an autonomous smart computer deposited in the surveillance area. After the calibration step of the heterogeneous sensor network, our system is able to handle real data from a wireless ground sensor network. The performance of this system is evaluated in a real exercise for an intelligence operation (a "hunter hunt" scenario).
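The DST time integration mentioned above relies on Dempster's rule of combination; the following is a minimal sketch for two mass functions over a small frame of discernment, with class labels and masses invented for illustration:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets; mass on conflicting (empty-intersection)
    pairs is discarded and the rest is renormalized."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

P, V = frozenset({"pedestrian"}), frozenset({"vehicle"})
m_speed = {P: 0.6, P | V: 0.4}          # evidence from track velocity
m_zone  = {P: 0.5, P | V: 0.5}          # evidence from terrain constraints
fused = dempster_combine(m_speed, m_zone)
print(round(fused[P], 2))   # → 0.8
```

Repeating the combination at each time step accumulates evidence for a class over the track's lifetime, which is the "time integration" role DST plays in the fusion module.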
Leu, Jenq-Shiou; Lin, Wei-Hsiang; Hsieh, Wen-Bin; Lo, Chien-Chih
2014-01-01
As digitization becomes integrated into daily life, media including video and audio are heavily transferred over the Internet. Voice over Internet Protocol (VoIP), the most popular and mature such technology, has attracted considerable research and investment. However, most existing studies have focused on a one-to-one communication model in a homogeneous network, rather than a one-to-many broadcasting model among diverse embedded devices in a heterogeneous network. In this paper, we present the implementation of a VoIP broadcasting service on the open-source Linphone in a heterogeneous network environment including WiFi, 3G, and LAN networks. The proposed system can be integrated with heterogeneous agile devices, such as embedded devices or mobile phones; thus, when users are in an area unreachable by traditional AM/FM signals, they can still receive the broadcast voice through the IP network. Comprehensive evaluations are conducted to verify the effectiveness of the proposed implementation.
Accounting for small scale heterogeneity in ecohydrologic watershed models
NASA Astrophysics Data System (ADS)
Bhaskar, A.; Fleming, B.; Hogan, D. M.
2016-12-01
Spatially distributed ecohydrologic models are inherently constrained by the spatial resolution of their smallest units, below which land and processes are assumed to be homogeneous. At coarse scales, heterogeneity is often accounted for by computing stores and fluxes of interest over a distribution of land cover types (or other sources of heterogeneity) within spatially explicit modeling units. However, this approach ignores spatial organization and the lateral transfer of water and materials downslope. The challenge is to account for both the role of flow network topology and fine-scale heterogeneity. We present a new approach that defines two levels of spatial aggregation and integrates a spatially explicit network approach with a flexible representation of finer-scale aspatial heterogeneity. Critically, this solution does not simply increase the resolution of the smallest spatial unit and, by comparison, results in improved computational efficiency. The approach is demonstrated by adapting the Regional Hydro-Ecologic Simulation System (RHESSys), an ecohydrologic model widely used to simulate climate, land use, and land management impacts. We illustrate the utility of our approach by showing how the model can be used to better characterize forest thinning impacts on ecohydrology. Forest thinning is typically done at the scale of individual trees, yet management responses of interest include impacts on watershed-scale hydrology and on downslope riparian vegetation. Our approach allows us to characterize the variability in tree size/carbon reduction and water transfers between neighboring trees while still capturing hillslope- to watershed-scale effects. Our illustrative example demonstrates that accounting for these fine-scale effects can substantially alter model estimates, in some cases shifting the impacts of thinning on downslope water availability from increases to decreases.
We conclude by describing other use cases that may benefit from this approach including characterizing urban vegetation and storm water management features and their impact on watershed scale hydrology and biogeochemical cycling.
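A minimal sketch of the two-level aggregation idea (illustrative only, not RHESSys code; the patch types, ET rates, and topology below are invented for the example): fluxes are computed per aspatial patch type, aggregated by areal fraction within each spatial unit, and the surplus is routed downslope along the flow network:

```python
# Hedged sketch of the two-level idea: each spatially explicit hillslope
# unit holds an aspatial distribution of patch types; fluxes are computed
# per type, aggregated by areal fraction, and residual water is routed
# downslope. All names and values are assumptions for illustration.
def unit_et(patch_fractions, patch_et):
    """Area-weighted evapotranspiration of one spatial unit [mm]."""
    return sum(frac * patch_et[p] for p, frac in patch_fractions.items())

def route_downslope(units, downslope, precip):
    """Return water leaving each unit; surplus is passed to the
    downslope neighbor (None marks the watershed outlet)."""
    inflow = {u: 0.0 for u in units}
    outflow = {}
    for u in units:                      # units listed top-down
        surplus = precip + inflow[u] - unit_et(units[u]["patches"],
                                               units[u]["et"])
        outflow[u] = max(surplus, 0.0)
        nbr = downslope[u]
        if nbr is not None:
            inflow[nbr] += outflow[u]
    return outflow

units = {
    "ridge": {"patches": {"forest": 0.7, "thinned": 0.3},
              "et": {"forest": 4.0, "thinned": 2.5}},
    "riparian": {"patches": {"forest": 1.0},
                 "et": {"forest": 4.0}},
}
downslope = {"ridge": "riparian", "riparian": None}
flows = route_downslope(units, downslope, precip=5.0)
```

Changing the thinned fraction on the ridge changes the water delivered downslope to the riparian unit, which is the fine-scale-to-watershed coupling the abstract describes.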
Accounting for small scale heterogeneity in ecohydrologic watershed models
NASA Astrophysics Data System (ADS)
Burke, W.; Tague, C.
2017-12-01
Spatially distributed ecohydrologic models are inherently constrained by the spatial resolution of their smallest units, below which land and processes are assumed to be homogeneous. At coarse scales, heterogeneity is often accounted for by computing stores and fluxes of interest over a distribution of land cover types (or other sources of heterogeneity) within spatially explicit modeling units. However, this approach ignores spatial organization and the lateral transfer of water and materials downslope. The challenge is to account for both the role of flow network topology and fine-scale heterogeneity. We present a new approach that defines two levels of spatial aggregation and integrates a spatially explicit network approach with a flexible representation of finer-scale aspatial heterogeneity. Critically, this solution does not simply increase the resolution of the smallest spatial unit and, by comparison, results in improved computational efficiency. The approach is demonstrated by adapting the Regional Hydro-Ecologic Simulation System (RHESSys), an ecohydrologic model widely used to simulate climate, land use, and land management impacts. We illustrate the utility of our approach by showing how the model can be used to better characterize forest thinning impacts on ecohydrology. Forest thinning is typically done at the scale of individual trees, yet management responses of interest include impacts on watershed-scale hydrology and on downslope riparian vegetation. Our approach allows us to characterize the variability in tree size/carbon reduction and water transfers between neighboring trees while still capturing hillslope- to watershed-scale effects. Our illustrative example demonstrates that accounting for these fine-scale effects can substantially alter model estimates, in some cases shifting the impacts of thinning on downslope water availability from increases to decreases.
We conclude by describing other use cases that may benefit from this approach including characterizing urban vegetation and storm water management features and their impact on watershed scale hydrology and biogeochemical cycling.
Russo, Lucia; Russo, Paola; Siettos, Constantinos I.
2016-01-01
Based on complex network theory, we propose a computational methodology that addresses the spatial distribution of fuel breaks for inhibiting the spread of wildland fires on heterogeneous landscapes. This is a two-level approach in which the dynamics of fire spread are modeled as a random Markov field process on a directed network whose edge weights are determined by a Cellular Automata model that integrates detailed GIS, landscape, and meteorological data. Within this framework, the spatial distribution of fuel breaks is reduced to the problem of finding network nodes (small land patches) that favour fire propagation. Here, this is accomplished by exploiting network centrality statistics. We illustrate the proposed approach through (a) an artificial forest with a randomly distributed density of vegetation, and (b) a real-world case concerning the island of Rhodes in Greece, most of whose forest burned in 2008. Simulation results show that the proposed methodology outperforms the benchmark/conventional policy of fuel reduction as realized by selective harvesting and/or prescribed burning based on the density and flammability of vegetation. Interestingly, our approach reveals that patches with a sparse density of vegetation may act as hubs for the spread of the fire. PMID:27780249
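One simple way to sketch the node-ranking step (the paper exploits richer centrality statistics computed on a CA-derived network; the use of weighted out-degree and the toy weights here are assumptions for illustration):

```python
# Sketch of the ranking step: rank land patches by weighted out-degree
# ("spread strength") on the fire-propagation network and treat the
# top-k as fuel-break candidates. Graph and weights are illustrative.
def spread_strength(edges):
    """edges: dict (src, dst) -> fire-propagation weight."""
    strength = {}
    for (src, _dst), w in edges.items():
        strength[src] = strength.get(src, 0.0) + w
    return strength

def fuel_break_candidates(edges, k):
    s = spread_strength(edges)
    return sorted(s, key=s.get, reverse=True)[:k]

# A sparsely vegetated patch can still be a propagation hub if it feeds
# many downwind neighbors, echoing the abstract's observation.
edges = {("sparse_hub", "a"): 0.9, ("sparse_hub", "b"): 0.8,
         ("a", "b"): 0.2, ("b", "c"): 0.3}
top = fuel_break_candidates(edges, k=1)
```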
The role of the interaction network in the emergence of diversity of behavior
Tabacof, Pedro; Von Zuben, Fernando J.
2017-01-01
How can systems in which individuals' inner workings are very similar to each other, such as neural networks or ant colonies, produce so many qualitatively different behaviors, giving rise to roles and specialization? In this work, we bring new perspectives to this question by focusing on the underlying network that defines how individuals in these systems interact. We applied a genetic algorithm to optimize rules and connections of cellular automata in order to solve the density classification task, a classical problem used to study emergent behaviors in decentralized computational systems. The networks used were all generated by the introduction of shortcuts in an originally regular topology, following the small-world model. Even though all cells follow the exact same rules, we observed the existence of different classes of cell behavior in the best cellular automata found: most cells were responsible for memory and others for integration of information. Through the analysis of structural measures and patterns of connections (motifs) in successful cellular automata, we observed that the distribution of shortcuts between distant regions and the speed with which a cell can gather information from different parts of the system seem to be the main factors for the specialization we observed, demonstrating how heterogeneity in a network can create heterogeneity of behavior. PMID:28234962
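The basic setup can be sketched as follows (details such as the neighborhood radius, shortcut probability, and the plain majority rule are assumptions; the paper evolves rules with a genetic algorithm rather than fixing them):

```python
import random

# Sketch of the substrate: a ring of binary cells where each cell reads
# its 2r nearest neighbors, with some links rewired to random shortcuts
# (small-world model), and all cells apply the same update rule.
def small_world_neighbors(n, r, p_shortcut, rng):
    nbrs = []
    for i in range(n):
        local = [(i + d) % n for d in range(-r, r + 1) if d != 0]
        nbrs.append([rng.randrange(n) if rng.random() < p_shortcut else j
                     for j in local])
    return nbrs

def majority_step(state, nbrs):
    """Each cell adopts the majority among itself and its neighbors."""
    out = []
    for i, cell in enumerate(state):
        votes = cell + sum(state[j] for j in nbrs[i])
        out.append(1 if 2 * votes > len(nbrs[i]) + 1 else 0)
    return out

rng = random.Random(42)
nbrs = small_world_neighbors(n=59, r=3, p_shortcut=0.1, rng=rng)
state = [1] * 59                       # a consensus state is a fixed point
for _ in range(10):
    state = majority_step(state, nbrs)
```

Density classification is solved when an initial configuration relaxes to the all-0 or all-1 consensus matching its majority; the consensus states themselves are fixed points of the rule.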
Seismic signal processing on heterogeneous supercomputers
NASA Astrophysics Data System (ADS)
Gokhberg, Alexey; Ermert, Laura; Fichtner, Andreas
2015-04-01
The processing of seismic signals - including the correlation of massive ambient noise data sets - represents an important part of a wide range of seismological applications. It is characterized by large data volumes as well as high computational input/output intensity. Development of efficient approaches to seismic signal processing on emerging high performance computing systems is therefore essential. Heterogeneous supercomputing systems introduced in recent years provide numerous computing nodes interconnected via high throughput networks, every node containing a mix of processing elements of different architectures, such as several sequential processor cores and one or a few graphics processing units (GPUs) serving as accelerators. A typical representative of such computing systems is "Piz Daint", a supercomputer of the Cray XC 30 family operated by the Swiss National Supercomputing Centre (CSCS), which we used in this research. Heterogeneous supercomputers offer the opportunity for manifold increases in application performance and are more energy-efficient; however, they have much higher hardware complexity and are therefore much more difficult to program. The programming effort may be substantially reduced by the introduction of modular libraries of software components that can be reused for a wide class of seismology applications. The ultimate goal of this research is the design of a prototype of such a library, suitable for implementing various seismic signal processing applications on heterogeneous systems. As a representative use case we have chosen an ambient noise correlation application. Ambient noise interferometry has developed into one of the most powerful tools to image and monitor the Earth's interior. Future applications will require the extraction of increasingly small details from noise recordings. To meet this demand, more advanced correlation techniques combined with very large data volumes are needed.
This poses new computational problems that require dedicated HPC solutions. The chosen application uses a wide range of common signal processing methods, including various IIR filter designs, amplitude and phase correlation, computation of the analytic signal, and discrete Fourier transforms. Furthermore, various processing methods specific to seismology, such as rotation of seismic traces, are used. Efficient implementation of all these methods on GPU-accelerated systems presents several challenges. In particular, it requires a careful distribution of work between the sequential processors and accelerators. Furthermore, since the application is designed to process very large volumes of data, special attention had to be paid to the efficient use of the available memory and networking hardware in order to reduce the intensity of data input and output. In our contribution we explain the software architecture as well as the principal engineering decisions used to address these challenges. We also describe the programming model based on C++ and CUDA that we used to develop the software. Finally, we demonstrate performance improvements achieved by using the heterogeneous computing architecture. This work was supported by a grant from the Swiss National Supercomputing Centre (CSCS) under project ID d26.
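One of the building blocks named above, computing the analytic signal, can be sketched with a one-sided FFT (a standard construction for illustration, not the project's C++/CUDA implementation):

```python
import numpy as np

# Sketch: the analytic signal of a real trace via a one-sided FFT, as
# used for envelope/phase computations in noise-correlation workflows.
def analytic_signal(x):
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0                         # keep DC
    if n % 2 == 0:
        h[n // 2] = 1.0                # keep Nyquist
        h[1:n // 2] = 2.0              # double positive frequencies
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)       # negative frequencies zeroed

t = np.arange(1024)
trace = np.cos(2 * np.pi * 32 * t / 1024)   # pure tone at an FFT bin
z = analytic_signal(trace)
envelope = np.abs(z)
```

For a pure tone the envelope is flat, which makes the construction easy to sanity-check before porting it to an accelerator.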
2012-01-01
Computational approaches to generate hypotheses from biomedical literature have been studied intensively in recent years. Nevertheless, it still remains a challenge to automatically discover novel, cross-silo biomedical hypotheses from large-scale literature repositories. In order to address this challenge, we first model a biomedical literature repository as a comprehensive network of biomedical concepts and formulate hypothesis generation as a process of link discovery on the concept network. We extract the relevant information from the biomedical literature corpus and generate a concept network and concept-author map on a cluster using the MapReduce framework. We extract a set of heterogeneous features such as random-walk-based features, neighborhood features, and common-author features. The potential number of links to consider for link discovery is large in our concept network; to address this scalability problem, the features are extracted on a cluster with the MapReduce framework. We further model link discovery as a classification problem carried out on a training data set automatically extracted from two network snapshots taken in two consecutive time durations. A set of heterogeneous features, covering both topological and semantic features derived from the concept network, has been studied with respect to its impact on the accuracy of the proposed supervised link discovery process. A case study of hypothesis generation based on the proposed method is presented in the paper. PMID:22759614
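A toy sketch of the feature-extraction step (the concept graph and the particular two-step walk score are assumptions chosen for illustration, not the paper's exact feature set):

```python
# Sketch of link-discovery features on a concept network: neighborhood
# features (common neighbors, Jaccard) and a simple two-step random-walk
# score for a candidate concept pair, computed from an adjacency dict.
def common_neighbors(adj, a, b):
    return len(adj[a] & adj[b])

def jaccard(adj, a, b):
    union = adj[a] | adj[b]
    return len(adj[a] & adj[b]) / len(union) if union else 0.0

def two_step_walk_prob(adj, a, b):
    """Probability that a 2-step uniform random walk from a ends at b."""
    p = 0.0
    for mid in adj[a]:
        if b in adj[mid]:
            p += (1.0 / len(adj[a])) * (1.0 / len(adj[mid]))
    return p

# Hypothetical concept graph; c1..c3 are intermediate concepts.
adj = {"gene_x": {"c1", "c2"}, "disease_y": {"c1", "c2", "c3"},
       "c1": {"gene_x", "disease_y"}, "c2": {"gene_x", "disease_y"},
       "c3": {"disease_y"}}
feats = (common_neighbors(adj, "gene_x", "disease_y"),
         jaccard(adj, "gene_x", "disease_y"),
         two_step_walk_prob(adj, "gene_x", "disease_y"))
```

Feature vectors like this, computed per candidate pair across snapshots, are what the supervised classifier is trained on.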
Epidemic modeling in complex realities.
Colizza, Vittoria; Barthélemy, Marc; Barrat, Alain; Vespignani, Alessandro
2007-04-01
In our global world, the increasing complexity of social relations and transport infrastructures is a key factor in the spread of epidemics. In recent years, the increasing availability of computer power has enabled both the acquisition of reliable data that quantify the complexity of the networks on which epidemics may propagate, and the development of computational tools able to tackle the analysis of such propagation phenomena. These advances have exposed the limits of homogeneous assumptions and simple spatial diffusion approaches, and have stimulated the inclusion of complex features and heterogeneities relevant to the description of epidemic diffusion. In this paper, we review recent progress in integrating complex systems and network analysis with epidemic modelling, and focus on the impact of the various complex features of real systems on the dynamics of epidemic spreading.
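A minimal illustration of why network structure matters (not one of the review's models; the star topology, transmission probability, and one-step recovery are assumptions for the example): discrete-time SIR on an explicit contact network:

```python
import random

# Sketch: SIR spreading on an arbitrary contact network, the kind of
# simulation that exposes the limits of homogeneous-mixing assumptions.
def sir_epidemic(adj, seed_node, beta, rng):
    """Return the final number of recovered nodes."""
    susceptible = set(adj) - {seed_node}
    infected = {seed_node}
    recovered = set()
    while infected:
        newly = set()
        for node in infected:
            for nbr in adj[node]:
                if nbr in susceptible and rng.random() < beta:
                    newly.add(nbr)
        susceptible -= newly
        recovered |= infected           # recover after one step
        infected = newly
    return len(recovered)

# Heterogeneous toy network: a single hub touching 50 leaves, so one
# seeded hub reaches far more nodes than homogeneous mixing with the
# same per-contact probability would suggest.
adj = {"hub": {f"leaf{i}" for i in range(50)}}
for i in range(50):
    adj[f"leaf{i}"] = {"hub"}
rng = random.Random(7)
final_size = sir_epidemic(adj, "hub", beta=0.5, rng=rng)
```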
Radiation detection and situation management by distributed sensor networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frigo, Jan; Mielke, Angela; Cai, D. Michael
Detection of radioactive materials in an urban environment usually requires large, portal-monitor-style radiation detectors. However, this may not be a practical solution in many transport scenarios. Alternatively, a distributed sensor network (DSN) could complement portal-style detection of radiological materials through the implementation of arrays of low cost, small heterogeneous sensors with the ability to detect the presence of radioactive materials in a moving vehicle over a specific region. In this paper, we report on the use of a heterogeneous, wireless, distributed sensor network for traffic monitoring in a field demonstration. Through wireless communications, the energy spectra from different radiation detectors are combined to improve the detection confidence. In addition, the DSN exploits other sensor technologies and algorithms to provide additional information about the vehicle, such as its speed, location, class (e.g. car, truck), and license plate number. The sensors are in-situ and data is processed in real-time at each node. Relevant information from each node is sent to a base station computer which is used to assess the movement of radioactive materials.
Heterogeneous real-time computing in radio astronomy
NASA Astrophysics Data System (ADS)
Ford, John M.; Demorest, Paul; Ransom, Scott
2010-07-01
Modern computer architectures suited for general-purpose computing are often not the best choice for either I/O-bound or compute-bound problems. Sometimes the best choice is not a single architecture, but a combination that takes advantage of the best characteristics of different computer architectures. This paper examines the tradeoffs between computer systems based on the ubiquitous X86 Central Processing Units (CPUs), Field Programmable Gate Array (FPGA) based signal processors, and Graphics Processing Units (GPUs). We show how a heterogeneous system can be produced that blends the best of each of these technologies into a real-time signal processing system. FPGAs tightly coupled to analog-to-digital converters connect the instrument to the telescope and supply the first level of computing in the system. These FPGAs are coupled to other FPGAs to continue to provide highly efficient processing power. Data is then packaged and shipped over fast networks to a cluster of general-purpose computers equipped with GPUs, which are used for floating-point-intensive computation. Finally, the data is handled by the CPUs and written to disk, or further processed. Each element of the system has been chosen for its specific characteristics and the role it can play in creating a system that does the most for the least, in terms of power, space, and money.
A link prediction method for heterogeneous networks based on BP neural network
NASA Astrophysics Data System (ADS)
Li, Ji-chao; Zhao, Dan-ling; Ge, Bing-Feng; Yang, Ke-Wei; Chen, Ying-Wu
2018-04-01
Most real-world systems, composed of different types of objects connected via many interconnections, can be abstracted as complex heterogeneous networks. Link prediction for heterogeneous networks is of great significance for mining missing links and reconfiguring networks according to observed information, with considerable applications in, for example, friend and location recommendation and disease-gene candidate detection. In this paper, we put forward a novel integrated framework, called MPBP (Meta-Path feature-based BP neural network model), to predict multiple types of links in heterogeneous networks. More specifically, the concept of the meta-path is introduced, followed by the extraction of meta-path features for heterogeneous networks. Next, based on the extracted meta-path features, a supervised link prediction model is built with a three-layer BP neural network. Then, a solution algorithm for the proposed link prediction model is put forward to obtain predicted results by iteratively training the network. Last, numerical experiments on datasets from a gene-disease network and a combat network are conducted to verify the effectiveness and feasibility of the proposed MPBP. The results show that MPBP performs very well and is superior to the baseline methods.
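The meta-path feature idea can be sketched with a toy bipartite relation (an assumed form: counts of gene-disease-gene path instances obtained via a matrix product; in MPBP such features would then feed the three-layer BP network):

```python
import numpy as np

# Sketch of meta-path feature extraction: for a gene-disease incidence
# matrix G, the number of path instances of the meta-path
# gene-disease-gene between genes i and j is (G @ G.T)[i, j].
# The tiny network below is an illustrative assumption.
genes = ["g0", "g1", "g2"]
diseases = ["d0", "d1"]
G = np.array([[1, 0],     # g0 - d0
              [1, 1],     # g1 - d0 and d1
              [0, 1]])    # g2 - d1
gene_disease_gene = G @ G.T   # (i, j): diseases shared by genes i and j
```

Longer meta-paths are just longer chains of such products over the corresponding incidence matrices; the resulting counts become input features for the supervised model.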
NASA Astrophysics Data System (ADS)
Yang, X.; Scheibe, T. D.; Chen, X.; Hammond, G. E.; Song, X.
2015-12-01
The zone in which river water and groundwater mix plays an important role in natural ecosystems as it regulates the mixing of nutrients that control biogeochemical transformations. Subsurface heterogeneity leads to local hotspots of microbial activity that are important to system function yet difficult to resolve computationally. To address this challenge, we are testing a hybrid multiscale approach that couples models at two distinct scales, based on field research at the U. S. Department of Energy's Hanford Site. The region of interest is a 400 x 400 x 20 m macroscale domain that intersects the aquifer and the river and contains a contaminant plume. However, biogeochemical activity is high in a thin zone (mud layer, <1 m thick) immediately adjacent to the river. This microscale domain is highly heterogeneous and requires fine spatial resolution to adequately represent the effects of local mixing on reactions. It is not computationally feasible to resolve the full macroscale domain at the fine resolution needed in the mud layer, and the reaction network needed in the mud layer is much more complex than that needed in the rest of the macroscale domain. Hence, a hybrid multiscale approach is used to efficiently and accurately predict flow and reactive transport at both scales. In our simulations, models at both scales are simulated using the PFLOTRAN code. Multiple microscale simulations in dynamically defined sub-domains (fine resolution, complex reaction network) are executed and coupled with a macroscale simulation over the entire domain (coarse resolution, simpler reaction network). The objectives of the research include: 1) comparing accuracy and computing cost of the hybrid multiscale simulation with a single-scale simulation; 2) identifying hot spots of microbial activity; and 3) defining macroscopic quantities such as fluxes, residence times and effective reaction rates.
An investigation of networking techniques for the ASRM facility
NASA Technical Reports Server (NTRS)
Moorhead, Robert J., II; Smith, Wayne D.; Thompson, Dale R.
1992-01-01
This report is based on the early design concepts for a communications network for the Advanced Solid Rocket Motor (ASRM) facility being built at Yellow Creek near Iuka, MS. The investigators have participated in the early design concepts and in the evaluation of the initial concepts. The continuing system design effort and any modification of the plan will require a careful evaluation of the required bandwidth of the network, the capabilities of the protocol, and the requirements of the controllers and computers on the network. The overall network, which is heterogeneous in protocol and bandwidth, is being modeled, analyzed, simulated, and tested to obtain some degree of confidence in its performance capabilities and in its performance under nominal and heavy loads. The results of the proposed work should have an impact on the design and operation of the ASRM facility.
Impact of Degree Heterogeneity on Attack Vulnerability of Interdependent Networks
NASA Astrophysics Data System (ADS)
Sun, Shiwen; Wu, Yafang; Ma, Yilin; Wang, Li; Gao, Zhongke; Xia, Chengyi
2016-09-01
The study of interdependent networks has become a new research focus in recent years. We focus on one fundamental property of interdependent networks: vulnerability. Previous studies mainly focused on the impact of topological properties on interdependent networks under random attacks; the effect of degree heterogeneity on the structural vulnerability of interdependent networks under intentional attacks, however, is still unexplored. In order to deeply understand the role of degree distribution, and in particular degree heterogeneity, we construct an interdependent system model consisting of two networks whose extent of degree heterogeneity can be controlled simultaneously by a tuning parameter. Meanwhile, a new quantity that better measures the performance of interdependent networks after attack is proposed. Numerical simulation results demonstrate that degree heterogeneity can significantly increase the vulnerability of both single and interdependent networks. Moreover, it is found that interdependent links between the two networks make the entire system much more fragile to attacks. Enhancing the coupling strength between networks greatly increases the fragility of both networks against targeted attacks, which is most evident in the case of max-max assortative coupling. These results can help deepen the understanding of the structural complexity of complex real-world systems.
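The targeted-attack experiment can be sketched as follows (the toy star network and the use of the largest connected component as the performance measure are assumptions for illustration; the paper proposes its own post-attack quantity):

```python
# Sketch of an intentional (degree-targeted) attack: remove the
# highest-degree nodes first and track the largest connected component.
def largest_component(adj, removed):
    best, seen = 0, set()
    for start in adj:
        if start in removed or start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(n for n in adj[node]
                         if n not in removed and n not in comp)
        seen |= comp
        best = max(best, len(comp))
    return best

def targeted_attack(adj, n_remove):
    order = sorted(adj, key=lambda n: len(adj[n]), reverse=True)
    return largest_component(adj, set(order[:n_remove]))

# Maximally heterogeneous toy network: a star collapses completely when
# its single hub is targeted, unlike a homogeneous ring of the same size.
adj = {"hub": set(range(6))}
for i in range(6):
    adj[i] = {"hub"}
before = largest_component(adj, set())
after_hub = targeted_attack(adj, n_remove=1)
```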
Next generation communications satellites: multiple access and network studies
NASA Technical Reports Server (NTRS)
Meadows, H. E.; Schwartz, M.; Stern, T. E.; Ganguly, S.; Kraimeche, B.; Matsuo, K.; Gopal, I.
1982-01-01
Efficient resource allocation and network design for satellite systems serving heterogeneous user populations with large numbers of small direct-to-user Earth stations are discussed. Focus is on TDMA systems involving a high degree of frequency reuse by means of satellite-switched multiple beams (SSMB) with varying degrees of onboard processing. Algorithms for the efficient utilization of the satellite resources were developed. The effect of skewed traffic, overlapping beams and batched arrivals in packet-switched SSMB systems, integration of stream and bursty traffic, and optimal circuit scheduling in SSMB systems: performance bounds and computational complexity are discussed.
Evolution of ethnocentrism on undirected and directed Barabási-Albert networks
NASA Astrophysics Data System (ADS)
Lima, F. W. S.; Hadzibeganovic, Tarik; Stauffer, Dietrich
2009-12-01
Using Monte Carlo simulations, we study the evolution of contingent cooperation and ethnocentrism in the one-shot game. Interactions and reproduction among computational agents are simulated on undirected and directed Barabási-Albert (BA) networks. We first replicate the Hammond-Axelrod model of in-group favoritism on a square lattice and then generalize this model on undirected and directed BA networks for both asexual and sexual reproduction cases. Our simulations demonstrate that irrespective of the mode of reproduction, the ethnocentric strategy becomes common even though cooperation is individually costly and mechanisms such as reciprocity or conformity are absent. Moreover, our results indicate that the spread of favoritism towards similar others highly depends on the network topology and the associated heterogeneity of the studied population.
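The network substrate can be sketched with the standard preferential-attachment construction (a generic BA generator written for illustration, not the authors' code; the sizes and seed are arbitrary):

```python
import random

# Sketch: an undirected Barabasi-Albert network grown by preferential
# attachment, on which agent strategies would then evolve.
def barabasi_albert(n, m, rng):
    adj = {i: set() for i in range(n)}
    targets = list(range(m))            # seed nodes for the first arrival
    repeated = []                       # node list weighted by degree
    for new in range(m, n):
        for t in set(targets):          # attach to (up to) m targets
            adj[new].add(t)
            adj[t].add(new)
        repeated.extend(targets)
        repeated.extend([new] * m)
        # degree-proportional sampling for the next arrival
        targets = [rng.choice(repeated) for _ in range(m)]
    return adj

rng = random.Random(1)
net = barabasi_albert(n=100, m=2, rng=rng)
n_edges = sum(len(v) for v in net.values()) // 2
```

The heavy-tailed degree distribution such growth produces is precisely the population heterogeneity whose effect on the spread of in-group favoritism the abstract examines.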
NASA Astrophysics Data System (ADS)
Choo, Seongho; Li, Vitaly; Choi, Dong Hee; Jung, Gi Deck; Park, Hong Seong; Ryuh, Youngsun
2005-12-01
In personal robot systems being developed at present, the internal architecture consists of modules, each with a separate function, connected through heterogeneous network systems. This module-based architecture supports specialization and division of labor in both design and implementation; as a result, it can reduce development time and cost for modules. Furthermore, because every module is connected to the others through network systems, modules can be integrated easily, and co-working modules yield a synergy effect through advanced combined functions. In this architecture, one of the most important technologies is the network middleware that handles communications among the modules connected through heterogeneous network systems. The network middleware acts like the nervous system of the personal robot: it relays, transmits, and translates information appropriately between modules, much as the nervous system does between organs. The network middleware supports various hardware platforms and heterogeneous network systems (Ethernet, Wireless LAN, USB, IEEE 1394, CAN, CDMA-SMS, RS-232C). This paper discusses mechanisms in our network middleware for intercommunication and routing among modules, methods for real-time data communication, and fault-tolerant network service. We have designed and implemented a layered network middleware scheme, distributed routing management, and network monitoring/notification technology on heterogeneous networks for these goals. The main theme is how routing information is built in our network middleware; with this routing information table, we have added further features. We are now designing and implementing a new version of the network middleware (which we call 'OO M/W') that supports object-oriented operation, and are updating the program sources for an object-oriented architecture.
It is lighter and faster, and can support more operating systems and heterogeneous network systems, whereas general-purpose middlewares such as CORBA and UPnP can typically support only one network protocol or operating system.
Semi-supervised Machine Learning for Analysis of Hydrogeochemical Data and Models
NASA Astrophysics Data System (ADS)
Vesselinov, Velimir; O'Malley, Daniel; Alexandrov, Boian; Moore, Bryan
2017-04-01
Data- and model-based analyses such as uncertainty quantification, sensitivity analysis, and decision support using complex physics models with numerous model parameters typically require a huge number of model evaluations (on the order of 10^6). Furthermore, simulations of complex physics may require substantial computational time. For example, accounting for simultaneously occurring physical processes such as fluid flow and biogeochemical reactions in a heterogeneous porous medium may require several hours of wall-clock time. To address these issues, we have developed a novel methodology for semi-supervised machine learning based on Non-negative Matrix Factorization (NMF) coupled with customized k-means clustering. The algorithm allows for automated, robust Blind Source Separation (BSS) of groundwater types (contamination sources) based on model-free analyses of observed hydrogeochemical data. We have also developed reduced order modeling tools, which couple support vector regression (SVR), genetic algorithms (GA), and artificial and convolutional neural networks (ANN/CNN). SVR is applied to predict model behavior within the prior uncertainty ranges associated with the model parameters. ANN and CNN procedures are applied to upscale heterogeneity of the porous medium. In the upscaling process, fine-scale high-resolution models of heterogeneity are used to inform coarse-resolution models, which have improved computational efficiency while capturing the impact of fine-scale effects at the coarse scale of interest. These techniques are tested independently on a series of synthetic problems. We also present a decision analysis related to contaminant remediation in which the developed reduced order models are applied to reproduce groundwater flow and contaminant transport in a synthetic heterogeneous aquifer. The tools are coded in Julia and are part of the MADS high-performance computational framework (https://github.com/madsjulia/Mads.jl).
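The NMF core of the BSS step can be sketched with classic multiplicative updates (the customized k-means coupling is omitted here, and the synthetic mixing below is an assumption; the actual tools are in Julia/MADS):

```python
import numpy as np

# Minimal NMF sketch (Lee-Seung multiplicative updates): factor a
# nonnegative observation matrix X into W @ H with W, H >= 0, as one
# would separate mixed groundwater "sources" from observed chemistry.
def nmf(X, k, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], k)) + 1e-3
    H = rng.random((k, X.shape[1])) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Synthetic rank-2 nonnegative data standing in for mixed observations.
rng = np.random.default_rng(3)
true_W = rng.random((20, 2))
true_H = rng.random((2, 8))
X = true_W @ true_H
W, H = nmf(X, k=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The multiplicative updates preserve nonnegativity by construction, which is what lets the recovered factors be read as physically meaningful source signatures and mixing fractions.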
Valdés, Julio J; Bonham-Carter, Graeme
2006-03-01
A computational intelligence approach is used to explore the problem of detecting internal state changes in time-dependent processes described by heterogeneous, multivariate time series with imprecise data and missing values. Such processes are approximated by collections of time-dependent non-linear autoregressive models represented by a special kind of neuro-fuzzy neural network. Grid and high-throughput computing model-mining procedures based on neuro-fuzzy networks and genetic algorithms generate: (i) collections of models composed of sets of time-lag terms from the time series, and (ii) prediction functions represented by neuro-fuzzy networks. The composition of the models and their prediction capabilities allows the identification of changes in the internal structure of the process. These changes are associated with the alternation of steady and transient states, zones of abnormal behavior, instability, and other situations. The approach is general, and its sensitivity for detecting subtle changes of state is demonstrated by simulation experiments. Its potential in the study of complex processes in earth sciences and astrophysics is illustrated with applications using paleoclimate and solar data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bossa, Nathan, E-mail: bossanathan@gmail.com; INERIS, Parc Technologique Alata, BP2, 60550 Verneuil-en-Halatte; iCEINT, CNRS, Duke Univ. International Consortium for the Environmental Implications of Nanotechnology, Aix-en-Provence
2015-01-15
Pore structure of leached cement pastes (w/c = 0.5) was studied for the first time from the micro-scale down to the nano-scale by combining micro- and nano-X-ray computed tomography (micro- and nano-CT). This allowed assessing the 3D heterogeneity of the pore network along the cement profile (from the core to the altered layer) over almost the entire range of cement pore sizes, i.e. from capillary to gel pores. We successfully quantified an increase of porosity in the altered layer at both resolutions: porosity increases from 1.8 to 6.1% at the micro-scale (voxel = 1.81 μm) and from 18 to 58% at the nano-scale (voxel = 63.5 nm). The combination of both CT techniques allowed us to circumvent the weaknesses inherent to each investigation scale. In addition, the connectivity and the channel size of the pore network were also evaluated to obtain a complete 3D pore-network characterization at both scales.
Spagnolo, Daniel M.; Gyanchandani, Rekha; Al-Kofahi, Yousef; Stern, Andrew M.; Lezon, Timothy R.; Gough, Albert; Meyer, Dan E.; Ginty, Fiona; Sarachan, Brion; Fine, Jeffrey; Lee, Adrian V.; Taylor, D. Lansing; Chennubhotla, S. Chakra
2016-01-01
Background: Measures of spatial intratumor heterogeneity are potentially important diagnostic biomarkers for cancer progression, proliferation, and response to therapy. Spatial relationships among cells including cancer and stromal cells in the tumor microenvironment (TME) are key contributors to heterogeneity. Methods: We demonstrate how to quantify spatial heterogeneity from immunofluorescence pathology samples, using a set of 3 basic breast cancer biomarkers as a test case. We learn a set of dominant biomarker intensity patterns and map the spatial distribution of the biomarker patterns with a network. We then describe the pairwise association statistics for each pattern within the network using pointwise mutual information (PMI) and visually represent heterogeneity with a two-dimensional map. Results: We found a salient set of 8 biomarker patterns to describe cellular phenotypes from a tissue microarray cohort containing 4 different breast cancer subtypes. After computing PMI for each pair of biomarker patterns in each patient and tumor replicate, we visualize the interactions that contribute to the resulting association statistics. Then, we demonstrate the potential for using PMI as a diagnostic biomarker, by comparing PMI maps and heterogeneity scores from patients across the 4 different cancer subtypes. Estrogen receptor positive invasive lobular carcinoma patient, AL13-6, exhibited the highest heterogeneity score among those tested, while estrogen receptor negative invasive ductal carcinoma patient, AL13-14, exhibited the lowest heterogeneity score. Conclusions: This paper presents an approach for describing intratumor heterogeneity, in a quantitative fashion (via PMI), which departs from the purely qualitative approaches currently used in the clinic. 
PMI is generalizable to highly multiplexed/hyperplexed immunofluorescence images, as well as spatial data from complementary in situ methods including FISSEQ and CyTOF, sampling many different components within the TME. We hypothesize that PMI will uncover key spatial interactions in the TME that contribute to disease proliferation and progression. PMID:27994939
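The pairwise PMI statistic described above can be sketched on toy data; the pattern labels, the adjacency list standing in for the spatial network, and the normalization over unordered pairs are all illustrative choices, not the authors' exact convention:

```python
import math
from collections import Counter

def pmi_map(cell_patterns, neighbor_pairs):
    """Pointwise mutual information of biomarker-pattern co-occurrence
    over spatially adjacent cell pairs."""
    n_cells = len(cell_patterns)
    p_single = {k: c / n_cells for k, c in Counter(cell_patterns).items()}
    # Count unordered pattern pairs along the spatial network's edges.
    pair_counts = Counter(
        tuple(sorted((cell_patterns[i], cell_patterns[j])))
        for i, j in neighbor_pairs)
    n_pairs = sum(pair_counts.values())
    return {pair: math.log2((c / n_pairs) /
                            (p_single[pair[0]] * p_single[pair[1]]))
            for pair, c in pair_counts.items()}

# Five cells carrying one of three patterns, linked by four spatial edges.
patterns = [0, 0, 1, 1, 2]
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
scores = pmi_map(patterns, edges)   # positive => pair co-occurs above chance
```

A positive score for a pair means those two phenotypes sit next to each other more often than their overall frequencies would predict, which is the kind of spatial interaction the heterogeneity maps visualize.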
Gao, Ying; Wkram, Chris Hadri; Duan, Jiajie; Chou, Jarong
2015-01-01
In order to prolong the network lifetime, energy-efficient protocols adapted to the features of wireless sensor networks should be used. This paper explores in depth the nature of heterogeneous wireless sensor networks and proposes an algorithm to address the problem of finding an energy-effective clustering pathway for heterogeneous networks. The proposed algorithm selects cluster heads according to the degree of energy attenuation as the network runs and the degree of each candidate node's effective coverage of the whole network, so as to obtain even energy consumption across the network in situations with a high degree of coverage. Simulation results show that the proposed clustering protocol adapts better to heterogeneous environments than existing clustering algorithms in prolonging the network lifetime. PMID:26690440
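The cluster-head selection idea can be caricatured in a few lines. The scoring rule below (residual energy weighted by coverage) is only a guess at the flavour of the criterion, not the paper's actual formula, and the node records are invented:

```python
def select_cluster_heads(nodes, k):
    """Pick k cluster heads, favouring nodes that still hold much energy
    and that effectively cover a large fraction of the network
    (illustrative score: energy * coverage)."""
    ranked = sorted(nodes, key=lambda n: n["energy"] * n["coverage"],
                    reverse=True)
    return [n["id"] for n in ranked[:k]]

# Hypothetical candidates: residual energy and coverage fraction in [0, 1].
nodes = [
    {"id": "a", "energy": 0.9, "coverage": 0.2},
    {"id": "b", "energy": 0.5, "coverage": 0.8},
    {"id": "c", "energy": 0.8, "coverage": 0.7},
    {"id": "d", "energy": 0.1, "coverage": 0.9},
]
heads = select_cluster_heads(nodes, 2)
```

Multiplying the two terms penalizes both nearly drained nodes and well-charged nodes that cover little of the field, which is the trade-off the abstract describes.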
Endogenous molecular network reveals two mechanisms of heterogeneity within gastric cancer.
Li, Site; Zhu, Xiaomei; Liu, Bingya; Wang, Gaowei; Ao, Ping
2015-05-30
Intratumor heterogeneity is a common phenomenon and impedes cancer therapy and research. Gastric cancer (GC) cells have generally been classified into two heterogeneous cellular phenotypes, the gastric and intestinal types, yet the mechanisms of maintaining two phenotypes and controlling phenotypic transition are largely unknown. A qualitative systematic framework, the endogenous molecular network hypothesis, has recently been proposed to understand cancer genesis and progression. Here, a minimal network corresponding to such framework was found for GC and was quantified via a stochastic nonlinear dynamical system. We then further extended the framework to address the important question of intratumor heterogeneity quantitatively. The working network characterized main known features of normal gastric epithelial and GC cell phenotypes. Our results demonstrated that four positive feedback loops in the network are critical for GC cell phenotypes. Moreover, two mechanisms that contribute to GC cell heterogeneity were identified: particular positive feedback loops are responsible for the maintenance of intestinal and gastric phenotypes; GC cell progression routes that were revealed by the dynamical behaviors of individual key components are heterogeneous. In this work, we constructed an endogenous molecular network of GC that can be expanded in the future and would broaden the known mechanisms of intratumor heterogeneity.
Design and implementation of a high performance network security processor
NASA Astrophysics Data System (ADS)
Wang, Haixin; Bai, Guoqiang; Chen, Hongyi
2010-03-01
The last few years have seen much significant progress in the field of application-specific processors. One example is network security processors (NSPs), which perform various cryptographic operations specified by network security protocols and help to offload the computation-intensive burden from network processors (NPs). This article presents a high performance NSP system architecture implementation intended for both internet protocol security (IPSec) and secure socket layer (SSL) protocol acceleration, which are widely employed in virtual private network (VPN) and e-commerce applications. The efficient dual one-way pipelined data transfer skeleton and optimised integration scheme of the heterogeneous parallel crypto engine arrays lead to a Gbps-rate NSP, which is programmable with domain-specific descriptor-based instructions. The descriptor-based control flow fragments large data packets and distributes them to the crypto engine arrays, which fully utilises the parallel computation resources and improves the overall system data throughput. A prototyping platform for this NSP design is implemented with a Xilinx XC3S5000-based FPGA chip set. Results show that the design gives a peak throughput for the IPSec ESP tunnel mode of 2.85 Gbps, with over 2100 full SSL handshakes per second, at a clock rate of 95 MHz.
Individual heterogeneity generating explosive system network dynamics.
Manrique, Pedro D; Johnson, Neil F
2018-03-01
Individual heterogeneity is a key characteristic of many real-world systems, from organisms to humans. However, its role in determining the system's collective dynamics is not well understood. Here we study how individual heterogeneity impacts the system network dynamics by comparing linking mechanisms that favor similar or dissimilar individuals. We find that this heterogeneity-based evolution drives an unconventional form of explosive network behavior, and it dictates how a polarized population moves toward consensus. Our model shows good agreement with data from both biological and social science domains. We conclude that individual heterogeneity likely plays a key role in the collective development of real-world networks and communities, and it cannot be ignored.
Latency Hiding in Dynamic Partitioning and Load Balancing of Grid Computing Applications
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak
2001-01-01
The Information Power Grid (IPG) concept developed by NASA is aimed at providing a metacomputing platform for large-scale distributed computations, hiding the intricacies of a highly heterogeneous environment while maintaining adequate security. In this paper, we propose a latency-tolerant partitioning scheme that dynamically balances processor workloads on the IPG and minimizes data movement and runtime communication. By simulating an unsteady adaptive mesh application on a wide area network, we study the performance of our load balancer under the Globus environment. The number of IPG nodes, the number of processors per node, and the interconnect speeds are parameterized to derive conditions under which the IPG would be suitable for parallel distributed processing of such applications. Experimental results demonstrate that effective solutions are achieved when the IPG nodes are connected by a high-speed asynchronous interconnection network.
Case retrieval in medical databases by fusing heterogeneous information.
Quellec, Gwénolé; Lamard, Mathieu; Cazuguel, Guy; Roux, Christian; Cochener, Béatrice
2011-01-01
A novel content-based heterogeneous information retrieval framework, particularly well suited to browse medical databases and support new generation computer aided diagnosis (CADx) systems, is presented in this paper. It was designed to retrieve possibly incomplete documents, consisting of several images and semantic information, from a database; more complex data types such as videos can also be included in the framework. The proposed retrieval method relies on image processing, in order to characterize each individual image in a document by its digital content, and on information fusion. Once the available images in a query document are characterized, a degree of match, between the query document and each reference document stored in the database, is defined for each attribute (an image feature or a metadata). A Bayesian network is used to recover missing information if need be. Finally, two novel information fusion methods are proposed to combine these degrees of match, in order to rank the reference documents by decreasing relevance for the query. In the first method, the degrees of match are fused by the Bayesian network itself. In the second method, they are fused by the Dezert-Smarandache theory: the second approach lets us model our confidence in each source of information (i.e., each attribute) and take it into account in the fusion process for better retrieval performance. The proposed methods were applied to two heterogeneous medical databases, a diabetic retinopathy database and a mammography screening database, for computer aided diagnosis. Precisions at five of 0.809 ± 0.158 and 0.821 ± 0.177, respectively, were obtained for these two databases, which is very promising.
Regional gas transport in the heterogeneous lung during oscillatory ventilation
Herrmann, Jacob; Tawhai, Merryn H.
2016-01-01
Regional ventilation in the injured lung is heterogeneous and frequency dependent, making it difficult to predict how an oscillatory flow waveform at a specified frequency will be distributed throughout the periphery. To predict the impact of mechanical heterogeneity on regional ventilation distribution and gas transport, we developed a computational model of distributed gas flow and CO2 elimination during oscillatory ventilation from 0.1 to 30 Hz. The model consists of a three-dimensional airway network of a canine lung, with heterogeneous parenchymal tissues to mimic effects of gravity and injury. Model CO2 elimination during single frequency oscillation was validated against previously published experimental data (Venegas JG, Hales CA, Strieder DJ, J Appl Physiol 60: 1025–1030, 1986). Simulations of gas transport demonstrated a critical transition in flow distribution at the resonant frequency, where the reactive components of mechanical impedance due to airway inertia and parenchymal elastance were equal. For frequencies above resonance, the distribution of ventilation became spatially clustered and frequency dependent. These results highlight the importance of oscillatory frequency in managing the regional distribution of ventilation and gas exchange in the heterogeneous lung. PMID:27763872
Computing with Neural Synchrony
Brette, Romain
2012-01-01
Neurons communicate primarily with spikes, but most theories of neural computation are based on firing rates. Yet, many experimental observations suggest that the temporal coordination of spikes plays a role in sensory processing. Among potential spike-based codes, synchrony appears as a good candidate because neural firing and plasticity are sensitive to fine input correlations. However, it is unclear what role synchrony may play in neural computation, and what functional advantage it may provide. With a theoretical approach, I show that the computational interest of neural synchrony appears when neurons have heterogeneous properties. In this context, the relationship between stimuli and neural synchrony is captured by the concept of synchrony receptive field, the set of stimuli which induce synchronous responses in a group of neurons. In a heterogeneous neural population, it appears that synchrony patterns represent structure or sensory invariants in stimuli, which can then be detected by postsynaptic neurons. The required neural circuitry can spontaneously emerge with spike-timing-dependent plasticity. Using examples in different sensory modalities, I show that this allows simple neural circuits to extract relevant information from realistic sensory stimuli, for example to identify a fluctuating odor in the presence of distractors. This theory of synchrony-based computation shows that relative spike timing may indeed have computational relevance, and suggests new types of neural network models for sensory processing with appealing computational properties. PMID:22719243
Astrocytes regulate heterogeneity of presynaptic strengths in hippocampal networks
Letellier, Mathieu; Park, Yun Kyung; Chater, Thomas E.; Chipman, Peter H.; Gautam, Sunita Ghimire; Oshima-Takago, Tomoko; Goda, Yukiko
2016-01-01
Dendrites are neuronal structures specialized for receiving and processing information through their many synaptic inputs. How input strengths are modified across dendrites in ways that are crucial for synaptic integration and plasticity remains unclear. We examined in single hippocampal neurons the mechanism of heterosynaptic interactions and the heterogeneity of synaptic strengths of pyramidal cell inputs. Heterosynaptic presynaptic plasticity that counterbalances input strengths requires N-methyl-d-aspartate receptors (NMDARs) and astrocytes. Importantly, this mechanism is shared with the mechanism for maintaining highly heterogeneous basal presynaptic strengths, which requires astrocyte Ca2+ signaling involving NMDAR activation, astrocyte membrane depolarization, and L-type Ca2+ channels. Intracellular infusion of NMDARs or Ca2+-channel blockers into astrocytes, conditionally ablating the GluN1 NMDAR subunit, or optogenetically hyperpolarizing astrocytes with archaerhodopsin promotes homogenization of convergent presynaptic inputs. Our findings support the presence of an astrocyte-dependent cellular mechanism that enhances the heterogeneity of presynaptic strengths of convergent connections, which may help boost the computational power of dendrites. PMID:27118849
Dynamic resource allocation scheme for distributed heterogeneous computer systems
NASA Technical Reports Server (NTRS)
Liu, Howard T. (Inventor); Silvester, John A. (Inventor)
1991-01-01
This invention relates to resource allocation in computer systems and, more particularly, to a method and associated apparatus for shortening response time and improving efficiency of a heterogeneous distributed networked computer system by reallocating jobs queued up for busy nodes to idle or less-busy nodes. In accordance with the algorithm (SIDA for short), load-sharing is initiated by the server device in a manner such that extra overhead is not imposed on the system during heavily loaded conditions. The algorithm employed in the present invention uses a dual-mode, server-initiated approach. Jobs are transferred from heavily burdened nodes (i.e., over a high threshold limit) to lightly burdened nodes at the initiation of the receiving node when: (1) a job finishes at a node which is burdened below a pre-established threshold level, or (2) a node has been idle for a period of time as established by a wakeup timer at the node. The invention uses a combination of the local queue length and the local service rate ratio at each node as the workload indicator.
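A minimal sketch of the receiver-initiated, threshold-triggered transfer described above; the threshold values, the Python structure, and the simple queue-length/service-rate indicator are assumptions for illustration, not the patent's specification:

```python
import queue

HIGH, LOW = 8, 2   # hypothetical queue-length thresholds

class Node:
    def __init__(self, name, service_rate):
        self.name, self.service_rate = name, service_rate
        self.jobs = queue.Queue()

    def workload(self):
        """Combined indicator: local queue length scaled by service rate."""
        return self.jobs.qsize() / self.service_rate

def pull_jobs(receiver, cluster):
    """Receiver-initiated transfer, triggered when a lightly loaded node
    finishes a job or its wakeup timer fires: pull work from any peer
    burdened above the high threshold, down to that threshold."""
    if receiver.jobs.qsize() > LOW:
        return 0                       # receiver is not lightly loaded
    moved = 0
    for peer in cluster:
        while peer is not receiver and peer.jobs.qsize() > HIGH:
            receiver.jobs.put(peer.jobs.get())
            moved += 1
    return moved

busy, idle = Node("busy", 1.0), Node("idle", 1.5)
for job in range(12):
    busy.jobs.put(job)
moved = pull_jobs(idle, [busy, idle])   # idle node pulls the excess work
```

Having the lightly loaded node initiate the transfer keeps the coordination cost off the overloaded nodes, which is the point of the dual-mode design.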
NASA Astrophysics Data System (ADS)
Ansari, Hamid Reza
2014-09-01
In this paper we propose a new method for predicting rock porosity based on a combination of several artificial intelligence systems. The method focuses on one of the Iranian carbonate fields in the Persian Gulf. Because of the strong heterogeneity of carbonate formations, estimating their rock properties is more challenging than for sandstone. For this purpose, seismic colored inversion (SCI) and a new committee machine approach are used to improve porosity estimation. The study comprises three major steps. First, a series of sample-based attributes is calculated from the 3D seismic volume; acoustic impedance is an important attribute, obtained here by the SCI method. Second, the porosity log is predicted from seismic attributes using common intelligent computation systems, including the probabilistic neural network (PNN), radial basis function network (RBFN), multi-layer feed-forward network (MLFN), ε-support vector regression (ε-SVR), and adaptive neuro-fuzzy inference system (ANFIS). Finally, a power-law committee machine (PLCM) is constructed based on the imperialist competitive algorithm (ICA) to combine the results of all previous predictions into a single solution. This technique is called PLCM-ICA in this paper. The results show that the PLCM-ICA model improved on the results of the individual neural networks, support vector machine, and neuro-fuzzy system.
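A power-law committee machine combines expert outputs multiplicatively, each raised to a learned weight. The sketch below illustrates that combination on synthetic "porosity" data; the noise model is invented, and a plain random search stands in for the imperialist competitive algorithm (ICA), which is considerably more structured:

```python
import numpy as np

def plcm_predict(expert_preds, weights):
    """Power-law committee: combined = prod_i expert_i ** w_i
    (all expert predictions assumed strictly positive)."""
    P = np.asarray(expert_preds, dtype=float)
    w = np.asarray(weights, dtype=float)
    return np.prod(P ** w[:, None], axis=0)

# Two hypothetical porosity predictors of different quality.
rng = np.random.default_rng(0)
y = rng.uniform(0.05, 0.30, 200)             # "true" porosity values
experts = [y * rng.uniform(0.9, 1.1, 200),   # mildly noisy expert
           y * rng.uniform(0.7, 1.3, 200)]   # noisier expert

# Random search over the committee weights (ICA stand-in).
best_w, best_mse = None, np.inf
for _ in range(2000):
    w = rng.uniform(0.0, 1.2, 2)
    mse = np.mean((plcm_predict(experts, w) - y) ** 2)
    if mse < best_mse:
        best_w, best_mse = w, mse
```

The optimized committee should at least match the better expert by down-weighting the noisier one, which is the behavior the paper reports for PLCM-ICA against its individual members.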
Linking Microstructural Changes to Bulk Behavior in Shear Disordered Matter
NASA Astrophysics Data System (ADS)
Blair, Daniel
Soft and biological materials often exhibit disordered and heterogeneous microstructure. In most cases, the transmission and distribution of stresses through these complex materials reflects their inherent heterogeneity. Through the combination of rheology and 4D imaging we can directly alter and quantify the connection between microstructure and local stresses. We subject soft and biological materials to precise shear deformations while measuring real-space information about the distribution and redistribution of the applied stress. In this talk, I will focus on the flow behavior of two distinct but related disordered materials: a flowing compressed emulsion above its yield stress and a strained collagen network. For the emulsion system, I will present experimental and computational results on the dynamical response, at the level of individual droplets, that directly link the particle motion and deformation to the rheology. I will also present results that utilize boundary stress microscopy to quantify the spatial distribution of surface stresses that arise from sheared in-vitro collagen networks. I will outline our main conclusion, which is that the strain-stiffening behavior observed in collagen networks can be parameterized by a single characteristic strain and associated stress. This characteristic rheological signature seems to describe both the strain-stiffening regime and network yielding. NSF DMR: 0847490.
Genomics and transcriptomics in drug discovery.
Dopazo, Joaquin
2014-02-01
The popularization of genomic high-throughput technologies is causing a revolution in biomedical research and, particularly, is transforming the field of drug discovery. Systems biology offers a framework to understand the extensive human genetic heterogeneity revealed by genomic sequencing in the context of the network of functional, regulatory and physical protein-drug interactions. Thus, approaches to find biomarkers and therapeutic targets will have to take into account the complex system nature of the relationships of the proteins with the disease. Pharmaceutical companies will have to reorient their drug discovery strategies considering the human genetic heterogeneity. Consequently, modeling and computational data analysis will have an increasingly important role in drug discovery. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yang, Hyun Mo
2015-12-01
Currently, discrete modelling is widely accepted, thanks to access to computers with huge storage capacity and high-performance processors and to the easy implementation of algorithms, allowing increasingly sophisticated models to be developed and simulated. Wang et al. [7] present a review of dynamics in complex networks, focusing on the interaction between disease dynamics and human behavioral and social dynamics. In an extensive review of human behavior in response to disease dynamics, the authors briefly describe the complex dynamics found in the literature: well-mixed population networks, where spatial structure can be neglected, and other networks that account for heterogeneity in spatially distributed populations. As controlling mechanisms are implemented, such as social distancing due to 'social contagion', quarantine, non-pharmaceutical interventions, and vaccination, adaptive behavior can occur in the human population, which can easily be taken into account in dynamics formulated on networked populations.
Vida, Imre; Bartos, Marlene; Jonas, Peter
2006-01-05
Networks of GABAergic neurons are key elements in the generation of gamma oscillations in the brain. Computational studies suggested that the emergence of coherent oscillations requires hyperpolarizing inhibition. Here, we show that GABA(A) receptor-mediated inhibition in mature interneurons of the hippocampal dentate gyrus is shunting rather than hyperpolarizing. Unexpectedly, when shunting inhibition is incorporated into a structured interneuron network model with fast and strong synapses, coherent oscillations emerge. In comparison to hyperpolarizing inhibition, networks with shunting inhibition show several advantages. First, oscillations are generated with smaller tonic excitatory drive. Second, network frequencies are tuned to the gamma band. Finally, robustness against heterogeneity in the excitatory drive is markedly improved. In single interneurons, shunting inhibition shortens the interspike interval for low levels of drive but prolongs it for high levels, leading to homogenization of neuronal firing rates. Thus, shunting inhibition may confer increased robustness to gamma oscillations in the brain.
Lin, Na; Chen, Hanning; Jing, Shikai; Liu, Fang; Liang, Xiaodan
2017-03-01
In recent years, symbiosis, as a rich source of potential engineering applications and computational models, has attracted more and more attention in the adaptive complex systems and evolutionary computing domains. Inspired by the different forms of symbiotic coevolution in nature, this paper proposes a series of multi-swarm particle swarm optimizers called PS²Os, which extend the single-population particle swarm optimization (PSO) algorithm to an interacting multi-swarm model by constructing hierarchical interaction topologies and enhanced dynamical update equations. According to different symbiotic interrelationships, four versions of PS²O are initiated to mimic the mutualism, commensalism, predation, and competition mechanisms, respectively. In experiments with five benchmark problems, the proposed algorithms are shown to have considerable potential for solving complex optimization problems. The coevolutionary dynamics of symbiotic species in each PS²O version are also studied to demonstrate how the heterogeneity of the different symbiotic interrelationships affects the algorithm's performance. PS²O is then used to solve the radio frequency identification (RFID) network planning (RNP) problem, which has a mixture of discrete and continuous variables. Simulation results show that the proposed algorithm outperforms the reference algorithms for planning RFID networks in terms of optimization accuracy and computational robustness.
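A mutualism-style two-swarm PSO can be sketched by adding an attraction term toward the partner swarm's best position to the standard velocity update. The coefficients, the sphere test function, and the exact form of the cross-swarm term are assumptions for illustration, not the PS²O equations from the paper:

```python
import numpy as np

def sphere(x):
    """Classic benchmark: sum of squares, minimum 0 at the origin."""
    return np.sum(x ** 2, axis=-1)

def ps2o_mutualism(f, dim=5, size=10, iters=300, seed=0):
    """Two swarms whose velocity updates are also attracted to the partner
    swarm's best position (a mutualism-like interaction)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (2, size, dim))
    V = np.zeros_like(X)
    P = X.copy()                                   # personal bests
    for _ in range(iters):
        g = [P[s][np.argmin(f(P[s]))] for s in range(2)]   # swarm bests
        for s in range(2):
            r1, r2, r3 = rng.random((3, size, dim))
            V[s] = (0.7 * V[s]
                    + 1.5 * r1 * (P[s] - X[s])             # cognitive term
                    + 1.5 * r2 * (g[s] - X[s])             # social term
                    + 0.5 * r3 * (g[1 - s] - X[s]))        # partner's best
            V[s] = np.clip(V[s], -1.0, 1.0)                # velocity clamp
            X[s] = X[s] + V[s]
            better = f(X[s]) < f(P[s])
            P[s][better] = X[s][better]
    allP = P.reshape(-1, dim)
    return allP[np.argmin(f(allP))]

best = ps2o_mutualism(sphere)
```

Flipping the sign or the target of the cross-swarm term is one way to move from mutualism toward competition- or predation-like variants.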
Secure data exchange between intelligent devices and computing centers
NASA Astrophysics Data System (ADS)
Naqvi, Syed; Riguidel, Michel
2005-03-01
The advent of reliable spontaneous networking technologies (commonly known as wireless ad-hoc networks) has raised the stakes for the conception of computing-intensive environments that use intelligent devices as their interface with the external world. These smart devices are used as data gateways for the computing units. They are employed in highly volatile environments where the secure exchange of data between the devices and their computing centers is of paramount importance. Moreover, their mission-critical applications require dependable measures against attacks such as denial of service (DoS), eavesdropping, and masquerading. In this paper, we propose a mechanism to assure reliable data exchange between an intelligent environment composed of smart devices and distributed computing units collectively called a 'computational grid'. The notion of an infosphere is used to define a digital space made up of persistent and volatile assets in an often indefinite geographical space. We study different infospheres and present general evolutions and issues in the security of such technology-rich and intelligent environments. These environments will likely face a proliferation of users, applications, networked devices, and their interactions on a scale never experienced before, so it is preferable to build in the ability to deal with these systems uniformly. As a solution, we propose a concept of virtualization of security services, addressing the difficult problems of implementing and maintaining trust on the one hand, and of security management in a heterogeneous infrastructure on the other.
Dunmyre, Justin R
2011-06-01
The pre-Bötzinger complex of the mammalian brainstem is a heterogeneous neuronal network, and individual neurons within the network have varying strengths of the persistent sodium and calcium-activated nonspecific cationic currents. Individually, these currents have been the focus of modeling efforts. Previously, Dunmyre et al. (J Comput Neurosci 1-24, 2011) proposed a model and studied the interactions of these currents within one self-coupled neuron. In this work, I consider two identical, reciprocally coupled model neurons and validate the reduction to the self-coupled case. I find that all of the dynamics of the two model neuron network and the regions of parameter space where these distinct dynamics are found are qualitatively preserved in the reduction to the self-coupled case.
Hass, Joachim; Hertäg, Loreen; Durstewitz, Daniel
2016-01-01
The prefrontal cortex is centrally involved in a wide range of cognitive functions and their impairment in psychiatric disorders. Yet, the computational principles that govern the dynamics of prefrontal neural networks, and link their physiological, biochemical and anatomical properties to cognitive functions, are not well understood. Computational models can help to bridge the gap between these different levels of description, provided they are sufficiently constrained by experimental data and capable of predicting key properties of the intact cortex. Here, we present a detailed network model of the prefrontal cortex, based on a simple computationally efficient single neuron model (simpAdEx), with all parameters derived from in vitro electrophysiological and anatomical data. Without additional tuning, this model could be shown to quantitatively reproduce a wide range of measures from in vivo electrophysiological recordings, to a degree where simulated and experimentally observed activities were statistically indistinguishable. These measures include spike train statistics, membrane potential fluctuations, local field potentials, and the transmission of transient stimulus information across layers. We further demonstrate that model predictions are robust against moderate changes in key parameters, and that synaptic heterogeneity is a crucial ingredient to the quantitative reproduction of in vivo-like electrophysiological behavior. Thus, we have produced a physiologically highly valid, in a quantitative sense, yet computationally efficient PFC network model, which helped to identify key properties underlying spike time dynamics as observed in vivo, and can be harvested for in-depth investigation of the links between physiology and cognition. PMID:27203563
Reconstruction of network topology using status-time-series data
NASA Astrophysics Data System (ADS)
Pandey, Pradumn Kumar; Badarla, Venkataramana
2018-01-01
Uncovering the heterogeneous connection pattern of a networked system from available status-time-series (STS) data of a dynamical process on the network is of great interest in network science and is known as a reverse engineering problem. Dynamical processes on a network are affected by the structure of the network, and this dependency between the diffusion dynamics and the network structure can be exploited to retrieve the connection pattern from diffusion data. Knowledge of the network structure can in turn help to devise control of the dynamics on the network. In this paper, we consider the problem of network reconstruction from available STS data using matrix analysis. The proposed method is tested successfully under susceptible-infected-susceptible (SIS) diffusion dynamics on real-world and computer-generated benchmark networks. The high accuracy and efficiency of the proposed reconstruction procedure define the novelty of the method, and it outperforms a compressed sensing theory (CST) based method of network reconstruction from STS data. Further, the same procedure is applied to weighted networks, where the ordering of the edges is identified with high accuracy.
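The core intuition, that a node's new infections implicate the nodes that were infected one step earlier, can be sketched in a few lines. This is a crude co-occurrence heuristic for illustration only, not the matrix-analysis method of the paper; the ring graph, rates, and top-k thresholding are illustrative assumptions.

```python
import random

def simulate_sis(adj, beta=0.3, mu=0.1, steps=400, seed=1):
    """Run SIS dynamics on `adj` (node -> neighbor list) and record
    the status-time-series: one Boolean infection vector per step."""
    rng = random.Random(seed)
    n = len(adj)
    state = [rng.random() < 0.5 for _ in range(n)]
    state[0] = True                               # guarantee a seed case
    series = [list(state)]
    for _ in range(steps):
        nxt = list(state)
        for i in range(n):
            if state[i]:
                if rng.random() < mu:             # recovery
                    nxt[i] = False
            else:
                k = sum(state[j] for j in adj[i])
                if rng.random() < 1 - (1 - beta) ** k:   # infection
                    nxt[i] = True
        state = nxt
        series.append(list(state))
    return series

def reconstruct_edges(series, n, top_k):
    """Crude reconstruction: whenever node i is newly infected, credit
    every node that was infected one step earlier; report the top_k
    highest-scoring pairs as inferred edges."""
    score = {}
    for prev, cur in zip(series, series[1:]):
        for i in range(n):
            if cur[i] and not prev[i]:            # new infection at i
                for j in range(n):
                    if j != i and prev[j]:
                        pair = (min(i, j), max(i, j))
                        score[pair] = score.get(pair, 0) + 1
    return set(sorted(score, key=score.get, reverse=True)[:top_k])
```

On a small ring graph this recovers a good fraction of the true edges; the paper's matrix-analysis formulation is far more accurate and efficient than a pairwise-count heuristic like this.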
Particle simulation on heterogeneous distributed supercomputers
NASA Technical Reports Server (NTRS)
Becker, Jeffrey C.; Dagum, Leonardo
1993-01-01
We describe the implementation and performance of a three dimensional particle simulation distributed between a Thinking Machines CM-2 and a Cray Y-MP. These are connected by a combination of two high-speed networks: a high-performance parallel interface (HIPPI) and an optical network (UltraNet). This is the first application to use this configuration at NASA Ames Research Center. We describe our experience implementing and using the application and report the results of several timing measurements. We show that the distribution of applications across disparate supercomputing platforms is feasible and has reasonable performance. In addition, several practical aspects of the computing environment are discussed.
Finite-time consensus for controlled dynamical systems in network
NASA Astrophysics Data System (ADS)
Zoghlami, Naim; Mlayeh, Rhouma; Beji, Lotfi; Abichou, Azgal
2018-04-01
The key challenges in networked dynamical systems are component heterogeneity, nonlinearities, and the high dimension of the state vector. In this paper, the emphasis is put on two classes of networked systems that cover most controlled driftless systems as well as systems with drift. For each model structure, defining homogeneous or heterogeneous multi-system behaviour, we derive protocols with sufficient conditions that lead to finite-time consensus. For the networking topology, we make use of fixed directed and undirected graphs. Our approaches are proved using finite-time stability theory and Lyapunov methods. As illustrative examples, homogeneous multi-unicycle kinematics and homogeneous/heterogeneous multi-second-order dynamics in networks are studied.
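As a concrete illustration (a standard finite-time protocol, not the specific protocols of the paper): driving each agent by a fractional-power signum of its local disagreement, dx_i/dt = -k sign(e_i)|e_i|^alpha with 0 < alpha < 1, yields consensus in finite rather than asymptotic time. The gains, graph, step size, and tolerance below are illustrative assumptions.

```python
import math

def finite_time_consensus(x0, neighbors, k=2.0, alpha=0.5,
                          dt=0.001, tol=1e-3, max_steps=200000):
    """Forward-Euler simulation of the finite-time protocol
    dx_i/dt = -k * sign(e_i) * |e_i|**alpha,
    where e_i = sum over neighbors j of (x_i - x_j)."""
    x = list(x0)
    for step in range(max_steps):
        if max(x) - min(x) < tol:
            return step * dt, x                   # consensus reached
        e = [sum(x[i] - x[j] for j in neighbors[i])
             for i in range(len(x))]
        x = [xi - dt * k * math.copysign(abs(ei) ** alpha, ei)
             for xi, ei in zip(x, e)]
    return None, x                                # did not converge
```

Running this on a four-agent path graph reaches the tolerance in a bounded time, in line with the finite-time stability arguments the abstract refers to.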
Genome network medicine: innovation to overcome huge challenges in cancer therapy.
Roukos, Dimitrios H
2014-01-01
The post-ENCODE era now shapes a new biomedical research direction for understanding the transcriptional and signaling networks driving gene expression and core cellular processes such as cell fate, survival, and apoptosis. Over the past half century, the Francis Crick 'central dogma' of a single gene/protein-phenotype (trait/disease) has defined biology, human physiology, disease, diagnostics, and drug discovery. However, the ENCODE project and several other genomic studies using high-throughput sequencing technologies, computational strategies, and imaging techniques to visualize regulatory networks provide evidence that the transcriptional process and gene expression are regulated by highly complex, dynamic molecular and signaling networks. This Focus article describes the limitations of linear experimentation-based diagnostics and therapeutics to cure advanced cancer and the need to move from reductionist to network-based approaches. Given the evident wide genomic heterogeneity, the power and challenges of next-generation sequencing (NGS) technologies to identify a patient's personal mutational landscape for tailoring the best target drugs for the individual patient are discussed. However, the available drugs are not capable of targeting aberrant signaling networks, and functional transcriptional heterogeneity and functional genome organization remain poorly understood. Therefore, the future of clinical genome network medicine, aiming to overcome multiple problems in the new fields of regulatory DNA mapping, noncoding RNA, enhancer RNAs, and the dynamic complexity of transcriptional circuitry, is also discussed, in expectation of innovative technology and a strong appreciation of clinical data and evidence-based medicine. The problems and potential solutions in the discovery of next-generation, molecular, and signaling circuitry-based biomarkers and drugs are explored. © 2013 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Sanz, J.; Pischel, K.; Hubler, D.
1992-01-01
An application for parallel computation on a combined cluster of powerful workstations and supercomputers was developed. Parallel Virtual Machine (PVM) is used as the message-passing layer in a macro-tasking parallelization of the Aerodynamic Inverse Design and Analysis for a Full Engine computer code. The heterogeneous nature of the cluster is handled entirely by the controlling host machine. Communication is established via Ethernet with the TCP/IP protocol over an open network. A reasonable overhead is imposed for internode communication, rendering an efficient utilization of the engaged processors. Perhaps one of the most interesting features of the system is its versatility, which permits use of whichever available computational resources are experiencing the least load at a given point in time.
Statistical mechanics of complex neural systems and high dimensional data
NASA Astrophysics Data System (ADS)
Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya
2013-03-01
Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks.
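A concrete instance of the message passing the review describes: the sum-product algorithm (belief propagation) computes exact marginals on tree-structured graphical models. The sketch below runs it on a three-spin Ising chain and checks it against brute-force enumeration; the couplings and fields are illustrative assumptions.

```python
import math
from itertools import product

def brute_marginal(n, edges, J, h, node):
    """P(s_node = +1) by exhaustive enumeration of all 2**n states."""
    Z = p_up = 0.0
    for s in product((-1, 1), repeat=n):
        w = math.exp(sum(J * s[i] * s[j] for i, j in edges) +
                     sum(h * si for si in s))
        Z += w
        if s[node] == 1:
            p_up += w
    return p_up / Z

def bp_marginal(n, edges, J, h, node):
    """Sum-product on a tree: recursively pass messages inward to `node`.
    Exact on trees; on loopy graphs it would only be approximate."""
    nbrs = {i: [] for i in range(n)}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)

    def message(src, dst):
        # message from src to dst, indexed by x_dst in (-1, +1)
        out = []
        for x_dst in (-1, 1):
            total = 0.0
            for x_src in (-1, 1):
                w = math.exp(J * x_src * x_dst + h * x_src)
                for k in nbrs[src]:
                    if k != dst:
                        w *= message(k, src)[(x_src + 1) // 2]
                total += w
            out.append(total)
        return out

    belief = []
    for x in (-1, 1):
        w = math.exp(h * x)
        for k in nbrs[node]:
            w *= message(k, node)[(x + 1) // 2]
        belief.append(w)
    return belief[1] / sum(belief)
```

On a tree the two answers agree to machine precision, which is exactly the sense in which message passing solves large inference problems by purely local computation.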
A world-wide databridge supported by a commercial cloud provider
NASA Astrophysics Data System (ADS)
Tat Cheung, Kwong; Field, Laurence; Furano, Fabrizio
2017-10-01
Volunteer computing has the potential to provide significant additional computing capacity for the LHC experiments. One of the challenges with exploiting volunteer computing is to support a global community of volunteers that provides heterogeneous resources. However, high energy physics applications require more data input and output than the CPU-intensive applications typically used by other volunteer computing projects. While the so-called databridge has already been proposed as a method to span the untrusted and trusted domains of volunteer computing and Grid computing respectively, globally transferring data between potentially poor-performing residential networks and CERN could be unreliable, leading to wasted resource usage. The expectation is that by placing a storage endpoint that is part of a wider, flexible geographical databridge deployment closer to the volunteers, the transfer success rate and the overall performance can be improved. This contribution investigates the provision of a globally distributed databridge implemented upon a commercial cloud provider.
Karanovic, Marinko; Muffels, Christopher T.; Tonkin, Matthew J.; Hunt, Randall J.
2012-01-01
Models of environmental systems have become increasingly complex, incorporating increasingly large numbers of parameters in an effort to represent physical processes on a scale approaching that at which they occur in nature. Consequently, the inverse problem of parameter estimation (specifically, model calibration) and subsequent uncertainty analysis have become increasingly computation-intensive endeavors. Fortunately, advances in computing have made computational power equivalent to that of dozens to hundreds of desktop computers accessible through a variety of alternate means: modelers have various possibilities, ranging from traditional Local Area Networks (LANs) to cloud computing. Commonly used parameter estimation software is well suited to take advantage of the availability of such increased computing power. Unfortunately, logistical issues become increasingly important as an increasing number and variety of computers are brought to bear on the inverse problem. To facilitate efficient access to disparate computer resources, the PESTCommander program documented herein has been developed to provide a Graphical User Interface (GUI) that facilitates the management of model files ("file management") and remote launching and termination of "slave" computers across a distributed network of computers ("run management"). In version 1.0 described here, PESTCommander can access and ascertain resources across traditional Windows LANs; however, the architecture of PESTCommander has been developed with the intent that future releases will be able to access computing resources (1) via trusted domains established in Wide Area Networks (WANs) in multiple remote locations and (2) via heterogeneous networks of Windows- and Unix-based operating systems. The design of PESTCommander also makes it suitable for extension to other computational resources, such as those that are available via cloud computing.
Version 1.0 of PESTCommander was developed primarily to work with the parameter estimation software PEST; the discussion presented in this report focuses on the use of the PESTCommander together with Parallel PEST. However, PESTCommander can be used with a wide variety of programs and models that require management, distribution, and cleanup of files before or after model execution. In addition to its use with the Parallel PEST program suite, discussion is also included in this report regarding the use of PESTCommander with the Global Run Manager GENIE, which was developed simultaneously with PESTCommander.
An interactive web-based system using cloud for large-scale visual analytics
NASA Astrophysics Data System (ADS)
Kaseb, Ahmed S.; Berry, Everett; Rozolis, Erik; McNulty, Kyle; Bontrager, Seth; Koh, Youngsol; Lu, Yung-Hsiang; Delp, Edward J.
2015-03-01
Network cameras have been growing rapidly in recent years. Thousands of public network cameras provide a tremendous amount of visual information about the environment. There is a need to analyze this valuable information for a better understanding of the world around us. This paper presents an interactive web-based system that enables users to execute image analysis and computer vision techniques on a large scale to analyze the data from more than 65,000 worldwide cameras. This paper focuses on how to use both the system's website and Application Programming Interface (API). Given a computer program that analyzes a single frame, the user needs to make only slight changes to the existing program and choose the cameras to analyze. The system handles the heterogeneity of the geographically distributed cameras, e.g., different brands and resolutions. The system allocates and manages Amazon EC2 and Windows Azure cloud resources to meet the analysis requirements.
An Efficient Offloading Scheme For MEC System Considering Delay and Energy Consumption
NASA Astrophysics Data System (ADS)
Sun, Yanhua; Hao, Zhe; Zhang, Yanhua
2018-01-01
With the increasing number of mobile devices, mobile edge computing (MEC), which provides cloud computing capabilities proximate to mobile devices in 5G networks, has been envisioned as a promising paradigm to enhance user experience. In this paper, we investigate a joint consideration of delay and energy consumption offloading scheme (JCDE) for an MEC system in 5G heterogeneous networks. An optimization problem is formulated to minimize the delay as well as the energy consumption of the offloading system, in which the delay and energy consumption of transmitting and computing tasks are taken into account. We adopt an iterative greedy algorithm to solve the optimization problem. Furthermore, simulations were carried out to validate the utility and effectiveness of our proposed scheme, and the effect of parameter variations on the system is analysed as well. Numerical results demonstrate that the proposed scheme improves delay and energy efficiency compared with an existing scheme.
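A minimal sketch of an iterative greedy offloading decision of this flavor, under a toy weighted delay-plus-energy cost model. All rates, powers, weights, and the slot constraint are illustrative assumptions, not the JCDE scheme itself.

```python
def offload_greedy(tasks, f_local=1e9, f_mec=8e9, rate=5e6,
                   p_tx=0.5, p_cpu=0.9, w_delay=1.0, w_energy=1.0,
                   mec_slots=2):
    """Greedily offload the task with the largest cost saving until no
    saving remains or the edge server's slots are exhausted.
    `tasks` is a list of (cpu_cycles, data_bits) pairs."""
    def local_cost(cycles, _bits):
        t = cycles / f_local                      # local execution time
        return w_delay * t + w_energy * p_cpu * t
    def edge_cost(cycles, bits):
        t_tx, t_exec = bits / rate, cycles / f_mec
        return w_delay * (t_tx + t_exec) + w_energy * p_tx * t_tx
    decision = [False] * len(tasks)               # False = run locally
    for _ in range(mec_slots):
        best, best_gain = None, 0.0
        for i, (cyc, bits) in enumerate(tasks):
            if not decision[i]:
                gain = local_cost(cyc, bits) - edge_cost(cyc, bits)
                if gain > best_gain:
                    best, best_gain = i, gain
        if best is None:                          # no further saving
            break
        decision[best] = True
    total = sum((edge_cost(c, b) if d else local_cost(c, b))
                for (c, b), d in zip(tasks, decision))
    return decision, total
```

Note how a task with a heavy data payload stays local (transmission dominates), while compute-heavy tasks with small inputs are offloaded, which is the delay/energy trade-off the abstract describes.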
Experience with abstract notation one
NASA Technical Reports Server (NTRS)
Harvey, James D.; Weaver, Alfred C.
1990-01-01
The development of computer science has produced a vast number of machine architectures, programming languages, and compiler technologies. The cross product of these three characteristics defines the spectrum of previous and present data representation methodologies. With regard to computer networks, the uniqueness of these methodologies presents an obstacle when disparate host environments are to be interconnected. Interoperability within a heterogeneous network relies upon the establishment of data representation commonality. The International Standards Organization (ISO) is currently developing the abstract syntax notation one standard (ASN.1) and the basic encoding rules standard (BER) that collectively address this problem. When used within the presentation layer of the open systems interconnection reference model, these two standards provide the data representation commonality required to facilitate interoperability. The details of a compiler that was built to automate the use of ASN.1 and BER are described. From this experience, insights into both standards are given and potential problems relating to this development effort are discussed.
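For instance, BER's tag-length-value encoding of an INTEGER (universal tag 0x02) is compact enough to sketch directly. This toy encoder handles only short-form lengths (content under 128 octets) and is an illustration, not a production ASN.1 codec.

```python
def ber_encode_integer(value):
    """BER/DER-encode an INTEGER: tag 0x02, short-form definite length,
    minimal big-endian two's-complement content octets."""
    if value == 0:
        content = b"\x00"
    else:
        n = (value.bit_length() + 8) // 8     # room for the sign bit
        content = value.to_bytes(n, "big", signed=True)
        # strip redundant leading octets: 0x00 before a clear sign bit,
        # 0xFF before a set sign bit
        while len(content) > 1 and (
            (content[0] == 0x00 and content[1] < 0x80) or
            (content[0] == 0xFF and content[1] >= 0x80)):
            content = content[1:]
    return bytes([0x02, len(content)]) + content
```

Because both ends agree on this machine-independent encoding, a big-endian Cray and a little-endian workstation can exchange the integer 300 as the same three-octet value `02 02 01 2c`, which is precisely the representation commonality the presentation layer provides.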
Deep graphs—A general framework to represent and analyze heterogeneous complex systems across scales
NASA Astrophysics Data System (ADS)
Traxl, Dominik; Boers, Niklas; Kurths, Jürgen
2016-06-01
Network theory has proven to be a powerful tool in describing and analyzing systems by modelling the relations between their constituent objects. Particularly in recent years, great progress has been made by augmenting "traditional" network theory in order to account for the multiplex nature of many networks, multiple types of connections between objects, the time-evolution of networks, networks of networks and other intricacies. However, existing network representations still lack crucial features in order to serve as a general data analysis tool. These include, most importantly, an explicit association of information with possibly heterogeneous types of objects and relations, and a conclusive representation of the properties of groups of nodes as well as the interactions between such groups on different scales. In this paper, we introduce a collection of definitions resulting in a framework that, on the one hand, entails and unifies existing network representations (e.g., network of networks and multilayer networks), and on the other hand, generalizes and extends them by incorporating the above features. To implement these features, we first specify the nodes and edges of a finite graph as sets of properties (which are permitted to be arbitrary mathematical objects). Second, the mathematical concept of partition lattices is transferred to network theory in order to demonstrate how partitioning the node and edge set of a graph into supernodes and superedges allows us to aggregate, compute, and allocate information on and between arbitrary groups of nodes. The derived partition lattice of a graph, which we denote by deep graph, constitutes a concise, yet comprehensive representation that enables the expression and analysis of heterogeneous properties, relations, and interactions on all scales of a complex system in a self-contained manner.
Furthermore, to be able to utilize existing network-based methods and models, we derive different representations of multilayer networks from our framework and demonstrate the advantages of our representation. On the basis of the formal framework described here, we provide a rich, fully scalable (and self-explanatory) software package that integrates into the PyData ecosystem and offers interfaces to popular network packages, making it a powerful, general-purpose data analysis toolkit. We exemplify an application of deep graphs using a real world dataset, comprising 16 years of satellite-derived global precipitation measurements. We deduce a deep graph representation of these measurements in order to track and investigate local formations of spatio-temporal clusters of extreme precipitation events.
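The supernode/superedge idea — partition nodes by a property, then aggregate information on and between the groups — can be sketched minimally. This is not the deep-graph software package's API; the names and the "region" property are illustrative assumptions.

```python
from collections import defaultdict

def partition_graph(nodes, edges, key):
    """Partition nodes by property `key`: aggregate node values into
    supernodes and count links between groups as superedge weights.
    `nodes` maps node name -> property dict; `edges` is a pair list."""
    group = {}
    supernodes = defaultdict(lambda: {"count": 0, "value": 0.0})
    for name, props in nodes.items():
        g = props[key]
        group[name] = g
        supernodes[g]["count"] += 1
        supernodes[g]["value"] += props["value"]
    superedges = defaultdict(int)
    for u, v in edges:
        gu, gv = group[u], group[v]
        if gu != gv:                      # intergroup link -> superedge
            superedges[tuple(sorted((gu, gv)))] += 1
    return dict(supernodes), dict(superedges)
```

Choosing ever-coarser `key`s walks up the partition lattice, so the same data can be inspected at any scale, which is the point of the deep-graph representation.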
Gronau, Greta; Jacobsen, Matthew M.; Huang, Wenwen; Rizzo, Daniel J.; Li, David; Staii, Cristian; Pugno, Nicola M.; Wong, Joyce Y.; Kaplan, David L.; Buehler, Markus J.
2016-01-01
Scalable computational modelling tools are required to guide the rational design of complex hierarchical materials with predictable functions. Here, we utilize mesoscopic modelling, integrated with genetic block copolymer synthesis and bioinspired spinning process, to demonstrate de novo materials design that incorporates chemistry, processing and material characterization. We find that intermediate hydrophobic/hydrophilic block ratios observed in natural spider silks and longer chain lengths lead to outstanding silk fibre formation. This design by nature is based on the optimal combination of protein solubility, self-assembled aggregate size and polymer network topology. The original homogeneous network structure becomes heterogeneous after spinning, enhancing the anisotropic network connectivity along the shear flow direction. Extending beyond the classical polymer theory, with insights from the percolation network model, we illustrate the direct proportionality between network conductance and fibre Young's modulus. This integrated approach provides a general path towards de novo functional network materials with enhanced mechanical properties and beyond (optical, electrical or thermal) as we have experimentally verified. PMID:26017575
Statistical inference to advance network models in epidemiology.
Welch, David; Bansal, Shweta; Hunter, David R
2011-03-01
Contact networks are playing an increasingly important role in the study of epidemiology. Most of the existing work in this area has focused on considering the effect of underlying network structure on epidemic dynamics by using tools from probability theory and computer simulation. This work has provided much insight into the role that heterogeneity in host contact patterns plays in infectious disease dynamics. Despite the important understanding afforded by the probability and simulation paradigm, this approach does not directly address important questions about the structure of contact networks, such as what is the best network model for a particular mode of disease transmission, how parameter values of a given model should be estimated, or how precisely the data allow us to estimate these parameter values. We argue that these questions are best answered within a statistical framework and discuss the role of statistical inference in estimating contact networks from epidemiological data. Copyright © 2011 Elsevier B.V. All rights reserved.
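As the simplest possible instance of fitting a network model to data: the Erdős-Rényi G(n, p) likelihood has a closed-form maximum at the observed edge density. Richer models (e.g., ERGMs) generalize this idea with more structure. The sketch is purely illustrative, not a method from the paper.

```python
import math

def gnp_loglik(n_nodes, n_edges, p):
    """Log-likelihood of an Erdos-Renyi G(n, p) model given an observed
    simple undirected graph with n_edges edges: each of the C(n, 2)
    node pairs is an independent Bernoulli(p) trial."""
    pairs = n_nodes * (n_nodes - 1) // 2
    return (n_edges * math.log(p) +
            (pairs - n_edges) * math.log(1 - p))

def gnp_mle(n_nodes, n_edges):
    """Closed-form maximum-likelihood estimate: the edge density."""
    return n_edges / (n_nodes * (n_nodes - 1) // 2)
```

The same likelihood machinery also answers the precision question the abstract raises: the curvature of `gnp_loglik` around its maximum quantifies how well the data pin down the parameter.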
Rapid innovation diffusion in social networks.
Kreindler, Gabriel E; Young, H Peyton
2014-07-22
Social and technological innovations often spread through social networks as people respond to what their neighbors are doing. Previous research has identified specific network structures, such as local clustering, that promote rapid diffusion. Here we derive bounds that are independent of network structure and size, such that diffusion is fast whenever the payoff gain from the innovation is sufficiently high and the agents' responses are sufficiently noisy. We also provide a simple method for computing an upper bound on the expected time it takes for the innovation to become established in any finite network. For example, if agents choose log-linear responses to what their neighbors are doing, it takes on average less than 80 revision periods for the innovation to diffuse widely in any network, provided that the error rate is at least 5% and the payoff gain (relative to the status quo) is at least 150%. Qualitatively similar results hold for other smoothed best-response functions and populations that experience heterogeneous payoff shocks.
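The log-linear response rule can be simulated directly. The sketch below uses a hypothetical two-action coordination payoff on a complete network with a block of initial adopters; the parameter values and the 90% adoption criterion are illustrative assumptions, not the bounds derived in the paper.

```python
import math
import random

def diffusion_time(neighbors, n_init, payoff_gain=1.5, beta=2.0,
                   periods=200, seed=7):
    """Asynchronous logit-response dynamics: each period every agent
    revises once in random order, adopting the innovation with
    log-linear probability given its coordination payoff against its
    neighbors' current choices.  Returns the first period in which
    more than 90% of agents have adopted, or None."""
    rng = random.Random(seed)
    agents = list(neighbors)
    state = {i: 1 if idx < n_init else 0       # 1 = innovation adopted
             for idx, i in enumerate(agents)}
    for t in range(1, periods + 1):
        rng.shuffle(agents)
        for i in agents:
            nbrs = neighbors[i]
            frac = sum(state[j] for j in nbrs) / max(len(nbrs), 1)
            # payoff difference: innovation pays (1 + gain) per adopter
            # neighbor, status quo pays 1 per non-adopter neighbor
            diff = (1 + payoff_gain) * frac - (1 - frac)
            p = 1 / (1 + math.exp(-beta * diff))   # logit response
            state[i] = 1 if rng.random() < p else 0
        if sum(state.values()) > 0.9 * len(state):
            return t
    return None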
Implementing Internet of Things in a military command and control environment
NASA Astrophysics Data System (ADS)
Raglin, Adrienne; Metu, Somiya; Russell, Stephen; Budulas, Peter
2017-05-01
While the term Internet of Things (IoT) has been coined relatively recently, it has deep roots in multiple other areas of research including cyber-physical systems, pervasive and ubiquitous computing, embedded systems, mobile ad-hoc networks, wireless sensor networks, cellular networks, wearable computing, cloud computing, big data analytics, and intelligent agents. As the Internet of Things, these technologies have created a landscape of diverse heterogeneous capabilities and protocols that will require adaptive controls to effect linkages and changes that are useful to end users. In the context of military applications, it will be necessary to integrate disparate IoT devices into a common platform that necessarily must interoperate with proprietary military protocols, data structures, and systems. In this environment, IoT devices and data will not be homogeneous and provenance-controlled (i.e. single vendor/source/supplier owned). This paper presents a discussion of the challenges of integrating varied IoT devices and related software in a military environment. A review of contemporary commercial IoT protocols is given and as a practical example, a middleware implementation is proffered that provides transparent interoperability through a proactive message dissemination system. The implementation is described as a framework through which military applications can integrate and utilize commercial IoT in conjunction with existing military sensor networks and command and control (C2) systems.
Altered Micro-RNA Degradation Promotes Tumor Heterogeneity: A Result from Boolean Network Modeling.
Wu, Yunyi; Krueger, Gerhard R F; Wang, Guanyu
2016-02-01
Cancer heterogeneity may reflect differential dynamical outcomes of the regulatory network encompassing biomolecules at both transcriptional and post-transcriptional levels. In other words, differential gene-expression profiles may correspond to different stable steady states of a mathematical model for simulation of biomolecular networks. To test this hypothesis, we simplified a regulatory network that is important for soft-tissue sarcoma metastasis and heterogeneity, comprising transcription factors, micro-RNAs, and signaling components of the NOTCH pathway. We then used a Boolean network model to simulate the dynamics of this network, and particularly investigated the consequences of differential miRNA degradation modes. We found that efficient miRNA degradation is crucial for sustaining a homogenous and healthy phenotype, while defective miRNA degradation may lead to multiple stable steady states and ultimately to carcinogenesis and heterogeneity. Copyright© 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
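The hypothesis can be illustrated with a deliberately tiny, hypothetical Boolean circuit (three nodes; not the paper's sarcoma network): when miRNA degradation is disabled, the synchronous-update dynamics gain an extra stable steady state.

```python
from itertools import product

def fixed_points(update, n):
    """Enumerate all 2**n Boolean states and return those mapped to
    themselves by the synchronous update function."""
    return [s for s in product((0, 1), repeat=n)
            if tuple(update(s)) == s]

# Hypothetical 3-node wiring: TF drives the miRNA, miRNA represses a target.
def healthy(s):
    tf, mirna, target = s
    return (tf,                       # external input, held constant
            tf,                       # miRNA transcribed by TF, degraded
            int(tf and not mirna))    #   otherwise; target repressed

def defective(s):                     # miRNA degradation disabled:
    tf, mirna, target = s             # once produced, miRNA stays on
    return (tf,
            int(tf or mirna),
            int(tf and not mirna))
```

With efficient degradation the circuit has two steady states; the defective variant adds a third (miRNA locked on with the TF off), a toy analogue of how altered miRNA turnover can multiply stable expression profiles.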
Optimal forwarding ratio on dynamical networks with heterogeneous mobility
NASA Astrophysics Data System (ADS)
Gan, Yu; Tang, Ming; Yang, Hanxin
2013-05-01
Since the discovery of non-Poisson statistics of human mobility trajectories, more attention has been paid to understanding the role of these patterns in different dynamics. In this study, we first introduce heterogeneous mobility of mobile agents into dynamical networks, and then investigate packet forwarding strategies on the heterogeneous dynamical networks. We find that a faster speed and a higher proportion of high-speed agents can enhance the network throughput and reduce the mean traveling time under random forwarding. A hierarchical structure in the dependence on the high speed is observed: the network throughput remains unchanged at both small and large high-speed values. It is also interesting to find that a slightly preferential forwarding to high-speed agents can maximize the network capacity. Through theoretical analysis and numerical simulations, we show that the optimal forwarding ratio stems from the local structural heterogeneity of low-speed agents.
Efficient Process Migration for Parallel Processing on Non-Dedicated Networks of Workstations
NASA Technical Reports Server (NTRS)
Chanchio, Kasidit; Sun, Xian-He
1996-01-01
This paper presents the design and preliminary implementation of MpPVM, a software system that supports process migration for PVM application programs in a non-dedicated heterogeneous computing environment. The new concepts of migration points, migration point analysis, and necessary data analysis are introduced. In MpPVM, process migrations occur only at previously inserted migration points. Migration point analysis determines appropriate locations to insert migration points, whereas necessary data analysis provides a minimum set of variables to be transferred at each migration point. A new methodology for performing reliable point-to-point data communications in a migration environment is also discussed. Finally, a preliminary implementation of MpPVM and its experimental results are presented, showing the correctness and promising performance of our process migration mechanism in a scalable non-dedicated heterogeneous computing environment. While MpPVM is developed on top of PVM, the process migration methodology introduced in this study is general and can be applied to any distributed software environment.
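The migration-point idea, stopping only at predeclared points and shipping only the minimal live-variable set, can be caricatured in a few lines. This is a toy sketch, not MpPVM: the "necessary data" here is just the pair `(total, i)`, and pickling stands in for transferring state between hosts:

```python
import pickle

def run(checkpoint=None, migrate_at=None):
    """Toy computation with a migration point inside its loop. At the point,
    only the minimal live variables (total, i) are captured, so a peer
    process can resume from the snapshot; everything else is recomputable."""
    if checkpoint is None:
        total, i = 0, 0
    else:
        total, i = pickle.loads(checkpoint)   # resume from migrated state
    while i < 10:
        if migrate_at is not None and i == migrate_at:
            return ('MIGRATE', pickle.dumps((total, i)))
        total += i * i
        i += 1
    return ('DONE', total)

status, snap = run(migrate_at=5)          # "source host" stops at the point
status2, result = run(checkpoint=snap)    # "destination host" resumes
print(status, status2, result)            # MIGRATE DONE 285
```

Because migration can only happen at the declared point, the snapshot is always a small, well-defined set of variables rather than a full process image.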
Liu, Jia; Gong, Maoguo; Qin, Kai; Zhang, Puzhao
2018-03-01
We propose an unsupervised deep convolutional coupling network for change detection based on two heterogeneous images acquired by optical sensors and radars on different dates. Most existing change detection methods are based on homogeneous images. Due to the complementary properties of optical and radar sensors, there is increasing interest in change detection based on heterogeneous images. The proposed network is symmetric, with each side consisting of one convolutional layer and several coupling layers. The two input images, connected to the two sides of the network respectively, are transformed into a feature space where their feature representations become more consistent. In this feature space, the difference map is calculated, which then leads to the final detection map by applying a thresholding algorithm. The network parameters are learned by optimizing a coupling function. The learning process is unsupervised, which differs from most existing change detection methods based on heterogeneous images. Experimental results on both homogeneous and heterogeneous images demonstrate the promising performance of the proposed network compared with several existing approaches.
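The final thresholding step can be illustrated with a plain Otsu binarization of a pixelwise difference map. The coupling-network feature extraction itself is omitted; the flat arrays below are synthetic stand-ins for aligned feature images, and Otsu is one common choice of thresholding algorithm, not necessarily the paper's:

```python
def otsu_threshold(values, levels=256):
    """Otsu's method: choose the threshold maximizing between-class variance
    of the histogram of integer values in [0, levels)."""
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(levels - 1):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        m0 = sum0 / w0
        m1 = (total_sum - sum0) / (total - w0)
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def change_map(feat_a, feat_b):
    """Pixelwise absolute difference of two aligned feature images,
    binarized with Otsu's threshold: 1 = changed, 0 = unchanged."""
    diff = [abs(a - b) for a, b in zip(feat_a, feat_b)]
    t = otsu_threshold(diff)
    return [int(d > t) for d in diff]

# Mostly-unchanged scene with one changed patch (synthetic values).
a = [10, 12, 11, 10, 200, 210, 12, 11]
b = [11, 10, 12, 11,  90,  80, 10, 12]
print(change_map(a, b))   # [0, 0, 0, 0, 1, 1, 0, 0]
```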
Collision Resolution Scheme with Offset for Improved Performance of Heterogeneous WLAN
NASA Astrophysics Data System (ADS)
Upadhyay, Raksha; Vyavahare, Prakash D.; Tokekar, Sanjiv
2016-03-01
The CSMA/CA-based DCF of the 802.11 MAC layer employs a best-effort delivery model in which all stations compete for channel access with the same priority. Heterogeneous conditions result in unfairness among stations and degradation in throughput; providing different priorities to different applications for the required quality of service in heterogeneous networks is therefore a challenging task. This paper proposes a collision resolution scheme built on the novel concept of an offset, which is suitable for heterogeneous networks. A station that selects its random contention value with an offset has a reduced probability of collision. An expression for the optimum value of the offset is also derived. Results show that the proposed scheme, when applied to heterogeneous networks, achieves better throughput and fairness than the conventional scheme, and that it also exhibits higher throughput and fairness with reduced delay in homogeneous networks.
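Why an offset reduces collisions can be seen from a back-of-the-envelope model (my simplification, not the paper's derivation): if one station draws a backoff slot uniformly from [0, W) and another from [offset, offset + W), the two windows overlap in only W - offset slots, so the chance of both drawing the same slot shrinks as the offset grows:

```python
from fractions import Fraction

def collision_prob(W, offset):
    """P(two stations pick the same backoff slot) when one contends
    uniformly in [0, W) and the other in [offset, offset + W),
    independently. Only the overlapping slots can collide."""
    overlap = max(0, W - offset)
    return Fraction(overlap, W * W)

W = 16
for off in (0, 4, 8, 16):
    print(off, collision_prob(W, off))
# offset 0 gives 1/16; a full-window offset makes collisions impossible
```

This ignores retransmissions and more than two contending stations, which is where the optimum-offset derivation in the paper comes in.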
MPI implementation of PHOENICS: A general purpose computational fluid dynamics code
NASA Astrophysics Data System (ADS)
Simunovic, S.; Zacharia, T.; Baltas, N.; Spalding, D. B.
1995-03-01
PHOENICS is a suite of computational analysis programs used for simulation of fluid flow, heat transfer, and dynamical reaction processes. The parallel version of EARTH, the solver of the Computational Fluid Dynamics (CFD) program PHOENICS, has been implemented using the Message Passing Interface (MPI) standard. The MPI implementation of PHOENICS makes this computational tool portable to a wide range of parallel machines and enables the use of high-performance computing for large-scale computational simulations. MPI libraries are available on several parallel architectures, making the program usable across different architectures as well as on heterogeneous computer networks. The Intel Paragon NX and MPI versions of the program have been developed and tested on the massively parallel supercomputers Intel Paragon XP/S 5, XP/S 35, and Kendall Square Research, and on the multiprocessor SGI Onyx computer at Oak Ridge National Laboratory. Preliminary testing has shown scalable performance for reasonably sized computational domains.
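The message-passing pattern behind such parallel CFD solvers is domain decomposition with halo (ghost-cell) exchange. A minimal dependency-free sketch, with Python lists playing the role of MPI ranks and the halo copies standing in for MPI_Sendrecv calls (none of this is PHOENICS code), showing that the decomposed sweep reproduces the serial one:

```python
def serial_step(u):
    """One Jacobi relaxation sweep on a 1-D grid; boundaries held fixed."""
    return [u[0]] + [(u[i - 1] + u[i + 1]) / 2
                     for i in range(1, len(u) - 1)] + [u[-1]]

def decomposed_step(blocks):
    """The same sweep with the grid split across 'ranks'. Each block first
    gathers one-cell halos from its neighbours (the MPI_Sendrecv stand-in),
    then updates its own cells; global boundary cells stay fixed."""
    nb = len(blocks)
    result = []
    for r, blk in enumerate(blocks):
        halo_l = blocks[r - 1][-1] if r > 0 else None       # from rank r-1
        halo_r = blocks[r + 1][0] if r < nb - 1 else None   # from rank r+1
        ext = ([halo_l] if halo_l is not None else []) + blk + \
              ([halo_r] if halo_r is not None else [])
        off = 1 if halo_l is not None else 0
        new = []
        for i, v in enumerate(blk):
            boundary = (r == 0 and i == 0) or (r == nb - 1 and i == len(blk) - 1)
            j = i + off
            new.append(v if boundary else (ext[j - 1] + ext[j + 1]) / 2)
        result.append(new)
    return result

u = [100.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 50.0]
print(serial_step(u))
print(sum(decomposed_step([u[:4], u[4:]]), []))   # identical result
```

With real MPI (e.g. mpi4py) the halo lines become point-to-point sends and receives between neighbouring ranks, but the bookkeeping is the same.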
Variable synaptic strengths controls the firing rate distribution in feedforward neural networks.
Ly, Cheng; Marsat, Gary
2018-02-01
Heterogeneity of firing rate statistics is known to have severe consequences for neural coding. Recent experimental recordings in weakly electric fish indicate that the distribution width of superficial pyramidal cell firing rates (trial- and time-averaged) in the electrosensory lateral line lobe (ELL) depends on the stimulus, and that network inputs can mediate changes in the firing rate distribution across the population. We previously developed theoretical methods to understand how two attributes (synaptic and intrinsic heterogeneity) interact and alter the firing rate distribution in a population of integrate-and-fire neurons with random recurrent coupling. Inspired by our experimental data, we extend these theoretical results to a delayed feedforward spiking network that qualitatively captures the changes of firing rate heterogeneity observed in in vivo recordings. We demonstrate how heterogeneous neural attributes alter firing rate heterogeneity, accounting for the effects of various sensory stimuli. The model predicts how the strength of the effective network connectivity is related to intrinsic heterogeneity in such delayed feedforward networks: the strength of the feedforward input is positively correlated with excitability (threshold value for spiking) when firing rate heterogeneity is low, and negatively correlated with excitability when firing rate heterogeneity is high. We also show how our theory can be used to predict effective neural architecture. We demonstrate that neural attributes do not interact in a simple manner but rather in a complex, stimulus-dependent fashion to control neural heterogeneity, and discuss how this can ultimately shape population codes.
Mitigating wildland fire hazard using complex network centrality measures
NASA Astrophysics Data System (ADS)
Russo, Lucia; Russo, Paola; Siettos, Constantinos I.
2016-12-01
We show how to distribute firebreaks in heterogeneous forest landscapes in the presence of strong wind using complex network centrality measures. The proposed framework is essentially a two-tier one: the inner part uses a state-of-the-art cellular automata model to compute the weights of the underlying lattice network, while the outer part schedules the allocation of the firebreaks according to a hierarchy of the centralities that most influence the spread of fire. For illustration purposes, we applied the proposed framework to a real wildfire that broke out on Spetses Island, Greece, in 1990. We evaluate the scheme against the benchmark of random firebreak allocation under the weather conditions of the real incident, i.e., in the presence of relatively strong winds.
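The outer tier, ranking lattice cells by centrality and placing firebreaks at the top of the hierarchy, can be sketched with stdlib BFS on a tiny unweighted lattice. This uses closeness centrality as a simple example measure; the paper's CA-derived edge weights and choice of centrality are not reproduced here:

```python
from collections import deque

def closeness(graph):
    """Closeness centrality of every node in an unweighted graph,
    computed by breadth-first search from each node."""
    scores = {}
    for src in graph:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        scores[src] = (len(dist) - 1) / sum(dist.values()) if len(dist) > 1 else 0.0
    return scores

def firebreaks(graph, k):
    """Allocate k firebreak cells at the most central lattice nodes."""
    c = closeness(graph)
    return sorted(graph, key=lambda n: -c[n])[:k]

# 3x3 lattice as a stand-in for the landscape network.
lattice = {(i, j): [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + di < 3 and 0 <= j + dj < 3]
           for i in range(3) for j in range(3)}
print(firebreaks(lattice, 1))   # the centre cell sits on the most paths
```

On the weighted, wind-biased network of the paper the ranking would of course differ, but the allocate-by-centrality loop is the same.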
An Integrated Testbed for Cooperative Perception with Heterogeneous Mobile and Static Sensors
Jiménez-González, Adrián; Martínez-De Dios, José Ramiro; Ollero, Aníbal
2011-01-01
Cooperation among devices with different sensing, computing and communication capabilities provides interesting possibilities in a growing number of problems and applications, including domotics (domestic robotics), environmental monitoring and intelligent cities, among others. Despite the increasing interest in academic and industrial communities, experimental tools for evaluation and comparison of cooperative algorithms for such heterogeneous technologies are still very scarce. This paper presents a remote testbed with mobile robots and Wireless Sensor Networks (WSN) equipped with a set of low-cost off-the-shelf sensors commonly used in cooperative perception research and applications, which present a high degree of heterogeneity in their technology, sensed magnitudes, features, output bandwidth, interfaces and power consumption, among others. Its open and modular architecture allows tight integration and interoperability between mobile robots and WSN through a bidirectional protocol that enables full interaction. Moreover, the integration of standard tools and interfaces increases usability, allowing an easy extension to new hardware and software components and the reuse of code. Different levels of decentralization are considered, supporting approaches from totally distributed to centralized. Developed for the EU-funded Cooperating Objects Network of Excellence (CONET) and currently available at the School of Engineering of Seville (Spain), the testbed provides full remote control through the Internet. Numerous experiments have been performed, some of which are described in the paper. PMID:22247679
Heterogeneity induces rhythms of weakly coupled circadian neurons
NASA Astrophysics Data System (ADS)
Gu, Changgui; Liang, Xiaoming; Yang, Huijie; Rohling, Jos H. T.
2016-02-01
The main clock, located in the suprachiasmatic nucleus (SCN), regulates circadian rhythms in mammals. The SCN is composed of approximately twenty thousand heterogeneous self-oscillating neurons that have intrinsic periods varying from 22 h to 28 h. They are coupled through neurotransmitters and neuropeptides to form a network and output a uniform periodic rhythm. Previous studies found that the heterogeneity of the neurons leads to attenuation of the circadian rhythm under strong cellular coupling. In the present study, we investigate the heterogeneity of the neurons and of the network under constant darkness. Interestingly, we found that the heterogeneity of weakly coupled neurons enables them to oscillate and strengthens the circadian rhythm. In addition, we found that the period of the SCN network increases with the degree of heterogeneity. As the network heterogeneity does not change the dynamics of the rhythm, our study shows that neuronal heterogeneity is vitally important for rhythm generation in weakly coupled systems such as the SCN. It also provides a new method to strengthen the circadian rhythm, as well as an alternative explanation for differences in free-running periods between species in the absence of the daily cycle.
Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures
2017-10-04
Final report for the project "Efficient Numeric and Geometric Computations using Heterogeneous Shared Memory Architectures" (University of North Carolina at Chapel Hill). The project developed algorithms for scientific and geometric computing that exploit the power and performance efficiency of heterogeneous shared memory architectures.
NASA Astrophysics Data System (ADS)
Myre, Joseph M.
Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special-purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large-scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general-purpose processors and special-purpose accelerators, the speed and problem size of many simulations can be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH---a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes.
Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
NASA Astrophysics Data System (ADS)
Xiang, Min; Qu, Qinqin; Chen, Cheng; Tian, Li; Zeng, Lingkang
2017-11-01
To improve the reliability of communication services in the smart distribution grid (SDG), an access selection algorithm based on dynamic network status and different service types for heterogeneous wireless networks is proposed. The network performance index values are obtained in real time by a multimode terminal, and the variation trend of the index values is analyzed using the growth matrix. The index weights are calculated by the entropy-weight method and then modified by rough set theory to obtain the final weights. Grey relational analysis is then used to rank the candidate networks, and the optimum communication network is selected. Simulation results show that the proposed algorithm can effectively perform dynamic access selection in the heterogeneous wireless networks of the SDG and reduce the network blocking probability.
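Two of the building blocks, entropy-based index weighting and grey relational ranking against an ideal network, are standard multi-criteria techniques and can be sketched directly. The candidate matrix and index names below are hypothetical, and the rough-set weight correction and growth-matrix trend analysis are omitted:

```python
import math

def entropy_weights(matrix):
    """Objective index weights from Shannon entropy of the decision matrix.
    Rows = candidate networks, columns = benefit-type performance indices;
    indices whose values vary more across candidates get larger weights."""
    m = len(matrix)
    weights = []
    for col in zip(*matrix):
        p = [v / sum(col) for v in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        weights.append(1 - e)
    s = sum(weights)
    return [w / s for w in weights]

def grey_relational_grades(matrix, weights, rho=0.5):
    """Weighted grey relational grade of each candidate against the ideal
    reference (column-wise best after min-max normalization)."""
    cols = list(zip(*matrix))
    norm = [[(v - min(c)) / (max(c) - min(c)) for v, c in zip(row, cols)]
            for row in matrix]
    deltas = [[abs(v - 1.0) for v in row] for row in norm]   # ideal = 1.0
    dmax = max(max(r) for r in deltas)
    dmin = min(min(r) for r in deltas)
    return [sum(w * (dmin + rho * dmax) / (d + rho * dmax)
                for w, d in zip(weights, row)) for row in deltas]

# Hypothetical candidates scored on (throughput, 1/delay, reliability).
candidates = [[10.0, 0.8, 0.99],   # e.g. fibre backhaul
              [ 6.0, 0.5, 0.95],   # e.g. LTE
              [ 2.0, 0.9, 0.90]]   # e.g. Wi-Fi mesh
w = entropy_weights(candidates)
g = grey_relational_grades(candidates, w)
best = max(range(len(g)), key=g.__getitem__)
print(w, g, best)
```

Cost-type indices (e.g. raw delay) would first be inverted or reverse-normalized; here the matrix is assumed to be all benefit-type.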
NASA Astrophysics Data System (ADS)
Jablonski, Piotr; Poe, Gina; Zochowski, Michal
2007-03-01
The hippocampus has the capacity for reactivating recently acquired memories and it is hypothesized that one of the functions of sleep reactivation is the facilitation of consolidation of novel memory traces. The dynamic and network processes underlying such a reactivation remain, however, unknown. We show that such a reactivation characterized by local, self-sustained activity of a network region may be an inherent property of the recurrent excitatory-inhibitory network with a heterogeneous structure. The entry into the reactivation phase is mediated through a physiologically feasible regulation of global excitability and external input sources, while the reactivated component of the network is formed through induced network heterogeneities during learning. We show that structural changes needed for robust reactivation of a given network region are well within known physiological parameters.
Toward heterogeneity in feedforward network with synaptic delays based on FitzHugh-Nagumo model
NASA Astrophysics Data System (ADS)
Qin, Ying-Mei; Men, Cong; Zhao, Jia; Han, Chun-Xiao; Che, Yan-Qiu
2018-01-01
We focus on the role of heterogeneity in the propagation of firing patterns in feedforward networks (FFNs). The effects of heterogeneities in both neuronal excitability parameters and synaptic delays are investigated systematically. Neuronal heterogeneity is found to modulate firing rates and spiking regularity by changing the excitability of the network. Synaptic delays are strongly related to desynchronized and synchronized firing patterns of the FFN, which indicates that synaptic delays may play a significant role in bridging rate coding and temporal coding. Furthermore, a quasi-coherence resonance (quasi-CR) phenomenon is observed in the parameter domain of connection probability and delay heterogeneity. Together, these phenomena enable a detailed characterization of neuronal heterogeneity in FFNs, which may play an indispensable role in reproducing the important properties of in vivo experiments.
Asynchronous Incremental Stochastic Dual Descent Algorithm for Network Resource Allocation
NASA Astrophysics Data System (ADS)
Bedi, Amrit Singh; Rajawat, Ketan
2018-05-01
Stochastic network optimization problems entail finding resource allocation policies that are optimal on average but must be designed in an online fashion. Such problems are ubiquitous in communication networks, where resources such as energy and bandwidth are divided among nodes to satisfy certain long-term objectives. This paper proposes an asynchronous incremental dual descent resource allocation algorithm that utilizes delayed stochastic gradients for carrying out its updates. The proposed algorithm is well suited to heterogeneous networks, as it allows computationally challenged or energy-starved nodes to postpone their updates at times. The asymptotic analysis of the proposed algorithm is carried out, establishing dual convergence under both constant and diminishing step sizes. It is also shown that with a constant step size, the proposed resource allocation policy is asymptotically near-optimal. An application involving multi-cell coordinated beamforming is detailed, demonstrating the usefulness of the proposed algorithm.
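The flavour of dual descent with delayed gradients can be shown on a toy allocation problem (my example, not the paper's model): maximize the sum of log(1+x_i) utilities subject to a shared budget. Nodes report allocations computed from a stale multiplier, standing in for slow or energy-starved nodes, yet the dual variable still converges to the optimum n/(budget+n):

```python
def primal(lam, n):
    """Per-node allocation maximizing log(1+x) - lam*x (closed form)."""
    return [max(0.0, 1.0 / lam - 1.0)] * n

def async_dual_descent(n=4, budget=2.0, step=0.01, delay=5, iters=6000):
    """Projected dual subgradient descent on lambda where the primal
    responses use a stale (delayed) multiplier."""
    lam, hist = 1.0, []
    for _ in range(iters):
        hist.append(lam)
        stale = hist[max(0, len(hist) - 1 - delay)]   # delayed multiplier
        x = primal(stale, n)
        lam = max(1e-6, lam + step * (sum(x) - budget))
    return lam

lam = async_dual_descent()
print(round(lam, 3))   # close to n/(budget+n) = 4/6
```

With a small enough step size the delay only slows convergence; a large step combined with a long delay destabilizes the iteration, mirroring the step-size conditions in the paper's analysis.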
Individual-based approach to epidemic processes on arbitrary dynamic contact networks
NASA Astrophysics Data System (ADS)
Rocha, Luis E. C.; Masuda, Naoki
2016-08-01
The dynamics of contact networks and epidemics of infectious diseases often occur on comparable time scales. Ignoring one of these time scales may provide an incomplete understanding of the population dynamics of the infection process. We develop an individual-based approximation for the susceptible-infected-recovered epidemic model applicable to arbitrary dynamic networks. Our framework provides, at the individual level, the probability flow over time associated with the infection dynamics. This computationally efficient framework discards the correlation between the states of different nodes, yet provides accurate results in approximating direct numerical simulations. It naturally captures the temporal heterogeneities and correlations of contact sequences, fundamental ingredients regulating the timing and size of an epidemic outbreak and the number of secondary infections. The high accuracy of our approximation further allows us to detect the index individual of an epidemic outbreak in real-life network data.
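The individual-based idea, tracking per-node S/I/R probabilities and neglecting correlations between nodes, reduces to a simple deterministic update over the contact snapshots of the temporal network. A minimal sketch under those assumptions (discrete time, per-contact transmission probability beta, recovery probability mu; the snapshot sequence is made up):

```python
def ib_sir_step(s, x, r, contacts, beta, mu):
    """One step of the individual-based SIR approximation: per-node
    probability flow, with node states treated as independent."""
    n = len(s)
    s2, x2, r2 = s[:], x[:], r[:]
    for i in range(n):
        no_inf = 1.0
        for j in contacts.get(i, ()):        # contacts active at this step
            no_inf *= 1.0 - beta * x[j]
        s2[i] = s[i] * no_inf
        x2[i] = x[i] * (1.0 - mu) + s[i] * (1.0 - no_inf)
        r2[i] = r[i] + mu * x[i]
    return s2, x2, r2

# Temporal network: the contact lists change from snapshot to snapshot.
snapshots = [{0: [1], 1: [0]}, {1: [2], 2: [1]}, {0: [2], 2: [0]}]
s, x, r = [1.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]   # node 2 is the seed
for contacts in snapshots:
    s, x, r = ib_sir_step(s, x, r, contacts, beta=0.5, mu=0.2)
print(x)   # node 0 only acquires infection probability at the last snapshot
```

Note that s_i + x_i + r_i is conserved exactly for every node, a quick sanity check on the probability flow.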
Adaptive Control of Synchronization in Delay-Coupled Heterogeneous Networks of FitzHugh-Nagumo Nodes
NASA Astrophysics Data System (ADS)
Plotnikov, S. A.; Lehnert, J.; Fradkov, A. L.; Schöll, E.
We study synchronization in delay-coupled neural networks of heterogeneous nodes. It is well known that heterogeneities in the nodes hinder synchronization when they become too large. We show that an adaptive tuning of the overall coupling strength can be used to counteract the effect of the heterogeneity. Our adaptive controller is demonstrated on ring networks of FitzHugh-Nagumo systems, which are paradigmatic for excitable dynamics but can also, depending on the system parameters, exhibit self-sustained periodic firing. We show that the adaptively tuned time-delayed coupling enables synchronization even if parameter heterogeneities are so large that excitable nodes coexist with oscillatory ones.
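The adaptive-gain principle, increase the coupling while the nodes disagree, until heterogeneity is overcome, can be caricatured with two heterogeneous phase oscillators instead of full delay-coupled FitzHugh-Nagumo dynamics (a deliberate simplification; the gain law below is a generic speed-gradient-style rule, not the paper's controller):

```python
import math

def adapt_sync(w1, w2, gamma=0.5, dt=0.01, steps=40000):
    """Euler simulation of two phase oscillators with mismatched natural
    frequencies w1, w2. The coupling gain K grows while the phases
    disagree, and locking occurs once K exceeds |w2 - w1| / 2."""
    p1, p2, K = 0.0, 0.0, 0.0
    diffs = []
    for _ in range(steps):
        d = p2 - p1
        dp1 = w1 + K * math.sin(d)
        dp2 = w2 - K * math.sin(d)
        K += gamma * math.sin(d) ** 2 * dt    # adaptive gain update
        p1 += dp1 * dt
        p2 += dp2 * dt
        diffs.append(p2 - p1)
    return K, diffs

K, diffs = adapt_sync(1.0, 2.0)
drift = abs(diffs[-1] - diffs[-1000]) / (999 * 0.01)  # residual frequency gap
print(K > 0.5, drift < 1e-2)
```

Without adaptation (K fixed at 0), the phase difference grows without bound for this frequency mismatch; the adaptive gain crosses the locking threshold on its own.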
Mechanical Stress Induces Remodeling of Vascular Networks in Growing Leaves
Bar-Sinai, Yohai; Julien, Jean-Daniel; Sharon, Eran; Armon, Shahaf; Nakayama, Naomi; Adda-Bedia, Mokhtar; Boudaoud, Arezki
2016-01-01
Differentiation into well-defined patterns and tissue growth are recognized as key processes in organismal development. However, it is unclear whether patterns are passively, homogeneously dilated by growth or whether they remodel during tissue expansion. Leaf vascular networks are well-fitted to investigate this issue, since leaves are approximately two-dimensional and grow manyfold in size. Here we study experimentally and computationally how vein patterns affect growth. We first model the growing vasculature as a network of viscoelastic rods and consider its response to external mechanical stress. We use the so-called texture tensor to quantify the local network geometry and reveal that growth is heterogeneous, resembling non-affine deformations in composite materials. We then apply mechanical forces to growing leaves after veins have differentiated, which respond by anisotropic growth and reorientation of the network in the direction of external stress. External mechanical stress appears to make growth more homogeneous, in contrast with the model with viscoelastic rods. However, we reconcile the model with experimental data by incorporating randomness in rod thickness and a threshold in the rod growth law, making the rods viscoelastoplastic. Altogether, we show that the higher stiffness of veins leads to their reorientation along external forces, along with a reduction in growth heterogeneity. This process may lead to the reinforcement of leaves against mechanical stress. More generally, our work contributes to a framework whereby growth and patterns are coordinated through the differences in mechanical properties between cell types. PMID:27074136
Heterogeneous Spacecraft Networks
NASA Technical Reports Server (NTRS)
Nakamura, Yosuke (Inventor); Faber, Nicolas T. (Inventor); Frost, Chad R. (Inventor); Alena, Richard L. (Inventor)
2018-01-01
The present invention provides a heterogeneous spacecraft network including a network management architecture to facilitate communication between a plurality of operations centers and a plurality of data user communities. The network management architecture includes a plurality of network nodes in communication with the plurality of operations centers. The present invention also provides a method of communication for a heterogeneous spacecraft network. The method includes: transmitting data from a first space segment to a first ground segment; transmitting the data from the first ground segment to a network management architecture; transmitting data from a second space segment to a second ground segment, the second space and ground segments having communication systems incompatible with those of the first space and ground segments; transmitting the data from the second ground segment to the network management architecture; and transmitting data from the network management architecture to a plurality of data user communities.
Finite-fault source inversion using adjoint methods in 3D heterogeneous media
NASA Astrophysics Data System (ADS)
Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia
2018-04-01
Accounting for lateral heterogeneities in the 3D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on the 1D velocity models routinely assumed when deriving finite-fault slip models. The conventional approach to including known 3D heterogeneity in source inversion involves pre-computing 3D Green's functions, which requires a number of 3D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense datasets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to iteratively improve the model via a gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3D heterogeneous velocity model. The velocity model comprises a uniform background and a 3D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3D velocity model are performed for two different station configurations, a dense and a sparse network with 1 km and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and they illustrate how dense coverage improves the inference of peak slip velocities.
We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties with standard deviation and correlation length typical of available 3D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from a thousand or more receivers are used in source inversions in 3D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than that of source inversion based on pre-computed Green's functions.
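The efficiency argument rests on a general property of adjoint methods: the full gradient of a misfit costs one forward plus one adjoint evaluation, instead of one simulation per station or per fault cell. A minimal linear-algebra analogue (a toy least-squares misfit, not a wave-propagation code) verifying the adjoint gradient against finite differences:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def cost(G, m, d):
    """Misfit J(m) = 0.5 * ||G m - d||^2 for a linear forward operator G."""
    r = [gi - di for gi, di in zip(matvec(G, m), d)]
    return 0.5 * sum(ri * ri for ri in r)

def adjoint_gradient(G, m, d):
    """Gradient of J via the adjoint operator (here simply G^T) applied to
    the residual: one forward and one adjoint application, regardless of
    how many data points there are."""
    r = [gi - di for gi, di in zip(matvec(G, m), d)]
    return matvec(transpose(G), r)

G = [[2.0, 1.0], [0.0, 3.0], [1.0, 1.0]]   # toy forward operator
d = [1.0, 2.0, 0.5]                        # toy "observations"
m = [0.3, -0.2]                            # current model
grad = adjoint_gradient(G, m, d)

# cross-check against central finite differences
h = 1e-6
fd = []
for k in range(len(m)):
    mp, mm = m[:], m[:]
    mp[k] += h
    mm[k] -= h
    fd.append((cost(G, mp, d) - cost(G, mm, d)) / (2 * h))
print(grad, fd)   # the two gradients agree
```

In the seismic setting, `matvec` is a 3D wave simulation and `transpose(G)` is the adjoint (time-reversed) simulation driven by the data residuals, which is why no per-station Green's functions are needed.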
Finite-fault source inversion using adjoint methods in 3-D heterogeneous media
NASA Astrophysics Data System (ADS)
Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia
2018-07-01
Accounting for lateral heterogeneities in the 3-D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1-D velocity models which are routinely assumed to derive finite-fault slip models. The conventional approach to include known 3-D heterogeneity in source inversion involves pre-computing 3-D Green's functions, which requires a number of 3-D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense data sets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to improve the model iteratively employing an iterative gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3-D heterogeneous velocity model. The velocity model comprises a uniform background and a 3-D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3-D velocity model are performed for two different station configurations, a dense and a sparse network with 1 and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrates how dense coverage improves the inference of peak-slip velocities. 
We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties with standard deviation and correlation length typical of available 3-D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from a thousand or more receivers are used in source inversions in 3-D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than that of source inversion based on pre-computed Green's functions.
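As a rough illustration of the adjoint idea above, the sketch below minimizes a quadratic misfit with a gradient supplied by an adjoint operator (here simply the matrix transpose). The matrix `G`, the fixed step size, and the iteration count are illustrative stand-ins for the paper's 3-D wave-propagation solver and its minimization scheme, not the authors' implementation.

```python
import numpy as np

def adjoint_gradient_descent(G, d, m0, step=0.1, iters=200):
    """Minimize J(m) = 0.5 * ||G m - d||^2 by gradient descent.

    G stands in for the forward (wave propagation) operator; its
    transpose plays the role of the adjoint solve, which yields the
    gradient of the misfit without pre-computed Green's functions.
    """
    m = m0.copy()
    for _ in range(iters):
        residual = G @ m - d      # forward simulation -> data misfit
        grad = G.T @ residual     # adjoint simulation -> gradient
        m -= step * grad          # gradient-based model update
    return m
```

The key point mirrored from the abstract: each iteration costs one forward plus one adjoint solve, independent of the number of stations.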
Regional gas transport in the heterogeneous lung during oscillatory ventilation.
Herrmann, Jacob; Tawhai, Merryn H; Kaczka, David W
2016-12-01
Regional ventilation in the injured lung is heterogeneous and frequency dependent, making it difficult to predict how an oscillatory flow waveform at a specified frequency will be distributed throughout the periphery. To predict the impact of mechanical heterogeneity on regional ventilation distribution and gas transport, we developed a computational model of distributed gas flow and CO2 elimination during oscillatory ventilation from 0.1 to 30 Hz. The model consists of a three-dimensional airway network of a canine lung, with heterogeneous parenchymal tissues to mimic the effects of gravity and injury. Model CO2 elimination during single-frequency oscillation was validated against previously published experimental data (Venegas JG, Hales CA, Strieder DJ, J Appl Physiol 60: 1025-1030, 1986). Simulations of gas transport demonstrated a critical transition in flow distribution at the resonant frequency, where the reactive components of mechanical impedance due to airway inertia and parenchymal elastance were equal. For frequencies above resonance, the distribution of ventilation became spatially clustered and frequency dependent. These results highlight the importance of oscillatory frequency in managing the regional distribution of ventilation and gas exchange in the heterogeneous lung. Copyright © 2016 the American Physiological Society.
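The resonance condition described above can be written out explicitly. In standard single-compartment respiratory impedance notation (R resistance, I inertance, E elastance; these symbols are assumed here, not taken from the abstract), the inertial and elastic reactances cancel where:

```latex
Z(\omega) = R + i\!\left(\omega I - \frac{E}{\omega}\right),
\qquad
\omega_{\mathrm{res}} I = \frac{E}{\omega_{\mathrm{res}}}
\;\Longrightarrow\;
f_{\mathrm{res}} = \frac{1}{2\pi}\sqrt{\frac{E}{I}}
```

Above this frequency the inertial term dominates, so regional pathways with different inertance and elastance receive markedly different flows, consistent with the clustered distribution reported above.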
Rate decline curves analysis of multiple-fractured horizontal wells in heterogeneous reservoirs
NASA Astrophysics Data System (ADS)
Wang, Jiahang; Wang, Xiaodong; Dong, Wenxiu
2017-10-01
In heterogeneous reservoirs with multiple-fractured horizontal wells (MFHWs), the fluid flow around fracture tips behaves like non-linear flow due to the high-density network of artificial hydraulic fractures. Moreover, the production behaviors of different artificial hydraulic fractures also differ. A rigorous semi-analytical model for MFHWs in heterogeneous reservoirs is presented by combining the source function with the boundary element method. The model is first validated against both an analytical model and a numerical simulation model. Then new Blasingame type curves are established. Finally, the effects of critical parameters on the rate decline characteristics of MFHWs are discussed. The results show that heterogeneity has a significant influence on the rate decline characteristics of MFHWs; parameters related to the MFHWs, such as fracture conductivity and length, can also affect the rate characteristics of MFHWs. One novelty of this model is that it considers the elliptical flow around artificial hydraulic fracture tips. Therefore, our model can be used to predict rate performance more accurately for MFHWs in heterogeneous reservoirs. The other novelty is the ability to model the different production behavior at different fracture stages. Compared to numerical and analytical methods, this model not only reduces computational cost but also maintains high accuracy.
Yu, Meichen; Engels, Marjolein M A; Hillebrand, Arjan; van Straaten, Elisabeth C W; Gouw, Alida A; Teunissen, Charlotte; van der Flier, Wiesje M; Scheltens, Philip; Stam, Cornelis J
2017-05-01
Although frequency-specific network analyses have shown that functional brain networks are altered in patients with Alzheimer's disease, the relationships between these frequency-specific network alterations remain largely unknown. Multiplex network analysis is a novel network approach for studying complex systems consisting of subsystems with different types of connectivity patterns. In this study, we used magnetoencephalography to integrate five frequency-band-specific brain networks in a multiplex framework. Previous structural and functional brain network studies have consistently shown that hub brain areas are selectively disrupted in Alzheimer's disease. Accordingly, we hypothesized that hub regions in the multiplex brain networks are selectively targeted in patients with Alzheimer's disease in comparison to healthy control subjects. Eyes-closed resting-state magnetoencephalography recordings from 27 patients with Alzheimer's disease (60.6 ± 5.4 years, 12 females) and 26 controls (61.8 ± 5.5 years, 14 females) were projected onto atlas-based regions of interest using beamforming. Subsequently, source-space time series for 78 cortical and 12 subcortical regions were reconstructed in five frequency bands (delta, theta, alpha 1, alpha 2 and beta band). Multiplex brain networks were constructed by integrating the frequency-specific magnetoencephalography networks. Functional connections between all pairs of regions of interest were quantified using a phase-based coupling metric, the phase lag index. Several multiplex hub and heterogeneity metrics were computed to capture both the overall importance of each brain area and the heterogeneity of its connectivity patterns across frequency-specific layers. Different nodal centrality metrics showed consistently that several hub regions, particularly the left hippocampus, posterior parts of the default mode network and occipital regions, were vulnerable in patients with Alzheimer's disease compared to control subjects.
Of note, these vulnerable hubs detected in Alzheimer's disease were absent in each individual frequency-specific network, thus showing the value of integrating the networks. The connectivity patterns of these vulnerable hub regions in the patients were heterogeneously distributed across layers. Perturbed cognitive function and abnormal cerebrospinal fluid amyloid-β42 levels correlated positively with the vulnerability of the hub regions in patients with Alzheimer's disease. Our analysis therefore demonstrates that magnetoencephalography-based multiplex brain networks contain important information that cannot be revealed by frequency-specific brain networks. Furthermore, this indicates that functional networks obtained in different frequency bands do not act as independent entities. Overall, our multiplex network study provides an effective framework to integrate the frequency-specific networks and reveal the neuropathological mechanism of hub disruption in Alzheimer's disease. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain.
Adaptive multi-sensor biomimetics for unsupervised submarine hunt (AMBUSH): Early results
NASA Astrophysics Data System (ADS)
Blouin, Stéphane
2014-10-01
Underwater surveillance is inherently difficult because acoustic wave propagation and transmission are limited and unpredictable when targets and sensors move around in the communication-opaque undersea environment. Today's Navy underwater sensors enable the collection of a massive amount of data, often analyzed offline. The Navy of tomorrow will dominate by making sense of that data in real time. DRDC's AMBUSH project proposes a new undersea-surveillance network paradigm that will enable such real-time operation. Nature abounds with examples of collaborative tasks taking place despite limited communication and computational capabilities. This publication describes a year's worth of research efforts finding inspiration in Nature's collaborative tasks, such as wolves hunting in packs. The project proposes the use of a heterogeneous network combining both static and mobile network nodes. The military objective is to enable an unsupervised surveillance capability while maximizing target localization performance and endurance. The scientific objective is to develop the technology needed to acoustically and passively localize a noise source of interest in shallow waters. The project fulfills these objectives via distributed computing and adaptation to changing undersea conditions. Specific research interests discussed here relate to approaches for performing: (a) network self-discovery, (b) network connectivity self-assessment, (c) opportunistic network routing, (d) distributed data aggregation, and (e) simulation of underwater acoustic propagation. We present early results, followed by a discussion of future work.
Origin of Permeability and Structure of Flows in Fractured Media
NASA Astrophysics Data System (ADS)
De Dreuzy, J.; Darcel, C.; Davy, P.; Erhel, J.; Le Goc, R.; Maillot, J.; Meheust, Y.; Pichot, G.; Poirriez, B.
2013-12-01
After more than three decades of research, flows in fractured media have been shown to result from multi-scale geological structures. Flows result non-exclusively from the damage zone of large faults, from percolation within denser networks of smaller fractures, from aperture heterogeneity within the fracture planes, and from some remaining permeability within the matrix. While the effect of each of these causes has been studied independently, a global assessment of the main determinants is still needed. We propose a general approach to determine the geological structures responsible for flows, their permeability, and their organization, based on field data and numerical modeling [de Dreuzy et al., 2012b]. Multi-scale synthetic networks are reconstructed from field data and simplified mechanical modeling [Davy et al., 2010]. High-performance numerical methods are developed to comply with the specific geometry and physical properties of fractured media [Pichot et al., 2010; Pichot et al., 2012]. Based on a large Monte-Carlo sampling, we then determine the key determinants of fracture permeability and flows (Figure). We illustrate our approach on the respective influence of fracture apertures and fracture correlation patterns at large scale. We show the potential role of fracture intersections, so far overlooked between the fracture and the network scales. We also demonstrate how fracture correlations reduce the bulk fracture permeability. Using this analysis, we highlight the need for more specific in-situ characterization of fracture flow structures. Fracture modeling and characterization are necessary to meet the new requirements of a growing number of applications in which fractures appear both as potential advantages to enhance permeability and as drawbacks for safety, e.g., in energy storage, stimulated geothermal energy, and non-conventional gas production. References Davy, P., et al.
(2010), A likely universal model of fracture scaling and its consequence for crustal hydromechanics, Journal of Geophysical Research-Solid Earth, 115, 13. de Dreuzy, J.-R., et al. (2012a), Influence of fracture scale heterogeneity on the flow properties of three-dimensional Discrete Fracture Networks (DFN), J. Geophys. Res.-Earth Surf., 117(B11207), 21 PP. de Dreuzy, J.-R., et al. (2012b), Synthetic benchmark for modeling flow in 3D fractured media, Computers and Geosciences(0). Pichot, G., et al. (2010), A Mixed Hybrid Mortar Method for solving flow in Discrete Fracture Networks, Applicable Analysis, 89(10), 1729-1643. Pichot, G., et al. (2012), Flow simulation in 3D multi-scale fractured networks using non-matching meshes, SIAM Journal on Scientific Computing (SISC), 34(1). Figure: (a) Fracture network with a broad-range of fracture lengths. (b) Flows (log-scale) with homogeneous fractures. (c) Flows (log-scale) with heterogeneous fractures [de Dreuzy et al., 2012a]. The impact of the fracture apertures (c) is illustrated on the organization of flows.
NASA Astrophysics Data System (ADS)
Bultreys, Tom; Van Hoorebeke, Luc; Cnudde, Veerle
2016-09-01
The two-phase flow properties of natural rocks depend strongly on their pore structure and wettability, both of which are often heterogeneous throughout the rock. To better understand and predict these properties, image-based models are being developed. The resulting simulations are, however, problematic in several important classes of rocks with broad pore-size distributions. We present a new multiscale pore network model to simulate secondary waterflooding in these rocks, which may undergo wettability alteration after primary drainage. This novel approach makes it possible to include the effect of microporosity on the imbibition sequence without the need to describe each individual micropore. Instead, we show that fluid transport through unresolved pores can be taken into account in an upscaled fashion, by including symbolic links between macropores, resulting in strongly decreased computational demands. Rules to describe the behavior of these links in the quasistatic invasion sequence are derived from percolation theory. The model is validated by comparison to a fully detailed network representation, which takes each separate micropore into account. Strongly and weakly water- and oil-wet simulations show good results, as do mixed-wettability scenarios with different pore-scale wettability distributions. We also show simulations on a network extracted from a micro-CT scan of Estaillades limestone, which yield good agreement with water-wet and mixed-wet experimental results.
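A minimal sketch of the kind of quasistatic invasion sequence the model's link rules plug into: pores are invaded in order of capillary entry pressure from a growing frontier. The function and data-structure names, and the no-trapping simplification, are assumptions for illustration, not the authors' code.

```python
import heapq

def quasistatic_invasion(entry_pressure, neighbors, inlet):
    """Invade a pore network in order of capillary entry pressure.

    entry_pressure: dict pore -> entry pressure
    neighbors: dict pore -> list of connected pores (throats/links)
    inlet: iterable of pores initially accessible to the invading phase
    Returns pores in invasion order (quasistatic, trapping ignored).
    """
    frontier = [(entry_pressure[p], p) for p in inlet]
    heapq.heapify(frontier)
    invaded, order = set(), []
    while frontier:
        _, pore = heapq.heappop(frontier)   # lowest entry pressure next
        if pore in invaded:
            continue
        invaded.add(pore)
        order.append(pore)
        for nb in neighbors[pore]:           # newly accessible pores
            if nb not in invaded:
                heapq.heappush(frontier, (entry_pressure[nb], nb))
    return order
```

In the multiscale model of the abstract, an unresolved micropore pathway would appear as one such link with an upscaled effective entry pressure rather than as thousands of individual pores.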
Cellular automata with object-oriented features for parallel molecular network modeling.
Zhu, Hao; Wu, Yinghui; Huang, Sui; Sun, Yan; Dhar, Pawan
2005-06-01
Cellular automata are an important modeling paradigm for studying the dynamics of large, parallel systems composed of multiple, interacting components. However, to model biological systems, cellular automata need to be extended beyond large-scale parallelism and intensive communication in order to capture two fundamental properties characteristic of complex biological systems: hierarchy and heterogeneity. This paper proposes extensions to a cellular automata language, Cellang, to meet this purpose. The extended language, with object-oriented features, can be used to describe the structure and activity of parallel molecular networks within cells. Capabilities of this new programming language include an object structure to define molecular programs within a cell, a floating-point data type and mathematical functions to perform quantitative computation, message passing to describe molecular interactions, as well as new operators, statements, and built-in functions. We discuss relevant programming issues of these features, including the object-oriented description of molecular interactions with molecule encapsulation, message passing, and the description of heterogeneity and anisotropy at the cell and molecule levels. By enabling the integration of modeling at the molecular level with system behavior at the cell, tissue, organ, or even organism level, the language will help improve our understanding of how complex and dynamic biological activities are generated and controlled by the parallel functioning of molecular networks.
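Since Cellang syntax is not shown in the abstract, here is a Python sketch of two of the extensions it describes: cells as objects encapsulating floating-point state, exchanging molecule-like messages, with a synchronous update step. Class names, the decay rule, and parameters are invented for illustration.

```python
class Cell:
    """Minimal object-oriented cell: encapsulated floating-point state
    plus a mailbox for molecule-like messages from neighboring cells
    (a sketch of the object/message-passing features described)."""
    def __init__(self, concentration):
        self.concentration = concentration
        self.inbox = []

    def send(self, neighbor, amount):
        neighbor.inbox.append(amount)

    def update(self, decay=0.5):
        # Integrate incoming "molecular" signals, then clear the mailbox.
        self.concentration = decay * self.concentration + sum(self.inbox)
        self.inbox = []

def step(cells, transfer=0.1):
    """One synchronous automaton update on a periodic 1-D lattice:
    every cell signals both neighbors, then all cells update together."""
    n = len(cells)
    for i, c in enumerate(cells):
        c.send(cells[(i - 1) % n], transfer * c.concentration)
        c.send(cells[(i + 1) % n], transfer * c.concentration)
    for c in cells:
        c.update()
```

Heterogeneity enters naturally here: each `Cell` instance can carry its own parameters, unlike a classical uniform-rule automaton.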
NASA Astrophysics Data System (ADS)
Marinos, Alexandros; Briscoe, Gerard
Cloud Computing is rising fast, with its data centres growing at an unprecedented rate. However, this has come with concerns over privacy, efficiency at the expense of resilience, and environmental sustainability, because of the dependence on Cloud vendors such as Google, Amazon and Microsoft. Our response is an alternative model for the Cloud conceptualisation, providing a paradigm for Clouds in the community, utilising networked personal computers for liberation from the centralised vendor model. Community Cloud Computing (C3) offers an alternative architecture, created by combining the Cloud with paradigms from Grid Computing, principles from Digital Ecosystems, and sustainability from Green Computing, while remaining true to the original vision of the Internet. It is more technically challenging than Cloud Computing, having to deal with distributed computing issues, including heterogeneous nodes, varying quality of service, and additional security constraints. However, these are not insurmountable challenges, and with the need to retain control over our digital lives and the potential environmental consequences, it is a challenge we must pursue.
NASA Astrophysics Data System (ADS)
Liu, Peng; Ju, Yang; Gao, Feng; Ranjith, Pathegama G.; Zhang, Qianbing
2018-03-01
Understanding and characterization of the three-dimensional (3-D) propagation and distribution of hydrofracturing cracks in heterogeneous rock are key to enhancing the stimulation of low-permeability petroleum reservoirs. In this study, we investigated the propagation and distribution characteristics of hydrofracturing cracks by conducting true triaxial hydrofracturing tests and computed tomography on artificial heterogeneous rock specimens. Silica sand, Portland cement, and aedelforsite were mixed to create artificial heterogeneous rock specimens using data on mineral compositions, coarse gravel distribution, and mechanical properties measured from natural heterogeneous glutenite cores. To probe the effects of material heterogeneity on hydrofracturing cracks, artificial homogeneous specimens were created using the identical matrix compositions of the heterogeneous rock specimens and then fractured for comparison. The effects of the horizontal geostress ratio on the 3-D growth and distribution of cracks during hydrofracturing were examined. A fractal-based method was proposed to characterize the complexity of fractures and the efficiency of hydrofracturing stimulation of heterogeneous media. The material heterogeneity and horizontal geostress ratio were found to significantly influence the 3-D morphology, growth, and distribution of hydrofracturing cracks. A horizontal geostress ratio of 1.7 appears to be the upper limit for the occurrence of multiple cracks; higher ratios cause a single crack perpendicular to the minimum horizontal geostress component. The fracturing efficiency is associated not only with the fractured volume but also with the complexity of the crack network.
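The abstract does not specify which fractal measure was used; a common choice for characterizing crack-network complexity from CT voxel data is the box-counting dimension, sketched below. The function name and inputs are assumptions for illustration.

```python
import numpy as np

def box_counting_dimension(points, sizes):
    """Estimate the fractal (box-counting) dimension of a point cloud,
    e.g. voxel coordinates of a CT-imaged crack network.
    Counts occupied boxes N(s) at each box size s and fits the slope
    of log N(s) versus log(1/s)."""
    points = np.asarray(points, dtype=float)
    counts = []
    for s in sizes:
        # Distinct boxes occupied at this scale.
        boxes = set(map(tuple, np.floor(points / s).astype(int)))
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes, dtype=float)),
                          np.log(counts), 1)
    return slope
```

A more complex (space-filling) crack network yields a higher dimension, which is one way such a measure can rank hydrofracturing stimulation efficiency.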
Employees and Creativity: Social Ties and Access to Heterogeneous Knowledge
ERIC Educational Resources Information Center
Huang, Chiung-En; Liu, Chih-Hsing Sam
2015-01-01
This study dealt with employee social ties, knowledge heterogeneity contacts, and the generation of creativity. Although prior studies demonstrated a relationship between network position and creativity, inadequate attention has been paid to network ties and heterogeneity knowledge contacts. This study considered the social interaction processes…
Kashkooli, Ali Ghorbani; Foreman, Evan; Farhad, Siamak; ...
2017-09-21
In this study, synchrotron X-ray computed tomography has been utilized in two different imaging modes, absorption and Zernike phase contrast, to reconstruct the real three-dimensional (3D) morphology of nanostructured Li4Ti5O12 (LTO) electrodes. The morphology of the high-atomic-number active material has been obtained using the absorption contrast mode, whereas the percolated solid network composed of active material and the carbon-doped polymer binder domain (CBD) has been obtained using the Zernike phase contrast mode. The 3D absorption contrast image revealed that some LTO nanoparticles tend to agglomerate and form secondary micro-sized particles with varying degrees of sphericity. The tortuosity of the electrode's pore and solid phases was found to have a directional dependence, different from the Bruggeman tortuosity commonly used in macro-homogeneous models. The electrode's heterogeneous structure was investigated by developing a numerical model to simulate the galvanostatic discharge process using the Zernike phase contrast data. The inclusion of CBD in the Zernike phase contrast mode results in an integrated percolated network of active material and CBD that is highly suited for continuum modeling. As a result, the simulation results highlight the importance of using the real 3D geometry, since the spatial distributions of physical and electrochemical properties are strongly non-uniform due to microstructural heterogeneities.
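For context on the Bruggeman tortuosity mentioned above: macro-homogeneous battery models typically relate effective transport to porosity through a single scalar relation of the form below (the exponent 1.5 for spherical particles is the standard assumption, not a value from this study):

```latex
D_{\mathrm{eff}} = \frac{\varepsilon}{\tau}\,D,
\qquad
\tau_{\mathrm{Bruggeman}} = \varepsilon^{\,1-\alpha}
\quad (\alpha \approx 1.5)
\;\Longrightarrow\;
D_{\mathrm{eff}} = \varepsilon^{\,1.5} D
```

The tomography-based result replaces this single isotropic τ with direction-dependent tortuosities measured from the reconstructed pore and solid phases.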
Mobility in hospital work: towards a pervasive computing hospital environment.
Morán, Elisa B; Tentori, Monica; González, Víctor M; Favela, Jesus; Martínez-Garcia, Ana I
2007-01-01
Handheld computers are increasingly being used by hospital workers. With the integration of wireless networks into hospital information systems, handheld computers can provide the basis for a pervasive computing hospital environment; to develop this, designers need empirical information to understand how hospital workers interact with information while moving around. To characterise these phenomena, we report the results of a workplace study conducted in a hospital. We found that individuals spend about half of their time at their base location, where most of their interactions occur. On average, our informants spent 23% of their time performing information management tasks, followed by coordination (17.08%), clinical case assessment (15.35%) and direct patient care (12.6%). We discuss how our results offer insights for the design of pervasive computing technology, and directions for further research and development in this field, such as transferring information between heterogeneous devices and integrating the physical and digital domains.
A Latency-Tolerant Partitioner for Distributed Computing on the Information Power Grid
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biwas, Rupak; Kwak, Dochan (Technical Monitor)
2001-01-01
NASA's Information Power Grid (IPG) is an infrastructure designed to harness the power of geographically distributed computers, databases, and human expertise in order to solve large-scale realistic computational problems. This type of meta-computing environment is necessary to present a unified virtual machine to application developers that hides the intricacies of a highly heterogeneous environment while maintaining adequate security. In this paper, we present a novel partitioning scheme, called MinEX, that dynamically balances processor workloads while minimizing data movement and runtime communication for applications that are executed in a parallel distributed fashion on the IPG. We also analyze the conditions required for the IPG to be an effective tool for such distributed computations. Our results show that MinEX is a viable load balancer provided the nodes of the IPG are connected by a high-speed asynchronous interconnection network.
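MinEX's algorithm is not detailed in the abstract; as a baseline illustration of the heterogeneous balancing problem it addresses, the greedy sketch below assigns weighted tasks to processors of differing speeds, keeping the maximum relative load small. It ignores the data-movement and communication costs that MinEX also minimizes, and all names are illustrative.

```python
import heapq

def greedy_partition(task_weights, proc_speeds):
    """Assign weighted tasks to heterogeneous processors so that the
    maximum (weight / speed) load stays small: a greedy LPT baseline
    for the kind of balancing MinEX performs.
    Returns assignment[i] = processor index of task i."""
    heap = [(0.0, p) for p in range(len(proc_speeds))]  # (load, proc)
    heapq.heapify(heap)
    assignment = [None] * len(task_weights)
    # Place heaviest tasks first (longest-processing-time heuristic).
    for i in sorted(range(len(task_weights)),
                    key=lambda i: -task_weights[i]):
        load, p = heapq.heappop(heap)       # least-loaded processor
        assignment[i] = p
        heapq.heappush(heap, (load + task_weights[i] / proc_speeds[p], p))
    return assignment
```

Dividing each task's weight by the processor speed is what makes the heuristic heterogeneity-aware: a fast node absorbs more work before its relative load matches a slow one.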
Hierarchical Trust Management of COI in Heterogeneous Mobile Networks
2017-08-01
Final Report: Hierarchical Trust Management of COI in Heterogeneous Mobile Networks. Email: irchen@vt.edu
Global System for Mobile Communications Transmitter Development for Heterogeneous Network Vulnerability Testing
McAbee, Carson C.
2013-12-01
Smith, Joseph M.; Mather, Martha E.
2013-01-01
In summary, within a stream network, beaver dams maintained fish biodiversity by altering in-stream habitat and increasing habitat heterogeneity. Understanding the relationship between habitat heterogeneity and biodiversity can advance basic freshwater ecology and provide science-based support for applied aquatic conservation.
RKNNMDA: Ranking-based KNN for MiRNA-Disease Association prediction.
Chen, Xing; Wu, Qiao-Feng; Yan, Gui-Ying
2017-07-03
Cumulative verified experimental studies have demonstrated that microRNAs (miRNAs) can be closely related with the development and progression of human complex diseases. Based on the assumption that functionally similar miRNAs tend to be associated with phenotypically similar diseases and vice versa, researchers have developed various effective computational models that combine heterogeneous biological data sets, including a disease similarity network, a miRNA similarity network, and a known disease-miRNA association network, to identify potential relationships between miRNAs and diseases in biomedical research. Considering the limitations of previous computational studies, we introduced a novel computational method, Ranking-based KNN for miRNA-Disease Association prediction (RKNNMDA), to predict potentially related miRNAs for diseases; our method obtained an AUC of 0.8221 based on leave-one-out cross validation. In addition, RKNNMDA was applied to 3 kinds of important human cancers for further performance evaluation. The results showed that 96%, 80% and 94% of the predicted top 50 potentially related miRNAs for Colon Neoplasms, Esophageal Neoplasms, and Prostate Neoplasms, respectively, have been confirmed by the experimental literature. Moreover, RKNNMDA can be used to predict potential miRNAs for diseases without any known miRNAs, and it is anticipated that RKNNMDA will be of great use for novel miRNA-disease association identification.
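The abstract does not give RKNNMDA's formulas; the sketch below illustrates only the basic similarity-weighted KNN voting step over one of the heterogeneous networks (the ranking/re-ranking stage and the disease-side vote are omitted, and all names are illustrative).

```python
import numpy as np

def knn_association_scores(mirna_sim, known_assoc, k=3):
    """Score miRNA-disease pairs by a similarity-weighted k-nearest-
    neighbor vote: each miRNA inherits associations from its k most
    functionally similar miRNAs.

    mirna_sim: (m, m) miRNA functional similarity matrix
    known_assoc: (m, d) binary known association matrix
    """
    m = mirna_sim.shape[0]
    scores = np.zeros_like(known_assoc, dtype=float)
    for i in range(m):
        sims = mirna_sim[i].astype(float)
        sims[i] = -np.inf                   # exclude self-similarity
        nn = np.argsort(sims)[-k:]          # k most similar miRNAs
        w = mirna_sim[i, nn]
        # Normalized similarity-weighted vote over known associations.
        scores[i] = w @ known_assoc[nn] / (w.sum() + 1e-12)
    return scores
```

Because the vote uses only neighbors' known associations, it can also score a disease with no known miRNAs via the symmetric disease-side computation, which is the property the abstract highlights.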
Computational modeling of heterogeneity and function of CD4+ T cells
Carbo, Adria; Hontecillas, Raquel; Andrew, Tricity; Eden, Kristin; Mei, Yongguo; Hoops, Stefan; Bassaganya-Riera, Josep
2014-01-01
The immune system is composed of many different cell types and hundreds of intersecting molecular pathways and signals. This large biological complexity requires coordination between distinct pro-inflammatory and regulatory cell subsets to respond to infection while maintaining tissue homeostasis. CD4+ T cells play a central role in orchestrating immune responses and in maintaining a balance between pro- and anti-inflammatory responses. This tight balance between regulatory and effector reactions depends on the ability of CD4+ T cells to modulate distinct pathways within large molecular networks, since dysregulated CD4+ T cell responses may result in chronic inflammatory and autoimmune diseases. The CD4+ T cell differentiation process comprises an intricate interplay between cytokines, their receptors, adaptor molecules, signaling cascades and transcription factors that help delineate cell fate and function. Computational modeling can help to describe, simulate, analyze, and predict some of the behaviors in this complicated differentiation network. This review provides a comprehensive overview of existing computational immunology methods as well as novel strategies used to model immune responses, with a particular focus on CD4+ T cell differentiation.
Lakin, Matthew R.; Brown, Carl W.; Horwitz, Eli K.; Fanning, M. Leigh; West, Hannah E.; Stefanovic, Darko; Graves, Steven W.
2014-01-01
The development of large-scale molecular computational networks is a promising approach to implementing logical decision making at the nanoscale, analogous to cellular signaling and regulatory cascades. DNA strands with catalytic activity (DNAzymes) are one means of systematically constructing molecular computation networks with inherent signal amplification. Linking multiple DNAzymes into a computational circuit requires the design of substrate molecules that allow a signal to be passed from one DNAzyme to another through programmed biochemical interactions. In this paper, we chronicle an iterative design process guided by biophysical and kinetic constraints on the desired reaction pathways and use the resulting substrate design to implement heterogeneous DNAzyme signaling cascades. A key aspect of our design process is the use of secondary structure in the substrate molecule to sequester a downstream effector sequence prior to cleavage by an upstream DNAzyme. Our goal was to develop a concrete substrate molecule design to achieve efficient signal propagation with maximal activation and minimal leakage. We have previously employed the resulting design to develop high-performance DNAzyme-based signaling systems with applications in pathogen detection and autonomous theranostics.
Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array
NASA Astrophysics Data System (ADS)
Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul
2008-04-01
This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
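The work-farm pattern described above can be sketched in a few lines (here a Python thread pool stands in for the MPPA's parallel worker objects and self-synchronizing channels; names and the order-preserving choice are illustrative, not the platform's API).

```python
from concurrent.futures import ThreadPoolExecutor

def work_farm(stream, worker, n_workers=4):
    """Work-farm pattern: a parallel set of identical workers consumes
    one input stream and produces one output stream. Results are
    returned in input order, as a stream-processing stage requires."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(worker, stream))
```

A heterogeneous farm, as used in the video and network-processing applications mentioned above, would simply dispatch each item to one of several different worker functions instead of a single one.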
NASA Astrophysics Data System (ADS)
Barreiro, Andrea K.; Ly, Cheng
2017-08-01
Rapid experimental advances now enable simultaneous electrophysiological recording of neural activity at single-cell resolution across large regions of the nervous system. Models of this neural network activity will necessarily increase in size and complexity, thus increasing the computational cost of simulating them and the challenge of analyzing them. Here we present a method to approximate the activity and firing statistics of a general firing rate network model (of the Wilson-Cowan type) subject to noisy correlated background inputs. The method requires solving a system of transcendental equations and is fast compared to Monte Carlo simulations of coupled stochastic differential equations. We implement the method with several examples of coupled neural networks and show that the results are quantitatively accurate even with moderate coupling strengths and an appreciable amount of heterogeneity in many parameters. This work should be useful for investigating how various neural attributes qualitatively affect the spiking statistics of coupled neural networks.
Rich, Scott; Booth, Victoria; Zochowski, Michal
2016-01-01
The plethora of inhibitory interneurons in the hippocampus and cortex plays a pivotal role in generating rhythmic activity by clustering and synchronizing cell firing. Results of our simulations demonstrate that both the intrinsic cellular properties of neurons and the degree of network connectivity affect the characteristics of clustered dynamics exhibited in randomly connected, heterogeneous inhibitory networks. We quantify intrinsic cellular properties by the neuron's current-frequency relation (IF curve) and Phase Response Curve (PRC), a measure of how perturbations given at various phases of a neuron's firing cycle affect subsequent spike timing. We analyze network bursting properties of networks of neurons with Type I or Type II properties in both excitability and PRC profile; Type I PRCs strictly show phase advances and IF curves that exhibit frequencies arbitrarily close to zero at firing threshold, while Type II PRCs display both phase advances and delays and IF curves that have a non-zero frequency at threshold. Type II neurons whose properties arise with or without an M-type adaptation current are considered. We analyze network dynamics under different levels of cellular heterogeneity and as intrinsic cellular firing frequency and the time scale of decay of synaptic inhibition are varied. Many of the dynamics exhibited by these networks diverge from the predictions of the interneuron network gamma (ING) mechanism, as well as from results in all-to-all connected networks. Our results show that randomly connected networks of Type I neurons synchronize into a single cluster of active neurons, while networks of Type II neurons organize into two mutually exclusive clusters segregated by the cells' intrinsic firing frequencies.
Networks of Type II neurons containing the adaptation current behave similarly to networks of either Type I or Type II neurons depending on network parameters; however, the adaptation current creates differences in the cluster dynamics compared to those in networks of Type I or Type II neurons. To understand these results, we compute neuronal PRCs calculated with a perturbation matching the profile of the synaptic current in our networks. Differences in profiles of these PRCs across the different neuron types reveal mechanisms underlying the divergent network dynamics. PMID:27812323
Self-attracting walk on heterogeneous networks
NASA Astrophysics Data System (ADS)
Kim, Kanghun; Kyoung, Jaegu; Lee, D.-S.
2016-05-01
Understanding human mobility in cyberspace becomes increasingly important in this information era. While human mobility, memory-dependent and subdiffusive, is well understood in Euclidean space, it remains elusive in random heterogeneous networks like the World Wide Web. Here we study the diffusion characteristics of self-attracting walks, in which a walker is more likely to move to locations visited previously than to unvisited ones, on scale-free networks. Under strong attraction, the number of distinct visited nodes grows linearly in time, with larger coefficients in more heterogeneous networks. More interestingly, crossovers to sublinear growth occur in strongly heterogeneous networks. To understand these phenomena, we investigate the characteristic volumes and topology of the cluster of visited nodes and find that the reinforced attraction to hubs expedites exploration at first but delays it later, as characterized by the scaling exponents that we derive. Our findings and analysis method can be useful for understanding various diffusion processes mediated by humans.
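The walk described above can be sketched directly. A minimal simulation, assuming one common formulation in which previously visited neighbors are preferred with weight exp(u) over weight 1 for unvisited ones (the paper's exact reinforcement rule may differ); the ring example is illustrative, not a scale-free network:

```python
import math
import random

def self_attracting_walk(adj, start, steps, u=1.0, seed=0):
    """Self-attracting walk: at each step the walker moves to a random
    neighbor, weighting already-visited neighbors by exp(u) (u > 0)
    against weight 1 for unvisited ones. Returns the number of
    distinct nodes visited after each step."""
    rng = random.Random(seed)
    visited = {start}
    coverage = [1]
    node, w_visited = start, math.exp(u)
    for _ in range(steps):
        nbrs = adj[node]
        weights = [w_visited if n in visited else 1.0 for n in nbrs]
        node = rng.choices(nbrs, weights=weights)[0]
        visited.add(node)
        coverage.append(len(visited))
    return coverage

# Example on a ring; on a scale-free network the growth of coverage
# would show the hub-mediated crossovers discussed in the abstract.
n = 20
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
coverage = self_attracting_walk(ring, start=0, steps=300)
```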
Analysis and Visualization of Internet QA Bulletin Boards Represented as Heterogeneous Networks
NASA Astrophysics Data System (ADS)
Murata, Tsuyoshi; Ikeya, Tomoyuki
Visualizing and analyzing social interactions in CGM (Consumer Generated Media) are important for understanding overall activities on the internet. Social interactions are often represented as simple networks composed of homogeneous nodes and the edges between them. However, related entities in the real world are often not homogeneous. Such relations are naturally represented as heterogeneous networks composed of more than one kind of node and the edges connecting them. In the case of CGM, for example, users and their contents constitute the nodes of heterogeneous networks. There are related users (user communities) and related contents (contents communities) in the heterogeneous networks. Discovering both kinds of communities and finding correspondences among them will clarify their characteristics. This paper describes an attempt to visualize and analyze the social interactions of Yahoo! Chiebukuro (Japanese Yahoo! Answers). New criteria for measuring the correspondence between user communities and board communities are defined, and the characteristics of both kinds of communities are analyzed using the criteria.
SNM-DAT: Simulation of a heterogeneous network for nuclear border security
NASA Astrophysics Data System (ADS)
Nemzek, R.; Kenyon, G.; Koehler, A.; Lee, D. M.; Priedhorsky, W.; Raby, E. Y.
2007-08-01
We approach the problem of detecting Special Nuclear Material (SNM) smuggling across open borders by modeling a heterogeneous sensor network using an agent-based simulation. Our simulation, the SNM Data Analysis Tool (SNM-DAT), combines fixed seismic, metal, and radiation detectors with a mobile gamma spectrometer. Decision making within the simulation determines threat levels from combined signatures. The spectrometer is a limited-availability asset and is only deployed for substantial threats. "Crossers" can be benign or can carry shielded SNM. Signatures and sensors are physics based, allowing us to model realistic sensor networks. The heterogeneous network provides great gains in detection efficiency compared to a radiation-only system. We can improve the simulation through better sensor and terrain models, additional signatures, and crossers that mimic actual trans-border traffic. We expect further gains in our ability to design sensor networks as we learn the emergent properties of heterogeneous detection and potential adversary responses.
Thunes, James R.; Pal, Siladitya; Fortunato, Ronald N.; Phillippi, Julie A.; Gleason, Thomas G.; Vorp, David A.; Maiti, Spandan
2016-01-01
Incorporation of collagen structural information into the study of biomechanical behavior of ascending thoracic aortic (ATA) wall tissue should provide better insight into the pathophysiology of ATA. Structurally motivated constitutive models that include fiber dispersion and recruitment can successfully capture overall mechanical response of the arterial wall tissue. However, these models cannot examine local microarchitectural features of the collagen network, such as the effect of fiber disruptions and interaction between fibrous and non-fibrous components, which may influence emergent biomechanical properties of the tissue. Motivated by this need, we developed a finite element based three-dimensional structural model of the lamellar units of the ATA media that directly incorporates the collagen fiber microarchitecture. The fiber architecture was computer generated utilizing network features, namely fiber orientation distribution, intersection density and areal concentration, obtained from image analysis of multiphoton microscopy images taken from human aneurysmal ascending thoracic aortic media specimens with bicuspid aortic valve (BAV) phenotype. Our model reproduces the typical J-shaped constitutive response of the aortic wall tissue. We found that the stress state in the non-fibrous matrix was homogeneous until the collagen fibers were recruited, but became highly heterogeneous after that event. The degree of heterogeneity was dependent upon local network architecture with high stresses observed near disrupted fibers. The magnitude of non-fibrous matrix stress at higher stretch levels was negatively correlated with local fiber density. The localized stress concentrations, elucidated by this model, may be a factor in the degenerative changes in aneurysmal ATA tissue. PMID:27113538
Quantitative Characterization of the Microstructure and Transport Properties of Biopolymer Networks
Jiao, Yang; Torquato, Salvatore
2012-01-01
Biopolymer networks are of fundamental importance to many biological processes in normal and tumorous tissues. In this paper, we employ the panoply of theoretical and simulation techniques developed for characterizing heterogeneous materials to quantify the microstructure and effective diffusive transport properties (diffusion coefficient De and mean survival time τ) of collagen type I networks at various collagen concentrations. In particular, we compute the pore-size probability density function P(δ) for the networks and present a variety of analytical estimates of the effective diffusion coefficient De for finite-sized diffusing particles, including the low-density approximation, the Ogston approximation, and the Torquato approximation. The Hashin-Shtrikman upper bound on the effective diffusion coefficient De and the pore-size lower bound on the mean survival time τ are used as benchmarks to test our analytical approximations and numerical results. Moreover, we generalize the efficient first-passage-time techniques for Brownian-motion simulations in suspensions of spheres to the case of fiber networks and compute the associated effective diffusion coefficient De as well as the mean survival time τ, which is related to nuclear magnetic resonance (NMR) relaxation times. Our numerical results for De are in excellent agreement with analytical results for simple network microstructures, such as periodic arrays of parallel cylinders. Specifically, the Torquato approximation provides the most accurate estimates of De for all collagen concentrations among all of the analytical approximations we consider. We formulate a universal curve for τ for the networks at different collagen concentrations, extending the work of Yeong and Torquato [J. Chem. Phys. 106, 8814 (1997)]. We apply rigorous cross-property relations to estimate the effective bulk modulus of collagen networks from knowledge of the effective diffusion coefficient computed here.
The use of cross-property relations to link other physical properties to the transport properties of collagen networks is also discussed. PMID:22683739
Statistically Validated Networks in Bipartite Complex Systems
Tumminello, Michele; Miccichè, Salvatore; Lillo, Fabrizio; Piilo, Jyrki; Mantegna, Rosario N.
2011-01-01
Many complex systems present an intrinsic bipartite structure where elements of one set link to elements of the second set. In these complex systems, such as the system of actors and movies, elements of one set are qualitatively different from elements of the other set. The properties of these complex systems are typically investigated by constructing and analyzing a network projected onto one of the two sets (for example the actor network or the movie network). Complex systems are often very heterogeneous in the number of relationships that the elements of one set establish with the elements of the other set, and this heterogeneity makes it very difficult to discriminate links of the projected network that merely reflect the system's heterogeneity from links relevant to unveiling the properties of the system. Here we introduce an unsupervised method to statistically validate each link of a projected network against a null hypothesis that takes into account system heterogeneity. We apply the method to a biological, an economic and a social complex system. The method we propose is able to detect network structures which are very informative about the organization and specialization of the investigated systems, and identifies those relationships between elements of the projected network that cannot be explained simply by system heterogeneity. We also show that our method applies to bipartite systems in which different relationships might have different qualitative natures, generating statistically validated networks in which such differences are preserved. PMID:21483858
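In its simplest form, a heterogeneity-aware null hypothesis for a projected link is a hypergeometric co-occurrence test. A sketch under that assumption, with hypothetical actor/movie numbers; the Bonferroni step mimics, rather than reproduces, the paper's multiple-hypothesis correction:

```python
from scipy.stats import hypergeom

def link_pvalue(N, n_i, n_j, n_ij):
    """Probability, under random mixing, that elements i and j (linked
    to n_i and n_j of the N elements of the other set) share at least
    n_ij of them. A small value validates the projected link i-j."""
    return hypergeom.sf(n_ij - 1, N, n_i, n_j)

# Two actors appearing in 10 and 8 of 1000 movies, co-starring in 5:
p = link_pvalue(1000, 10, 8, 5)

# Keep the link only if it survives a Bonferroni correction over all
# tested pairs (here a hypothetical 500-node projected network).
n_tests = 500 * 499 // 2
validated = p < 0.01 / n_tests
```

Because the test conditions on the degrees of both elements, a hub pair needs a much larger overlap than a low-degree pair to be validated, which is exactly how the method separates real structure from heterogeneity.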
Ali, Nora A; Mourad, Hebat-Allah M; ElSayed, Hany M; El-Soudani, Magdy; Amer, Hassanein H; Daoud, Ramez M
2016-11-01
Interference is the most important problem in LTE and LTE-Advanced networks. In this paper, interference was investigated in terms of the downlink signal to interference and noise ratio (SINR). In order to compare the different frequency reuse methods that were developed to enhance the SINR, it is helpful to have a generalized expression for studying the performance of the different methods. Therefore, this paper introduces general expressions for the SINR in homogeneous and in heterogeneous networks. In homogeneous networks, the expression was applied to the most common types of frequency reuse techniques: soft frequency reuse (SFR) and fractional frequency reuse (FFR). The expression was examined by comparing it with previously developed ones in the literature, and the comparison showed that the expression is valid for any type of frequency reuse scheme and any network topology. Furthermore, the expression was extended to cover heterogeneous networks (HetNets); it includes the problem of co-tier and cross-tier interference and was examined by the same method as the homogeneous one.
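The quantity such expressions generalize is the basic downlink SINR. The sketch below shows only that common textbook form, not the paper's generalized expression, and all numbers are purely illustrative:

```python
def downlink_sinr(p_serv, g_serv, interferers, noise):
    """Generic downlink SINR: serving-cell received power over the sum
    of co-channel interference plus thermal noise (linear units).

    interferers: (tx_power, channel_gain) pairs for every cell that
    reuses the serving cell's band under the chosen reuse scheme
    (SFR, FFR, ...); cells assigned to other bands simply drop out of
    the sum, which is how reuse schemes raise the SINR."""
    interference = sum(p * g for p, g in interferers)
    return (p_serv * g_serv) / (interference + noise)

# Toy linear-scale numbers: full reuse vs. excluding one interferer,
# as a fractional-frequency-reuse assignment would.
full = downlink_sinr(40.0, 1e-7, [(40.0, 2e-8), (40.0, 1e-8)], 1e-9)
ffr = downlink_sinr(40.0, 1e-7, [(40.0, 1e-8)], 1e-9)
```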
Laplacian normalization and random walk on heterogeneous networks for disease-gene prioritization.
Zhao, Zhi-Qin; Han, Guo-Sheng; Yu, Zu-Guo; Li, Jinyan
2015-08-01
Random walk on heterogeneous networks is a recently emerging approach to effective disease gene prioritization. Laplacian normalization is a technique capable of normalizing the weight of edges in a network. We use this technique to normalize the gene matrix and the phenotype matrix before the construction of the heterogeneous network, and also use this idea to define the transition matrices of the heterogeneous network. Our method has remarkably better performance than the existing methods for recovering known gene-phenotype relationships. The Shannon information entropy of the distribution of the transition probabilities in our networks is found to be smaller than in the networks constructed by the existing methods, implying that a higher number of top-ranked genes can be verified as disease genes. In fact, the most probable gene-phenotype relationships ranked within the top 3 or top 5 in our gene lists can be confirmed by the OMIM database in many cases. Our algorithms have shown remarkably superior performance over the state-of-the-art algorithms for recovering gene-phenotype relationships. All Matlab codes are available upon email request. Copyright © 2015 Elsevier Ltd. All rights reserved.
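A single-network sketch of the two ingredients named above, Laplacian normalization and a random walk with restart, may help fix ideas. The paper's actual method builds transition matrices for a heterogeneous gene-phenotype network, which this toy example omits; the path graph and restart probability are arbitrary:

```python
import numpy as np

def laplacian_normalize(W):
    """Symmetric Laplacian normalization: W' = D^{-1/2} W D^{-1/2},
    where D is the diagonal matrix of row sums of W."""
    d = W.sum(axis=1)
    inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return W * inv_sqrt[:, None] * inv_sqrt[None, :]

def random_walk_with_restart(M, seeds, restart=0.7, tol=1e-12):
    """Iterate p <- (1 - r) M p + r p0 to a fixed point; the steady
    state scores every node by its proximity to the seed set."""
    p0 = np.zeros(M.shape[0])
    p0[list(seeds)] = 1.0 / len(seeds)
    p = p0.copy()
    while True:
        p_next = (1 - restart) * M @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Toy network: a path 0-1-2-3, seeded at node 0 (the "known gene").
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
scores = random_walk_with_restart(laplacian_normalize(W), seeds=[0])
```

Nodes closer to the seed receive higher steady-state scores, which is the ranking used for prioritization.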
Seamless interworking architecture for WBAN in heterogeneous wireless networks with QoS guarantees.
Khan, Pervez; Ullah, Niamat; Ullah, Sana; Kwak, Kyung Sup
2011-10-01
The IEEE 802.15.6 standard is a communication standard optimized for low-power and short-range in-body/on-body nodes to serve a variety of medical, consumer electronics and entertainment applications. Providing high mobility with guaranteed Quality of Service (QoS) to a WBAN user in heterogeneous wireless networks is a challenging task. A WBAN uses a Personal Digital Assistant (PDA) to gather data from body sensors and forwards it to a remote server through wide-range wireless networks. In this paper, we present a coexistence study of WBAN with Wireless Local Area Networks (WLAN) and Wireless Wide Area Networks (WWANs). The main issue is interworking of WBAN in heterogeneous wireless networks, including seamless handover, QoS, emergency services, cooperation and security. We propose a Seamless Interworking Architecture (SIA) for WBAN in heterogeneous wireless networks based on a cost function. The cost function is based on power consumption and data throughput costs. Our simulation results show that the proposed scheme outperforms typical approaches in terms of throughput, delay and packet loss rate.
Collectives for Multiple Resource Job Scheduling Across Heterogeneous Servers
NASA Technical Reports Server (NTRS)
Tumer, K.; Lawson, J.
2003-01-01
Efficient management of large-scale, distributed data storage and processing systems is a major challenge for many computational applications. Many of these systems are characterized by multi-resource tasks processed across a heterogeneous network. Conventional approaches, such as load balancing, work well for centralized, single resource problems, but break down in the more general case. In addition, most approaches are often based on heuristics that do not directly attempt to optimize the world utility. In this paper, we propose an agent based control system using the theory of collectives. We configure the servers of our network with agents that make local job scheduling decisions. These decisions are based on local goals which are constructed to be aligned with the objective of optimizing the overall efficiency of the system. We demonstrate that multi-agent systems in which all the agents attempt to optimize the same global utility function (team game) only marginally outperform conventional load balancing. On the other hand, agents configured using collectives outperform both team games and load balancing (by up to four times for the latter), despite their distributed nature and their limited access to information.
Shamwell, E Jared; Nothwang, William D; Perlis, Donald
2018-05-04
Aimed at improving size, weight, and power (SWaP)-constrained robotic vision-aided state estimation, we describe our unsupervised, deep convolutional-deconvolutional sensor fusion network, Multi-Hypothesis DeepEfference (MHDE). MHDE learns to intelligently combine noisy heterogeneous sensor data to predict several probable hypotheses for the dense, pixel-level correspondence between a source image and an unseen target image. We show how our multi-hypothesis formulation provides increased robustness against dynamic, heteroscedastic sensor and motion noise by computing hypothesis image mappings and predictions at 76-357 Hz depending on the number of hypotheses being generated. MHDE fuses noisy, heterogeneous sensory inputs using two parallel, inter-connected architectural pathways and n (1-20 in this work) multi-hypothesis generating sub-pathways to produce n global correspondence estimates between a source and a target image. We evaluated MHDE on the KITTI Odometry dataset and benchmarked it against the vision-only DeepMatching and Deformable Spatial Pyramids algorithms and were able to demonstrate a significant runtime decrease and a performance increase compared to the next-best performing method.
Quasirandom geometric networks from low-discrepancy sequences
NASA Astrophysics Data System (ADS)
Estrada, Ernesto
2017-08-01
We define quasirandom geometric networks using low-discrepancy sequences, such as Halton, Sobol, and Niederreiter. The networks are built in d dimensions by considering the d-tuples of digits generated by these sequences as the coordinates of the vertices of the networks in a d-dimensional unit hypercube I^d. Then, two vertices are connected by an edge if they are at a distance smaller than a connection radius. We investigate computationally 11 network-theoretic properties of two-dimensional quasirandom networks and compare them with analogous random geometric networks. We also study their degree distribution and their spectral density distributions. We conclude from this intensive computational study that in terms of the uniformity of the distribution of the vertices in the unit square, the quasirandom networks look more random than the random geometric networks. We include an analysis of potential strategies for generating higher-dimensional quasirandom networks, where it is known that some of the low-discrepancy sequences are highly correlated. In this respect, we conclude that up to dimension 20, the use of scrambling, skipping and leaping strategies generates quasirandom networks with the desired properties of uniformity. Finally, we consider a diffusive process taking place on the nodes and edges of the quasirandom and random geometric graphs. We show that the diffusion time is shorter in the quasirandom graphs as a consequence of their larger structural homogeneity. In the random geometric graphs the diffusion produces clusters of concentration that slow the process down. Such clusters are a direct consequence of the heterogeneous and irregular distribution of the nodes in the unit square on which the generation of random geometric graphs is based.
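The construction described above is easy to reproduce for the two-dimensional Halton sequence (bases 2 and 3); the point count and connection radius below are arbitrary choices, not values from the study:

```python
import math

def radical_inverse(i, base):
    """Van der Corput radical inverse of the integer i in the given
    base: reflect its base-b digits about the radix point."""
    f, result = 1.0, 0.0
    while i > 0:
        f /= base
        result += f * (i % base)
        i //= base
    return result

def halton_2d(n):
    """First n points of the 2-D Halton sequence (bases 2 and 3)."""
    return [(radical_inverse(i, 2), radical_inverse(i, 3))
            for i in range(1, n + 1)]

def quasirandom_geometric_graph(n, radius):
    """Connect every pair of Halton points whose Euclidean distance is
    below the connection radius."""
    pts = halton_2d(n)
    edges = [(a, b)
             for a in range(n) for b in range(a + 1, n)
             if math.dist(pts[a], pts[b]) < radius]
    return pts, edges

pts, edges = quasirandom_geometric_graph(50, 0.2)
```

Replacing `halton_2d` with uniformly random points gives the analogous random geometric graph the abstract compares against.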
A survey of CPU-GPU heterogeneous computing techniques
Mittal, Sparsh; Vetter, Jeffrey S.
2015-07-04
As both CPUs and GPUs become employed in a wide range of applications, it has been acknowledged that both of these processing units (PUs) have unique features and strengths, and hence CPU-GPU collaboration is inevitable to achieve high-performance computing. This has motivated a significant amount of research on heterogeneous computing techniques, along with the design of CPU-GPU fused chips and petascale heterogeneous supercomputers. In this paper, we survey heterogeneous computing techniques (HCTs), such as workload partitioning, which enable utilizing both CPU and GPU to improve performance and/or energy efficiency. We review heterogeneous computing approaches at the runtime, algorithm, programming, compiler and application levels. Further, we review both discrete and fused CPU-GPU systems and discuss benchmark suites designed for evaluating heterogeneous computing systems (HCSs). Furthermore, we believe that this paper will provide insights into the working and scope of applications of HCTs to researchers, and motivate them to further harness the computational powers of CPUs and GPUs to achieve the goal of exascale performance.
Towards ubiquitous access of computer-assisted surgery systems.
Liu, Hui; Lufei, Hanping; Shi, Weishong; Chaudhary, Vipin
2006-01-01
Traditional stand-alone computer-assisted surgery (CAS) systems impede ubiquitous and simultaneous access by multiple users. With advances in computing and networking technologies, ubiquitous access to CAS systems becomes possible and promising. Based on our preliminary work, CASMIL, a stand-alone CAS server developed at Wayne State University, we propose a novel mobile CAS system, UbiCAS, which allows surgeons to retrieve, review and interpret multimodal medical images, and to perform some critical neurosurgical procedures on heterogeneous devices from anywhere at any time. Furthermore, various optimization techniques, including caching, prefetching, a pseudo-streaming model, and compression, are used to guarantee the QoS of the UbiCAS system. UbiCAS enables doctors at remote locations to actively participate in remote surgeries and to share patient information in real time before, during, and after the surgery.
CFD Research, Parallel Computation and Aerodynamic Optimization
NASA Technical Reports Server (NTRS)
Ryan, James S.
1995-01-01
During the last five years, CFD has matured substantially. Pure CFD research remains to be done, but much of the focus has shifted to integration of CFD into the design process. The work under these cooperative agreements reflects this trend. The recent work, and work which is planned, is designed to enhance the competitiveness of the US aerospace industry. CFD and optimization approaches are being developed and tested, so that the industry can better choose which methods to adopt in their design processes. The range of computer architectures has been dramatically broadened, as the assumption that only huge vector supercomputers could be useful has faded. Today, researchers and industry can trade off time, cost, and availability, choosing vector supercomputers, scalable parallel architectures, networked workstations, or heterogeneous combinations of these to complete required computations efficiently.
2005-06-01
virtualisation of distributed computing and data resources such as processing, network bandwidth, and storage capacity, to create a single system...and Simulation (M&S) will be integrated into this heterogeneous SOA. M&S functionality will be available in the form of operational M&S services. One...documents defining net centric warfare, the use of M&S functionality is a common theme. Alberts and Hayes give a good overview on net centric operations
2010-01-01
Smoking Behavior and Friendship Formation: The Importance of Time Heterogeneity in Studying Social Network Dynamics Joshua A. Lospinoso Department of...djsatchell@gmail.com Abstract—This study illustrates the importance of assessing and accounting for time heterogeneity in longitudinal social network analysis. We apply the time heterogeneity model selection procedure of [1] to a dataset collected on social tie formation for university freshmen in the
Enhancing gene regulatory network inference through data integration with markov random fields
Banf, Michael; Rhee, Seung Y.
2017-02-01
Here, a gene regulatory network links transcription factors to their target genes and represents a map of transcriptional regulation. Much progress has been made in deciphering gene regulatory networks computationally. However, gene regulatory network inference for most eukaryotic organisms remains challenging. To improve the accuracy of gene regulatory network inference and facilitate candidate selection for experimentation, we developed an algorithm called GRACE (Gene Regulatory network inference ACcuracy Enhancement). GRACE exploits a priori biological knowledge and heterogeneous data integration to generate high-confidence network predictions for eukaryotic organisms using Markov Random Fields in a semi-supervised fashion. GRACE uses a novel optimization scheme to integrate regulatory evidence and biological relevance. It is particularly suited for model learning with sparse regulatory gold standard data. We show GRACE's potential to produce high-confidence regulatory networks compared to state-of-the-art approaches using Drosophila melanogaster and Arabidopsis thaliana data. In an A. thaliana developmental gene regulatory network, GRACE recovers cell cycle related regulatory mechanisms and further hypothesizes several novel regulatory links, including a putative control mechanism of vascular structure formation due to modifications in cell proliferation.
Mercado, Eduardo; Church, Barbara A
2016-08-01
Children with autism spectrum disorder (ASD) sometimes have difficulties learning categories. Past computational work suggests that such deficits may result from atypical representations in cortical maps. Here we use neural networks to show that idiosyncratic transformations of inputs can result in the formation of feature maps that impair category learning for some inputs, but not for other closely related inputs. These simulations suggest that large inter- and intra-individual variations in learning capacities shown by children with ASD across similar categorization tasks may similarly result from idiosyncratic perceptual encoding that is resistant to experience-dependent changes. If so, then both feedback- and exposure-based category learning should lead to heterogeneous, stimulus-dependent deficits in children with ASD.
Effects of Heterogeneous Social Interactions on Flocking Dynamics
NASA Astrophysics Data System (ADS)
Miguel, M. Carmen; Parley, Jack T.; Pastor-Satorras, Romualdo
2018-02-01
Social relationships characterize the interactions that occur within social species and may have an important impact on collective animal motion. Here, we consider a variation of the standard Vicsek model for collective motion in which interactions are mediated by an empirically motivated scale-free topology that represents a heterogeneous pattern of social contacts. We observe that the degree of order of the model is strongly affected by network heterogeneity: more heterogeneous networks show a more resilient ordered state, while less heterogeneity leads to a more fragile ordered state that can be destroyed by sufficient external noise. Our results challenge the previously accepted equivalence between the static Vicsek model and the equilibrium XY model on the network of connections, and point towards a possible equivalence with models exhibiting a different symmetry.
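A minimal network variant of the Vicsek model gives a sketch of the kind of dynamics studied here, assuming the usual angular-alignment update with uniform noise; the complete-graph example and noise amplitude are illustrative stand-ins for the paper's scale-free contact topology:

```python
import math
import random

def vicsek_step(theta, adj, eta, rng):
    """One update of the network Vicsek model: each agent adopts the
    mean heading of itself and its network neighbors, perturbed by
    noise drawn uniformly from [-eta/2, eta/2]."""
    new = []
    for i, nbrs in enumerate(adj):
        group = [i] + list(nbrs)
        sx = sum(math.cos(theta[j]) for j in group)
        sy = sum(math.sin(theta[j]) for j in group)
        new.append(math.atan2(sy, sx) + eta * (rng.random() - 0.5))
    return new

def order_parameter(theta):
    """Modulus of the average heading vector: 1 means perfect order."""
    n = len(theta)
    return math.hypot(sum(map(math.cos, theta)) / n,
                      sum(map(math.sin, theta)) / n)

# Complete graph of 10 agents at low noise: order emerges quickly.
# Swapping adj for a scale-free contact network is what changes how
# resilient the ordered state is, per the abstract.
rng = random.Random(1)
n = 10
adj = [[j for j in range(n) if j != i] for i in range(n)]
theta = [rng.uniform(-math.pi, math.pi) for _ in range(n)]
for _ in range(50):
    theta = vicsek_step(theta, adj, eta=0.1, rng=rng)
```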
Synchronization in networks with heterogeneous coupling delays
NASA Astrophysics Data System (ADS)
Otto, Andreas; Radons, Günter; Bachrathy, Dániel; Orosz, Gábor
2018-01-01
Synchronization in networks of identical oscillators with heterogeneous coupling delays is studied. A decomposition of the network dynamics is obtained by block diagonalizing a newly introduced adjacency lag operator, which contains the topology of the network as well as the corresponding coupling delays. This generalizes the master stability function approach, which was developed for homogeneous delays. As a result, the network dynamics can be analyzed by delay differential equations with distributed delay, where different delay distributions emerge for different network modes. Frequency domain methods are used for the stability analysis of synchronized equilibria and synchronized periodic orbits. As an example, the synchronization behavior in a system of delay-coupled Hodgkin-Huxley neurons is investigated. It is shown that the parameter regions where synchronized periodic spiking is unstable expand when the delay heterogeneity is increased.
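The homogeneous-delay special case that the abstract generalizes can be made concrete: diagonalizing the adjacency matrix splits a linearly delay-coupled network into independent scalar mode equations x'(t) = a x(t) + k*mu_m x(t - tau), one per adjacency eigenvalue mu_m. The sketch below, with illustrative parameters of my own choosing (ring topology, a = -1, k = 0.4, tau = 1), Euler-integrates each mode and checks that all modes decay, i.e. the synchronized state is stable for this parameter set.

```python
import math
from collections import deque

def dde_mode(a, k, mu, tau, dt=0.001, t_end=20.0):
    """Euler-integrate the scalar mode equation x'(t) = a*x(t) + k*mu*x(t-tau),
    to which the master-stability approach reduces each network mode when
    every link carries the same delay (homogeneous-delay special case)."""
    d = int(round(tau / dt))
    hist = deque([1.0] * d)        # constant initial history on [-tau, 0)
    x = 1.0
    for _ in range(int(t_end / dt)):
        x_delayed = hist.popleft()
        x = x + dt * (a * x + k * mu * x_delayed)
        hist.append(x)
    return x

# Undirected ring of N nodes: adjacency eigenvalues are mu_m = 2*cos(2*pi*m/N),
# so the network problem splits into N independent scalar mode equations.
N = 10
mus = [2.0 * math.cos(2.0 * math.pi * m / N) for m in range(N)]
final = [abs(dde_mode(a=-1.0, k=0.4, mu=mu, tau=1.0)) for mu in mus]
```

With |k*mu| < |a| for every eigenvalue, each mode's characteristic equation has only stable roots, so all trajectories decay from their unit initial history.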
Distributed sensor coordination for advanced energy systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumer, Kagan
Motivation: The ability to collect key system level information is critical to the safe, efficient and reliable operation of advanced power systems. Recent advances in sensor technology have enabled some level of decision making directly at the sensor level. However, coordinating large numbers of sensors, particularly heterogeneous sensors, to achieve system level objectives such as predicting plant efficiency, reducing downtime or predicting outages requires sophisticated coordination algorithms. Indeed, a critical issue in such systems is how to ensure that the interactions of a large number of heterogeneous system components do not interfere with one another and lead to undesirable behavior. Objectives and Contributions: The long-term objective of this work is to provide sensor deployment, coordination and networking algorithms for large numbers of sensors to ensure the safe, reliable, and robust operation of advanced energy systems. Our two specific objectives are to: 1. Derive sensor performance metrics for heterogeneous sensor networks. 2. Demonstrate effectiveness, scalability and reconfigurability of heterogeneous sensor networks in advanced power systems. The key technical contribution of this work is to push the coordination step into the design of the objective functions of the sensors, allowing networks of heterogeneous sensors to be controlled. By ensuring that the control and coordination is not specific to particular sensor hardware, this approach enables the design and operation of large heterogeneous sensor networks. In addition to the coordination mechanism, this approach allows the system to be reconfigured in response to changing needs (e.g., sudden external events requiring new responses) or changing sensor network characteristics (e.g., sudden changes to plant condition).
Impact: The impact of this work extends to a large class of problems relevant to the National Energy Technology Laboratory including sensor placement, heterogeneous sensor coordination, and sensor network control in advanced power systems. Each application has specific needs, but they all share one crucial underlying problem: how to ensure that the interactions of a large number of heterogeneous agents lead to coordinated system behavior. This proposal describes a new paradigm that addresses that very issue in a systematic way. Key Results and Findings: All milestones have been completed. Our results demonstrate that by properly shaping agent objective functions, we can develop large (up to 10,000 devices) heterogeneous sensor networks with key desirable properties. The first milestone shows that properly choosing agent-specific objective functions increases system performance by up to 99.9% compared to global evaluations. The second milestone shows that evolutionary algorithms learn excellent sensor network coordination policies prior to network deployment, and that these policies can be refined online once the network is deployed. The third milestone shows that the resulting sensor networks are extremely robust to sensor noise: networks with up to 25% sensor noise are capable of providing measurements with errors on the order of 10⁻³. The fourth milestone shows that the resulting sensor networks are extremely robust to sensor failure, with 25% of the sensors in the system failing causing no significant performance losses after system reconfiguration.
A mixed SIR-SIS model to contain a virus spreading through networks with two degrees
NASA Astrophysics Data System (ADS)
Essouifi, Mohamed; Achahbar, Abdelfattah
Because the “nodes” and “links” of real networks are heterogeneous, to model computer virus prevalence throughout the Internet we borrow the idea of the reduced scale-free network, which was introduced recently. The purpose of this paper is to extend the previous deterministic two-subchain Susceptible-Infected-Susceptible (SIS) model into a mixed Susceptible-Infected-Recovered and Susceptible-Infected-Susceptible (SIR-SIS) model to contain computer virus spreading over networks with two degrees. Moreover, we develop its stochastic counterpart. Due to the high protection and security measures taken for the hubs class, we suggest treating it with an SIR epidemic model rather than an SIS one. The analytical study reveals that the proposed model admits a stable viral equilibrium. It is shown numerically that the mean dynamic behavior of the stochastic model agrees with the deterministic one. Unlike the infection densities i2 and i, which both tend to a viral equilibrium under both approaches as in the previous study, i1 tends to the virus-free equilibrium. Furthermore, since a proportion of infectives are recovered, the global infection density i is minimized. The permanent presence of viruses in the network is therefore due to the lower-degree node class. Many suggestions are put forward for containing virus propagation and minimizing its damage.
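The qualitative behavior described above (hub class i1 reaching the virus-free state while the low-degree class sustains a viral equilibrium) can be reproduced with a minimal deterministic sketch: an SIR block for the hubs, an SIS block for the low-degree class, and a shared homogeneous-mixing force of infection. The mixing form and all rates below are illustrative assumptions, not the paper's exact equations.

```python
def mixed_sir_sis(beta1, gamma1, beta2, gamma2, n1=0.2, n2=0.8,
                  dt=0.01, t_end=400.0):
    """Euler integration of a two-class sketch: class 1 (hubs) follows SIR,
    class 2 (low-degree nodes) follows SIS, with a common force of infection.
    Returns final (s1, i1, r1, i2)."""
    s1, i1, r1 = n1 - 0.01, 0.01, 0.0
    i2 = 0.01
    for _ in range(int(t_end / dt)):
        force = i1 + i2                    # homogeneous-mixing approximation
        s2 = n2 - i2                       # SIS class: susceptible = n2 - i2
        ds1 = -beta1 * s1 * force
        di1 = beta1 * s1 * force - gamma1 * i1
        dr1 = gamma1 * i1                  # hubs recover permanently (SIR)
        di2 = beta2 * s2 * force - gamma2 * i2
        s1 += dt * ds1; i1 += dt * di1; r1 += dt * dr1; i2 += dt * di2
    return s1, i1, r1, i2

s1, i1, r1, i2 = mixed_sir_sis(beta1=0.8, gamma1=0.3, beta2=1.0, gamma2=0.3)
```

With these rates the SIS class settles at its endemic level (here beta2*(n2 - i2) = gamma2 gives i2 = 0.5), while the SIR hub class burns out: i1 tends to the virus-free equilibrium, matching the abstract's qualitative result.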
Praveen, Paurush; Fröhlich, Holger
2013-01-01
Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal to noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, like pathway databases, GO terms and protein domain data, etc. and is flexible enough to integrate new sources, if available.
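The Noisy-OR combination mentioned above has a compact closed form: each information source that reports an interaction is treated as an independent potential "cause", succeeding with its own reliability, so support from any single strong source dominates. The function below is a generic Noisy-OR sketch with an optional leak term; the specific reliabilities are illustrative, not fitted values from the paper.

```python
def noisy_or(evidence, reliabilities, leak=0.0):
    """Noisy-OR prior for one candidate edge.

    evidence      : 0/1 flags, one per information source (does it report the edge?)
    reliabilities : per-source success probabilities r_k
    leak          : baseline probability that the edge exists with no support

    P(edge) = 1 - (1 - leak) * prod_k (1 - r_k) over the reporting sources,
    so the strongest reporting source sets a floor on the combined belief.
    """
    p_no_cause = 1.0 - leak
    for present, r in zip(evidence, reliabilities):
        if present:
            p_no_cause *= (1.0 - r)
    return 1.0 - p_no_cause

# Two sources (e.g. a pathway database and GO-term co-annotation):
p_one  = noisy_or([1, 0], [0.8, 0.6])   # only the reliable source reports
p_both = noisy_or([1, 1], [0.8, 0.6])   # both sources report
```

Because the causes combine multiplicatively on the "no edge" side, adding a second supporting source can only increase the prior, which is the sense in which Noisy-OR "picks up the strongest support" among heterogeneous sources.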
LiteNet: Lightweight Neural Network for Detecting Arrhythmias at Resource-Constrained Mobile Devices
Zhang, Xiaoqing; Cao, Yangjie; Liu, Zhi; Zhang, Bo; Wang, Xiaoyan
2018-01-01
By running applications and services closer to the user, edge processing provides many advantages, such as short response time and reduced network traffic. Deep-learning based algorithms provide significantly better performance than traditional algorithms in many fields but demand more resources, such as higher computational power and more memory. Hence, designing deep learning algorithms that are more suitable for resource-constrained mobile devices is vital. In this paper, we build a lightweight neural network, termed LiteNet, which uses a deep learning algorithm design to diagnose arrhythmias, as an example to show how we design deep learning schemes for resource-constrained mobile devices. Compared to other deep learning models with equivalent accuracy, LiteNet has several advantages. It requires less memory, incurs lower computational cost, and is more feasible for deployment on resource-constrained mobile devices. It can be trained faster than other neural network algorithms and requires less communication across different processing units during distributed training. It uses filters of heterogeneous size in a convolutional layer, which contributes to the generation of various feature maps. The algorithm was tested using the MIT-BIH electrocardiogram (ECG) arrhythmia database; the results showed that LiteNet outperforms comparable schemes in diagnosing arrhythmias and in its feasibility for use on mobile devices. PMID:29673171
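The "filters of heterogeneous size in a convolutional layer" idea can be sketched independently of any deep-learning framework: several 1D kernels of different widths are applied to the same input in parallel and their feature maps are stacked, inception-style. The pure-Python convolution below, the mean-filter kernels, and the toy ECG segment are all illustrative assumptions, not LiteNet's actual filters.

```python
def conv1d_same(signal, kernel):
    """1D convolution with zero 'same' padding: output length == input length."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(signal) + [0.0] * pad
    return [sum(kernel[j] * padded[i + j] for j in range(k))
            for i in range(len(signal))]

def hetero_filter_block(signal, kernels):
    """Apply filters of heterogeneous sizes in parallel and stack the
    resulting feature maps, as in an inception-style convolutional layer."""
    return [conv1d_same(signal, k) for k in kernels]

# Toy ECG-like segment and three mean filters of widths 3, 5 and 7:
ecg = [0.0, 0.1, 0.9, 0.2, -0.3, 0.0, 0.05, 0.0]
maps = hetero_filter_block(ecg, [[1 / 3] * 3, [1 / 5] * 5, [1 / 7] * 7])
```

Each kernel width responds to morphology at a different time scale, which is the motivation for mixing kernel sizes within one layer rather than fixing a single receptive field.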
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.
1995-01-01
Computing architectures are being assembled that extend concurrent engineering practices by providing more efficient execution and collaboration on distributed, heterogeneous computing networks. Built on the successes of initial architectures, requirements for a next-generation design computing infrastructure can be developed. These requirements concentrate on those needed by a designer in decision-making processes from product conception to recycling and can be categorized in two areas: design process and design information management. A designer both designs and executes design processes throughout design time to achieve better product and process capabilities while expending fewer resources. In order to accomplish this, information, or more appropriately design knowledge, needs to be adequately managed during product and process decomposition as well as recomposition. A foundation has been laid that captures these requirements in a design architecture called DREAMS (Developing Robust Engineering Analysis Models and Specifications). In addition, a computing infrastructure, called IMAGE (Intelligent Multidisciplinary Aircraft Generation Environment), is being developed that satisfies design requirements defined in DREAMS and incorporates enabling computational technologies.
Chevalier, Marc; Toporikova, Natalia; Simmers, John; Thoby-Brisson, Muriel
2016-01-01
Breathing is a vital rhythmic behavior generated by hindbrain neuronal circuitry, including the preBötzinger complex network (preBötC) that controls inspiration. The emergence of preBötC network activity during prenatal development has been described, but little is known regarding inspiratory neurons expressing pacemaker properties at embryonic stages. Here, we combined calcium imaging and electrophysiological recordings in mouse embryo brainstem slices together with computational modeling to reveal the existence of heterogeneous pacemaker oscillatory properties relying on distinct combinations of burst-generating INaP and ICAN conductances. The respective proportion of the different inspiratory pacemaker subtypes changes during prenatal development. Concomitantly, network rhythmogenesis switches from a purely INaP/ICAN-dependent mechanism at E16.5 to a combined pacemaker/network-driven process at E18.5. Our results provide the first description of pacemaker bursting properties in embryonic preBötC neurons and indicate that network rhythmogenesis undergoes important changes during prenatal development through alterations in both circuit properties and the biophysical characteristics of pacemaker neurons. DOI: http://dx.doi.org/10.7554/eLife.16125.001 PMID:27434668
Sadeh, Sadra; Rotter, Stefan
2014-01-01
Neurons in the primary visual cortex are more or less selective for the orientation of a light bar used for stimulation. A broad distribution of individual grades of orientation selectivity has in fact been reported in all species. A possible reason for emergence of broad distributions is the recurrent network within which the stimulus is being processed. Here we compute the distribution of orientation selectivity in randomly connected model networks that are equipped with different spatial patterns of connectivity. We show that, for a wide variety of connectivity patterns, a linear theory based on firing rates accurately approximates the outcome of direct numerical simulations of networks of spiking neurons. Distance dependent connectivity in networks with a more biologically realistic structure does not compromise our linear analysis, as long as the linearized dynamics, and hence the uniform asynchronous irregular activity state, remain stable. We conclude that linear mechanisms of stimulus processing are indeed responsible for the emergence of orientation selectivity and its distribution in recurrent networks with functionally heterogeneous synaptic connectivity. PMID:25469704
Makedonska, Nataliia; Hyman, Jeffrey D.; Karra, Satish; ...
2016-08-01
The apertures of natural fractures in fractured rock are highly heterogeneous. However, in-fracture aperture variability is often neglected in flow and transport modeling, and individual fractures are assumed to have a uniform aperture distribution. The relative importance of in-fracture variability in flow and transport modeling within kilometer-scale fracture networks has long been under debate, since the flow in each single fracture is controlled not only by in-fracture variability but also by boundary conditions. Computational limitations have previously prohibited researchers from investigating the relative importance of in-fracture variability in flow and transport modeling within large-scale fracture networks. We address this question by incorporating internal heterogeneity of individual fractures into flow simulations within kilometer-scale three-dimensional fracture networks, where fracture intensity, P32 (ratio between total fracture area and domain volume), is between 0.027 and 0.031 [1/m]. The recently developed discrete fracture network (DFN) simulation capability, dfnWorks, is used to generate kilometer-scale DFNs that include in-fracture aperture variability represented by a stationary log-normal stochastic field with various correlation lengths and variances. The Lagrangian transport parameters, non-reacting travel time and cumulative retention, are calculated along particle streamlines. We observe that, due to local flow channeling, early particle travel times are more sensitive to in-fracture aperture variability than the tails of the travel time distributions, where no significant effect of the in-fracture aperture variations and spatial correlation length is observed.
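A stationary log-normal field of the kind used above can be generated in one dimension with a very small amount of code: correlated Gaussian noise (here produced by moving-average smoothing, whose window sets the correlation length) is exponentiated to give strictly positive apertures. The construction and every parameter below are illustrative assumptions, not dfnWorks' actual field generator.

```python
import math, random

def lognormal_aperture_field(n, corr_len, sigma, mean_log=-8.0, seed=42):
    """Sketch of a stationary log-normal aperture field along a streamline:
    white Gaussian noise -> moving-average smoothing (correlation length
    corr_len) -> exponentiation. The 1/sqrt(corr_len) factor keeps the
    smoothed field at unit variance before scaling by sigma."""
    rng = random.Random(seed)
    white = [rng.gauss(0.0, 1.0) for _ in range(n + corr_len)]
    smooth = [sum(white[i:i + corr_len]) / math.sqrt(corr_len)
              for i in range(n)]
    return [math.exp(mean_log + sigma * g) for g in smooth]

field = lognormal_aperture_field(1000, corr_len=20, sigma=0.5)
```

Varying corr_len and sigma mimics the paper's sweep over correlation lengths and variances; larger sigma widens the aperture distribution and strengthens flow channeling.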
Accelerating Climate Simulations Through Hybrid Computing
NASA Technical Reports Server (NTRS)
Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark
2009-01-01
Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPUs) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for the connection, we identified two challenges: (1) an identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors, two IBM QS22 Cell blades, connected with InfiniBand), allowing compute-intensive functions to be seamlessly offloaded to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.
Grid and Cloud for Developing Countries
NASA Astrophysics Data System (ADS)
Petitdidier, Monique
2014-05-01
The European Grid e-infrastructure has shown the capacity to connect geographically distributed heterogeneous compute resources in a secure way, taking advantage of a robust and fast REN (Research and Education Network). In many countries, for example in Africa, the first step has been to implement a REN, with regional organizations like Ubuntunet, WACREN or ASREN coordinating the development and improvement of the network and its interconnection. Internet connectivity is still expanding rapidly in those countries. The second step has been to meet the compute needs of the scientists. Even though many of them have their own (possibly multi-core) laptops, for more and more applications this is not enough, because they face intensive computing demands due to the large amounts of data to be processed and/or complex codes. So far, one solution has been to go abroad to Europe or America to run large applications, or not to participate in international communities at all. The Grid is very attractive for connecting geographically distributed heterogeneous resources, aggregating new ones, and creating new sites on the REN with secure access. All users have the same services even if they have no resources in their own institute. With faster and more robust Internet they will be able to take advantage of the European Grid. There are different initiatives that provide resources and training, such as the UNESCO/HP Brain Gain initiative and EUMEDGrid. Nowadays Cloud computing is becoming very attractive, and Clouds are starting to be developed in some of these countries. This talk presents the challenges these countries face in implementing such e-infrastructures and in developing, in parallel, scientific and technical research and education in the new technologies, illustrated by examples.
NASA Astrophysics Data System (ADS)
Wang, Qingyun; Zhang, Honghui; Chen, Guanrong
2012-12-01
We study the effect of a heterogeneous neuron and information transmission delay on stochastic resonance of scale-free neuronal networks. For this purpose, we introduce the heterogeneity via the specified neuron with the highest degree. It is shown that, in the absence of delay, an intermediate noise level can optimally assist spike firings of collective neurons so as to achieve stochastic resonance on scale-free neuronal networks for small and intermediate values of the heterogeneity parameter αh. Maxima of the stochastic resonance measure are enhanced as αh increases, which implies that the heterogeneity can improve stochastic resonance. However, once αh is beyond a certain large value, no obvious stochastic resonance can be observed. If information transmission delay is introduced to the neuronal networks, stochastic resonance is dramatically affected. In particular, a tuned information transmission delay can induce multiple stochastic resonance, manifested as well-expressed maxima in the stochastic resonance measure appearing at every multiple of one half of the subthreshold stimulus period. Furthermore, stochastic resonance at odd multiples of one half of the subthreshold stimulus period is subharmonic, as opposed to the case of even multiples. More interestingly, multiple stochastic resonance can also be improved by a suitably heterogeneous neuron. The presented results provide insights into the effects of heterogeneous neurons and information transmission delay in realistic neuronal networks.
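The stochastic resonance measure referred to above is commonly computed as the Fourier amplitude of the system's output at the subthreshold stimulus frequency. The sketch below, a deliberately minimal stand-in for the network model (a single threshold unit driven by a subthreshold sine plus Gaussian noise, with parameters of my own choosing), shows the defining SR signature: near-zero response at very low noise, and a clear response at intermediate noise.

```python
import math, random

def sr_measure(amp, thresh, noise, n_cycles=50, pts=200, seed=7):
    """Fourier measure Q of a threshold unit driven by a subthreshold
    periodic signal plus noise: Q = (2/n) * |sum_k y_k exp(i*phase_k)|,
    evaluated at the stimulus frequency."""
    rng = random.Random(seed)
    qs = qc = 0.0
    n = n_cycles * pts
    for k in range(n):
        phase = 2.0 * math.pi * (k % pts) / pts
        x = amp * math.sin(phase) + rng.gauss(0.0, noise)
        y = 1.0 if x > thresh else 0.0        # unit "fires" on crossing
        qs += y * math.sin(phase)
        qc += y * math.cos(phase)
    return 2.0 / n * math.hypot(qs, qc)

q_low = sr_measure(0.8, 1.0, noise=0.01)   # signal alone never crosses
q_mid = sr_measure(0.8, 1.0, noise=0.30)   # noise-assisted crossings
```

Because the 0.8-amplitude signal stays below the unit threshold, crossings only occur with the help of noise, and at intermediate noise they cluster near the signal peaks, producing a finite Q.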
Visual analysis of large heterogeneous social networks by semantic and structural abstraction.
Shen, Zeqian; Ma, Kwan-Liu; Eliassi-Rad, Tina
2006-01-01
Social network analysis is an active area of study beyond sociology. It uncovers the invisible relationships between actors in a network and provides understanding of social processes and behaviors. It has become an important technique in a variety of application areas such as the Web, organizational studies, and homeland security. This paper presents a visual analytics tool, OntoVis, for understanding large, heterogeneous social networks, in which nodes and links could represent different concepts and relations, respectively. These concepts and relations are related through an ontology (also known as a schema). OntoVis is named such because it uses information in the ontology associated with a social network to semantically prune a large, heterogeneous network. In addition to semantic abstraction, OntoVis also allows users to do structural abstraction and importance filtering to make large networks manageable and to facilitate analytic reasoning. All these unique capabilities of OntoVis are illustrated with several case studies.
Henry, Teague; Gesell, Sabina B.; Ip, Edward H.
2016-01-01
Background: Social networks influence children and adolescents' physical activity. The focus of this paper is to examine differences in the effects of physical activity on friendship selection, with an eye to the implications for physical activity interventions for young children. Network interventions to increase physical activity are warranted but have not been conducted. Prior to implementing a network intervention in the field, it is important to understand potential heterogeneities in the effects that activity level has on network structure. In this study, the associations between activity level and cross-sectional network structure, and between activity level and change in network structure, are assessed. Methods: We studied a real-world friendship network among 81 children (average age 7.96 years) who lived in low-SES neighborhoods, attended public schools, and attended one of two structured aftercare programs, of which one was established and the other was new. We used the exponential random graph model (ERGM) and its longitudinal extension to evaluate the association between activity level and various demographic factors in having, forming, and dissolving friendships. Due to heterogeneity between the friendship networks within the aftercare programs, separate analyses were conducted for each network. Results: There was heterogeneity in the effect of physical activity on both cross-sectional network structure and the formation and dissolution processes, both across time and between networks. Conclusions: Network analysis could be used to assess the unique structure and dynamics of a social network before an intervention is implemented, so as to optimize the effects of the network intervention for increasing childhood physical activity. Additionally, if peer selection processes are changing within a network, a static network intervention strategy for childhood physical activity could become inefficient as the network evolves. PMID:27867518
Epidemic dynamics on a risk-based evolving social network
NASA Astrophysics Data System (ADS)
Antwi, Shadrack; Shaw, Leah
2013-03-01
Social network models have been used to study how behavior affects the dynamics of an infection in a population. Motivated by HIV, we consider how a trade-off between the benefits and risks of sexual connections determines network structure and disease prevalence. We define a stochastic network model with formation and breaking of links as changes in sexual contacts. Each node has an intrinsic benefit its neighbors derive from connecting to it. Nodes' infection status is not apparent to others, but nodes with more connections (higher degree) are assumed more likely to be infected. The probabilities of forming and breaking links are determined by a payoff computed from the benefit and degree-dependent risk. The disease is represented by an SI (susceptible-infected) model. We study network and epidemic evolution via Monte Carlo simulation and analytically predict the behavior with a heterogeneous mean-field approach. The dependence of network connectivity and infection threshold on parameters is determined, and the steady-state degree distribution and epidemic levels are obtained. We also study a situation where system-wide infection levels alter the perception of risk and cause nodes to adjust their behavior. This is a case of an adaptive network, where node status feeds back to change network geometry.
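The payoff-driven rewiring described above can be sketched as a Monte Carlo loop: a candidate link to node j is evaluated by its benefit minus a degree-dependent risk term, and that payoff (passed through a logistic function here, purely as an assumed functional form) sets the probability of forming or keeping the link. The payoff shape, logistic mapping, and parameters below are illustrative, not the paper's calibrated model.

```python
import math, random

def payoff(benefit_j, degree_j, risk_cost):
    """Payoff of linking to node j: its intrinsic benefit minus a
    degree-dependent risk (higher-degree partners are assumed riskier)."""
    return benefit_j - risk_cost * degree_j

def evolve(n, benefits, risk_cost, steps, seed=3):
    """Monte Carlo link dynamics on an initially empty undirected network:
    links that pay off tend to form, links that stop paying off tend to break."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)            # pick a distinct pair
        p = 1.0 / (1.0 + math.exp(-payoff(benefits[j], len(adj[j]), risk_cost)))
        if j in adj[i]:
            if rng.random() > p:                  # break with prob 1 - p
                adj[i].discard(j); adj[j].discard(i)
        elif rng.random() < p:                    # form with prob p
            adj[i].add(j); adj[j].add(i)
    return adj

benefits = [1.0] * 20
adj = evolve(20, benefits, risk_cost=0.2, steps=2000)
```

Because the risk term grows with the partner's degree, the same mechanism that rewards attractive nodes also caps their connectivity, which is the trade-off shaping the steady-state degree distribution in the model.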
NASA Astrophysics Data System (ADS)
Wollheim, W. M.; Stewart, R. J.
2011-12-01
Numerous types of heterogeneity exist within river systems, leading to hotspots of nutrient sources, sinks, and impacts embedded within an underlying gradient defined by river size. This heterogeneity influences the downstream propagation of anthropogenic impacts across flow conditions. We applied a river network model to explore how nitrogen saturation at river network scales is influenced by the abundance and distribution of potential nutrient processing hotspots (lakes, beaver ponds, tributary junctions, hyporheic zones) under different flow conditions. We determined that under low flow conditions, whole network nutrient removal is relatively insensitive to the number of hotspots because the underlying river network structure has sufficient nutrient processing capacity. However, hotspots become more important at higher flows and greatly influence the spatial distribution of removal within the network at all flows, suggesting that identification of heterogeneity is critical to develop predictive understanding of nutrient removal processes under changing loading and climate conditions. New temporally intensive data from in situ sensors can potentially help to better understand and constrain these dynamics.
Epidemic outbreaks in complex heterogeneous networks
NASA Astrophysics Data System (ADS)
Moreno, Y.; Pastor-Satorras, R.; Vespignani, A.
2002-04-01
We present a detailed analytical and numerical study for the spreading of infections with acquired immunity in complex population networks. We show that the large connectivity fluctuations usually found in these networks strengthen considerably the incidence of epidemic outbreaks. Scale-free networks, which are characterized by diverging connectivity fluctuations in the limit of a very large number of nodes, exhibit the lack of an epidemic threshold and always show a finite fraction of infected individuals. This particular weakness, observed also in models without immunity, defines a new epidemiological framework characterized by a highly heterogeneous response of the system to the introduction of infected individuals with different connectivity. The understanding of epidemics in complex networks might deliver new insights in the spread of information and diseases in biological and technological networks that often appear to be characterized by complex heterogeneous architectures.
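The vanishing epidemic threshold on scale-free networks follows from the heterogeneous mean-field result that the threshold scales as the ratio of the first to the second moment of the degree distribution; when connectivity fluctuations diverge, the threshold goes to zero. The snippet below evaluates this for a discrete power-law degree distribution (the exponent and cutoffs are illustrative choices).

```python
def epidemic_threshold(gamma, k_min, k_max):
    """Heterogeneous-mean-field epidemic threshold lambda_c = <k> / <k^2>
    for a power-law degree distribution P(k) ~ k^(-gamma), k_min <= k <= k_max,
    computed by direct discrete summation."""
    ks = range(k_min, k_max + 1)
    norm = sum(k ** -gamma for k in ks)
    k1 = sum(k * k ** -gamma for k in ks) / norm      # <k>
    k2 = sum(k * k * k ** -gamma for k in ks) / norm  # <k^2>
    return k1 / k2

# For gamma = 3, <k^2> grows without bound as the degree cutoff increases,
# so the threshold shrinks toward zero for larger and larger networks:
t_small = epidemic_threshold(3.0, 2, 10 ** 2)
t_large = epidemic_threshold(3.0, 2, 10 ** 4)
```

The monotone decrease of the threshold with the cutoff is the finite-size version of the abstract's statement that scale-free networks lack an epidemic threshold in the infinite-size limit.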
The circadian rhythm induced by the heterogeneous network structure of the suprachiasmatic nucleus
NASA Astrophysics Data System (ADS)
Gu, Changgui; Yang, Huijie
2016-05-01
In mammals, the master clock is located in the suprachiasmatic nucleus (SCN), which is composed of about 20 000 nonidentical neuronal oscillators expressing different intrinsic periods. These neurons are coupled through neurotransmitters to form a network consisting of two subgroups, i.e., a ventrolateral (VL) subgroup and a dorsomedial (DM) subgroup. The VL contains about 25% of the SCN neurons, which receive photic input from the retina, and the DM comprises the remaining 75% of the SCN neurons, which are coupled to the VL. The synapses from the VL to the DM are evidently denser than those from the DM to the VL, so that the VL dominates the DM. Therefore, the SCN is a heterogeneous network where the neurons of the VL are linked with a large number of SCN neurons. In the present study, we mimicked the SCN network based on the Goodwin model, considering four types of networks: an all-to-all network, a Newman-Watts (NW) small-world network, an Erdös-Rényi (ER) random network, and a Barabási-Albert (BA) scale-free network. We found that the circadian rhythm was induced in the BA, ER, and NW networks, while it was absent in the all-to-all network with weak cellular coupling; the amplitude of the circadian rhythm is largest in the BA network, which is the most heterogeneous in network structure. Our finding provides an alternative explanation for the induction or enhancement of circadian rhythm by heterogeneity of the network structure.
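The core phenomenon above, nonidentical oscillators producing a coherent collective rhythm only when coupling is effective, can be illustrated with Kuramoto phase oscillators, a standard simplified stand-in for coupled Goodwin oscillators (the Goodwin model itself is a three-variable biochemical ODE; this sketch, with parameters of my own choosing, only captures the synchronization aspect, not the network-topology comparison).

```python
import math, random, cmath

def simulate(K, n=100, dt=0.05, steps=2000, seed=5):
    """All-to-all Kuramoto oscillators with nonidentical intrinsic
    frequencies (sd 0.1) and coupling strength K. Returns the final
    order parameter r: r -> 1 means a coherent collective rhythm."""
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, 0.1) for _ in range(n)]
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        z = sum(cmath.exp(1j * t) for t in theta) / n    # mean field
        r, psi = abs(z), cmath.phase(z)
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / n)

r_weak   = simulate(K=0.01)   # coupling too weak: no collective rhythm
r_strong = simulate(K=2.0)    # strong coupling: coherent rhythm emerges
```

In the SCN study, heterogeneous topologies (BA, ER, NW) make the coupling effective enough for a collective rhythm even when per-link coupling is weak, which is the role strong K plays in this mean-field sketch.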
Brain Performance versus Phase Transitions
NASA Astrophysics Data System (ADS)
Torres, Joaquín J.; Marro, J.
2015-07-01
We here illustrate how a well-founded study of the brain may originate in assuming analogies with phase-transition phenomena. Analyzing to what extent a weak signal endures in noisy environments, we identify the underlying mechanisms, and it results a description of how the excitability associated to (non-equilibrium) phase changes and criticality optimizes the processing of the signal. Our setting is a network of integrate-and-fire nodes in which connections are heterogeneous with rapid time-varying intensities mimicking fatigue and potentiation. Emergence then becomes quite robust against wiring topology modification—in fact, we considered from a fully connected network to the Homo sapiens connectome—showing the essential role of synaptic flickering on computations. We also suggest how to experimentally disclose significant changes during actual brain operation.
Nanoparticle transport and delivery in a heterogeneous pulmonary vasculature.
Sohrabi, Salman; Wang, Shunqiang; Tan, Jifu; Xu, Jiang; Yang, Jie; Liu, Yaling
2017-01-04
Quantitative understanding of nanoparticle delivery in a complex vascular network is very challenging because it involves the interplay of transport, hydrodynamic forces, and multivalent interactions across different scales. The heterogeneous pulmonary network includes up to 16 generations of vessels in its arterial tree, and modeling the complete pulmonary vascular system in 3D is computationally unrealistic. To save computational cost, a model reconstructed from MRI-scanned images is cut into an arbitrary pathway consisting of the upper four generations, and the remaining generations are represented by an artificially rebuilt pathway. Physiological data such as branch information and a connectivity matrix are used for geometry reconstruction. A lumped model captures the flow resistance of the branches that are cut off from the truncated pathway. Moreover, since the nanoparticle binding process is stochastic in nature, a binding probability function is used to simplify the carrier attachment and detachment processes. The stitched realistic and artificial geometries, coupled with the lumped model at the unresolved outlets, are used to resolve the flow field within the truncated arterial tree. The biodistribution of 200 nm, 700 nm, and 2 µm particles at different vessel generations is then studied. Overall, 0.2-0.5% nanocarrier deposition is predicted during a single passage of drug carriers through the pulmonary vascular tree. Our truncated approach enables efficient modeling of hemodynamics, and accordingly particle distribution, in a complex 3D vasculature, providing a simple yet efficient predictive tool to study drug delivery at the organ level.
A Framework for Integration of Heterogeneous Medical Imaging Networks
Viana-Ferreira, Carlos; Ribeiro, Luís S; Costa, Carlos
2014-01-01
Medical imaging is of increasing importance in medical diagnosis and treatment support, largely because computers have revolutionized not only image acquisition but also the way images are visualized, stored, exchanged and managed. Picture Archiving and Communication Systems (PACS) are an example of how medical imaging takes advantage of computers. To solve interoperability problems between PACS and medical imaging equipment, the Digital Imaging and Communications in Medicine (DICOM) standard was defined and widely implemented in current solutions. More recently, the need to exchange medical data between distinct institutions resulted in the Integrating the Healthcare Enterprise (IHE) initiative, which contains a content profile especially conceived for medical imaging exchange: Cross Enterprise Document Sharing for Imaging (XDS-i). Moreover, due to application requirements, many solutions developed private networks to support their services; for instance, some applications support enhanced query and retrieve over DICOM object metadata. This paper proposes an integration framework for medical imaging networks that provides protocol interoperability and data federation services. It is an extensible plugin system that supports standard approaches (DICOM and XDS-i) but is also capable of supporting private protocols. The framework is being used in the Dicoogle Open Source PACS.
BrainFrame: a node-level heterogeneous accelerator platform for neuron simulations
NASA Astrophysics Data System (ADS)
Smaragdos, Georgios; Chatzikonstantis, Georgios; Kukreja, Rahul; Sidiropoulos, Harry; Rodopoulos, Dimitrios; Sourdis, Ioannis; Al-Ars, Zaid; Kachris, Christoforos; Soudris, Dimitrios; De Zeeuw, Chris I.; Strydis, Christos
2017-12-01
Objective. The advent of high-performance computing (HPC) in recent years has led to its increasing use in brain studies through computational models. The scale and complexity of such models are constantly increasing, leading to challenging computational requirements. Even though modern HPC platforms can often deal with such challenges, the vast diversity of the modeling field does not permit a homogeneous acceleration platform to effectively address the complete array of modeling requirements. Approach. In this paper we propose and build BrainFrame, a heterogeneous acceleration platform that incorporates three distinct acceleration technologies, an Intel Xeon-Phi CPU, an NVidia GP-GPU and a Maxeler Dataflow Engine. The PyNN software framework is also integrated into the platform. As a challenging proof of concept, we analyze the performance of BrainFrame on different experiment instances of a state-of-the-art neuron model, representing the inferior-olivary nucleus using a biophysically-meaningful, extended Hodgkin-Huxley representation. The model instances take into account not only the neuronal-network dimensions but also different network-connectivity densities, which can drastically affect the workload’s performance characteristics. Main results. The combined use of different HPC technologies demonstrates that BrainFrame is better able to cope with the modeling diversity encountered in realistic experiments while at the same time running on significantly lower energy budgets. Our performance analysis clearly shows that the model directly affects performance and all three technologies are required to cope with all the model use cases. Significance. The BrainFrame framework is designed to transparently configure and select the appropriate back-end accelerator technology for use per simulation run. The PyNN integration provides a familiar bridge to the vast number of models already available.
Additionally, it gives a clear roadmap for extending the platform support beyond the proof of concept, with improved usability and directly useful features to the computational-neuroscience community, paving the way for wider adoption.
Query-Based Outlier Detection in Heterogeneous Information Networks.
Kuck, Jonathan; Zhuang, Honglei; Yan, Xifeng; Cam, Hasan; Han, Jiawei
2015-03-01
Outlier or anomaly detection in large data sets is a fundamental task in data science, with broad applications. However, in real data sets with high-dimensional space, most outliers are hidden in certain dimensional combinations and are relative to a user's search space and interest. It is often more effective to give power to users and allow them to specify outlier queries flexibly, and the system will then process such mining queries efficiently. In this study, we introduce the concept of query-based outlier in heterogeneous information networks, design a query language to facilitate users to specify such queries flexibly, define a good outlier measure in heterogeneous networks, and study how to process outlier queries efficiently in large data sets. Our experiments on real data sets show that following such a methodology, interesting outliers can be defined and uncovered flexibly and effectively in large heterogeneous networks.
Sensitivity of surface meteorological analyses to observation networks
NASA Astrophysics Data System (ADS)
Tyndall, Daniel Paul
A computationally efficient variational analysis system for two-dimensional meteorological fields is developed and described. This analysis approach is most efficient when the number of analysis grid points is much larger than the number of available observations, such as for large domain mesoscale analyses. The analysis system is developed using MATLAB software and can take advantage of multiple processors or processor cores. A version of the analysis system has been exported as a platform independent application (i.e., can be run on Windows, Linux, or Macintosh OS X desktop computers without a MATLAB license) with input/output operations handled by commonly available internet software combined with data archives at the University of Utah. The impact of observation networks on the meteorological analyses is assessed by utilizing a percentile ranking of individual observation sensitivity and impact, which is computed by using the adjoint of the variational surface assimilation system. This methodology is demonstrated using a case study of the analysis from 1400 UTC 27 October 2010 over the entire contiguous United States domain. The sensitivity of this approach to the dependence of the background error covariance on observation density is examined. Observation sensitivity and impact provide insight on the influence of observations from heterogeneous observing networks as well as serve as objective metrics for quality control procedures that may help to identify stations with significant siting, reporting, or representativeness issues.
Object-oriented Approach to High-level Network Monitoring and Management
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
2000-01-01
An absolute prerequisite for the management of large computer networks is the ability to measure their performance. Unless we monitor a system, we cannot hope to manage and control its performance. In this paper, we describe a network monitoring system that we are currently designing and implementing. Keeping in mind the complexity of the task and the required flexibility for future changes, we use an object-oriented design methodology. The system is built using the APIs offered by the HP OpenView system. We are investigating methods to build high-level monitoring systems on top of existing monitoring tools. Due to the heterogeneous nature of the underlying systems at NASA Langley Research Center, we use an object-oriented approach for the design. First, we use UML (Unified Modeling Language) to model users' requirements. Second, we identify the existing capabilities of the underlying monitoring system. Third, we try to map the former with the latter.
Vital nodes identification in complex networks
NASA Astrophysics Data System (ADS)
Lü, Linyuan; Chen, Duanbing; Ren, Xiao-Long; Zhang, Qian-Ming; Zhang, Yi-Cheng; Zhou, Tao
2016-09-01
Real networks exhibit a heterogeneous nature, with nodes playing far different roles in structure and function. Identifying vital nodes is thus very significant, allowing us to control the outbreak of epidemics, conduct advertising for e-commercial products, predict popular scientific publications, and so on. Vital node identification has attracted increasing attention from both the computer science and physics communities, with algorithms ranging from simply counting the immediate neighbors to complicated machine learning and message-passing approaches. In this review, we clarify the concepts and metrics, classify the problems and methods, review the important progress, and describe the state of the art. Furthermore, we provide extensive empirical analyses to compare well-known methods on disparate real networks, and highlight future directions. In spite of the emphasis on physics-rooted approaches, the unification of the language and comparison with cross-domain methods should trigger interdisciplinary solutions in the near future.
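One classic metric covered by such reviews is the k-shell (coreness) index, which often beats raw degree at spotting influential spreaders. A minimal sketch on a hypothetical toy graph (not an example from the review): the star hub has the highest degree but the lowest coreness.

```python
def k_shell(adj):
    """k-shell decomposition: iteratively peel nodes of lowest remaining degree;
    a node's shell index is a classic 'vital node' indicator."""
    adj = {v: set(nb) for v, nb in adj.items()}   # work on a copy
    shell, k = {}, 0
    while adj:
        k = max(k, min(len(nb) for nb in adj.values()))
        peel = [v for v, nb in adj.items() if len(nb) <= k]
        while peel:
            v = peel.pop()
            if v not in adj:
                continue
            shell[v] = k
            for u in adj.pop(v):                  # remove v and update neighbours
                adj[u].discard(v)
                if len(adj[u]) <= k:
                    peel.append(u)
    return shell

# star (hub 0) glued to a triangle (4, 5, 6): degree says 0 is vital, coreness says the triangle is
graph = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0, 5, 6}, 5: {4, 6}, 6: {4, 5}}
print(k_shell(graph))
```

The triangle nodes end up in shell 2 while the high-degree hub sits in shell 1, illustrating why the review distinguishes structural metrics from simple neighbor counting.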
RAIN: RNA–protein Association and Interaction Networks
Junge, Alexander; Refsgaard, Jan C.; Garde, Christian; Pan, Xiaoyong; Santos, Alberto; Alkan, Ferhat; Anthon, Christian; von Mering, Christian; Workman, Christopher T.; Jensen, Lars Juhl; Gorodkin, Jan
2017-01-01
Protein association networks can be inferred from a range of resources including experimental data, literature mining and computational predictions. These types of evidence are emerging for non-coding RNAs (ncRNAs) as well. However, integration of ncRNAs into protein association networks is challenging due to data heterogeneity. Here, we present a database of ncRNA–RNA and ncRNA–protein interactions and its integration with the STRING database of protein–protein interactions. These ncRNA associations cover four organisms and have been established from curated examples, experimental data, interaction predictions and automatic literature mining. RAIN uses an integrative scoring scheme to assign a confidence score to each interaction. We demonstrate that RAIN outperforms the underlying microRNA-target predictions in inferring ncRNA interactions. RAIN can be operated through an easily accessible web interface and all interaction data can be downloaded. Database URL: http://rth.dk/resources/rain
Advanced information processing system: Authentication protocols for network communication
NASA Technical Reports Server (NTRS)
Harper, Richard E.; Adams, Stuart J.; Babikyan, Carol A.; Butler, Bryan P.; Clark, Anne L.; Lala, Jaynarayan H.
1994-01-01
In safety critical I/O and intercomputer communication networks, reliable message transmission is an important concern. Difficulties of communication and fault identification in networks arise primarily because the sender of a transmission cannot be identified with certainty, an intermediate node can corrupt a message without certainty of detection, and a babbling node cannot be identified and silenced without lengthy diagnosis and reconfiguration. Authentication protocols use digital signature techniques to verify the authenticity of messages with high probability. Such protocols appear to provide an efficient solution to many of these problems. The objective of this program is to develop, demonstrate, and evaluate intercomputer communication architectures which employ authentication. As a context for the evaluation, the authentication protocol-based communication concept was demonstrated under this program by hosting a real-time flight critical guidance, navigation and control algorithm on a distributed, heterogeneous, mixed redundancy system of workstations and embedded fault-tolerant computers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ulissi, Zachary W.; Medford, Andrew J.; Bligaard, Thomas
2017-03-06
Surface reaction networks involving hydrocarbons exhibit enormous complexity, with thousands of species and reactions for all but the very simplest chemistries. We present a framework for optimization under uncertainty for heterogeneous catalysis reaction networks using surrogate models that are trained on the fly. The surrogate model is constructed by teaching a Gaussian process adsorption energies based on group additivity fingerprints, combined with transition-state scaling relations and a simple classifier for determining the rate-limiting step. The surrogate model is iteratively used to predict the most important reaction step to be calculated explicitly with computationally demanding electronic structure theory. Applying these methods to the reaction of syngas on rhodium(111), we identify the most likely reaction mechanism. Lastly, propagating uncertainty throughout this process yields the likelihood that the final mechanism is complete given measurements on only a subset of the entire network and uncertainty in the underlying density functional theory calculations.
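The iterate-and-refine loop described here can be sketched as Gaussian-process regression with an uncertainty-driven acquisition rule. Everything below (the 1-D "fingerprint", the sine stand-in for a DFT energy, the kernel settings) is an illustrative assumption, not the authors' surrogate:

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """Squared-exponential kernel between two sets of fingerprint vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(Xtr, ytr, Xte, noise=1e-6):
    """Standard GP regression posterior mean and variance."""
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    L = np.linalg.cholesky(K)
    Ks = rbf(Xtr, Xte)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, ytr))
    v = np.linalg.solve(L, Ks)
    return Ks.T @ alpha, 1.0 - (v ** 2).sum(axis=0)   # rbf(x, x) = 1

pool = np.linspace(0.0, 5.0, 50)[:, None]   # candidate "reaction fingerprints"
def expensive_calc(X):                      # stand-in for a DFT energy evaluation
    return np.sin(X).ravel()

queried = [0, 49]                           # start from two cheap anchor points
for _ in range(6):                          # active-learning loop
    mean, var = gp_posterior(pool[queried], expensive_calc(pool[queried]), pool)
    nxt = int(np.argmax(var))               # "most important" = most uncertain
    if nxt in queried:
        break
    queried.append(nxt)

mean, _ = gp_posterior(pool[queried], expensive_calc(pool[queried]), pool)
max_err = float(np.abs(mean - expensive_calc(pool)).max())
print(len(queried), max_err)
```

With only a handful of adaptively chosen evaluations, the surrogate tracks the expensive function over the whole pool, which is the economy the abstract is after.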
Data and Network Science for Noisy Heterogeneous Systems
ERIC Educational Resources Information Center
Rider, Andrew Kent
2013-01-01
Data in many growing fields has an underlying network structure that can be taken advantage of. In this dissertation we apply data and network science to problems in the domains of systems biology and healthcare. Data challenges in these fields include noisy, heterogeneous data, and a lack of ground truth. The primary thesis of this work is that…
NASA Astrophysics Data System (ADS)
Tsakiroglou, C. D.; Aggelopoulos, C. A.; Sygouni, V.
2009-04-01
A hierarchical, network-type, dynamic simulator of the immiscible displacement of water by oil in heterogeneous porous media is developed to simulate the rate-controlled displacement of the two fluids at the soil column scale. A cubic network is constructed, in which each node is assigned a permeability chosen randomly from a distribution function; the intensity of heterogeneities is quantified by the width of the permeability distribution function. The capillary pressure at each node is calculated by combining a generalized Leverett J-function with a Corey-type model. Information about the heterogeneity of soils at the pore network scale is obtained by combining mercury intrusion porosimetry (MIP) data with back-scattered scanning electron microscope (BSEM) images [1]. In order to estimate the two-phase flow properties of nodes (relative permeability and capillary pressure functions, permeability distribution function), immiscible and miscible displacement experiments are performed on undisturbed soil columns. The transient responses of the measured variables (pressure drop, fluid saturation averaged over five successive segments, solute concentration averaged over three cross-sections) are fitted with models accounting for the preferential flow paths at the micro- (multi-region model) and macro-scale (multi-flowpath model) that arise from multi-scale heterogeneities [2,3]. When simulating the immiscible displacement of water by oil (drainage) in a large network, at each time step the fluid saturation and pressure of each node are calculated by formulating mass balances at each node, accounting for capillary, viscous and gravity forces, and solving the resulting system of coupled equations. At each iteration of the algorithm, the pressure drop is selected so that the total flow rate of the injected fluid is kept constant.
The dynamic large-scale network simulator is used (1) to examine the sensitivity of the transient responses of the axial distribution of fluid saturation and total pressure drop across the network to the permeability distribution function, spatial correlations of permeability, and capillary number, and (2) to estimate the effective (up-scaled) relative permeability functions at the soil column scale. In an attempt to clarify potential effects of the permeability distribution and spatial permeability correlations on the transient responses of the pressure drop across a soil column, signal analysis with wavelets is performed [4] on experimental and simulated results. The transient variations of the signal energy and frequency of pressure drop fluctuations in the wavelet domain are correlated with macroscopic properties, such as the effective water and oil relative permeabilities of the porous medium, and microscopic properties, such as the variation of the permeability distribution of oil-occupied nodes. Toward the solution of the inverse problem, a general procedure is suggested to identify macro-heterogeneities from the fast analysis of pressure drop signals. References 1. Tsakiroglou, C.D. and M.A. Ioannidis, "Dual porosity modeling of the pore structure and transport properties of a contaminated soil", Eur. J. Soil Sci., 59, 744-761 (2008). 2. Aggelopoulos, C.A., and C.D. Tsakiroglou, "Quantifying the Soil Heterogeneity from Solute Dispersion Experiments", Geoderma, 146, 412-424 (2008). 3. Aggelopoulos, C.A., and C.D. Tsakiroglou, "A multi-flow path approach to model immiscible displacement in undisturbed heterogeneous soil columns", J. Contam. Hydrol., in press (2009). 4. Sygouni, V., C.D. Tsakiroglou, and A.C. Payatakes, "Using wavelets to characterize the wettability of porous materials", Phys. Rev. E, 76, 056304 (2007).
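The node mass balances described above reduce, for a single phase, to a linear Kirchhoff system in the nodal pressures. A minimal sketch (the 4-node network and conductance values are hypothetical, and the two-phase, transient machinery of the simulator is omitted):

```python
import numpy as np

def solve_network_flow(edges, n, inject, outlet):
    """Mass balance at every pore-network node: sum_j g_ij (P_i - P_j) = q_i.
    Outlet nodes are grounded at P = 0; `inject` maps inlet nodes to fixed rates."""
    L = np.zeros((n, n))
    for i, j, g in edges:                # g = hydraulic conductance of throat i-j
        L[i, i] += g; L[j, j] += g
        L[i, j] -= g; L[j, i] -= g
    q = np.zeros(n)
    for i, rate in inject.items():
        q[i] = rate
    for o in outlet:                     # Dirichlet rows: P_o = 0
        L[o, :] = 0.0; L[o, o] = 1.0; q[o] = 0.0
    return np.linalg.solve(L, q)

# 2x2 toy network with heterogeneous conductances: inject at node 0, drain at node 3
edges = [(0, 1, 1.0), (0, 2, 0.1), (1, 3, 1.0), (2, 3, 0.1)]
P = solve_network_flow(edges, 4, inject={0: 1.0}, outlet=[3])
print(P.round(3))
```

The fixed injection rate with an unknown inlet pressure mirrors the simulator's rate-controlled boundary condition: the solver finds whatever pressure drop sustains the prescribed total flow.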
A Component-based Programming Model for Composite, Distributed Applications
NASA Technical Reports Server (NTRS)
Eidson, Thomas M.; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
The nature of scientific programming is evolving to larger, composite applications that are composed of smaller element applications. These composite applications are more frequently being targeted for distributed, heterogeneous networks of computers. They are most likely programmed by a group of developers. Software component technology and computational frameworks are being proposed and developed to meet the programming requirements of these new applications. Historically, programming systems have had a hard time being accepted by the scientific programming community. In this paper, a programming model is outlined that attempts to organize the software component concepts and fundamental programming entities into programming abstractions that will be better understood by the application developers. The programming model is designed to support computational frameworks that manage many of the tedious programming details, but also that allow sufficient programmer control to design an accurate, high-performance application.
Nourani, Esmaeil; Khunjush, Farshad; Durmuş, Saliha
2016-05-24
Pathogenic microorganisms exploit host cellular mechanisms and evade host defense mechanisms through molecular pathogen-host interactions (PHIs). Therefore, comprehensive analysis of these PHI networks should be an initial step in developing effective therapeutics against infectious diseases. Computational prediction of PHI data is in increasing demand because of the scarcity of experimental data. Prediction of protein-protein interactions (PPIs) within PHI systems can be formulated as a classification problem, which requires knowledge of non-interacting protein pairs. This is a restrictive requirement, since we lack datasets that report non-interacting protein pairs. In this study, we formulated the "computational prediction of PHI data" problem using kernel embedding of heterogeneous data. This eliminates the abovementioned requirement and enables us to predict new interactions without randomly labeling protein pairs as non-interacting. Domain-domain associations are used to filter the predicted results, leading to 175 novel PHIs between 170 human proteins and 105 viral proteins. To compare our results with state-of-the-art studies that use a binary classification formulation, we modified our settings to consider the same formulation. Detailed evaluations are conducted, and our results provide improvements of more than 10 percent in accuracy and AUC (area under the receiver operating characteristic curve) in comparison with state-of-the-art methods.
Overload cascading failure on complex networks with heterogeneous load redistribution
NASA Astrophysics Data System (ADS)
Hou, Yueyi; Xing, Xiaoyun; Li, Menghui; Zeng, An; Wang, Yougui
2017-09-01
Many real systems, including the Internet, power grids and financial networks, experience rare but large overload cascading failures triggered by small initial shocks. Many models on complex networks have been developed to investigate this phenomenon. Most of these models are based on a load redistribution process and assume that the load on a failed node shifts to nearby nodes in the network either evenly or according to the load distribution rule before the cascade. Inspired by the fact that real power grids tend to place the excess load on the nodes with high remaining capacities, we study a heterogeneous load redistribution mechanism in a simplified sandpile model in this paper. We find that weak heterogeneity in load redistribution can effectively mitigate the cascade, while strong heterogeneity in load redistribution may even enlarge the size of the final failure. With a parameter θ to control the degree of the redistribution heterogeneity, we identify a rather robust optimum at θ∗ = 1. Finally, we find that θ∗ tends to shift to a larger value if the initial sand distribution is homogeneous.
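The θ-controlled redistribution rule can be sketched as follows (the three-node graph, capacities and loads are hypothetical; the paper's sandpile setting is more elaborate):

```python
def cascade(adj, load, cap, start, theta=1.0):
    """Overload cascade where a failed node's load moves to its live neighbours
    in proportion to (remaining capacity)**theta; theta = 0 is an even split."""
    failed, queue = set(), [start]
    while queue:
        v = queue.pop()
        if v in failed:
            continue
        failed.add(v)
        nbrs = [u for u in adj[v] if u not in failed]
        if not nbrs:
            continue
        w = [max(cap[u] - load[u], 0.0) ** theta for u in nbrs]
        total = sum(w)
        for u, wu in zip(nbrs, w):
            load[u] += load[v] * (wu / total if total > 0 else 1.0 / len(nbrs))
            if load[u] > cap[u]:
                queue.append(u)
        load[v] = 0.0
    return failed

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
cap = {0: 1.0, 1: 5.0, 2: 1.0}
load = {0: 1.5, 1: 1.0, 2: 0.9}
# theta = 1 steers the excess toward node 1's headroom and contains the failure;
# theta = 0 splits it evenly and topples node 2 as well.
print(cascade(adj, dict(load), dict(cap), 0, theta=1.0))
print(cascade(adj, dict(load), dict(cap), 0, theta=0.0))
```

On this toy graph, capacity-weighted redistribution (θ = 1) stops the cascade at the seed node, while the even split (θ = 0) propagates it, mirroring the mitigation effect the abstract reports for weak heterogeneity.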
Le, Duc-Hau; Verbeke, Lieven; Son, Le Hoang; Chu, Dinh-Toi; Pham, Van-Huy
2017-11-14
MicroRNAs (miRNAs) have been shown to play an important role in pathological initiation, progression and maintenance. Because identification in the laboratory of disease-related miRNAs is not straightforward, numerous network-based methods have been developed to predict novel miRNAs in silico. Homogeneous networks (in which every node is a miRNA) based on the targets shared between miRNAs have been widely used to predict their role in disease phenotypes. Although such homogeneous networks can predict potential disease-associated miRNAs, they do not consider the roles of the target genes of the miRNAs. Here, we introduce a novel method based on a heterogeneous network that not only considers miRNAs but also the corresponding target genes in the network model. Instead of constructing homogeneous miRNA networks, we built heterogeneous miRNA networks consisting of both miRNAs and their target genes, using databases of known miRNA-target gene interactions. In addition, as recent studies demonstrated reciprocal regulatory relations between miRNAs and their target genes, we considered these heterogeneous miRNA networks to be undirected, assuming mutual miRNA-target interactions. Next, we introduced a novel method (RWRMTN) operating on these mutual heterogeneous miRNA networks to rank candidate disease-related miRNAs using a random walk with restart (RWR) based algorithm. Using both known disease-associated miRNAs and their target genes as seed nodes, the method can identify additional miRNAs involved in the disease phenotype. Experiments indicated that RWRMTN outperformed two existing state-of-the-art methods: RWRMDA, a network-based method that also uses a RWR on homogeneous (rather than heterogeneous) miRNA networks, and RLSMDA, a machine learning-based method. Interestingly, we could relate this performance gain to the emergence of "disease modules" in the heterogeneous miRNA networks used as input for the algorithm. 
Moreover, we could demonstrate that RWRMTN is stable, performing well when using both experimentally validated and predicted miRNA-target gene interaction data for network construction. Finally, using RWRMTN, we identified 76 novel miRNAs associated with 23 disease phenotypes which were present in a recent database of known disease-miRNA associations. Summarizing, using random walks on mutual miRNA-target networks improves the prediction of novel disease-associated miRNAs because of the existence of "disease modules" in these networks.
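The core of the RWRMTN ranking step described above is a random walk with restart on the mutual miRNA-target network. The following is a minimal illustrative sketch; the toy adjacency matrix, the restart probability of 0.7, and the equal-weight seed vector are assumptions for illustration, not parameters taken from the paper:

```python
import numpy as np

def rwr(adj, seeds, restart=0.7, tol=1e-10, max_iter=1000):
    """Random walk with restart on an undirected (mutual) network.

    adj   : symmetric adjacency matrix (miRNAs and target genes as nodes)
    seeds : indices of known disease-associated miRNAs/genes
    Returns the stationary visiting probability of every node.
    """
    W = adj / adj.sum(axis=0, keepdims=True)    # column-normalize: transition matrix
    p0 = np.zeros(adj.shape[0])
    p0[list(seeds)] = 1.0 / len(seeds)          # restart vector over seed nodes
    p = p0.copy()
    for _ in range(max_iter):
        p_next = (1 - restart) * W @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p

# Toy heterogeneous network: nodes 0-1 are miRNAs, 2-4 their target genes
A = np.array([[0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1],
              [1, 0, 0, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 1, 0, 0, 0]], dtype=float)
scores = rwr(A, seeds=[0, 2])    # seed: one known miRNA and one known target gene
ranking = np.argsort(-scores)    # candidate nodes ranked by visiting probability
```

Unseeded miRNAs reachable from the seeds through shared target genes accumulate probability mass, which is how additional disease-associated miRNAs are ranked.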
Cellular network entropy as the energy potential in Waddington's differentiation landscape
Banerji, Christopher R. S.; Miranda-Saavedra, Diego; Severini, Simone; Widschwendter, Martin; Enver, Tariq; Zhou, Joseph X.; Teschendorff, Andrew E.
2013-01-01
Differentiation is a key cellular process in normal tissue development that is significantly altered in cancer. Although molecular signatures characterising pluripotency and multipotency exist, there is, as yet, no single quantitative mark of a cellular sample's position in the global differentiation hierarchy. Here we adopt a systems view and consider the sample's network entropy, a measure of signaling pathway promiscuity, computable from a sample's genome-wide expression profile. We demonstrate that network entropy provides a quantitative, in-silico, readout of the average undifferentiated state of the profiled cells, recapitulating the known hierarchy of pluripotent, multipotent and differentiated cell types. Network entropy further exhibits dynamic changes in time course differentiation data, and in line with a sample's differentiation stage. In disease, network entropy predicts a higher level of cellular plasticity in cancer stem cell populations compared to ordinary cancer cells. Importantly, network entropy also allows identification of key differentiation pathways. Our results are consistent with the view that pluripotency is a statistical property defined at the cellular population level, correlating with intra-sample heterogeneity, and driven by the degree of signaling promiscuity in cells. In summary, network entropy provides a quantitative measure of a cell's undifferentiated state, defining its elevation in Waddington's landscape. PMID:24154593
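The network entropy idea described above can be sketched as the average Shannon entropy of an expression-weighted random walk on an interaction network: promiscuous, evenly spread signaling gives high entropy, concentrated signaling gives low entropy. The weighting scheme and degree normalization below are simplified assumptions for illustration, not the published construction:

```python
import numpy as np

def network_entropy(adj, expr):
    """Average local signaling entropy of a sample.

    adj  : binary interaction (e.g. PPI) adjacency matrix
    expr : expression value per node, weighting each neighbour
    """
    W = adj * expr[None, :]                  # weight neighbours by their expression
    P = W / W.sum(axis=1, keepdims=True)     # row-stochastic signaling probabilities
    with np.errstate(divide='ignore', invalid='ignore'):
        logP = np.where(P > 0, np.log(P), 0.0)
    local = -(P * logP).sum(axis=1)          # Shannon entropy of each node's signaling
    degrees = adj.sum(axis=1)
    norm = np.where(degrees > 1, np.log(degrees), 1.0)
    return (local / norm).mean()             # normalized to [0, 1]

# 4-node ring: uniform expression = maximally promiscuous signaling
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]], dtype=float)
h_uniform = network_entropy(ring, np.ones(4))
h_skewed = network_entropy(ring, np.array([1.0, 10.0, 1.0, 1.0]))
```

Skewing the expression profile concentrates the signaling flux and lowers the entropy, which is the direction of the differentiation readout the abstract describes.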
Nagatani, Takashi; Ichinose, Genki; Tainaka, Kei-Ichi
2018-05-04
Understanding mechanisms of biodiversity has been a central question in ecology. The coexistence of three species in rock-paper-scissors (RPS) systems is discussed by many authors; however, the relation between coexistence and network structure is rarely discussed. Here we present a metapopulation model for the RPS game. The total population is assumed to consist of three subpopulations (nodes). Each individual migrates by random walk; the destination of migration is randomly determined. From reaction-migration equations, we obtain the population dynamics. It is found that the dynamics depend strongly on the network structure. When a network is homogeneous, the dynamics are neutrally stable: each node has a periodic solution, and the oscillations synchronize across all nodes. However, when a network is heterogeneous, the dynamics approach a stable focus and all nodes reach equilibria with different densities. Hence, the heterogeneity of the network promotes biodiversity.
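The reaction-migration dynamics described above can be sketched with a forward-Euler integration of cyclic RPS reactions plus random-walk migration between nodes. The reaction term, migration kernel, and all parameter values below are illustrative assumptions, not the authors' exact equations:

```python
import numpy as np

def rps_metapopulation(adj, steps=2000, dt=0.01, mu=0.2):
    """Reaction-migration dynamics of a rock-paper-scissors metapopulation.

    adj : adjacency matrix of the subpopulation network (nodes = subpopulations)
    mu  : migration rate; individuals random-walk to neighbouring nodes
    Each node holds densities (rock, paper, scissors) evolving under cyclic
    dominance plus diffusive migration.
    """
    n = adj.shape[0]
    x = np.full((n, 3), 1.0 / 3.0)
    x[0] = [0.40, 0.33, 0.27]                  # perturb one node off the fixed point
    M = adj / adj.sum(axis=1, keepdims=True)   # random-walk migration kernel
    for _ in range(steps):
        r, p, s = x[:, 0], x[:, 1], x[:, 2]
        # rock beats scissors, scissors beats paper, paper beats rock
        react = np.stack([r * (s - p), p * (r - s), s * (p - r)], axis=1)
        x = x + dt * (react + mu * (M @ x - x))
    return x

# Homogeneous network: fully connected triangle of three subpopulations
x_final = rps_metapopulation(np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float))
```

Replacing the triangle with a heterogeneous topology (e.g. a star) lets one compare node-wise equilibria against the synchronized oscillations of the homogeneous case.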
Capacity of Heterogeneous Mobile Wireless Networks with D-Delay Transmission Strategy.
Wu, Feng; Zhu, Jiang; Xi, Zhipeng; Gao, Kai
2016-03-25
This paper investigates the capacity problem of heterogeneous wireless networks in mobility scenarios. A heterogeneous network model which consists of n normal nodes and m helping nodes is proposed. Moreover, we propose a D-delay transmission strategy to ensure that every packet can be delivered to its destination nodes with limited delay. Different from most existing network schemes, our network model has a novel two-tier architecture. The existence of helping nodes greatly improves the network capacity. Four types of mobile networks are studied in this paper: the i.i.d. fast mobility model and the slow mobility model in two-dimensional space, and the i.i.d. fast mobility model and the slow mobility model in three-dimensional space. Using the virtual channel model, we present an intuitive analysis of the capacity of two-dimensional and three-dimensional mobile networks, respectively. Given a delay constraint D, we derive the asymptotic expressions for the capacity of the four types of mobile networks. Furthermore, the impact of D and m on the capacity of the whole network is analyzed. Our findings provide guidance for the design of the next generation of networks.
Average is Boring: How Similarity Kills a Meme's Success
NASA Astrophysics Data System (ADS)
Coscia, Michele
2014-09-01
Every day we are exposed to different ideas, or memes, competing with each other for our attention. Previous research explained the popularity and persistence heterogeneity of memes by assuming that they compete for limited attention resources, distributed in a heterogeneous social network. Little has been said about what characteristics make a specific meme more likely to be successful. We propose a similarity-based explanation: memes with higher similarity to other memes have a significant disadvantage in their potential popularity. We employ a meme similarity measure based on semantic text analysis and computer vision to show that a meme is more likely to be successful and to thrive if its characteristics make it unique. Our results show that successful memes are indeed located in the periphery of the meme similarity space and that our similarity measure is a promising predictor of a meme's success.
NASA Astrophysics Data System (ADS)
Frampton, A.; Hyman, J.; Zou, L.
2017-12-01
Analysing flow and transport in sparsely fractured media is important for understanding how crystalline bedrock environments function as barriers to transport of contaminants, with important applications towards subsurface repositories for storage of spent nuclear fuel. Crystalline bedrocks are particularly favourable due to their geological stability, low advective flow and strong hydrogeochemical retention properties, which can delay transport of radionuclides, allowing decay to limit release to the biosphere. There are, however, many challenges involved in quantifying and modelling subsurface flow and transport in fractured media, largely due to geological complexity and heterogeneity, where the interplay between advective and dispersive flow strongly impacts both inert and reactive transport. Modelling transport in a Lagrangian framework requires quantifying pathway travel times and the hydrodynamic control of retention, and both of these quantities strongly depend on the heterogeneity of the fracture network at different scales. In this contribution, we present recent analysis of flow and transport considering fracture networks with single-fracture heterogeneity described by different multivariate normal distributions. A coherent triad of fields with identical correlation length and variance is created, but the fields greatly differ in structure, corresponding to textures with well-connected low, medium and high permeability structures. Through numerical modelling of multiple scales in a stochastic setting, we quantify the relative impact of texture type and correlation length against network topological measures, and identify key thresholds for cases where flow dispersion is controlled by single-fracture heterogeneity versus network-scale heterogeneity. This is achieved using a recently developed numerical discrete fracture network model.
Furthermore, we highlight enhanced flow channelling for cases where correlation structure continues across intersections in a network, and discuss application to realistic fracture networks using field data of sparsely fractured crystalline rock from the Swedish candidate repository site for spent nuclear fuel.
Two-way communication with neural networks in vivo using focused light
Wilson, Nathan R.; Schummers, James; Runyan, Caroline A.; Yan, Sherry; Chen, Robert F.; Deng, Yuting; Sur, Mriganka
2014-01-01
Neuronal networks process information in a distributed, spatially heterogeneous fashion that transcends the layout of electrodes. In contrast, directed and steerable light offers the potential to engage specific cells on demand. We present a unified framework for adapting microscopes to use light for simultaneous in vivo stimulation and recording of cells at fine spatiotemporal resolutions. We utilize straightforward optics to lock onto networks in vivo, steer light to activate circuit elements, and simultaneously record from other cells. We then actualize this “free” augmentation on both an “open” two-photon microscope, and a leading commercial one. Following this protocol, setup of the system takes a few days and the result is a non-invasive interface to brain dynamics based on directed light, at a network resolution that was not previously possible and which will further improve with the rapid advance in development of optical reporters and effectors. This protocol is for physiologists who are competent with computers and wish to extend hardware and software to interface more fluidly with neuronal networks. PMID:23702834
Clinical results of HIS, RIS, PACS integration using data integration CASE tools
NASA Astrophysics Data System (ADS)
Taira, Ricky K.; Chan, Hing-Ming; Breant, Claudine M.; Huang, Lu J.; Valentino, Daniel J.
1995-05-01
Current infrastructure research in PACS is dominated by the development of communication networks (local area networks, teleradiology, ATM networks, etc.), multimedia display workstations, and hierarchical image storage architectures. However, limited work has been performed on developing flexible, expansible, and intelligent information processing architectures for the vast decentralized image and text data repositories prevalent in healthcare environments. Patient information is often distributed among multiple data management systems. Current large-scale efforts to integrate medical information and knowledge sources have been costly with limited retrieval functionality. Software integration strategies to unify distributed data and knowledge sources are still lacking commercially. Systems heterogeneity (i.e., differences in hardware platforms, communication protocols, database management software, nomenclature, etc.) is at the heart of the problem and is unlikely to be standardized in the near future. In this paper, we demonstrate the use of newly available CASE (computer-aided software engineering) tools to rapidly integrate HIS, RIS, and PACS information systems. The advantages of these tools include fast development time (low-level code is generated from graphical specifications) and easy system maintenance (excellent documentation, easy to perform changes, and a centralized code repository in an object-oriented database). The CASE tools are used to develop and manage the 'middleware' in our client-mediator-server architecture for systems integration. Our architecture is scalable and can accommodate heterogeneous databases and communication protocols.
Mean-field models for heterogeneous networks of two-dimensional integrate and fire neurons.
Nicola, Wilten; Campbell, Sue Ann
2013-01-01
We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons.
An Outline of Data Aggregation Security in Heterogeneous Wireless Sensor Networks.
Boubiche, Sabrina; Boubiche, Djallel Eddine; Bilami, Azzedine; Toral-Cruz, Homero
2016-04-12
Data aggregation processes aim to reduce the amount of exchanged data in wireless sensor networks and consequently minimize the packet overhead and optimize energy efficiency. Securing the data aggregation process is a real challenge since the aggregation nodes must access the relayed data to apply the aggregation functions. The data aggregation security problem has been widely addressed in classical homogeneous wireless sensor networks; however, most of the proposed security protocols cannot guarantee a high level of security since the sensor node resources are limited. Heterogeneous wireless sensor networks have recently emerged as a new wireless sensor network category which expands the sensor nodes' resources and capabilities. These new kinds of WSNs have opened new research opportunities where security represents a particularly attractive area. Indeed, robust and high-security-level algorithms can be used to secure the data aggregation at the heterogeneous aggregation nodes, which is impossible in classical homogeneous WSNs. In contrast to the homogeneous case, the data aggregation security problem in heterogeneous sensor networks is still not sufficiently covered and the proposed data aggregation security protocols remain few. To address this recent research area, this paper describes the data aggregation security problem in heterogeneous wireless sensor networks and surveys a few proposed security protocols. A classification and evaluation of the existing protocols is also introduced based on the adopted data aggregation security approach.
NASA Astrophysics Data System (ADS)
Mavelli, Fabio; Ruiz-Mirazo, Kepa
2010-09-01
'ENVIRONMENT' is a computational platform that has been developed over the last few years with the aim of stochastically simulating the dynamics and stability of chemically reacting protocellular systems. Here we present and describe some of its main features, showing how the stochastic kinetics approach can be applied to study the time evolution of reaction networks in heterogeneous conditions, particularly when supramolecular lipid structures (micelles, vesicles, etc.) coexist with aqueous domains. These conditions are of special relevance for understanding the origins of cellular, self-reproducing compartments in the context of prebiotic chemistry and evolution. We contrast our simulation results with real lab experiments, with the aim of bringing together theoretical and experimental research on protocell and minimal artificial cell systems.
Economic networks: Heterogeneity-induced vulnerability and loss of synchronization
NASA Astrophysics Data System (ADS)
Colon, Célian; Ghil, Michael
2017-12-01
Interconnected systems are prone to propagation of disturbances, which can undermine their resilience to external perturbations. Propagation dynamics can clearly be affected by potential time delays in the underlying processes. We investigate how such delays influence the resilience of production networks facing disruption of supply. Interdependencies between economic agents are modeled using systems of Boolean delay equations (BDEs); doing so allows us to introduce heterogeneity in production delays and in inventories. Complex network topologies are considered that reproduce realistic economic features, including a network of networks. Perturbations that would otherwise vanish can, because of delay heterogeneity, amplify and lead to permanent disruptions. This phenomenon is enabled by the interactions between short cyclic structures. Difference in delays between two interacting, and otherwise resilient, structures can in turn lead to loss of synchronization in damage propagation and thus prevent recovery. Finally, this study also shows that BDEs on complex networks can lead to metastable relaxation oscillations, which are damped out in one part of a network while moving on to another part.
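The Boolean-delay-equation setup described above can be sketched in discretized time: each agent produces at time t only if its suppliers produced at the appropriate delayed times. The AND logic, the specific delay values, and the disruption window below are illustrative assumptions, not the authors' model:

```python
import numpy as np

def simulate_bde(suppliers, delays, disrupted, t_max=60):
    """Discrete-time sketch of Boolean delay equations on a supply network.

    suppliers : suppliers[i] = list of agents that agent i buys from
    delays    : delays[i][k] = production delay on the link from suppliers[i][k]
    disrupted : node whose output is forced off for the first few time steps
    Agent i produces at t iff ALL its suppliers produced at t - delay.
    """
    n = len(suppliers)
    horizon = max(max(d) for d in delays if d) + 1
    state = np.ones((t_max + horizon, n), dtype=bool)   # history: all producing
    for t in range(horizon, t_max + horizon):
        for i in range(n):
            state[t, i] = all(state[t - d, j]
                              for j, d in zip(suppliers[i], delays[i]))
        if 1 <= t - horizon <= 5:
            state[t, disrupted] = False                 # external supply disruption
    return state[horizon:]

# Cyclic production network with heterogeneous delays: 0 -> 1 -> 2 -> 0
hist = simulate_bde(suppliers=[[2], [0], [1]],
                    delays=[[1], [2], [1]],
                    disrupted=0)
```

In this toy cycle the short disruption keeps circulating and widening as it travels around the loop, so production never recovers, a caricature of the delay-induced permanent disruptions discussed in the abstract.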
Network meta-analysis, electrical networks and graph theory.
Rücker, Gerta
2012-12-01
Network meta-analysis is an active field of research in clinical biostatistics. It aims to combine information from all randomized comparisons among a set of treatments for a given medical condition. We show how graph-theoretical methods can be applied to network meta-analysis. A meta-analytic graph consists of vertices (treatments) and edges (randomized comparisons). We illustrate the correspondence between meta-analytic networks and electrical networks, where variance corresponds to resistance, treatment effects to voltage, and weighted treatment effects to current flows. Based thereon, we then show that graph-theoretical methods that have been routinely applied to electrical networks also work well in network meta-analysis. In more detail, the resulting consistent treatment effects induced in the edges can be estimated via the Moore-Penrose pseudoinverse of the Laplacian matrix. Moreover, the variances of the treatment effects are estimated in analogy to electrical effective resistances. It is shown that this method, being computationally simple, leads to the usual fixed effect model estimate when applied to pairwise meta-analysis and is consistent with published results when applied to network meta-analysis examples from the literature. Moreover, problems of heterogeneity and inconsistency, random effects modeling and including multi-armed trials are addressed. Copyright © 2012 John Wiley & Sons, Ltd. Copyright © 2012 John Wiley & Sons, Ltd.
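The electrical-network correspondence described above can be made concrete in a few lines: the variance of a network treatment-effect estimate equals an effective resistance computed from the Moore-Penrose pseudoinverse of the graph Laplacian. A minimal sketch for a hypothetical three-treatment triangle with unit edge weights (weight = inverse variance of each pairwise comparison):

```python
import numpy as np

# Three treatments A, B, C; one randomized comparison per pair, unit weight each
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)   # weighted adjacency of the meta-analytic graph
L = np.diag(A.sum(axis=1)) - A           # graph Laplacian
Lplus = np.linalg.pinv(L)                # Moore-Penrose pseudoinverse

def effective_resistance(i, j):
    """Variance of the consistent treatment effect i vs j (= effective resistance)."""
    return Lplus[i, i] + Lplus[j, j] - 2 * Lplus[i, j]

var_AB = effective_resistance(0, 1)      # direct + indirect evidence combined
```

For unit resistances, the direct 1-ohm edge in parallel with the indirect 2-ohm path through C gives 2/3: combining direct and indirect evidence reduces the variance below that of the direct comparison alone.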
Sinha, Shriprakash
2016-12-01
Simulation studies in systems biology involving computational experiments dealing with Wnt signaling pathways abound in the literature but often lack a pedagogical perspective that might ease the understanding of beginner students and researchers in transition who intend to work on the modeling of the pathway. This paucity might happen due to restrictive business policies which enforce an unwanted embargo on the sharing of important scientific knowledge. A tutorial introduction to computational modeling of the Wnt signaling pathway in a human colorectal cancer dataset using static Bayesian network models is provided. The walkthrough might aid biologists/informaticians in understanding the design of computational experiments and is interleaved with exposition of the Matlab code and causal models from the Bayesian network toolbox. The manuscript elucidates the coding contents of the advance article by Sinha (Integr. Biol. 6:1034-1048, 2014) and takes the reader in a step-by-step process of how (a) the collection and the transformation of the available biological information from literature is done, (b) the integration of the heterogeneous data and prior biological knowledge in the network is achieved, (c) the simulation study is designed, (d) the hypothesis regarding a biological phenomenon is transformed into a computational framework, and (e) results and inferences drawn using d-connectivity/separability are reported. The manuscript finally ends with a programming assignment to help the readers get hands-on experience of a perturbation project. Description of Matlab files is made available under the GNU GPL v3 license at the Google Code project on https://code.google.com/p/static-bn-for-wnt-signaling-pathway and https://sites.google.com/site/shriprakashsinha/shriprakashsinha/projects/static-bn-for-wnt-signaling-pathway. Latest updates can be found on the latter website.
Systematic review of computational methods for identifying miRNA-mediated RNA-RNA crosstalk.
Li, Yongsheng; Jin, Xiyun; Wang, Zishan; Li, Lili; Chen, Hong; Lin, Xiaoyu; Yi, Song; Zhang, Yunpeng; Xu, Juan
2017-10-25
Posttranscriptional crosstalk and communication between RNAs yield large regulatory competing endogenous RNA (ceRNA) networks via shared microRNAs (miRNAs), as well as miRNA synergistic networks. The ceRNA crosstalk represents a novel layer of gene regulation that controls both physiological and pathological processes such as development and complex diseases. The rapidly expanding catalogue of ceRNA regulation has provided evidence for exploitation as a general model to predict the ceRNAs in silico. In this article, we first reviewed the current progress of RNA-RNA crosstalk in human complex diseases. Then, the widely used computational methods for modeling ceRNA-ceRNA interaction networks are further summarized into five types: two types of global ceRNA regulation prediction methods and three types of context-specific prediction methods, which are based on miRNA-messenger RNA regulation alone, or by integrating heterogeneous data, respectively. To provide guidance in the computational prediction of ceRNA-ceRNA interactions, we finally performed a comparative study of different combinations of miRNA-target methods as well as five types of ceRNA identification methods by using literature-curated ceRNA regulation and gene perturbation. The results revealed that integration of different miRNA-target prediction methods and context-specific miRNA/gene expression profiles increased the performance for identifying ceRNA regulation. Moreover, different computational methods were complementary in identifying ceRNA regulation and captured different functional parts of similar pathways. We believe that the application of these computational techniques provides valuable functional insights into ceRNA regulation and is a crucial step for informing subsequent functional validation studies. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
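A common building block of the global ceRNA prediction methods surveyed above is a hypergeometric test on the miRNAs shared by two RNAs: a larger-than-chance overlap suggests ceRNA crosstalk. A minimal sketch, with miRNA names and counts made up purely for illustration:

```python
from math import comb

def shared_mirna_pvalue(mirnas_a, mirnas_b, all_mirnas):
    """Hypergeometric test for ceRNA crosstalk via shared miRNAs.

    mirnas_a/b : sets of miRNAs regulating RNA a and RNA b
    all_mirnas : total number of miRNAs considered
    Returns P(overlap >= observed) under random regulation.
    """
    K, n = len(mirnas_a), len(mirnas_b)
    k = len(mirnas_a & mirnas_b)            # observed shared miRNAs
    denom = comb(all_mirnas, n)
    # survival function of the hypergeometric distribution
    return sum(comb(K, x) * comb(all_mirnas - K, n - x)
               for x in range(k, min(K, n) + 1)) / denom

# Hypothetical example: 2 of 3 regulators shared, out of a 20-miRNA universe
p = shared_mirna_pvalue({'miR-21', 'miR-155', 'miR-34a'},
                        {'miR-21', 'miR-155', 'miR-200b'}, all_mirnas=20)
```

Context-specific methods then typically add a co-expression filter on top of this overlap test, which is one reason the abstract finds that integrating expression profiles improves performance.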
Epidemic transmission on random mobile network with diverse infection periods
NASA Astrophysics Data System (ADS)
Li, Kezan; Yu, Hong; Zeng, Zhaorong; Ding, Yong; Ma, Zhongjun
2015-05-01
The heterogeneity of individual susceptibility and infectivity and time-varying topological structure are two realistic factors when we study epidemics on complex networks. Current research results have shown that the heterogeneity of individual susceptibility and infectivity can increase the epidemic threshold in a random mobile dynamical network with the same infection period. In this paper, we focus on random mobile dynamical networks with diverse infection periods due to people's different constitutions and external circumstances. Theoretical results indicate that the epidemic threshold of the random mobile network with diverse infection periods is larger than that of the counterpart with the same infection period. Moreover, the heterogeneity of individual susceptibility and infectivity can have a significant impact on disease transmission. In particular, homogeneity among individuals favors the spreading of epidemics. Numerical examples further confirm our theoretical results.
Le, Duc-Hau; Pham, Van-Huy
2017-06-15
Finding gene-disease and disease-disease associations plays an important role in the biomedical area and many prioritization methods have been proposed for this goal. Among them, approaches based on a heterogeneous network of genes and diseases are considered state-of-the-art ones, which achieve high prediction performance and can be used for diseases with or without a known molecular basis. Here, we developed a Cytoscape app, namely HGPEC, based on a random walk with restart algorithm on a heterogeneous network of genes and diseases. This app can prioritize candidate genes and diseases by employing a heterogeneous network consisting of a network of genes/proteins and a phenotypic disease similarity network. Based on the rankings, novel disease-gene and disease-disease associations can be identified. These associations can be supported with network- and rank-based visualization as well as evidence and annotations from biomedical data. A case study on the prediction of novel breast cancer-associated genes and diseases shows the abilities of HGPEC. In addition, we showed that HGPEC outperforms other tools for prioritization of candidate disease genes. Taken together, our app is expected to effectively predict novel disease-gene and disease-disease associations and to support network- and rank-based visualization as well as biomedical evidence for such associations.
NASA Astrophysics Data System (ADS)
Pham, Ngoc; Papavassiliou, Dimitrios
2014-03-01
In this study, the transport behavior of nanoparticles under different pore surface conditions of consolidated Berea sandstone is numerically investigated. A micro-CT scanning technique is applied to obtain 3D grayscale images of the rock sample geometry. Quantitative characterization based on image analysis is performed to obtain physical properties of the pore network, such as the pore size distribution and the type of each pore (dead-end, isolated, and fully connected pore). Transport of water through the rock is simulated by employing a 3D lattice Boltzmann method. The trajectories of nanoparticles moving under convection in the simulated flow field and due to molecular diffusion are monitored in the Lagrangian framework. It is assumed in the model that particle adsorption on the pore surface, which is modeled as pseudo-first-order adsorption, is the only factor hindering particle propagation. The effect of pore surface heterogeneity on particle breakthrough is considered, and the role of particle radial diffusion is also addressed in detail. The financial support of the Advanced Energy Consortium (AEC BEG08-022) and the computational support of XSEDE (CTS090017) are acknowledged.
Self-assembly programming of DNA polyominoes.
Ong, Hui San; Syafiq-Rahim, Mohd; Kasim, Noor Hayaty Abu; Firdaus-Raih, Mohd; Ramlan, Effirul Ikhwan
2016-10-20
Fabrication of functional DNA nanostructures operating at a cellular level has been accomplished through molecular programming techniques such as DNA origami and single-stranded tiles (SST). During implementation, restrictive and constraint-dependent designs are enforced to ensure conformity is attainable. We propose a concept of DNA polyominoes that promotes flexibility in molecular programming. The fabrication of complex structures is achieved through self-assembly of distinct heterogeneous shapes (i.e., self-organised optimisation among competing DNA basic shapes) with total flexibility during the design and assembly phases. In this study, the plausibility of the approach is validated using the formation of multiple 3×4 DNA networks fabricated from five basic DNA shapes with distinct configurations (monomino, tromino and tetrominoes). Computational tools to aid the design of compatible DNA shapes and the assessment of structure assembly are presented. The formation of the desired structures was validated using Atomic Force Microscopy (AFM) imagery. Five 3×4 DNA networks were successfully constructed using combinatorics of these five distinct heterogeneous DNA shapes. Our findings revealed that the construction of DNA supra-structures could be achieved using a more natural-like orchestration as compared to the rigid and restrictive conventional approaches adopted previously. Copyright © 2016 Elsevier B.V. All rights reserved.
A Distributed Transmission Rate Adjustment Algorithm in Heterogeneous CSMA/CA Networks
Xie, Shuanglong; Low, Kay Soon; Gunawan, Erry
2015-01-01
Distributed transmission rate tuning is important for a wide variety of IEEE 802.15.4 network applications such as industrial network control systems. Such systems often require each node to sustain certain throughput demand in order to guarantee the system performance. It is thus essential to determine a proper transmission rate that can meet the application requirement and compensate for network imperfections (e.g., packet loss). Such a tuning in a heterogeneous network is difficult due to the lack of modeling techniques that can deal with the heterogeneity of the network as well as the network traffic changes. In this paper, a distributed transmission rate tuning algorithm in a heterogeneous IEEE 802.15.4 CSMA/CA network is proposed. Each node uses the results of clear channel assessment (CCA) to estimate the busy channel probability. Then a mathematical framework is developed to estimate the on-going heterogeneous traffics using the busy channel probability at runtime. Finally a distributed algorithm is derived to tune the transmission rate of each node to accurately meet the throughput requirement. The algorithm does not require modifications on IEEE 802.15.4 MAC layer and it has been experimentally implemented and extensively tested using TelosB nodes with the TinyOS protocol stack. The results reveal that the algorithm is accurate and can satisfy the throughput demand. Compared with existing techniques, the algorithm is fully distributed and thus does not require any central coordination. With this property, it is able to adapt to traffic changes and re-adjust the transmission rate to the desired level, which cannot be achieved using the traditional modeling techniques. PMID:25822140
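The rate-compensation idea described above can be illustrated with a toy calculation: estimate the busy-channel probability from recent CCA outcomes, convert it into a packet-loss probability, and inflate the transmission rate so that the delivered throughput still meets the demand. The drop model below (a packet is lost when every backoff attempt sees a busy channel) is a simplifying assumption, not the paper's full mathematical framework:

```python
def tune_rate(cca_samples, target_throughput, max_backoffs=4):
    """Illustrative CCA-based transmission rate compensation for one node.

    cca_samples       : recent clear-channel-assessment results (True = busy)
    target_throughput : packets/s the application must actually receive
    """
    p_busy = sum(cca_samples) / len(cca_samples)   # busy channel probability
    p_drop = p_busy ** max_backoffs                # all CCA attempts find channel busy
    p_deliver = 1.0 - p_drop
    return target_throughput / p_deliver           # compensated transmission rate

# Channel busy 75% of the time; demand is 10 packets/s delivered
rate = tune_rate([True, False, True, True] * 25, target_throughput=10.0)
```

Because each node needs only its own CCA history, the tuning is fully distributed: no central coordinator is required, and re-running the estimate adapts the rate when the surrounding heterogeneous traffic changes.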
Praveen, Paurush; Fröhlich, Holger
2013-01-01
Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal to noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, like pathway databases, GO terms and protein domain data, etc. and is flexible enough to integrate new sources, if available. PMID:23826291
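The Noisy-OR model described above has a standard closed form: an interaction is absent only if every information source independently fails to support it. A minimal sketch (the per-source reliability weighting is an illustrative assumption; the paper's exact parameterization may differ):

```python
from math import prod

def noisy_or_prior(supports, reliabilities):
    """Noisy-OR combination of evidence for one candidate interaction.
    supports[k] is source k's confidence in the interaction (0..1) and
    reliabilities[k] the trust placed in source k. The prior is
    1 - P(every source fails to support the edge)."""
    assert len(supports) == len(reliabilities)
    return 1.0 - prod(1.0 - q * s for s, q in zip(supports, reliabilities))
```

For example, two sources each giving full support at reliability 0.5 yield a prior of 0.75, stronger than either source alone, which is the "picks up the strongest support" behavior the abstract describes.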
Highly dynamic animal contact network and implications on disease transmission
Chen, Shi; White, Brad J.; Sanderson, Michael W.; Amrine, David E.; Ilany, Amiyaal; Lanzas, Cristina
2014-01-01
Contact patterns among hosts are considered one of the most critical factors contributing to unequal pathogen transmission. Consequently, networks have been widely applied in infectious disease modeling. However, most studies assume a static network structure due to a lack of accurate observations and appropriate analytic tools. In this study we used high temporal and spatial resolution animal position data to construct a high-resolution contact network relevant to infectious disease transmission. The animal contact network aggregated at the hourly level was highly variable and dynamic within and between days, both in network structure (network degree distribution) and in individual rank within the degree distribution (degree order). We integrated network degree distribution and degree order heterogeneities with a commonly used contact-based, directly transmitted disease model to quantify the effect of these two sources of heterogeneity on infectious disease dynamics. Four conditions were simulated based on the combination of these two heterogeneities. Simulation results indicated that disease dynamics and individual contributions to new infections varied substantially among these four conditions under both parameter settings. Changes in the contact network had a greater effect on disease dynamics for pathogens with a smaller basic reproduction number (i.e. R0 < 2). PMID:24667241
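The "contact-based, directly transmitted disease model" is not specified in the abstract; a generic stochastic SIR process run over a sequence of hourly contact networks, sketched below, illustrates the idea of letting network structure change between time steps. All parameter names and the update order are assumptions.

```python
import random

def sir_on_temporal_network(hourly_edges, beta, gamma, initially_infected, seed=0):
    """Stochastic SIR over a sequence of hourly contact networks.
    hourly_edges: one undirected edge list per hour; beta: per-contact
    transmission probability; gamma: per-hour recovery probability."""
    rng = random.Random(seed)
    infected = set(initially_infected)
    recovered = set()
    for edges in hourly_edges:
        new_infections = set()
        for u, v in edges:
            # a contact can transmit in either direction
            for src, dst in ((u, v), (v, u)):
                if src in infected and dst not in infected and dst not in recovered:
                    if rng.random() < beta:
                        new_infections.add(dst)
        newly_recovered = {n for n in infected if rng.random() < gamma}
        infected = (infected | new_infections) - newly_recovered
        recovered |= newly_recovered
    return infected, recovered
```

Because each hour uses its own edge list, the order in which contacts occur matters: an infection can only travel along a chain of contacts that respects time, which is exactly what static aggregation misses.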
Unsupervised learning of digit recognition using spike-timing-dependent plasticity
Diehl, Peter U.; Cook, Matthew
2015-01-01
In order to understand how the mammalian neocortex performs computations, two things are necessary: a good understanding of the available neuronal processing units and mechanisms, and a better understanding of how those mechanisms are combined to build functioning systems. Accordingly, in recent years there has been increasing interest in how spiking neural networks (SNNs) can be used to perform complex computations or solve pattern recognition tasks. However, it remains challenging to design SNNs that use biologically plausible mechanisms (especially for learning new patterns), since most such SNN architectures rely on training in a rate-based network and subsequent conversion to an SNN. We present an SNN for digit recognition based on mechanisms with increased biological plausibility, i.e., conductance-based instead of current-based synapses, spike-timing-dependent plasticity with time-dependent weight change, lateral inhibition, and an adaptive spiking threshold. Unlike most other systems, we do not use a teaching signal and do not present any class labels to the network. Using this unsupervised learning scheme, our architecture achieves 95% accuracy on the MNIST benchmark, which is better than previous SNN implementations without supervision. The fact that we used no domain-specific knowledge points toward the general applicability of our network design. Moreover, the performance of our network scales well with the number of neurons used and is similar for four different learning rules, indicating robustness of the full combination of mechanisms, which suggests applicability in heterogeneous biological neural networks. PMID:26941637
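For readers unfamiliar with STDP, the classic pair-based weight update is sketched below. This is the textbook exponential window, not necessarily the exact "time-dependent weight change" rule used in the paper; the amplitude and time-constant values are illustrative.

```python
import math

def stdp_delta_w(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP window. dt = t_post - t_pre in milliseconds:
    a presynaptic spike shortly before a postsynaptic spike (dt > 0)
    potentiates the synapse; the reverse timing (dt < 0) depresses it,
    with the effect decaying exponentially in |dt|."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    if dt < 0:
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0
```

Slightly larger depression than potentiation (a_minus > a_plus), as chosen here, is a common way to keep weights from saturating without a teaching signal.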
Bio-inspired Autonomic Structures: a middleware for Telecommunications Ecosystems
NASA Astrophysics Data System (ADS)
Manzalini, Antonio; Minerva, Roberto; Moiso, Corrado
Today, people make use of several devices for communications, for accessing multimedia content services, for data/information retrieval, for processing, computing, etc.: examples are laptops, PDAs, mobile phones, digital cameras, mp3 players, smart cards and smart appliances. One of the most attractive service scenarios for the future of Telecommunications and the Internet is one in which people will be able to browse any object in the environment they live in: communications, sensing and processing of data and services will be highly pervasive. In this vision, people, machines, artifacts and the surrounding space will create a kind of computational environment and, at the same time, the interfaces to the network resources. A challenging technological issue will be the interconnection and management of heterogeneous systems and of a huge number of small devices tied together in networks of networks. Moreover, future network and service infrastructures should be able to provide Users and Application Developers (at different levels, e.g., residential Users but also SMEs, LEs, ASPs/Web2.0 Service Providers, ISPs, Content Providers, etc.) with the most appropriate "environment" according to their context and specific needs. Operators must be ready to manage this level of complexity by enhancing their platforms with technological advances that enable network and service self-supervision and self-adaptation capabilities. Autonomic software solutions, enhanced with innovative bio-inspired mechanisms and algorithms, are promising areas of long-term research to face such challenges. This chapter proposes a bio-inspired autonomic middleware capable of leveraging the assets of the underlying network infrastructure while, at the same time, supporting the development of future Telecommunications and Internet Ecosystems.
Le, Duc-Hau
2015-01-01
Protein complexes formed by non-covalent interactions among proteins play important roles in cellular functions. Computational and purification methods have been used to identify many protein complexes and their cellular functions. However, their roles in causing disease have not been well explored yet. Only a few studies exist on the identification of disease-associated protein complexes, and they mostly utilize complicated heterogeneous networks constructed from an out-of-date database of phenotype similarities collected from the literature. In addition, they apply only to diseases for which tissue-specific data exist. In this study, we propose a method to identify novel disease-protein complex associations. First, we introduce a framework to construct functional similarity protein complex networks, in which two protein complexes are functionally connected either by shared protein elements, by shared annotating GO terms, or based on protein interactions between the elements of each complex. Second, we propose a simple but effective neighborhood-based algorithm, which yields a local similarity measure, to rank disease candidate protein complexes. Comparing the predictive performance of our proposed algorithm with that of two state-of-the-art network propagation algorithms, including one we used in our previous study, we found that it performed statistically significantly better than both algorithms for all the constructed functional similarity protein complex networks. In addition, it ran about 32 times faster than these two algorithms. Moreover, our proposed method always achieved high performance in terms of AUC values irrespective of how the functional similarity protein complex networks were constructed and of the algorithms used. The performance of our method was also higher than that reported for some existing methods based on complicated heterogeneous networks. Finally, we tested our method on prostate cancer and selected the top 100 highly ranked candidate protein complexes. Interestingly, 69 of them were supported by evidence, since at least one of their protein elements is known to be associated with prostate cancer. Our proposed method, including the framework to construct functional similarity protein complex networks and the neighborhood-based algorithm on these networks, could be used for the identification of novel disease-protein complex associations.
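The neighborhood-based local similarity measure is only described qualitatively above. One plausible reading, sketched here purely as an assumption, is to score each candidate complex by the total similarity of its direct neighbors that are already known to be disease-associated:

```python
def rank_candidate_complexes(similarity, known_disease_complexes):
    """similarity: dict mapping an unordered pair (complex_a, complex_b)
    to an edge weight in the functional similarity network. Each complex
    is scored by summing the weights of its edges to complexes already
    associated with the disease; known complexes are excluded from the
    returned ranking."""
    scores = {}
    for (a, b), w in similarity.items():
        if b in known_disease_complexes:
            scores[a] = scores.get(a, 0.0) + w
        if a in known_disease_complexes:
            scores[b] = scores.get(b, 0.0) + w
    candidates = {c: s for c, s in scores.items() if c not in known_disease_complexes}
    return sorted(candidates.items(), key=lambda kv: -kv[1])
```

A one-hop score like this touches each edge once, which is consistent with the large speedup the authors report over iterative network propagation.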
Tao, Yuan; Liu, Juan
2005-01-01
The Internet has already shrunk our world of working and living into a very small scope, bringing about the concept of the Earth Village, in which people can communicate and co-work even though they are thousands of miles apart. This paper describes a prototype, rather like an Earth Lab for bioinformatics, based on a Web services framework, to build up a network architecture for bioinformatics research, allowing biologists worldwide to easily implement enormous, complex processes and to effectively share and access computing resources and data, regardless of how heterogeneous the data formats are and how decentralized and distributed these resources are around the world. A small, simplified example scenario is then given to illustrate the prototype.
Xia, Cheng-Yi; Meng, Xiao-Kun; Wang, Zhen
2015-01-01
In the research realm of game theory, interdependent networks have extended the content of spatial reciprocity, which needs the suitable coupling between networks. However, thus far, the vast majority of existing works just assume that the coupling strength between networks is symmetric. This hypothesis, to some extent, seems inconsistent with the ubiquitous observation of heterogeneity. Here, we study how the heterogeneous coupling strength, which characterizes the interdependency of utility between corresponding players of both networks, affects the evolution of cooperation in the prisoner’s dilemma game with two types of coupling schemes (symmetric and asymmetric ones). Compared with the traditional case, we show that heterogeneous coupling greatly promotes the collective cooperation. The symmetric scheme seems much better than the asymmetric case. Moreover, the role of varying amplitude of coupling strength is also studied on these two interdependent ways. Current findings are helpful for us to understand the evolution of cooperation within many real-world systems, in particular for the interconnected and interrelated systems. PMID:26102082
On the robustness of complex heterogeneous gene expression networks.
Gómez-Gardeñes, Jesús; Moreno, Yamir; Floría, Luis M
2005-04-01
We analyze a continuous gene expression model on the underlying topology of a complex heterogeneous network. Numerical simulations aimed at studying the chaotic and periodic dynamics of the model are performed. The results clearly indicate that there is a region in which the dynamical and structural complexity of the system avoid chaotic attractors. However, contrary to what has been reported for Random Boolean Networks, the chaotic phase cannot be completely suppressed, which has important bearings on network robustness and gene expression modeling.
Practical management of heterogeneous neuroimaging metadata by global neuroimaging data repositories
Neu, Scott C.; Crawford, Karen L.; Toga, Arthur W.
2012-01-01
Rapidly evolving neuroimaging techniques are producing unprecedented quantities of digital data at the same time that many research studies are evolving into global, multi-disciplinary collaborations between geographically distributed scientists. While networked computers have made it almost trivial to transmit data across long distances, collecting and analyzing this data requires extensive metadata if the data is to be maximally shared. Though it is typically straightforward to encode text and numerical values into files and send content between different locations, it is often difficult to attach context and implicit assumptions to the content. As the number of and geographic separation between data contributors grows to national and global scales, the heterogeneity of the collected metadata increases and conformance to a single standardization becomes implausible. Neuroimaging data repositories must then not only accumulate data but must also consolidate disparate metadata into an integrated view. In this article, using specific examples from our experiences, we demonstrate how standardization alone cannot achieve full integration of neuroimaging data from multiple heterogeneous sources and why a fundamental change in the architecture of neuroimaging data repositories is needed instead. PMID:22470336
Parallel Algorithms for Switching Edges in Heterogeneous Graphs.
Bhuiyan, Hasanuzzaman; Khan, Maleq; Chen, Jiangzhuo; Marathe, Madhav
2017-06-01
An edge switch is an operation on a graph (or network) in which two edges are selected randomly and one end vertex of each is swapped with the other. Edge switch operations have important applications in graph theory and network analysis, such as in generating random networks with a given degree sequence, modeling and analyzing dynamic networks, and studying various dynamic phenomena over a network. The recent growth of real-world networks motivates the need for efficient parallel algorithms. The dependencies among successive edge switch operations and the requirement to keep the graph simple (i.e., no self-loops or parallel edges) as the edges are switched lead to significant challenges in designing a parallel algorithm. Addressing these challenges requires complex synchronization and communication among the processors, leading to difficulties in achieving a good speedup by parallelization. In this paper, we present distributed-memory parallel algorithms for switching edges in massive networks. These algorithms provide good speedup and scale well to a large number of processors. A harmonic mean speedup of 73.25 is achieved on eight different networks with 1024 processors. One of the steps in our edge switch algorithms requires the computation of multinomial random variables in parallel. This paper presents the first non-trivial parallel algorithm for the problem, achieving a speedup of 925 using 1024 processors.
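The edge switch operation itself is fully specified in the abstract: pick two edges {u,v} and {x,y}, rewire them to {u,y} and {x,v}, and reject the move if it would create a self-loop or a parallel edge. A serial reference sketch follows (the paper's contribution is the parallel version, which this does not attempt; note that a switch preserves every vertex degree):

```python
import random

def switch_edges(edges, num_switches, seed=0):
    """Perform num_switches accepted edge switches on a simple undirected
    graph. edges: iterable of frozensets {u, v}. A proposed switch replacing
    {u,v},{x,y} with {u,y},{x,v} is rejected if it would create a self-loop
    (u == y or x == v) or duplicate an existing edge."""
    rng = random.Random(seed)
    edge_set = set(frozenset(e) for e in edges)
    edge_list = [tuple(e) for e in edge_set]
    done = 0
    while done < num_switches:
        (u, v), (x, y) = rng.sample(edge_list, 2)
        new1, new2 = frozenset((u, y)), frozenset((x, v))
        if len(new1) < 2 or len(new2) < 2:        # would create a self-loop
            continue
        if new1 in edge_set or new2 in edge_set:  # would create a parallel edge
            continue
        edge_set -= {frozenset((u, v)), frozenset((x, y))}
        edge_set |= {new1, new2}
        edge_list = [tuple(e) for e in edge_set]
        done += 1
    return edge_set
```

The rejection checks are exactly the "keep the graph simple" constraint that creates the dependencies between successive switches and makes parallelization hard.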
Schmidt, Helmut; Petkov, George; Richardson, Mark P; Terry, John R
2014-11-01
Graph theory has evolved into a useful tool for studying complex brain networks inferred from a variety of measures of neural activity, including fMRI, DTI, MEG and EEG. In the study of neurological disorders, recent work has discovered differences in the structure of graphs inferred from patient and control cohorts. However, most of these studies pursue a purely observational approach; identifying correlations between properties of graphs and the cohort which they describe, without consideration of the underlying mechanisms. To move beyond this necessitates the development of computational modeling approaches to appropriately interpret network interactions and the alterations in brain dynamics they permit, which in the field of complexity sciences is known as dynamics on networks. In this study we describe the development and application of this framework using modular networks of Kuramoto oscillators. We use this framework to understand functional networks inferred from resting state EEG recordings of a cohort of 35 adults with heterogeneous idiopathic generalized epilepsies and 40 healthy adult controls. Taking emergent synchrony across the global network as a proxy for seizures, our study finds that the critical strength of coupling required to synchronize the global network is significantly decreased for the epilepsy cohort for functional networks inferred from both theta (3-6 Hz) and low-alpha (6-9 Hz) bands. We further identify left frontal regions as a potential driver of seizure activity within these networks. We also explore the ability of our method to identify individuals with epilepsy, observing up to 80% predictive power through use of receiver operating characteristic analysis. 
Collectively these findings demonstrate that a computer model based analysis of routine clinical EEG provides significant additional information beyond standard clinical interpretation, which should ultimately enable a more appropriate mechanistic stratification of people with epilepsy leading to improved diagnostics and therapeutics.
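The core computational object above, a network of Kuramoto oscillators whose global synchrony serves as a seizure proxy, can be sketched in a few lines. This is a generic Euler integration with illustrative parameters, not the study's modular network or its inferred functional connectivity:

```python
import cmath
import math

def simulate_kuramoto(adj, omega, K, theta0, dt=0.01, steps=3000):
    """Euler-integrate Kuramoto oscillators coupled along a graph:
        dtheta_i/dt = omega_i + K * sum_{j in adj[i]} sin(theta_j - theta_i).
    Returns the final order parameter r in [0, 1] (1 = full synchrony)."""
    theta = list(theta0)
    n = len(theta)
    for _ in range(steps):
        theta = [theta[i] + dt * (omega[i]
                 + K * sum(math.sin(theta[j] - theta[i]) for j in adj[i]))
                 for i in range(n)]
    return abs(sum(cmath.exp(1j, ) * 0 + cmath.exp(1j * t) for t in theta)) / n
```

Sweeping K upward and recording the smallest K at which r crosses a synchrony threshold gives the "critical coupling" statistic that the study found to be lower in the epilepsy cohort.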
Xia, Kai; Dong, Dong; Han, Jing-Dong J
2006-01-01
Background Although protein-protein interaction (PPI) networks have been explored by various experimental methods, the maps so built are still limited in coverage and accuracy. To further expand the PPI network and to extract more accurate information from existing maps, studies have been carried out to integrate various types of functional relationship data. A frequently updated database of computationally analyzed potential PPIs to provide biological researchers with rapid and easy access to analyze original data as a biological network is still lacking. Results By applying a probabilistic model, we integrated 27 heterogeneous genomic, proteomic and functional annotation datasets to predict PPI networks in human. In addition to previously studied data types, we show that phenotypic distances and genetic interactions can also be integrated to predict PPIs. We further built an easy-to-use, updatable integrated PPI database, the Integrated Network Database (IntNetDB) online, to provide automatic prediction and visualization of PPI network among genes of interest. The networks can be visualized in SVG (Scalable Vector Graphics) format for zooming in or out. IntNetDB also provides a tool to extract topologically highly connected network neighborhoods from a specific network for further exploration and research. Using the MCODE (Molecular Complex Detections) algorithm, 190 such neighborhoods were detected among all the predicted interactions. The predicted PPIs can also be mapped to worm, fly and mouse interologs. Conclusion IntNetDB includes 180,010 predicted protein-protein interactions among 9,901 human proteins and represents a useful resource for the research community. Our study has increased prediction coverage by five-fold. IntNetDB also provides easy-to-use network visualization and analysis tools that allow biological researchers unfamiliar with computational biology to access and analyze data over the internet. 
The web interface of IntNetDB is freely accessible at . Visualization requires Mozilla version 1.8 (or higher) or Internet Explorer with installation of SVGviewer. PMID:17112386
Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments
Zapater, Marina; Sanchez, Cesar; Ayala, Jose L.; Moya, Jose M.; Risco-Martín, José L.
2012-01-01
Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, face constantly increasing computational demands in order to process data and offer services to users. The nature of these applications implies the use of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time. PMID:23112621
Object-based media and stream-based computing
NASA Astrophysics Data System (ADS)
Bove, V. Michael, Jr.
1998-03-01
Object-based media refers to the representation of audiovisual information as a collection of objects - the result of scene-analysis algorithms - and a script describing how they are to be rendered for display. Such multimedia presentations can adapt to viewing circumstances as well as to viewer preferences and behavior, and can provide a richer link between content creator and consumer. With faster networks and processors, such ideas become applicable to live interpersonal communications as well, creating a more natural and productive alternative to traditional videoconferencing. In this paper we outline examples of object-based media algorithms and applications developed by my group, and present new hardware architectures and software methods that we have developed to meet the computational requirements of object-based and other advanced media representations. In particular, we describe stream-based processing, which enables automatic run-time parallelization of multidimensional signal processing tasks even given heterogeneous computational resources.
Nabavi, Sheida
2016-08-15
With advances in technologies, huge amounts of multiple types of high-throughput genomics data are available. These data have tremendous potential to identify new and clinically valuable biomarkers to guide the diagnosis, assessment of prognosis, and treatment of complex diseases, such as cancer. Integrating, analyzing, and interpreting big and noisy genomics data to obtain biologically meaningful results, however, remains highly challenging. Mining genomics datasets with advanced computational methods can help to address these issues. To facilitate the identification of a short list of biologically meaningful genes as candidate drivers of anti-cancer drug resistance from an enormous amount of heterogeneous data, we employed statistical machine-learning techniques and integrated genomics datasets. We developed a computational method that integrates gene expression, somatic mutation, and copy number aberration data of sensitive and resistant tumors. In this method, an integrative approach based on module network analysis is applied to identify potential driver genes, followed by cross-validation and a comparison of the results for the sensitive and resistant groups to obtain the final list of candidate biomarkers. We applied this method to the ovarian cancer data from The Cancer Genome Atlas. The final result contains biologically relevant genes, such as COL11A1, which has been reported as a cis-platinum resistance biomarker for epithelial ovarian carcinoma in several recent studies. The described method yields a short list of aberrant genes that also control the expression of their co-regulated genes. The results suggest that this unbiased, data-driven computational method can identify biologically relevant candidate biomarkers. It can be utilized in a wide range of applications that compare two conditions with highly heterogeneous datasets.
Interplay of network dynamics and heterogeneity of ties on spreading dynamics.
Ferreri, Luca; Bajardi, Paolo; Giacobini, Mario; Perazzo, Silvia; Venturino, Ezio
2014-07-01
The structure of a network dramatically affects the spreading phenomena unfolding upon it. The contact distribution of the nodes has long been recognized as the key ingredient influencing outbreak events. However, limited knowledge is currently available on the role of edge weights in the persistence of a pathogen. At the same time, recent works have shown a strong influence of temporal network dynamics on disease spreading. In this work we provide an analytical understanding, corroborated by numerical simulations, of the conditions for an infected stable state in weighted networks. In particular, we reveal the role of the heterogeneity of edge weights, and of the dynamic assignment of weights to ties in the network, in driving the spread of the epidemic. In this context we show that when weights are dynamically assigned to ties in the network, a heterogeneous distribution is able to hamper the diffusion of the disease, contrary to what happens when weights are fixed in time.
Ostojic, Srdjan; Brunel, Nicolas; Hakim, Vincent
2009-06-01
We investigate how synchrony can be generated or induced in networks of electrically coupled integrate-and-fire neurons subject to noisy and heterogeneous inputs. Using analytical tools, we find that in a network under constant external inputs, synchrony can appear via a Hopf bifurcation from the asynchronous state to an oscillatory state. In a homogeneous network, all neurons fire in synchrony in the oscillatory state, while in a heterogeneous network synchrony is looser, with many neurons skipping cycles of the oscillation. If the transmission of action potentials via the electrical synapses is effectively excitatory, the Hopf bifurcation is supercritical, while effectively inhibitory transmission due to pronounced hyperpolarization leads to a subcritical bifurcation. In the latter case, the network exhibits bistability between an asynchronous state and an oscillatory state in which all the neurons fire in synchrony. Finally, we show that for time-varying external inputs, electrical coupling enhances synchronization in an asynchronous network via a resonance at the firing-rate frequency.
Buskens, Vincent; Snijders, Chris
2016-01-01
We study how payoffs and network structure affect reaching the payoff-dominant equilibrium in a coordination game that actors play with their neighbors in a network. Using an extensive simulation analysis of over 100,000 networks with 2-25 actors, we show that the importance of network characteristics is restricted to a limited part of the payoff space. In this part, we conclude that the payoff-dominant equilibrium is chosen more often if network density is larger, the network is more centralized, and segmentation of the network is smaller. Moreover, heterogeneity in behavior is more likely to persist if the network is more segmented and less centralized. Persistence of heterogeneous behavior is not related to network density.
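A standard way to simulate such a networked coordination game is synchronous best-response dynamics, sketched below; the update rule, payoff matrix, and tie-breaking toward action 0 are illustrative assumptions, not necessarily the study's exact setup.

```python
def best_response_step(adj, actions, payoff):
    """One synchronous best-response update of a two-action coordination game
    on a network. adj: node -> list of neighbors; actions: node -> 0 or 1;
    payoff[a][b]: the row player's payoff for playing a against b. Each node
    picks the action with the higher total payoff against its neighbors'
    current actions (ties broken toward action 0)."""
    new_actions = {}
    for i, neighbors in adj.items():
        utilities = [sum(payoff[a][actions[j]] for j in neighbors) for a in (0, 1)]
        new_actions[i] = 0 if utilities[0] >= utilities[1] else 1
    return new_actions
```

With a payoff matrix like [[2, 0], [0, 1]], both "everyone plays 0" (payoff-dominant) and "everyone plays 1" are fixed points; which basin a given initial profile falls into is exactly what varies with density, centralization, and segmentation in the study.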
An Outline of Data Aggregation Security in Heterogeneous Wireless Sensor Networks
Boubiche, Sabrina; Boubiche, Djallel Eddine; Bilami, Azzedine; Toral-Cruz, Homero
2016-01-01
Data aggregation processes aim to reduce the amount of exchanged data in wireless sensor networks and consequently to minimize packet overhead and optimize energy efficiency. Securing the data aggregation process is a real challenge, since the aggregation nodes must access the relayed data in order to apply the aggregation functions. The data aggregation security problem has been widely addressed in classical homogeneous wireless sensor networks; however, most of the proposed security protocols cannot guarantee a high level of security because sensor node resources are limited. Heterogeneous wireless sensor networks have recently emerged as a new wireless sensor network category that expands the sensor nodes' resources and capabilities. These new kinds of WSNs have opened new research opportunities in which security is a particularly attractive area. Indeed, robust algorithms offering a high level of security can be used to secure data aggregation at the heterogeneous aggregation nodes, which is impossible in classical homogeneous WSNs. Contrary to the case of homogeneous sensor networks, however, the data aggregation security problem is still not sufficiently covered, and the proposed data aggregation security protocols remain few. To address this recent research area, this paper describes the data aggregation security problem in heterogeneous wireless sensor networks and surveys the few proposed security protocols. A classification and evaluation of the existing protocols, based on the adopted data aggregation security approach, is also introduced. PMID:27077866
W-MAC: A Workload-Aware MAC Protocol for Heterogeneous Convergecast in Wireless Sensor Networks
Xia, Ming; Dong, Yabo; Lu, Dongming
2011-01-01
The power consumption and latency of existing MAC protocols for wireless sensor networks (WSNs) are high in heterogeneous convergecast, where each sensor node generates different amounts of data in one convergecast operation. To solve this problem, we present W-MAC, a workload-aware MAC protocol for heterogeneous convergecast in WSNs. A subtree-based iterative cascading scheduling mechanism and a workload-aware time slice allocation mechanism are proposed to minimize the power consumption of nodes, while offering a low data latency. In addition, an efficient schedule adjustment mechanism is provided for adapting to data traffic variation and network topology change. Analytical and simulation results show that the proposed protocol provides a significant energy saving and latency reduction in heterogeneous convergecast, and can effectively support data aggregation to further improve the performance. PMID:22163753
Average is Boring: How Similarity Kills a Meme's Success
Coscia, Michele
2014-01-01
Every day we are exposed to different ideas, or memes, competing with each other for our attention. Previous research explained the heterogeneity in popularity and persistence of memes by assuming that they compete for limited attention resources, distributed over a heterogeneous social network. Little has been said about what characteristics make a specific meme more likely to be successful. We propose a similarity-based explanation: memes with higher similarity to other memes have a significant disadvantage in their potential popularity. We employ a meme similarity measure based on semantic text analysis and computer vision to show that a meme is more likely to succeed and to thrive if its characteristics make it unique. Our results show that successful memes are indeed located in the periphery of the meme similarity space and that our similarity measure is a promising predictor of a meme's success. PMID:25257730
Community-driven computational biology with Debian Linux.
Möller, Steffen; Krabbenhöft, Hajo Nils; Tille, Andreas; Paleino, David; Williams, Alan; Wolstencroft, Katy; Goble, Carole; Holland, Richard; Belhachemi, Dominique; Plessy, Charles
2010-12-21
The Open Source movement and its technologies are popular in the bioinformatics community because they provide freely available tools and resources for research. In order to feed the steady demand for updates on software and associated data, a service infrastructure is required for sharing and providing these tools to heterogeneous computing environments. The Debian Med initiative provides ready and coherent software packages for medical informatics and bioinformatics. These packages can be used together in Taverna workflows via the UseCase plugin to manage execution on local or remote machines. If such packages are available in cloud computing environments, the underlying hardware and the analysis pipelines can be shared along with the software. Debian Med closes the gap between developers and users. It provides a simple method for offering new releases of software and data resources, thus provisioning a local infrastructure for computational biology. For geographically distributed teams it can ensure they are working on the same versions of tools, in the same conditions. This contributes to the world-wide networking of researchers.
NASA Astrophysics Data System (ADS)
Maqueda, A.; Renard, P.; Cornaton, F. J.
2014-12-01
Coastal karst networks are formed by mineral dissolution, mainly of calcite, in the freshwater-saltwater mixing zone. The problem has been approached first by studying the kinetics of calcite dissolution and then by coupling ion-pairing software with flow and mass transport models. Porosity development models require high computational power. A workaround to reduce computational complexity is to assume that the calcite dissolution reaction is relatively fast, so that equilibrium chemistry can be used to model it (Sanford & Konikow, 1989). Later developments allowed the full coupling of kinetics and transport in a model; however, kinetic effects of calcite dissolution were found to be negligible under the single set of assumed hydrological and geochemical boundary conditions. Here, a model is implemented by coupling the FEFLOW software, as the flow and transport module, with the PHREEQC4FEFLOW (Wissmeier, 2013) ion-pairing module. The model is used to assess the influence of heterogeneities in hydrological, geochemical and lithological boundary conditions on porosity evolution. The hydrologic conditions present in the karst aquifer of the Quintana Roo coast in Mexico are used as a guide for generating simulation inputs.
Ding, Xuemei; Bucholc, Magda; Wang, Haiying; Glass, David H; Wang, Hui; Clarke, Dave H; Bjourson, Anthony John; Dowey, Le Roy C; O'Kane, Maurice; Prasad, Girijesh; Maguire, Liam; Wong-Lin, KongFatt
2018-06-27
There is currently a lack of an efficient, objective and systemic approach towards the classification of Alzheimer's disease (AD), due to its complex etiology and pathogenesis. As AD is inherently dynamic, it is also not clear how the relationships among AD indicators vary over time. To address these issues, we propose a hybrid computational approach for AD classification and evaluate it on the heterogeneous longitudinal AIBL dataset. Specifically, using clinical dementia rating as an index of AD severity, the most important indicators (mini-mental state examination, logical memory recall, grey matter and cerebrospinal volumes from MRI and active voxels from PiB-PET brain scans, ApoE, and age) can be automatically identified from parallel data mining algorithms. In this work, Bayesian network modelling across different time points is used to identify and visualize time-varying relationships among the significant features, and importantly, in an efficient way using only coarse-grained data. Crucially, our approach suggests key data features and their appropriate combinations that are relevant for AD severity classification with high accuracy. Overall, our study provides insights into AD developments and demonstrates the potential of our approach in supporting efficient AD diagnosis.
Tseng, Jui-Pin
2017-02-01
This investigation establishes the global cluster synchronization of complex networks with a community structure based on an iterative approach. The units comprising the network are described by differential equations, and can be non-autonomous and involve time delays. In addition, units in the different communities can be governed by different equations. The coupling configuration of the network is rather general. The coupling terms can be non-diffusive, nonlinear, asymmetric, and with heterogeneous coupling delays. Based on this approach, both delay-dependent and delay-independent criteria for global cluster synchronization are derived. We implement the present approach for a nonlinearly coupled neural network with heterogeneous coupling delays. Two numerical examples are given to show that neural networks can behave in a variety of new collective ways under the synchronization criteria. These examples also demonstrate that neural networks remain synchronized in spite of coupling delays between neurons across different communities; however, they may lose synchrony if the coupling delays between the neurons within the same community are too large, such that the synchronization criteria are violated.
Shrestha, Bharat; Hossain, Ekram; Camorlinga, Sergio
2011-09-01
In wireless personal area networks, such as wireless body-area sensor networks, stations or devices have different bandwidth requirements and thus create heterogeneous traffic. For such networks, the IEEE 802.15.4 medium access control (MAC) can be used in the beacon-enabled mode, which supports guaranteed time slot (GTS) allocation for time-critical data transmissions. This paper presents a general discrete-time Markov chain model for IEEE 802.15.4-based networks that accounts jointly for slotted carrier sense multiple access with collision avoidance and GTS transmission in the heterogeneous traffic scenario under non-saturated conditions. For this purpose, the standard GTS allocation scheme is modified. For each non-identical device, the Markov model is solved, and the average service time and the service utilization factor are analyzed in the non-saturated mode. The analysis is validated by simulations using network simulator version 2.33. The model is also enhanced with a wireless propagation model, and the performance of the MAC is evaluated in a wheelchair body-area sensor network scenario.
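The last step of any such discrete-time Markov chain analysis, solving for the chain's stationary distribution, can be sketched as follows. The two-state idle/busy chain in the test case is a made-up illustration, not the paper's 802.15.4 model, and power iteration is only one of several ways to solve the chain.

```python
def stationary_distribution(P, tol=1e-12, max_iter=100000):
    """Stationary distribution of a discrete-time Markov chain by power
    iteration. P is a row-stochastic transition matrix (list of lists);
    assumes the chain is irreducible and aperiodic so iteration converges."""
    n = len(P)
    pi = [1.0 / n] * n                   # uniform initial guess
    for _ in range(max_iter):
        nxt = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(nxt[j] - pi[j]) for j in range(n)) < tol:
            return nxt
        pi = nxt
    return pi
```

Quantities such as the average service time then follow from the stationary probabilities of the service-related states.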
Exploration of Heterogeneity in Distributed Research Network Drug Safety Analyses
ERIC Educational Resources Information Center
Hansen, Richard A.; Zeng, Peng; Ryan, Patrick; Gao, Juan; Sonawane, Kalyani; Teeter, Benjamin; Westrich, Kimberly; Dubois, Robert W.
2014-01-01
Distributed data networks representing large diverse populations are an expanding focus of drug safety research. However, interpreting results is difficult when treatment effect estimates vary across datasets (i.e., heterogeneity). In a previous study, risk estimates were generated for selected drugs and potential adverse outcomes. Analyses were…
A market-based optimization approach to sensor and resource management
NASA Astrophysics Data System (ADS)
Schrage, Dan; Farnham, Christopher; Gonsalves, Paul G.
2006-05-01
Dynamic resource allocation for sensor management is a problem that demands solutions beyond traditional approaches to optimization. Market-based optimization applies solutions from economic theory, particularly game theory, to the resource allocation problem by creating an artificial market for sensor information and computational resources. Intelligent agents are the buyers and sellers in this market, and they represent all the elements of the sensor network, from sensors to sensor platforms to computational resources. These agents interact based on a negotiation mechanism that determines their bidding strategies. This negotiation mechanism and the agents' bidding strategies are based on game theory, and they are designed so that the aggregate result of the multi-agent negotiation process is a market in competitive equilibrium, which guarantees an optimal allocation of resources throughout the sensor network. This paper makes two contributions to the field of market-based optimization: First, we develop a market protocol to handle heterogeneous goods in a dynamic setting. Second, we develop arbitrage agents to improve the efficiency in the market in light of its dynamic nature.
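The price-adjustment idea behind such a market mechanism can be sketched with a simple Walrasian tâtonnement loop: raise the price of a resource while aggregate agent demand exceeds supply, and lower it otherwise, until the market clears. The demand function and step size in the test are illustrative assumptions, not the paper's negotiation mechanism.

```python
def tatonnement(demand_fn, supply, price0=1.0, eta=0.1, tol=1e-9, max_iter=10000):
    """Adjust a single resource price toward competitive equilibrium.
    demand_fn(p): aggregate demand at price p; supply: fixed capacity.
    The price moves proportionally to excess demand and stays positive."""
    p = price0
    for _ in range(max_iter):
        excess = demand_fn(p) - supply
        if abs(excess) < tol:            # market cleared
            break
        p = max(1e-9, p + eta * excess)
    return p
```

In a full multi-good market each sensor or computational resource would carry its own price, updated from its own excess demand.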
Integration science and distributed networks
NASA Astrophysics Data System (ADS)
Landauer, Christopher; Bellman, Kirstie L.
2002-07-01
Our work on integration of data and knowledge sources is based in a common theoretical treatment of 'Integration Science', which leads to systematic processes for combining formal logical and mathematical systems, computational and physical systems, and human systems and organizations. The theory is based on the processing of explicit meta-knowledge about the roles played by the different knowledge sources and the methods of analysis and semantic implications of the different data values, together with information about the context in which and the purpose for which they are being combined. The research treatment is primarily mathematical, and though this kind of integration mathematics is still under development, there are some applicable common threads that have emerged already. Instead of describing the current state of the mathematical investigations, since they are not yet crystallized enough for formalisms, we describe our applications of the approach in several different areas, including our focus area of 'Constructed Complex Systems', which are complex heterogeneous systems managed or mediated by computing systems. In this context, it is important to remember that all systems are embedded, all systems are autonomous, and that all systems are distributed networks.
A survey and taxonomy on energy efficient resource allocation techniques for cloud computing systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hameed, Abdul; Khoshkbarforoushha, Alireza; Ranjan, Rajiv
In a cloud computing paradigm, energy-efficient allocation of the different virtualized ICT resources (servers, storage disks, networks, and the like) is a complex problem due to the presence of heterogeneous application workloads (e.g., content delivery networks, MapReduce, web applications) with contentious allocation requirements in terms of ICT resource capacities (e.g., network bandwidth, processing speed, response time). Several recent papers have tried to address the issue of improving energy efficiency in allocating cloud resources to applications, with varying degrees of success. However, to the best of our knowledge there is no published literature on this subject that clearly articulates the research problem and provides a research taxonomy for succinct classification of existing techniques. Hence, the main aim of this paper is to identify the open challenges associated with energy-efficient resource allocation. In this regard, the study first outlines the problem and the existing hardware- and software-based techniques available for this purpose. Furthermore, the techniques already presented in the literature are summarized based on an energy-efficiency research dimension taxonomy. The advantages and disadvantages of the existing techniques are comprehensively analyzed against the proposed research dimension taxonomy, namely: resource adaptation policy, objective function, allocation method, allocation operation, and interoperability.
Pinter-Wollman, Noa; Wollman, Roy; Guetz, Adam; Holmes, Susan; Gordon, Deborah M.
2011-01-01
Social insects exhibit coordinated behaviour without central control. Local interactions among individuals determine their behaviour and regulate the activity of the colony. Harvester ants are recruited for outside work, using networks of brief antennal contacts, in the nest chamber closest to the nest exit: the entrance chamber. Here, we combine empirical observations, image analysis and computer simulations to investigate the structure and function of the interaction network in the entrance chamber. Ant interactions were distributed heterogeneously in the chamber, with an interaction hot-spot at the entrance leading further into the nest. The distribution of the total interactions per ant followed a right-skewed distribution, indicating the presence of highly connected individuals. Numbers of ant encounters observed positively correlated with the duration of observation. Individuals varied in interaction frequency, even after accounting for the duration of observation. An ant's interaction frequency was explained by its path shape and location within the entrance chamber. Computer simulations demonstrate that variation among individuals in connectivity accelerates information flow to an extent equivalent to an increase in the total number of interactions. Individual variation in connectivity, arising from variation among ants in location and spatial behaviour, creates interaction centres, which may expedite information flow. PMID:21490001
A moment-convergence method for stochastic analysis of biochemical reaction networks.
Zhang, Jiajun; Nie, Qing; Zhou, Tianshou
2016-05-21
Traditional moment-closure methods need to assume that high-order cumulants of a probability distribution are approximately zero. However, this strong assumption is not satisfied for many biochemical reaction networks. Here, we introduce convergent moments (defined mathematically as the coefficients in the Taylor expansion of the probability-generating function at some point) to overcome this drawback of moment-closure methods. On this basis, we develop a new analysis method for stochastic chemical kinetics. This method provides an accurate approximation of the master probability equation (MPE). In particular, the connection between low-order convergent moments and rate constants can be derived in explicit and analytical form, allowing insights that would be difficult to obtain through direct simulation or manipulation of the MPE. In addition, it provides an accurate and efficient way to compute steady-state or transient probability distributions, avoiding the algorithmic difficulty associated with stiffness of the MPE due to large differences in the sizes of the rate constants. Applications of the method to several systems reveal nontrivial stochastic mechanisms of gene expression dynamics, e.g., intrinsic fluctuations can induce transient bimodality and amplify transient signals, and slow switching between promoter states can increase fluctuations in spatially heterogeneous signals. The overall approach has broad applications in the modeling, analysis, and computation of complex biochemical networks with intrinsic noise.
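Since the abstract defines convergent moments as Taylor coefficients of the probability-generating function, a small numeric sketch can make the definition concrete. The helper below is illustrative, not the authors' code, and the truncated Poisson distribution is an assumed test case: expanding at z0 = 0 recovers the probabilities themselves, while expanding at z0 = 1 yields the binomial (factorial) moments.

```python
from math import exp, factorial

def binom(n, k):
    """Binomial coefficient C(n, k) for non-negative integers."""
    return factorial(n) // (factorial(k) * factorial(n - k))

def pgf_taylor_at(probs, z0, order):
    """Taylor coefficients c_k = G^(k)(z0) / k! of the probability-
    generating function G(z) = sum_n probs[n] * z**n; these are the
    'convergent moments' of the distribution at expansion point z0."""
    return [sum(p * binom(n, k) * z0 ** (n - k)
                for n, p in enumerate(probs) if n >= k)
            for k in range(order + 1)]

# Truncated Poisson(1) distribution as an assumed example.
poisson = [exp(-1.0) / factorial(n) for n in range(21)]
```

For Poisson(1), the coefficients at z0 = 1 should give total probability 1, mean 1, and E[X(X-1)]/2 = 1/2, up to truncation error.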
NASA Astrophysics Data System (ADS)
Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian
2018-01-01
We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
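A stripped-down sketch of the Monte Carlo photon transport kernel that such a platform parallelizes is shown below. It is a serial, illustrative reduction (exponential free paths, absorption via survival-weight attenuation, no geometry, boundaries, or direction bookkeeping), not the OpenCL implementation described above.

```python
import math
import random

def mc_photon_paths(n_photons, mu_a, mu_s, max_events=1000, seed=0):
    """Minimal Monte Carlo photon transport sketch. mu_a, mu_s: absorption
    and scattering coefficients; mu_t = mu_a + mu_s. Free-path lengths are
    exponential with mean 1/mu_t; at each collision the photon weight is
    attenuated by the survival fraction mu_s/mu_t. Returns final weights
    and total path lengths (until the weight falls below a cutoff)."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    weights, lengths = [], []
    for _ in range(n_photons):
        w, travelled = 1.0, 0.0
        for _ in range(max_events):
            step = -math.log(1.0 - rng.random()) / mu_t  # sample free path
            travelled += step
            w *= mu_s / mu_t                             # absorption weighting
            if w <= 1e-4:                                # weight cutoff
                break
            # a real kernel would sample a new scattering direction here
        weights.append(w)
        lengths.append(travelled)
    return weights, lengths
```

In a GPU or OpenCL version, each photon's loop runs in an independent work-item, which is what makes the method embarrassingly parallel across heterogeneous devices.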
WebGIS based on semantic grid model and web services
NASA Astrophysics Data System (ADS)
Zhang, WangFei; Yue, CaiRong; Gao, JianGuo
2009-10-01
As the meeting point of network technology and GIS technology, WebGIS has developed rapidly in recent years. Constrained by the Web and by the characteristics of GIS, traditional WebGIS has several prominent problems: for example, it cannot achieve interoperability among heterogeneous spatial databases, nor cross-platform data access. With the appearance of Web Services and Grid technology, the WebGIS field has changed greatly. Web Services provide an interface that gives sites the ability to share data and to intercommunicate. The goal of Grid technology is to turn the Internet into a single large supercomputer, with which computing resources, storage resources, data resources, information resources, knowledge resources and expert resources can be shared efficiently. For WebGIS, however, this only achieves the physical connection of data and information, and that is far from enough. Because experts in different fields understand the world differently, and follow different professional conventions, policies and habits, they reach different conclusions when observing the same geographic phenomenon, and semantic heterogeneity arises: the same concept can differ greatly across fields. A WebGIS that ignores this semantic heterogeneity will answer users' questions wrongly, or fail to answer them at all. To solve this problem, this paper puts forward and tests an effective method that combines the semantic grid and Web Services technology to develop WebGIS.
In this paper, we study how to construct ontologies and how to combine Grid technology with Web Services; based on a detailed analysis of the computing characteristics and application model of distributed data, we design an ontology-driven WebGIS query system built on Grid technology and Web Services.
Mobile high-performance computing (HPC) for synthetic aperture radar signal processing
NASA Astrophysics Data System (ADS)
Misko, Joshua; Kim, Youngsoo; Qi, Chenchen; Sirkeci, Birsen
2018-04-01
The importance of mobile high-performance computing has emerged in numerous battlespace applications at the tactical edge in hostile environments. Energy-efficient computing power is a key enabler for diverse areas ranging from real-time big data analytics and atmospheric science to network science. However, the design of tactical mobile data centers is dominated by power, thermal, and physical constraints. At present, the required processing power is unlikely to be achieved simply by aggregating emerging heterogeneous many-core processing platforms consisting of CPU, field-programmable gate array, and graphics processor cores under power and performance constraints. To address these challenges, we performed a synthetic aperture radar case study for automatic target recognition (ATR) using deep neural networks (DNNs). However, these DNN models are typically trained on GPUs with gigabytes of external memory and rely heavily on 32-bit floating-point operations. As a result, DNNs do not run efficiently on hardware appropriate for low-power or mobile applications. To address this limitation, we propose a framework for compressing DNN models for ATR suited to deployment on resource-constrained hardware. The proposed compression framework utilizes promising DNN compression techniques, including pruning and weight quantization, while also focusing on processor features common to modern low-power devices. Following this methodology as a guideline produced a DNN for ATR tuned to maximize classification throughput, minimize power consumption, and minimize memory footprint on a low-power device.
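The two compression techniques named above, pruning and weight quantization, can be sketched on a flat list of weights as follows. The magnitude-based threshold and the uniform quantizer are generic illustrations under stated assumptions, not the paper's framework.

```python
def prune_and_quantize(weights, sparsity, n_bits):
    """Magnitude pruning followed by uniform quantization of the survivors.
    weights: flat list of floats; sparsity: fraction to zero out;
    n_bits: bit width of the quantized representation."""
    # 1) Prune: zero out the smallest-magnitude fraction of the weights.
    #    Ties at the threshold may prune slightly more than requested.
    k = int(len(weights) * sparsity)
    threshold = sorted(abs(w) for w in weights)[k - 1] if k else None
    pruned = [0.0 if k and abs(w) <= threshold else w for w in weights]
    # 2) Quantize survivors onto 2**n_bits uniform levels over [lo, hi].
    survivors = [w for w in pruned if w != 0.0]
    if not survivors:
        return pruned
    lo, hi = min(survivors), max(survivors)
    scale = (hi - lo) / (2 ** n_bits - 1) or 1.0
    return [0.0 if w == 0.0 else lo + round((w - lo) / scale) * scale
            for w in pruned]
```

In a real deployment the pruned, quantized weights would be stored in a sparse, low-bit-width format; here the point is only the order of operations named in the abstract.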
Regenbogen, Christina; Herrmann, Manfred; Fehr, Thorsten
2010-01-01
Studies investigating the effects of violent computer and video game playing have resulted in heterogeneous outcomes. It has been assumed that people who play these games intensively show a decreased ability to differentiate between virtuality and reality. fMRI data of a group of young males with (gamers) and without (controls) a history of long-term violent computer game playing experience were obtained during the presentation of computer game and realistic video sequences. In gamers, the processing of real violence in contrast to nonviolence produced activation clusters in right inferior frontal, left lingual and superior temporal brain regions. Virtual violence activated a network comprising bilateral inferior frontal, occipital, postcentral, right middle temporal, and left fusiform regions. Control participants showed extended left frontal, insula and superior frontal activations during the processing of real, and posterior activations during the processing of virtual, violent scenarios. The data suggest that the ability to differentiate automatically between real and virtual violence has not been diminished by a long-term history of violent video game play, nor have gamers' neural responses to real violence in particular been subject to desensitization processes. However, analyses of individual data indicated that group-level analyses reflect only a small part of the actual individual differences in neural network involvement, suggesting that individual learning history should be considered in the present discussion.
NASA Astrophysics Data System (ADS)
Kohanpur, A. H.; Chen, Y.; Valocchi, A. J.; Tudek, J.; Crandall, D.
2016-12-01
CO2-brine flow in deep natural rocks is the focus of attention in the geological storage of CO2. Understanding rock and flow properties at the pore scale is a vital component of field-scale modeling and prediction of the fate of injected CO2. There are many challenges in working at the pore scale, such as the size and selection of a representative elementary volume (REV), particularly for material with complex geometry and heterogeneity, and the high computational costs. These issues factor into trade-offs that need to be made in choosing and applying pore-scale models. On one hand, pore-network modeling (PNM) simplifies the geometry and flow equations but can provide characteristic curves on fairly large samples. On the other hand, the lattice Boltzmann method (LBM) solves the Navier-Stokes equations on the real geometry but is limited to small samples due to its high computational costs. Thus, both methods have advantages but also face challenges, which warrants a more detailed comparison and evaluation. In this study, we used industrial and micro-CT scans of actual reservoir rock samples to characterize pore structure at different resolutions. We ran LBM models directly on the characterized geometry, and PNM on the equivalent extracted 3D network, to determine single- and two-phase flow properties during drainage and imbibition processes. Specifically, connectivity, absolute permeability, relative permeability curves, capillary pressure curves, and interface locations are compared between the models. We also performed simulations on several subsamples from different locations, including different domain sizes and orientations, to encompass analysis of heterogeneity and isotropy. This work is primarily supported as part of the Center for Geologic Storage of CO2, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, and partially supported by the International Institute for Carbon-Neutral Energy Research (WPI-I2CNER) based at Kyushu University, Japan.
Federated data storage and management infrastructure
NASA Astrophysics Data System (ADS)
Zarochentsev, A.; Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Hristov, P.
2016-10-01
The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate storage needs to grow by at least an order of magnitude; this will require new approaches to data storage organization and data handling. In our project we address the fundamental problem of designing an architecture that integrates distributed heterogeneous disk resources for LHC experiments and other data-intensive science applications, and that provides access to data from heterogeneous computing facilities. We have prototyped a federated storage for the Russian T1 and T2 centers located in Moscow, St. Petersburg and Gatchina, as well as a Russian/CERN federation. We have conducted extensive tests of the underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including ones running on supercomputing platforms, cloud computing and the Grid for the ALICE and ATLAS experiments. We present our current accomplishments in running LHC data analysis remotely and locally to demonstrate our ability to efficiently use federated data storage experiment-wide within national academic facilities for high energy and nuclear physics, as well as for other data-intensive science applications such as bioinformatics.
Coverage centralities for temporal networks
NASA Astrophysics Data System (ADS)
Takaguchi, Taro; Yano, Yosuke; Yoshida, Yuichi
2016-02-01
The structure of real networked systems, such as social relationships, can be modeled as temporal networks in which each edge appears only at prescribed times. Understanding the structure of temporal networks requires quantifying the importance of a temporal vertex, which is a pair of vertex index and time. In this paper, we define two centrality measures of a temporal vertex based on the fastest temporal paths which use the temporal vertex. The definition is free from parameters and robust against changes in the time scale on which we focus. In addition, we can efficiently compute these centrality values for all temporal vertices. Using the two centrality measures, we reveal that the distributions of these centrality values in real-world temporal networks are heterogeneous. For various datasets, we also demonstrate that a majority of the highly central temporal vertices are located within a narrow time window around a particular time. In other words, there is a bottleneck time at which most information sent in the temporal network passes through a small number of temporal vertices, which suggests an important role of these temporal vertices in spreading phenomena. Contribution to the Topical Issue "Temporal Network Theory and Applications", edited by Petter Holme. Supplementary material in the form of one pdf file is available from the Journal web page at http://dx.doi.org/10.1140/epjb/e2016-60498-7
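A minimal computation of fastest temporal paths, the ingredient underlying both centrality measures, can be sketched as follows. The contact-list representation and the instantaneous, bidirectional transmission assumption are illustrative choices, not the paper's algorithm, which computes these quantities far more efficiently for all temporal vertices.

```python
def earliest_arrival(contacts, src, dst, t_start):
    """Earliest time that information leaving src at t_start can reach dst.
    contacts: list of (t, u, v) events, each usable in both directions,
    with instantaneous transmission at the contact time."""
    reach = {src: t_start}
    for t, u, v in sorted(contacts):
        if t < t_start:
            continue
        for a, b in ((u, v), (v, u)):
            if a in reach and reach[a] <= t and t < reach.get(b, float('inf')):
                reach[b] = t
    return reach.get(dst, float('inf'))

def fastest_duration(contacts, src, dst):
    """Duration of the fastest temporal path: minimize arrival minus start.
    It suffices to try each contact time as a starting time."""
    starts = sorted({t for t, _, _ in contacts})
    return min((earliest_arrival(contacts, src, dst, s) - s for s in starts),
               default=float('inf'))
```

A coverage centrality would then, roughly, count how many such fastest paths pass through a given (vertex, time) pair.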
Ondex Web: web-based visualization and exploration of heterogeneous biological networks.
Taubert, Jan; Hassani-Pak, Keywan; Castells-Brooke, Nathalie; Rawlings, Christopher J
2014-04-01
Ondex Web is a new web-based implementation of the network visualization and exploration tools from the Ondex data integration platform. New features such as context-sensitive menus and annotation tools provide users with intuitive ways to explore and manipulate the appearance of heterogeneous biological networks. Ondex Web is open source, written in Java and can be easily embedded into Web sites as an applet. Ondex Web supports loading data from a variety of network formats, such as XGMML, NWB, Pajek and OXL. http://ondex.rothamsted.ac.uk/OndexWeb.
Socially Aware Heterogeneous Wireless Networks
Kosmides, Pavlos; Adamopoulou, Evgenia; Demestichas, Konstantinos; Theologou, Michael; Anagnostou, Miltiades; Rouskas, Angelos
2015-01-01
The development of smart cities has been the epicentre of many researchers' efforts during the past decade. One of the key requirements for smart city networks is mobility, and this is the reason stable, reliable and high-quality wireless communications are needed in order to connect people and devices. Most research efforts so far have used different kinds of wireless and sensor networks, making interoperability rather difficult to accomplish in smart cities. One common solution proposed in the recent literature is the use of software defined networks (SDNs), in order to enhance interoperability among the various heterogeneous wireless networks. In addition, SDNs can take advantage of the data retrieved from available sensors and use them as part of the intelligent decision-making process conducted during the resource allocation procedure. In this paper, we propose an architecture combining heterogeneous wireless networks with social networks using SDNs. Specifically, we exploit the information retrieved from location-based social networks regarding users' locations, and we attempt to predict areas that will be crowded by using specially designed machine learning techniques. By recognizing possibly crowded areas, we can provide mobile operators with recommendations about areas requiring datacell activation or deactivation. PMID:26110402
Visualization of metabolic interaction networks in microbial communities using VisANT 5.0
Granger, Brian R.; Chang, Yi -Chien; Wang, Yan; ...
2016-04-15
Here, the complexity of metabolic networks in microbial communities poses an unresolved visualization and interpretation challenge. We address this challenge in the newly expanded version of a software tool for the analysis of biological networks, VisANT 5.0. We focus in particular on facilitating the visual exploration of metabolic interaction between microbes in a community, e.g. as predicted by COMETS (Computation of Microbial Ecosystems in Time and Space), a dynamic stoichiometric modeling framework. Using VisANT's unique meta-graph implementation, we show how one can use VisANT 5.0 to explore different time-dependent ecosystem-level metabolic networks. In particular, we analyze the metabolic interaction network between two bacteria previously shown to display an obligate cross-feeding interdependency. In addition, we illustrate how a putative minimal gut microbiome community could be represented in our framework, making it possible to highlight interactions across multiple coexisting species. We envisage that the "symbiotic layout" of VisANT can be employed as a general tool for the analysis of metabolism in complex microbial communities as well as heterogeneous human tissues.
NASA Astrophysics Data System (ADS)
Standvoss, K.; Crijns, T.; Goerke, L.; Janssen, D.; Kern, S.; van Niedek, T.; van Vugt, J.; Alfonso Burgos, N.; Gerritse, E. J.; Mol, J.; van de Vooren, D.; Ghafoorian, M.; van den Heuvel, T. L. A.; Manniesing, R.
2018-02-01
The number and location of cerebral microbleeds (CMBs) in patients with traumatic brain injury (TBI) is important to determine the severity of trauma and may hold prognostic value for patient outcome. However, manual assessment is subjective and time-consuming due to the resemblance of CMBs to blood vessels, the possible presence of imaging artifacts, and the typical heterogeneity of trauma imaging data. In this work, we present a computer-aided detection system based on 3D convolutional neural networks for detecting CMBs in 3D susceptibility weighted images. Network architectures with varying depth were evaluated. Data augmentation techniques were employed to improve the networks' generalization ability and selective sampling was implemented to handle class imbalance. The predictions of the models were clustered using a connected component analysis. The system was trained on ten annotated scans and evaluated on an independent test set of eight scans. Despite this limited data set, the system reached a sensitivity of 0.87 at 16.75 false positives per scan (2.5 false positives per CMB), outperforming related work on CMB detection in TBI patients.
Collective dynamics in heterogeneous networks of neuronal cellular automata
NASA Astrophysics Data System (ADS)
Manchanda, Kaustubh; Bose, Amitabha; Ramaswamy, Ramakrishna
2017-12-01
We examine the collective dynamics of heterogeneous random networks of model neuronal cellular automata. Each automaton has b active states, a single silent state and r - b - 1 refractory states, and can show 'spiking' or 'bursting' behavior, depending on the value of b. We show that phase transitions that occur in the dynamical activity can be related to phase transitions in the structure of Erdős-Rényi graphs as a function of edge probability. Different forms of heterogeneity allow distinct structural phase transitions to become relevant. We also show that the dynamics on the network can be described by a semi-annealed process and, as a result, can be related to the Boolean Lyapunov exponent.
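To make the state structure concrete, here is a minimal Python sketch of one such automaton; the threshold firing rule and the parameter values are illustrative assumptions, not the paper's exact transition rule:

```python
def step(states, neighbors, b, r, threshold=1):
    """One synchronous update of a network of neuronal cellular automata.

    State encoding: 0 = silent, 1..b = active ("spiking"/"bursting"),
    b+1..r-1 = refractory. The firing rule (a silent node becomes active
    when at least `threshold` neighbors are active) is a hypothetical
    choice for illustration.
    """
    new = []
    for i, s in enumerate(states):
        if s == 0:
            # silent node fires if enough of its neighbors are active
            active = sum(1 for j in neighbors[i] if 1 <= states[j] <= b)
            new.append(1 if active >= threshold else 0)
        elif s < r - 1:
            new.append(s + 1)   # advance through active/refractory phases
        else:
            new.append(0)       # last refractory state returns to silent
    return new

# three nodes on a ring, b = 2 active phases, r = 5 states in total
states = [1, 0, 0]
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
states = step(states, nbrs, b=2, r=5)   # the active node recruits both neighbors
```

Iterating `step` on an Erdős-Rényi neighbor structure is then enough to explore how activity waves depend on edge probability.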
NASA Astrophysics Data System (ADS)
Li, Yu-Ye; Ding, Xue-Li
2014-12-01
Heterogeneity of the neurons and noise are inevitable in a real neuronal network. In this paper, Gaussian white noise induced spatial patterns, including spiral waves and multiple spatial coherence resonances, are studied in a network composed of Morris-Lecar neurons with heterogeneity characterized by parameter diversity. The relationship between the resonances and the transitions between ordered spiral waves and disordered spatial patterns is established. When parameter diversity is introduced, the maxima of the multiple resonances increase first and then decrease as diversity strength increases, which implies that the coherence degrees induced by noise are enhanced at an intermediate diversity strength. The synchronization degree of spatial patterns, including ordered spiral waves and disordered patterns, is found to be very low. The results suggest that the nervous system can profit from both heterogeneity and noise, and that the multiple spatial coherence resonances are achieved via the emergence of spiral waves rather than synchronization patterns.
Deep Convolutional Neural Networks Enable Discrimination of Heterogeneous Digital Pathology Images.
Khosravi, Pegah; Kazemi, Ehsan; Imielinski, Marcin; Elemento, Olivier; Hajirasouliha, Iman
2018-01-01
Pathological evaluation of tumor tissue is pivotal for diagnosis in cancer patients, and automated image analysis approaches have great potential to increase the precision of diagnosis and help reduce human error. In this study, we utilize several computational methods based on convolutional neural networks (CNN) and build a stand-alone pipeline to effectively classify different histopathology images across different types of cancer. In particular, we demonstrate the utility of our pipeline to discriminate between two subtypes of lung cancer, four biomarkers of bladder cancer, and five biomarkers of breast cancer. In addition, we apply our pipeline to discriminate among four immunohistochemistry (IHC) staining scores of bladder and breast cancers. Our classification pipeline includes a basic CNN architecture, Google's Inceptions with three training strategies, and an ensemble of two state-of-the-art algorithms, Inception and ResNet. Training strategies include training the last layer of Google's Inceptions, training the network from scratch, and fine-tuning the parameters for our data using two pre-trained versions of Google's Inception architectures, Inception-V1 and Inception-V3. We demonstrate the power of deep learning approaches for identifying cancer subtypes, and the robustness of Google's Inceptions even in the presence of extensive tumor heterogeneity. On average, our pipeline achieved accuracies of 100%, 92%, 95%, and 69% for discrimination of various cancer tissues, subtypes, biomarkers, and scores, respectively. Our pipeline and related documentation are freely available at https://github.com/ih-lab/CNN_Smoothie. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Principles of E-network modelling of heterogeneous systems
NASA Astrophysics Data System (ADS)
Tarakanov, D.; Tsapko, I.; Tsapko, S.; Buldygin, R.
2016-04-01
The present article is concerned with the analytical and simulation modelling of heterogeneous technical systems using the E-network mathematical apparatus (an extension of Petri nets). The distinguishing feature of the given system is the presence of the module which identifies the parameters of the controlled object as well as the external environment.
A further analysis of the role of heterogeneity in coevolutionary spatial games
NASA Astrophysics Data System (ADS)
Cardinot, Marcos; Griffith, Josephine; O'Riordan, Colm
2018-03-01
Heterogeneity has been studied as one of the most common explanations of the puzzle of cooperation in social dilemmas. A large number of papers have been published discussing the effects of increasing heterogeneity in structured populations of agents, where it has been established that heterogeneity may favour cooperative behaviour if it supports agents to locally coordinate their strategies. In this paper, assuming an existing model of a heterogeneous weighted network, we aim to further this analysis by exploring the relationship (if any) between heterogeneity and cooperation. We adopt a weighted network which is fully populated by agents playing either the Prisoner's Dilemma or the Optional Prisoner's Dilemma game with coevolutionary rules, i.e., not only the strategies but also the link weights evolve over time. Surprisingly, results show that the heterogeneity of link weights (states) on its own does not always promote cooperation; rather, cooperation is actually favoured by the increase in the number of overlapping states and not by the heterogeneity itself. We believe that these results can guide further research towards a more accurate analysis of the role of heterogeneity in social dilemmas.
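As a concrete reference for the games being played, here is a minimal Python sketch of the Optional Prisoner's Dilemma payoffs; the parameter values (temptation b = 1.8, loner payoff 0.3) and the weak-dilemma payoff table are illustrative assumptions, and the coevolving link weights of the adopted model are not reproduced here:

```python
def opd_payoff(s1, s2, b=1.8, loner=0.3):
    """Payoff to player 1 in a weak Optional Prisoner's Dilemma.

    Strategies: "C" cooperate, "D" defect, "A" abstain (loner).
    If either player abstains, both receive the small loner payoff;
    otherwise the standard weak-PD table applies. Parameter values
    are hypothetical, chosen only for illustration.
    """
    if s1 == "A" or s2 == "A":
        return loner
    table = {("C", "C"): 1.0, ("C", "D"): 0.0,
             ("D", "C"): b,   ("D", "D"): 0.0}
    return table[(s1, s2)]
```

Setting `loner=0` recovers the compulsory Prisoner's Dilemma, which is how the two games in the study relate.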
Crossing disciplines and scales to understand the critical zone
Brantley, S.L.; Goldhaber, M.B.; Ragnarsdottir, K. Vala
2007-01-01
The Critical Zone (CZ) is the system of coupled chemical, biological, physical, and geological processes operating together to support life at the Earth's surface. While our understanding of this zone has increased over the last hundred years, further advance requires scientists to cross disciplines and scales to integrate understanding of processes in the CZ, ranging in scale from the mineral-water interface to the globe. Despite the extreme heterogeneities manifest in the CZ, patterns are observed at all scales. Explanations require the use of new computational and analytical tools, inventive interdisciplinary approaches, and growing networks of sites and people.
Visualization of Metabolic Interaction Networks in Microbial Communities Using VisANT 5.0
Wang, Yan; DeLisi, Charles; Segrè, Daniel; Hu, Zhenjun
2016-01-01
The complexity of metabolic networks in microbial communities poses an unresolved visualization and interpretation challenge. We address this challenge in the newly expanded version of a software tool for the analysis of biological networks, VisANT 5.0. We focus in particular on facilitating the visual exploration of metabolic interaction between microbes in a community, e.g. as predicted by COMETS (Computation of Microbial Ecosystems in Time and Space), a dynamic stoichiometric modeling framework. Using VisANT’s unique metagraph implementation, we show how one can use VisANT 5.0 to explore different time-dependent ecosystem-level metabolic networks. In particular, we analyze the metabolic interaction network between two bacteria previously shown to display an obligate cross-feeding interdependency. In addition, we illustrate how a putative minimal gut microbiome community could be represented in our framework, making it possible to highlight interactions across multiple coexisting species. We envisage that the “symbiotic layout” of VisANT can be employed as a general tool for the analysis of metabolism in complex microbial communities as well as heterogeneous human tissues. VisANT is freely available at: http://visant.bu.edu and COMETS at http://comets.bu.edu. PMID:27081850
Visualization of Metabolic Interaction Networks in Microbial Communities Using VisANT 5.0.
Granger, Brian R; Chang, Yi-Chien; Wang, Yan; DeLisi, Charles; Segrè, Daniel; Hu, Zhenjun
2016-04-01
The complexity of metabolic networks in microbial communities poses an unresolved visualization and interpretation challenge. We address this challenge in the newly expanded version of a software tool for the analysis of biological networks, VisANT 5.0. We focus in particular on facilitating the visual exploration of metabolic interaction between microbes in a community, e.g. as predicted by COMETS (Computation of Microbial Ecosystems in Time and Space), a dynamic stoichiometric modeling framework. Using VisANT's unique metagraph implementation, we show how one can use VisANT 5.0 to explore different time-dependent ecosystem-level metabolic networks. In particular, we analyze the metabolic interaction network between two bacteria previously shown to display an obligate cross-feeding interdependency. In addition, we illustrate how a putative minimal gut microbiome community could be represented in our framework, making it possible to highlight interactions across multiple coexisting species. We envisage that the "symbiotic layout" of VisANT can be employed as a general tool for the analysis of metabolism in complex microbial communities as well as heterogeneous human tissues. VisANT is freely available at: http://visant.bu.edu and COMETS at http://comets.bu.edu.
Sociospace: A smart social framework based on the IP Multimedia Subsystem
NASA Astrophysics Data System (ADS)
Hasswa, Ahmed
Advances in smart technologies, wireless networking, and increased interest in contextual services have led to the emergence of ubiquitous and pervasive computing as one of the most promising areas of computing in recent years. Smart Spaces, in particular, have gained significant interest within the research community. Currently, most Smart Spaces rely on physical components, such as sensors, to acquire information about the real-world environment. Although current sensor networks can acquire some useful contextual information from the physical environment, their information resources are often limited, and the data acquired is often unreliable. We argue that by introducing social network information into such systems, smarter and more adaptive spaces can be created. Social networks have recently become extremely popular, and are now an integral part of millions of people's daily lives. Through social networks, users create profiles, build relationships, and join groups, forming intermingled sets and communities. Social Networks contain a wealth of information, which, if exploited properly, can lead to a whole new level of smart contextual services. A mechanism is therefore needed to extract data from heterogeneous social networks, to link profiles across different networks, and to aggregate the data obtained. We therefore propose the design and implementation of a Smart Spaces framework that utilizes the social context. In order to manage services and sessions, we integrate our system with the IP Multimedia Subsystem. Our system, which we call SocioSpace, includes full design and implementation of all components, including the central server, the location management system, the social network interfacing system, the service delivery platform, and user agents. We have built a prototype for proof of concept and carried out exhaustive performance analysis; the results show that SocioSpace is scalable, extensible, and fault-tolerant. 
It is capable of creating Smart Spaces that can truly deliver adaptive services that enhance the users' overall experience, increase their satisfaction, and make the surroundings more beneficial and interesting to them.
2010-01-01
Background Simulation of sophisticated biological models requires considerable computational power. These models typically integrate together numerous biological phenomena such as spatially-explicit heterogeneous cells, cell-cell interactions, cell-environment interactions and intracellular gene networks. The recent advent of programming for graphical processing units (GPU) opens up the possibility of developing more integrative, detailed and predictive biological models while at the same time decreasing the computational cost to simulate those models. Results We construct a 3D model of epidermal development and provide a set of GPU algorithms that executes significantly faster than sequential central processing unit (CPU) code. We provide a parallel implementation of the subcellular element method for individual cells residing in a lattice-free spatial environment. Each cell in our epidermal model includes an internal gene network, which integrates cellular interaction of Notch signaling together with environmental interaction of basement membrane adhesion, to specify cellular state and behaviors such as growth and division. We take a pedagogical approach to describing how modeling methods are efficiently implemented on the GPU including memory layout of data structures and functional decomposition. We discuss various programmatic issues and provide a set of design guidelines for GPU programming that are instructive to avoid common pitfalls as well as to extract performance from the GPU architecture. Conclusions We demonstrate that GPU algorithms represent a significant technological advance for the simulation of complex biological models. We further demonstrate with our epidermal model that the integration of multiple complex modeling methods for heterogeneous multicellular biological processes is both feasible and computationally tractable using this new technology. 
We hope that the provided algorithms and source code will be a starting point for modelers to develop their own GPU implementations, and encourage others to implement their modeling methods on the GPU and to make that code available to the wider community. PMID:20696053
Ferromagnetic transition in a simple variant of the Ising model on multiplex networks
NASA Astrophysics Data System (ADS)
Krawiecki, A.
2018-02-01
Multiplex networks consist of a fixed set of nodes connected by several sets of edges which are generated separately and correspond to different networks ("layers"). Here, a simple variant of the Ising model on multiplex networks with two layers is considered, with spins located in the nodes and edges corresponding to ferromagnetic interactions between them. Critical temperatures for the ferromagnetic transition are evaluated for the layers in the form of random Erdős-Rényi graphs or heterogeneous scale-free networks using the mean-field approximation and the replica method, from the replica symmetric solution. Both methods require the use of different "partial" magnetizations, associated with different layers of the multiplex network, and yield qualitatively similar results. If the layers are strongly heterogeneous the critical temperature differs noticeably from that for the Ising model on a network being a superposition of the two layers, evaluated in the mean-field approximation neglecting the effect of the underlying multiplex structure on the correlations between the degrees of nodes. The critical temperature evaluated from the replica symmetric solution depends sensitively on the correlations between the degrees of nodes in different layers and shows satisfactory quantitative agreement with that obtained from Monte Carlo simulations. The critical behavior of the magnetization for the model with strongly heterogeneous layers can depend on the distributions of the degrees of nodes and is then determined by the properties of the most heterogeneous layer.
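For orientation (a textbook mean-field result, not derived in the abstract itself), the annealed mean-field critical temperature of the Ising model on a single heterogeneous network with degree moments $\langle k\rangle$ and $\langle k^2\rangle$ is

```latex
k_B T_c = J\,\frac{\langle k^2\rangle}{\langle k\rangle},
```

and in the two-layer multiplex case the linearized self-consistency conditions for the partial magnetizations take the schematic form $\tilde m_l = \beta J \sum_{l'} A_{ll'}\, \tilde m_{l'}$, so the transition occurs where the largest eigenvalue of the matrix $A$ (which encodes the intra- and inter-layer degree moments and their correlations) satisfies $\beta_c J\,\lambda_{\max}(A) = 1$. This sketch is a simplified schematic of the approach described above, not the paper's exact formulation.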
Global Detection of Live Virtual Machine Migration Based on Cellular Neural Networks
Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian
2014-01-01
In order to meet the demands of operation monitoring of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. Through analyzing the detection process, the parameter relationship of the CNN is mapped as an optimization problem, in which an improved particle swarm optimization algorithm based on bubble sort is used to solve the problem. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and emerging evidence has indicated that this new approach is amenable to parallel and analog very large scale integration (VLSI) implementation, allowing the VM migration detection to be performed better. PMID:24959631
Human mobility in an emerging epidemic: a key aspect for response planning
NASA Astrophysics Data System (ADS)
Poletto, Chiara; Bajardi, Paolo; Colizza, Vittoria; Ramasco, Jose J.; Tizzoni, Michele; Vespignani, Alessandro
2010-03-01
Human mobility and interactions represent key ingredients in the spreading dynamics of an infectious disease. The flows of traveling people form a network characterized by complex features, such as strong topological and traffic heterogeneities, that unfolds at different temporal and spatial scales, from short ranges to the global scale. Computational models can be developed that integrate detailed network structures based on demographic and mobility data, in order to simulate the spatial evolution of an epidemic. Focusing on the recent A(H1N1) influenza pandemic as a paradigmatic example, these approaches allow the assessment of the interplay between individual mobility and epidemic dynamics, quantifying the effects of travel restrictions in delaying the epidemic spread and the role of mobility as an additional source of information for the understanding of the early outbreak.
Scattering Properties of Heterogeneous Mineral Particles with Absorbing Inclusions
NASA Technical Reports Server (NTRS)
Dlugach, Janna M.; Mishchenko, Michael I.
2015-01-01
We analyze the results of numerically exact computer modeling of the scattering and absorption properties of randomly oriented polydisperse heterogeneous particles obtained by placing microscopic absorbing grains randomly on the surfaces of much larger spherical mineral hosts or by embedding them randomly inside the hosts. These computations are paralleled by those for heterogeneous particles obtained by fully encapsulating fractal-like absorbing clusters in the mineral hosts. All computations are performed using the superposition T-matrix method. In the case of randomly distributed inclusions, the results are compared with the outcome of Lorenz-Mie computations for an external mixture of the mineral hosts and absorbing grains. We conclude that internal aggregation can strongly affect both the integral radiometric and the differential scattering characteristics of the heterogeneous particle mixtures.
ProphTools: general prioritization tools for heterogeneous biological networks.
Navarro, Carmen; Martínez, Victor; Blanco, Armando; Cano, Carlos
2017-12-01
Networks have been proven effective representations for the analysis of biological data. As such, there exist multiple methods to extract knowledge from biological networks. However, these approaches usually limit their scope to a single biological entity type of interest or they lack the flexibility to analyze user-defined data. We developed ProphTools, a flexible open-source command-line tool that performs prioritization on a heterogeneous network. ProphTools prioritization combines a Flow Propagation algorithm similar to a Random Walk with Restarts and a weighted propagation method. A flexible model for the representation of a heterogeneous network allows the user to define a prioritization problem involving an arbitrary number of entity types and their interconnections. Furthermore, ProphTools provides functionality to perform cross-validation tests, allowing users to select the best network configuration for a given problem. ProphTools core prioritization methodology has already been proven effective in gene-disease prioritization and drug repositioning. Here we make ProphTools available to the scientific community as flexible, open-source software and perform a new proof-of-concept case study on long noncoding RNA (lncRNA)-to-disease prioritization. ProphTools is robust prioritization software that provides the flexibility not present in other state-of-the-art network analysis approaches, enabling researchers to perform prioritization tasks on any user-defined heterogeneous network. Furthermore, the application to lncRNA-disease prioritization shows that ProphTools can reach the performance levels of ad hoc prioritization tools without losing its generality. © The Authors 2017. Published by Oxford University Press.
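The flow-propagation idea can be sketched as a plain random walk with restarts. The following Python is a generic illustration (the function name and the simple row-stochastic matrix representation are ours), not ProphTools' actual implementation, which additionally weights transitions between the different entity types of the heterogeneous network:

```python
def random_walk_with_restarts(W, seeds, restart=0.3, tol=1e-8, max_iter=1000):
    """Score nodes of a network by a random walk with restarts.

    W: row-stochastic transition matrix as a list of lists
       (W[i][j] = probability of stepping from node i to node j).
    seeds: restart distribution concentrating on the query nodes.
    Iterates p <- restart*seeds + (1-restart)*W^T p to a fixed point.
    """
    n = len(W)
    p = seeds[:]
    for _ in range(max_iter):
        new = [restart * seeds[i]
               + (1 - restart) * sum(W[j][i] * p[j] for j in range(n))
               for i in range(n)]
        if max(abs(a - b) for a, b in zip(new, p)) < tol:
            return new
        p = new
    return p

# 3-node chain 0-1-2 with the seed at node 0:
# scores decay with distance from the seed
W = [[0.0, 1.0, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 1.0, 0.0]]
scores = random_walk_with_restarts(W, [1.0, 0.0, 0.0])
```

The stationary scores form a probability distribution over nodes, which is what makes them usable as a prioritization ranking.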
NASA Technical Reports Server (NTRS)
DeCristofaro, Michael A.; Lansdowne, Chatwin A.; Schlesinger, Adam M.
2014-01-01
NASA has identified standardized wireless mesh networking as a key technology for future human and robotic space exploration. Wireless mesh networks enable rapid deployment and provide coverage in undeveloped regions. Mesh networks are also self-healing, resilient, and extensible, qualities not found in traditional infrastructure-based networks. Mesh networks can offer lower size, weight, and power (SWaP) than deploying separate infrastructure per application. To better understand the maturity, characteristics and capability of the technology, we developed an 802.11 mesh network consisting of a combination of heterogeneous commercial off-the-shelf devices and open-source firmware and software packages. Various streaming applications were operated over the mesh network, including voice and video, and performance measurements were made under different operating scenarios. During the testing, several issues with the currently implemented mesh network technology were identified and outlined for future work.
NASA Astrophysics Data System (ADS)
Darema, F.
2016-12-01
InfoSymbiotics/DDDAS embodies the power of Dynamic Data Driven Applications Systems (DDDAS), a concept whereby an executing application model is dynamically integrated, in a feed-back loop, with the real-time data-acquisition and control components, as well as other data sources of the application system. Advanced capabilities can be created through such new computational approaches in modeling and simulations, and in instrumentation methods, and include: enhancing the accuracy of the application model; speeding up the computation to allow faster and more comprehensive models of a system, and creating decision support systems with the accuracy of full-scale simulations; in addition, the notion of controlling instrumentation processes by the executing application results in more efficient management of application data and addresses challenges of how to architect and dynamically manage large sets of heterogeneous sensors and controllers, an advance over the static and ad-hoc ways of today; with DDDAS these sets of resources can be managed adaptively and in optimized ways. Large-Scale-Dynamic-Data encompasses the next wave of Big Data, namely dynamic data arising from ubiquitous sensing and control in engineered, natural, and societal systems, through multitudes of heterogeneous sensors and controllers instrumenting these systems, and where opportunities and challenges at these "large scales" relate not only to data size but also to the heterogeneity in data, data collection modalities, fidelities, and timescales, ranging from real-time data to archival data. In tandem with this important dimension of dynamic data, there is an extended view of Big Computing, which includes the collective computing by networked assemblies of multitudes of sensors and controllers, ranging from the high-end to the real-time, seamlessly integrated and unified, and comprising Large-Scale-Big-Computing.
InfoSymbiotics/DDDAS engenders transformative impact in many application domains, ranging from the nano-scale to the terra-scale and to the extra-terra-scale. The talk will address opportunities for new capabilities together with corresponding research challenges, with illustrative examples from several application areas including environmental sciences, geosciences, and space sciences.
Cheng, Feixiong; Liu, Chuang; Shen, Bairong; Zhao, Zhongming
2016-08-26
Cancer is increasingly recognized as a cellular system phenomenon that is attributed to the accumulation of genetic or epigenetic alterations leading to the perturbation of the molecular network architecture. Elucidation of network properties that can characterize tumor initiation and progression, or pinpoint the molecular targets related to the drug sensitivity or resistance, is therefore of critical importance for providing systems-level insights into tumorigenesis and clinical outcome in the molecularly targeted cancer therapy. In this study, we developed a network-based framework to quantitatively examine cellular network heterogeneity and modularity in cancer. Specifically, we constructed gene co-expressed protein interaction networks derived from large-scale RNA-Seq data across 8 cancer types generated in The Cancer Genome Atlas (TCGA) project. We performed gene network entropy and balanced versus unbalanced motif analysis to investigate cellular network heterogeneity and modularity in tumor versus normal tissues, different stages of progression, and drug resistant versus sensitive cancer cell lines. We found that tumorigenesis could be characterized by a significant increase of gene network entropy in all of the 8 cancer types. The ratio of the balanced motifs in normal tissues is higher than that of tumors, while the ratio of unbalanced motifs in tumors is higher than that of normal tissues in all of the 8 cancer types. Furthermore, we showed that network entropy could be used to characterize tumor progression and anticancer drug responses. For example, we found that kinase inhibitor resistant cancer cell lines had higher entropy compared to that of sensitive cell lines using the integrative analysis of microarray gene expression and drug pharmacological data collected from the Genomics of Drug Sensitivity in Cancer database. 
In addition, we provided potential network-level evidence that smoking might increase cancer cellular network heterogeneity and further contribute to tyrosine kinase inhibitor (e.g., gefitinib) resistance. In summary, we demonstrated that network properties such as network entropy and unbalanced motifs are associated with tumor initiation, progression, and anticancer drug responses, suggesting new potential network-based prognostic and predictive measures in cancer.
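The notion of network entropy used above can be illustrated with a toy version: the mean Shannon entropy of each node's normalized interaction weights. This Python sketch is a simplified stand-in of our own construction, not the study's co-expression-based entropy over protein interaction networks:

```python
import math

def network_entropy(weights):
    """Mean local Shannon entropy over the nodes of a weighted network.

    weights: dict mapping node -> {neighbor: weight}. For each node,
    outgoing weights are normalized into a distribution p_ij and its
    entropy S_i = -sum_j p_ij * ln(p_ij) is averaged over nodes.
    Higher values indicate more 'spread out' (heterogeneous) signaling.
    """
    entropies = []
    for node, nbrs in weights.items():
        total = sum(nbrs.values())
        if total == 0:
            continue  # isolated node contributes nothing
        s = -sum((w / total) * math.log(w / total)
                 for w in nbrs.values() if w > 0)
        entropies.append(s)
    return sum(entropies) / len(entropies)

# a node with uniform weights is maximally entropic (ln k),
# so the uniform toy network scores higher than the skewed one
uniform = {0: {1: 1.0, 2: 1.0}, 1: {0: 1.0}, 2: {0: 1.0}}
skewed  = {0: {1: 9.0, 2: 1.0}, 1: {0: 1.0}, 2: {0: 1.0}}
```

In this picture, the reported entropy increase in tumors corresponds to interaction weights becoming more uniform, i.e. less specifically organized, across the network.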
Stimulus-dependent spiking relationships with the EEG
Snyder, Adam C.
2015-01-01
The development and refinement of noninvasive techniques for imaging neural activity is of paramount importance for human neuroscience. Currently, the most accessible and popular technique is electroencephalography (EEG). However, nearly all of what we know about the neural events that underlie EEG signals is based on inference, because of the dearth of studies that have simultaneously paired EEG recordings with direct recordings of single neurons. From the perspective of electrophysiologists there is growing interest in understanding how spiking activity coordinates with large-scale cortical networks. Evidence from recordings at both scales highlights that sensory neurons operate in very distinct states during spontaneous and visually evoked activity, which appear to form extremes in a continuum of coordination in neural networks. We hypothesized that individual neurons have idiosyncratic relationships to large-scale network activity indexed by EEG signals, owing to the neurons' distinct computational roles within the local circuitry. We tested this by recording neuronal populations in visual area V4 of rhesus macaques while we simultaneously recorded EEG. We found substantial heterogeneity in the timing and strength of spike-EEG relationships and that these relationships became more diverse during visual stimulation compared with the spontaneous state. The visual stimulus apparently shifts V4 neurons from a state in which they are relatively uniformly embedded in large-scale network activity to a state in which their distinct roles within the local population are more prominent, suggesting that the specific way in which individual neurons relate to EEG signals may hold clues regarding their computational roles. PMID:26108954
A key heterogeneous structure of fractal networks based on inverse renormalization scheme
NASA Astrophysics Data System (ADS)
Bai, Yanan; Huang, Ning; Sun, Lina
2018-06-01
The self-similarity property of complex networks was found through the application of renormalization group theory. Based on this theory, network topologies can be classified into universality classes in the space of configurations. Conversely, through the inverse renormalization scheme, a given primitive structure can grow into a pure fractal network; adding different types of shortcuts then yields different characteristics of complex networks. However, the effect of the primitive structure on network structural properties has received less attention. In this paper, we introduce a degree variance index to measure the dispersion of node degrees in the primitive structure, and investigate the effect of the primitive structure on network structural properties as quantified by network efficiency. Numerical simulations and theoretical analysis show that the primitive structure is a key heterogeneous structure of networks generated by the inverse renormalization scheme, whether or not shortcuts are added, and that network efficiency is positively correlated with the degree variance of the primitive structure.
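A dispersion index over node degrees can be illustrated with a minimal sketch. The exact definition used by the authors is not given in the abstract, so the formulation below (plain variance of node degrees over an edge list) is an assumption:

```python
# Degree variance of a small primitive structure (star vs. ring),
# illustrating one plausible form of a dispersion index over node
# degrees. The exact index used in the paper may differ.

def degree_variance(edges):
    """Variance of node degrees for an undirected edge list."""
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    degs = list(degree.values())
    mean = sum(degs) / len(degs)
    return sum((d - mean) ** 2 for d in degs) / len(degs)

star = [(0, 1), (0, 2), (0, 3), (0, 4)]   # heterogeneous: hub + leaves
ring = [(0, 1), (1, 2), (2, 3), (3, 0)]   # homogeneous: all degree 2

print(degree_variance(star))  # ≈ 1.44
print(degree_variance(ring))  # 0.0
```

A star-shaped primitive structure scores high on this index while a ring scores zero, matching the intuition that the index separates heterogeneous from homogeneous primitive structures.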
Potentials and Limitations of Wireless Sensor Networks for Environmental
NASA Astrophysics Data System (ADS)
Bumberger, J.; Remmler, P.; Hutschenreuther, T.; Toepfer, H.; Dietrich, P.
2013-12-01
Understanding and dealing with environmental challenges worldwide requires suitable interdisciplinary methods and a level of expertise sufficient to implement these solutions, so that the lifestyles of future generations can be secured in the years to come. To characterize environmental systems it is necessary to identify and describe processes with suitable methods. Environmental systems are often characterized by their high heterogeneity, so individual measurements are often not sufficient for their complete representation. The application of wireless sensor networks in terrestrial and aquatic ecosystems offers significant benefits, as better consideration of the local test conditions becomes possible. This can be essential for the monitoring of heterogeneous environmental systems. A significant advantage in the application of wireless sensor networks is their self-organizing behaviour, resulting in a major reduction in installation and operation costs and time. In addition, a point measurement with a single sensor is significantly improved by measuring at several points. It is also possible to perform analog and digital signal processing and computation on the basis of the measured data close to the sensor. Hence, a significant reduction of the data to be transmitted can be achieved, which leads to better energy management of sensor nodes. Furthermore, localization via satellite, miniaturization of the nodes, and long-term energy self-sufficiency are current topics under investigation. In this presentation, the possibilities and limitations of the applicability of wireless sensor networks for long-term environmental monitoring are presented. To underline the importance of this future technology, example concepts are given in the fields of near-surface geothermics, groundwater observation, measurement of spatial radiation intensity and air humidity on soils, measurement of matter fluxes, greenhouse gas measurement, and landslide monitoring.
High performance computing and communications: Advancing the frontiers of information technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-12-31
This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kashkooli, Ali Ghorbani; Foreman, Evan; Farhad, Siamak
In this study, synchrotron X-ray computed tomography has been utilized using two different imaging modes, absorption and Zernike phase contrast, to reconstruct the real three-dimensional (3D) morphology of nanostructured Li4Ti5O12 (LTO) electrodes. The morphology of the high atomic number active material has been obtained using the absorption contrast mode, whereas the percolated solid network composed of active material and carbon-doped polymer binder domain (CBD) has been obtained using the Zernike phase contrast mode. The 3D absorption contrast image revealed that some LTO nano-particles tend to agglomerate and form secondary micro-sized particles with varying degrees of sphericity. The tortuosities of the electrode's pore and solid phases were found to have directional dependence, different from the Bruggeman tortuosity commonly used in macro-homogeneous models. The electrode's heterogeneous structure was investigated by developing a numerical model to simulate the galvanostatic discharge process using the Zernike phase contrast mode. The inclusion of CBD in the Zernike phase contrast results in an integrated percolated network of active material and CBD that is highly suited for continuum modeling. As a result, the simulation results highlight the importance of using the real 3D geometry, since the spatial distributions of physical and electrochemical properties are strongly non-uniform due to microstructural heterogeneities.
Integrative Functional Genomics for Systems Genetics in GeneWeaver.org.
Bubier, Jason A; Langston, Michael A; Baker, Erich J; Chesler, Elissa J
2017-01-01
The abundance of existing functional genomics studies permits an integrative approach to interpreting and resolving the results of diverse systems genetics studies. However, a major challenge lies in assembling and harmonizing heterogeneous data sets across species for facile comparison to the positional candidate genes and coexpression networks that come from systems genetic studies. GeneWeaver is an online database and suite of tools at www.geneweaver.org that allows for fast aggregation and analysis of gene set-centric data. GeneWeaver contains curated experimental data together with resource-level data such as GO annotations, MP annotations, and KEGG pathways, along with persistent stores of user entered data sets. These can be entered directly into GeneWeaver or transferred from widely used resources such as GeneNetwork.org. Data are analyzed using statistical tools and advanced graph algorithms to discover new relations, prioritize candidate genes, and generate function hypotheses. Here we use GeneWeaver to find genes common to multiple gene sets, prioritize candidate genes from a quantitative trait locus, and characterize a set of differentially expressed genes. Coupling a large multispecies repository of curated and empirical functional genomics data to fast computational tools allows for the rapid integrative analysis of heterogeneous data for interpreting and extrapolating systems genetics results.
Cooperation among cancer cells as public goods games on Voronoi networks.
Archetti, Marco
2016-05-07
Cancer cells produce growth factors that diffuse and sustain tumour proliferation, a form of cooperation that can be studied using mathematical models of public goods in the framework of evolutionary game theory. Cell populations, however, form heterogeneous networks that cannot be described by regular lattices or scale-free networks, the types of graphs generally used in the study of cooperation. To describe the dynamics of growth factor production in populations of cancer cells, I study public goods games on Voronoi networks, using a range of non-linear benefits that account for the known properties of growth factors, and different types of diffusion gradients. The results are surprisingly similar to those obtained on regular graphs and different from results on scale-free networks, revealing that network heterogeneity per se does not promote cooperation when public goods diffuse beyond one-step neighbours. The exact shape of the diffusion gradient is not crucial, however, whereas the type of non-linear benefit is an essential determinant of the dynamics. Public goods games on Voronoi networks can shed light on intra-tumour heterogeneity, the evolution of resistance to therapies that target growth factors, and new types of cell therapy. Copyright © 2016 Elsevier Ltd. All rights reserved.
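The non-linear benefit at the heart of such public goods games can be sketched in a few lines. The payoff form below (a sigmoid benefit of the producer fraction in a one-step neighbourhood, minus a production cost) is an illustrative assumption, not Archetti's exact model, and an arbitrary small graph stands in for a Voronoi network:

```python
# One payoff evaluation of a public goods game with a non-linear
# (sigmoid) benefit on a graph. Parameters and payoff form are
# illustrative assumptions, not the paper's exact model.
import math

def sigmoid_benefit(fraction, steepness=10.0, threshold=0.5):
    """Non-linear benefit of the diffusible growth factor as a
    function of the fraction of producers in the group."""
    return 1.0 / (1.0 + math.exp(-steepness * (fraction - threshold)))

def payoffs(adj, producer, cost=0.2):
    """Payoff of each cell: shared benefit minus production cost."""
    out = {}
    for node, neigh in adj.items():
        group = [node] + list(neigh)
        frac = sum(producer[g] for g in group) / len(group)
        out[node] = sigmoid_benefit(frac) - cost * producer[node]
    return out

# Tiny triangle graph: every node neighbours the other two.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
producer = {0: 1, 1: 1, 2: 0}   # two producers, one free-rider
p = payoffs(adj, producer)
# The free-rider (node 2) enjoys the benefit without paying the cost:
print(p[2] > p[0])  # True
```

Iterating such payoff evaluations with an imitation rule is the usual route to the cooperation dynamics the abstract studies.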
A CFD Heterogeneous Parallel Solver Based on Collaborating CPU and GPU
NASA Astrophysics Data System (ADS)
Lai, Jianqi; Tian, Zhengyu; Li, Hua; Pan, Sha
2018-03-01
Since the Graphics Processing Unit (GPU) has a strong ability for floating-point computation and high memory bandwidth for data parallelism, it has been widely used in areas of general computing such as molecular dynamics (MD), computational fluid dynamics (CFD) and so on. The emergence of the compute unified device architecture (CUDA), which reduces the complexity of programming, brings great opportunities to CFD. There are three different modes for parallel solution of the NS equations: a parallel solver based on the CPU, a parallel solver based on the GPU, and a heterogeneous parallel solver based on collaborating CPU and GPU. GPUs are relatively rich in compute capacity but poor in memory capacity, while CPUs are the opposite. To make full use of both, a CFD heterogeneous parallel solver based on collaborating CPU and GPU has been established. Three cases are presented to analyse the solver's computational accuracy and heterogeneous parallel efficiency. The numerical results agree well with experimental results, demonstrating that the heterogeneous parallel solver has high computational precision. The speedup on a single GPU is more than 40 for laminar flow; it decreases for turbulent flow but can still reach more than 20. Moreover, the speedup increases as the grid size becomes larger.
NASA Technical Reports Server (NTRS)
Lawrence, Charles; Putt, Charles W.
1997-01-01
The Visual Computing Environment (VCE) is a NASA Lewis Research Center project to develop a framework for intercomponent and multidisciplinary computational simulations. Many current engineering analysis codes simulate various aspects of aircraft engine operation. For example, existing computational fluid dynamics (CFD) codes can model the airflow through individual engine components such as the inlet, compressor, combustor, turbine, or nozzle. Currently, these codes are run in isolation, making intercomponent and complete system simulations very difficult to perform. In addition, management and utilization of these engineering codes for coupled component simulations is a complex, laborious task, requiring substantial experience and effort. To facilitate multicomponent aircraft engine analysis, the CFD Research Corporation (CFDRC) is developing the VCE system. This system, which is part of NASA's Numerical Propulsion Simulation System (NPSS) program, can couple various engineering disciplines, such as CFD, structural analysis, and thermal analysis. The objectives of VCE are to (1) develop a visual computing environment for controlling the execution of individual simulation codes that are running in parallel and are distributed on heterogeneous host machines in a networked environment, (2) develop numerical coupling algorithms for interchanging boundary conditions between codes with arbitrary grid matching and different levels of dimensionality, (3) provide a graphical interface for simulation setup and control, and (4) provide tools for online visualization and plotting. VCE was designed to provide a distributed, object-oriented environment. Mechanisms are provided for creating and manipulating objects, such as grids, boundary conditions, and solution data. This environment includes parallel virtual machine (PVM) for distributed processing. 
Users can interactively select and couple any set of codes that have been modified to run in a parallel distributed fashion on a cluster of heterogeneous workstations. A scripting facility allows users to dictate the sequence of events that make up the particular simulation.
Elliptic Curve Cryptography with Security System in Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Huang, Xu; Sharma, Dharmendra
2010-10-01
The rapid progress of wireless communications and embedded micro-electro-mechanical system technologies has made wireless sensor networks (WSNs) very popular; they have even become part of our daily life. WSN design is generally application driven: a particular application's requirements determine how the network behaves. The nature of WSNs has attracted increasing attention in recent years due to their linear scalability, small software footprint, low hardware implementation cost, low bandwidth requirements, and high device performance. It is noted that today's software applications are mainly characterized by their component-based structures, which are usually heterogeneous and distributed; this includes WSNs. But WSNs typically need to configure themselves automatically and support ad hoc routing. Agent technology provides a method for handling increasing software complexity and supporting rapid and accurate decision making. Based on our previous works [1, 2], this paper makes three contributions: (a) a fuzzy controller for a dynamic sliding window size to improve the performance of running ECC; (b) the first presentation of a hidden generation point for protection from man-in-the-middle attacks; and (c) the first investigation of multi-agent techniques applied to key exchange. Security systems have been drawing great attention as cryptographic algorithms have gained popularity due to the properties that make them suitable for use in constrained environments such as mobile sensor information applications, where computing resources and power availability are limited. Elliptic curve cryptography (ECC) is one of the most promising candidates for WSNs, as it requires less computational power, communication bandwidth, and memory than other cryptosystems.
To save pre-computation storage, there is a recent trend in sensor networks for group leaders, rather than individual sensors, to communicate with the end database, which highlights the need to prevent man-in-the-middle attacks. A hidden generator point that offers good protection from the man-in-the-middle (MinM) attack, which has become one of the major worries for sensor networks with multi-agent systems, is also discussed.
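The core ECC primitive that sliding-window techniques accelerate is scalar point multiplication. A textbook double-and-add sketch on a toy curve is shown below; the curve parameters are far too small for real security and the paper's optimized window variant is not reproduced:

```python
# Textbook double-and-add scalar multiplication on the toy curve
# y^2 = x^3 + 2x + 2 over F_17, with generator G = (5, 1) of order 19.
# Illustrates the operation that sliding-window ECC optimizes;
# parameters are purely didactic.

P_MOD, A = 17, 2          # field modulus and curve coefficient a
O = None                  # point at infinity (group identity)

def point_add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return O                               # P + (-P) = O
    if P == Q:                                 # tangent slope
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                                      # chord slope
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, P):
    """Left-to-right double-and-add: returns k * P."""
    R = O
    for bit in bin(k)[2:]:
        R = point_add(R, R)          # double for every bit
        if bit == "1":
            R = point_add(R, P)      # add when the bit is set
    return R

G = (5, 1)
print(scalar_mult(2, G))   # (6, 3)
print(scalar_mult(19, G))  # None: 19 is the order of G
```

A sliding-window method processes several scalar bits per iteration against a table of precomputed multiples, which is exactly the pre-computation storage the group-leader trend above tries to limit.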
Modeling flow and transport in fracture networks using graphs
NASA Astrophysics Data System (ADS)
Karra, S.; O'Malley, D.; Hyman, J. D.; Viswanathan, H. S.; Srinivasan, G.
2018-03-01
Fractures form the main pathways for flow in the subsurface within low-permeability rock. For this reason, accurately predicting flow and transport in fractured systems is vital for improving the performance of subsurface applications. Fracture sizes in these systems can range from millimeters to kilometers. Although modeling flow and transport using the discrete fracture network (DFN) approach is known to be more accurate than continuum-based methods, due to its incorporation of the detailed fracture network structure, capturing the flow and transport in such a wide range of scales is still computationally intractable. Furthermore, if one has to quantify uncertainty, hundreds of realizations of these DFN models have to be run. To reduce the computational burden, we solve flow and transport on a graph representation of a DFN. We study the accuracy of the graph approach by comparing breakthrough times and tracer particle statistical data between the graph-based and the high-fidelity DFN approaches, for fracture networks with varying numbers of fractures and degrees of heterogeneity. Due to our recent developments in capabilities to perform DFN high-fidelity simulations on fracture networks with a large number of fractures, we are in a unique position to perform such a comparison. We show that the graph approach shows a consistent bias with up to an order of magnitude slower breakthrough when compared to the DFN approach. We show that this is due to the graph algorithm's underprediction of the pressure gradients across intersections on a given fracture, leading to slower tracer particle speeds between intersections and longer travel times. We present a bias correction methodology for the graph algorithm that reduces the discrepancy between the DFN and graph predictions. We show that with this bias correction, the graph algorithm predictions significantly improve and the results are very accurate.
The good accuracy and the low computational cost, with run times O(10^4) times lower than the DFN, make the graph algorithm an ideal technique to incorporate into uncertainty quantification methods.
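The idea of replacing a DFN with a graph can be sketched in miniature: fracture intersections become nodes, edges carry a travel time, and first breakthrough is bounded by the fastest inlet-to-outlet path. The actual method solves flow on the graph before transport; the shortest-path stand-in below, with made-up edge weights, is only an assumption to show the data structure:

```python
# Minimal stand-in for graph-based DFN transport: intersections are
# nodes, edges are weighted by an assumed travel time, and the fastest
# inlet-to-outlet path gives a first-breakthrough estimate. The paper's
# algorithm solves flow on the graph first; this is only a sketch.
import heapq

def breakthrough_time(edges, inlet, outlet):
    """Dijkstra shortest travel time on an undirected weighted graph."""
    adj = {}
    for u, v, t in edges:
        adj.setdefault(u, []).append((v, t))
        adj.setdefault(v, []).append((u, t))
    best, heap = {inlet: 0.0}, [(0.0, inlet)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == outlet:
            return d
        if d > best.get(u, float("inf")):
            continue                      # stale queue entry
        for v, t in adj.get(u, []):
            nd = d + t
            if nd < best.get(v, float("inf")):
                best[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")                   # outlet unreachable

# Two fracture paths from inlet to outlet; the faster one controls
# first breakthrough (all weights are hypothetical).
edges = [("in", "a", 1.0), ("a", "out", 4.0),
         ("in", "b", 2.0), ("b", "out", 1.5)]
print(breakthrough_time(edges, "in", "out"))  # 3.5
```

Because each graph evaluation is this cheap relative to a full DFN solve, thousands of realizations become affordable for uncertainty quantification.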
NASA Astrophysics Data System (ADS)
Frey, Davide; Guerraoui, Rachid; Kermarrec, Anne-Marie; Koldehofe, Boris; Mogensen, Martin; Monod, Maxime; Quéma, Vivien
Gossip-based information dissemination protocols are considered easy to deploy, scalable and resilient to network dynamics. Load-balancing is inherent in these protocols as the dissemination work is evenly spread among all nodes. Yet, large-scale distributed systems are usually heterogeneous with respect to network capabilities such as bandwidth. In practice, a blind load-balancing strategy might significantly hamper the performance of the gossip dissemination.
Optimized ECC Implementation for Secure Communication between Heterogeneous IoT Devices.
Marin, Leandro; Pawlowski, Marcin Piotr; Jara, Antonio
2015-08-28
The Internet of Things is integrating information systems, places, users and billions of constrained devices into one global network. This network requires secure and private means of communication. The building blocks of the Internet of Things are devices manufactured by various producers and designed to fulfil different needs. There is no common hardware platform that can be applied in every scenario. In such a heterogeneous environment, there is a strong need for the optimization of interoperable security. We present optimized elliptic curve cryptography algorithms that address the security issues in heterogeneous IoT networks. We have combined cryptographic algorithms for the NXP/Jennic 5148- and MSP430-based IoT devices and used them to create a novel key negotiation protocol.
NASA Astrophysics Data System (ADS)
Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Hong, Yang; Zuo, Depeng; Ren, Minglei; Lei, Tianjie; Liang, Ke
2018-01-01
Hydrological model calibration has been a hot issue for decades. The shuffled complex evolution method developed at the University of Arizona (SCE-UA) has been proved to be an effective and robust optimization approach. However, its computational efficiency deteriorates significantly when the amount of hydrometeorological data increases. In recent years, the rise of heterogeneous parallel computing has brought hope for the acceleration of hydrological model calibration. This study proposed a parallel SCE-UA method and applied it to the calibration of a watershed rainfall-runoff model, the Xinanjiang model. The parallel method was implemented on heterogeneous computing systems using OpenMP and CUDA. Performance testing and sensitivity analysis were carried out to verify its correctness and efficiency. Comparison results indicated that heterogeneous parallel computing-accelerated SCE-UA converged much more quickly than the original serial version and possessed satisfactory accuracy and stability for the task of fast hydrological model calibration.
Kumar, Girijesh; Gupta, Rajeev
2013-10-07
The present work shows the utilization of Co(3+) complexes appended with either para- or meta-arylcarboxylic acid groups as the molecular building blocks for the construction of three-dimensional {Co(3+)-Zn(2+)} and {Co(3+)-Cd(2+)} heterobimetallic networks. The structural characterizations of these networks show several interesting features including well-defined pores and channels. These networks function as heterogeneous and reusable catalysts for the regio- and stereoselective ring-opening reactions of various epoxides and size-selective cyanation reactions of assorted aldehydes.
Bursts of Vertex Activation and Epidemics in Evolving Networks
Rocha, Luis E. C.; Blondel, Vincent D.
2013-01-01
The dynamic nature of contact patterns creates diverse temporal structures. In particular, empirical studies have shown that contact patterns follow heterogeneous inter-event time intervals, meaning that periods of high activity are followed by long periods of inactivity. To investigate the impact of these heterogeneities on the spread of infection from a theoretical perspective, we propose a stochastic model to generate temporal networks where vertices make instantaneous contacts following heterogeneous inter-event intervals, and may leave and enter the system. We study how these properties affect the prevalence of an infection and estimate R0, the number of secondary infections of an infectious individual in a completely susceptible population, by modeling simulated infections (SI and SIR) that co-evolve with the network structure. We find that heterogeneous contact patterns cause earlier and larger epidemics in the SIR model in comparison to homogeneous scenarios for a vast range of parameter values, while smaller epidemics may happen for some combinations of parameters. In the case of SI and heterogeneous patterns, the epidemics develop faster in the earlier stages, followed by a slowdown in the asymptotic limit. For increasing vertex turnover rates, heterogeneous patterns generally cause higher prevalence in comparison to homogeneous scenarios with the same average inter-event interval. We find that R0 is generally higher for heterogeneous patterns, except for sufficiently large infection duration and transmission probability. PMID:23555211
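A toy version of an SI process with bursty contact timing can be written in a few lines. The heavy-tailed (Pareto) inter-event generator and all parameters below are illustrative assumptions standing in for the authors' stochastic model:

```python
# Toy SI process on a sequence of instantaneous contacts whose timing
# follows heavy-tailed (Pareto) inter-event intervals, in the spirit
# of bursty temporal-network models. The generator and parameters are
# illustrative assumptions, not the paper's exact model.
import random

def pareto_interevent(rng, alpha=1.5, xmin=1.0):
    """Heavy-tailed inter-event interval: long quiet spells punctuate
    bursts of activity."""
    return xmin / rng.random() ** (1.0 / alpha)

def si_prevalence(n_nodes=50, n_events=2000, beta=0.5, seed=1):
    rng = random.Random(seed)
    t, infected = 0.0, {0}                    # node 0 is the seed case
    for _ in range(n_events):
        t += pareto_interevent(rng)           # bursty contact timing
        u, v = rng.sample(range(n_nodes), 2)  # instantaneous contact
        if (u in infected) != (v in infected) and rng.random() < beta:
            infected |= {u, v}                # SI: infection permanent
    return len(infected) / n_nodes

print(si_prevalence())
```

Swapping the Pareto generator for an exponential one with the same mean gives the homogeneous baseline the abstract compares against.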
Large-scale Heterogeneous Network Data Analysis
2012-07-31
Pervasive Sensing: Addressing the Heterogeneity Problem
NASA Astrophysics Data System (ADS)
O'Grady, Michael J.; Murdoch, Olga; Kroon, Barnard; Lillis, David; Carr, Dominic; Collier, Rem W.; O'Hare, Gregory M. P.
2013-06-01
Pervasive sensing is characterized by heterogeneity across a number of dimensions. This raises significant problems for those designing, implementing and deploying sensor networks, irrespective of application domain. Such problems include, for example, issues of data provenance and integrity, security, and privacy, among others. Thus engineering a network that is fit for purpose represents a significant challenge. In this paper, the issue of heterogeneity is explored from the perspective of those who seek to harness a pervasive sensing element in their applications. An initial solution is proposed based on a middleware construct.
Simple, efficient allocation of modelling runs on heterogeneous clusters with MPI
Donato, David I.
2017-01-01
In scientific modelling and computation, the choice of an appropriate method for allocating tasks for parallel processing depends on the computational setting and on the nature of the computation. The allocation of independent but similar computational tasks, such as modelling runs or Monte Carlo trials, among the nodes of a heterogeneous computational cluster is a special case that has not been specifically evaluated previously. A simulation study shows that a method of on-demand (that is, worker-initiated) pulling from a bag of tasks in this case leads to reliably short makespans for computational jobs despite heterogeneity both within and between cluster nodes. A simple reference implementation in the C programming language with the Message Passing Interface (MPI) is provided.
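The on-demand (worker-initiated) pull from a bag of tasks can be contrasted with static pre-allocation in a few lines of simulation. This sketch models only the scheduling logic, not the paper's MPI reference implementation; the worker speeds and task durations are invented for illustration.

```python
import heapq
import random

def makespan_static(durations, speeds):
    """Round-robin pre-allocation: task k is assigned to worker k mod W
    up front, regardless of how fast each worker actually is."""
    finish = [0.0] * len(speeds)
    for k, d in enumerate(durations):
        w = k % len(speeds)
        finish[w] += d / speeds[w]
    return max(finish)

def makespan_pull(durations, speeds):
    """On-demand pull: whichever worker becomes idle first takes the
    next task from the shared bag (min-heap of worker-free times)."""
    free = [(0.0, w) for w in range(len(speeds))]
    heapq.heapify(free)
    for d in durations:
        t, w = heapq.heappop(free)
        heapq.heappush(free, (t + d / speeds[w], w))
    return max(t for t, _ in free)

rng = random.Random(0)
durations = [rng.uniform(0.5, 1.5) for _ in range(200)]   # similar, independent tasks
speeds = [0.5, 1.0, 1.0, 2.0]                             # heterogeneous cluster nodes
```

With these values the pull strategy yields a reliably shorter makespan than the static split, mirroring the simulation finding summarized in the abstract.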
Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks.
Pena, Rodrigo F O; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C; Lindner, Benjamin
2018-01-01
Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. 
These approximations seem to be justified over a broad range of parameters as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks.
A Family of Algorithms for Computing Consensus about Node State from Network Data
Brush, Eleanor R.; Krakauer, David C.; Flack, Jessica C.
2013-01-01
Biological and social networks are composed of heterogeneous nodes that contribute differentially to network structure and function. A number of algorithms have been developed to measure this variation. These algorithms have proven useful for applications that require assigning scores to individual nodes, from ranking websites to determining critical species in ecosystems, yet the mechanistic basis for why they produce good rankings remains poorly understood. We show that a unifying property of these algorithms is that they quantify consensus in the network about a node's state or capacity to perform a function. The algorithms capture consensus by either taking into account the number of a target node's direct connections, and, when the edges are weighted, the uniformity of its weighted in-degree distribution (breadth), or by measuring net flow into a target node (depth). Using data from communication, social, and biological networks we find that how an algorithm measures consensus (through breadth or depth) impacts its ability to correctly score nodes. We also observe variation in sensitivity to source biases in interaction/adjacency matrices: errors arising from systematic error at the node level or direct manipulation of network connectivity by nodes. Our results indicate that the breadth algorithms, which are derived from information theory, correctly score nodes (assessed using independent data) and are robust to errors. However, in cases where nodes "form opinions" about other nodes using indirect information, like reputation, depth algorithms, like Eigenvector Centrality, are required. One caveat is that Eigenvector Centrality is not robust to error unless the network is transitive or assortative. In these cases the network structure allows the depth algorithms to effectively capture breadth as well as depth. Finally, we discuss the algorithms' cognitive and computational demands.
This is an important consideration in systems in which individuals use the collective opinions of others to make decisions. PMID:23874167
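As a concrete example of a "depth" algorithm, Eigenvector Centrality can be computed by power iteration: a node scores highly when the nodes connected to it score highly, which is exactly the use of indirect information the abstract describes. This is a minimal sketch, not the authors' code, and the toy graph is an arbitrary choice.

```python
def eigenvector_centrality(adj, iters=500):
    """Power iteration for the leading eigenvector of the adjacency
    matrix adj (list of lists). Each sweep replaces a node's score with
    the sum of its neighbours' scores, then renormalizes."""
    n = len(adj)
    x = [1.0 / n] * n
    for _ in range(iters):
        y = [sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
        s = sum(y) or 1.0
        x = [v / s for v in y]
    return x

# Toy undirected graph: triangle 0-1-2 plus a pendant node 3 attached to 0.
n = 4
A = [[0] * n for _ in range(n)]
for a, b in [(0, 1), (0, 2), (1, 2), (0, 3)]:
    A[a][b] = A[b][a] = 1
c = eigenvector_centrality(A)
```

Node 0, the best-connected node, receives the highest score; the pendant node 3 scores lowest because its only support comes indirectly through node 0.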
Interstitial fluid flow and drug delivery in vascularized tumors: a computational model.
Welter, Michael; Rieger, Heiko
2013-01-01
Interstitial fluid is a solution that bathes and surrounds the human cells and provides them with nutrients and a way of waste removal. It is generally believed that elevated tumor interstitial fluid pressure (IFP) is partly responsible for the poor penetration and distribution of therapeutic agents in solid tumors, but the complex interplay of extravasation, permeabilities, vascular heterogeneities and diffusive and convective drug transport remains poorly understood. Here we consider, with the help of a theoretical model, the tumor IFP, interstitial fluid flow (IFF) and its impact upon drug delivery within the tumor, depending on biophysical determinants such as vessel network morphology, permeabilities and diffusive vs. convective transport. We developed a vascular tumor growth model, including vessel co-option, regression, and angiogenesis, that we extend here by the interstitium (represented by a porous medium obeying Darcy's law) and sources (vessels) and sinks (lymphatics) for IFF. With it we compute the spatial variation of the IFP and IFF and determine its correlation with the vascular network morphology and physiological parameters like vessel wall permeability, tissue conductivity, distribution of lymphatics etc. We find that an increased vascular wall conductivity together with a reduction of lymph function leads to increased tumor IFP, but also that the latter does not necessarily imply a decreased extravasation rate: generally, the IF flow rate is positively correlated with the various conductivities in the system. The IFF field is then used to determine the drug distribution after an injection via a convection-diffusion-reaction equation for intra- and extracellular concentrations with parameters guided by experimental data for the drug Doxorubicin. We observe that the interplay of convective and diffusive drug transport can lead to quite unexpected effects in the presence of a heterogeneous, compartmentalized vasculature.
Finally we discuss various strategies to increase drug exposure time of tumor cells.
Heterogeneous delivering capability promotes traffic efficiency in complex networks
NASA Astrophysics Data System (ADS)
Zhu, Yan-Bo; Guan, Xiang-Min; Zhang, Xue-Jun
2015-12-01
Traffic is one of the most fundamental dynamical processes in networked systems. With the homogeneous delivery capability of nodes, the global dynamic routing strategy proposed by Ling et al. [Phys. Rev. E 81, 016113 (2010)] adequately uses the dynamic information during the delivery process and can thus reach quite a high network capacity. In this paper, based on the global dynamic routing strategy, we propose a heterogeneous delivery-capability allocation strategy for nodes on scale-free networks that takes node degree into account. It is found that the network capacity, as well as some other indexes reflecting transportation efficiency, is further improved. Our work may be useful for the design of more efficient routing strategies in communication or transportation systems.
You, Ilsun; Sharma, Vishal; Atiquzzaman, Mohammed; Choo, Kim-Kwang Raymond
GDTN: Genome-Based Delay Tolerant Network Formation in Heterogeneous 5G Using Inter-UA Collaboration
2016-01-01
With a more Internet-savvy and sophisticated user base, there are more demands for interactive applications and services. However, it is a challenge for existing radio access networks (e.g. 3G and 4G) to cope with increasingly demanding requirements such as higher data rates and wider coverage area. One potential solution is the inter-collaborative deployment of multiple radio devices in a 5G setting designed to meet exacting user demands, and facilitate the high data rate requirements in the underlying networks. These heterogeneous 5G networks can readily resolve the data rate and coverage challenges. Networks established using the hybridization of existing networks have diverse military and civilian applications. However, there are inherent limitations in such networks such as irregular breakdown, node failures, and halts during speed transmissions. In recent years, there have been attempts to integrate heterogeneous 5G networks with existing ad hoc networks to provide a robust solution for delay-tolerant transmissions in the form of packet switched networks. However, continuous connectivity is still required in these networks, in order to efficiently regulate the flow to allow the formation of a robust network. Therefore, in this paper, we present a novel network formation consisting of nodes from different networks maneuvered by Unmanned Aircraft (UA). The proposed model utilizes features of biological genomes and forms a delay-tolerant network with existing network models. This allows us to provide continuous and robust connectivity. We then demonstrate that the proposed network model achieves efficient data delivery, lower overheads and smaller delays with a high convergence rate in comparison to existing approaches, based on evaluations in both a real-time testbed and a simulation environment. PMID:27973618
Features and heterogeneities in growing network models
NASA Astrophysics Data System (ADS)
Ferretti, Luca; Cortelezzi, Michele; Yang, Bin; Marmorini, Giacomo; Bianconi, Ginestra
2012-06-01
Many complex networks, from the World Wide Web to biological networks, grow taking into account the heterogeneous features of their nodes. The feature of a node might be a discrete quantity, such as the classification of a URL document (personal page, thematic website, news, blog, search engine, social network, etc.) or the classification of a gene in a functional module. Moreover, the feature of a node can be a continuous variable, such as the position of the node in the embedding space. In order to account for these properties, in this paper we provide a generalization of growing network models with preferential attachment that includes the effect of heterogeneous features of the nodes. The main effect of heterogeneity is the emergence of an "effective fitness" for each class of nodes, determining the rate at which nodes acquire new links. The degree distribution exhibits a multiscaling behavior analogous to that of the fitness model. This property is robust with respect to variations in the model, as long as links are assigned through effective preferential attachment. Beyond the degree distribution, in this paper we give a full characterization of the other relevant properties of the model. We evaluate the clustering coefficient and show that it disappears for large network size, a property shared with the Barabási-Albert model. Negative degree correlations are also present in this class of models, along with nontrivial mixing patterns among features. We therefore conclude that both small clustering coefficients and disassortative mixing are outcomes of the preferential attachment mechanism in general growing networks.
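The "effective fitness" mechanism can be illustrated with a Bianconi-Barabási-style growth rule in which an arriving node attaches to an existing node j with probability proportional to η_j·k_j. This is a hedged sketch of the model class the abstract discusses, not the authors' exact model; the two-class fitness values and network size are arbitrary choices.

```python
import random

def grow_network(n=2000, fitnesses=(1.0, 3.0), seed=0):
    """Growing network: at each step one node arrives, carrying a feature
    (fitness) drawn uniformly from `fitnesses`, and attaches one link to
    an existing node j chosen with probability proportional to eta_j * k_j."""
    rng = random.Random(seed)
    eta = [rng.choice(fitnesses)]
    deg = [1]                         # seed node; degree bootstrapped to 1
    for _ in range(1, n):
        weights = [e * k for e, k in zip(eta, deg)]
        r = rng.random() * sum(weights)
        acc = 0.0
        for j, w in enumerate(weights):   # roulette-wheel selection
            acc += w
            if acc >= r:
                break
        deg[j] += 1
        eta.append(rng.choice(fitnesses))
        deg.append(1)
    return eta, deg

eta, deg = grow_network()

def mean_degree(f):
    """Average degree of the class of nodes with fitness f."""
    ks = [k for e, k in zip(eta, deg) if e == f]
    return sum(ks) / len(ks)
```

Comparing `mean_degree(3.0)` with `mean_degree(1.0)` shows the high-fitness class acquiring links at a faster rate, which is the effective-fitness effect described above.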
A model for cancer tissue heterogeneity.
Mohanty, Anwoy Kumar; Datta, Aniruddha; Venkatraj, Vijayanagaram
2014-03-01
An important problem in the study of cancer is the understanding of the heterogeneous nature of the cell population. The clonal evolution of the tumor cells results in the tumors being composed of multiple subpopulations. Each subpopulation reacts differently to any given therapy. This calls for the development of novel (regulatory network) models, which can accommodate heterogeneity in cancerous tissues. In this paper, we present a new approach to model heterogeneity in cancer. We model heterogeneity as an ensemble of deterministic Boolean networks based on prior pathway knowledge. We develop the model considering the use of qPCR data. By observing gene expressions when the tissue is subjected to various stimuli, the compositional breakup of the tissue under study can be determined. We demonstrate the viability of this approach by using our model on synthetic data, and real-world data collected from fibroblasts.
Heuristic Strategies for Persuader Selection in Contagions on Complex Networks.
Wang, Peng; Zhang, Li-Jie; Xu, Xin-Jian; Xiao, Gaoxi
2017-01-01
An individual's decision to accept a new idea or product is often driven by both self-adoption and others' persuasion, which has been simulated using a double threshold model [Huang et al., Scientific Reports 6, 23766 (2016)]. We extend the study to consider the case with limited persuasion. That is, a set of individuals is chosen from the population to be equipped with persuasion capabilities, who may succeed in persuading their friends to take the new entity when certain conditions are satisfied. Network node centrality is adopted to characterize each node's influence, based on which three heuristic strategies are applied to pick out persuaders. We compare these strategies for persuader selection on both homogeneous and heterogeneous networks. Two regimes of the underlying networks are identified in which the system exhibits distinct behaviors: when networks are sufficiently sparse, selecting persuader nodes in descending order of node centrality achieves the best performance; when networks are sufficiently dense, however, selecting nodes with medium centralities to serve as the persuaders performs the best. Under respective optimal strategies for different types of networks, we further probe which centrality measure is most suitable for persuader selection. It turns out that for the first regime, degree centrality offers the best measure for picking out persuaders from homogeneous networks; while in heterogeneous networks, betweenness centrality takes its place. In the second regime, there is no significant difference caused by centrality measures in persuader selection for homogeneous networks; while for heterogeneous networks, closeness centrality offers the best measure.
Opinion formation driven by PageRank node influence on directed networks
NASA Astrophysics Data System (ADS)
Eom, Young-Ho; Shepelyansky, Dima L.
2015-10-01
We study a two-state opinion-formation model driven by PageRank node influence and report an extensive numerical study of how PageRank affects collective opinion formation in large-scale empirical directed networks. In our model the opinion of a node can be updated by the sum of its neighbor nodes' opinions weighted by the node influence of those neighbors at each step. We consider PageRank probability and its sublinear power as node influence measures and investigate evolution of opinion under various conditions. First, we observe that all networks reach a steady-state opinion after a certain relaxation time. This time scale decreases with the heterogeneity of node influence in the networks. Second, we find that our model shows consensus and non-consensus behavior in steady state depending on types of networks: Web graph, citation network of physics articles, and LiveJournal social network show non-consensus behavior while Wikipedia article network shows consensus behavior. Third, we find that a more heterogeneous influence distribution leads to a more uniform opinion state in the cases of Web graph, Wikipedia, and LiveJournal. However, the opposite behavior is observed in the citation network. Finally, we identify that a small number of influential nodes can impose their own opinion on a significant fraction of other nodes in all considered networks. Our study shows that the effects of heterogeneity of node influence on opinion formation can be significant and suggests further investigations on the interplay between node influence and collective opinion in networks.
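The update rule, each node adopting the sign of the PageRank-weighted sum of its in-neighbours' opinions, can be sketched as below. This is a toy illustration rather than the authors' code; the small directed graph and the number of iterations are arbitrary assumptions.

```python
def pagerank(out_links, d=0.85, iters=100):
    """Power-iteration PageRank; out_links[i] lists the nodes node i
    points to. Dangling nodes spread their mass uniformly."""
    n = len(out_links)
    pr = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - d) / n] * n
        for i, outs in enumerate(out_links):
            if outs:
                share = d * pr[i] / len(outs)
                for j in outs:
                    new[j] += share
            else:
                for j in range(n):
                    new[j] += d * pr[i] / n
        pr = new
    return pr

def evolve_opinions(out_links, opinions, steps=20):
    """Synchronous updates: each node takes the sign of the PageRank-
    weighted sum of the opinions of nodes pointing at it; a node with
    no in-neighbours keeps its opinion."""
    pr = pagerank(out_links)
    n = len(out_links)
    in_nb = [[] for _ in range(n)]
    for i, outs in enumerate(out_links):
        for j in outs:
            in_nb[j].append(i)
    for _ in range(steps):
        opinions = [
            (1 if sum(pr[k] * opinions[k] for k in in_nb[i]) > 0 else -1)
            if in_nb[i] else opinions[i]
            for i in range(n)
        ]
    return opinions

out_links = [[1], [0], [0], [0]]   # nodes 1, 2, 3 all point at node 0
pr = pagerank(out_links)
final = evolve_opinions(out_links, [1, -1, 1, -1])
```

Here node 0 collects the most PageRank and its opinion weighs heaviest in its out-neighbour's update, a miniature version of the influential-node effect reported in the abstract.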
Dose calculations using artificial neural networks: A feasibility study for photon beams
NASA Astrophysics Data System (ADS)
Vasseur, Aurélien; Makovicka, Libor; Martin, Éric; Sauget, Marc; Contassot-Vivier, Sylvain; Bahi, Jacques
2008-04-01
Direct dose calculations are a crucial requirement for Treatment Planning Systems. Some methods, such as Monte Carlo, explicitly model particle transport; others depend upon tabulated data or analytic formulae. However, their computation time is too lengthy for clinical use, or their accuracy is insufficient, especially for recent techniques such as Intensity-Modulated Radiotherapy. A new solution based on artificial neural networks (ANNs), called NeuRad, is proposed, and this work extends the properties of such an algorithm. Prior to any calculations, a first phase known as the learning process is necessary: Monte Carlo dose distributions in homogeneous media are used to train the ANN. Depending on the training base, it can then be used as a dose engine for either heterogeneous media or an unknown material. In this report, two networks were created in order to compute dose distribution within a homogeneous phantom made of an unknown material and within an inhomogeneous phantom made of water and TA6V4 (titanium alloy corresponding to hip prosthesis). All NeuRad results were compared to Monte Carlo distributions. The latter required about 7 h on a dedicated cluster (10 nodes). NeuRad learning requires between 8 and 18 h (depending upon the size of the training base) on a single low-end computer. However, the results of dose computation with the ANN are available in less than 2 s, again using a low-end computer, for a 150×1×150-voxel phantom. In the case of homogeneous medium, the mean deviation in the high dose region was less than 1.7%. With a TA6V4 hip prosthesis bathed in water, the mean deviation in the high dose region was less than 4.1%. Further improvements in NeuRad will have to include full 3D calculations, inhomogeneity management and input definitions.
Synchronization properties of heterogeneous neuronal networks with mixed excitability type
NASA Astrophysics Data System (ADS)
Leone, Michael J.; Schurter, Brandon N.; Letson, Benjamin; Booth, Victoria; Zochowski, Michal; Fink, Christian G.
2015-03-01
We study the synchronization of neuronal networks with dynamical heterogeneity, showing that network structures with the same propensity for synchronization (as quantified by master stability function analysis) may develop dramatically different synchronization properties when heterogeneity is introduced with respect to neuronal excitability type. Specifically, we investigate networks composed of neurons with different types of phase response curves (PRCs), which characterize how oscillating neurons respond to excitatory perturbations. Neurons exhibiting type 1 PRC respond exclusively with phase advances, while neurons exhibiting type 2 PRC respond with either phase delays or phase advances, depending on when the perturbation occurs. We find that Watts-Strogatz small world networks transition to synchronization gradually as the proportion of type 2 neurons increases, whereas scale-free networks may transition gradually or rapidly, depending upon local correlations between node degree and excitability type. Random placement of type 2 neurons results in gradual transition to synchronization, whereas placement of type 2 neurons as hubs leads to a much more rapid transition, showing that type 2 hub cells easily "hijack" neuronal networks to synchronization. These results underscore the fact that the degree of synchronization observed in neuronal networks is determined by a complex interplay between network structure and the dynamical properties of individual neurons, indicating that efforts to recover structural connectivity from dynamical correlations must in general take both factors into account.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Chase Qishi; Zhu, Michelle Mengxia
The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments.
SWAMP will enable the automation and management of the entire process of scientific workflows with the convenience of a few mouse clicks while hiding the implementation and technical details from end users. Particularly, we will consider two types of applications with distinct performance requirements: data-centric and service-centric applications. For data-centric applications, the main workflow task involves large-volume data generation, catalog, storage, and movement typically from supercomputers or experimental facilities to a team of geographically distributed users; while for service-centric applications, the main focus of workflow is on data archiving, preprocessing, filtering, synthesis, visualization, and other application-specific analysis. We will conduct a comprehensive comparison of existing workflow systems and choose the best suited one with open-source code, a flexible system structure, and a large user base as the starting point for our development. Based on the chosen system, we will develop and integrate new components including a black box design of computing modules, performance monitoring and prediction, and workflow optimization and reconfiguration, which are missing from existing workflow systems. A modular design for separating specification, execution, and monitoring aspects will be adopted to establish a common generic infrastructure suited for a wide spectrum of science applications. We will further design and develop efficient workflow mapping and scheduling algorithms to optimize the workflow performance in terms of minimum end-to-end delay, maximum frame rate, and highest reliability. We will develop and demonstrate the SWAMP system in a local environment, the grid network, and the 100 Gbps Advanced Network Initiative (ANI) testbed. The demonstration will target scientific applications in climate modeling and high energy physics and the functions to be demonstrated include workflow deployment, execution, steering, and reconfiguration.
Throughout the project period, we will work closely with the science communities in the fields of climate modeling and high energy physics including Spallation Neutron Source (SNS) and Large Hadron Collider (LHC) projects to mature the system for production use.
Grid Computing and Collaboration Technology in Support of Fusion Energy Sciences
NASA Astrophysics Data System (ADS)
Schissel, D. P.
2004-11-01
The SciDAC Initiative is creating a computational grid designed to advance scientific understanding in fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling, and allowing more efficient use of experimental facilities. The philosophy is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as easy-to-use, network-available services. Access to services is stressed rather than portability. Services share the same basic security infrastructure so that stakeholders can control their own resources, which helps ensure fair use of those resources. The collaborative control room is being developed using the open-source Access Grid software that enables secure group-to-group collaboration with capabilities beyond teleconferencing, including application sharing and control. The ability to effectively integrate off-site scientists into a dynamic control room will be critical to the success of future international projects like ITER. Grid computing, the secure integration of computer systems over high-speed networks to provide on-demand access to data analysis capabilities and related functions, is being deployed as an alternative to traditional resource sharing among institutions. The first grid computational service deployed was the transport code TRANSP and included tools for run preparation, submission, monitoring and management. This approach saves user sites from the laborious effort of maintaining a complex code while at the same time reducing the burden on developers by avoiding the support of a large number of heterogeneous installations. This tutorial will present the philosophy behind an advanced collaborative environment, give specific examples, and discuss its usage beyond FES.
NASA Astrophysics Data System (ADS)
Gharedaghloo, Behrad; Price, Jonathan S.; Rezanezhad, Fereidoun; Quinton, William L.
2018-06-01
Micro-scale properties of peat pore space and their influence on hydraulic and transport properties of peat soils have been given little attention so far. Characterizing the variation of these properties in a peat profile can increase our knowledge of the processes controlling contaminant transport through peatlands. As opposed to the common macro-scale (or bulk) representation of groundwater flow and transport processes, a pore network model (PNM) simulates flow and transport processes within individual pores. Here, a pore network modeling code capable of simulating advective and diffusive transport processes through a 3D unstructured pore network was developed; its predictive performance was evaluated by comparing its results to empirical values and to the results of computational fluid dynamics (CFD) simulations. This is the first time that peat pore networks have been extracted from X-ray micro-computed tomography (μCT) images of peat deposits and peat pore characteristics evaluated in a 3D approach. Water flow and solute transport were modeled in the unstructured pore networks mapped directly from μCT images. The modeling results were processed to determine the bulk properties of peat deposits. Results portray the commonly observed decrease in hydraulic conductivity with depth, which was attributed to the reduction of pore radius and increase in pore tortuosity. The increase in pore tortuosity with depth was associated with more decomposed peat soil and decreasing pore coordination number with depth, which extended the flow path of fluid particles. Results also revealed that hydraulic conductivity is isotropic locally, but becomes anisotropic after upscaling to core-scale; this suggests that the anisotropy of peat hydraulic conductivity observed at core scale and field scale is due to the strong heterogeneity in the vertical dimension that is imposed by the layered structure of peat soils.
Transport simulations revealed that for a given solute, the effective diffusion coefficient decreases with depth due to the corresponding increase of diffusional tortuosity. Longitudinal dispersivity of peat also was computed by analyzing advective-dominant transport simulations that showed peat dispersivity is similar to the empirical values reported in the same peat soil; it is not sensitive to soil depth and does not vary much along the soil profile.
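A pore network model in the spirit described above, Hagen-Poiseuille conductances on throats with mass conservation at each pore, can be sketched with a simple Gauss-Seidel pressure solve. The chain geometry, pore radii, and fluid viscosity below are illustrative assumptions, not values from the paper.

```python
import math

def throat_conductance(r, length, mu=1e-3):
    """Hagen-Poiseuille conductance of a cylindrical throat. Note the
    r**4 dependence: the decrease in pore radius with depth reported in
    the abstract therefore sharply reduces hydraulic conductivity."""
    return math.pi * r**4 / (8 * mu * length)

def solve_pressures(neighbors, g, fixed, iters=5000):
    """Gauss-Seidel solution of sum_j g_ij * (p_j - p_i) = 0 at every
    interior pore; `fixed` maps boundary pores to imposed pressures."""
    p = {i: fixed.get(i, 0.0) for i in neighbors}
    for _ in range(iters):
        for i in neighbors:
            if i in fixed:
                continue
            num = sum(g[frozenset((i, j))] * p[j] for j in neighbors[i])
            den = sum(g[frozenset((i, j))] for j in neighbors[i])
            p[i] = num / den
    return p

# A 4-pore chain 0-1-2-3 with equal throats; inlet p = 1, outlet p = 0.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
g0 = throat_conductance(r=1e-5, length=1e-4)
g = {frozenset(e): g0 for e in [(0, 1), (1, 2), (2, 3)]}
p = solve_pressures(neighbors, g, fixed={0: 1.0, 3: 0.0})
flow = g0 * (p[0] - p[1])   # volumetric flow through the inlet throat
```

For three equal conductances in series the interior pressures converge to 2/3 and 1/3, and the equivalent network conductance is g0/3; bulk permeability estimates in a real PNM come from the same kind of flow calculation on networks extracted from the μCT images.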
Systemic risk on different interbank network topologies
NASA Astrophysics Data System (ADS)
Lenzu, Simone; Tedeschi, Gabriele
2012-09-01
In this paper we develop an interbank market with heterogeneous financial institutions that enter into lending agreements on different network structures. Credit relationships (links) evolve endogenously via a fitness mechanism based on agents' performance. By changing the agent's trust in its neighbor's performance, interbank linkages self-organize themselves into very different network architectures, ranging from random to scale-free topologies. We study which network architecture can make the financial system more resilient to random attacks and how systemic risk spreads over the network. To perturb the system, we generate a random attack via a liquidity shock. The hit bank is not automatically eliminated, but its failure is endogenously driven by its incapacity to raise liquidity in the interbank network. Our analysis shows that a random financial network can be more resilient than a scale-free one in the case of agents' heterogeneity.
Cascade-based attacks on complex networks
NASA Astrophysics Data System (ADS)
Motter, Adilson E.; Lai, Ying-Cheng
2002-12-01
We live in a modern world supported by large, complex networks. Examples range from financial markets to communication and transportation systems. In many realistic situations the flow of physical quantities in the network, as characterized by the loads on nodes, is important. We show that for such networks, where loads can redistribute among the nodes, intentional attacks can lead to a cascade of overload failures, which can in turn cause the entire network, or a substantial part of it, to collapse. This is relevant for real-world networks that possess a highly heterogeneous distribution of loads, such as the Internet and power grids. We demonstrate that the heterogeneity of these networks makes them particularly vulnerable to attacks, in that a large-scale cascade may be triggered by disabling a single key node. This raises obvious concerns about the security of such systems.
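The mechanism in the abstract above can be sketched with a toy load-redistribution cascade: each node gets a capacity proportional to its initial load via a tolerance parameter `alpha`, and a failed node's load is shared among its surviving neighbours, which may then overload in turn. This is a simplification of the published model, which recomputes betweenness-based loads after every failure; the toy version only illustrates why disabling one heavily loaded node can collapse a heterogeneous network:

```python
def cascade(adj, load, alpha, attacked):
    """Toy overload cascade: capacity = (1 + alpha) * initial load; a failed
    node's load is split among its surviving neighbours, which may then fail."""
    cap = {v: (1 + alpha) * load[v] for v in adj}
    load = dict(load)
    failed = {attacked}
    frontier = [attacked]
    while frontier:
        nxt = []
        for v in frontier:
            alive = [u for u in adj[v] if u not in failed]
            share = load[v] / len(alive) if alive else 0.0
            for u in alive:
                load[u] += share
                if load[u] > cap[u]:  # neighbour overloads and fails
                    failed.add(u)
                    nxt.append(u)
        frontier = nxt
    return failed

# Star network: the hub carries most of the load, as in heterogeneous networks.
star = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}
loads = {0: 5.0, 1: 1.0, 2: 1.0, 3: 1.0, 4: 1.0, 5: 1.0}
hub_attack = cascade(star, loads, 0.5, 0)    # disabling the key node
leaf_attack = cascade(star, loads, 0.5, 1)   # disabling a peripheral node
```

Attacking the hub brings down all six nodes, while attacking a leaf fails only that node, mirroring the single-key-node vulnerability the abstract describes.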
Weighting for sex acts to understand the spread of STI on networks.
Moslonka-Lefebvre, Mathieu; Bonhoeffer, Sebastian; Alizon, Samuel
2012-10-21
Human sexual networks exhibit a heterogeneous structure in which few individuals have many partners and many individuals have few partners. Network theory predicts that the spread of sexually transmitted infections (STI) on such networks should exhibit striking properties (e.g. rapid spread). However, these properties are not found in epidemiological data. Current network models typically assume a constant STI transmission risk per partnership, which is unrealistic because it implies that sexual activity is proportional to the number of partners and that individuals have the same activity with each partner. We develop a framework that allows us to weight any sexual network based on biological assumptions. Our results indicate that STIs spreading on the resulting weighted networks do not exhibit the properties associated with heterogeneity, which is consistent with data and earlier studies.
Wang, Jin-Hui; Zuo, Xi-Nian; Gohel, Suril; Milham, Michael P.; Biswal, Bharat B.; He, Yong
2011-01-01
Graph-based computational network analysis has proven a powerful tool to quantitatively characterize functional architectures of the brain. However, the test-retest (TRT) reliability of graph metrics of functional networks has not been systematically examined. Here, we investigated the TRT reliability of topological metrics of functional brain networks derived from resting-state functional magnetic resonance imaging data. Specifically, we evaluated both short-term (<1 hour apart) and long-term (>5 months apart) TRT reliability for 12 global and 6 local nodal network metrics. We found that the reliability of global network metrics was overall low, threshold-sensitive and dependent on several factors: scanning time interval (TI, long-term>short-term), network membership (NM, networks excluding negative correlations>networks including negative correlations) and network type (NT, binarized networks>weighted networks). This dependence was modulated by a fourth factor, the node definition (ND) strategy. Local nodal reliability exhibited large variability across nodal metrics and a spatially heterogeneous distribution. Nodal degree was the most reliable metric and varied the least across the factors above. Hub regions in association and limbic/paralimbic cortices showed moderate TRT reliability. Importantly, nodal reliability was robust to the four above-mentioned factors. Simulation analysis revealed that global network metrics were extremely sensitive (though to varying degrees) to noise in functional connectivity, and that weighted networks generated numerically more reliable results compared with binarized networks. Nodal network metrics, in contrast, showed high resistance to noise in functional connectivity, with no NT-related differences in that resistance. These findings have important implications for choosing reliable analytical schemes and network metrics of interest. PMID:21818285
Multilayer Optimization of Heterogeneous Networks Using Grammatical Genetic Programming.
Fenton, Michael; Lynch, David; Kucera, Stepan; Claussen, Holger; O'Neill, Michael
2017-09-01
Heterogeneous cellular networks are composed of macro cells (MCs) and small cells (SCs) in which all cells occupy the same bandwidth. Provision has been made under the 3rd Generation Partnership Project (3GPP) Long Term Evolution framework for enhanced intercell interference coordination (eICIC) between cell tiers. Expanding on previous works, this paper employs grammatical genetic programming to evolve control heuristics for heterogeneous networks. Three aspects of the eICIC framework are addressed, including setting SC powers and selection biases, MC duty cycles, and scheduling of user equipments (UEs) at SCs. The evolved heuristics yield minimum downlink rates three times higher than a baseline method, and twice that of a state-of-the-art benchmark. Furthermore, a greater number of UEs receive transmissions under the proposed scheme than in either the baseline or benchmark cases.
Systemic risk and heterogeneous leverage in banking networks
NASA Astrophysics Data System (ADS)
Kuzubaş, Tolga Umut; Saltoğlu, Burak; Sever, Can
2016-11-01
This study probes the systemic risk implications of leverage heterogeneity in banking networks. We show that the presence of heterogeneous leverages drastically changes the systemic effects of defaults and the nature of contagion in interbank markets. Using financial leverage data from the US banking system, through simulations, we analyze the systemic significance of different types of borrowers, the evolution of the network, the consequences of interbank market size and the impact of market segmentation. Our study is related to the recent Basel III regulations on systemic risk and the treatment of Global Systemically Important Banks (GSIBs). We also assess the extent to which the recent capital surcharges on GSIBs may curb financial fragility. We show the effectiveness of the surcharge policy on the most-levered banks vis-à-vis uniform capital injection.
Optimized ECC Implementation for Secure Communication between Heterogeneous IoT Devices
Marin, Leandro; Piotr Pawlowski, Marcin; Jara, Antonio
2015-01-01
The Internet of Things is integrating information systems, places, users and billions of constrained devices into one global network. This network requires secure and private means of communication. The building blocks of the Internet of Things are devices manufactured by various producers and designed to fulfil different needs; there is no common hardware platform that can be applied in every scenario. In such a heterogeneous environment, there is a strong need for the optimization of interoperable security. We present optimized Elliptic Curve Cryptography (ECC) algorithms that address the security issues in heterogeneous IoT networks. We have combined cryptographic algorithms for the NXP/Jennic 5148- and MSP430-based IoT devices and used them to create a novel key negotiation protocol. PMID:26343677
Link prediction based on nonequilibrium cooperation effect
NASA Astrophysics Data System (ADS)
Li, Lanxi; Zhu, Xuzhen; Tian, Hui
2018-04-01
Link prediction in complex networks has become a common focus of many researchers. However, most existing methods concentrate on common neighbors and rarely consider the degree heterogeneity of the two endpoints. Node degree represents the importance or status of an endpoint. We describe large-degree heterogeneity as a nonequilibrium between nodes. This nonequilibrium facilitates a stable cooperation between endpoints, so that two endpoints with large-degree heterogeneity tend to connect stably. We call this phenomenon the nonequilibrium cooperation effect. This paper therefore proposes a link prediction method based on the nonequilibrium cooperation effect to improve accuracy. We first present a theoretical analysis and then perform experiments on 12 real-world networks, numerically comparing mainstream methods with our indices.
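The abstract does not give the authors' indices explicitly, so the following is an illustrative sketch only: a standard common-neighbour score boosted by the normalized degree gap between the endpoints (the "nonequilibrium"), with `lam` a hypothetical tuning weight. It shows the kind of adjustment the abstract describes, not the paper's actual formula:

```python
def hetero_cn_score(adj, x, y, lam=1.0):
    """Hypothetical index: common-neighbour count boosted by the normalized
    degree gap |k_x - k_y| between the endpoints. Illustration only; not the
    paper's exact index."""
    kx, ky = len(adj[x]), len(adj[y])
    cn = len(set(adj[x]) & set(adj[y]))       # shared neighbours
    nonequilibrium = abs(kx - ky) / max(kx + ky, 1)
    return cn * (1 + lam * nonequilibrium)

# Small graph: node 0 is a hub, node 5 a low-degree node sharing neighbours with it.
g = {0: [1, 2, 3, 4], 1: [0, 5], 2: [0, 5], 3: [0], 4: [0], 5: [1, 2]}
hub_leaf = hetero_cn_score(g, 0, 5)  # heterogeneous endpoint pair
peers = hetero_cn_score(g, 3, 4)     # equal-degree pair, one common neighbour
```

The hub-leaf pair is rewarded both for its shared neighbours and for its degree gap, so it outranks the homogeneous pair more strongly than a plain common-neighbour count would.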
A moment-convergence method for stochastic analysis of biochemical reaction networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiajun; Nie, Qing; Zhou, Tianshou, E-mail: mcszhtsh@mail.sysu.edu.cn
Traditional moment-closure methods need to assume that high-order cumulants of a probability distribution are approximately zero. However, this strong assumption is not satisfied for many biochemical reaction networks. Here, we introduce convergent moments (defined in mathematics as the coefficients in the Taylor expansion of the probability-generating function at some point) to overcome this drawback of the moment-closure methods. As such, we develop a new analysis method for stochastic chemical kinetics. This method provides an accurate approximation for the master probability equation (MPE). In particular, the connection between low-order convergent moments and rate constants can be more easily derived in terms of explicit and analytical forms, allowing insights that would be difficult to obtain through direct simulation or manipulation of the MPE. In addition, it provides an accurate and efficient way to compute the steady-state or transient probability distribution, avoiding the algorithmic difficulty associated with stiffness of the MPE due to large differences in the sizes of rate constants. Applications of the method to several systems reveal nontrivial stochastic mechanisms of gene expression dynamics, e.g., intrinsic fluctuations can induce transient bimodality and amplify transient signals, and slow switching between promoter states can increase fluctuations in spatially heterogeneous signals. The overall approach has broad applications in modeling, analysis, and computation of complex biochemical networks with intrinsic noise.
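For intuition about "convergent moments" (a sketch, not the authors' method): for a birth-death process whose steady state is Poisson, the probability-generating function G(z) = exp(lam*(z-1)) is entire, so its Taylor coefficients at any point z0 decay factorially, and a truncated set of them recovers the full distribution to numerical precision:

```python
import math

def poisson_pgf_coeffs(lam, z0, nmax):
    """Taylor coefficients of the Poisson PGF G(z) = exp(lam*(z - 1)) at z0;
    these 'convergent moments' decay like lam**n / n! and stay bounded."""
    return [math.exp(lam * (z0 - 1)) * lam ** n / math.factorial(n)
            for n in range(nmax + 1)]

def probabilities_from_coeffs(coeffs, z0):
    """Recover P(k) by re-expanding G(z) = sum_n c_n (z - z0)**n around z = 0,
    using the binomial expansion of (z - z0)**n."""
    nmax = len(coeffs) - 1
    return [sum(coeffs[n] * math.comb(n, k) * (-z0) ** (n - k)
                for n in range(k, nmax + 1))
            for k in range(nmax + 1)]

# Steady state of production rate a = 1, degradation rate d = 1: Poisson(1).
probs = probabilities_from_coeffs(poisson_pgf_coeffs(1.0, 0.5, 30), 0.5)
```

The recovered probabilities match the Poisson mass function exp(-1)/k! without any closure assumption, which is the advantage the abstract claims over truncating cumulants.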
Random walks on activity-driven networks with attractiveness
NASA Astrophysics Data System (ADS)
Alessandretti, Laura; Sun, Kaiyuan; Baronchelli, Andrea; Perra, Nicola
2017-05-01
Virtually all real-world networks are dynamical entities. In social networks, the propensity of nodes to engage in social interactions (activity) and their chances to be selected by active nodes (attractiveness) are heterogeneously distributed. Here, we present a time-varying network model where each node and the dynamical formation of ties are characterized by these two features. We study how these properties affect random-walk processes unfolding on the network when the time scales describing the process and the network evolution are comparable. We derive analytical solutions for the stationary state and the mean first-passage time of the process, and we study cases informed by empirical observations of social networks. Our work shows that previously disregarded properties of real social systems, such as heterogeneous distributions of activity and attractiveness as well as the correlations between them, substantially affect the dynamical process unfolding on the network.
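A minimal sketch of the model class in the abstract above, with illustrative parameters: at each time step, each node activates with its own probability and draws m ties toward targets chosen proportionally to their attractiveness, and a random walker moves only along ties present in the current snapshot (so the process and network time scales are comparable):

```python
import random

def snapshot(n, activity, attractiveness, m, rng):
    """One time-step of an activity-driven network with attractiveness."""
    total = sum(attractiveness)
    weights = [b / total for b in attractiveness]
    edges = set()
    for i in range(n):
        if rng.random() < activity[i]:  # node i activates this step
            for j in rng.choices(range(n), weights=weights, k=m):
                if j != i:
                    edges.add((min(i, j), max(i, j)))
    return edges

def random_walk(n, activity, attractiveness, m, steps, rng, start=0):
    pos = start
    for _ in range(steps):
        edges = snapshot(n, activity, attractiveness, m, rng)
        nbrs = [u + v - pos for u, v in edges if pos in (u, v)]
        if nbrs:  # hop only if the walker has a tie in this snapshot
            pos = rng.choice(nbrs)
    return pos

rng = random.Random(42)
n = 20
act = [0.5] * n
attr = [1.0] * (n - 1) + [10.0]  # one highly attractive node
final = random_walk(n, act, attr, m=2, steps=50, rng=rng)
demo_edges = snapshot(n, act, attr, 2, rng)
```

Making `act` or `attr` heterogeneous (and correlated) is exactly the knob whose effect on stationary occupation and first-passage times the abstract analyzes.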
Emergence of cooperation in non-scale-free networks
NASA Astrophysics Data System (ADS)
Zhang, Yichao; Aziz-Alaoui, M. A.; Bertelle, Cyrille; Zhou, Shi; Wang, Wenting
2014-06-01
Evolutionary game theory is one of the key paradigms behind many scientific disciplines, from the natural sciences to engineering. Previous studies proposed a strategy updating mechanism which successfully demonstrated that scale-free networks can provide a framework for the emergence of cooperation, whereas individuals in random graphs and small-world networks do not favor cooperation under this updating rule. However, a recent empirical result shows that heterogeneous networks do not promote cooperation when humans play a prisoner's dilemma. In this paper, we propose a strategy updating rule with payoff memory. We observe that random graphs and small-world networks can provide even better frameworks for cooperation than scale-free networks in this scenario. Our observations suggest that degree heterogeneity may be neither a sufficient nor a necessary condition for widespread cooperation in complex networks. Moreover, topological structure alone does not suffice to determine the level of cooperation in complex networks.
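A deterministic toy version of payoff-memory updating can illustrate the bookkeeping involved (the paper's rule is stochastic; this sketch freezes strategies within the memory window and uses synchronous best-neighbour imitation, so it is an illustration only). In the weak prisoner's dilemma, cooperators score 1 against cooperators and defectors score b against cooperators; each node then imitates the highest-accumulated-payoff member of its neighbourhood:

```python
def pd_payoffs(strat, adj, b=1.5):
    """One round of the weak prisoner's dilemma on a graph (1 = cooperate)."""
    pay = {v: 0.0 for v in adj}
    for v in adj:
        for u in adj[v]:
            if strat[u] == 1:                # only cooperating partners pay out
                pay[v] += 1.0 if strat[v] == 1 else b
    return pay

def imitate_with_memory(strat, adj, memory, b=1.5):
    """Imitation update driven by payoff accumulated over `memory` rounds."""
    acc = {v: 0.0 for v in adj}
    for _ in range(memory):
        for v, p in pd_payoffs(strat, adj, b).items():
            acc[v] += p
    # each node copies the strategy of its best-scoring neighbour (or itself)
    return {v: strat[max(adj[v] + [v], key=lambda u: acc[u])] for v in adj}

# Ring of four: two adjacent cooperators, two adjacent defectors.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
s0 = {0: 1, 1: 1, 2: 0, 3: 0}
pay = pd_payoffs(s0, ring)
s1 = imitate_with_memory(s0, ring, memory=3)
```

With strategies frozen inside the window, the deterministic outcome here does not depend on `memory`; in the paper's stochastic rule the accumulated payoff changes which imitations are likely, which is where the memory effect enters.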
The big data-big model (BDBM) challenges in ecological research
NASA Astrophysics Data System (ADS)
Luo, Y.
2015-12-01
The field of ecology has become a big-data science in the past decades due to the development of new sensors used in numerous studies across the ecological community. Many sensor networks have been established to collect data. For example, satellites such as Terra and OCO-2, among others, have collected data relevant to the global carbon cycle. Thousands of field manipulative experiments have been conducted to examine the feedback of the terrestrial carbon cycle to global changes. Networks of observations, such as FLUXNET, have measured land processes. In particular, the implementation of the National Ecological Observatory Network (NEON), which is designed to network different kinds of sensors at many locations over the nation, will generate large volumes of ecological data every day. The raw data from the sensors of those networks offer an unprecedented opportunity for accelerating advances in our knowledge of ecological processes, educating teachers and students, supporting decision-making, testing ecological theory, and forecasting changes in ecosystem services. Currently, ecologists do not have the infrastructure in place to synthesize massive yet heterogeneous data into resources for decision support. It is urgent to develop an ecological forecasting system that can make the best use of multiple sources of data to assess long-term biosphere change and anticipate future states of ecosystem services at regional and continental scales. Forecasting relies on big models that describe the major processes underlying complex system dynamics. Ecological system models, despite great simplification of the real systems, are still complex, as they must address real-world problems. For example, the Community Land Model (CLM) incorporates thousands of processes related to energy balance, hydrology, and biogeochemistry. Integration of massive data from multiple big-data sources with complex models must tackle the Big Data-Big Model (BDBM) challenges.
Those challenges include interoperability of multiple, heterogeneous data sets; intractability of structural complexity of big models; equifinality of model structure selection and parameter estimation; and computational demand of global optimization with Big Models.
Non-systemic transmission of tick-borne diseases: A network approach
NASA Astrophysics Data System (ADS)
Ferreri, Luca; Bajardi, Paolo; Giacobini, Mario
2016-10-01
Tick-borne diseases can be transmitted via non-systemic (NS) transmission. This occurs when a tick acquires the infection by co-feeding with infected ticks on the same host, resulting in direct pathogen transmission between the vectors without infecting the host. This transmission route is peculiar, as it does not require any systemic infection of the host, yet it is the main route sustaining the persistence of the tick-borne encephalitis virus in nature. By describing the heterogeneous aggregation of ticks on hosts through a bipartite graph representation, we are able to mathematically define NS transmission and to derive the epidemiological conditions for pathogen persistence. Despite the fact that the underlying network is largely fragmented, analytical and computational results show that the larger the variability of the aggregation, the easier it is for the pathogen to persist in the population.
Taylor, Dane; Skardal, Per Sebastian; Sun, Jie
2016-01-01
Synchronization is central to many complex systems in engineering physics (e.g., the power-grid, Josephson junction circuits, and electro-chemical oscillators) and biology (e.g., neuronal, circadian, and cardiac rhythms). Despite these widespread applications—for which proper functionality depends sensitively on the extent of synchronization—there remains a lack of understanding for how systems can best evolve and adapt to enhance or inhibit synchronization. We study how network modifications affect the synchronization properties of network-coupled dynamical systems that have heterogeneous node dynamics (e.g., phase oscillators with non-identical frequencies), which is often the case for real-world systems. Our approach relies on a synchrony alignment function (SAF) that quantifies the interplay between heterogeneity of the network and of the oscillators and provides an objective measure for a system’s ability to synchronize. We conduct a spectral perturbation analysis of the SAF for structural network modifications including the addition and removal of edges, which subsequently ranks the edges according to their importance to synchronization. Based on this analysis, we develop gradient-descent algorithms to efficiently solve optimization problems that aim to maximize phase synchronization via network modifications. We support these and other results with numerical experiments. PMID:27872501
Patterns of recruitment and injury in a heterogeneous airway network model
Stewart, Peter S.; Jensen, Oliver E.
2015-01-01
In respiratory distress, lung airways become flooded with liquid and may collapse due to surface-tension forces acting on air–liquid interfaces, inhibiting gas exchange. This paper proposes a mathematical multiscale model for the mechanical ventilation of a network of occluded airways, where air is forced into the network at a fixed tidal volume, allowing investigation of optimal recruitment strategies. The temporal response is derived from mechanistic models of individual airway reopening, incorporating feedback on the airway pressure due to recruitment. The model accounts for stochastic variability in airway diameter and stiffness across and between generations. For weak heterogeneity, the network is completely ventilated via one or more avalanches of recruitment (with airways recruited in quick succession), each characterized by a transient decrease in the airway pressure; avalanches become more erratic for airways that are initially more flooded. However, the time taken for complete ventilation of the network increases significantly as the network becomes more heterogeneous, leading to increased stresses on airway walls. The model predicts that the most peripheral airways are most at risk of ventilation-induced damage. A positive end-expiratory pressure reduces the total recruitment time but at the cost of larger stresses exerted on airway walls. PMID:26423440
NASA Astrophysics Data System (ADS)
Wang, Yi; Cao, Jinde; Alsaedi, Ahmed; Hayat, Tasawar
2017-02-01
In this paper, we formulate a deterministic model that includes vacant sites, which represent inactive individuals or potential contacts, to investigate the spreading dynamics of sexually transmitted diseases in heterogeneous networks. We first analytically derive the basic reproduction number R0, which completely determines the global dynamics of the system in the long run. Specifically, if R0 < 1, the disease-free equilibrium is globally asymptotically stable, i.e. the disease disappears from the network irrespective of the initial infected numbers and distributions, whereas if R0 > 1, the system is uniformly persistent around a unique endemic equilibrium, i.e. the disease persists in the network. Furthermore, using a suitable Lyapunov function, the global stability of the endemic equilibrium is proved for the case of low-/high-risk infected individuals only. Finally, the effects of three immunization schemes are studied and compared, and extensive numerical simulations are performed to investigate the effect of network topology and population turnover on disease spread. Our results suggest that population turnover can have a great impact on the sexually transmitted disease system in heterogeneous networks, including the basic reproduction number and infection prevalence.
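For context, the textbook annealed-network approximation (not the paper's staged model with vacant sites) makes the role of heterogeneity in R0 concrete: degree-based mean-field theory for SIS-type spreading gives R0 = (beta/mu) * <k^2>/<k>, so broadening the degree distribution at fixed mean degree raises R0:

```python
def r0_annealed(degrees, beta, mu):
    """Annealed degree-based mean-field basic reproduction number for
    SIS-type spreading: R0 = (beta / mu) * <k^2> / <k>."""
    n = len(degrees)
    k1 = sum(degrees) / n
    k2 = sum(k * k for k in degrees) / n
    return (beta / mu) * k2 / k1

homog = r0_annealed([4] * 100, beta=0.1, mu=0.2)              # <k> = 4, <k^2> = 16
hetero = r0_annealed([1] * 80 + [16] * 20, beta=0.1, mu=0.2)  # same <k> = 4
```

Both populations have mean degree 4, but the heterogeneous one crosses the epidemic threshold (R0 > 1) while the homogeneous one sits closer to it, which is why network topology features so prominently in the abstract's simulations.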
Robust Rate Maximization for Heterogeneous Wireless Networks under Channel Uncertainties
Xu, Yongjun; Hu, Yuan; Li, Guoquan
2018-01-01
Heterogeneous wireless networks are a promising technology in next-generation wireless communication networks, which has been shown to efficiently reduce the blind area of mobile communication and improve network coverage compared with traditional wireless communication networks. In this paper, a robust power allocation problem for a two-tier heterogeneous wireless network is formulated based on orthogonal frequency-division multiplexing technology. Under the consideration of imperfect channel state information (CSI), the robust sum-rate maximization problem is built while avoiding severe cross-tier interference to the macrocell user and maintaining the minimum rate requirement of each femtocell user. To be practical, both the channel estimation errors from the femtocells to the macrocell and the link uncertainties of each femtocell user are simultaneously considered in terms of outage probabilities of users. The optimization problem is analyzed under no CSI feedback with some cumulative distribution function and under partial CSI with a Gaussian distribution of the channel estimation error. The robust optimization problem is converted into a convex optimization problem, which is solved by using Lagrange dual theory and a subgradient algorithm. Simulation results demonstrate the effectiveness of the proposed algorithm and the impact of channel uncertainties on system performance. PMID:29466315
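The paper's robust scheme is solved via Lagrange duality and subgradient steps under outage constraints; as a non-robust baseline sketch of the underlying power-allocation idea, classical water-filling maximizes the sum rate sum_i log(1 + g_i * p_i / noise) under a total power budget, with the water level found by bisection:

```python
def waterfill(gains, total_power, noise=1.0, iters=100):
    """Classical water-filling: p_i = max(0, mu - noise/g_i), with the water
    level mu chosen by bisection so that sum(p_i) = total_power.
    Baseline sketch only; the paper's robust scheme adds outage constraints."""
    def alloc(mu):
        return [max(0.0, mu - noise / g) for g in gains]
    lo, hi = 0.0, total_power + max(noise / g for g in gains)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if sum(alloc(mid)) > total_power:  # water level too high
            hi = mid
        else:
            lo = mid
    return alloc((lo + hi) / 2.0)

p = waterfill([2.0, 1.0], total_power=2.0)  # the better channel gets more power
```

The robust formulation in the abstract effectively replaces the hard rate terms here with outage-probability-constrained surrogates before applying the same dual machinery.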
Community-driven computational biology with Debian Linux
2010-01-01
Background: The Open Source movement and its technologies are popular in the bioinformatics community because they provide freely available tools and resources for research. In order to feed the steady demand for updates on software and associated data, a service infrastructure is required for sharing and providing these tools to heterogeneous computing environments. Results: The Debian Med initiative provides ready and coherent software packages for medical informatics and bioinformatics. These packages can be used together in Taverna workflows via the UseCase plugin to manage execution on local or remote machines. If such packages are available in cloud computing environments, the underlying hardware and the analysis pipelines can be shared along with the software. Conclusions: Debian Med closes the gap between developers and users. It provides a simple method for offering new releases of software and data resources, thus provisioning a local infrastructure for computational biology. For geographically distributed teams it can ensure they are working on the same versions of tools, in the same conditions. This contributes to the world-wide networking of researchers. PMID:21210984
The emergence of spatial cyberinfrastructure.
Wright, Dawn J; Wang, Shaowen
2011-04-05
Cyberinfrastructure integrates advanced computer, information, and communication technologies to empower computation-based and data-driven scientific practice and improve the synthesis and analysis of scientific data in a collaborative and shared fashion. As such, it now represents a paradigm shift in scientific research that has facilitated easy access to computational utilities and streamlined collaboration across distance and disciplines, thereby enabling scientific breakthroughs to be reached more quickly and efficiently. Spatial cyberinfrastructure seeks to resolve longstanding complex problems of handling and analyzing massive and heterogeneous spatial datasets as well as the necessity and benefits of sharing spatial data flexibly and securely. This article provides an overview and potential future directions of spatial cyberinfrastructure. The remaining four articles of the special feature are introduced and situated in the context of providing empirical examples of how spatial cyberinfrastructure is extending and enhancing scientific practice for improved synthesis and analysis of both physical and social science data. The primary focus of the articles is spatial analyses using distributed and high-performance computing, sensor networks, and other advanced information technology capabilities to transform massive spatial datasets into insights and knowledge.
Computational biomedicine: a challenge for the twenty-first century.
Coveney, Peter V; Shublaq, Nour W
2012-01-01
With the relentless increase of computer power and the widespread availability of digital patient-specific medical data, we are now entering an era when it is becoming possible to develop predictive models of human disease and pathology, which can be used to support and enhance clinical decision-making. The approach amounts to a grand challenge to computational science insofar as we need to provide seamless yet secure access to large-scale heterogeneous personal healthcare data in a facile way, typically integrated into complex workflows (some parts of which may need to be run on high-performance computers) and into clinical decision support software. In this paper, we review the state of the art in terms of case studies drawn from neurovascular pathologies and HIV/AIDS. These studies are representative of a large number of projects currently being performed within the Virtual Physiological Human initiative. They make demands of information technology at many scales, from the desktop to national and international infrastructures for data storage and processing, linked by high-performance networks.
Tagliaferri, Roberto; Longo, Giuseppe; Milano, Leopoldo; Acernese, Fausto; Barone, Fabrizio; Ciaramella, Angelo; De Rosa, Rosario; Donalek, Ciro; Eleuteri, Antonio; Raiconi, Giancarlo; Sessa, Salvatore; Staiano, Antonino; Volpicelli, Alfredo
2003-01-01
In the last decade, the use of neural networks (NN) and of other soft computing methods has begun to spread in the astronomical community as well, which, due to the required accuracy of the measurements, is usually reluctant to use automatic tools to perform even the most common tasks of data reduction and data mining. The federation of heterogeneous large astronomical databases foreseen in the framework of the astrophysical virtual observatory and national virtual observatory projects is, however, posing unprecedented data mining and visualization problems, which will find a rather natural and user-friendly answer in artificial intelligence tools based on NNs, fuzzy sets or genetic algorithms. This review is aimed at both astronomers (who often have little knowledge of the methodological background) and computer scientists (who often know little about potentially interesting applications), and is therefore structured as follows: after giving a short introduction to the subject, we summarize the methodological background and focus our attention on some of the most interesting fields of application, namely: object extraction and classification, time series analysis, noise identification, and data mining. Most of the original work described in the paper has been performed in the framework of the AstroNeural collaboration (Napoli-Salerno).
Minati, Ludovico; Zacà, Domenico; D'Incerti, Ludovico; Jovicich, Jorge
2014-09-01
An outstanding issue in graph-based analysis of resting-state functional MRI is the choice of network nodes. Individual consideration of entire brain voxels may represent a less biased approach than parcellating the cortex according to pre-determined atlases, but entails establishing connectedness for 10^9-10^11 links, often at prohibitive computational cost. Using a representative Human Connectome Project dataset, we show that, following appropriate time-series normalization, it may be possible to accelerate connectivity determination by replacing Pearson correlation with the l1-norm. Even though the adjacency matrices derived from correlation coefficients and l1-norms are not identical, their similarity is high. Further, we describe and provide in full an example vector hardware implementation of the l1-norm on an array of 4096 zero-instruction-set processors. Calculation times <1000 s are attainable, removing the major deterrent to voxel-based resting-state network mapping and revealing fine-grained node degree heterogeneity. The l1-norm should be given consideration as a substitute for correlation in very high-density resting-state functional connectivity analyses.
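The substitution described above can be sketched in a toy form (an illustration, not the authors' hardware implementation): after z-scoring each time series, Pearson correlation is a plain dot product, and the negative l1 distance between the normalized series is a cheaper similarity surrogate that ranks pairs in the same order for these simple cases:

```python
def zscore(x):
    """Normalize a time series to zero mean and unit (population) variance."""
    n = len(x)
    m = sum(x) / n
    sd = (sum((v - m) ** 2 for v in x) / n) ** 0.5
    return [(v - m) / sd for v in x]

def pearson(x, y):
    """Pearson r as a dot product of z-scored series."""
    return sum(a * b for a, b in zip(zscore(x), zscore(y))) / len(x)

def l1_similarity(x, y):
    """Negative l1 distance between z-scored series: cheap surrogate for r
    (only additions and absolute values, no multiplications)."""
    return -sum(abs(a - b) for a, b in zip(zscore(x), zscore(y)))

x = [1.0, 2.0, 3.0, 4.0]
same = [2 * v + 3 for v in x]  # perfectly correlated with x
anti = [-v for v in x]         # perfectly anti-correlated with x
```

Avoiding the multiplications of the correlation inner product is what makes the l1 variant attractive on simple vector hardware such as the processor array the abstract describes.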
Streaming data analytics via message passing with application to graph algorithms
Plimpton, Steven J.; Shead, Tim
2014-05-06
The need to process streaming data, which arrives continuously at high volume in real time, arises in a variety of contexts, including data produced by experiments, collections of environmental or network sensors, and running simulations. Streaming data can also be formulated as queries or transactions which operate on a large dynamic data store, e.g. a distributed database. We describe a lightweight, portable framework named PHISH which enables a set of independent processes to compute on a stream of data in a distributed-memory parallel manner. Datums are routed between processes in patterns defined by the application. PHISH can run on top of either message passing via MPI or sockets via ZMQ. The former means streaming computations can be run on any parallel machine which supports MPI; the latter allows them to run on a heterogeneous, geographically dispersed network of machines. We illustrate how PHISH can support streaming MapReduce operations, and describe streaming versions of three algorithms for large, sparse graph analytics: triangle enumeration, subgraph isomorphism matching, and connected component finding. Lastly, we provide benchmark timings for MPI versus socket performance of several kernel operations useful in streaming algorithms.
Chimera-like states in structured heterogeneous networks
NASA Astrophysics Data System (ADS)
Li, Bo; Saad, David
2017-04-01
Chimera-like states are manifested through the coexistence of synchronous and asynchronous dynamics and have been observed in various systems. To analyze the role of network topology in giving rise to chimera-like states, we study a heterogeneous network model comprising two groups of nodes, of high and low degrees of connectivity. The architecture facilitates the analysis of the system, which separates into a densely connected coherent group of nodes, perturbed by their sparsely connected drifting neighbors. It describes a synchronous behavior of the densely connected group and scaling properties of the induced perturbations.
Aslam, Muhammad; Hu, Xiaopeng; Wang, Fan
2017-12-13
Smart reconfiguration of a dynamic networking environment is offered by the central control of Software-Defined Networking (SDN). Centralized SDN-based management architectures are capable of retrieving global topology intelligence and decoupling the forwarding plane from the control plane. Routing protocols developed for conventional Wireless Sensor Networks (WSNs) utilize limited iterative reconfiguration methods to optimize environmental reporting. However, the challenging networking scenarios of WSNs involve a performance overhead due to constant periodic iterative reconfigurations. In this paper, we propose the SDN-based Application-aware Centralized adaptive Flow Iterative Reconfiguring (SACFIR) routing protocol with the centralized SDN iterative solver controller to maintain the load-balancing between flow reconfigurations and flow allocation cost. The proposed SACFIR's routing protocol offers a unique iterative path-selection algorithm, which initially computes suitable clustering based on residual resources at the control layer and then implements application-aware threshold-based multi-hop report transmissions on the forwarding plane. The operation of the SACFIR algorithm is centrally supervised by the SDN controller residing at the Base Station (BS). This paper extends SACFIR to SDN-based Application-aware Main-value Centralized adaptive Flow Iterative Reconfiguring (SAMCFIR) to establish both proactive and reactive reporting. The SAMCFIR transmission phase enables sensor nodes to trigger direct transmissions for main-value reports, while in the case of SACFIR, all reports follow computed routes. Our SDN-enabled proposed models adjust the reconfiguration period according to the traffic burden on sensor nodes, which results in heterogeneity awareness, load-balancing and application-specific reconfigurations of WSNs. 
Extensive experimental simulation-based results show that SACFIR and SAMCFIR yield the maximum scalability, network lifetime and stability period when compared to existing routing protocols. PMID:29236031
Settgast, Randolph R.; Fu, Pengcheng; Walsh, Stuart D. C.; ...
2016-09-18
This study describes a fully coupled finite element/finite volume approach for simulating field-scale hydraulically driven fractures in three dimensions, using massively parallel computing platforms. The proposed method is capable of capturing realistic representations of local heterogeneities, layering and natural fracture networks in a reservoir. A detailed description of the numerical implementation is provided, along with numerical studies comparing the model with both analytical solutions and experimental results. The results demonstrate the effectiveness of the proposed method for modeling large-scale problems involving hydraulically driven fractures in three dimensions.
A cloud-based X73 ubiquitous mobile healthcare system: design and implementation.
Ji, Zhanlin; Ganchev, Ivan; O'Droma, Máirtín; Zhang, Xin; Zhang, Xueji
2014-01-01
Based on the user-centric paradigm for next generation networks, this paper describes a ubiquitous mobile healthcare (uHealth) system based on the ISO/IEEE 11073 personal health data (PHD) standards (X73) and cloud computing techniques. A number of design issues associated with the system implementation are outlined. The system includes a middleware on the user side, providing a plug-and-play environment for heterogeneous wireless sensors and mobile terminals utilizing different communication protocols, and a distributed "big data" processing subsystem in the cloud. The design and implementation of this system are envisaged as an efficient solution for the next generation of uHealth systems.
The impact of heterogeneous response on coupled spreading dynamics in multiplex networks
NASA Astrophysics Data System (ADS)
Nie, Xiaoyu; Tang, Ming; Zou, Yong; Guan, Shuguang; Zhou, Jie
2017-10-01
Many recent studies have demonstrated that individual awareness of disease may significantly affect the spreading process of infectious disease. In the majority of these studies, the response of the awareness is treated homogeneously. Considering the diversity and heterogeneity of human behavior under different circumstances, in this paper we study the heterogeneous response of people who are aware of the prevalence of infectious diseases. Specifically, we consider that an individual with more neighbors may take more preventive measures as a reaction when aware of the disease. A suppression strength is introduced to describe such heterogeneity, and we find that more evident heterogeneity causes a more effective suppression of epidemic spreading. A mean-field theory is developed to support the results, which are verified on multiplex networks with different interlayer degree correlations.
Coordinating complex decision support activities across distributed applications
NASA Technical Reports Server (NTRS)
Adler, Richard M.
1994-01-01
Knowledge-based technologies have been applied successfully to automate planning and scheduling in many problem domains. Automation of decision support can be increased further by integrating task-specific applications with supporting database systems, and by coordinating interactions between such tools to facilitate collaborative activities. Unfortunately, the technical obstacles that must be overcome to achieve this vision of transparent, cooperative problem-solving are daunting. Intelligent decision support tools are typically developed for standalone use, rely on incompatible, task-specific representational models and application programming interfaces (APIs), and run on heterogeneous computing platforms. Getting such applications to interact freely calls for platform-independent capabilities for distributed communication, as well as tools for mapping information across disparate representations. Symbiotics is developing a layered set of software tools (called NetWorks!) for integrating and coordinating heterogeneous distributed applications. The top layer of tools consists of an extensible set of generic, programmable coordination services. Developers access these services via high-level APIs to implement the desired interactions between distributed applications.
Intrinsic Neuronal Properties Switch the Mode of Information Transmission in Networks
Gjorgjieva, Julijana; Mease, Rebecca A.; Moody, William J.; Fairhall, Adrienne L.
2014-01-01
Diverse ion channels and their dynamics endow single neurons with complex biophysical properties. These properties determine the heterogeneity of cell types that make up the brain, as constituents of neural circuits tuned to perform highly specific computations. How do biophysical properties of single neurons impact network function? We study a set of biophysical properties that emerge in cortical neurons during the first week of development, eventually allowing these neurons to adaptively scale the gain of their response to the amplitude of the fluctuations they encounter. During the same time period, these same neurons participate in large-scale waves of spontaneously generated electrical activity. We investigate the potential role of experimentally observed changes in intrinsic neuronal properties in determining the ability of cortical networks to propagate waves of activity. We show that such changes can strongly affect the ability of multi-layered feedforward networks to represent and transmit information on multiple timescales. With properties modeled on those observed at early stages of development, neurons are relatively insensitive to rapid fluctuations and tend to fire synchronously in response to wave-like events of large amplitude. Following developmental changes in voltage-dependent conductances, these same neurons become efficient encoders of fast input fluctuations over few layers, but lose the ability to transmit slower, population-wide input variations across many layers. Depending on the neurons' intrinsic properties, noise plays different roles in modulating neuronal input-output curves, which can dramatically impact network transmission. The developmental change in intrinsic properties supports a transformation of a network's function from the propagation of network-wide information to one in which computations are scaled to local activity.
This work underscores the significance of simple changes in conductance parameters in governing how neurons represent and propagate information, and suggests a role for background synaptic noise in switching the mode of information transmission. PMID:25474701
Supervisory control of mobile sensor networks: math formulation, simulation, and implementation.
Giordano, Vincenzo; Ballal, Prasanna; Lewis, Frank; Turchiano, Biagio; Zhang, Jing Bing
2006-08-01
This paper uses a novel discrete-event controller (DEC) for the coordination of cooperating heterogeneous wireless sensor networks (WSNs) containing both unattended ground sensors (UGSs) and mobile sensor robots. The DEC sequences the most suitable tasks for each agent and assigns sensor resources according to the current perception of the environment. A matrix formulation makes this DEC particularly useful for WSNs, where missions change and sensor agents may be added or may fail. WSNs have peculiarities that complicate their supervisory control. Therefore, this paper introduces several new tools for DEC design and operation, including methods for generating the required supervisory matrices based on mission planning, methods for modifying the matrices in the event of failed nodes, or nodes entering the network, and a novel dynamic priority assignment weighting approach for selecting the most appropriate and useful sensors for a given mission task. The resulting DEC represents a complete dynamical description of the WSN system, which allows fast programming of deployable WSNs, computer simulation analysis, and efficient implementation. The DEC is implemented on an experimental wireless-sensor-network prototyping system. Both simulation and experimental results are presented to show the effectiveness and versatility of the developed control architecture.
Transmission of severe acute respiratory syndrome in dynamical small-world networks
NASA Astrophysics Data System (ADS)
Masuda, Naoki; Konno, Norio; Aihara, Kazuyuki
2004-03-01
The outbreak of severe acute respiratory syndrome (SARS) is still threatening the world because of a possible resurgence. Given that effective medical treatments such as antiviral drugs have not yet been discovered, dynamical features of the epidemics should be clarified to establish strategies for tracing, quarantine, isolation, and regulating social behavior of the public at appropriate costs. Here we propose a network model for SARS epidemics and discuss why superspreaders emerged and why SARS spread especially in hospitals, which were key factors of the recent outbreak. We suggest that superspreaders are biologically contagious patients, and they may amplify the spread by going to potentially contagious places such as hospitals. To avoid mass transmission in hospitals, it may be a good measure to treat suspected cases without hospitalizing them. Finally, we indicate that SARS probably propagates in small-world networks associated with human contacts and that the biological nature of individuals and social group properties are factors more important than the heterogeneous rates of social contacts among individuals. This is in marked contrast with epidemics of sexually transmitted diseases or computer viruses to which scale-free network models often apply.
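The small-world contact structure invoked above is easy to experiment with. Below is a minimal, self-contained sketch (illustrative parameters, not those of the paper): a Watts-Strogatz network is built by rewiring a ring lattice, and a stochastic SIR process is run on it to obtain a final outbreak size.

```python
import random

random.seed(1)

def watts_strogatz(n, k, p):
    # Ring lattice with k neighbors on each side, then random rewiring
    # of each edge with probability p (small-world construction).
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            nbrs[i].add((i + j) % n); nbrs[(i + j) % n].add(i)
    for i in range(n):
        for j in list(nbrs[i]):
            if j > i and random.random() < p:
                new = random.randrange(n)
                if new != i and new not in nbrs[i]:
                    nbrs[i].discard(j); nbrs[j].discard(i)
                    nbrs[i].add(new); nbrs[new].add(i)
    return nbrs

def sir(nbrs, beta=0.3, gamma=0.1, seed_node=0, steps=200):
    # Discrete-time stochastic SIR: infected nodes transmit to each
    # susceptible neighbor w.p. beta and recover w.p. gamma per step.
    state = {i: "S" for i in nbrs}
    state[seed_node] = "I"
    for _ in range(steps):
        updates = {}
        for i, s in state.items():
            if s == "I":
                if random.random() < gamma:
                    updates[i] = "R"
                for j in nbrs[i]:
                    if state[j] == "S" and random.random() < beta:
                        updates[j] = "I"
        state.update(updates)
    return sum(s != "S" for s in state.values())  # final outbreak size

outbreak = sir(watts_strogatz(500, 3, 0.05))
```

Varying the rewiring probability `p` interpolates between a regular lattice (slow, wave-like spread) and a random graph (rapid global spread), which is the structural effect the paper exploits.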
Data fusion for target tracking and classification with wireless sensor network
NASA Astrophysics Data System (ADS)
Pannetier, Benjamin; Doumerc, Robin; Moras, Julien; Dezert, Jean; Canevet, Loic
2016-10-01
In this paper, we address the problem of multiple ground target tracking and classification with information obtained from an unattended wireless sensor network. A multiple target tracking (MTT) algorithm, taking into account road and vegetation information, is proposed based on a centralized architecture. One of the key issues is how to adapt the classical MTT approach to satisfy embedded processing. Based on track statistics, the classification algorithm uses estimated location, velocity and acceleration to help classify targets. The algorithm enables tracking of humans and vehicles driving both on and off road. We integrate road or trail width and vegetation cover as constraints in target motion models to improve the performance of tracking under constraint with classification fusion. Our algorithm also employs different dynamic models to accommodate target maneuvers. The tracking and classification algorithms are integrated into an operational platform (the fusion node). In order to handle realistic ground target tracking scenarios, we use an autonomous smart computer deposited in the surveillance area. After the calibration step of the heterogeneous sensor network, our system is able to handle real data from a wireless ground sensor network. The performance of the system is evaluated in a real exercise for an intelligence operation ("hunter hunt" scenario).
The impact of multiple information on coupled awareness-epidemic dynamics in multiplex networks
NASA Astrophysics Data System (ADS)
Pan, Yaohui; Yan, Zhijun
2018-02-01
Growing interest has emerged in the study of the interplay between awareness and epidemics in multiplex networks. However, previous studies on this issue usually assume that all aware individuals take the same level of precautions, ignoring individual heterogeneity. In this paper, we investigate the coupled awareness-epidemic dynamics in multiplex networks considering individual heterogeneity. Here, the precaution levels are heterogeneous and depend on three types of information: contact information and local and global prevalence information. The results show that contact-based precautions can decrease the epidemic prevalence and augment the epidemic threshold, but prevalence-based precautions, regardless of local or global information, can only decrease the epidemic prevalence. Moreover, unlike previous studies in single-layer networks, we do not find a greater impact of local prevalence information on the epidemic prevalence compared to global prevalence information. In addition, we find that the altruistic behaviors of infected individuals can effectively suppress epidemic spreading, especially when the level of contact-based precaution is high.
NASA Astrophysics Data System (ADS)
Zubarev, A. E.; Nadezhdina, I. E.; Brusnikin, E. S.; Karachevtseva, I. P.; Oberst, J.
2016-09-01
The new technique for generation of coordinate control point networks based on photogrammetric processing of heterogeneous planetary images (obtained at different times and scales, under different illumination, or with oblique views) is developed. The technique is verified with the example of processing the heterogeneous information obtained by remote sensing of Ganymede by the spacecraft Voyager-1, -2 and Galileo. Using this technique, the first 3D control point network for Ganymede is formed: the error of the altitude coordinates obtained as a result of adjustment is less than 5 km. The new control point network makes it possible to obtain basic geodesic parameters of the body (axes size) and to estimate forced librations. On the basis of the control point network, digital terrain models (DTMs) with different resolutions are generated and used for mapping the surface of Ganymede with different levels of detail (Zubarev et al., 2015b).
State-of-the-art in Heterogeneous Computing
Brodtkorb, Andre R.; Dyken, Christopher; Hagen, Trond R.; ...
2010-01-01
Node level heterogeneous architectures have become attractive during the last decade for several reasons: compared to traditional symmetric CPUs, they offer high peak performance and are energy and/or cost efficient. With the increase of fine-grained parallelism in high-performance computing, as well as the introduction of parallelism in workstations, there is an acute need for a good overview and understanding of these architectures. We give an overview of the state-of-the-art in heterogeneous computing, focusing on three commonly found architectures: the Cell Broadband Engine Architecture, graphics processing units (GPUs), and field programmable gate arrays (FPGAs). We present a review of hardware, available software tools, and an overview of state-of-the-art techniques and algorithms. Furthermore, we present a qualitative and quantitative comparison of the architectures, and give our view on the future of heterogeneous computing.
Relaxation and physical aging in network glasses: a review.
Micoulaut, Matthieu
2016-06-01
Recent progress in the description of glassy relaxation and aging is reviewed for the wide class of network-forming materials such as GeO2, GexSe1-x, silicates (SiO2-Na2O) or borates (B2O3-Li2O), all of which are widely used in domestic, geological or optoelectronic applications. A brief introduction to the glass transition phenomenology is given, together with the salient features that are revealed both from theory and experiments. Standard experimental methods used for the characterization of the slowing down of the dynamics are reviewed. We then discuss the important role played by aspects of network topology and rigidity in the understanding of relaxation at the glass transition, which also permit analytical predictions of glass properties from simple and insightful models based on the network structure. We also emphasize the great utility of computer simulations, which probe the dynamics at the molecular level and permit the calculation of various structure-related functions in connection with glassy relaxation and the physics of aging, revealing the non-equilibrium nature of glasses. We discuss the notion of spatial variations of structure which leads to the concept of 'dynamic heterogeneities', and recent results in relation to this important topic for network glasses are also reviewed.
Minimum requirements for predictive pore-network modeling of solute transport in micromodels
NASA Astrophysics Data System (ADS)
Mehmani, Yashar; Tchelepi, Hamdi A.
2017-10-01
Pore-scale models are now an integral part of analyzing fluid dynamics in porous materials (e.g., rocks, soils, fuel cells). Pore network models (PNM) are particularly attractive due to their computational efficiency. However, quantitative predictions with PNM have not always been successful. We focus on single-phase transport of a passive tracer under advection-dominated regimes and compare PNM with high-fidelity direct numerical simulations (DNS) for a range of micromodel heterogeneities. We identify the minimum requirements for predictive PNM of transport. They are: (a) flow-based network extraction, i.e., discretizing the pore space based on the underlying velocity field, (b) a Lagrangian (particle tracking) simulation framework, and (c) accurate transfer of particles from one pore throat to the next. We develop novel network extraction and particle tracking PNM methods that meet these requirements. Moreover, we show that certain established PNM practices in the literature can result in first-order errors in modeling advection-dominated transport. They include: all Eulerian PNMs, networks extracted based on geometric metrics only, and flux-based nodal transfer probabilities. Preliminary results for a 3D sphere pack are also presented. The simulation inputs for this work are made public to serve as a benchmark for the research community.
Random sphere packing model of heterogeneous propellants
NASA Astrophysics Data System (ADS)
Kochevets, Sergei Victorovich
It is well recognized that combustion of heterogeneous propellants is strongly dependent on the propellant morphology. Recent developments in computing systems make it possible to start three-dimensional modeling of heterogeneous propellant combustion. A key component of such large scale computations is a realistic model of industrial propellants which retains the true morphology---a goal never achieved before. The research presented develops the Random Sphere Packing Model of heterogeneous propellants and generates numerical samples of actual industrial propellants. This is done by developing a sphere packing algorithm which randomly packs a large number of spheres with a polydisperse size distribution within a rectangular domain. First, the packing code is developed, optimized for performance, and parallelized using the OpenMP shared memory architecture. Second, the morphology and packing fraction of two simple cases of unimodal and bimodal packs are investigated computationally and analytically. It is shown that both the Loose Random Packing and Dense Random Packing limits are not well defined and the growth rate of the spheres is identified as the key parameter controlling the efficiency of the packing. For a properly chosen growth rate, computational results are found to be in excellent agreement with experimental data. Third, two strategies are developed to define numerical samples of polydisperse heterogeneous propellants: the Deterministic Strategy and the Random Selection Strategy. Using these strategies, numerical samples of industrial propellants are generated. The packing fraction is investigated and it is shown that the experimental values of the packing fraction can be achieved computationally. It is strongly believed that this Random Sphere Packing Model of propellants is a major step forward in the realistic computational modeling of heterogeneous propellant combustion.
In addition, a method of analysis of the morphology of heterogeneous propellants is developed which uses the concept of multi-point correlation functions. A set of intrinsic length scales of local density fluctuations in random heterogeneous propellants is identified by performing a Monte-Carlo study of the correlation functions. This method of analysis shows great promise for understanding the origins of the combustion instability of heterogeneous propellants, and is believed to become a valuable tool for the development of safe and reliable rocket engines.
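A minimal version of the random packing step described above can be sketched as random sequential addition — a simpler scheme than the growth-based algorithm developed in the thesis, and with arbitrary radii and counts chosen purely for illustration: spheres are placed largest-first at uniformly random positions, rejected on overlap, and the packing fraction follows from the accepted spheres.

```python
import random, math

random.seed(0)

def pack_spheres(radii, box=1.0, max_tries=5000):
    # Random sequential addition: place each sphere (largest first) at a
    # uniformly random position inside the box, rejecting any candidate
    # that overlaps an already-accepted sphere.
    placed = []
    for r in sorted(radii, reverse=True):
        for _ in range(max_tries):
            c = tuple(random.uniform(r, box - r) for _ in range(3))
            if all(math.dist(c, c2) >= r + r2 for c2, r2 in placed):
                placed.append((c, r))
                break
    return placed

# Bimodal size distribution, loosely echoing bimodal propellant packs.
spheres = pack_spheres([0.12] * 5 + [0.04] * 60)
# Packing fraction = total sphere volume / box volume (box volume is 1).
frac = sum(4.0 / 3.0 * math.pi * r ** 3 for _, r in spheres)
```

Random sequential addition saturates well below the dense random packing limit, which is one reason the thesis uses a growth-rate-controlled algorithm instead; the sketch only shows the overlap-rejection core.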
Models@Home: distributed computing in bioinformatics using a screensaver based approach.
Krieger, Elmar; Vriend, Gert
2002-02-01
Due to the steadily growing computational demands in bioinformatics and related scientific disciplines, one is forced to make optimal use of the available resources. A straightforward solution is to build a network of idle computers and let each of them work on a small piece of a scientific challenge, as done by Seti@Home (http://setiathome.berkeley.edu), the world's largest distributed computing project. We developed a generally applicable distributed computing solution that uses a screensaver system similar to Seti@Home. The software exploits the coarse-grained nature of typical bioinformatics projects. Three major considerations for the design were: (1) often, many different programs are needed, while the time is lacking to parallelize them. Models@Home can run any program in parallel without modifications to the source code; (2) in contrast to the Seti project, bioinformatics applications are normally more sensitive to lost jobs. Models@Home therefore includes stringent control over job scheduling; (3) to allow use in heterogeneous environments, Linux and Windows based workstations can be combined with dedicated PCs to build a homogeneous cluster. We present three practical applications of Models@Home, running the modeling programs WHAT IF and YASARA on 30 PCs: force field parameterization, molecular dynamics docking, and database maintenance.
Shannon, Casey P; Chen, Virginia; Takhar, Mandeep; Hollander, Zsuzsanna; Balshaw, Robert; McManus, Bruce M; Tebbutt, Scott J; Sin, Don D; Ng, Raymond T
2016-11-14
Gene network inference (GNI) algorithms can be used to identify sets of coordinately expressed genes, termed network modules, from whole transcriptome gene expression data. The identification of such modules has become a popular approach to systems biology, with important applications in translational research. Although diverse computational and statistical approaches have been devised to identify such modules, their performance behavior is still not fully understood, particularly in complex human tissues. Given human heterogeneity, one important question is how the outputs of these computational methods are sensitive to the input sample set, or stability. A related question is how this sensitivity depends on the size of the sample set. We describe here the SABRE (Similarity Across Bootstrap RE-sampling) procedure for assessing the stability of gene network modules using a re-sampling strategy, introduce a novel criterion for identifying stable modules, and demonstrate the utility of this approach in a clinically relevant cohort, using two different gene network module discovery algorithms. The stability of modules increased as sample size increased and stable modules were more likely to be replicated in larger sets of samples. Random modules derived from permuted gene expression data were consistently unstable, as assessed by SABRE, and provide a useful baseline value for our proposed stability criterion. Gene module sets identified by different algorithms varied with respect to their stability, as assessed by SABRE. Finally, stable modules were more readily annotated in various curated gene set databases. The SABRE procedure and proposed stability criterion may provide guidance when designing systems biology studies in complex human disease and tissues.
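The resampling idea behind SABRE can be sketched in a few lines. This is a simplified stand-in, not the published SABRE code: modules here are connected components of a thresholded correlation graph, stability is the mean best-match Jaccard overlap across bootstraps, and the synthetic two-module data, the 0.6 threshold, and the 20 resamples are all arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples = 40
f1, f2 = rng.normal(size=(2, n_samples))        # latent module activities
# Two "modules" of 10 genes each: latent factor plus independent noise.
genes = np.vstack([f1 + 0.3 * rng.normal(size=(10, n_samples)),
                   f2 + 0.3 * rng.normal(size=(10, n_samples))])

def modules(data, thr=0.6):
    # Modules = connected components of the |correlation| > thr gene graph.
    c = np.abs(np.corrcoef(data)) > thr
    seen, mods = set(), []
    for g in range(len(data)):
        if g in seen:
            continue
        stack, comp = [g], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(np.flatnonzero(c[u]))
        seen |= comp
        mods.append(frozenset(comp))
    return mods

def jaccard(a, b):
    return len(a & b) / len(a | b)

ref = modules(genes)
scores = []
for _ in range(20):                              # bootstrap over samples
    boot = genes[:, rng.integers(0, n_samples, n_samples)]
    bmods = modules(boot)
    scores.append(np.mean([max(jaccard(m, b) for b in bmods) for m in ref]))
stability = float(np.mean(scores))
```

For this clean synthetic data the two planted modules are recovered and their bootstrap stability is high; with noisier data or smaller sample sizes the stability score drops, which is the effect the paper quantifies.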
Epidemic spreading in metapopulation networks with heterogeneous infection rates
NASA Astrophysics Data System (ADS)
Gong, Yong-Wang; Song, Yu-Rong; Jiang, Guo-Ping
2014-12-01
In this paper, we study epidemic spreading in metapopulation networks wherein each node represents a subpopulation symbolizing a city or an urban area and links connecting nodes correspond to the human traveling routes among cities. Differently from previous studies, we introduce a heterogeneous infection rate to characterize the effect of nodes' local properties, such as population density, individual health habits, and social conditions, on epidemic infectivity. By means of a mean-field approach and Monte Carlo simulations, we explore how the heterogeneity of the infection rate affects the epidemic dynamics, and find that large fluctuations of the infection rate have a profound impact on the epidemic threshold as well as the temporal behavior of the prevalence above the epidemic threshold. This work can refine our understanding of epidemic spreading in metapopulation networks with the effect of nodes' local properties.
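A drastically simplified analogue of the setting above can be iterated numerically. The sketch treats each subpopulation as a single node with its own infection rate — not the paper's full metapopulation reaction-diffusion formulation — and the random network, the gamma-distributed rates, and the recovery rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
# Random symmetric contact network among subpopulations (Erdos-Renyi sketch).
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.triu(A, 1)
A = A + A.T

# Heterogeneous per-node infection rates with mean 0.1, standing in for
# local properties such as population density or social conditions.
beta = rng.gamma(shape=2.0, scale=0.05, size=n)
mu = 0.2  # recovery rate

# Quenched mean-field SIS iteration: p[i] = infection probability of node i.
p = np.full(n, 0.01)
for _ in range(500):
    # Probability of escaping infection from every infected neighbor.
    infection = 1.0 - np.prod(1.0 - beta[:, None] * A * p[None, :], axis=1)
    p = (1.0 - p) * infection + p * (1.0 - mu)
prevalence = float(p.mean())
```

Redrawing `beta` with a larger variance at the same mean changes the stationary `prevalence`, which mirrors the paper's finding that fluctuations of the infection rate, not just its average, shape the epidemic outcome.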
Dynamics of subway networks based on vehicles operation timetable
NASA Astrophysics Data System (ADS)
Xiao, Xue-mei; Jia, Li-min; Wang, Yan-hui
2017-05-01
In this paper, a subway network is represented as a dynamic, directed and weighted graph, in which vertices represent subway stations and weights of edges represent the number of vehicles passing through the edges by considering vehicles operation timetable. Meanwhile the definitions of static and dynamic metrics which can represent vertices' and edges' local and global attributes are proposed. Based on the model and metrics, standard deviation is further introduced to study the dynamic properties (heterogeneity and vulnerability) of subway networks. Through a detailed analysis of the Beijing subway network, we conclude that with the existing network structure, the heterogeneity and vulnerability of the Beijing subway network varies over time when the vehicle operation timetable is taken into consideration, and the distribution of edge weights affects the performance of the network. In other words, although the vehicles operation timetable is restrained by the physical structure of the network, it determines the performances and properties of the Beijing subway network.
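The modeling step described above — edge weights counting vehicles per time window, and standard deviation as a heterogeneity index — can be sketched on a toy three-station timetable (entirely made-up data; the study itself uses the Beijing subway timetable):

```python
from collections import defaultdict
import statistics

# Toy timetable: (hour, from_station, to_station) per vehicle movement.
timetable = [
    (7, "A", "B"), (7, "A", "B"), (7, "B", "C"), (7, "C", "A"),
    (12, "A", "B"), (12, "B", "C"),
    (18, "A", "B"), (18, "A", "B"), (18, "B", "C"), (18, "C", "A"), (18, "C", "A"),
]

def snapshot(timetable, hour):
    # Directed weighted graph for one time window: the weight of edge
    # (u, v) is the number of vehicles traversing it in that window.
    w = defaultdict(int)
    for h, u, v in timetable:
        if h == hour:
            w[(u, v)] += 1
    return dict(w)

def strength_heterogeneity(weights):
    # Population std-dev of node out-strength as a heterogeneity index.
    out = defaultdict(int)
    for (u, _), k in weights.items():
        out[u] += k
    return statistics.pstdev(out.values())

peak, offpeak = snapshot(timetable, 7), snapshot(timetable, 12)
```

Comparing `strength_heterogeneity(peak)` with `strength_heterogeneity(offpeak)` shows the network's heterogeneity varying over the day even though the physical track layout is fixed, which is the qualitative point of the abstract.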
Rumor spreading model with noise interference in complex social networks
NASA Astrophysics Data System (ADS)
Zhu, Liang; Wang, Youguo
2017-03-01
In this paper, a modified susceptible-infected-removed (SIR) model has been proposed to explore rumor diffusion on complex social networks. We take the variation of connectivity into consideration and treat the variation as noise. On the basis of related literature on virus networks, the noise is described as standard Brownian motion, and stochastic differential equations (SDE) are derived to characterize the dynamics of rumor diffusion both on homogeneous networks and on heterogeneous networks. Then, a theoretical analysis on homogeneous networks is presented to investigate the solution of the SDE model and the steady state of rumor diffusion. Simulations on both the Barabási-Albert (BA) network and the Watts-Strogatz (WS) network show that the addition of noise accelerates rumor diffusion and expands the diffusion size; meanwhile, the spreading speed on the BA network is much faster than on the WS network under the same noise intensity. In addition, there exists a rumor diffusion threshold, in the statistical-average sense, on homogeneous networks which is absent on heterogeneous networks. Finally, we find a positive correlation between the peak value of infected individuals and the noise intensity, and a negative correlation between the rumor lifecycle and the noise intensity overall.
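An SDE of the kind described above can be integrated numerically with the Euler-Maruyama scheme. The sketch below uses an illustrative ignorant/spreader/stifler model with multiplicative Brownian noise on the transmission term; the parameter values and the exact form of the noise term are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
beta, gamma, sigma = 0.5, 0.1, 0.1   # spreading, stifling, noise intensity
dt, steps = 0.01, 5000

i, s, r = 0.99, 0.01, 0.0            # ignorant, spreader, stifler fractions
for _ in range(steps):
    dW = np.sqrt(dt) * rng.normal()  # Brownian increment ~ N(0, dt)
    # Euler-Maruyama step: deterministic transmission plus noise term.
    flow = beta * i * s * dt + sigma * i * s * dW
    flow = min(max(flow, 0.0), i)    # keep the fractions inside [0, 1]
    rec = gamma * s * dt
    i -= flow
    s += flow - rec
    r += rec
final_size = r + s                   # fraction that ever heard the rumor
```

Re-running the loop for a range of `sigma` values and recording the peak spreader fraction and the time until `s` decays is one way to reproduce, qualitatively, the noise-intensity correlations reported in the abstract.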
Control of collective network chaos.
Wagemakers, Alexandre; Barreto, Ernest; Sanjuán, Miguel A F; So, Paul
2014-06-01
Under certain conditions, the collective behavior of a large globally-coupled heterogeneous network of coupled oscillators, as quantified by the macroscopic mean field or order parameter, can exhibit low-dimensional chaotic behavior. Recent advances describe how a small set of "reduced" ordinary differential equations can be derived that captures this mean field behavior. Here, we show that chaos control algorithms designed using the reduced equations can be successfully applied to imperfect realizations of the full network. To systematically study the effectiveness of this technique, we measure the quality of control as we relax conditions that are required for the strict accuracy of the reduced equations, and hence, the controller. Although the effects are network-dependent, we show that the method is effective for surprisingly small networks, for modest departures from global coupling, and even with mild inaccuracy in the estimate of network heterogeneity.
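The macroscopic mean field (order parameter) of a globally coupled heterogeneous oscillator network can be illustrated with the Kuramoto model, the standard setting for such reduced mean-field equations; the coupling strength, Lorentzian frequency width, and Euler integration below are illustrative assumptions, not the authors' setup.

```python
import cmath
import math
import random

def order_parameter(thetas):
    """Macroscopic mean field z = r*exp(i*psi) = (1/N) * sum_j exp(i*theta_j)."""
    z = sum(cmath.exp(1j * th) for th in thetas) / len(thetas)
    return abs(z), cmath.phase(z)

def kuramoto_step(thetas, omegas, k, dt):
    """One Euler step of the globally coupled Kuramoto model:
       dtheta_j/dt = omega_j + k * r * sin(psi - theta_j)."""
    r, psi = order_parameter(thetas)
    return [th + dt * (w + k * r * math.sin(psi - th))
            for th, w in zip(thetas, omegas)]

rng = random.Random(1)
n = 500
# Heterogeneous natural frequencies: Lorentzian (Cauchy) with width 0.5.
omegas = [0.5 * math.tan(math.pi * (rng.random() - 0.5)) for _ in range(n)]
thetas = [2 * math.pi * rng.random() for _ in range(n)]
for _ in range(2000):
    thetas = kuramoto_step(thetas, omegas, k=4.0, dt=0.01)
r, _ = order_parameter(thetas)
# Synchronization onset for this distribution is at k_c = 2 * width = 1.0,
# so k = 4.0 should drive the mean field to a large r.
```

It is precisely this scalar mean field r that the reduced equations track and that the chaos-control algorithms in the abstract act upon.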
NASA Astrophysics Data System (ADS)
Indra, Sandipa; Guchhait, Biswajit; Biswas, Ranjit
2016-03-01
We have performed steady-state UV-visible absorption and time-resolved fluorescence measurements and computer simulations to explore the cosolvent mole fraction induced changes in structural and dynamical properties of water/dioxane (Diox) and water/tetrahydrofuran (THF) binary mixtures. Diox is a quadrupolar solvent whereas THF is a dipolar one, although both are cyclic molecules (cycloethers). The focus here is on whether these cycloethers can induce stiffening and transition of the water H-bond network structure and, if they do, whether such structural modification differentiates the chemical nature (dipolar or quadrupolar) of the cosolvent molecules. Composition-dependent measured fluorescence lifetimes and rotation times of a dissolved dipolar solute (Coumarin 153, C153) suggest cycloether mole-fraction (XTHF/Diox) induced structural transition for both of these aqueous binary mixtures in the 0.1 ≤ XTHF/Diox ≤ 0.2 regime with no specific dependence on the chemical nature. Interestingly, absorption measurements reveal stiffening of the water H-bond structure in the presence of both cycloethers at a nearly equal mole fraction, XTHF/Diox ˜ 0.05. Measurements near the critical solution temperature or concentration indicate no role of solution criticality in the anomalous structural changes. Evidence for cycloether aggregation at very dilute concentrations has been found. Simulated radial distribution functions reflect abrupt changes in the respective peak heights at those mixture compositions around which fluorescence measurements revealed structural transition. Simulated water coordination numbers (for a dissolved C153) and numbers of H-bonds also exhibit minima around these cosolvent concentrations. In addition, several dynamic heterogeneity parameters have been simulated for both mixtures to explore the effects of structural transition and chemical nature of the cosolvent on the heterogeneous dynamics of these systems.
Simulated four-point dynamic susceptibility suggests formation of clusters inducing local heterogeneity in the solution structure.
Multi-scale Pore Imaging Techniques to Characterise Heterogeneity Effects on Flow in Carbonate Rock
NASA Astrophysics Data System (ADS)
Shah, S. M.
2017-12-01
Digital rock analysis and pore-scale studies have become an essential tool in the oil and gas industry to understand and predict the petrophysical and multiphase flow properties for the assessment and exploitation of hydrocarbon reserves. Carbonate reservoirs, accounting for the majority of the world's hydrocarbon reserves, are well known for their heterogeneity and multiscale pore characteristics. The pore sizes in carbonate rock can vary over orders of magnitude; the geometry and topology of pores at different scales have a great impact on flow properties. A pore-scale study typically comprises two key procedures: 3D pore-scale imaging and numerical modelling. The fundamental problem in pore-scale imaging and modelling is how to represent and model the different range of scales encountered in porous media, from the pore scale to macroscopic petrophysical and multiphase flow properties. However, due to the trade-off between image size and resolution, the desired detail is rarely captured at the relevant length scales using any single imaging technique. Similarly, direct simulations of transport properties in heterogeneous rocks with broad pore size distributions are prohibitively expensive computationally. In this study, we present the advances and review the practical limitations of different imaging techniques, varying from core scale (1 mm) using Medical Computed Tomography (CT) to pore scale (10 nm - 50 µm) using Micro-CT, Confocal Laser Scanning Microscopy (CLSM) and Focussed Ion Beam (FIB), to characterise the complex pore structure in Ketton carbonate rock. The effect of pore structure and connectivity on the flow properties is investigated using the obtained pore-scale images of Ketton carbonate with Pore Network and Lattice-Boltzmann simulation methods, in comparison with experimental data.
We also shed new light on the existence and size of the Representative Element of Volume (REV) capturing the different scales of heterogeneity from the pore-scale imaging.
Changes in resting-state functionally connected parietofrontal networks after videogame practice.
Martínez, Kenia; Solana, Ana Beatriz; Burgaleta, Miguel; Hernández-Tamames, Juan Antonio; Alvarez-Linera, Juan; Román, Francisco J; Alfayate, Eva; Privado, Jesús; Escorial, Sergio; Quiroga, María A; Karama, Sherif; Bellec, Pierre; Colom, Roberto
2013-12-01
Neuroimaging studies provide evidence for organized intrinsic activity under task-free conditions. This activity serves functionally relevant brain systems supporting cognition. Here, we analyze changes in resting-state functional connectivity after videogame practice using a test-retest design. Twenty young females were selected from a group of 100 participants tested on four standardized cognitive ability tests. The practice and control groups were carefully matched on their ability scores. The practice group played during two sessions per week across 4 weeks (16 h total) under strict supervision in the laboratory, showing systematic performance improvements in the game. A group independent component analysis (GICA) with multisession temporal concatenation, jointly with a dual-regression approach, was applied to the test-retest resting-state fMRI data. Supporting the main hypothesis, the key finding is an increase in correlated activity during rest in certain predefined resting-state networks (albeit using uncorrected statistics) attributable to practice with the cognitively demanding tasks of the videogame. The observed changes were mainly concentrated in parietofrontal networks involved in heterogeneous cognitive functions. Copyright © 2012 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granger, Brian R.; Chang, Yi -Chien; Wang, Yan
Here, the complexity of metabolic networks in microbial communities poses an unresolved visualization and interpretation challenge. We address this challenge in the newly expanded version of a software tool for the analysis of biological networks, VisANT 5.0. We focus in particular on facilitating the visual exploration of metabolic interaction between microbes in a community, e.g. as predicted by COMETS (Computation of Microbial Ecosystems in Time and Space), a dynamic stoichiometric modeling framework. Using VisANT's unique meta-graph implementation, we show how one can use VisANT 5.0 to explore different time-dependent ecosystem-level metabolic networks. In particular, we analyze the metabolic interaction network between two bacteria previously shown to display an obligate cross-feeding interdependency. In addition, we illustrate how a putative minimal gut microbiome community could be represented in our framework, making it possible to highlight interactions across multiple coexisting species. We envisage that the "symbiotic layout" of VisANT can be employed as a general tool for the analysis of metabolism in complex microbial communities as well as heterogeneous human tissues.
Named Data Networking in Climate Research and HEP Applications
NASA Astrophysics Data System (ADS)
Shannigrahi, Susmit; Papadopoulos, Christos; Yeh, Edmund; Newman, Harvey; Jerzy Barczyk, Artur; Liu, Ran; Sim, Alex; Mughal, Azher; Monga, Inder; Vlimant, Jean-Roch; Wu, John
2015-12-01
The Computing Models of the LHC experiments continue to evolve from the simple hierarchical MONARC[2] model towards more agile models where data is exchanged among many Tier2 and Tier3 sites, relying on both large scale file transfers with strategic data placement, and an increased use of remote access to object collections with caching through CMS's AAA, ATLAS' FAX and ALICE's AliEn projects, for example. The challenges presented by expanding needs for CPU, storage and network capacity as well as rapid handling of large datasets of file and object collections have pointed the way towards future more agile pervasive models that make best use of highly distributed heterogeneous resources. In this paper, we explore the use of Named Data Networking (NDN), a new Internet architecture focusing on content rather than the location of the data collections. As NDN has shown considerable promise in another data intensive field, Climate Science, we discuss the similarities and differences between the Climate and HEP use cases, along with specific issues HEP faces and will face during LHC Run2 and beyond, which NDN could address.
Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks
Pena, Rodrigo F. O.; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C.; Lindner, Benjamin
2018-01-01
Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. 
These approximations seem to be justified over a broad range of parameters as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks. PMID:29551968
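The self-consistency condition behind the iterative scheme can be caricatured by collapsing the whole power spectrum to a single noise-intensity number; the saturating `rate` curve and the update rule below are toy assumptions meant only to expose the fixed-point structure of the loop, not the authors' algorithm.

```python
def self_consistent_intensity(rate_fn, coupling, d_ext, tol=1e-10, max_iter=1000):
    """Fixed-point iteration mimicking the self-consistency condition:
    each neuron is driven by network noise whose intensity depends on the
    population output, while its own output feeds back into that noise.
    Here the whole power spectrum is collapsed to one intensity D:
        D_{n+1} = d_ext + coupling * rate_fn(D_n)
    Iterate until the input and output intensities agree."""
    d = d_ext
    for _ in range(max_iter):
        d_new = d_ext + coupling * rate_fn(d)
        if abs(d_new - d) < tol:
            return d_new
        d = d_new
    return d

# Hypothetical saturating rate-vs-noise curve of a single model neuron.
rate = lambda d: d / (1.0 + d)
d_star = self_consistent_intensity(rate, coupling=0.5, d_ext=0.2)
```

The real scheme iterates whole spectra, simulating one representative neuron per class and re-estimating its output spectrum each round, but the convergence logic is the same: stop when input and output statistics coincide.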
Sihong Chen; Jing Qin; Xing Ji; Baiying Lei; Tianfu Wang; Dong Ni; Jie-Zhi Cheng
2017-03-01
The gap between computational and semantic features is one of the major factors keeping computer-aided diagnosis (CAD) performance from clinical usage. To bridge this gap, we exploit three multi-task learning (MTL) schemes to leverage heterogeneous computational features derived from deep learning models, namely a stacked denoising autoencoder (SDAE) and a convolutional neural network (CNN), as well as hand-crafted Haar-like and HoG features, for the description of 9 semantic features of lung nodules in CT images. We posit that there may exist relations among semantic features such as "spiculation", "texture", and "margin" that can be explored with MTL. The Lung Image Database Consortium (LIDC) data is adopted in this study for its rich annotation resources. The LIDC nodules were quantitatively scored with respect to the 9 semantic features by 12 radiologists from several institutes in the U.S. By treating each semantic feature as an individual task, the MTL schemes select and map the heterogeneous computational features toward the radiologists' ratings, with cross-validation evaluation on 2400 randomly selected nodules from the LIDC dataset. The experimental results suggest that the predicted semantic scores from the three MTL schemes are closer to the radiologists' ratings than the scores from single-task LASSO and elastic net regression methods. The proposed semantic attribute scoring scheme may provide richer quantitative assessments of nodules for better support of diagnostic decision making and management. Meanwhile, the capability of automatically associating medical image contents with clinical semantic terms may also assist the development of medical search engines.
Temporal Heterogeneity and the Value of Slowness in Robotic Systems
2015-11-01
DIMENSIONS OF HETEROGENEITY: By now, we have become reasonably good at designing distributed control strategies for teams of networked agents in order... possible is the recent emergence of a relatively mature theory of how to coordinate control decisions across teams of networked agents. In fact... Loris, illustrated in Figure 2 (caption: "Slow mammals that serve as bio-inspiration for SlowBot behavior").
A roadmap towards personalized immunology.
Delhalle, Sylvie; Bode, Sebastian F N; Balling, Rudi; Ollert, Markus; He, Feng Q
2018-01-01
Big data generation and computational processing will enable medicine to evolve from a "one-size-fits-all" approach to precise patient stratification and treatment. Significant achievements using "Omics" data have been made, especially in personalized oncology. However, immune cells, relative to tumor cells, show a much higher degree of complexity in heterogeneity, dynamics, memory capability, plasticity and "social" interactions. There is still a long way to go in translating our capability to identify potentially targetable personalized biomarkers into effective personalized therapy for immune-centralized diseases. Here, we discuss the recent advances and successful applications in "Omics" data utilization and network analysis on patients' samples from clinical trials and studies, as well as the major challenges of and strategies towards personalized stratification and treatment for infectious or non-communicable inflammatory diseases such as autoimmune diseases or allergies. We provide a roadmap and highlight experimental, clinical, computational analysis, data management, ethical and regulatory issues to accelerate the implementation of personalized immunology.
Federated and Cloud Enabled Resources for Data Management and Utilization
NASA Astrophysics Data System (ADS)
Rankin, R.; Gordon, M.; Potter, R. G.; Satchwill, B.
2011-12-01
The emergence of cloud computing over the past three years has led to a paradigm shift in how data can be managed, processed and made accessible. Building on the federated data management system offered through the Canadian Space Science Data Portal (www.cssdp.ca), we demonstrate how heterogeneous and geographically distributed data sets and modeling tools have been integrated to form a virtual data center and computational modeling platform that has services for data processing and visualization embedded within it. We also discuss positive and negative experiences in utilizing Eucalyptus and OpenStack cloud applications, and job scheduling facilitated by Condor and Star Cluster. We summarize our findings by demonstrating use of these technologies in the Cloud Enabled Space Weather Data Assimilation and Modeling Platform CESWP (www.ceswp.ca), which is funded through Canarie's (canarie.ca) Network Enabled Platforms program in Canada.
NASA Astrophysics Data System (ADS)
Teuben, P. J.; Wolfire, M. G.; Pound, M. W.; Mundy, L. G.
We have assembled a cluster of Intel Pentium-based PCs running Linux to compute a large set of Photodissociation Region (PDR) and Dust Continuum models. For various reasons the cluster is heterogeneous, currently ranging from a single Pentium-II 333 MHz machine to dual Pentium-III 450 MHz CPU machines. Although this will be sufficient for our "embarrassingly parallelizable problem", it may present some challenges for as yet unplanned future use. In addition, the cluster was used to construct a MIRIAD benchmark and compared to equivalent Ultra-Sparc based workstations. Currently the cluster consists of 8 machines, 14 CPUs, 50 GB of disk space, and a total peak speed of 5.83 GHz, or about 1.5 Gflops. The total cost of this cluster has been about $12,000, including all cabling, networking equipment, rack, and a CD-R backup system. The URL for this project is http://dustem.astro.umd.edu.
Characterizing the heterogeneity of tumor tissues from spatially resolved molecular measures
Zavodszky, Maria I.
2017-01-01
Background Tumor heterogeneity can manifest itself by sub-populations of cells having distinct phenotypic profiles expressed as diverse molecular, morphological and spatial distributions. This inherent heterogeneity poses challenges in terms of diagnosis, prognosis and efficient treatment. Consequently, tools and techniques are being developed to properly characterize and quantify tumor heterogeneity. Multiplexed immunofluorescence (MxIF) is one such technology that offers molecular insight into both inter-individual and intratumor heterogeneity. It enables the quantification of both the concentration and spatial distribution of 60+ proteins across a tissue section. Upon bioimage processing, protein expression data can be generated for each cell from a tissue field of view. Results The Multi-Omics Heterogeneity Analysis (MOHA) tool was developed to compute tissue heterogeneity metrics from MxIF spatially resolved tissue imaging data. This technique computes the molecular state of each cell in a sample based on a pathway or gene set. Spatial states are then computed based on the spatial arrangements of the cells as distinguished by their respective molecular states. MOHA computes tissue heterogeneity metrics from the distributions of these molecular and spatially defined states. A colorectal cancer cohort of approximately 700 subjects with MxIF data is presented to demonstrate the MOHA methodology. Within this dataset, statistically significant correlations were found between the intratumor AKT pathway state diversity and cancer stage and histological tumor grade. Furthermore, intratumor spatial diversity metrics were found to correlate with cancer recurrence. Conclusions MOHA provides a simple and robust approach to characterize molecular and spatial heterogeneity of tissues. Research projects that generate spatially resolved tissue imaging data can take full advantage of this useful technique. 
The MOHA algorithm is implemented as a freely available R script (see supplementary information). PMID:29190747
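As a toy illustration of a heterogeneity metric of the kind MOHA computes from per-cell molecular states (not its actual algorithm), one can take the Shannon diversity of the distribution of discrete cell states; the pathway-state labels below are hypothetical.

```python
import math
from collections import Counter

def state_diversity(cell_states):
    """Shannon diversity of discrete per-cell molecular states: 0 for a
    homogeneous sample, log(k) when k states are equally frequent."""
    counts = Counter(cell_states)
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# Hypothetical pathway states assigned to six cells of one tissue sample.
homogeneous = ["AKT_on"] * 6
mixed = ["AKT_on", "AKT_off"] * 3
```

Computing such a diversity score per tumor sample and correlating it with stage, grade, or recurrence mirrors the kind of cohort-level analysis reported in the abstract.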
An emotional contagion model for heterogeneous social media with multiple behaviors
NASA Astrophysics Data System (ADS)
Xiong, Xi; Li, Yuanyuan; Qiao, Shaojie; Han, Nan; Wu, Yue; Peng, Jing; Li, Binyong
2018-01-01
Emotion varies and propagates with the spatial and temporal information of individuals through social media, which uncovers several interaction mechanisms and reflects the community structure that facilitates individuals' communication and emotional contagion in social networks. Aiming to show the detailed process and characteristics of emotional contagion within social media, we propose an emotional independent cascade model in which an individual's emotion can affect the subsequent emotion of his/her friends. The transmissibility is introduced to measure an individual's capability of propagating emotion in social networks. By analyzing the patterns of emotional contagion on Twitter data, we find that the value of transmissibility differs across layers and across community structures. Extensive experiments were conducted, and the results reveal that the polar emotion of hub users can lead to the disappearance of the opposite emotion, in which case the transmissibility has no effect. The final emotional distribution depends on the initial emotional distribution and the transmissibilities. Individuals from a small community are more likely to change their mood under the influence of community leaders. In addition, we compared the proposed model with two other models: the emotion-based spreader-ignorant-stifler model and the standard independent cascade model. The results demonstrate that the proposed model reflects the real-world situation of emotional contagion for heterogeneous social media, while the computational complexities of all three models are similar.
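The independent-cascade backbone of such a model can be sketched as follows; attaching a signed emotion to each activation and the hub-and-spoke example network are illustrative simplifications, not the authors' full model (which further involves layers, communities, and per-individual transmissibilities).

```python
import random

def emotional_cascade(adj, seeds, transmissibility, seed=0):
    """Independent-cascade dynamics with a signed emotion attached to each
    activation. adj maps node -> list of neighbors; seeds maps node ->
    initial emotion (+1 or -1). Each newly activated node gets exactly one
    chance to pass its emotion to each still-inactive neighbor, succeeding
    with probability `transmissibility`."""
    rng = random.Random(seed)
    emotion = dict(seeds)      # activated nodes and the emotion they hold
    frontier = list(seeds)     # nodes activated in the previous round
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj.get(u, ()):
                if v not in emotion and rng.random() < transmissibility:
                    emotion[v] = emotion[u]  # contagion copies the emotion
                    nxt.append(v)
        frontier = nxt
    return emotion

# Hypothetical hub-and-spoke network seeded with a positive-emotion hub.
adj = {"hub": ["u1", "u2", "u3", "u4"]}
final = emotional_cascade(adj, {"hub": +1}, transmissibility=1.0)
```

Seeding a hub with one polar emotion and sweeping the transmissibility makes it easy to reproduce, in miniature, the hub-dominance effect the abstract reports.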