NASA Astrophysics Data System (ADS)
Liang, Likai; Bi, Yushen
Considering the distributed network management system's demands for high distributivity, extensibility, and reusability, a framework model of a three-tier distributed network management system based on COM/COM+ and DNA is proposed, which adopts software component technology and the N-tier application software framework design idea. We also give the concrete design plan for each layer of this model. Finally, we discuss the internal running process of each layer in the distributed network management system's framework model.
Zand, Pouria; Dilo, Arta; Havinga, Paul
2013-06-27
Current wireless technologies for industrial applications, such as WirelessHART and ISA100.11a, use a centralized management approach where a central network manager handles the requirements of the static network. However, such a centralized approach has several drawbacks. For example, it cannot cope with dynamicity/disturbance in large-scale networks in a real-time manner and it incurs a high communication overhead and latency for exchanging management traffic. In this paper, we therefore propose a distributed network management scheme, D-MSR. It enables the network devices to join the network, schedule their communications, establish end-to-end connections by reserving the communication resources for addressing real-time requirements, and cope with network dynamicity (e.g., node/edge failures) in a distributed manner. To the best of our knowledge, this is the first distributed management scheme based on the IEEE 802.15.4e standard, which guides the nodes in different phases from joining until publishing their sensor data in the network. We demonstrate via simulation that D-MSR can address real-time and reliable communication as well as the high throughput requirements of industrial automation wireless networks, while also achieving higher efficiency in network management than WirelessHART, in terms of delay and overhead.
Calculating a checksum with inactive networking components in a computing system
Aho, Michael E; Chen, Dong; Eisley, Noel A; Gooding, Thomas M; Heidelberger, Philip; Tauferner, Andrew T
2014-12-16
Calculating a checksum utilizing inactive networking components in a computing system, including: identifying, by a checksum distribution manager, an inactive networking component, wherein the inactive networking component includes a checksum calculation engine for computing a checksum; sending, to the inactive networking component by the checksum distribution manager, metadata describing a block of data to be transmitted by an active networking component; calculating, by the inactive networking component, a checksum for the block of data; transmitting, to the checksum distribution manager from the inactive networking component, the checksum for the block of data; and sending, by the active networking component, a data communications message that includes the block of data and the checksum for the block of data.
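The patent abstract above describes a specific offload flow. As an illustration only (not the patented implementation), the Python sketch below mimics that flow with hypothetical names (ChecksumDistributionManager, NetworkComponent) and uses CRC32 as a stand-in for the component's checksum calculation engine.

```python
import zlib
from dataclasses import dataclass

@dataclass
class NetworkComponent:
    """Hypothetical networking component with an on-board checksum engine."""
    name: str
    active: bool

    def compute_checksum(self, data: bytes) -> int:
        # Stand-in for the component's checksum calculation engine (CRC32 here).
        return zlib.crc32(data)

class ChecksumDistributionManager:
    """Sketch of the described flow: offload checksum work to an idle component."""
    def __init__(self, components):
        self.components = components

    def find_inactive(self):
        return next(c for c in self.components if not c.active)

    def send_with_offloaded_checksum(self, active, block: bytes):
        idle = self.find_inactive()              # identify an inactive component
        checksum = idle.compute_checksum(block)  # metadata/data sent, checksum returned
        message = {"data": block, "checksum": checksum}
        print(f"{active.name} transmits block with checksum {checksum:#010x} "
              f"computed by idle component {idle.name}")
        return message

nic0 = NetworkComponent("nic0", active=True)
nic1 = NetworkComponent("nic1", active=False)
mgr = ChecksumDistributionManager([nic0, nic1])
mgr.send_with_offloaded_checksum(nic0, b"example payload")
```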
1985-12-01
Relational to Network Query Translator for a Distributed Database Management System. Thesis, Kevin H. Mahoney, Captain, USAF, AFIT/GCS/ENG/85D-7. Presented to the Faculty of the School of Engineering of the Air Force Institute of Technology, Air University, in partial fulfillment of the requirements for the degree of Master of Science in Computer Systems.
NASA Astrophysics Data System (ADS)
Park, Soomyung; Joo, Seong-Soon; Yae, Byung-Ho; Lee, Jong-Hyun
2002-07-01
In this paper, we present the Optical Cross-Connect (OXC) management control system architecture, which offers scalability and robust maintenance and provides a distributed managing environment in the optical transport network. The OXC system we are developing, which is divided into hardware and internal and external software, is made up of the OXC subsystem with the Optical Transport Network (OTN) sub-layer hardware and the optical switch control system, the signaling control protocol subsystem performing User-to-Network Interface (UNI) and Network-to-Network Interface (NNI) signaling control, the Operation Administration Maintenance & Provisioning (OAM&P) subsystem, and the network management subsystem. The OXC management control system can support flexible expansion of the optical transport network, provide connectivity to heterogeneous external network elements, be added or deleted without interrupting OAM&P services, be remotely operated, provide a global view and detailed information for network planners and operators, and offer a Common Object Request Broker Architecture (CORBA)-based open system architecture in which intelligent service networking functions can be added and deleted easily in the future. To meet these considerations, we adopt an object-oriented development method throughout system analysis, design, and implementation to build an OXC management control system with scalability, maintainability, and a distributed managing environment. As a consequence, the componentification of the OXC operation management functions of each subsystem makes maintenance robust and increases code reusability. Also, the component-based OXC management control system architecture will have flexibility and scalability in nature.
Resource Management for Distributed Parallel Systems
NASA Technical Reports Server (NTRS)
Neuman, B. Clifford; Rao, Santosh
1993-01-01
Multiprocessor systems should exist in the larger context of distributed systems, allowing multiprocessor resources to be shared by those that need them. Unfortunately, typical multiprocessor resource management techniques do not scale to large networks. The Prospero Resource Manager (PRM) is a scalable resource allocation system that supports the allocation of processing resources in large networks and multiprocessor systems. To manage resources in such distributed parallel systems, PRM employs three types of managers: system managers, job managers, and node managers. There exist multiple independent instances of each type of manager, reducing bottlenecks. The complexity of each manager is further reduced because each is designed to utilize information at an appropriate level of abstraction.
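To make the three-manager division of labor concrete, here is a minimal Python sketch of the roles described for PRM (system, job, and node managers). All class names and the allocation policy are hypothetical simplifications, not the actual Prospero implementation.

```python
# Minimal sketch of the three manager roles described for PRM (names hypothetical).

class NodeManager:
    """Manages the resources of a single node and runs tasks assigned to it."""
    def __init__(self, node_id, free_cpus):
        self.node_id, self.free_cpus = node_id, free_cpus

    def run(self, task):
        print(f"node {self.node_id}: running {task}")

class SystemManager:
    """Tracks a pool of nodes and answers allocation requests."""
    def __init__(self, node_managers):
        self.node_managers = node_managers

    def allocate(self, cpus_needed):
        return [n for n in self.node_managers if n.free_cpus >= cpus_needed]

class JobManager:
    """Acquires nodes from a system manager and dispatches one job's tasks."""
    def __init__(self, system_manager):
        self.system_manager = system_manager

    def submit(self, tasks, cpus_per_task=1):
        nodes = self.system_manager.allocate(cpus_per_task)
        for task, node in zip(tasks, nodes):
            node.run(task)

nodes = [NodeManager(i, free_cpus=4) for i in range(3)]
JobManager(SystemManager(nodes)).submit(["task-a", "task-b"])
```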
Supporting scalability and flexibility in a distributed management platform
NASA Astrophysics Data System (ADS)
Jardin, P.
1996-06-01
The TeMIP management platform was developed to manage very large distributed systems such as telecommunications networks. The management of these networks imposes a number of fairly stringent requirements, including partitioning of the network, division of work based on skills and target system types, and the ability to adjust functions to specific operational requirements. This requires the ability to cluster managed resources into domains that are defined entirely at runtime based on operator policies. This paper discusses some of the issues that must be addressed in order to add a dynamic dimension to a management solution.
Technologies for network-centric C4ISR
NASA Astrophysics Data System (ADS)
Dunkelberger, Kirk A.
2003-07-01
Three technologies form the heart of any network-centric command, control, communication, intelligence, surveillance, and reconnaissance (C4ISR) system: distributed processing, reconfigurable networking, and distributed resource management. Distributed processing, enabled by automated federation, mobile code, intelligent process allocation, dynamic multiprocessing groups, checkpointing, and other capabilities, creates a virtual peer-to-peer computing network across the force. Reconfigurable networking, consisting of content-based information exchange, dynamic ad-hoc routing, information operations (perception management), and other component technologies, forms the interconnect fabric for fault-tolerant inter-processor and node communication. Distributed resource management, which provides the means for distributed cooperative sensor management, foe sensor utilization, opportunistic collection, symbiotic inductive/deductive reasoning, and other applications, provides the canonical algorithms for network-centric enterprises and warfare. This paper introduces these three core technologies and briefly discusses a sampling of their component technologies and their individual contributions to network-centric enterprises and warfare. Based on the implied requirements, two new algorithms are defined and characterized which provide critical building blocks for network centricity: distributed asynchronous auctioning and predictive dynamic source routing. The first provides a reliable, efficient, and effective approach to near-optimal assignment problems; the algorithm has been demonstrated to be a viable implementation for ad-hoc command and control, object/sensor pairing, and weapon/target assignment. The second is founded on traditional dynamic source routing (from mobile ad-hoc networking), but leverages the results of ad-hoc command and control (from the contributed auctioning algorithm) into significant increases in connection reliability through forward prediction. Emphasis is placed on the advantages gained from the closed-loop interaction of the multiple technologies in the network-centric application environment.
The Network Information Management System (NIMS) in the Deep Space Network
NASA Technical Reports Server (NTRS)
Wales, K. J.
1983-01-01
In an effort to better manage the enormous amounts of administrative, engineering, and management data that are distributed worldwide, a study was conducted which identified the need for a network support system. The Network Information Management System (NIMS) will give the Deep Space Network the tools to provide an easily accessible source of valid information to support management activities and a more cost-effective method of acquiring, maintaining, and retrieving data.
Technologies for unattended network operations
NASA Technical Reports Server (NTRS)
Jaworski, Allan; Odubiyi, Jide; Holdridge, Mark; Zuzek, John
1991-01-01
The necessary network management functions for a telecommunications, navigation and information management (TNIM) system in the framework of an extension of the ISO model for communications network management are described. Various technologies that could substantially reduce the need for TNIM network management, automate manpower intensive functions, and deal with synchronization and control at interplanetary distances are presented. Specific technologies addressed include the use of the ISO Common Management Interface Protocol, distributed artificial intelligence for network synchronization and fault management, and fault-tolerant systems engineering.
Distributed network management in the flat structured mobile communities
NASA Astrophysics Data System (ADS)
Balandina, Elena
2005-10-01
Delivering proper management to flat structured mobile communities is crucial for improving user experience and increasing application diversity in mobile networks. The available P2P applications perform application-centric management, but this cannot replace network-wide management, especially when a number of different applications are used simultaneously in the network. Network-wide management is the key element required for a smooth transition from standalone P2P applications to self-organizing mobile communities that maintain various services with quality and security guarantees. Classical centralized network management solutions are not applicable in flat structured mobile communities due to the decentralized nature and high mobility of the underlying networks. The basic network management tasks also have to be revised to take into account the specifics of flat structured mobile communities. Network performance management becomes more dependent on the current nodes' context, which also requires extension of the configuration management functionality. Fault management has to take into account the high mobility of the network nodes. Performance and accounting management mainly target maintaining efficient and fair access to resources within the community; however, they also allow unbalanced resource use by nodes that explicitly permit it, e.g., as a voluntary donation to the community or for professional (commercial) reasons. Security management must implement new trust models, which are based on community feedback, professional authorization, or a mix of both. To fulfill these and other particularities of flat structured mobile communities, a new network management solution is needed. The paper presents a distributed network management solution for flat structured mobile communities. The paper also points out possible network management roles for the different parties (e.g., operators, service-providing hubs/super nodes, etc.) involved in a service-providing chain.
Cooperative Learning for Distributed In-Network Traffic Classification
NASA Astrophysics Data System (ADS)
Joseph, S. B.; Loo, H. R.; Ismail, I.; Andromeda, T.; Marsono, M. N.
2017-04-01
Inspired by the concept of autonomic distributed/decentralized network management schemes, we consider the issue of information exchange among distributed network nodes to improve network performance and promote scalability for in-network monitoring. In this paper, we propose a cooperative learning algorithm for propagation and synchronization of network information among autonomic distributed network nodes for online traffic classification. The results show that network nodes with sharing capability perform better, with a higher average accuracy of 89.21% (sharing data) and 88.37% (sharing clusters) compared to 88.06% for nodes without cooperative learning capability. The overall performance indicates that cooperative learning is promising for distributed in-network traffic classification.
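As a rough illustration of the "sharing clusters" idea, the toy Python sketch below has two monitoring nodes summarize their local traffic with k-means centroids and exchange only those centroids. This is an assumption-laden stand-in, not the authors' cooperative learning algorithm, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_centroids(x, k=2, iters=10):
    """Toy k-means on one node's local traffic features (a stand-in for the
    node-local learning step; not the authors' algorithm)."""
    centroids = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(x[:, None] - centroids, axis=2), axis=1)
        centroids = np.array([x[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return centroids

# Two monitoring nodes observe different slices of the traffic.
node_a = rng.normal([0.0, 0.0], 1.0, size=(100, 2))
node_b = rng.normal([5.0, 5.0], 1.0, size=(100, 2))

# "Sharing clusters": nodes exchange compact centroids rather than raw flow
# records, and each node merges what it receives into its own model.
merged_model = np.vstack([local_centroids(node_a), local_centroids(node_b)])
print("merged model centroids:\n", np.round(merged_model, 2))
```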
2002-09-01
[Table-of-contents excerpt: 5. Time Management; 6. Data Distribution Management; b. Ownership Management; c. Data Distribution Management; 2. Additional Objects and Interactions; Figure 6. Data Distribution Management (From: ref. 2); Figure 7. RTI and Federate Code Responsibilities (From: ref. 2).]
Final Report - Cloud-Based Management Platform for Distributed, Multi-Domain Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chowdhury, Pulak; Mukherjee, Biswanath
2017-11-03
In this Department of Energy (DOE) Small Business Innovation Research (SBIR) Phase II project final report, Ennetix presents the development of a solution for end-to-end monitoring, analysis, and visualization of network performance for distributed networks. This solution benefits enterprises of all sizes, operators of distributed and federated networks, and service providers.
Autonomous distributed self-organization for mobile wireless sensor networks.
Wen, Chih-Yu; Tang, Hung-Kai
2009-01-01
This paper presents an adaptive combined-metrics-based clustering scheme for mobile wireless sensor networks, which manages the mobile sensors by utilizing the hierarchical network structure and allocates network resources efficiently. A local criterion is used to help mobile sensors form a new cluster or join a current cluster. The messages transmitted during hierarchical clustering are applied to choose distributed gateways such that communication between adjacent clusters and distributed topology control can be achieved. In order to balance the load among clusters and govern topology change, a cluster reformation scheme using localized criteria is implemented. The proposed scheme is simulated and analyzed to abstract the network behaviors in a number of settings. The experimental results show that the proposed algorithm provides efficient network topology management and achieves high scalability in mobile sensor networks.
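The following Python fragment illustrates what a local join-or-form decision might look like. The criterion shown (nearest in-range cluster head with spare capacity) is an invented placeholder for the paper's combined metric; all names and thresholds are hypothetical.

```python
import math

def join_or_form(node_pos, cluster_heads, radio_range=30.0, max_members=8):
    """Illustrative local criterion (not the paper's exact metric): join the
    nearest cluster head that is in range and has spare capacity, otherwise
    declare a new cluster."""
    in_range = [(math.dist(node_pos, h["pos"]), h) for h in cluster_heads
                if math.dist(node_pos, h["pos"]) <= radio_range
                and h["members"] < max_members]
    if in_range:
        _, head = min(in_range, key=lambda t: t[0])
        head["members"] += 1
        return f"join cluster {head['id']}"
    return "form new cluster"

heads = [{"id": 1, "pos": (10, 10), "members": 3},
         {"id": 2, "pos": (80, 80), "members": 8}]
print(join_or_form((20, 15), heads))   # in range and not full -> joins cluster 1
print(join_or_form((60, 60), heads))   # out of range / full -> forms a new cluster
```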
Quantum key distribution network for multiple applications
NASA Astrophysics Data System (ADS)
Tajima, A.; Kondoh, T.; Ochi, T.; Fujiwara, M.; Yoshino, K.; Iizuka, H.; Sakamoto, T.; Tomita, A.; Shimamura, E.; Asami, S.; Sasaki, M.
2017-09-01
The fundamental architecture and functions of secure key management in a quantum key distribution (QKD) network, with enhanced universal interfaces for smooth key sharing between two arbitrary nodes and enabling multiple secure communication applications, are proposed. The proposed architecture consists of three layers: a quantum layer, a key management layer, and a key supply layer. We explain the functions of each layer, the key formats in each layer, and the key lifecycle for enabling a practical QKD network. A quantum key distribution-advanced encryption standard (QKD-AES) hybrid system and an encrypted smartphone system were developed as secure communication applications on our QKD network. The validity and usefulness of these systems were demonstrated on the Tokyo QKD Network testbed.
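A minimal sketch of the three-layer key handling described above is given below in Python, with os.urandom standing in for the quantum layer and all class names assumed rather than taken from the paper.

```python
import os, hashlib

class QuantumLayer:
    """Stand-in for a QKD link: here we simply draw random bytes."""
    def generate_key_material(self, n_bytes):
        return os.urandom(n_bytes)

class KeyManagementLayer:
    """Stores key material per peer and tracks consumption (lifecycle sketch)."""
    def __init__(self):
        self.pools = {}

    def store(self, peer, material):
        self.pools.setdefault(peer, bytearray()).extend(material)

    def consume(self, peer, n_bytes):
        pool = self.pools[peer]
        key, self.pools[peer] = bytes(pool[:n_bytes]), pool[n_bytes:]
        return key

class KeySupplyLayer:
    """Hands application-ready keys (e.g., for an AES hybrid) to consumers."""
    def __init__(self, kml):
        self.kml = kml

    def get_key(self, peer, n_bytes=32):
        key = self.kml.consume(peer, n_bytes)
        return key, hashlib.sha256(key).hexdigest()[:16]  # key plus a short key ID

kml = KeyManagementLayer()
kml.store("node-B", QuantumLayer().generate_key_material(1024))
key, key_id = KeySupplyLayer(kml).get_key("node-B")
print(f"supplied 256-bit key {key_id}... to the application layer")
```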
Management of the Space Station Freedom onboard local area network
NASA Technical Reports Server (NTRS)
Miller, Frank W.; Mitchell, Randy C.
1991-01-01
An operational approach is proposed for managing the Data Management System Local Area Network (LAN) on Space Station Freedom. An overview of the onboard LAN elements is presented first, followed by a proposal of the operational guidelines by which management of the onboard network may be effected. To implement the guidelines, a recommendation is then presented on a set of network management parameters which should be made available in the onboard Network Operating System Computer Software Configuration Item and Fiber Distributed Data Interface firmware. Finally, some implications for the implementation of the various network management elements are discussed.
Resource Management In Peer-To-Peer Networks: A Nadse Approach
NASA Astrophysics Data System (ADS)
Patel, R. B.; Garg, Vishal
2011-12-01
This article presents a common solution to Peer-to-Peer (P2P) network problems and distributed computing with the help of the "Neighbor Assisted Distributed and Scalable Environment" (NADSE). NADSE supports both device and code mobility. In this article we mainly focus on the NADSE-based resource management technique and show how information dissemination and searching are sped up when using the NADSE service provider node in a large network. Results show that the performance of the NADSE network is better in comparison to Gnutella and Freenet.
Motion/imagery secure cloud enterprise architecture analysis
NASA Astrophysics Data System (ADS)
DeLay, John L.
2012-06-01
Cloud computing with storage virtualization and new service-oriented architectures brings a new perspective to the aspect of a distributed motion imagery and persistent surveillance enterprise. Our existing research is focused mainly on content management, distributed analytics, and WAN distributed cloud networking performance issues of cloud-based technologies. The potential of leveraging cloud-based technologies for hosting motion imagery, imagery, and analytics workflows for DOD and security applications is relatively unexplored. This paper examines technologies for managing, storing, processing, and disseminating motion imagery and imagery within a distributed network environment. Finally, we propose areas for future research in the area of distributed cloud content management enterprises.
A system for distributed intrusion detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snapp, S.R.; Brentano, J.; Dias, G.V.
1991-01-01
The study of providing security in computer networks is a rapidly growing area of interest because the network is the medium over which most attacks or intrusions on computer systems are launched. One approach to solving this problem is the intrusion-detection concept, whose basic premise is that not only is abandoning the existing, huge infrastructure of possibly insecure computer and network systems impossible, but replacing them with totally secure systems may not be feasible or cost-effective. Previous work on intrusion-detection systems was performed on stand-alone hosts and on a broadcast local area network (LAN) environment. The focus of our present research is to extend our network intrusion-detection concept from the LAN environment to arbitrarily wider areas, with the network topology being arbitrary as well. The generalized distributed environment is heterogeneous, i.e., the network nodes can be hosts or servers from different vendors, or some of them could be LAN managers, such as the network security monitor (NSM) of our previous work. The proposed architecture for this distributed intrusion-detection system consists of the following components: a host manager in each host; a LAN manager for monitoring each LAN in the system; and a central manager, placed at a single secure location, which receives reports from the various host and LAN managers, processes and correlates these reports, and detects intrusions. 11 refs., 2 figs.
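The host manager / LAN manager / central manager hierarchy can be sketched in a few lines of Python. The detection rules below are invented placeholders; only the report-and-correlate structure mirrors the architecture described in the abstract.

```python
from collections import defaultdict

class CentralManager:
    """Correlates reports from host and LAN managers (toy rule: flag any source
    reported by more than one monitor)."""
    def __init__(self):
        self.reports = defaultdict(set)

    def receive(self, monitor, suspicious_source):
        self.reports[suspicious_source].add(monitor)
        if len(self.reports[suspicious_source]) > 1:
            print(f"ALERT: {suspicious_source} reported by "
                  f"{sorted(self.reports[suspicious_source])}")

class HostManager:
    def __init__(self, name, central):
        self.name, self.central = name, central

    def audit(self, event):
        if event.get("failed_logins", 0) > 3:          # illustrative local rule
            self.central.receive(self.name, event["source"])

class LANManager:
    def __init__(self, name, central):
        self.name, self.central = name, central

    def monitor(self, flow):
        if flow.get("port_scans", 0) > 10:             # illustrative local rule
            self.central.receive(self.name, flow["source"])

central = CentralManager()
HostManager("host-7", central).audit({"source": "10.0.0.5", "failed_logins": 5})
LANManager("lan-2", central).monitor({"source": "10.0.0.5", "port_scans": 20})
```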
NASA Astrophysics Data System (ADS)
Abdelbaki, Chérifa; Benchaib, Mohamed Mouâd; Benziada, Salim; Mahmoudi, Hacène; Goosen, Mattheus
2017-06-01
For more effective management of a water distribution network in an arid region, Mapinfo GIS (8.0) software was coupled with a hydraulic model (EPANET 2.0) and applied to a case study region, Chetouane, situated in the north-west of Algeria. The area is characterized not only by water scarcity but also by poor water management practices. The results showed that a combination of GIS and modeling permits network operators to better analyze malfunctions, with a resulting faster response, as well as facilitating an improved understanding of the work performed on the network. The grouping of GIS and modeling as an operating tool allows managers to diagnose a network, to study solutions to problems, and to predict future situations. The latter can assist them in making informed decisions to ensure an acceptable performance level for optimal network operation.
1998-07-01
Report No. WH97JR00-A002. Real-Time Network Management, Final Technical Report, Synectics Corporation, July 1998. Sponsored by the Defense Advanced Research Projects Agency. Approved for public release; distribution unlimited. [Contents excerpt: 2.1.2.1 WAN-class Networks; 2.1.2.2 IEEE 802.3-class Networks; 2.2 Task 2 - Object Modeling for Architecture; 2.2.1 Managed Objects.]
A distributed data base management system. [for Deep Space Network
NASA Technical Reports Server (NTRS)
Bryan, A. I.
1975-01-01
Major system design features of a distributed data management system for the NASA Deep Space Network (DSN) designed for continuous two-way deep space communications are described. The reasons for which the distributed data base utilizing third-generation minicomputers is selected as the optimum approach for the DSN are threefold: (1) with a distributed master data base, valid data is available in real-time to support DSN management activities at each location; (2) data base integrity is the responsibility of local management; and (3) the data acquisition/distribution and processing power of a third-generation computer enables the computer to function successfully as a data handler or as an on-line process controller. The concept of the distributed data base is discussed along with the software, data base integrity, and hardware used. The data analysis/update constraint is examined.
Laadan, Oren; Nieh, Jason; Phung, Dan
2012-10-02
Methods, media and systems for managing a distributed application running in a plurality of digital processing devices are provided. In some embodiments, a method includes running one or more processes associated with the distributed application in virtualized operating system environments on a plurality of digital processing devices, suspending the one or more processes, and saving network state information relating to network connections among the one or more processes. The method further includes storing process information relating to the one or more processes, recreating the network connections using the saved network state information, and restarting the one or more processes using the stored process information.
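A toy Python sketch of the checkpoint/restore sequence described in the claims (suspend, save network state, store process information, recreate connections, restart) is shown below. Data structures and function names are hypothetical, and real process and socket state are replaced by plain dictionaries.

```python
import json

def checkpoint(processes, connections):
    """Sketch of the described steps: suspend, save network state, save process state."""
    for p in processes:
        p["status"] = "suspended"                       # 1. suspend processes
    snapshot = {
        "network": connections,                         # 2. save connection state
        "processes": [dict(p) for p in processes],      # 3. save process state
    }
    return json.dumps(snapshot)

def restore(snapshot_json):
    """Recreate connections, then restart processes from the stored state."""
    snap = json.loads(snapshot_json)
    for c in snap["network"]:
        print(f"recreating connection {c['src']} -> {c['dst']}")
    for p in snap["processes"]:
        p["status"] = "running"
    return snap["processes"]

procs = [{"pid": 101, "node": "A", "status": "running"},
         {"pid": 202, "node": "B", "status": "running"}]
conns = [{"src": 101, "dst": 202}]
print(restore(checkpoint(procs, conns)))
```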
A Performance Comparison of Tree and Ring Topologies in Distributed System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Min
A distributed system is a collection of computers that are connected via a communication network. Distributed systems have become commonplace due to the wide availability of low-cost, high-performance computers and network devices. However, the management infrastructure often does not scale well when distributed systems get very large. Some of the considerations in building a distributed system are the choice of the network topology and the method used to construct the distributed system, so as to optimize the scalability and reliability of the system, lower the cost of linking nodes together, minimize the message delay in transmission, and simplify system resource management. We have developed a new distributed management system that is able to handle dynamic increases in system size, detect and recover from unexpected failures of system services, and manage system resources. The topologies used in the system are the tree-structured network and the ring-structured network. This thesis presents the research background, system components, design, implementation, experiment results, and conclusions of our work. The thesis is organized as follows: the research background is presented in chapter 1. Chapter 2 describes the system components, including the different node types and connection types used in the system. In chapter 3, we describe the message types and message formats in the system. We discuss the system design and implementation in chapter 4. In chapter 5, we present the test environment and results. Finally, we conclude with a summary and describe our future work in chapter 6.
Managed traffic evacuation using distributed sensor processing
NASA Astrophysics Data System (ADS)
Ramuhalli, Pradeep; Biswas, Subir
2005-05-01
This paper presents an integrated sensor network and distributed event processing architecture for managed in-building traffic evacuation during natural and human-caused disasters, including earthquakes, fire and biological/chemical terrorist attacks. The proposed wireless sensor network protocols and distributed event processing mechanisms offer a new distributed paradigm for improving reliability in building evacuation and disaster management. The networking component of the system is constructed using distributed wireless sensors for measuring environmental parameters such as temperature, humidity, and detecting unusual events such as smoke, structural failures, vibration, biological/chemical or nuclear agents. Distributed event processing algorithms will be executed by these sensor nodes to detect the propagation pattern of the disaster and to measure the concentration and activity of human traffic in different parts of the building. Based on this information, dynamic evacuation decisions are taken for maximizing the evacuation speed and minimizing unwanted incidents such as human exposure to harmful agents and stampedes near exits. A set of audio-visual indicators and actuators are used for aiding the automated evacuation process. In this paper we develop integrated protocols, algorithms and their simulation models for the proposed sensor networking and the distributed event processing framework. Also, efficient harnessing of the individually low, but collectively massive, processing abilities of the sensor nodes is a powerful concept behind our proposed distributed event processing algorithms. Results obtained through simulation in this paper are used for a detailed characterization of the proposed evacuation management system and its associated algorithmic components.
Distributing stand inventory data and maps over a wide area network
Thomas E. Burk
2000-01-01
High-speed networks connecting multiple levels of management are becoming commonplace among forest resources organizations. Such networks can be used to deliver timely spatial and aspatial data relevant to the management of stands to field personnel. A network infrastructure allows maintenance of cost-effective, centralized databases with the potential for updating by...
Meeker, Daniella; Jiang, Xiaoqian; Matheny, Michael E; Farcas, Claudiu; D'Arcy, Michel; Pearlman, Laura; Nookala, Lavanya; Day, Michele E; Kim, Katherine K; Kim, Hyeoneui; Boxwala, Aziz; El-Kareh, Robert; Kuo, Grace M; Resnic, Frederic S; Kesselman, Carl; Ohno-Machado, Lucila
2015-11-01
Centralized and federated models for sharing data in research networks currently exist. To build multivariate data analysis for centralized networks, transfer of patient-level data to a central computation resource is necessary. The authors implemented distributed multivariate models for federated networks in which patient-level data is kept at each site and data exchange policies are managed in a study-centric manner. The objective was to implement infrastructure that supports the functionality of some existing research networks (e.g., cohort discovery, workflow management, and estimation of multivariate analytic models on centralized data) while adding additional important new features, such as algorithms for distributed iterative multivariate models, a graphical interface for multivariate model specification, synchronous and asynchronous response to network queries, investigator-initiated studies, and study-based control of staff, protocols, and data sharing policies. Based on the requirements gathered from statisticians, administrators, and investigators from multiple institutions, the authors developed infrastructure and tools to support multisite comparative effectiveness studies using web services for multivariate statistical estimation in the SCANNER federated network. The authors implemented massively parallel (map-reduce) computation methods and a new policy management system to enable each study initiated by network participants to define the ways in which data may be processed, managed, queried, and shared. The authors illustrated the use of these systems among institutions with highly different policies and operating under different state laws. Federated research networks need not limit distributed query functionality to count queries, cohort discovery, or independently estimated analytic models. Multivariate analyses can be efficiently and securely conducted without patient-level data transport, allowing institutions with strict local data storage requirements to participate in sophisticated analyses based on federated research networks. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
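One common way to estimate a multivariate model without moving patient-level data is to iterate on site-computed gradients; the numpy sketch below shows such a distributed logistic regression on synthetic data. This is a generic illustration of the idea, not the SCANNER implementation or its map-reduce machinery.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_gradient(w, X, y):
    """Each site computes the logistic-regression gradient on its own patients;
    only this aggregate (no patient-level rows) leaves the site."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

# Two hypothetical sites with locally held data (never pooled centrally).
w_true = np.array([1.5, -2.0, 0.5])
sites = []
for _ in range(2):
    X = rng.normal(size=(500, 3))
    y = (rng.random(500) < 1 / (1 + np.exp(-X @ w_true))).astype(float)
    sites.append((X, y))

w = np.zeros(3)
for _ in range(200):                                      # coordinator iterates
    grads = [local_gradient(w, X, y) for X, y in sites]   # sites return gradients
    w -= 0.5 * np.mean(grads, axis=0)
print("estimated coefficients:", np.round(w, 2))
```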
Unified web-based network management based on distributed object orientated software agents
NASA Astrophysics Data System (ADS)
Djalalian, Amir; Mukhtar, Rami; Zukerman, Moshe
2002-09-01
This paper presents an architecture that provides a unified web interface to managed network devices that support CORBA, OSI, or Internet-based network management protocols. A client gains access to managed devices through a web browser, which is used to issue management operations and receive event notifications. The proposed architecture is compatible with both the OSI Management Reference Model and CORBA. The steps required for designing the building blocks of such an architecture are identified.
Efficient Management of Certificate Revocation Lists in Smart Grid Advanced Metering Infrastructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cebe, Mumin; Akkaya, Kemal
Advanced Metering Infrastructure (AMI) forms a communication network for the collection of power data from smart meters in the Smart Grid. As the communication within an AMI needs to be secure, key management becomes an issue due to overhead and limited resources. While using public keys eliminates some of the overhead of key management, there are still challenges regarding the certificates that store and certify the public keys. In particular, distribution and storage of the certificate revocation list (CRL) is a major challenge due to the cost of distribution and storage in AMI networks, which typically consist of wireless multi-hop networks. Motivated by the need to keep CRL distribution and storage cost-effective and scalable, in this paper we present a distributed CRL management model utilizing the idea of distributed hash trees (DHTs) from peer-to-peer (P2P) networks. The basic idea is to share the burden of storing CRLs among all the smart meters by exploiting the meshing capability of the smart meters among each other. Thus, using DHTs not only reduces the space requirements for CRLs but also makes CRL updates more convenient. We implemented this structure on ns-3 using the IEEE 802.11s mesh standard as a model for AMI and demonstrated its superior performance with respect to traditional methods of CRL management through extensive simulations.
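The core idea, spreading CRL storage across meters so each meter keeps only a slice, can be sketched as follows in Python. The consistent-hashing scheme and names here are assumptions for illustration, not the paper's exact DHT design or its ns-3 model.

```python
import hashlib
from bisect import bisect_right

def h(value: str) -> int:
    return int(hashlib.sha256(value.encode()).hexdigest(), 16)

class CRLRing:
    """Consistent-hash sketch: each smart meter stores only the slice of the
    CRL whose serial numbers hash into its arc, instead of the full list."""
    def __init__(self, meters):
        self.ring = sorted((h(m), m) for m in meters)
        self.keys = [k for k, _ in self.ring]

    def responsible_meter(self, serial: str) -> str:
        idx = bisect_right(self.keys, h(serial)) % len(self.ring)
        return self.ring[idx][1]

ring = CRLRing([f"meter-{i}" for i in range(8)])
revoked = ["0x1A2B", "0x33F0", "0x7C01", "0x90EE"]
for serial in revoked:
    print(f"revoked cert {serial} stored at {ring.responsible_meter(serial)}")
```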
Zounemat-Kermani, Mohammad; Ramezani-Charmahineh, Abdollah; Adamowski, Jan; Kisi, Ozgur
2018-06-13
Chlorination, the basic treatment utilized for drinking water sources, is widely used for water disinfection and pathogen elimination in water distribution networks. Therefore, proper prediction of chlorine consumption is of great importance for water distribution network performance. In this respect, data mining techniques, which have the ability to discover the relationship between dependent variable(s) and independent variables, can be considered as alternative approaches in comparison to conventional methods (e.g., numerical methods). This study examines the applicability of three key methods, based on the data mining approach, for predicting chlorine levels in four water distribution networks. ANNs (artificial neural networks, including the multi-layer perceptron neural network, MLPNN, and radial basis function neural network, RBFNN), SVM (support vector machine), and CART (classification and regression tree) methods were used to estimate the concentration of residual chlorine in distribution networks for three villages in Kerman Province, Iran. Produced water (flow), chlorine consumption, and residual chlorine were collected daily for 3 years. An assessment of the studied models using several statistical criteria (NSC, RMSE, R², and SEP) indicated that, in general, MLPNN has the greatest capability for predicting chlorine levels, followed by CART, SVM, and RBF-ANN. The weaker performance of the data-driven methods in the water distribution networks, in some cases, could be attributed to improper chlorination management rather than the methods' capability.
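For readers unfamiliar with the modeling setup, the sketch below trains an MLP regressor on synthetic flow/dose/residual data using scikit-learn. The variables, units, and data are invented stand-ins for the study's daily records, and the model settings are not those tuned by the authors.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the study's daily records (assumed variables/units):
# produced water (flow) and chlorine consumption as inputs, residual chlorine
# in the network as the target.
flow = rng.uniform(50, 200, 1000)                     # m3/day (assumed)
dose = rng.uniform(0.5, 2.5, 1000)                    # mg/L (assumed)
residual = 0.6 * dose - 0.001 * flow + rng.normal(0, 0.05, 1000)

X = np.column_stack([flow, dose])
X_tr, X_te, y_tr, y_te = train_test_split(X, residual, random_state=0)

# Scale inputs, then fit a small multi-layer perceptron regressor.
mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(10, 10),
                                 max_iter=3000, random_state=0))
mlp.fit(X_tr, y_tr)
print("held-out R^2:", round(mlp.score(X_te, y_te), 3))
```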
A distributed data base management facility for the CAD/CAM environment
NASA Technical Reports Server (NTRS)
Balza, R. M.; Beaudet, R. W.; Johnson, H. R.
1984-01-01
Current/PAD research in the area of distributed data base management considers facilities for supporting CAD/CAM data management in a heterogeneous network of computers encompassing multiple data base managers supporting a variety of data models. These facilities include coordinated execution of multiple DBMSs to provide for administration of and access to data distributed across them.
Animal Telemetry Network Data Assembly Center: Phase 2
2015-09-30
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Animal Telemetry Network Data Assembly Center: Phase 2. Barbara Block & Randy Kochevar, Hopkins Marine Station, Stanford University, 120 Oceanview Blvd., Pacific Grove, CA; phone: (831) 655-6236. ... prior development for tag data management (e.g., TOPP, GTOPP, GulfTOPP) of animal telemetry data management into a single system (DAC) with an ...
NASA Technical Reports Server (NTRS)
George, Jude (Inventor); Schlecht, Leslie (Inventor); McCabe, James D. (Inventor); LeKashman, John Jr. (Inventor)
1998-01-01
A network management system has SNMP agents distributed at one or more sites, an input output module at each site, and a server module located at a selected site for communicating with input output modules, each of which is configured for both SNMP and HNMP communications. The server module is configured exclusively for HNMP communications, and it communicates with each input output module according to the HNMP. Non-iconified, informationally complete views of network elements are provided to aid in network management.
Algorithms for Data Sharing, Coordination, and Communication in Dynamic Network Settings
2007-12-03
... problems in dynamic networks, focusing on mobile networks with wireless communication. Problems studied include data management, time synchronization, communication problems (broadcast, geocast, and point-to-point routing), and distributed ... Findings include: (1) the discovery of a fundamental limitation in capabilities for time synchronization in large networks; (2) the identification and development of the ...
Planning Considerations for Secure Network Protocols
1999-03-01
... (key distribution/management) requirements needed to support network security services are examined. The thesis concludes by identifying tactical user network requirements and suggests security issues to be considered in concert with network ...
Advanced Distribution Management System
NASA Astrophysics Data System (ADS)
Avazov, Artur R.; Sobinova, Liubov A.
2016-02-01
This article describes the advisability of using advanced distribution management systems in the electricity distribution networks area and considers premises of implementing ADMS within the Smart Grid era. Also, it gives the big picture of ADMS and discusses the ADMS advantages and functionalities.
NASA Astrophysics Data System (ADS)
Eschenbächer, Jens; Seifert, Marcus; Thoben, Klaus-Dieter
Distributed innovation processes are considered as a new option to handle both the complexity and the speed with which new products and services need to be prepared. Indeed, most research on innovation processes has focused on multinational companies with an intra-organisational perspective. The phenomena of innovation processes in networks - with an inter-organisational perspective - have been almost neglected. Collaborative networks present a perfect playground for such distributed innovation processes, and the authors highlight Virtual Organisations in particular because of their dynamic behaviour. Research activities supporting distributed innovation processes in VOs are rather new, so that little knowledge about the management of such processes is available. This gap is addressed with the presentation of the collaborative network relationship analysis. It will be shown that a qualitative planning of collaboration intensities can support real business cases by providing knowledge and planning data.
Multiobjective assessment of distributed energy storage location in electricity networks
NASA Astrophysics Data System (ADS)
Ribeiro Gonçalves, José António; Neves, Luís Pires; Martins, António Gomes
2017-07-01
This paper presents a methodology to provide information to a decision maker on the associated impacts, both of an economic and a technical nature, of possible management schemes of storage units, for choosing the best location of distributed storage devices with a multiobjective optimisation approach based on genetic algorithms. The methodology was applied to a case study, a known distribution network model in which the installation of distributed storage units was tested using lithium-ion batteries. The obtained results show a significant influence of the charging/discharging profile of batteries on the choice of their best location, as well as the relevance that these choices may have for the different network management objectives, for example, reducing network energy losses or minimising voltage deviations. Results also show that cost-effectiveness of an energy-only service is difficult to achieve with the tested systems, both due to capital cost and due to the efficiency of conversion.
An object-based storage model for distributed remote sensing images
NASA Astrophysics Data System (ADS)
Yu, Zhanwu; Li, Zhongmin; Zheng, Sheng
2006-10-01
It is very difficult to design an integrated storage solution for distributed remote sensing images that offers high-performance network storage services and secure data sharing across platforms using current network storage models such as direct attached storage, network attached storage, and storage area network. Object-based storage, a new generation of network storage technology that has emerged recently, separates the data path, the control path, and the management path, which solves the metadata bottleneck of traditional storage models, and has the characteristics of parallel data access, data sharing across platforms, intelligence of storage devices, and security of data access. We use object-based storage in the storage management of remote sensing images to construct an object-based storage model for distributed remote sensing images. In this storage model, remote sensing images are organized as remote sensing objects stored in the object-based storage devices. According to the storage model, we present the architecture of a distributed remote sensing image application system based on object-based storage, and give some test results comparing the write performance of the traditional network storage model and the object-based storage model.
An application of digital network technology to medical image management.
Chu, W K; Smith, C L; Wobig, R K; Hahn, F A
1997-01-01
With the advent of network technology, there is considerable interest within the medical community in managing the storage and distribution of medical images by digital means. Higher workflow efficiency leading to better patient care is one of the commonly cited outcomes [1,2]. However, due to the size of medical image files and the unique requirements in detail and resolution, medical image management poses special challenges. Storage requirements are usually large, which implies expenses or investment costs that put digital networking projects financially out of reach for many clinical institutions. New advances in network technology and telecommunication, in conjunction with the decreasing cost of computing devices, have made digital image management achievable. In our institution, we have recently completed a pilot project to distribute medical images both within the physical confines of the clinical enterprise and outside the medical center campus. The design concept and the configuration of a comprehensive digital image network are described in this report.
Web-based monitoring and management system for integrated enterprise-wide imaging networks
NASA Astrophysics Data System (ADS)
Ma, Keith; Slik, David; Lam, Alvin; Ng, Won
2003-05-01
Mass proliferation of IP networks and the maturity of standards has enabled the creation of sophisticated image distribution networks that operate over Intranets, Extranets, Communities of Interest (CoI) and even the public Internet. Unified monitoring, provisioning and management of such systems at the application and protocol levels represent a challenge. This paper presents a web based monitoring and management tool that employs established telecom standards for the creation of an open system that enables proactive management, provisioning and monitoring of image management systems at the enterprise level and across multi-site geographically distributed deployments. Utilizing established standards including ITU-T M.3100, and web technologies such as XML/XSLT, JSP/JSTL, and J2SE, the system allows for seamless device and protocol adaptation between multiple disparate devices. The goal has been to develop a unified interface that provides network topology views, multi-level customizable alerts, real-time fault detection as well as real-time and historical reporting of all monitored resources, including network connectivity, system load, DICOM transactions and storage capacities.
Analysis and Application of Microgrids
NASA Astrophysics Data System (ADS)
Yue, Lu
New trends of generating electricity locally and utilizing non-conventional or renewable energy sources have attracted increasing interest due to the gradual depletion of conventional fossil fuel energy sources. The new type of power generation is called Distributed Generation (DG), and the energy sources utilized by Distributed Generation are termed Distributed Energy Resources (DERs). With DGs embedded in them, distribution networks evolve from passive to active distribution networks, enabling bidirectional power flows. By further incorporating flexible and intelligent controllers and employing future technologies, active distribution networks will turn into a Microgrid. A Microgrid is a small-scale, low-voltage Combined Heat and Power (CHP) supply network designed to supply electrical and heat loads for a small community. To further implement Microgrids, a sophisticated Microgrid Management System must be integrated. However, because a Microgrid has multiple DERs integrated and is likely to be deregulated, the ability to perform real-time OPF and economic dispatch over a fast, advanced communication network is necessary. In this thesis, problems such as power system modelling, power flow solving, and power system optimization are studied first. Then, Distributed Generation and Microgrids are studied and reviewed, including a comprehensive review of current distributed generation technologies and Microgrid Management Systems. Finally, a computer-based AC optimization method which minimizes the total transmission loss and generation cost of a Microgrid is proposed, along with a wireless communication scheme based on synchronized Code Division Multiple Access (sCDMA). The algorithm is tested with a 6-bus power system and a 9-bus power system.
Distributed Prognostic Health Management with Gaussian Process Regression
NASA Technical Reports Server (NTRS)
Saha, Sankalita; Saha, Bhaskar; Saxena, Abhinav; Goebel, Kai Frank
2010-01-01
Distributed prognostics architecture design is an enabling step for efficient implementation of health management systems. A major challenge encountered in such design is the formulation of optimal distributed prognostics algorithms. In this paper, we present a distributed GPR-based prognostics algorithm whose target platform is a wireless sensor network. In addition to the challenges encountered in a distributed implementation, a wireless network poses constraints on communication patterns, thereby making the problem more challenging. The prognostics application used to demonstrate our new algorithms is battery prognostics. In order to present trade-offs among different prognostic approaches, we present a comparison with a distributed implementation of particle-filter-based prognostics for the same battery data.
Cooperative Management of a Lithium-Ion Battery Energy Storage Network: A Distributed MPC Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Huazhen; Wu, Di; Yang, Tao
2016-12-12
This paper presents a study of cooperative power supply and storage for a network of Lithium-ion energy storage systems (LiBESSs). We propose to develop a distributed model predictive control (MPC) approach for two reasons. First, being able to account for the practical constraints of a LiBESS, the MPC can enable constraint-aware operation. Second, distributed management can cope with a complex network that integrates a large number of LiBESSs over a complex communication topology. With this motivation, we then build a fully distributed MPC algorithm from an optimization perspective, based on an extension of the alternating direction method of multipliers (ADMM). A simulation example is provided to demonstrate the effectiveness of the proposed algorithm.
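To illustrate the flavor of an ADMM-based distributed allocation (much simpler than the paper's full distributed MPC), the numpy sketch below uses the standard ADMM sharing formulation to split a total power request among units with quadratic costs and box limits. The costs, limits, and single-step setting are assumptions for illustration only.

```python
import numpy as np

# Toy parameters (assumed): quadratic cost c_i * p_i^2 per LiBESS, per-unit
# power limits, and a total power request to be met by the network as a whole.
cost = np.array([0.10, 0.25, 0.15])      # $/kW^2
p_min = np.array([0.0, 0.0, 0.0])        # kW
p_max = np.array([40.0, 30.0, 50.0])     # kW
demand = 60.0                            # kW requested from the whole network
n, rho = len(cost), 1.0

p = np.zeros(n)          # local decisions, one per LiBESS
u = 0.0                  # scaled dual variable (a single shared scalar)
zbar = demand / n        # average contribution each unit "should" supply

for _ in range(200):
    # Local update per LiBESS: closed-form minimizer of c_i p^2 + rho/2 (p - v_i)^2,
    # then projection onto the unit's own power limits.
    v = p - p.mean() + zbar - u
    p = np.clip(rho * v / (2 * cost + rho), p_min, p_max)
    # The coordinator only sees the average; the dual update drives sum(p) to demand.
    u = u + p.mean() - zbar

print("power setpoints (kW):", np.round(p, 2), "total:", round(p.sum(), 2))
```

Only the average contribution and a scalar dual variable are exchanged between the units and the coordinator, which is what keeps the scheme distributed.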
van Mierlo, Trevor; Hyatt, Douglas; Ching, Andrew T
2015-06-25
Social networks are common in digital health. A new stream of research is beginning to investigate the mechanisms of digital health social networks (DHSNs): how they are structured, how they function, and how their growth can be nurtured and managed. DHSNs increase in value when additional content is added, and the structure of networks may resemble the characteristics of power laws. Power laws are contrary to traditional Gaussian averages in that they demonstrate correlated phenomena. The objective of this study is to investigate whether the distribution frequency in four DHSNs can be characterized as following a power law. A second objective is to describe the method used to determine the comparison. Data from four DHSNs, Alcohol Help Center (AHC), Depression Center (DC), Panic Center (PC), and Stop Smoking Center (SSC), were compared to power law distributions. To assist future researchers and managers, the 5-step methodology used to analyze and compare datasets is described. All four DHSNs were found to have right-skewed distributions, indicating the data were not normally distributed. When power trend lines were added to each frequency distribution, R² values indicated that, to a very high degree, the variance in post frequencies can be explained by actor rank (AHC .962, DC .975, PC .969, SSC .95). Spearman correlations provided further indication of the strength and statistical significance of the relationship (AHC .987, DC .967, PC .983, SSC .993, P<.001). This is the first study to investigate power distributions across multiple DHSNs, each addressing a unique condition. Results indicate that despite vast differences in theme, content, and length of existence, DHSNs follow properties of power laws. The structure of DHSNs is important as it gives insight to researchers and managers into the nature and mechanisms of network functionality. The 5-step process undertaken to compare actor contribution patterns can be replicated in networks that are managed by other organizations, and we conjecture that patterns observed in this study could be found in other DHSNs. Future research should analyze network growth over time and examine the characteristics and survival rates of superusers.
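The power-trend-line and Spearman steps can be reproduced on synthetic data with a few lines of numpy/scipy, as sketched below; the generated post counts are illustrative and not the study's DHSN data.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)

# Synthetic per-actor post counts with a heavy right tail (a stand-in for one
# DHSN's posting data; not the study's dataset).
posts = np.sort(np.floor(rng.pareto(1.2, 500)) + 1)[::-1]   # rank-ordered counts
rank = np.arange(1, len(posts) + 1)

# Power trend line: a straight line in log-log space (posts ~ C * rank^b),
# with R^2 of that fit.
slope, intercept = np.polyfit(np.log(rank), np.log(posts), 1)
pred = intercept + slope * np.log(rank)
ss_res = np.sum((np.log(posts) - pred) ** 2)
ss_tot = np.sum((np.log(posts) - np.log(posts).mean()) ** 2)
r2 = 1 - ss_res / ss_tot

# Spearman correlation between actor rank and post frequency.
rho, p_value = spearmanr(rank, posts)
print(f"exponent b ~ {slope:.2f}, R^2 = {r2:.3f}, "
      f"Spearman rho = {rho:.3f} (p = {p_value:.2g})")
```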
Tools to manage the enterprise-wide picture archiving and communications system environment.
Lannum, L M; Gumpf, S; Piraino, D
2001-06-01
The presentation will focus on the implementation and utilization of a central picture archiving and communications system (PACS) network-monitoring tool that allows for enterprise-wide operations management and support of the image distribution network. The MagicWatch (Siemens, Iselin, NJ) PACS/radiology information system (RIS) monitoring station from Siemens has allowed our organization to create a service support structure that has given us proactive control of our environment and has allowed us to meet the service level performance expectations of the users. The Radiology Help Desk has used the MagicWatch PACS monitoring station as an applications support tool that has allowed the group to monitor network activity and individual systems performance at each node. Fast and timely recognition of the effects of single events within the PACS/RIS environment has allowed the group to proactively recognize possible performance issues and resolve problems. The PACS/operations group performs network management control, image storage management, and software distribution management from a single, central point in the enterprise. The MagicWatch station allows for the complete automation of software distribution, installation, and configuration process across all the nodes in the system. The tool has allowed for the standardization of the workstations and provides a central configuration control for the establishment and maintenance of the system standards. This report will describe the PACS management and operation prior to the implementation of the MagicWatch PACS monitoring station and will highlight the operational benefits of a centralized network and system-monitoring tool.
A neural network approach to burst detection.
Mounce, S R; Day, A J; Wood, A S; Khan, A; Widdop, P D; Machell, J
2002-01-01
This paper describes how hydraulic and water quality data from a distribution network may be used to provide a more efficient leakage management capability for the water industry. The research presented concerns the application of artificial neural networks to the detection and location of leakage in treated water distribution systems. An architecture for an Artificial Neural Network (ANN) based system is outlined. The neural network uses time series data produced by sensors to directly construct an empirical model for prediction and classification of leaks. Results are presented using data from an experimental site in Yorkshire Water's Keighley distribution system.
A Framework for Managing Inter-Site Storage Area Networks using Grid Technologies
NASA Technical Reports Server (NTRS)
Kobler, Ben; McCall, Fritz; Smorul, Mike
2006-01-01
The NASA Goddard Space Flight Center and the University of Maryland Institute for Advanced Computer Studies are studying mechanisms for installing and managing Storage Area Networks (SANs) that span multiple independent collaborating institutions using Storage Area Network Routers (SAN Routers). We present a framework for managing inter-site distributed SANs that uses Grid Technologies to balance the competing needs to control local resources, share information, delegate administrative access, and manage the complex trust relationships between the participating sites.
Celestial data routing network
NASA Astrophysics Data System (ADS)
Bordetsky, Alex
2000-11-01
Imagine that an information-processing human-machine network is threatened in a particular part of the world. Suppose that an anticipated threat of physical attacks could lead to disruption of the telecommunications network management infrastructure and access capabilities for small, geographically distributed groups engaged in collaborative operations. Suppose that a small group of astronauts is exploring a planet of the solar system and needs to quickly configure an orbital information network to support their collaborative work and local communications. The critical need in both scenarios would be a set of low-cost means of small-team celestial networking. Such means would allow geographically distributed mobile collaborating groups to maintain collaborative multipoint work, set up an orbital local area network, and provide orbital intranet communications. This would be accomplished by dynamically assembling the network-enabling infrastructure of a small-satellite-based router, a satellite-based codec, and a set of satellite-based intelligent management agents. Cooperating single-function pico satellites, acting as agents and personal switching devices, would together represent a self-organizing intelligent orbital network of cooperating mobile management nodes. Cooperative behavior of the pico-satellite-based agents would be achieved by forming a small orbital artificial neural network capable of learning and restructuring the networking resources in response to the anticipated threat.
A comparison of queueing, cluster and distributed computing systems
NASA Technical Reports Server (NTRS)
Kaplan, Joseph A.; Nelson, Michael L.
1993-01-01
Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not constitute a cluster. Cluster management software is necessary to harness the collective computing power. A variety of cluster management and queuing systems are compared: Distributed Queueing Systems (DQS), Condor, Load Leveler, Load Balancer, Load Sharing Facility (LSF - formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.
A Logically Centralized Approach for Control and Management of Large Computer Networks
ERIC Educational Resources Information Center
Iqbal, Hammad A.
2012-01-01
Management of large enterprise and Internet service provider networks is a complex, error-prone, and costly challenge. It is widely accepted that the key contributors to this complexity are the bundling of control and data forwarding in traditional routers and the use of fully distributed protocols for network control. To address these…
Decentralized Energy Management System for Networked Microgrids in Grid-connected and Islanded Modes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhaoyu; Chen, Bokan; Wang, Jianhui
This paper proposes a decentralized energy management system (EMS) for the coordinated operation of networked Microgrids (MGs) in a distribution system. In the grid-connected mode, the distribution network operator (DNO) and each MG are considered as distinct entities with individual objectives to minimize their own operation costs. It is assumed that both dispatchable and renewable energy source (RES)-based distributed generators (DGs) exist in the distribution network and the networked MGs. In order to coordinate the operation of all entities, we apply a decentralized bi-level algorithm to solve the problem, with the first level conducting negotiations among all entities and the second level updating the non-converging penalties. In the islanded mode, the objective of each MG is to maintain a reliable power supply to its customers. In order to take into account the uncertainties of DG outputs and load consumption, we formulate the problems as two-stage stochastic programs. The first stage is to determine base generation setpoints based on the forecasts and the second stage is to adjust the generation outputs based on the realized scenarios. Case studies of a distribution system with networked MGs demonstrate the effectiveness of the proposed methodology in both grid-connected and islanded modes.
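The following is a minimal sketch of the two-level structure summarised above, with the first level negotiating exchanges and the second tightening a penalty until proposals converge; the quadratic costs, limits and update rules are invented for illustration and are not the paper's algorithm.

```python
# Toy sketch of the negotiation-with-penalty idea (not the paper's exact
# algorithm): each MG proposes a tie-line exchange minimising its own cost
# plus a penalty toward the DNO's target; a second level raises the penalty
# weight until proposals and targets agree. All numbers are hypothetical.
import numpy as np

def mg_response(price, lam, target, cost_coef=0.5, p_max=5.0):
    """Each MG picks its exchange p minimising cost_coef*p^2 - price*p + lam*(p - target)^2."""
    p = (price + 2 * lam * target) / (2 * cost_coef + 2 * lam)
    return float(np.clip(p, -p_max, p_max))

def negotiate(n_mg=3, price=1.0, iters=60, rho=1.2):
    lam = 0.1                                   # penalty weight (second-level variable)
    targets = np.zeros(n_mg)                    # DNO's requested exchanges
    for _ in range(iters):
        proposals = np.array([mg_response(price, lam, t) for t in targets])
        # Level 1: DNO re-balances so the total exchange sums to zero.
        targets = proposals - proposals.mean()
        # Level 2: if proposals and targets still disagree, tighten the penalty.
        if np.max(np.abs(proposals - targets)) > 1e-3:
            lam *= rho
        else:
            break
    return proposals, lam

print(negotiate())
```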
2006-04-01
...and Scalability, (2) Sensors and Platforms, (3) Distributed Computing and Processing, (4) Information Management, (5) Fusion and Resource Management... use of the deployed system. 3.3 Distributed Computing and Processing Session: The Distributed Computing and Processing Session consisted of three...
ReTrust: attack-resistant and lightweight trust management for medical sensor networks.
He, Daojing; Chen, Chun; Chan, Sammy; Bu, Jiajun; Vasilakos, Athanasios V
2012-07-01
Wireless medical sensor networks (MSNs) enable ubiquitous health monitoring of users during their everyday lives, at health sites, without restricting their freedom. Establishing trust among distributed network entities has been recognized as a powerful tool to improve the security and performance of distributed networks such as mobile ad hoc networks and sensor networks. However, most existing trust systems are not well suited for MSNs due to the unique operational and security requirements of MSNs. Moreover, similar to most security schemes, trust management methods themselves can be vulnerable to attacks. Unfortunately, this issue is often ignored in existing trust systems. In this paper, we identify the security and performance challenges facing a sensor network for wireless medical monitoring and suggest it should follow a two-tier architecture. Based on such an architecture, we develop an attack-resistant and lightweight trust management scheme named ReTrust. This paper also reports the experimental results of the Collection Tree Protocol using our proposed system in a network of TelosB motes, which show that ReTrust not only can efficiently detect malicious/faulty behaviors, but can also significantly improve the network performance in practice.
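For readers unfamiliar with trust management, the sketch below shows a generic beta-reputation-style trust score with aging, of the kind such a system might keep per neighbour; the formula and parameters are assumptions, not ReTrust's actual scheme.

```python
# Illustrative sketch only: a beta-reputation style trust score of the kind a
# two-tier trust system could maintain for each neighbour, with an aging factor
# so that old observations fade. The formula is generic, not ReTrust's own.
from dataclasses import dataclass

@dataclass
class NeighbourTrust:
    good: float = 1.0      # prior pseudo-counts
    bad: float = 1.0
    aging: float = 0.98    # discount applied before every update (assumed)

    def update(self, cooperated: bool) -> None:
        self.good *= self.aging
        self.bad *= self.aging
        if cooperated:
            self.good += 1.0
        else:
            self.bad += 1.0

    @property
    def trust(self) -> float:
        return self.good / (self.good + self.bad)

t = NeighbourTrust()
for outcome in [True, True, False, True, False, False, False]:
    t.update(outcome)
print(f"trust = {t.trust:.2f}")   # drops as misbehaviour accumulates
```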
Granko, Robert P; Wolfe, Adam S; Kelley, Lindsey R; Morton, Carolyn S; Delgado, Osmel
2015-01-15
The self-development potential of pharmacy management practitioners related to self-management, team development, and network management was assessed. A survey instrument consisting of 12 self-assessment questions and 11 questions about demographics was distributed to pharmacy management practitioners to assess their abilities to manage themselves, their teams, and their networks. The tool was distributed by e-mail hyperlink to 190 potential respondents. Only surveys from respondents who had a pharmacy degree and direct supervisory capacity were analyzed. Respondents rated their progress toward meeting the three imperatives on a scale of 1-5. Responses to the questions were analyzed as ordinal data, with median responses used for assessment. A total of 160 responses were received via e-mail, 149 (93%) of which met the inclusion criteria. About half of all respondents were practicing at institutions of 600 beds or more and supervised at least five employees. The majority of respondents identified their abilities to manage themselves, their teams, and their networks as areas of strength but also acknowledged that using all three of these skills on a daily basis was an area of opportunity. Respondents generally identified management of their network as an area needing work. The majority of survey respondents identified their skills in self-, team, and network management as areas of strength. Respondents generally identified management of their network as an area needing work. Respondents also identified the use of all three imperatives on a daily basis as an area of opportunity for improvement. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
Automation of multi-agent control for complex dynamic systems in heterogeneous computational network
NASA Astrophysics Data System (ADS)
Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan
2017-01-01
The rapid progress of high-performance computing entails new challenges related to solving large scientific problems in various subject domains within a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in parallel and distributed computing pay particular attention to the scalability of applications for problem solving, yet effective management of a scalable application in a heterogeneous distributed computing environment remains a non-trivial issue. Control systems that operate in networks are especially affected by this issue. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automating the problem solving. Advantages of the proposed approach are demonstrated on an example of parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for this problem include automation of multi-agent control of the systems in a parallel mode with various degrees of detail.
Additional Security Considerations for Grid Management
NASA Technical Reports Server (NTRS)
Eidson, Thomas M.
2003-01-01
The use of Grid computing environments is growing in popularity. A Grid computing environment is primarily a wide area network that encompasses multiple local area networks, where some of the local area networks are managed by different organizations. A Grid computing environment also includes common interfaces for distributed computing software so that the heterogeneous set of machines that make up the Grid can be used more easily. The other key feature of a Grid is that the distributed computing software includes appropriate security technology. The focus of most Grid software is on the security involved with application execution, file transfers, and other remote computing procedures. However, there are other important security issues related to the management of a Grid and the users who use that Grid. This note discusses these additional security issues and makes several suggestions as to how they can be managed.
Integrating network ecology with applied conservation: a synthesis and guide to implementation.
Kaiser-Bunbury, Christopher N; Blüthgen, Nico
2015-07-10
Ecological networks are a useful tool to study the complexity of biotic interactions at a community level. Advances in the understanding of network patterns encourage the application of a network approach in other disciplines than theoretical ecology, such as biodiversity conservation. So far, however, practical applications have been meagre. Here we present a framework for network analysis to be harnessed to advance conservation management by using plant-pollinator networks and islands as model systems. Conservation practitioners require indicators to monitor and assess management effectiveness and validate overall conservation goals. By distinguishing between two network attributes, the 'diversity' and 'distribution' of interactions, on three hierarchical levels (species, guild/group and network) we identify seven quantitative metrics to describe changes in network patterns that have implications for conservation. Diversity metrics are partner diversity, vulnerability/generality, interaction diversity and interaction evenness, and distribution metrics are the specialization indices d' and [Formula: see text] and modularity. Distribution metrics account for sampling bias and may therefore be suitable indicators to detect human-induced changes to plant-pollinator communities, thus indirectly assessing the structural and functional robustness and integrity of ecosystems. We propose an implementation pathway that outlines the stages that are required to successfully embed a network approach in biodiversity conservation. Most importantly, only if conservation action and study design are aligned by practitioners and ecologists through joint experiments, are the findings of a conservation network approach equally beneficial for advancing adaptive management and ecological network theory. We list potential obstacles to the framework, highlight the shortfall in empirical, mostly experimental, network data and discuss possible solutions. Published by Oxford University Press on behalf of the Annals of Botany Company.
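Two of the 'diversity' metrics mentioned above can be computed directly from an interaction matrix; the sketch below uses standard definitions of Shannon interaction diversity and interaction evenness on a made-up plant-pollinator matrix, which may differ in detail from the metrics used in the paper.

```python
# Minimal sketch of two 'diversity' metrics on a plant-pollinator interaction
# matrix (rows = plants, columns = pollinators, entries = observed visit
# counts). Shannon interaction diversity and an evenness relative to a
# perfectly even web (one common convention) are shown; the matrix is made up.
import numpy as np

visits = np.array([[12, 0, 3],
                   [ 0, 7, 1],
                   [ 5, 2, 0]], dtype=float)

p = visits / visits.sum()                 # proportion of all interactions
nonzero = p[p > 0]
H = -(nonzero * np.log(nonzero)).sum()    # Shannon interaction diversity
evenness = H / np.log(visits.size)        # evenness relative to a uniform web

print(f"interaction diversity H = {H:.3f}, evenness = {evenness:.3f}")
```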
Evaluation of the U.S. Geological Survey Ground-Water Data-Collection Program in Hawaii, 1992
Anthony, Stephen S.
1997-01-01
In 1992, the U.S. Geological Survey ground-water data-collection program in the State of Hawaii consisted of 188 wells distributed among the islands of Oahu, Kauai, Maui, Molokai, and Hawaii. Water-level and water-quality (temperature, specific conductance, and chloride concentration) data were collected from observation wells, deep monitoring wells that penetrate the zone of transition between freshwater and saltwater, free-flowing wells, and pumped wells. The objective of the program was to collect sufficient spatial and temporal data to define seasonal and long-term changes in ground-water levels and chloride concentrations induced by natural and human-made stresses for different climatic and hydrogeologic settings. Wells needed to meet this objective can be divided into two types of networks: (1) a water-management network to determine the response of ground-water flow systems to human-induced stresses, such as pumpage, and (2) a baseline network to determine the response of ground-water flow systems to natural stresses for different climatic and hydrogeologic settings. Maps showing the distribution and magnitude of pumpage and the distribution of proposed pumped wells are presented to identify areas in need of water-management networks. Wells in the 1992 U.S. Geological Survey ground-water data-collection program were classified as either water-management or baseline network wells. In addition, locations where additional water-management network wells are needed for water-level and water-quality data were identified.
A Distributed Energy-Aware Trust Management System for Secure Routing in Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Stelios, Yannis; Papayanoulas, Nikos; Trakadas, Panagiotis; Maniatis, Sotiris; Leligou, Helen C.; Zahariadis, Theodore
Wireless sensor networks are inherently vulnerable to security attacks, due to their wireless operation. The situation is further aggravated because they operate in an infrastructure-less environment, which mandates the cooperation among nodes for all networking tasks, including routing, i.e. all nodes act as “routers”, forwarding the packets generated by their neighbours on their way to the sink node. This implies that malicious nodes (denying their cooperation) can significantly affect the network operation. Trust management schemes provide a powerful tool for the detection of unexpected node behaviours (either faulty or malicious). Once misbehaving nodes are detected, their neighbours can use this information to avoid cooperating with them either for data forwarding, data aggregation or any other cooperative function. We propose a secure routing solution based on a novel distributed trust management system, which allows for fast detection of a wide set of attacks and also incorporates energy awareness.
Distributed Interplanetary Delay/Disruption Tolerant Network (DTN) Monitor and Control System
NASA Technical Reports Server (NTRS)
Wang, Shin-Ywan
2012-01-01
The Distributed Interplanetary Delay/Disruption Tolerant Network (DTN) Monitor and Control System, a DTN network management implementation at JPL, is intended to provide methods and tools that monitor DTN operation status and detect and resolve DTN operation failures in a largely automated way when either a space network or a heterogeneous network is infused with DTN capability. In this paper, "DTN Monitor and Control system in the Deep Space Network (DSN)" exemplifies how the DTN Monitor and Control system can be adapted to a space network as it becomes DTN-enabled.
Software-Enabled Distributed Network Governance: The PopMedNet Experience.
Davies, Melanie; Erickson, Kyle; Wyner, Zachary; Malenfant, Jessica; Rosen, Rob; Brown, Jeffrey
2016-01-01
The expanded availability of electronic health information has led to increased interest in distributed health data research networks. The distributed research network model leaves data with and under the control of the data holder. Data holders, network coordinating centers, and researchers have distinct needs and challenges within this model. The concerns of network stakeholders are addressed in the design and governance models of the PopMedNet software platform. PopMedNet features include distributed querying, customizable workflows, and auditing and search capabilities. Its flexible role-based access control system enables the enforcement of varying governance policies. Four case studies describe how PopMedNet is used to enforce network governance models. Trust is an essential component of a distributed research network and must be built before data partners may be willing to participate further. The complexity of the PopMedNet system must be managed as networks grow and new data, analytic methods, and querying approaches are developed. The PopMedNet software platform supports a variety of network structures, governance models, and research activities through customizable features designed to meet the needs of network stakeholders.
Flow-rate control for managing communications in tracking and surveillance networks
NASA Astrophysics Data System (ADS)
Miller, Scott A.; Chong, Edwin K. P.
2007-09-01
This paper describes a primal-dual distributed algorithm for managing communications in a bandwidth-limited sensor network for tracking and surveillance. The algorithm possesses some scale-invariance properties and adaptive gains that make it more practical for applications such as tracking where the conditions change over time. A simulation study comparing this algorithm with a priority-queue-based approach in a network tracking scenario shows significant improvement in the resulting track quality when using flow control to manage communications.
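A generic primal-dual gradient scheme of the kind referred to above can be sketched as follows; the topology, utilities and step sizes are invented, and the paper's scale-invariance properties and adaptive gains are not reproduced.

```python
# Generic primal-dual gradient sketch for bandwidth allocation (a stand-in for
# the paper's algorithm): each source adjusts its rate from its utility
# gradient minus the price of its path, and each link raises its price when
# demand exceeds capacity. Topology, weights and steps are assumptions.
import numpy as np

routes = np.array([[1, 0],     # link x source incidence: source 0 uses link 0,
                   [1, 1]])    # source 1 uses links 0 and 1 (assumed topology)
capacity = np.array([1.0, 1.5])
weights = np.array([1.0, 2.0])             # track-quality weights (assumed)

rates = np.full(2, 0.1)                    # primal variables (flow rates)
prices = np.zeros(2)                       # dual variables (link prices)
alpha, beta = 0.05, 0.05                   # step sizes

for _ in range(2000):
    path_price = routes.T @ prices
    # utility U_i(x) = w_i * log(x): gradient is w_i / x_i
    rates = np.maximum(1e-3, rates + alpha * (weights / rates - path_price))
    prices = np.maximum(0.0, prices + beta * (routes @ rates - capacity))

print("rates:", rates.round(3), "link loads:", (routes @ rates).round(3))
```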
NASA Astrophysics Data System (ADS)
Kodama, Yu; Hamagami, Tomoki
A distributed processing system for restoration of electric power distribution networks using a two-layered contract net protocol (CNP) is proposed. The goal of this study is to develop a restoration system suited to future power networks with distributed generators. The novelty of this study is that the two-layered CNP is applied to a distributed computing environment in practical use. The two-layered CNP has two classes of agents in the network, named field agents and operating agents. In order to avoid conflicts among tasks, the operating agent controls the privilege of managers to send task announcement messages in the CNP. This technique realizes coordination between agents that work asynchronously, in parallel with others. Moreover, this study implements the distributed processing system using a de facto standard multi-agent framework, JADE (Java Agent DEvelopment framework). This study conducts simulation experiments on power distribution network restoration and compares the proposed system with the previous system. The results confirm the effectiveness of the proposed system.
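The two-layered contract-net idea can be illustrated with a highly simplified, non-JADE sketch in which an operating agent grants announcement privilege and field agents bid on a restoration task; all agents, tasks and bid values are invented.

```python
# Highly simplified sketch of the two-layered idea (not the authors' JADE
# implementation): an operating agent grants announcement privilege to one
# manager at a time, the manager announces a restoration task, field agents
# bid, and the best bid wins. All agents, tasks and bid values are invented.
class OperatingAgent:
    def __init__(self):
        self.holder = None
    def request_privilege(self, manager):
        if self.holder is None:              # avoid conflicting announcements
            self.holder = manager
            return True
        return False
    def release(self, manager):
        if self.holder is manager:
            self.holder = None

class FieldAgent:
    def __init__(self, name, spare_capacity):
        self.name, self.spare_capacity = name, spare_capacity
    def bid(self, demand):
        return self.spare_capacity - demand if self.spare_capacity >= demand else None

def restore(manager, operating_agent, field_agents, demand):
    if not operating_agent.request_privilege(manager):
        return None                          # another manager is announcing
    bids = {a.name: a.bid(demand) for a in field_agents}
    bids = {n: b for n, b in bids.items() if b is not None}
    operating_agent.release(manager)
    return max(bids, key=bids.get) if bids else None

agents = [FieldAgent("feeder-A", 3.0), FieldAgent("feeder-B", 5.0)]
print(restore("manager-1", OperatingAgent(), agents, demand=2.5))  # -> feeder-B
```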
Rethinking Traffic Management: Design of Optimizable Networks
2008-06-01
Though this paper used optimization theory to design and analyze DaVinci, optimization theory is one of many possible tools to enable a grounded... dynamically allocate bandwidth shares. The distributed protocols can be implemented using DaVinci: Dynamically Adaptive VIrtual Networks for a Customized... Internet. In DaVinci, each virtual network runs traffic-management protocols optimized for a traffic class, and link bandwidth is dynamically allocated
Prediction-based Dynamic Energy Management in Wireless Sensor Networks
Wang, Xue; Ma, Jun-Jie; Wang, Sheng; Bi, Dao-Wei
2007-01-01
Energy consumption is a critical constraint in wireless sensor networks. Focusing on the energy efficiency problem of wireless sensor networks, this paper proposes a method of prediction-based dynamic energy management. A particle filter was introduced to predict a target state, which was adopted to awaken wireless sensor nodes so that their sleep time was prolonged. With the distributed computing capability of nodes, an optimization approach of distributed genetic algorithm and simulated annealing was proposed to minimize the energy consumption of measurement. Considering the application of target tracking, we implemented target position prediction, node sleep scheduling and optimal sensing node selection. Moreover, a routing scheme of forwarding nodes was presented to achieve extra energy conservation. Experimental results of target tracking verified that energy-efficiency is enhanced by prediction-based dynamic energy management.
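The prediction step of such a scheme might look like the following sketch, in which a particle set is propagated through an assumed constant-velocity model and nodes near the predicted position are woken; the motion model, noise levels and node layout are assumptions.

```python
# Minimal sketch of using a particle filter's prediction to decide which nodes
# to wake: propagate particles through an assumed constant-velocity model and
# wake any node whose sensing radius covers the predicted position. The motion
# model, noise levels and node layout are all assumptions.
import numpy as np

rng = np.random.default_rng(1)
N = 500
particles = rng.normal([0.0, 0.0, 1.0, 0.5], [1, 1, 0.1, 0.1], size=(N, 4))  # x, y, vx, vy
weights = np.full(N, 1.0 / N)
dt = 1.0

def predict(particles):
    """Constant-velocity propagation with process noise."""
    out = particles.copy()
    out[:, :2] += out[:, 2:] * dt + rng.normal(0, 0.2, size=(N, 2))
    return out

particles = predict(particles)
predicted_pos = np.average(particles[:, :2], weights=weights, axis=0)

nodes = {"n1": np.array([1.0, 0.4]), "n2": np.array([6.0, 3.0])}
awake = [name for name, pos in nodes.items()
         if np.linalg.norm(pos - predicted_pos) < 2.0]   # sensing radius 2.0 (assumed)
print("predicted position:", predicted_pos.round(2), "wake:", awake)
```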
Modeling Social Influences in a Knowledge Management Network
ERIC Educational Resources Information Center
Franco, Giacomo; Maresca, Paolo; Nota, Giancarlo
2010-01-01
The issue of knowledge management in a distributed network is receiving increasing attention from both scientific and industrial organizations. Research efforts in this field are motivated by the awareness that knowledge is more and more perceived as a primary economic resource and that, in the context of organization of organizations, the…
Distributed controller clustering in software defined networks.
Abdelaziz, Ahmed; Fong, Ang Tan; Gani, Abdullah; Garba, Usman; Khan, Suleman; Akhunzada, Adnan; Talebian, Hamid; Choo, Kim-Kwang Raymond
2017-01-01
Software Defined Networking (SDN) is an emerging and promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance and interoperability. In this paper, we propose a novel clustered distributed controller architecture in a real SDN setting. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real-world network topology running on top of an emulated SDN environment. The results show that the proposed distributed controller clustering mechanism significantly reduces the average latency from 8.1% to 1.6% and the packet loss from 5.22% to 4.15%, compared to distributed controllers without clustering running on HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers, respectively. Moreover, the proposed method shows reasonable CPU utilization. Furthermore, the proposed mechanism makes it possible to handle unexpected load fluctuations while maintaining continuous network operation, even when there is a controller failure. The paper is a potential contribution stepping towards addressing the issues of reliability, scalability, fault tolerance, and interoperability.
Advanced Distribution Network Modelling with Distributed Energy Resources
NASA Astrophysics Data System (ADS)
O'Connell, Alison
The addition of new distributed energy resources, such as electric vehicles, photovoltaics, and storage, to low voltage distribution networks means that these networks will undergo major changes in the future. Traditionally, distribution systems would have been a passive part of the wider power system, delivering electricity to the customer and not needing much control or management. However, the introduction of these new technologies may cause unforeseen issues for distribution networks, due to the fact that they were not considered when the networks were originally designed. This thesis examines different types of technologies that may begin to emerge on distribution systems, as well as the resulting challenges that they may impose. Three-phase models of distribution networks are developed and subsequently utilised as test cases. Various management strategies are devised for the purposes of controlling distributed resources from a distribution network perspective. The aim of the management strategies is to mitigate those issues that distributed resources may cause, while also keeping customers' preferences in mind. A rolling optimisation formulation is proposed as an operational tool which can manage distributed resources, while also accounting for the uncertainties that these resources may present. Network sensitivities for a particular feeder are extracted from a three-phase load flow methodology and incorporated into an optimisation. Electric vehicles are the focus of the work, although the method could be applied to other types of resources. The aim is to minimise the cost of electric vehicle charging over a 24-hour time horizon by controlling the charge rates and timings of the vehicles. The results demonstrate the advantage that controlled EV charging can have over an uncontrolled case, as well as the benefits provided by the rolling formulation and updated inputs in terms of cost and energy delivered to customers. Building upon the rolling optimisation, a three-phase optimal power flow method is developed. The formulation has the capability to provide optimal solutions for distribution system control variables, for a chosen objective function, subject to required constraints. It can, therefore, be utilised for numerous technologies and applications. The three-phase optimal power flow is employed to manage various distributed resources, such as photovoltaics and storage, as well as distribution equipment, including tap changers and switches. The flexibility of the methodology allows it to be applied in both an operational and a planning capacity. The three-phase optimal power flow is employed in an operational planning capacity to determine volt-var curves for distributed photovoltaic inverters. The formulation finds optimal reactive power settings for a number of load and solar scenarios and uses these reactive power points to create volt-var curves. Volt-var curves are determined for 10 PV systems on a test feeder. A universal curve is also determined which is applicable to all inverters. The curves are validated by testing them in a power flow setting over a 24-hour test period. The curves are shown to provide advantages to the feeder in terms of reduction of voltage deviations and unbalance, with the individual curves proving to be more effective. It is also shown that adding a new PV system to the feeder only requires analysis for that system. 
In order to represent the uncertainties that inherently occur on distribution systems, an information gap decision theory method is also proposed and integrated into the three-phase optimal power flow formulation. This allows for robust network decisions to be made using only an initial prediction for what the uncertain parameter will be. The work determines tap and switch settings for a test network with demand being treated as uncertain. The aim is to keep losses below a predefined acceptable value. The results provide the decision maker with the maximum possible variation in demand for a given acceptable variation in the losses. A validation is performed with the resulting tap and switch settings being implemented, and shows that the control decisions provided by the formulation keep losses below the acceptable value while adhering to the limits imposed by the network.
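A much-reduced sketch of the rolling-horizon idea (single phase, linear costs, no network constraints beyond a feeder limit) is shown below; prices, energy targets and limits are illustrative only.

```python
# Stripped-down sketch of the rolling-horizon idea for EV charging (far simpler
# than the thesis's three-phase formulation): at each step, solve a small LP
# that schedules charge rates over the remaining horizon to minimise cost,
# apply only the first step, then re-solve with updated prices/demands.
# Prices, energy targets and limits are illustrative numbers.
import numpy as np
from scipy.optimize import linprog

def schedule(prices, energy_needed, p_max=3.3, feeder_limit=5.0):
    """LP over one vehicle set and horizon: variables are charge[v, t] in kW, dt = 1 h."""
    V, T = len(energy_needed), len(prices)
    c = np.tile(prices, V)                                  # cost per kWh delivered
    # Equality: each vehicle receives its required energy over the horizon.
    A_eq = np.zeros((V, V * T)); b_eq = np.array(energy_needed, float)
    for v in range(V):
        A_eq[v, v * T:(v + 1) * T] = 1.0
    # Inequality: total charging at each hour stays under the feeder limit.
    A_ub = np.zeros((T, V * T)); b_ub = np.full(T, feeder_limit)
    for t in range(T):
        A_ub[t, t::T] = 1.0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, p_max)] * (V * T), method="highs")
    return res.x.reshape(V, T)

prices = [0.30, 0.25, 0.10, 0.12]        # next four hours
plan = schedule(prices, energy_needed=[4.0, 6.0])
print(np.round(plan, 2))                 # apply hour 0, then roll the horizon forward
```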
Edwards, Michelle; Wood, Fiona; Davies, Myfanwy; Edwards, Adrian
2015-10-01
The role of one's social network in the process of becoming health literate is not well understood. We aim to explain the 'distributed' nature of health literacy and how people living with a long-term condition draw on their social network for support with health literacy-related tasks such as managing their condition, interacting with health professionals and making decisions about their health. This paper reports a longitudinal qualitative interview and observation study of the development and practice of health literacy in people with long-term health conditions, living in South Wales, UK. Participants were recruited from health education groups (n = 14) and community education venues (n = 4). The 44 interview transcripts were analysed using the 'Framework' approach. Health literacy was distributed through family and social networks, and participants often drew on the health literacy skills of others to seek, understand and use health information. Those who passed on their health literacy skills acted as health literacy mediators and supported participants in becoming more health literate about their condition. The distribution of health literacy supported participants to manage their health, become more active in health-care decision-making processes, communicate with health professionals and come to terms with living with a long-term condition. Participants accessed health literacy mediators through personal and community networks. Distributed health literacy is a potential resource for managing one's health, communicating with health professionals and making health decisions. © 2013 John Wiley & Sons Ltd.
MASM: a market architecture for sensor management in distributed sensor networks
NASA Astrophysics Data System (ADS)
Viswanath, Avasarala; Mullen, Tracy; Hall, David; Garga, Amulya
2005-03-01
Rapid developments in sensor technology and its applications have energized research efforts towards devising a firm theoretical foundation for sensor management. Ubiquitous sensing, wide bandwidth communications and distributed processing provide both opportunities and challenges for sensor and process control and optimization. Traditional optimization techniques do not have the ability to simultaneously consider the wildly non-commensurate measures involved in sensor management in a single optimization routine. Market-oriented programming provides a valuable and principled paradigm to designing systems to solve this dynamic and distributed resource allocation problem. We have modeled the sensor management scenario as a competitive market, wherein the sensor manager holds a combinatorial auction to sell the various items produced by the sensors and the communication channels. However, standard auction mechanisms have been found not to be directly applicable to the sensor management domain. For this purpose, we have developed a specialized market architecture MASM (Market architecture for Sensor Management). In MASM, the mission manager is responsible for deciding task allocations to the consumers and their corresponding budgets and the sensor manager is responsible for resource allocation to the various consumers. In addition to having a modified combinatorial winner determination algorithm, MASM has specialized sensor network modules that address commensurability issues between consumers and producers in the sensor network domain. A preliminary multi-sensor, multi-target simulation environment has been implemented to test the performance of the proposed system. MASM outperformed the information theoretic sensor manager in meeting the mission objectives in the simulation experiments.
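As a hedged illustration of winner determination in a combinatorial auction over sensor items, the toy sketch below uses a simple greedy rule rather than MASM's modified algorithm; the bids and items are invented.

```python
# Toy winner-determination sketch for a combinatorial auction over sensor
# items (not MASM's modified algorithm): bids name a bundle of sensor/channel
# items and a price, and a simple greedy rule accepts non-conflicting bids in
# order of price per sqrt(bundle size). Items and prices are invented.
from math import sqrt

bids = [
    ("track-17", {"radar-1", "chan-A"}, 9.0),
    ("track-22", {"radar-1", "eo-2"},   7.0),
    ("track-31", {"eo-2", "chan-B"},    6.5),
]

def greedy_winners(bids):
    order = sorted(bids, key=lambda b: b[2] / sqrt(len(b[1])), reverse=True)
    sold, winners = set(), []
    for name, bundle, price in order:
        if bundle.isdisjoint(sold):          # no item sold twice
            winners.append((name, price))
            sold |= bundle
    return winners

print(greedy_winners(bids))   # -> [('track-17', 9.0), ('track-31', 6.5)]
```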
Distributed architecture and distributed processing mode in urban sewage treatment
NASA Astrophysics Data System (ADS)
Zhou, Ruipeng; Yang, Yuanming
2017-05-01
Decentralized rural sewage treatment facilities are spread over a broad area, which makes their operation and management difficult. Based on an analysis of the rural sewage treatment model and in response to these challenges, we describe the principle, structure and function of a distributed remote monitoring system that takes networking technology and network communications technology as its core, and use case analysis to explore the features of the remote monitoring system in the daily operation and management of decentralized rural sewage treatment facilities. Practice shows that the remote monitoring system provides technical support for the long-term operation and effective supervision of the facilities, and reduces operating, maintenance and supervision costs.
BARI+: A Biometric Based Distributed Key Management Approach for Wireless Body Area Networks
Muhammad, Khaliq-ur-Rahman Raazi Syed; Lee, Heejo; Lee, Sungyoung; Lee, Young-Koo
2010-01-01
Wireless body area networks (WBAN) consist of resource constrained sensing devices just like other wireless sensor networks (WSN). However, they differ from WSN in topology, scale and security requirements. Due to these differences, key management schemes designed for WSN are inefficient and unnecessarily complex when applied to WBAN. Considering the key management issue, WBAN are also different from WPAN because WBAN can use random biometric measurements as keys. We highlight the differences between WSN and WBAN and propose an efficient key management scheme, which makes use of biometrics and is specifically designed for WBAN domain. PMID:22319333
BARI+: a biometric based distributed key management approach for wireless body area networks.
Muhammad, Khaliq-ur-Rahman Raazi Syed; Lee, Heejo; Lee, Sungyoung; Lee, Young-Koo
2010-01-01
Wireless body area networks (WBAN) consist of resource constrained sensing devices just like other wireless sensor networks (WSN). However, they differ from WSN in topology, scale and security requirements. Due to these differences, key management schemes designed for WSN are inefficient and unnecessarily complex when applied to WBAN. Considering the key management issue, WBAN are also different from WPAN because WBAN can use random biometric measurements as keys. We highlight the differences between WSN and WBAN and propose an efficient key management scheme, which makes use of biometrics and is specifically designed for WBAN domain.
Evaluation and prediction of solar radiation for energy management based on neural networks
NASA Astrophysics Data System (ADS)
Aldoshina, O. V.; Van Tai, Dinh
2017-08-01
Currently, renewable energy sources and distributed power generation based on intelligent networks are spreading rapidly; meteorological forecasts are therefore particularly useful for planning and managing the energy system in order to increase its overall efficiency and productivity. The application of artificial neural networks (ANNs) in the field of photovoltaic energy is presented in this article. Two dynamic recurrent ANNs, a concentrated time-delay neural network (CTDNN) and a nonlinear autoregressive network with exogenous inputs (NAEI), are implemented in this study and used to develop a model for estimating and forecasting daily solar radiation. The ANNs show good performance, as reliable and accurate models of daily solar radiation are obtained, which allows the photovoltaic output power of the installation to be predicted successfully. The potential of the proposed method for managing the energy of the electrical network is shown using the example of applying the NAEI network to electric load forecasting.
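A NARX-style setup can be approximated with lagged inputs and an ordinary feed-forward regressor, as in the hedged sketch below; the synthetic data, lag count and network size are assumptions and do not reproduce the article's CTDNN or NAEI models.

```python
# Sketch of a NARX-style setup (assumed, not the article's exact networks): an
# MLP regressor fed with lagged solar-radiation values plus lagged exogenous
# inputs (here, temperature) predicts the next day's radiation. Data are
# synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
days = np.arange(730)
radiation = 5 + 2 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 0.3, days.size)
temperature = 15 + 10 * np.sin(2 * np.pi * (days - 30) / 365) + rng.normal(0, 1, days.size)

LAGS = 7
X = np.column_stack([radiation[i:-(LAGS - i)] for i in range(LAGS)] +
                    [temperature[i:-(LAGS - i)] for i in range(LAGS)])
y = radiation[LAGS:]

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:-30], y[:-30])                       # hold out the last 30 days
print("last-30-day R^2:", round(model.score(X[-30:], y[-30:]), 3))
```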
Distributed Communications Resource Management for Tracking and Surveillance Networks
2005-08-01
Principles of Economics, Ludwig von Mises Institute, Auburn, AL, 2004. 13. J. Wang, L. Li, S. H. Low and J. C. Doyle, "Cross-layer Optimization in TCP/IP Networks," IEEE/ACM Trans. on Networking, 2005, to appear.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mercier, C.W.
The Network File System (NFS) will be the user interface to a High-Performance Data System (HPDS) being developed at Los Alamos National Laboratory (LANL). HPDS will manage high-capacity, high-performance storage systems connected directly to a high-speed network from distributed workstations. NFS will be modified to maximize performance and to manage massive amounts of data. 6 refs., 3 figs.
K-12 Networking: Breaking Down the Walls of the Learning Environment.
ERIC Educational Resources Information Center
Epler, Doris, Ed.
Networks can benefit school libraries by: (1) offering multiple user access to information; (2) managing and distributing information and data; (3) allowing resources to be shared; (4) improving and enabling communications; (5) improving the management of resources; and (6) creating renewed interest in the library and its resources. As a result,…
A Bibliography of Packet Radio Literature.
1984-03-01
Gafni, E. and D. Bertsekas, "Distributed Algorithms for Generating Loop-Free Routes in Networks with Frequently... December 1980). Leiner, B., K. Klemba, and J. Tornow, "Packet Radio Networking," Computer World, pp. 26-37, 30, (September 27, 1982). Liu, J., "Distributed Routing and Relay Management in Mobile Packet Radio Networks," Proceedings IEEE COMPCON Fall, p. 235-243 (1980). MacGregor, W., J. Westcott, and
A distributed data base management capability for the deep space network
NASA Technical Reports Server (NTRS)
Bryan, A. I.
1976-01-01
The Configuration Control and Audit Assembly (CCA), designed to provide a distributed data base management capability for the DSN, is reported. The CCA utilizes capabilities provided by the DSN standard minicomputer and the DSN standard non-real-time, high-level, management-oriented programming language, MBASIC. The characteristics of the CCA for the first phase of implementation are described.
A Hybrid Key Management Scheme for WSNs Based on PPBR and a Tree-Based Path Key Establishment Method
Zhang, Ying; Liang, Jixing; Zheng, Bingxin; Chen, Wei
2016-01-01
With the development of wireless sensor networks (WSNs), in most application scenarios traditional WSNs with static sink nodes will gradually be replaced by Mobile Sinks (MSs), and the corresponding applications require a secure communication environment. Current key management research pays little attention to the security of sensor networks with an MS. This paper proposes a hybrid key management scheme based on Polynomial Pool-based and Basic Random key pre-distribution (PPBR) for use in WSNs with an MS. The scheme takes full advantage of these two kinds of methods to increase the difficulty of cracking the key system. The storage effectiveness and the network resilience can be significantly enhanced as well. The tree-based path key establishment method is introduced to effectively solve the problem of communication link connectivity. Simulation clearly shows that the proposed scheme performs better in terms of network resilience, connectivity and storage effectiveness compared to other widely used schemes. PMID:27070624
NetWall distributed firewall in the use of campus network
NASA Astrophysics Data System (ADS)
He, Junhua; Zhang, Pengshuai
2011-10-01
The Internet provides a modern means of education, but it also opens the door to the dissemination of non-mainstream ideas and harmful information. Network and moral issues have become prominent, and the spread of harmful information and online rumors, with their negative effects, creates new problems that have had a huge impact on, and pose a severe challenge to, ideological and political education in schools. This paper presents a solution in which the NetWall distributed firewall is deployed in a campus network. Taking the characteristics of the campus network into account, filtering technology is used to control harmful information and to log access to sensitive information, establishing a complete information security management platform for the campus network.
Distributed cluster management techniques for unattended ground sensor networks
NASA Astrophysics Data System (ADS)
Essawy, Magdi A.; Stelzig, Chad A.; Bevington, James E.; Minor, Sharon
2005-05-01
Smart Sensor Networks are becoming important target detection and tracking tools. The challenging problems in such networks include sensor fusion, data management and communication schemes. This work discusses techniques used to distribute sensor management and multi-target tracking responsibilities across an ad hoc, self-healing cluster of sensor nodes. Although miniaturized computing resources possess the ability to host complex tracking and data fusion algorithms, there still exist inherent bandwidth constraints on the RF channel. Therefore, special attention is placed on the reduction of node-to-node communications within the cluster by minimizing unsolicited messaging, and distributing the sensor fusion and tracking tasks onto local portions of the network. Several challenging problems are addressed in this work, including track initialization and conflict resolution, track ownership handling, and communication control optimization. Emphasis is also placed on increasing the overall robustness of the sensor cluster through independent decision capabilities on all sensor nodes. Track initiation is performed using collaborative sensing within a neighborhood of sensor nodes, allowing each node to independently determine if initial track ownership should be assumed. This autonomous track initiation prevents the formation of duplicate tracks while eliminating the need for a central "management" node to assign tracking responsibilities. Track update is performed as an ownership node requests sensor reports from neighboring nodes based on track error covariance and the neighboring nodes' geo-positional locations. Track ownership is periodically recomputed using propagated track states to determine which sensing node provides the desired coverage characteristics. High fidelity multi-target simulation results are presented, indicating that distributing sensor management and tracking capabilities not only reduces communication bandwidth consumption, but also simplifies multi-target tracking within the cluster.
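Periodic track-ownership recomputation might be sketched as below, where the in-range node with the best distance-plus-uncertainty score assumes ownership; the scoring rule, positions and covariances are invented, not the system's actual logic.

```python
# Illustrative sketch (not the system's actual logic) of periodic track
# ownership recomputation: the in-range node with the lowest combined
# distance-plus-uncertainty score for the propagated track state assumes
# ownership. Node positions and covariance traces are invented.
import numpy as np

nodes = {
    "node-A": {"pos": np.array([0.0, 0.0]),   "cov_trace": 4.0},
    "node-B": {"pos": np.array([40.0, 10.0]), "cov_trace": 2.5},
    "node-C": {"pos": np.array([80.0, 0.0]),  "cov_trace": 6.0},
}

def recompute_owner(track_pos, nodes, sensing_range=60.0):
    """Pick the in-range node with the lowest distance-plus-uncertainty score."""
    candidates = {}
    for name, info in nodes.items():
        d = np.linalg.norm(info["pos"] - track_pos)
        if d <= sensing_range:
            candidates[name] = d + info["cov_trace"]      # assumed scoring rule
    return min(candidates, key=candidates.get) if candidates else None

propagated_track = np.array([45.0, 5.0])
print("new owner:", recompute_owner(propagated_track, nodes))   # -> node-B
```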
Distributed controller clustering in software defined networks
Gani, Abdullah; Akhunzada, Adnan; Talebian, Hamid; Choo, Kim-Kwang Raymond
2017-01-01
Software Defined Networking (SDN) is an emerging and promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance and interoperability. In this paper, we propose a novel clustered distributed controller architecture in a real SDN setting. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real-world network topology running on top of an emulated SDN environment. The results show that the proposed distributed controller clustering mechanism significantly reduces the average latency from 8.1% to 1.6% and the packet loss from 5.22% to 4.15%, compared to distributed controllers without clustering running on HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers, respectively. Moreover, the proposed method shows reasonable CPU utilization. Furthermore, the proposed mechanism makes it possible to handle unexpected load fluctuations while maintaining continuous network operation, even when there is a controller failure. The paper is a potential contribution stepping towards addressing the issues of reliability, scalability, fault tolerance, and interoperability. PMID:28384312
Multimedia on the Network: Has Its Time Come?
ERIC Educational Resources Information Center
Galbreath, Jeremy
1995-01-01
Examines the match between multimedia data and local area network (LAN) infrastructures. Highlights include applications for networked multimedia, i.e., asymmetric and symmetric; alternate LAN technology, including stream management software, Ethernet, FDDI (Fiber Distributed Data Interface), and ATM (Asynchronous Transfer Mode); WAN (Wide Area…
Moving beyond Blackboard: Using a Social Network as a Learning Management System
ERIC Educational Resources Information Center
Thacker, Christopher
2012-01-01
Web 2.0 is a paradigm of a participatory Internet, which has implications for the delivery of online courses. Instructors and students can now develop, distribute, and aggregate content through the use of third-party web applications, particularly social networking platforms, which combine to form a user-created learning management system (LMS).…
Asset deterioration and discolouration in water distribution systems.
Husband, P S; Boxall, J B
2011-01-01
Water Distribution Systems function to supply treated water safe for human consumption and complying with increasingly stringent quality regulations. Considered primarily an aesthetic issue, discolouration is the largest cause of customer dissatisfaction associated with distribution system water quality. Pro-active measures to prevent discolouration are sought, yet network processes remain insufficiently understood to fully justify and optimise capital or operational strategies to manage discolouration risk. Results are presented from a comprehensive fieldwork programme in UK water distribution networks that has determined asset deterioration with respect to discolouration. This is achieved by quantification of material accumulating as cohesive layers on pipe surfaces that, when mobilised, are acknowledged as the primary cause of discolouration. It is shown that these material layers develop ubiquitously with defined layer strength characteristics and at a consistent and repeatable rate dependent on water quality. For UK networks, iron concentration in the bulk water is shown to be a potential indicator of deterioration rate. With material layer development rates determined, management decisions that balance discolouration risk and expenditure to maintain water quality integrity can be justified. In particular, the balance between capital investment, such as improving water treatment output or pipe renewal, and operational expenditure, such as the frequency of network maintenance through flushing, may be judged. While the rate of development is shown to be a function of water quality, the magnitude (peak or average turbidity) of discolouration incidents is shown to be dominated by hydraulic conditions. From this it can be proposed that network hydraulic management, such as regular periodic 'stressing', is a potential strategy for reducing discolouration risk. The ultimate application of this is the hydraulic design of self-cleaning networks to maintain discolouration risk below acceptable levels. Copyright © 2010 Elsevier Ltd. All rights reserved.
Performance issues in management of the Space Station Information System
NASA Technical Reports Server (NTRS)
Johnson, Marjory J.
1988-01-01
The onboard segment of the Space Station Information System (SSIS), called the Data Management System (DMS), will consist of a Fiber Distributed Data Interface (FDDI) token-ring network. The performance of the DMS in scenarios involving two kinds of network management is analyzed. In the first scenario, how the transmission of routine management messages impacts performance of the DMS is examined. In the second scenario, techniques for ensuring low latency of real-time control messages in an emergency are examined.
Robinson, Jason L; Fordyce, James A
2017-01-01
Among the greatest challenges facing the conservation of plant and animal species in protected areas are threats from a rapidly changing climate. An altered climate creates both challenges and opportunities for improving the management of protected areas in networks. Increasingly, quantitative tools like species distribution modeling are used to assess the performance of protected areas and predict potential responses to changing climates for groups of species, within a predictive framework. At larger geographic domains and scales, protected area network units have spatial geoclimatic properties that can be described in the gap analysis typically used to measure or aggregate the geographic distributions of species (stacked species distribution models, or S-SDM). We extend the use of species distribution modeling techniques in order to model the climate envelope (or "footprint") of individual protected areas within a network of protected areas distributed across the 48 conterminous United States and managed by the US National Park System. In our approach we treat each protected area as the geographic range of a hypothetical endemic species, then use MaxEnt and 5 uncorrelated BioClim variables to model the geographic distribution of the climatic envelope associated with each protected area unit (modeling the geographic area of park units as the range of a species). We describe the individual and aggregated climate envelopes predicted by a large network of 163 protected areas and briefly illustrate how macroecological measures of geodiversity can be derived from our analysis of the landscape ecological context of protected areas. To estimate trajectories of change in the temporal distribution of climatic features within a protected area network, we projected the climate envelopes of protected areas in current conditions onto a dataset of predicted future climatic conditions. Our results suggest that the climate envelopes of some parks may be locally unique or have narrow geographic distributions, and are thus prone to future shifts away from the climatic conditions in these parks in current climates. In other cases, some parks are broadly similar to large geographic regions surrounding the park or have climatic envelopes that may persist into near-term climate change. Larger parks predict larger climatic envelopes in current conditions, but on average the predicted areas of climate envelopes are smaller in our single future conditions scenario. Individual units in a protected area network may vary in the potential for climate adaptation, and adaptive management strategies for the network should account for the landscape contexts of the geodiversity or climate diversity within individual units. Conservation strategies, including maintaining connectivity, assessing the feasibility of assisted migration, and other landscape restoration or enhancements, can be optimized using analysis methods to assess the spatial properties of protected area networks in biogeographic and macroecological contexts.
Analysis of the Chinese air route network as a complex network
NASA Astrophysics Data System (ADS)
Cai, Kai-Quan; Zhang, Jun; Du, Wen-Bo; Cao, Xian-Bin
2012-02-01
The air route network, which supports all the flight activities of the civil aviation, is the most fundamental infrastructure of air traffic management system. In this paper, we study the Chinese air route network (CARN) within the framework of complex networks. We find that CARN is a geographical network possessing exponential degree distribution, low clustering coefficient, large shortest path length and exponential spatial distance distribution that is obviously different from that of the Chinese airport network (CAN). Besides, via investigating the flight data from 2002 to 2010, we demonstrate that the topology structure of CARN is homogeneous, howbeit the distribution of flight flow on CARN is rather heterogeneous. In addition, the traffic on CARN keeps growing in an exponential form and the increasing speed of west China is remarkably larger than that of east China. Our work will be helpful to better understand Chinese air traffic systems.
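The basic complex-network measurements named above (degree distribution, clustering coefficient, shortest path length) can be reproduced on any graph; the sketch below runs them on a tiny made-up graph, not the actual CARN data.

```python
# Sketch of the kind of complex-network measurements reported for CARN, run on
# a tiny made-up graph (the real air route data are not reproduced here):
# degree sequence, average clustering coefficient and average shortest path
# length via networkx.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("PEK", "SHA"), ("PEK", "CAN"), ("SHA", "CAN"), ("CAN", "CTU"),
    ("CTU", "URC"), ("PEK", "CTU"), ("SHA", "HGH"),
])

degrees = [d for _, d in G.degree()]
print("degree sequence:", sorted(degrees, reverse=True))
print("average clustering:", round(nx.average_clustering(G), 3))
print("average shortest path length:",
      round(nx.average_shortest_path_length(G), 3))
```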
OpenFlow arbitrated programmable network channels for managing quantum metadata
Dasari, Venkat R.; Humble, Travis S.
2016-10-10
Quantum networks must classically exchange complex metadata between devices in order to carry out information for protocols such as teleportation, super-dense coding, and quantum key distribution. Demonstrating the integration of these new communication methods with existing network protocols, channels, and data forwarding mechanisms remains an open challenge. Software-defined networking (SDN) offers robust and flexible strategies for managing diverse network devices and uses. We adapt the principles of SDN to the deployment of quantum networks, which are composed from unique devices that operate according to the laws of quantum mechanics. We show how quantum metadata can be managed within a software-defined network using the OpenFlow protocol, and we describe how OpenFlow management of classical optical channels is compatible with emerging quantum communication protocols. We next give an example specification of the metadata needed to manage and control quantum physical layer (QPHY) behavior and we extend the OpenFlow interface to accommodate this quantum metadata. We conclude by discussing near-term experimental efforts that can realize SDN’s principles for quantum communication.
OpenFlow arbitrated programmable network channels for managing quantum metadata
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dasari, Venkat R.; Humble, Travis S.
Quantum networks must classically exchange complex metadata between devices in order to carry out information for protocols such as teleportation, super-dense coding, and quantum key distribution. Demonstrating the integration of these new communication methods with existing network protocols, channels, and data forwarding mechanisms remains an open challenge. Software-defined networking (SDN) offers robust and flexible strategies for managing diverse network devices and uses. We adapt the principles of SDN to the deployment of quantum networks, which are composed from unique devices that operate according to the laws of quantum mechanics. We show how quantum metadata can be managed within a software-defined network using the OpenFlow protocol, and we describe how OpenFlow management of classical optical channels is compatible with emerging quantum communication protocols. We next give an example specification of the metadata needed to manage and control quantum physical layer (QPHY) behavior and we extend the OpenFlow interface to accommodate this quantum metadata. We conclude by discussing near-term experimental efforts that can realize SDN’s principles for quantum communication.
Insertion algorithms for network model database management systems
NASA Astrophysics Data System (ADS)
Mamadolimov, Abdurashid; Khikmat, Saburov
2017-12-01
The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and query comparisons are expensive, the efficiency requirement for managing algorithms is to minimize the number of query comparisons. We consider the updating operation for network model database management systems and develop a new sequential algorithm for it. We also suggest a distributed version of the algorithm.
MapReduce Based Parallel Bayesian Network for Manufacturing Quality Control
NASA Astrophysics Data System (ADS)
Zheng, Mao-Kuan; Ming, Xin-Guo; Zhang, Xian-Yu; Li, Guo-Ming
2017-09-01
Increasing complexity of industrial products and manufacturing processes has challenged conventional statistics-based quality management approaches under dynamic production conditions. A Bayesian network and big data analytics integrated approach for manufacturing process quality analysis and control is proposed. Based on the Hadoop distributed architecture and the MapReduce parallel computing model, the large volume and variety of quality-related data generated during the manufacturing process can be handled. Artificial intelligence algorithms, including Bayesian network learning, classification and reasoning, are embedded into the Reduce process. Relying on the ability of the Bayesian network to deal with dynamic and uncertain problems and the parallel computing power of MapReduce, a Bayesian network of factors affecting quality is built based on prior probability distributions and modified with posterior probability distributions. A case study on hull segment manufacturing precision management for ship and offshore platform building shows that computing speed accelerates almost in direct proportion to the increase in computing nodes. It is also shown that the proposed model is feasible for locating and reasoning about root causes, forecasting manufacturing outcomes, and intelligent decision-making for precision problem solving. The integration of big data analytics and the BN method offers a whole new perspective on manufacturing quality control.
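The map/reduce split for learning conditional probability tables can be illustrated in a single process, as in the sketch below; the record fields and the tiny temperature-defect network are made up, and the real system's Hadoop machinery is not reproduced.

```python
# Single-process sketch of the map/reduce split for learning a Bayesian
# network's conditional probability tables from quality records (the real
# system runs on Hadoop; here map and reduce are plain functions). The record
# fields and the tiny network "temperature -> defect" are made up.
from collections import Counter, defaultdict
from itertools import chain

records = [
    {"temperature": "high", "defect": "yes"},
    {"temperature": "high", "defect": "no"},
    {"temperature": "low",  "defect": "no"},
    {"temperature": "low",  "defect": "no"},
]

def map_phase(record):
    """Emit ((parent value, child value), 1) pairs for the edge temperature -> defect."""
    yield (record["temperature"], record["defect"]), 1

def reduce_phase(pairs):
    """Aggregate counts and normalise into conditional probabilities P(defect | temperature)."""
    counts = Counter()
    for key, n in pairs:
        counts[key] += n
    cpt = defaultdict(dict)
    for (parent, child), n in counts.items():
        total = sum(v for (p, _), v in counts.items() if p == parent)
        cpt[parent][child] = n / total
    return dict(cpt)

cpt = reduce_phase(chain.from_iterable(map_phase(r) for r in records))
print(cpt)   # {'high': {'yes': 0.5, 'no': 0.5}, 'low': {'no': 1.0}}
```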
A cooperative model for IS security risk management in distributed environment.
Feng, Nan; Zheng, Chundong
2014-01-01
Given the increasing cooperation between organizations, the flexible exchange of security information across the allied organizations is critical to effectively manage information systems (IS) security in a distributed environment. In this paper, we develop a cooperative model for IS security risk management in a distributed environment. In the proposed model, the exchange of security information among the interconnected IS under distributed environment is supported by Bayesian networks (BNs). In addition, for an organization's IS, a BN is utilized to represent its security environment and dynamically predict its security risk level, by which the security manager can select an optimal action to safeguard the firm's information resources. The actual case studied illustrates the cooperative model presented in this paper and how it can be exploited to manage the distributed IS security risk effectively.
Coordinating complex problem-solving among distributed intelligent agents
NASA Technical Reports Server (NTRS)
Adler, Richard M.
1992-01-01
A process-oriented control model is described for distributed problem solving. The model coordinates the transfer and manipulation of information across independent networked applications, both intelligent and conventional. The model was implemented using SOCIAL, a set of object-oriented tools for distributed computing. Complex sequences of distributed tasks are specified in terms of high-level scripts. Scripts are executed by SOCIAL objects called Manager Agents, which realize an intelligent coordination model that routes individual tasks to suitable server applications across the network. These tools are illustrated in a prototype distributed system for decision support of ground operations for NASA's Space Shuttle fleet.
NASA Astrophysics Data System (ADS)
Ohyama, Takashi; Enomoto, Hiroyuki; Takei, Yuichiro; Maeda, Yuji
2009-05-01
Most of Japan's local governments use municipal disaster-management radio communications systems to communicate information on disasters or terrorism to residents. The national government is promoting the digitalization of these systems by local governments, but only a small number (approximately 10%) have introduced such equipment because it requires a large investment. On the other hand, many local governments are installing optical fiber networks to eliminate the "digital divide." We herein propose a communication system as an alternative or supplement to municipal disaster-management radio communications systems, which utilizes municipal optical fiber networks, the internet, and similar networks and terminals. The system uses multiple existing networks and is capable of instantly distributing risk management information to all residents and of controlling that information. We describe the system overview and the field trials conducted with a local government using this system.
Positive train control interoperability and networking research : final report.
DOT National Transportation Integrated Search
2015-12-01
This document describes the initial development of an ITC PTC Shared Network (IPSN), a hosted : environment to support the distribution, configuration management, and IT governance of Interoperable : Train Control (ITC) Positive Train Control (PTC) s...
Managing Documents in the Wider Area: Intelligent Document Management.
ERIC Educational Resources Information Center
Bittleston, Richard
1995-01-01
Discusses techniques for managing documents in wide area networks, reviews technique limitations, and offers recommendations to database designers. Presented techniques include: increasing bandwidth, reducing data traffic, synchronizing documentation, partial synchronization, audit trails, navigation, and distribution control and security. Two…
Networked sensors for the combat forces
NASA Astrophysics Data System (ADS)
Klager, Gene
2004-11-01
Real-time and detailed information is critical to the success of ground combat forces. Current manned reconnaissance, surveillance, and target acquisition (RSTA) capabilities are not sufficient to cover battlefield intelligence gaps, provide Beyond-Line-of-Sight (BLOS) targeting, and the ambush avoidance information necessary for combat forces operating in hostile situations, complex terrain, and conducting military operations in urban terrain. This paper describes a current US Army program developing advanced networked unmanned/unattended sensor systems to survey these gaps and provide the Commander with real-time, pertinent information. Networked Sensors for the Combat Forces plans to develop and demonstrate a new generation of low cost distributed unmanned sensor systems organic to the RSTA Element. Networked unmanned sensors will provide remote monitoring of gaps, will increase a unit's area of coverage, and will provide the commander organic assets to complete his Battlefield Situational Awareness (BSA) picture for direct and indirect fire weapons, early warning, and threat avoidance. Current efforts include developing sensor packages for unmanned ground vehicles, small unmanned aerial vehicles, and unattended ground sensors using advanced sensor technologies. These sensors will be integrated with robust networked communications and Battle Command tools for mission planning, intelligence "reachback", and sensor data management. The network architecture design is based on a model that identifies a three-part modular design: 1) standardized sensor message protocols, 2) Sensor Data Management, and 3) Service Oriented Architecture. This simple model provides maximum flexibility for data exchange, information management and distribution. Products include: Sensor suites optimized for unmanned platforms, stationary and mobile versions of the Sensor Data Management Center, Battle Command planning tools, networked communications, and sensor management software. Details of these products and recent test results will be presented.
NASA Astrophysics Data System (ADS)
Johnston, William; Ernst, M.; Dart, E.; Tierney, B.
2014-04-01
Today's large-scale science projects involve world-wide collaborations that depend on moving massive amounts of data from an instrument to potentially thousands of computing and storage systems at hundreds of collaborating institutions to accomplish their science. This is true for ATLAS and CMS at the LHC, and it is true for the climate sciences, Belle-II at the KEK collider, genome sciences, the SKA radio telescope, and ITER, the international fusion energy experiment. DOE's Office of Science has been collecting science discipline and instrument requirements for network-based data management and analysis for more than a decade. As a result, certain key issues are seen across essentially all science disciplines that rely on the network for significant data transfer, even if the data quantities are modest compared to projects like the LHC experiments. These issues are what this talk will address, to wit: 1. Optical signal transport advances enabling 100 Gb/s circuits that span the globe on optical fiber, with each fiber carrying 100 such channels; 2. Network router and switch requirements to support high-speed international data transfer; 3. Data transport (TCP is still the norm) requirements to support high-speed international data transfer (e.g. error-free transmission); 4. Network monitoring and testing techniques and infrastructure to maintain the required error-free operation of the many R&E networks involved in international collaborations; 5. Operating system evolution to support very high-speed network I/O; 6. New network architectures and services in the LAN (campus) and WAN networks to support data-intensive science; 7. Data movement and management techniques and software that can maximize the throughput on the network connections between distributed data handling systems; and 8. New approaches to widely distributed workflow systems that can support the data movement and analysis required by the science. All of these areas must be addressed to enable large-scale, widely distributed data analysis systems, and the experience of the LHC can be applied to other scientific disciplines. In particular, specific analogies to the SKA will be cited in the talk.
Communal Cooperation in Sensor Networks for Situation Management
NASA Technical Reports Server (NTRS)
Jones, Kennie H.; Lodding, Kenneth N.; Olariu, Stephan; Wilson, Larry; Xin,Chunsheng
2006-01-01
Situation management is a rapidly evolving science in which managed sources are processed as real-time streams of events and fused in a way that maximizes comprehension, thus enabling better decisions for action. Sensor networks provide a new technology that promises ubiquitous input and action throughout an environment, which can substantially improve the information available to the process. Here we describe a NASA program that requires improvements in sensor networks and situation management. We present an approach for massively deployed sensor networks that does not rely on centralized control but is founded in lessons learned from the way biological ecosystems are organized. In this approach, fully distributed data aggregation and integration can be performed in a scalable fashion where individual motes operate based on local information, making local decisions that achieve globally meaningful results. This exemplifies the robust, fault-tolerant infrastructure required for successful situation management systems.
Autonomous smart sensor network for full-scale structural health monitoring
NASA Astrophysics Data System (ADS)
Rice, Jennifer A.; Mechitov, Kirill A.; Spencer, B. F., Jr.; Agha, Gul A.
2010-04-01
The demands of aging infrastructure require effective methods for structural monitoring and maintenance. Wireless smart sensor networks offer the ability to enhance structural health monitoring (SHM) practices through the utilization of onboard computation to achieve distributed data management. Such an approach is scalable to the large number of sensor nodes required for high-fidelity modal analysis and damage detection. While smart sensor technology is not new, the number of full-scale SHM applications has been limited. This slow progress is due, in part, to the complex network management issues that arise when moving from a laboratory setting to a full-scale monitoring implementation. This paper presents flexible network management software that enables continuous and autonomous operation of wireless smart sensor networks for full-scale SHM applications. The software components combine sleep/wake cycling for enhanced power management with threshold detection for triggering network-wide tasks, such as synchronized sensing or decentralized modal analysis, during periods of critical structural response.
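The following sketch illustrates the sleep/wake plus threshold-detection pattern described above in simplified form; the wake interval, the RMS threshold, and the simulated accelerometer readings are assumptions chosen only to make the loop runnable, not parameters from the software described.

```python
# Hedged sketch of threshold-triggered sensing: a node wakes periodically,
# samples acceleration, and only triggers a network-wide synchronized-sensing
# task when the short-term RMS exceeds a threshold.
import math, random, time

THRESHOLD_G = 0.05        # assumed RMS acceleration threshold (g)
WAKE_INTERVAL_S = 0.01    # stands in for the low-power sleep interval

def sample_rms(n=50):
    # stand-in for reading the accelerometer n times
    samples = [random.gauss(0.0, 0.02) for _ in range(n)]
    return math.sqrt(sum(s * s for s in samples) / n)

def node_loop(cycles=5):
    for _ in range(cycles):
        rms = sample_rms()
        if rms > THRESHOLD_G:
            print("trigger: start synchronized sensing across the network")
        else:
            print(f"below threshold ({rms:.3f} g), back to sleep")
        time.sleep(WAKE_INTERVAL_S)

node_loop()
```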
Downlink power distributions for 2G and 3G mobile communication networks.
Colombi, Davide; Thors, Björn; Persson, Tomas; Wirén, Niklas; Larsson, Lars-Eric; Jonsson, Mikael; Törnevik, Christer
2013-12-01
Knowledge of realistic power levels is key when conducting accurate EMF exposure assessments. In this study, downlink output power distributions for radio base stations in 2G and 3G mobile communication networks have been assessed. The distributions were obtained from network measurement data collected from the Operations Support System, which normally is used for network monitoring and management. Significant amounts of data were gathered simultaneously for large sets of radio base stations covering wide geographical areas and different environments. The method was validated with in situ measurements. For the 3G network, the 90th percentile of the averaged output power during high traffic hours was found to be 43 % of the maximum available power. The corresponding number for 2G, with two or more transceivers installed, was 65 % or below.
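The kind of statistic reported here (a percentile of averaged output power normalized by the maximum available power) can be reproduced on synthetic data in a few lines; the sample values and the assumed maximum power below are illustrative, not measurement data from the study.

```python
# Illustrative calculation of the 90th percentile of averaged output power,
# expressed as a fraction of the maximum available power. Synthetic data.
import random

max_power_w = 20.0                                             # assumed max available power
samples_w = [random.uniform(2.0, 12.0) for _ in range(10000)]  # hourly averaged powers

normalized = sorted(p / max_power_w for p in samples_w)
p90 = normalized[int(0.9 * (len(normalized) - 1))]
print(f"90th percentile of averaged output power: {100 * p90:.0f}% of max")
```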
Advanced systems engineering and network planning support
NASA Technical Reports Server (NTRS)
Walters, David H.; Barrett, Larry K.; Boyd, Ronald; Bazaj, Suresh; Mitchell, Lionel; Brosi, Fred
1990-01-01
The objective of this task was to take a fresh look at the NASA Space Network Control (SNC) element for the Advanced Tracking and Data Relay Satellite System (ATDRSS) such that it can be made more efficient and responsive to the user by introducing new concepts and technologies appropriate for the 1997 timeframe. In particular, it was desired to investigate the technologies and concepts employed in similar systems that may be applicable to the SNC. The recommendations resulting from this study include resource partitioning, on-line access to subsets of the SN schedule, fluid scheduling, increased use of demand access on the MA service, automating Inter-System Control functions using monitor by exception, increased automation for distributed data management and distributed work management, viewing SN operational control in terms of the OSI Management framework, and the introduction of automated interface management.
Design and performance evaluation of a distributed OFDMA-based MAC protocol for MANETs.
Park, Jaesung; Chung, Jiyoung; Lee, Hyungyu; Lee, Jung-Ryun
2014-01-01
In this paper, we propose a distributed MAC protocol for OFDMA-based wireless mobile ad hoc multihop networks, in which the resource reservation and data transmission procedures are operated in a distributed manner. A frame format is designed considering the characteristic of OFDMA that each node can transmit or receive data to or from multiple nodes simultaneously. Under this frame structure, we propose a distributed resource management method including network state estimation and resource reservation processes. We categorize five types of logical errors according to their root causes and show that two of the logical errors are inevitable while three of them are avoided under the proposed distributed MAC protocol. In addition, we provide a systematic method to determine the advertisement period of each node by presenting a clear relation between the accuracy of the estimated network states and the signaling overhead. We evaluate the performance of the proposed protocol with respect to the reservation success rate and the success rate of data transmission. Since our method focuses on avoiding logical errors, it can easily be placed on top of other resource allocation methods that focus on the physical layer issues of the resource management problem and can be interworked with them.
A Distributed Dynamic Programming-Based Solution for Load Management in Smart Grids
NASA Astrophysics Data System (ADS)
Zhang, Wei; Xu, Yinliang; Li, Sisi; Zhou, MengChu; Liu, Wenxin; Xu, Ying
2018-03-01
Load management is being recognized as an important option for active user participation in the energy market. Traditional load management methods usually require a centralized powerful control center and a two-way communication network between the system operators and energy end-users. The increasing user participation in smart grids may limit their applications. In this paper, a distributed solution for load management in emerging smart grids is proposed. The load management problem is formulated as a constrained optimization problem aiming at maximizing the overall utility of users while meeting the requirement for load reduction requested by the system operator, and is solved by using a distributed dynamic programming algorithm. The algorithm is implemented via a distributed framework and thus can deliver a highly desired distributed solution. It avoids the required use of a centralized coordinator or control center, and can achieve satisfactory outcomes for load management. Simulation results with various test systems demonstrate its effectiveness.
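A toy, centralized version of the underlying optimization can clarify what the distributed dynamic programming is computing: allocate a requested total load reduction across users at minimal utility loss. The per-user utility-loss table and the required reduction below are invented, and the sketch deliberately ignores the distributed implementation the paper describes.

```python
# Minimal dynamic-programming sketch: meet a requested load reduction while
# minimizing total utility loss (equivalently, maximizing remaining utility).
import math

# utility_loss[u][r] = utility lost if user u curtails r units (r = 0..3); toy numbers
utility_loss = [
    [0, 1, 3, 7],
    [0, 2, 3, 4],
    [0, 1, 2, 6],
]
required_reduction = 5

INF = math.inf
# dp[r] = minimal utility loss to achieve at least r units of reduction so far
dp = [0.0] + [INF] * required_reduction
for losses in utility_loss:
    new_dp = [INF] * (required_reduction + 1)
    for r, cost in enumerate(dp):
        if cost == INF:
            continue
        for cut, loss in enumerate(losses):
            nr = min(required_reduction, r + cut)   # reduction beyond the target still satisfies it
            new_dp[nr] = min(new_dp[nr], cost + loss)
    dp = new_dp

print("minimal total utility loss:", dp[required_reduction])
```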
The Network Configuration of an Object Relational Database Management System
NASA Technical Reports Server (NTRS)
Diaz, Philip; Harris, W. C.
2000-01-01
The networking and implementation of the Oracle Database Management System (ODBMS) require developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object-relational database management system (DBMS). By using distributed processing, processing is split between the database server and client application programs. The DBMS handles all the responsibilities of the server, while the workstations running the database application concentrate on the interpretation and display of data.
NASA Astrophysics Data System (ADS)
Felix, J.
The management center and new circuit-switching services offered by the French Telecom I network are described. Attention is focused on business services. The satellite has a 125 Mbit/s capability distributed over 5 frequency bands, yielding the equivalent of 1800 channels. Data are transmitted in digitized bursts with TDMA techniques. Besides the management center, Telecom I interfaces with 310 local network antennas whose access is managed by the center through a reservation service and protocol assignment. The center logs and supervises alarms and network events, monitors traffic, logs taxation charges, and manages the man-machine dialog for TDMA and terrestrial operations. Time slots are arranged in minimum 10-minute segments. Reservations can be directly accessed by up to 1000 terminals. All traffic is handled on a call-by-call basis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Postigo Marcos, Fernando E.; Domingo, Carlos Mateo; San Roman, Tomas Gomez
Under the increasing penetration of distributed energy resources and new smart network technologies, distribution utilities face new challenges and opportunities to ensure reliable operations, manage service quality, and reduce operational and investment costs. Simultaneously, the research community is developing algorithms for advanced controls and distribution automation that can help to address some of these challenges. However, there is a shortage of realistic test systems that are publicly available for development, testing, and evaluation of such new algorithms. Concerns around revealing critical infrastructure details and customer privacy have severely limited the number of actual networks published and available for testing. In recent decades, several distribution test feeders and US-featured representative networks have been published, but the scale, complexity, and control data vary widely. This paper presents a first-of-a-kind structured literature review of published distribution test networks with a special emphasis on classifying their main characteristics and identifying the types of studies for which they have been used. As a result, this both aids researchers in choosing suitable test networks for their needs and highlights the opportunities and directions for further test system development. In particular, we highlight the need for building large-scale synthetic networks to overcome the identified drawbacks of current distribution test feeders.
Postigo Marcos, Fernando E.; Domingo, Carlos Mateo; San Roman, Tomas Gomez; ...
2017-11-18
Under the increasing penetration of distributed energy resources and new smart network technologies, distribution utilities face new challenges and opportunities to ensure reliable operations, manage service quality, and reduce operational and investment costs. Simultaneously, the research community is developing algorithms for advanced controls and distribution automation that can help to address some of these challenges. However, there is a shortage of realistic test systems that are publicly available for development, testing, and evaluation of such new algorithms. Concerns around revealing critical infrastructure details and customer privacy have severely limited the number of actual networks published and available for testing. In recent decades, several distribution test feeders and US-featured representative networks have been published, but the scale, complexity, and control data vary widely. This paper presents a first-of-a-kind structured literature review of published distribution test networks with a special emphasis on classifying their main characteristics and identifying the types of studies for which they have been used. As a result, this both aids researchers in choosing suitable test networks for their needs and highlights the opportunities and directions for further test system development. In particular, we highlight the need for building large-scale synthetic networks to overcome the identified drawbacks of current distribution test feeders.
A Cooperative Model for IS Security Risk Management in Distributed Environment
Zheng, Chundong
2014-01-01
Given the increasing cooperation between organizations, the flexible exchange of security information across the allied organizations is critical to effectively manage information systems (IS) security in a distributed environment. In this paper, we develop a cooperative model for IS security risk management in a distributed environment. In the proposed model, the exchange of security information among the interconnected IS in a distributed environment is supported by Bayesian networks (BNs). In addition, for an organization's IS, a BN is utilized to represent its security environment and dynamically predict its security risk level, by which the security manager can select an optimal action to safeguard the firm's information resources. An actual case study illustrates the cooperative model presented in this paper and how it can be exploited to manage distributed IS security risk effectively. PMID:24563626
A Novel College Network Resource Management Method using Cloud Computing
NASA Astrophysics Data System (ADS)
Lin, Chen
At present, college information construction mainly comprises the construction of campus networks and management information systems, and many problems arise during this process. Cloud computing is a development of distributed processing, parallel processing and grid computing, in which data are stored in the cloud and software and services are placed in the cloud, built on top of various standards and protocols so that they can be accessed through all kinds of devices. This article introduces cloud computing and its functions, then analyzes the existing problems of college network resource management; cloud computing technology and methods are then applied to the construction of a college information sharing platform.
Dynamic Power Distribution System Management With a Locally Connected Communication Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall-Anese, Emiliano; Zhang, Kaiqing; Basar, Tamer
Coordinated optimization and control of distribution-level assets can enable a reliable and optimal integration of massive amounts of distributed energy resources (DERs) and facilitate distribution system management (DSM). Accordingly, the objective is to coordinate the power injection at the DERs to maintain certain quantities across the network, e.g., voltage magnitude, line flows, or line losses, to be close to a desired profile. By and large, the performance of the DSM algorithms has been challenged by two factors: i) the possibly non-strongly connected communication network over DERs that hinders the coordination; ii) the dynamics of the real system caused by the DERs with heterogeneous capabilities, time-varying operating conditions, and real-time measurement mismatches. In this paper, we investigate the modeling and algorithm design and analysis with the consideration of these two factors. In particular, a game theoretic characterization is first proposed to account for a locally connected communication network over DERs, along with the analysis of the existence and uniqueness of the Nash equilibrium (NE) therein. To achieve the equilibrium in a distributed fashion, a projected-gradient-based asynchronous DSM algorithm is then advocated. The algorithm performance, including the convergence speed and the tracking error, is analytically guaranteed under the dynamic setting. Extensive numerical tests on both synthetic and realistic cases corroborate the analytical results derived.
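The projected-gradient idea can be sketched in a few lines: each DER descends the gradient of a deviation cost and then projects its injection onto its own capability limits. The quadratic cost, step size, and limits below are assumptions for illustration and omit the paper's game-theoretic and asynchronous aspects.

```python
# Sketch of a projected-gradient update: each DER nudges its power injection
# to pull the aggregate toward a desired value, then projects onto its limits.
desired_total = 5.0                             # assumed target for the aggregate injection
limits = [(0.0, 2.0), (0.0, 3.0), (0.0, 1.5)]   # per-DER (min, max) capability
p = [0.0, 0.0, 0.0]                             # current injections
step = 0.2

for _ in range(200):
    mismatch = sum(p) - desired_total           # gradient of 0.5*(sum(p)-target)^2 w.r.t. each p_i
    p = [min(hi, max(lo, pi - step * mismatch)) # gradient step followed by projection
         for pi, (lo, hi) in zip(p, limits)]

print([round(x, 2) for x in p], "total:", round(sum(p), 2))
```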
A Distributed and Energy-Efficient Algorithm for Event K-Coverage in Underwater Sensor Networks.
Jiang, Peng; Xu, Yiming; Liu, Jun
2017-01-19
For event dynamic K-coverage algorithms, each management node selects its assistant node by using a greedy algorithm, without considering the residual energy or situations in which a node is selected by several events. This approach affects network energy consumption and balance. Therefore, this study proposes a distributed and energy-efficient event K-coverage algorithm (DEEKA). After the network achieves 1-coverage, the nodes that detect the same event compete for the role of event management node based on the number of candidate nodes, the average residual energy, and the distance to the event. Second, each management node estimates the probability of its neighbor nodes being selected by the event it manages, using the distance level, the residual energy level, and the number of events dynamically covered by these nodes. Third, each management node establishes an optimization model that takes the expected energy consumption, the residual energy variance of its neighbor nodes, and the detection performance for the events it manages as objectives. Finally, each management node uses a constrained non-dominated sorting genetic algorithm (NSGA-II) to obtain the Pareto set of the model and selects the best strategy via the technique for order preference by similarity to an ideal solution (TOPSIS). The algorithm considers the effect of harsh underwater environments on information collection and transmission, as well as the residual energy of a node and situations in which the node is selected by several other events. Simulation results show that, unlike the on-demand variable sensing K-coverage algorithm, DEEKA balances and reduces network energy consumption, thereby prolonging the network's best service quality and lifetime.
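The final TOPSIS step can be illustrated compactly: rank Pareto-optimal strategies by closeness to an ideal point across the stated criteria. The candidate strategies, criterion values, and equal weighting below are invented for the sketch and are not the paper's data.

```python
# Hedged sketch of TOPSIS over a small Pareto set. Criteria: expected energy
# use and residual-energy variance (minimize), detection performance (maximize).
import math

pareto = {                      # strategy -> (energy, variance, detection)
    "A": (4.0, 0.30, 0.90),
    "B": (3.5, 0.45, 0.85),
    "C": (5.0, 0.20, 0.95),
}
benefit = [False, False, True]  # which criteria are "larger is better"

# vector-normalize each criterion
cols = list(zip(*pareto.values()))
norms = [math.sqrt(sum(v * v for v in col)) for col in cols]
scored = {k: [v / n for v, n in zip(vals, norms)] for k, vals in pareto.items()}

ideal = [max(c) if b else min(c) for c, b in zip(zip(*scored.values()), benefit)]
anti = [min(c) if b else max(c) for c, b in zip(zip(*scored.values()), benefit)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# closeness coefficient: 1 means at the ideal point, 0 at the anti-ideal point
closeness = {k: dist(v, anti) / (dist(v, anti) + dist(v, ideal))
             for k, v in scored.items()}
print("best strategy:", max(closeness, key=closeness.get), closeness)
```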
[AC-STB: dedicated software for managed healthcare of chronic headache patients].
Wallasch, T-M; Bek, J; Pabel, R; Modahl, M; Demir, M; Straube, A
2009-04-01
This paper examines a new approach to managed healthcare in which a network of care providers exchanges patient information through the internet. Integrating networks of clinical specialists and general care providers promises to achieve qualitative and economic improvements in the German healthcare system. In practice, problems related to patient management and data exchange between the managing clinic and assorted caregivers arise. The implementation and use of a cross-spectrum computerized solution for the management of patients and their care is the key to a successful managed healthcare system. This paper documents the managed healthcare of chronic headache patients and the development of an IT solution capable of providing distributed patient care and case management.
Bailly, Jean-Stéphane; Vinatier, Fabrice
2018-01-01
To optimize ecosystem services provided by agricultural drainage networks (ditches) in headwater catchments, we need to manage the spatial distribution of plant species living in these networks. Geomorphological variables have been shown to be important predictors of plant distribution in other ecosystems because they control the water regime, the sediment deposition rates and the sun exposure in the ditches. Whether such variables may be used to predict plant distribution in agricultural drainage networks is unknown. We collected presence and absence data for 10 herbaceous plant species in a subset of a network of drainage ditches (35 km long) within a Mediterranean agricultural catchment. We simulated their spatial distribution with GLM and Maxent models using geomorphological variables and distance to natural lands and roads. Models were validated using k-fold cross-validation. We then compared the mean Area Under the Curve (AUC) values obtained for each model and other metrics issued from the confusion matrices between observed and predicted variables. Based on the results of all metrics, the models were efficient at predicting the distribution of seven species out of ten, confirming the relevance of geomorphological variables and distance to natural lands and roads to explain the occurrence of plant species in this Mediterranean catchment. In particular, the importance of the landscape geomorphological variables, i.e., the importance of the geomorphological features encompassing a broad environment around the ditch, has been highlighted. This suggests that agro-ecological measures for managing ecosystem services provided by ditch plants should focus on the control of the hydrological and sedimentological connectivity at the catchment scale. For example, the density of the ditch network could be modified or the spatial distribution of vegetative filter strips used for sediment trapping could be optimized. In addition, the vegetative filter strips could constitute new seed bank sources for species that are affected by the distance to natural lands and roads. PMID:29360857
Rudi, Gabrielle; Bailly, Jean-Stéphane; Vinatier, Fabrice
2018-01-01
To optimize ecosystem services provided by agricultural drainage networks (ditches) in headwater catchments, we need to manage the spatial distribution of plant species living in these networks. Geomorphological variables have been shown to be important predictors of plant distribution in other ecosystems because they control the water regime, the sediment deposition rates and the sun exposure in the ditches. Whether such variables may be used to predict plant distribution in agricultural drainage networks is unknown. We collected presence and absence data for 10 herbaceous plant species in a subset of a network of drainage ditches (35 km long) within a Mediterranean agricultural catchment. We simulated their spatial distribution with GLM and Maxent models using geomorphological variables and distance to natural lands and roads. Models were validated using k-fold cross-validation. We then compared the mean Area Under the Curve (AUC) values obtained for each model and other metrics issued from the confusion matrices between observed and predicted variables. Based on the results of all metrics, the models were efficient at predicting the distribution of seven species out of ten, confirming the relevance of geomorphological variables and distance to natural lands and roads to explain the occurrence of plant species in this Mediterranean catchment. In particular, the importance of the landscape geomorphological variables, i.e., the importance of the geomorphological features encompassing a broad environment around the ditch, has been highlighted. This suggests that agro-ecological measures for managing ecosystem services provided by ditch plants should focus on the control of the hydrological and sedimentological connectivity at the catchment scale. For example, the density of the ditch network could be modified or the spatial distribution of vegetative filter strips used for sediment trapping could be optimized. In addition, the vegetative filter strips could constitute new seed bank sources for species that are affected by the distance to natural lands and roads.
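A minimal version of the validation workflow described (a GLM scored by AUC under k-fold cross-validation) might look as follows; scikit-learn is used as a stand-in, and the predictors and presence/absence responses are synthetic rather than the ditch survey data.

```python
# Minimal sketch: logistic regression (a GLM with logit link) for species
# presence/absence from geomorphological predictors, scored by AUC under
# 5-fold cross-validation. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))           # e.g. slope, upstream area, distance to roads
logit = 1.2 * X[:, 0] - 0.8 * X[:, 2]   # synthetic species-environment relationship
y = (rng.random(200) < 1 / (1 + np.exp(-logit))).astype(int)  # presence/absence

model = LogisticRegression()
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"mean AUC over folds: {auc.mean():.2f}")
```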
Hung, Le Xuan; Canh, Ngo Trong; Lee, Sungyoung; Lee, Young-Koo; Lee, Heejo
2008-01-01
For many sensor network applications such as military or homeland security, it is essential for users (sinks) to access the sensor network while they are moving. Sink mobility brings new challenges to secure routing in large-scale sensor networks. Previous studies on sink mobility have mainly focused on efficiency and effectiveness of data dissemination without security consideration. Also, studies and experiences have shown that considering security during design time is the best way to provide security for sensor network routing. This paper presents an energy-efficient secure routing and key management for mobile sinks in sensor networks, called SCODEplus. It is a significant extension of our previous study in five aspects: (1) Key management scheme and routing protocol are considered during design time to increase security and efficiency; (2) The network topology is organized in a hexagonal plane which supports more efficiency than previous square-grid topology; (3) The key management scheme can eliminate the impacts of node compromise attacks on links between non-compromised nodes; (4) Sensor node deployment is based on Gaussian distribution which is more realistic than uniform distribution; (5) No GPS or the like is required to provide sensor node location information. Our security analysis demonstrates that the proposed scheme can defend against common attacks in sensor networks including node compromise attacks, replay attacks, selective forwarding attacks, sinkhole and wormhole, Sybil attacks, HELLO flood attacks. Both mathematical and simulation-based performance evaluation show that the SCODEplus significantly reduces the communication overhead, energy consumption, packet delivery latency while it always delivers more than 97 percent of packets successfully. PMID:27873956
Hung, Le Xuan; Canh, Ngo Trong; Lee, Sungyoung; Lee, Young-Koo; Lee, Heejo
2008-12-03
For many sensor network applications such as military or homeland security, it is essential for users (sinks) to access the sensor network while they are moving. Sink mobility brings new challenges to secure routing in large-scale sensor networks. Previous studies on sink mobility have mainly focused on efficiency and effectiveness of data dissemination without security consideration. Also, studies and experiences have shown that considering security during design time is the best way to provide security for sensor network routing. This paper presents an energy-efficient secure routing and key management for mobile sinks in sensor networks, called SCODEplus. It is a significant extension of our previous study in five aspects: (1) Key management scheme and routing protocol are considered during design time to increase security and efficiency; (2) The network topology is organized in a hexagonal plane which supports more efficiency than previous square-grid topology; (3) The key management scheme can eliminate the impacts of node compromise attacks on links between non-compromised nodes; (4) Sensor node deployment is based on Gaussian distribution which is more realistic than uniform distribution; (5) No GPS or the like is required to provide sensor node location information. Our security analysis demonstrates that the proposed scheme can defend against common attacks in sensor networks including node compromise attacks, replay attacks, selective forwarding attacks, sinkhole and wormhole, Sybil attacks, HELLO flood attacks. Both mathematical and simulation-based performance evaluation show that the SCODEplus significantly reduces the communication overhead, energy consumption, packet delivery latency while it always delivers more than 97 percent of packets successfully.
NASA Astrophysics Data System (ADS)
Alyami, Saeed
Installation of photovoltaic (PV) units can pose great challenges to existing electrical systems. Issues such as voltage rise, protection coordination, islanding detection, harmonics, increased or changed short-circuit levels, etc., need to be carefully addressed before we can see a wide adoption of this environmentally friendly technology. Voltage rise or overvoltage issues are of particular importance to address before deploying more PV systems in distribution networks. This dissertation proposes a comprehensive solution to deal with voltage violations in distribution networks, from controlling PV power outputs and the electricity consumption of smart appliances in real time to optimal placement of PVs at the planning stage. The dissertation is composed of three parts: the literature review, the work that has already been done, and the future research tasks. An overview of renewable energy generation and its challenges is given in Chapter 1. The overall literature survey, motivation and scope of study are also outlined in the chapter. Detailed literature reviews are given in the rest of the chapters. The overvoltage and undervoltage phenomena in typical distribution networks with integration of PVs are further explained in Chapter 2. Possible approaches for voltage quality control are also discussed in this chapter, followed by a discussion of the importance of load management for PHEVs and appliances and its benefits to electric utilities and end users. A new real power capping method is presented in Chapter 3 to prevent overvoltage by adaptively setting the power caps for PV inverters in real time. The proposed method can maintain voltage profiles below a pre-set upper limit while maximizing the PV generation and fairly distributing the real power curtailments among all the PV systems in the network. As a result, each of the PV systems in the network has an equal opportunity to generate electricity and shares the responsibility of voltage regulation. The method does not require global information and can be implemented either under a centralized supervisory control scheme or in a distributed way via consensus control. Chapter 4 investigates autonomous operation schedules for three types of intelligent appliances (or residential controllable loads) that do not require external signals, for cost saving and for assisting the management of possible photovoltaic generation systems installed in the same distribution network. The three types of controllable loads studied in the chapter are electric water heaters, refrigerator deicing loads, and dishwashers. Chapter 5 investigates a method to mitigate overvoltage issues at the planning stage. A probabilistic method is presented in the chapter to evaluate the overvoltage risk in a distribution network with different PV capacity sizes under different load levels. The Kolmogorov-Smirnov test (K-S test) is used to identify the most appropriate probability distributions for solar irradiance in different months. To increase accuracy, an iterative process is used to obtain the maximum allowable injection of active power from PVs. Conclusions and discussion of future work are given in Chapter 6.
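The distribution-selection step described for Chapter 5 can be sketched as follows: fit several candidate distributions to irradiance samples and keep the one with the best Kolmogorov-Smirnov statistic. The candidate families, the synthetic samples, and the use of scipy are assumptions made for illustration.

```python
# Hedged sketch: pick the best-fitting candidate distribution for (synthetic)
# irradiance samples via the Kolmogorov-Smirnov statistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
irradiance = rng.beta(2.0, 1.5, size=500) * 1000.0   # synthetic daytime samples, W/m^2

candidates = {"beta": stats.beta, "weibull_min": stats.weibull_min, "norm": stats.norm}
results = {}
for name, dist in candidates.items():
    params = dist.fit(irradiance)                    # maximum-likelihood fit
    ks_stat, p_value = stats.kstest(irradiance, name, args=params)
    results[name] = (ks_stat, p_value)

best = min(results, key=lambda k: results[k][0])     # smallest K-S statistic wins
print("best-fitting candidate:", best, results[best])
```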
A resilient and secure software platform and architecture for distributed spacecraft
NASA Astrophysics Data System (ADS)
Otte, William R.; Dubey, Abhishek; Karsai, Gabor
2014-06-01
A distributed spacecraft is a cluster of independent satellite modules flying in formation that communicate via ad-hoc wireless networks. This system in space is a cloud platform that facilitates sharing sensors and other computing and communication resources across multiple applications, potentially developed and maintained by different organizations. Effectively, such an architecture can realize the functions of monolithic satellites at a reduced cost and with improved adaptivity and robustness. The openness of these architectures poses special challenges because the distributed software platform has to support applications from different security domains and organizations, and information flows have to be carefully managed and compartmentalized. If the platform is used as a robust shared resource, its management, configuration, and resilience become a challenge in itself. We have designed and prototyped a distributed software platform for such architectures. The core element of the platform is a new operating system whose services were designed to restrict access to the network and the file system, and to enforce resource management constraints for all non-privileged processes. Mixed-criticality applications operating at different security labels are deployed and controlled by a privileged management process that also pre-configures all information flows. This paper describes the design and objectives of this layer.
Optical processing for future computer networks
NASA Technical Reports Server (NTRS)
Husain, A.; Haugen, P. R.; Hutcheson, L. D.; Warrior, J.; Murray, N.; Beatty, M.
1986-01-01
In the development of future data management systems, such as the NASA Space Station, a major problem is the design and implementation of a high-performance communication network which is self-correcting and self-repairing, flexible, and evolvable. To achieve the goal of designing such a network, it will be essential to incorporate distributed adaptive network control techniques. The present paper provides an outline of the functional and communication network requirements for the Space Station data management system. Attention is given to the mathematical representation of the operations carried out to provide the required functionality at each layer of the communication protocol model. The possible implementation of specific communication functions in optics is also considered.
Performance evaluation of distributed wavelength assignment in WDM optical networks
NASA Astrophysics Data System (ADS)
Hashiguchi, Tomohiro; Wang, Xi; Morikawa, Hiroyuki; Aoyama, Tomonori
2004-04-01
In WDM wavelength-routed networks, prior to a data transfer, a call setup procedure is required to reserve a wavelength path between the source-destination node pair. A distributed approach to connection setup can achieve very high speed, while improving the reliability and reducing the implementation cost of the networks. However, along with many advantages, the distributed scheme poses several major challenges in how the management and allocation of wavelengths can be carried out efficiently. In this paper, we apply a distributed wavelength assignment algorithm named priority based wavelength assignment (PWA), originally proposed for use in burst-switched optical networks, to the problem of reserving wavelengths in path reservation protocols for distributed-control optical networks. Instead of assigning wavelengths randomly, this approach lets each node select the "safest" wavelengths based on information about wavelength utilization history, so that unnecessary future contention is prevented. The simulation results presented in this paper show that the proposed protocol can enhance the performance of the system without introducing any apparent drawbacks.
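The priority idea can be reduced to a one-line selection rule: among the currently free wavelengths, reserve the one with the lowest recent utilization so that future contention is less likely. The history counters below are invented and the sketch omits the protocol's signaling details.

```python
# Illustrative sketch of history-based wavelength selection (not the exact PWA
# priority computation): prefer the least recently used free wavelength.
def choose_wavelength(free_wavelengths, usage_history):
    """Return the free wavelength with the lowest recent-usage count."""
    return min(free_wavelengths, key=lambda w: usage_history.get(w, 0))

usage_history = {0: 12, 1: 3, 2: 7, 3: 3}   # times each wavelength was reserved recently
free = [0, 2, 3]
print("reserve wavelength", choose_wavelength(free, usage_history))   # -> 3
```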
Large-scale P2P network based distributed virtual geographic environment (DVGE)
NASA Astrophysics Data System (ADS)
Tan, Xicheng; Yu, Liang; Bian, Fuling
2007-06-01
Virtual Geographic Environments (VGE) have attracted broad attention as a kind of software information system that helps us understand and analyze the real geographic environment, and they have expanded into application service systems in distributed environments, i.e., distributed virtual geographic environment systems (DVGE), with some notable achievements. However, limited by factors such as the massive data volumes of VGE, network bandwidth, numerous concurrent requests, and economics, DVGE still faces challenges and problems that prevent the current network mode from providing the public with high-quality service. The rapid development of peer-to-peer network technology offers new ideas for solving the current challenges and problems of DVGE. Peer-to-peer network technology can effectively publish and search network resources so as to realize efficient information sharing. Accordingly, this paper puts forward a research subject on large-scale peer-to-peer network extension of DVGE, as well as a deep study of the network framework, routing mechanism, and DVGE data management on a P2P network.
A resource management architecture based on complex network theory in cloud computing federation
NASA Astrophysics Data System (ADS)
Zhang, Zehua; Zhang, Xuejie
2011-10-01
Cloud Computing Federation is a main trend of Cloud Computing. Resource management has a significant effect on the design, realization, and efficiency of a Cloud Computing Federation. A Cloud Computing Federation has the typical characteristics of a complex system; therefore, we propose a resource management architecture based on complex network theory for Cloud Computing Federation (abbreviated as RMABC) in this paper, with a detailed design of the resource discovery and resource announcement mechanisms. Compared with existing resource management mechanisms in distributed computing systems, a Task Manager in RMABC can use historical information and current state data obtained from other Task Managers for the evolution of the complex network composed of Task Managers, and thus has advantages in resource discovery speed, fault tolerance, and adaptive ability. The results of the model experiment confirmed the advantage of RMABC in resource discovery performance.
Managing Communications with Experts in Geographically Distributed Collaborative Networks
2009-03-01
agent architectures, and management of sensor-unmanned vehicle decision maker self-organizing environments. Although CENETIX has its beginnings...understanding how everything in a complex system is interconnected. Additionally, environmental factors that impact the management of communications with...unrestricted warfare environment. In "Unconventional Insights for Managing Stakeholder Trust", Pirson, et al. (2008) emphasizes the challenges of managing
Advanced algorithms for distributed fusion
NASA Astrophysics Data System (ADS)
Gelfand, A.; Smith, C.; Colony, M.; Bowman, C.; Pei, R.; Huynh, T.; Brown, C.
2008-03-01
The US Military has been undergoing a radical transition from a traditional "platform-centric" force to one capable of performing in a "Network-Centric" environment. This transformation will place all of the data needed to efficiently meet tactical and strategic goals at the warfighter's fingertips. With access to this information, the challenge of fusing data from across the battlespace into an operational picture for real-time Situational Awareness emerges. In such an environment, centralized fusion approaches will have limited application due to the constraints of real-time communications networks and computational resources. To overcome these limitations, we are developing a formalized architecture for fusion and track adjudication that allows the distribution of fusion processes over a dynamically created and managed information network. This network will support the incorporation and utilization of low level tracking information within the Army Distributed Common Ground System (DCGS-A) or Future Combat System (FCS). The framework is based on Bowman's Dual Node Network (DNN) architecture that utilizes a distributed network of interlaced fusion and track adjudication nodes to build and maintain a globally consistent picture across all assets.
Programmable multi-node quantum network design and simulation
NASA Astrophysics Data System (ADS)
Dasari, Venkat R.; Sadlier, Ronald J.; Prout, Ryan; Williams, Brian P.; Humble, Travis S.
2016-05-01
Software-defined networking offers a device-agnostic programmable framework to encode new network functions. Externally centralized control plane intelligence allows programmers to write network applications and to build functional network designs. OpenFlow is a key protocol widely adopted to build programmable networks because of its programmability, flexibility and ability to interconnect heterogeneous network devices. We simulate the functional topology of a multi-node quantum network that uses programmable network principles to manage quantum metadata for protocols such as teleportation, superdense coding, and quantum key distribution. We first show how the OpenFlow protocol can manage the quantum metadata needed to control the quantum channel. We then use numerical simulation to demonstrate robust programmability of a quantum switch via the OpenFlow network controller while executing an application of superdense coding. We describe the software framework implemented to carry out these simulations and we discuss near-term efforts to realize these applications.
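Independently of the OpenFlow control plane, the superdense-coding application itself can be simulated with plain state vectors, as in the sketch below; this is a generic textbook simulation written for illustration, not the authors' framework or its API.

```python
# Stand-alone superdense coding simulation with numpy state vectors:
# Alice encodes two classical bits on her half of a shared Bell pair,
# Bob decodes them with a Bell-basis measurement.
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

def superdense(bits):
    b0, b1 = bits
    # shared Bell pair |Phi+> = (|00> + |11>)/sqrt(2)
    state = CNOT @ np.kron(H, I) @ np.array([1, 0, 0, 0], dtype=complex)
    # Alice encodes: Z for the first bit, X for the second, on her qubit only
    alice_op = (Z if b0 else I) @ (X if b1 else I)
    state = np.kron(alice_op, I) @ state
    # Bob decodes with CNOT then H on the first qubit, then measures
    state = np.kron(H, I) @ CNOT @ state
    probs = np.abs(state) ** 2
    outcome = int(np.argmax(probs))           # deterministic for this protocol
    return (outcome >> 1) & 1, outcome & 1

for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert superdense(bits) == bits
print("superdense coding decodes all four bit pairs correctly")
```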
Distributed intelligent monitoring and reporting facilities
NASA Astrophysics Data System (ADS)
Pavlou, George; Mykoniatis, George; Sanchez-P, Jorge-A.
1996-06-01
Distributed intelligent monitoring and reporting facilities are of paramount importance in both service and network management, as they provide the capability to monitor quality of service and utilization parameters and to report degradation so that corrective action can be taken. By intelligent, we refer to the capability of performing the monitoring tasks in a way that has the smallest possible impact on the managed network, facilitates the observation and summarization of information according to a number of criteria and, in its most advanced form, permits these criteria to be specified dynamically to suit the particular policy at hand. In addition, intelligent monitoring facilities should minimize the design and implementation effort involved in such activities. The ISO/ITU Metric, Summarization and Performance management functions provide models that only partially satisfy the above requirements. This paper describes our extensions to the proposed models to support further capabilities, with the intention of eventually leading to fully dynamically defined monitoring policies. The concept of distributing intelligence is also discussed, including the consideration of security issues and the applicability of the model in ODP-based distributed processing environments.
A proposal of optimal sampling design using a modularity strategy
NASA Astrophysics Data System (ADS)
Simone, A.; Giustolisi, O.; Laucelli, D. B.
2016-08-01
Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is named sampling design, and it has traditionally been addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and leakage management purposes, has been addressed considering optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index, as a metric for WDN segmentation, this paper proposes a new way to perform the sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and on weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
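For readers unfamiliar with the underlying metric, the sketch below computes Newman's modularity for a candidate two-district split of a toy pipe graph using networkx; the paper's WDN-oriented and sampling-oriented variants add pipe weights and task-specific terms that this plain version omits.

```python
# Small sketch of the modularity metric behind the segmentation idea: score a
# candidate partition of a toy pipe network into two districts.
import networkx as nx
from networkx.algorithms.community import modularity

G = nx.Graph()
G.add_edges_from([
    ("n1", "n2"), ("n2", "n3"), ("n3", "n1"),      # district A
    ("n4", "n5"), ("n5", "n6"), ("n6", "n4"),      # district B
    ("n3", "n4"),                                  # conceptual cut between districts
])
partition = [{"n1", "n2", "n3"}, {"n4", "n5", "n6"}]
print("modularity of the two-district split:", round(modularity(G, partition), 3))
```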
Reduction of water losses by rehabilitation of water distribution network.
Güngör, Mahmud; Yarar, Ufuk; Firat, Mahmut
2017-09-11
Physical or real losses are the most important component of the water losses occurring in a water distribution network (WDN). The objective of this study is to examine the effects of piping material management and network rehabilitation on physical water losses and water loss management in a WDN. For this aim, the Denizli WDN, consisting of very old pipes that have exhausted their economic life, is selected as the study area. The age of the current network results in decreased pressure strength, increased failure intensity, and inefficient use of water resources, thus leading to the application of a rehabilitation program. In Denizli, network renewal works have been carried out since 2009 under the rehabilitation program. It was determined that the failure rate in regions where network renewal construction has been completed decreased to zero. Renewal of the piping material enables the minimization of leakage losses as well as the failure rate. On the other hand, the system rehabilitation has the potential to amortize itself in a very short amount of time if the initial investment cost of network renewal is considered along with the operating costs of the old and new systems, as well as water loss costs. As a result, it can be stated that renewal of piping material in water distribution systems and enhancement of the physical properties of the system provide significant contributions, such as increased water and energy efficiency and more effective use of resources.
Rehan, R; Knight, M A; Unger, A J A; Haas, C T
2013-12-15
This paper develops causal loop diagrams and a system dynamics model for financially sustainable management of urban water distribution networks. The developed causal loop diagrams are a novel contribution in that they illustrate the unique characteristics and feedback loops for financially self-sustaining water distribution networks. The system dynamics model is a mathematical realization of the developed interactions among system variables over time and is comprised of three sectors, namely watermains network, consumer, and finance. This is the first known development of a water distribution network system dynamics model. The watermains network sector accounts for the unique characteristics of watermain pipes such as service life, deterioration progression, pipe breaks, and water leakage. The finance sector allows for cash reserving by the utility in addition to the pay-as-you-go and borrowing strategies. The consumer sector includes controls to model water fee growth as a function of service performance and a household's financial burden due to water fees. A series of policy levers are provided that allow the impact of various financing strategies to be evaluated in terms of financial sustainability and household affordability. The model also allows for examination of the impact of different management strategies on the water fee in terms of consistency and stability over time. The paper concludes with a discussion on how the developed system dynamics water model can be used by water utilities to achieve a variety of utility short- and long-term objectives and to establish realistic and defensible water utility policies. It also discusses how the model can be used by regulatory bodies, government agencies, the financial industry, and researchers. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
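A very small stock-and-flow loop in the spirit of the described model is sketched below, with one stock per sector (network condition, cash reserve, water fee); all coefficients and the feedback rule are invented and far simpler than the published model.

```python
# Toy system dynamics loop: condition deteriorates, fee-funded rehabilitation
# restores it, and poor condition feeds back into fee growth. Illustrative only.
years = 30
condition = 0.9          # fraction of network in good condition (watermains sector)
fee = 1.0                # relative water fee (consumer sector)
fund = 0.0               # utility cash reserve (finance sector)

for year in range(years):
    revenue = fee * 100.0                  # fee -> revenue
    fund += revenue - 80.0                 # fixed operating cost drains the reserve
    rehab = min(fund, 20.0)                # spend up to a cap on rehabilitation
    fund -= rehab
    condition += 0.004 * rehab - 0.02      # renewal vs. annual deterioration
    condition = min(1.0, max(0.0, condition))
    if condition < 0.8:                    # feedback: poor service pushes fees up
        fee *= 1.03

print(f"after {years} years: condition={condition:.2f}, fee={fee:.2f}, fund={fund:.1f}")
```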
Self-Organized Link State Aware Routing for Multiple Mobile Agents in Wireless Network
NASA Astrophysics Data System (ADS)
Oda, Akihiro; Nishi, Hiroaki
Recently, the importance of data sharing structures in autonomous distributed networks has been increasing. A wireless sensor network is used for managing distributed data. This type of distributed network requires effective information exchange methods for data sharing. To reduce the traffic of broadcast messages, the amount of redundant information must be reduced. In order to reduce packet loss in mobile ad-hoc networks, QoS-sensitive routing algorithms have been frequently discussed. The topology of a wireless network is likely to change frequently according to the movement of mobile nodes, radio disturbance, or fading due to continuous changes in the environment. Therefore, a packet routing algorithm should guarantee QoS by using quality indicators of the wireless network. In this paper, a novel information exchange algorithm using a hash function and a Boolean operation is proposed. This algorithm achieves efficient information exchange by reducing the overhead of broadcast messages, and it can guarantee QoS in a wireless network environment. It can be applied to a routing algorithm in a mobile ad-hoc network. In the proposed routing algorithm, a routing table is constructed by using the received signal strength indicator (RSSI), and the neighborhood information is periodically broadcast depending on this table. The proposed hash-based management of routing entries using an extended MAC address can eliminate the overhead of message flooding. An analysis of hash value collisions contributes to determining the minimally required length of the hash values. Based on the verification of a mathematical theory, an optimum hash function for determining the length of hash values can be given. Simulations are carried out to evaluate the effectiveness of the proposed algorithm and to validate the theory in a general wireless network routing algorithm.
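The hash-length argument can be made concrete with a birthday-bound estimate: choose the smallest number of bits for which the collision probability among the expected number of routing entries stays below a target. The entry count and target probability below are assumptions, and the estimate is a generic approximation rather than the paper's derivation.

```python
# Sketch of the hash-length sizing via the birthday-bound approximation.
import math

def collision_probability(n_entries, bits):
    """Approximate P(at least one collision) for n random values in 2^bits slots."""
    return 1.0 - math.exp(-n_entries * (n_entries - 1) / (2.0 * 2 ** bits))

def minimal_hash_bits(n_entries, max_collision_prob=0.01):
    bits = 1
    while collision_probability(n_entries, bits) > max_collision_prob:
        bits += 1
    return bits

# e.g. 200 entries in a neighborhood table with a 1% collision budget
print(minimal_hash_bits(200), "bits required")
```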
New model for distributed multimedia databases and its application to networking of museums
NASA Astrophysics Data System (ADS)
Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki
1998-02-01
This paper proposes a new distributed multimedia database system in which databases storing MPEG-2 videos and/or super-high-definition images are connected together through B-ISDNs, and also refers to an example of the networking of museums on the basis of the proposed database system. The proposed database system introduces the new concept of the 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of image databases as one logical database. A user terminal issues a content retrieval request to the retrieval manager located nearest to the user terminal on the network. The retrieved contents are then sent directly through the B-ISDNs to the user terminal from the server which stores the designated contents. In this case, the designated logical database dynamically generates the best combination of retrieval parameters, such as the data transfer path, on the basis of the system environment. The generated retrieval parameters are then used to select the most suitable data transfer path on the network. Therefore, the best combination of these parameters fits the distributed multimedia database system.
Experiments on neural network architectures for fuzzy logic
NASA Technical Reports Server (NTRS)
Keller, James M.
1991-01-01
The use of fuzzy logic to model and manage uncertainty in a rule-based system places high computational demands on an inference engine. In an earlier paper, the authors introduced a trainable neural network structure for fuzzy logic. These networks can learn and extrapolate complex relationships between possibility distributions for the antecedents and consequents in the rules. Here, the power of these networks is further explored. The insensitivity of the output to noisy input distributions (which are likely if the clauses are generated from real data) is demonstrated as well as the ability of the networks to internalize multiple conjunctive clause and disjunctive clause rules. Since different rules with the same variables can be encoded in a single network, this approach to fuzzy logic inference provides a natural mechanism for rule conflict resolution.
2015-05-22
Applications cover a broad spectrum, including sensor networks for managing power levels of wireless networks, air and ground transportation systems for air traffic control and payload transport, cooperative control of unmanned air vehicles, autonomous underwater vehicles, and distributed sensor networks. Keywords: network systems, large-scale systems, adaptive control, discontinuous systems.
Contemporary data communications and local networking principles
NASA Astrophysics Data System (ADS)
Chartrand, G. A.
1982-08-01
The most important issue in data communications today is networking, which can be roughly divided into two categories: local networking and distributed processing. The most sought-after aspect of local networking is office automation. Office automation is really the grand unification of all local communications, not a new type of business office as the name might imply. This unification is the ability to have voice, data, and video carried by the same medium and managed by the same network resources. There are many different ways this unification can be accomplished, and many manufacturers are designing systems to accomplish the task. Distributed processing attempts to share resources between computer systems and peripheral subsystems from the same or different manufacturers. Several companies are trying to solve both networking problems with the same network architecture.
DOT National Transportation Integrated Search
2014-04-01
Trip origin-destination (O-D) demand matrices are critical components in transportation network modeling, and provide essential information on trip distributions and corresponding spatiotemporal traffic patterns in traffic zones in vehicular netw...
Geissbuhler, Antoine; Spahni, Stéphane; Assimacopoulos, André; Raetzo, Marc-André; Gobet, Gérard
2004-01-01
To design a community healthcare information network for all 450,000 citizens in the State of Geneva, Switzerland, connecting public and private healthcare professionals. Requirements include the decentralized storage of information at the source of its production, the creation of a virtual patient record at the time of the consultation, control by the patient of the access rights to the information, and interoperability with other similar networks at the national and European level. A participative approach and real-world pilot projects are used to design, test and validate key components of the network, including its technical architecture and the strategy for the management of access rights by the patients. A distributed architecture using peer-to-peer communication of information mediators can implement the various requirements while limiting the amount of centralized information to an absolute minimum. Access control can be managed by the patient with the help of a medical information mediator, the physician of trust.
A Landscape Approach to Invasive Species Management.
Lurgi, Miguel; Wells, Konstans; Kennedy, Malcolm; Campbell, Susan; Fordham, Damien A
2016-01-01
Biological invasions are not only a major threat to biodiversity, they also have major impacts on local economies and agricultural production systems. Once established, the connection of local populations into metapopulation networks facilitates dispersal at landscape scales, generating spatial dynamics that can impact the outcome of pest-management actions. Much planning goes into landscape-scale invasive species management. However, effective management requires knowledge on the interplay between metapopulation network topology and management actions. We address this knowledge gap using simulation models to explore the effectiveness of two common management strategies, applied across different extents and according to different rules for selecting target localities in metapopulations with different network topologies. These management actions are: (i) general population reduction, and (ii) reduction of an obligate resource. The reduction of an obligate resource was generally more efficient than population reduction for depleting populations at landscape scales. However, the way in which local populations are selected for management is important when the topology of the metapopulation is heterogeneous in terms of the distribution of connections among local populations. We tested these broad findings using real-world scenarios of European rabbits (Oryctolagus cuniculus) infesting agricultural landscapes in Western Australia. Although management strategies targeting central populations were more effective in simulated heterogeneous metapopulation structures, no difference was observed in real-world metapopulation structures that are highly homogeneous. In large metapopulations with high proximity and connectivity of neighbouring populations, different spatial management strategies yield similar outcomes. Directly considering spatial attributes in pest-management actions will be most important for metapopulation networks with heterogeneously distributed links. Our modelling framework provides a simple approach for identifying the best possible management strategy for invasive species based on metapopulation structure and control capacity. This information can be used by managers trying to devise efficient landscape-oriented management strategies for invasive species and can also generate insights for conservation purposes.
Real-time hierarchically distributed processing network interaction simulation
NASA Technical Reports Server (NTRS)
Zimmerman, W. F.; Wu, C.
1987-01-01
The Telerobot Testbed is a hierarchically distributed processing system which is linked together through a standard, commercial Ethernet. Standard Ethernet systems are primarily designed to manage non-real-time information transfer. Therefore, collisions on the net (i.e., two or more sources attempting to send data at the same time) are managed by randomly rescheduling one of the sources to retransmit at a later time interval. Although acceptable for transmitting noncritical data such as mail, this particular feature is unacceptable for real-time hierarchical command and control systems such as the Telerobot. Data transfer and scheduling simulations, such as token ring, offer solutions to collision management, but do not appropriately characterize real-time data transfer/interactions for robotic systems. Therefore, models like these do not provide a viable simulation environment for understanding real-time network loading. A real-time network loading model is being developed which allows processor-to-processor interactions to be simulated, collisions (and respective probabilities) to be logged, collision-prone areas to be identified, and network control variable adjustments to be reentered as a means of examining and reducing collision-prone regimes that occur in the process of simulating a complete task sequence.
KeyWare: an open wireless distributed computing environment
NASA Astrophysics Data System (ADS)
Shpantzer, Isaac; Schoenfeld, Larry; Grindahl, Merv; Kelman, Vladimir
1995-12-01
Deployment of distributed applications in the wireless domain lacks the equivalent tools, methodologies, architectures, and network management that exist in LAN-based applications. A wireless distributed computing environment (KeyWare™) based on intelligent agents within a multiple-client multiple-server scheme was developed to resolve this problem. KeyWare renders concurrent application services to wireline and wireless client nodes encapsulated in multiple paradigms such as message delivery, database access, e-mail, and file transfer. These services and paradigms are optimized to cope with temporal and spatial radio coverage, high latency, limited throughput and transmission costs. A unified network management paradigm for both wireless and wireline facilitates seamless extension of LAN-based management tools to include wireless nodes. A set of object-oriented tools and methodologies enables direct asynchronous invocation of agent-based services, supplemented by tool-sets matched to the supported KeyWare paradigms. The open architecture embodiment of KeyWare enables a wide selection of client node computing platforms, operating systems, transport protocols, radio modems and infrastructures while maintaining application portability.
A Very Large Area Network (VLAN) knowledge-base applied to space communication problems
NASA Technical Reports Server (NTRS)
Zander, Carol S.
1988-01-01
This paper first describes a hierarchical model for very large area networks (VLAN). Space communication problems whose solution could profit by the model are discussed and then an enhanced version of this model incorporating the knowledge needed for the missile detection-destruction problem is presented. A satellite network or VLAN is a network which includes at least one satellite. Due to the complexity, a compromise between fully centralized and fully distributed network management has been adopted. Network nodes are assigned to a physically localized group, called a partition. Partitions consist of groups of cell nodes with one cell node acting as the organizer or master, called the Group Master (GM). Coordinating the group masters is a Partition Master (PM). Knowledge is also distributed hierarchically existing in at least two nodes. Each satellite node has a back-up earth node. Knowledge must be distributed in such a way so as to minimize information loss when a node fails. Thus the model is hierarchical both physically and informationally.
NASA Astrophysics Data System (ADS)
Fuertes, David; Toledano, Carlos; González, Ramiro; Berjón, Alberto; Torres, Benjamín; Cachorro, Victoria E.; de Frutos, Ángel M.
2018-02-01
Given the importance of atmospheric aerosol, the number of instruments and measurement networks focusing on its characterization is growing. Many challenges derive from the standardization of protocols, the monitoring of instrument status to evaluate network data quality, and the manipulation and distribution of large volumes of data (raw and processed). CÆLIS is a software system which aims to simplify the management of a network by providing tools to monitor the instruments, process the data in real time and offer the scientific community a new way to work with the data. Since 2008 CÆLIS has been successfully applied to the photometer calibration facility managed by the University of Valladolid, Spain, in the framework of the Aerosol Robotic Network (AERONET). Thanks to the use of advanced tools, this facility has been able to analyze a growing number of stations and data in real time, which greatly benefits network management and data quality control. The present work describes the system architecture of CÆLIS and gives some examples of applications and data processing.
Network Information Management Subsystem
NASA Technical Reports Server (NTRS)
Chatburn, C. C.
1985-01-01
The Deep Space Network is implementing a distributed data base management system in which the data are shared among several applications and the host machines are not totally dedicated to a particular application. Since the data and resources are to be shared, the equipment must be operated carefully so that the resources are shared equitably. The current status of the project is discussed and policies, roles, and guidelines are recommended for the organizations involved in the project.
A Mobile IPv6 based Distributed Mobility Management Mechanism of Mobile Internet
NASA Astrophysics Data System (ADS)
Yan, Shi; Jiayin, Cheng; Shanzhi, Chen
A flatter architecture is one of the trends of the mobile Internet. Traditional centralized mobility management mechanisms face challenges such as scalability and UE reachability. A MIPv6-based distributed mobility management mechanism is proposed in this paper. Some important network entities and signaling procedures are defined. UE reachability is also addressed through extensions to DNS servers. Simulation results show that the proposed approach can overcome the scalability problem of the centralized scheme.
Estimating Forest Management Units from Road Network Maps in the Southeastern U.S.
NASA Astrophysics Data System (ADS)
Yang, D.; Hall, J.; Fu, C. S.; Binford, M. W.
2015-12-01
The most important factor affecting forest structure and function is the type of management undertaken in forest stands. Owners manage forests using appropriately sized areas to meet management objectives, which include economic return, sustainability, recreation, or esthetic enjoyment. Thus, the socio-environmental unit of study for forests should be the management unit. To study the ecological effects of different kinds of management activities, we must identify individual management units. Road networks, which provide access for human activities, are widely used in managing forests in the southeastern U.S. Coastal Plain and Piedmont (SEUS). Our research question in this study is: How can we identify individual forest management units in an entire region? To answer this, we hypothesize that the road network defines management units on the landscape. Road-caused canopy openings are not always captured by satellite sensors, so it is difficult to delineate ecologically relevant patches based only on remote sensing data. We used reliable, accurate and freely available road network data from OpenStreetMap (OSM), together with the National Land Cover Database (NLCD), to delineate management units in a section of the SEUS defined by the Landsat Worldwide Reference System (WRS) II footprint path 17 row 39. The spatial frequency distributions of forest management units indicate that while units < 0.5 Ha comprised 64% of the units, these small units covered only 0.98% of the total forest area. Management units ≥ 0.5 Ha ranged from 0.5 to 160,770 Ha (the Okefenokee National Wildlife Refuge). We compared the size-frequency distributions of management units with four independently derived management types: production, ecological, preservation, and passive management. Preservation and production management had the largest units, at 40.5 ± 2196.7 (s.d.) and 41.3 ± 273.5 Ha, respectively. Ecological and passive management units averaged about half as large, at 19.2 ± 91.5 and 22.4 ± 96.0 Ha, respectively. This result supports the hypothesis that the road network defines management units in the SEUS. If this way of delineating management units stands under further testing, it will provide a way of subdividing the landscape so that we can study the effects of different management on forest ecosystems.
Team knowledge representation: a network perspective.
Espinosa, J Alberto; Clark, Mark A
2014-03-01
We propose a network perspective of team knowledge that offers both conceptual and methodological advantages, expanding explanatory value through representation and measurement of component structure and content. Team knowledge has typically been conceptualized and measured with relatively simple aggregates, without fully accounting for differing knowledge configurations among team members. Teams with similar aggregate values of team knowledge may have very different team dynamics depending on how knowledge isolates, cliques, and densities are distributed across the team; which members are the most knowledgeable; who shares knowledge with whom; and how knowledge clusters are distributed. We illustrate our proposed network approach through a sample of 57 teams, including how to compute, analyze, and visually represent team knowledge. Team knowledge network structures (isolation, centrality) are associated with outcomes of, respectively, task coordination, strategy coordination, and the proportion of team knowledge cliques, all after controlling for shared team knowledge. Network analysis helps to represent, measure, and understand the relationship of team knowledge to outcomes of interest to team researchers, members, and managers. Our approach complements existing team knowledge measures. Researchers and managers can apply network concepts and measures to help understand where team knowledge is held within a team and how this relational structure may influence team coordination, cohesion, and performance.
Research on information security system of waste terminal disposal process
NASA Astrophysics Data System (ADS)
Zhou, Chao; Wang, Ziying; Guo, Jing; Guo, Yajuan; Huang, Wei
2017-05-01
Informatization has penetrated the whole process of production and operation of electric power enterprises. It not only improves the level of lean management and quality of service, but also faces severe security risks. The internal network terminal is the outermost layer and the most vulnerable node of the inner network boundary. It is characterized by wide distribution, deep deployment and large quantity. The technical skill and security awareness of users and of operation and maintenance personnel are uneven, which makes the internal network terminal the weakest link in information security. Through the implementation of management, technical and physical security measures, an internal network terminal security protection system should be established so as to fully protect internal network terminal information security.
Price Based Local Power Distribution Management System (Local Power Distribution Manager) v1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
BROWN, RICHARD E.; CZARNECKI, STEPHEN; SPEARS, MICHAEL
2016-11-28
A trans-active energy micro-grid controller is implemented in the VOLTTRON distributed control platform. The system uses the price of electricity as the mechanism for conducting transactions that are used to manage energy use and to balance supply and demand. In order to allow testing and analysis of the control system, the implementation is designed to run completely as a software simulation, while allowing the inclusion of selected hardware that physically manages power. Equipment to be integrated with the micro-grid controller must have an IP (Internet Protocol)-based network connection and a software "driver" must exist to translate data communications between the device and the controller.
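Since the abstract describes price as the balancing mechanism, the following minimal sketch shows how a clearing price could be found when loads shed demand and generators offer supply as simple linear functions of price. The bid curves, parameters, and bisection routine are assumptions for illustration; this is not VOLTTRON code or the controller's actual algorithm.

```python
# Minimal sketch of price-based balancing, assuming simple linear bid curves.
# Curve parameters and units are invented for illustration only.

def demand_at(price, loads):
    # Each load consumes less as the price rises (clamped at zero).
    return sum(max(0.0, base - sensitivity * price) for base, sensitivity in loads)

def supply_at(price, generators):
    # Each generator offers more as the price rises, up to its capacity.
    return sum(min(cap, slope * price) for cap, slope in generators)

def clearing_price(loads, generators, lo=0.0, hi=1.0, iterations=50):
    # Bisection on price until supply and demand balance.
    for _ in range(iterations):
        mid = (lo + hi) / 2.0
        if supply_at(mid, generators) < demand_at(mid, loads):
            lo = mid  # shortage: raise the price
        else:
            hi = mid  # surplus: lower the price
    return (lo + hi) / 2.0

loads = [(5.0, 10.0), (3.0, 4.0)]        # (base kW, kW shed per $/kWh)
generators = [(4.0, 20.0), (6.0, 8.0)]   # (capacity kW, kW offered per $/kWh)
price = clearing_price(loads, generators)
print(f"clearing price ~ {price:.3f} $/kWh, "
      f"supply {supply_at(price, generators):.2f} kW, "
      f"demand {demand_at(price, loads):.2f} kW")
```

In a hardware-in-the-loop setup of the kind the abstract mentions, the same loop would simply read the bid curves from device drivers instead of in-memory tuples.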
An Efficient Method for Detecting Misbehaving Zone Manager in MANET
NASA Astrophysics Data System (ADS)
Rafsanjani, Marjan Kuchaki; Pakzad, Farzaneh; Asadinia, Sanaz
In recent years, one of the wireless technologies that has grown tremendously is the mobile ad hoc network (MANET), in which mobile nodes organize themselves without the help of any predefined infrastructure. MANETs are highly vulnerable to attack due to the open medium, dynamically changing network topology, cooperative algorithms, lack of centralized monitoring and management, and lack of a clear defense line. In this paper, we report our progress in developing intrusion detection (ID) capabilities for MANETs. In our proposed scheme, the network, with its distributed hierarchical architecture, is partitioned into zones, each of which has one zone manager. The zone manager is responsible for monitoring the cluster heads in its zone, and the cluster heads are in charge of monitoring their members. However, the most important problem is how the trustworthiness of the zone manager can be recognized. We therefore propose a scheme in which "honest neighbors" of the zone manager establish the validity of their zone manager. These honest neighbors prevent false accusations and also give the manager a chance to recover if it misbehaves; however, if the manager repeats its misbehavior, it loses its management role. Our scheme therefore improves intrusion detection and also provides a more reliable network.
System approach to distributed sensor management
NASA Astrophysics Data System (ADS)
Mayott, Gregory; Miller, Gordon; Harrell, John; Hepp, Jared; Self, Mid
2010-04-01
Since 2003, the US Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has been developing a distributed Sensor Management System (SMS) that utilizes a framework demonstrating application-layer, net-centric sensor management. The core principles of the design support distributed and dynamic discovery of sensing devices and processes through a multi-layered implementation. This results in a sensor management layer that acts as a system with defined interfaces for which the characteristics, parameters, and behaviors can be described. Within the framework, the definition of a protocol is required to establish the rules for how distributed sensors should operate. The protocol defines the behaviors, capabilities, and message structures needed to operate within the functional design boundaries. The protocol definition addresses the requirements for a device (sensor or process) to dynamically join or leave a sensor network, dynamically describe device control and data capabilities, and allow dynamic addressing of publish and subscribe functionality. The message structure is a multi-tiered definition that identifies standard, extended, and payload representations, specifically designed to accommodate the need for standard representations of common functions while supporting the feature-based functions that are typically vendor specific. The dynamic qualities of the protocol give a user GUI application the flexibility to map widget-level controls to each device based on capabilities reported in real time. The SMS approach is designed to accommodate scalability and flexibility within a defined architecture. The distributed sensor management framework and its application to a tactical sensor network are described in this paper.
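To make the tiered message idea concrete, the sketch below encodes a standard tier (fields common to every device), an extended tier (vendor-specific features), and a payload tier, plus a capability report that a GUI could consume. The field names and JSON encoding are assumptions for illustration, not the SMS protocol itself.

```python
# Sketch of a multi-tiered sensor-management message (standard / extended /
# payload tiers) and a capability report used to map GUI controls dynamically.
# Field names and the JSON encoding are assumptions for this example.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class SensorMessage:
    # Standard tier: common to every device on the sensor network.
    device_id: str
    msg_type: str                      # e.g. "join", "leave", "capabilities", "data"
    # Extended tier: feature-based, typically vendor-specific fields.
    extended: dict = field(default_factory=dict)
    # Payload tier: opaque sensor data or control arguments.
    payload: dict = field(default_factory=dict)

    def encode(self) -> bytes:
        return json.dumps(asdict(self)).encode()

    @staticmethod
    def decode(raw: bytes) -> "SensorMessage":
        return SensorMessage(**json.loads(raw))

# A device joining the network advertises its control and data capabilities,
# which a GUI could use to build widget-level controls at run time.
join = SensorMessage(
    device_id="eo-cam-07",
    msg_type="capabilities",
    extended={"vendor": "example", "zoom_levels": [1, 2, 4]},
    payload={"controls": ["pan", "tilt", "zoom"], "streams": ["video/h264"]},
)

raw = join.encode()
print(SensorMessage.decode(raw).payload["controls"])  # ['pan', 'tilt', 'zoom']
```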
Design of Distributed Engine Control Systems for Stability Under Communication Packet Dropouts
2009-08-01
In Distributed Engine Control, the functions of the Full Authority Digital Engine Control (FADEC) are distributed at the component level. Each sensor/actuator is to be replaced by a smart module with diagnostics and health management functionality. A dual-channel digital serial communication network is used to connect these smart modules with the FADEC.
The Role of Transnational Networking for Higher Education Academics
ERIC Educational Resources Information Center
Wakefield, Kelly; Dismore, Harriet
2015-01-01
Amidst rapid socio-economic change, higher education (HE) academics across the world face major challenges to its organisation, finance and management. This paper discusses the role of transnational networking in higher education. Data from 40 interviews with geographically distributed academics engaged in learning and teaching transnational…
Authenticated IGMP for Controlling Access to Multicast Distribution Tree
NASA Astrophysics Data System (ADS)
Park, Chang-Seop; Kang, Hyun-Sun
A receiver access control scheme is proposed to protect the multicast distribution tree from DoS attack induced by unauthorized use of IGMP, by extending the security-related functionality of IGMP. Based on a specific network and business model adopted for commercial deployment of IP multicast applications, a key management scheme is also presented for bootstrapping the proposed access control as well as accounting and billing for CP (Content Provider), NSP (Network Service Provider), and group members.
Bridging the Capability Gap for Battle Command On-the-Move
2005-06-01
The design combines frequency division multiplexing (FDM) and synchronous time division multiplexing (TDM) network components; this advantage will be further realized once mobile satellite modems synchronize the initial network timing. Once an NM receives this beacon, it reports the measured receive signal strength back to the NC. Due to M4's synchronous network connections, link engineering to manage the required distributed network timing is often...
Considerations in the design of a communication network for an autonomously managed power system
NASA Technical Reports Server (NTRS)
Mckee, J. W.; Whitehead, Norma; Lollar, Louis
1989-01-01
The considerations involved in designing a communication network for an autonomously managed power system intended for use in space vehicles are examined. An overview of the design and implementation of a communication network implemented in a breadboard power system is presented. An assumption that the monitoring and control devices are distributed but physically close leads to the selection of a multidrop cable communication system. The assumption of a high-quality communication cable in which few messages are lost resulted in a simple recovery procedure consisting of a time out and retransmit process.
The UK DNA banking network: a "fair access" biobank.
Yuille, Martin; Dixon, Katherine; Platt, Andrew; Pullum, Simon; Lewis, David; Hall, Alistair; Ollier, William
2010-08-01
The UK DNA Banking Network (UDBN) is a secondary biobank: it aggregates and manages resources (samples and data) originated by others. The network comprises, on the one hand, investigator groups led by clinicians each with a distinct disease specialism and, on the other hand, a research infrastructure to manage samples and data. The infrastructure addresses the problem of providing secure quality-assured accrual, storage, replenishment and distribution capacities for samples and of facilitating access to DNA aliquots and data for new peer-reviewed studies in genetic epidemiology. 'Fair access' principles and practices have been pragmatically developed that, unlike open access policies in this area, are not cumbersome but, rather, are fit for the purpose of expediting new study designs and their implementation. UDBN has so far distributed >60,000 samples for major genotyping studies yielding >10 billion genotypes. It provides a working model that can inform progress in biobanking nationally, across Europe and internationally.
Disaster management and mitigation: the telecommunications infrastructure.
Patricelli, Frédéric; Beakley, James E; Carnevale, Angelo; Tarabochia, Marcello; von Lubitz, Dag K J E
2009-03-01
Among the most typical consequences of disasters is the near or complete collapse of terrestrial telecommunications infrastructures (especially the distribution network--the 'last mile') and their concomitant unavailability to rescuers and the higher echelons of mitigation teams. Even when such damage does not take place, the communications overload/congestion resulting from significantly elevated traffic generated by affected residents can be highly disruptive. The paper proposes innovative remedies to the telecommunications difficulties in disaster-struck regions. The offered solutions are network-centric operations-capable, and can be employed in the management of disasters of any magnitude (local to national or international). Their implementation provides ground rescue teams (such as law enforcement, firemen, healthcare personnel, civilian authorities) with tactical connectivity among themselves and, through the Next Generation Network backbone, ensures the essential bidirectional free flow of information and distribution of Actionable Knowledge among ground units, command/control centres, and civilian and military agencies participating in the rescue effort.
Embedded diagnostic, prognostic, and health management system and method for a humanoid robot
NASA Technical Reports Server (NTRS)
Barajas, Leandro G. (Inventor); Strawser, Philip A (Inventor); Sanders, Adam M (Inventor); Reiland, Matthew J (Inventor)
2013-01-01
A robotic system includes a humanoid robot with multiple compliant joints, each moveable using one or more of the actuators, and having sensors for measuring control and feedback data. A distributed controller controls the joints and other integrated system components over multiple high-speed communication networks. Diagnostic, prognostic, and health management (DPHM) modules are embedded within the robot at the various control levels. Each DPHM module measures, controls, and records DPHM data for the respective control level/connected device in a location that is accessible over the networks or via an external device. A method of controlling the robot includes embedding a plurality of the DPHM modules within multiple control levels of the distributed controller, using the DPHM modules to measure DPHM data within each of the control levels, and recording the DPHM data in a location that is accessible over at least one of the high-speed communication networks.
A Distributed Prognostic Health Management Architecture
NASA Technical Reports Server (NTRS)
Bhaskar, Saha; Saha, Sankalita; Goebel, Kai
2009-01-01
This paper introduces a generic distributed prognostic health management (PHM) architecture with specific application to the electrical power systems domain. Current state-of-the-art PHM systems are mostly centralized in nature, where all the processing is reliant on a single processor. This can lead to loss of functionality in case of a crash of the central processor or monitor. Furthermore, with increases in the volume of sensor data as well as the complexity of algorithms, traditional centralized systems become unsuitable for successful deployment, and efficient distributed architectures are required. A distributed architecture though, is not effective unless there is an algorithmic framework to take advantage of its unique abilities. The health management paradigm envisaged here incorporates a heterogeneous set of system components monitored by a varied suite of sensors and a particle filtering (PF) framework that has the power and the flexibility to adapt to the different diagnostic and prognostic needs. Both the diagnostic and prognostic tasks are formulated as a particle filtering problem in order to explicitly represent and manage uncertainties; however, typically the complexity of the prognostic routine is higher than the computational power of one computational element (CE). Individual CEs run diagnostic routines until the system variable being monitored crosses beyond a nominal threshold, upon which it coordinates with other networked CEs to run the prognostic routine in a distributed fashion. Implementation results from a network of distributed embedded devices monitoring a prototypical aircraft electrical power system are presented, where the CEs are Sun Microsystems Small Programmable Object Technology (SPOT) devices.
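The coordination pattern described, in which each computational element runs an inexpensive diagnostic until a monitored variable crosses its nominal threshold and only then recruits networked peers for the heavier prognostic routine, can be sketched as follows. The particle filter is replaced by a placeholder, and the threshold, peer count, and workload split are assumptions rather than the architecture's actual implementation.

```python
# Sketch of the threshold-triggered coordination pattern: each computational
# element (CE) runs a cheap local diagnostic, and only when its monitored
# variable crosses a nominal threshold does it split the (more expensive)
# prognostic workload across networked peers. The particle filter is replaced
# by a placeholder; thresholds and peer counts are assumptions.
from concurrent.futures import ThreadPoolExecutor

NOMINAL_THRESHOLD = 0.7  # assumed health-index threshold

def diagnostic(reading: float) -> bool:
    """Return True when the monitored variable exceeds its nominal threshold."""
    return reading > NOMINAL_THRESHOLD

def prognostic_chunk(chunk: range) -> float:
    # Placeholder for one CE's share of the particle-filter prediction step.
    return sum(p * 1e-6 for p in chunk)

def run_ce(readings, peers=4, particles=100_000):
    for reading in readings:
        if not diagnostic(reading):
            continue  # stay in low-cost diagnostic mode
        # Fault suspected: distribute the prognostic routine across peer CEs.
        chunk = particles // peers
        work = [range(i * chunk, (i + 1) * chunk) for i in range(peers)]
        with ThreadPoolExecutor(max_workers=peers) as pool:
            partials = list(pool.map(prognostic_chunk, work))
        return sum(partials)  # aggregated prognostic estimate (stub)
    return None

print(run_ce([0.2, 0.4, 0.9]))
```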
NASA Astrophysics Data System (ADS)
Shamugam, Veeramani; Murray, I.; Leong, J. A.; Sidhu, Amandeep S.
2016-03-01
Cloud computing provides services on demand instantly, such as access to network infrastructure consisting of computing hardware, operating systems, network storage, databases and applications. Network usage and demands are growing at a very fast rate, and to meet current requirements there is a need for automatic infrastructure scaling. Traditional networks are difficult to automate because of the distributed nature of their decision-making processes for switching or routing, which are collocated on the same device. Managing complex environments using traditional networks is time-consuming and expensive, especially in the case of generating virtual machines, migration and network configuration. To mitigate these challenges, network operations require efficient, flexible, agile and scalable software defined networks (SDN). This paper discusses various issues in SDN and suggests how to mitigate network management related issues. A private cloud prototype test bed was set up to implement SDN on the OpenStack platform to test and evaluate the network performance provided by various configurations.
Modeling of the Space Station Freedom data management system
NASA Technical Reports Server (NTRS)
Johnson, Marjory J.
1990-01-01
The Data Management System (DMS) is the information and communications system onboard Space Station Freedom (SSF). Extensive modeling of the DMS is being conducted throughout NASA to aid in the design and development of this vital system. Activities discussed at NASA Ames Research Center to model the DMS network infrastructure are discussed with focus on the modeling of the Fiber Distributed Data Interface (FDDI) token-ring protocol and experimental testbedding of networking aspects of the DMS.
Gholami, Mohammad; Brennan, Robert W
2016-01-06
In this paper, we investigate alternative distributed clustering techniques for wireless sensor node tracking in an industrial environment. The research builds on extant work on wireless sensor node clustering by reporting on: (1) the development of a novel distributed management approach for tracking mobile nodes in an industrial wireless sensor network; and (2) an objective comparison of alternative cluster management approaches for wireless sensor networks. To perform this comparison, we focus on two main clustering approaches proposed in the literature: pre-defined clusters and ad hoc clusters. These approaches are compared in the context of their reconfigurability: more specifically, we investigate the trade-off between the cost and the effectiveness of competing strategies aimed at adapting to changes in the sensing environment. To support this work, we introduce three new metrics: a cost/efficiency measure, a performance measure, and a resource consumption measure. The results of our experiments show that ad hoc clusters adapt more readily to changes in the sensing environment, but this higher level of adaptability is at the cost of overall efficiency.
Gholami, Mohammad; Brennan, Robert W.
2016-01-01
In this paper, we investigate alternative distributed clustering techniques for wireless sensor node tracking in an industrial environment. The research builds on extant work on wireless sensor node clustering by reporting on: (1) the development of a novel distributed management approach for tracking mobile nodes in an industrial wireless sensor network; and (2) an objective comparison of alternative cluster management approaches for wireless sensor networks. To perform this comparison, we focus on two main clustering approaches proposed in the literature: pre-defined clusters and ad hoc clusters. These approaches are compared in the context of their reconfigurability: more specifically, we investigate the trade-off between the cost and the effectiveness of competing strategies aimed at adapting to changes in the sensing environment. To support this work, we introduce three new metrics: a cost/efficiency measure, a performance measure, and a resource consumption measure. The results of our experiments show that ad hoc clusters adapt more readily to changes in the sensing environment, but this higher level of adaptability is at the cost of overall efficiency. PMID:26751447
Federated software defined network operations for LHC experiments
NASA Astrophysics Data System (ADS)
Kim, Dongkyun; Byeon, Okhwan; Cho, Kihyeon
2013-09-01
The most well-known high-energy physics collaboration, the Large Hadron Collider (LHC), which is based on e-Science, has been facing several challenges presented by its extraordinary instruments in terms of the generation, distribution, and analysis of large amounts of scientific data. Currently, data distribution issues are being addressed by adopting an advanced Internet technology called software defined networking (SDN). Stable SDN operations and management are required to keep the federated LHC data distribution networks reliable. Therefore, in this paper, an SDN operation architecture based on the distributed virtual network operations center (DvNOC) is proposed to enable LHC researchers to assume full control of their own global end-to-end data dissemination. This may achieve enhanced data delivery performance based on data traffic offloading with delay variation. The evaluation results indicate that the overall end-to-end data delivery performance can be improved over multi-domain SDN environments based on the proposed federated SDN/DvNOC operation framework.
Information Systems Should Be Both Useful and Used: The Benetton Experience.
ERIC Educational Resources Information Center
Zuccaro, Bruno
1990-01-01
Describes the information systems strategy and network development of the Benetton clothing business. Applications in the areas of manufacturing, scheduling, centralized distribution, and centralized cash flow are discussed; the GEIS managed network service is described; and internal and external electronic data interchange (EDI) is explained.…
TMN: Introduction and interpretation
NASA Astrophysics Data System (ADS)
Pras, Aiko
An overview of Telecommunications Management Network (TMN) status is presented. Its relation with Open System Interconnection (OSI) systems management is given and the commonalities and distinctions are identified. Those aspects that distinguish TMN from OSI management are introduced; TMN's functional and physical architectures and TMN's logical layered architecture are discussed. An analysis of the concepts used by these architectures (reference point, interface, function block, and building block) is given. The use of these concepts to express geographical distribution and functional layering is investigated. This aspect is interesting to understand how OSI management protocols can be used in a TMN environment. A statement regarding applicability of TMN as a model that helps the designers of (management) networks is given.
IRM in the Federal Government: Opinions and Reflections.
ERIC Educational Resources Information Center
Haney, Glenn P.
1989-01-01
Evaluates various aspects of federal information resources management and reviews technological changes within the Department of Agriculture to illustrate current issues and future trends in information resources management. Topics discussed include telecommunications and networking; distributed processing and field office automation; the role of…
Current Issues for Higher Education Information Resources Management.
ERIC Educational Resources Information Center
CAUSE/EFFECT, 1996
1996-01-01
Issues identified as important to the future of information resources management and use in higher education include information policy in a networked environment, distributed computing, integrating information resources and college planning, benchmarking information technology, integrated digital libraries, technology integration in teaching,…
Workflow management in large distributed systems
NASA Astrophysics Data System (ADS)
Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.
2011-12-01
The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near real time. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resources to running jobs and automated management of remote services among a large set of grid facilities.
Distributed generation of shared RSA keys in mobile ad hoc networks
NASA Astrophysics Data System (ADS)
Liu, Yi-Liang; Huang, Qin; Shen, Ying
2005-12-01
The mobile ad hoc network is a totally new concept in which mobile nodes are able to communicate over wireless links in an independent manner, without fixed physical infrastructure or centralized administrative infrastructure. However, the nature of ad hoc networks makes them very vulnerable to security threats. The generation and distribution of shared keys for a CA (Certification Authority) is challenging for security solutions based on a distributed PKI (Public-Key Infrastructure)/CA. The solutions that have been proposed in the literature and some related issues are discussed in this paper. A scheme for the distributed generation of shared threshold RSA keys for the CA is proposed in the present paper. During the process of creating an RSA private key share, every CA node holds only its own private share. Distributed arithmetic is used to create the CA's private shares locally, so that the requirement for a centralized management institution is eliminated. By fully considering the mobile ad hoc network's characteristic of self-organization, the scheme avoids the hidden security risk of any single node holding the entire private key of the CA, and the security and robustness of the system are thereby enhanced.
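The threshold idea at the heart of such schemes is that the CA's private key exists only as shares, any t of which suffice to act for the CA, while fewer reveal nothing. The sketch below illustrates this with simple Shamir-style sharing over a toy prime field; note that it uses a single dealer purely for brevity, whereas the paper's contribution is precisely to generate the shares in a distributed fashion without one, and none of the parameters come from the paper.

```python
# Illustration of the threshold idea only: splitting a secret (e.g. a CA
# private exponent) into shares so that no single node ever holds the full
# key, and any t shares can reconstruct it. This simple sketch uses a trusted
# dealer, unlike the paper's dealer-free distributed generation; the prime
# and parameters are toy values, far too small for real RSA use.
import random

PRIME = 2**127 - 1  # toy field modulus

def split_secret(secret, threshold, num_shares):
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, num_shares + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 over the prime field.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

secret = 123456789123456789
shares = split_secret(secret, threshold=3, num_shares=5)
print(reconstruct(shares[:3]) == secret)  # True: any 3 of 5 shares suffice
```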
A Distributed and Energy-Efficient Algorithm for Event K-Coverage in Underwater Sensor Networks
Jiang, Peng; Xu, Yiming; Liu, Jun
2017-01-01
For event dynamic K-coverage algorithms, each management node selects its assistant node using a greedy algorithm, without considering the residual energy or situations in which a node is selected by several events. This approach affects network energy consumption and balance. Therefore, this study proposes a distributed and energy-efficient event K-coverage algorithm (DEEKA). After the network achieves 1-coverage, the nodes that detect the same event compete to become the event management node on the basis of the number of candidate nodes, the average residual energy, and the distance to the event. Second, each management node estimates the probability of its neighbor nodes being selected by the event it manages, using the distance level, the residual energy level, and the number of dynamic coverage events of these nodes. Third, each management node establishes an optimization model that takes the expected energy consumption and the residual energy variance of its neighbor nodes, together with the detection performance of the events it manages, as its targets. Finally, each management node uses a constrained non-dominated sorting genetic algorithm (NSGA-II) to obtain the Pareto set of the model and selects the best strategy via the technique for order preference by similarity to an ideal solution (TOPSIS). The algorithm first considers the effect of harsh underwater environments on information collection and transmission. It also considers the residual energy of a node and situations in which the node is selected by several other events. Simulation results show that, unlike the on-demand variable sensing K-coverage algorithm, DEEKA balances and reduces network energy consumption, thereby prolonging the network’s best service quality and lifetime. PMID:28106837
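The final selection step, ranking the Pareto-optimal strategies with TOPSIS, can be illustrated with the short sketch below. The candidate objective values and equal weights are invented; in the paper the objectives would be the expected energy consumption, the residual-energy variance, and the detection performance obtained from NSGA-II.

```python
# A minimal TOPSIS sketch for picking one strategy from a Pareto set, as the
# management node does after NSGA-II. Candidate values and weights are
# invented for illustration.
import math

# Each row: (expected energy use, residual-energy variance, detection perf.)
candidates = [
    (2.1, 0.30, 0.92),
    (1.8, 0.45, 0.88),
    (2.6, 0.20, 0.95),
]
benefit = [False, False, True]   # lower is better for the first two objectives
weights = [1 / 3] * 3            # assumed equal weighting

def topsis(rows, weights, benefit):
    ncols = len(weights)
    norms = [math.sqrt(sum(r[j] ** 2 for r in rows)) for j in range(ncols)]
    v = [[weights[j] * r[j] / norms[j] for j in range(ncols)] for r in rows]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)   # distance to the ideal solution
        d_neg = math.dist(row, worst)   # distance to the anti-ideal solution
        scores.append(d_neg / (d_pos + d_neg))
    return scores

scores = topsis(candidates, weights, benefit)
best = max(range(len(candidates)), key=scores.__getitem__)
print(f"best strategy: {best}, closeness scores: {scores}")
```

Higher closeness means the strategy sits nearer the ideal point and farther from the anti-ideal point in the weighted, normalized objective space.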
A network approach to decentralized coordination of energy production-consumption grids.
Omodei, Elisa; Arenas, Alex
2018-01-01
Energy grids are facing a relatively new paradigm consisting of the formation of local distributed energy sources and loads that can operate in parallel, independently of the main power grid (usually called microgrids). One of the main challenges in the management of microgrid-like networks is that of self-adapting to production and demand in a decentralized, coordinated way. Here, we propose a stylized model that allows us to analytically predict the coordination of the elements in the network, depending on the network topology. Surprisingly, almost global coordination is attained when users interact locally, with a small neighborhood, instead of the obvious but more costly all-to-all coordination. We compute analytically the optimal number of coordinated users in random homogeneous networks. The proposed methodology opens a new way of approaching the analysis of energy demand-side management in networked systems.
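The flavor of that result, namely that interacting with only a small neighborhood already shrinks production-consumption imbalances across the grid, can be seen in the toy simulation below. The random neighborhood assignment, the relaxation update, and all parameters are assumptions for illustration and are not the stylized model of the paper.

```python
# Toy sketch of decentralized coordination: each user repeatedly relaxes its
# local production-consumption imbalance toward the average of a small random
# neighborhood. Graph model, update rule, and parameters are assumptions.
import random

def random_neighbors(n, k, seed=1):
    # Crude neighborhood assignment: each node picks k random neighbors.
    rng = random.Random(seed)
    return {i: rng.sample([j for j in range(n) if j != i], k) for i in range(n)}

def coordinate(imbalances, neighbors, rounds=50, step=0.5):
    x = list(imbalances)
    for _ in range(rounds):
        new_x = x[:]
        for i, nbrs in neighbors.items():
            local_avg = sum(x[j] for j in nbrs) / len(nbrs)
            # Relax toward the neighborhood average, shrinking the spread.
            new_x[i] = x[i] + step * (local_avg - x[i])
        x = new_x
    return x

n = 100
rng = random.Random(0)
imbalances = [rng.uniform(-1.0, 1.0) for _ in range(n)]  # kW surplus (+) / deficit (-)
neighbors = random_neighbors(n, k=4)

final = coordinate(imbalances, neighbors)
print(f"imbalance spread: {max(imbalances) - min(imbalances):.3f} -> "
      f"{max(final) - min(final):.3f}")
```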
NASA Astrophysics Data System (ADS)
Endrawati, Titin; Siregar, M. Tirtana
2018-03-01
PT Mentari Trans Nusantara is a company engaged in the distribution of goods from the product manufacturer to the customer's distributor branches, so product distribution must be controlled directly from the PT Mentari Trans Nusantara center to speed up the delivery process. Problems often occur in expedition companies in charge of sending goods, even those with quite extensive networks, when the company has limited control over logistics management. Meanwhile, the logistics distribution management control policy affects the company's performance in distributing products to customer distributor branches and in managing product inventory in the distribution center. PT Mentari Trans Nusantara is an expedition company engaged in goods delivery, including in Jakarta. Logistics management performance is very important because it relates to the supply of goods from central activities to the branches based on customer demand. Supply chain management performance obviously depends on the location of both the distribution center and the branches, the smoothness of transportation in distribution, and the availability of products in the distribution center to meet demand and avoid losing sales. This study concludes that the company could be more efficient and effective in minimizing the risk of losses by improving its logistics management.
Gamma motes for detection of radioactive materials in shipping containers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harold McHugh; William Quam; Stephan Weeks
Shipping containers can be effectively monitored for radiological materials using gamma (and neutron) motes in distributed mesh networks. The mote platform is ideal for collecting data for integration into the operational management systems required for efficiently and transparently monitoring international trade. Significant reductions in size and power requirements have been achieved for room-temperature cadmium zinc telluride (CZT) gamma detectors. Miniaturization of radio modules and microcontroller units is paving the way for low-power, deeply embedded, wireless sensor distributed mesh networks.
Proton beam therapy control system
Baumann, Michael A [Riverside, CA; Beloussov, Alexandre V [Bernardino, CA; Bakir, Julide [Alta Loma, CA; Armon, Deganit [Redlands, CA; Olsen, Howard B [Colton, CA; Salem, Dana [Riverside, CA
2008-07-08
A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.
Proton beam therapy control system
Baumann, Michael A.; Beloussov, Alexandre V.; Bakir, Julide; Armon, Deganit; Olsen, Howard B.; Salem, Dana
2010-09-21
A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.
Proton beam therapy control system
Baumann, Michael A; Beloussov, Alexandre V; Bakir, Julide; Armon, Deganit; Olsen, Howard B; Salem, Dana
2013-06-25
A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.
Proton beam therapy control system
Baumann, Michael A; Beloussov, Alexandre V; Bakir, Julide; Armon, Deganit; Olsen, Howard B; Salem, Dana
2013-12-03
A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.
NASA Astrophysics Data System (ADS)
Niu, Xiaoliang; Yuan, Fen; Huang, Shanguo; Guo, Bingli; Gu, Wanyi
2011-12-01
A dynamic clustering scheme based on the coordination of management and control is proposed to reduce the network congestion rate and improve the blocking performance of hierarchical routing in multi-layer, multi-region intelligent optical networks. Its implementation relies on mobile agent (MA) technology, which has the advantages of efficiency, flexibility, functionality and scalability. The paper's major contribution is to adjust domains dynamically when the performance of the working network is not ideal. The incorporation of centralized NMS and distributed MA control technology migrates the computing process to control-plane nodes, which relieves the burden on the NMS and improves processing efficiency. Experiments are conducted on the Multi-layer and Multi-region Simulation Platform for Optical Network (MSPON) to assess the performance of the scheme.
A Distributed Cache Update Deployment Strategy in CDN
NASA Astrophysics Data System (ADS)
E, Xinhua; Zhu, Binjie
2018-04-01
The CDN management system distributes content objects to the edge of the Internet so that users can access them nearby. The cache strategy is an important problem in network content distribution. A cache strategy was designed in which content diffuses effectively within the cache group, so that more content is stored in the cache and the group hit rate is improved.
Identifying PHM market and network opportunities.
Grube, Mark E; Krishnaswamy, Anand; Poziemski, John; York, Robert W
2015-11-01
Two key processes for healthcare organizations seeking to assume a financially sustainable role in population health management (PHM), after laying the groundwork for the effort, are to identify potential PHM market opportunities and determine the scope of the PHM network. Key variables organizations should consider with respect to market opportunities include the patient population, the overall insurance/employer market, and available types of insurance products. Regarding the network's scope, organizations should consider both traditional strategic criteria for a viable network and at least five additional criteria: network essentiality and PHM care continuum, network adequacy, service distribution right-sizing, network growth strategy, and organizational agility.
Sahama, Tony; Liang, Jian; Iannella, Renato
2012-01-01
Most social network users hold more than one social network account and utilize them in different ways depending on the digital context. For example, friendly chat on Facebook, professional discussion on LinkedIn, and health information exchange on PatientsLikeMe. Thus many web users need to manage many disparate profiles across many distributed online sources. Maintaining these profiles is cumbersome, time consuming, inefficient, and leads to lost opportunity. In this paper we propose a framework for multiple profile management of online social networks and showcase a demonstrator utilising an open source platform. The result of the research enables a user to create and manage an integrated profile and share/synchronise their profiles with their social networks. A number of use cases were created to capture the functional requirements and describe the interactions between users and the online services. An innovative application of this project is in public health informatics. We utilize the prototype to examine how the framework can benefit patients and physicians. The framework can greatly enhance health information management for patients and more importantly offer a more comprehensive personal health overview of patients to physicians.
NASA Astrophysics Data System (ADS)
Li, Siwei; Li, Jun; Liu, Zhuochu; Wang, Min; Yue, Liang
2017-05-01
After household distributed photovoltaics are connected, conditions of high penetration generally occur; the usual response is to rapidly cut off the connection between the distributed power supply and the main grid and to use energy storage devices to store the electrical energy. From the perspective of economy and rationality, these operations are no longer adequate for the health of the power grid once distributed power supplies are connected. This paper uses the integration between devices, between devices and systems, and between systems for the household microgrid and household energy efficiency management to design a household microgrid construction program and operation strategy that incorporates household energy efficiency management, achieving efficient integration of household energy efficiency management and the household microgrid and effectively solving problems such as the high penetration of household distributed power supplies.
Irrigation system management assisted by thermal imagery and spatial statistics
USDA-ARS?s Scientific Manuscript database
Thermal imaging has the potential to assist with many aspects of irrigation management including scheduling water application, detecting leaky irrigation canals, and gauging the overall effectiveness of water distribution networks used in furrow irrigation. Many challenges exist for the use of therm...
NASA Astrophysics Data System (ADS)
Bashi-Azghadi, Seyyed Nasser; Afshar, Abbas; Afshar, Mohammad Hadi
2018-03-01
Previous studies on consequence management assume that the selected response action, including valve closure and/or hydrant opening, remains unchanged during the entire management period. This study presents a new embedded simulation-optimization methodology for deriving time-varying operational response actions in which the network topology may change from one stage to another. Dynamic programming (DP) and a genetic algorithm (GA) are used to minimize the selected objective functions. Two networks, one small and one large, are used to illustrate the performance of the proposed modelling schemes when a time-dependent consequence management strategy is to be implemented. The results show that for a small number of decision variables, even in large-scale networks, DP is superior in terms of accuracy and computer runtime. However, as the number of potential actions grows, DP loses its merit over the GA approach. This study clearly demonstrates the superiority of the proposed dynamic operation strategy over the commonly used static strategy.
Optimization of turbine positioning in water distribution networks. A Sicilian case study
NASA Astrophysics Data System (ADS)
Milici, Barbara; Messineo, Simona; Messineo, Antonio
2017-11-01
The potential energy of water in Water Distribution Networks (WDNs) is usually dissipated by Pressure Reduction Valves (PRVs), thanks to which water utilities manage the pressure level in selected nodes of the network. The present study explores the use of economic hydraulic machines, pumps as turbines (PATs), to produce energy in a small network with the aim of avoiding dissipation in favour of renewable energy production. The proposed study is applied to a WDN located in a town close to Palermo (Sicily), where users often install private tanks to collect water during periods of water scarcity. The economic benefit of PAT installation in WDNs is affected by the presence of private tanks, which deeply modify the network behaviour relative to its design conditions. The analysis is carried out by means of a mathematical model, which is able to dynamically simulate water distribution networks with private tanks and PATs. As expected, the advantage of PATs' installation in terms of renewable energy recovery is strictly conditioned by their placement in the WDN.
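As a rough companion to the abstract above, the standard hydropower relation P = η·ρ·g·Q·H gives the power a PAT can recover from a flow Q under a head drop H. The efficiency and operating point in the sketch below are illustrative assumptions, not values from the Sicilian case study.

    # Back-of-the-envelope PAT energy recovery estimate (illustrative values only).
    RHO = 1000.0      # water density, kg/m^3
    G = 9.81          # gravitational acceleration, m/s^2

    def pat_power_kw(flow_m3_s, head_m, efficiency=0.6):
        """Electrical power recovered by a pump-as-turbine: P = eta*rho*g*Q*H."""
        return RHO * G * flow_m3_s * head_m * efficiency / 1000.0

    # Example: 20 l/s through a 30 m head drop at an assumed 60% PAT efficiency.
    p_kw = pat_power_kw(flow_m3_s=0.020, head_m=30.0)
    energy_mwh_per_year = p_kw * 8760.0 / 1000.0   # if operated continuously
    print(f"power: {p_kw:.2f} kW, annual energy: {energy_mwh_per_year:.1f} MWh")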
Water Quality in Small Community Distribution Systems. A Reference Guide for Operators
The U.S. Environmental Protection Agency (EPA) has developed this reference guide to assist the operators and managers of small- and medium-sized public water systems. This compilation provides a comprehensive picture of the impact of the water distribution system network on dist...
Managing RFID sensors networks with a general purpose RFID middleware.
Abad, Ismael; Cerrada, Carlos; Cerrada, Jose A; Heradio, Rubén; Valero, Enrique
2012-01-01
RFID middleware is anticipated to be one of the main research areas in the field of RFID applications in the near future. The Data EPC Acquisition System (DEPCAS) is an original proposal designed by our group to transfer and apply fundamental ideas from Supervisory Control and Data Acquisition (SCADA) systems into the areas of RFID acquisition, processing and distribution systems. In this paper we focus on how to organize and manage generic RFID sensors (edge readers, readers, PLCs, etc.) inside the DEPCAS middleware. We denote by RFID Sensors Networks Management (RSNM) this part of DEPCAS, which is built on top of two new concepts introduced and developed in this work: MARC (Minimum Access Reader Command) and RRTL (RFID Reader Topology Language). MARC is an abstraction layer used to hide heterogeneous devices inside a homogeneous acquisition network. RRTL is a language to define RFID Reader networks and to describe the relationships between them (concentrator, peer to peer, master/submaster). PMID:22969370
A virtual water network of the Roman world
NASA Astrophysics Data System (ADS)
Dermody, B. J.; van Beek, R. P. H.; Meeks, E.; Klein Goldewijk, K.; Scheidel, W.; van der Velde, Y.; Bierkens, M. F. P.; Wassen, M. J.; Dekker, S. C.
2014-12-01
The Romans were perhaps the most impressive exponents of water resource management in preindustrial times with irrigation and virtual water trade facilitating unprecedented urbanization and socioeconomic stability for hundreds of years in a region of highly variable climate. To understand Roman water resource management in response to urbanization and climate variability, a Virtual Water Network of the Roman World was developed. Using this network we find that irrigation and virtual water trade increased Roman resilience to interannual climate variability. However, urbanization arising from virtual water trade likely pushed the Empire closer to the boundary of its water resources, led to an increase in import costs, and eroded its resilience to climate variability in the long term. In addition to improving our understanding of Roman water resource management, our cost-distance-based analysis illuminates how increases in import costs arising from climatic and population pressures are likely to be distributed in the future global virtual water network.
A virtual water network of the Roman world
NASA Astrophysics Data System (ADS)
Dermody, B. J.; van Beek, R. P. H.; Meeks, E.; Klein Goldewijk, K.; Scheidel, W.; van der Velde, Y.; Bierkens, M. F. P.; Wassen, M. J.; Dekker, S. C.
2014-06-01
The Romans were perhaps the most impressive exponents of water resource management in preindustrial times with irrigation and virtual water trade facilitating unprecedented urbanisation and socioeconomic stability for hundreds of years in a region of highly variable climate. To understand Roman water resource management in response to urbanisation and climate variability, a Virtual Water Network of the Roman World was developed. Using this network we find that irrigation and virtual water trade increased Roman resilience to climate variability in the short term. However, urbanisation arising from virtual water trade likely pushed the Empire closer to the boundary of its water resources, led to an increase in import costs, and reduced its resilience to climate variability in the long-term. In addition to improving our understanding of Roman water resource management, our cost-distance based analysis illuminates how increases in import costs arising from climatic and population pressures are likely to be distributed in the future global virtual water network.
Distributed Common Ground System-Navy Increment 2 (DCGS-N Inc 2)
2016-03-01
NASA Astrophysics Data System (ADS)
Lin, Yi-Kuei; Huang, Cheng-Fu; Yeh, Cheng-Ta
2016-04-01
In supply chain management, satisfying customer demand is the manager's foremost concern. However, the goods may rot or be spoilt during delivery owing to natural disasters, inclement weather, traffic accidents, collisions, and so on, such that the remaining intact goods may not meet market demand. This paper concentrates on a stochastic-flow distribution network (SFDN), in which a node denotes a supplier, a transfer station, or a market, while a route denotes a carrier providing the delivery service for a pair of nodes. The available capacity of a carrier is stochastic because the capacity may be partially reserved by other customers. The addressed problem is to evaluate the system reliability, the probability that the SFDN can satisfy the market demand from multiple suppliers to the customer, accounting for the spoilage rate, under the budget constraint. An algorithm is developed in terms of minimal paths to evaluate the system reliability, along with a numerical example to illustrate the solution procedure. A practical case of fruit distribution is presented accordingly to emphasise the management implications of the system reliability.
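The paper evaluates reliability exactly via minimal paths; as a simpler stand-in, the sketch below estimates the same quantity, the probability that the intact goods delivered within budget meet demand, by Monte Carlo sampling of stochastic carrier capacities. Carriers, capacity distributions, spoilage rates, costs and demand are all invented for illustration.

    import random

    # Hypothetical carriers on routes from suppliers to the market.  Each carrier
    # has a discrete available-capacity distribution (units, probability), a
    # spoilage rate and a unit shipping cost.  Values are invented.
    CARRIERS = [
        {"capacity": [(0, 0.1), (5, 0.3), (10, 0.6)], "spoilage": 0.05, "cost": 2.0},
        {"capacity": [(0, 0.2), (8, 0.5), (12, 0.3)], "spoilage": 0.10, "cost": 1.5},
    ]
    DEMAND = 12.0      # intact units required by the market
    BUDGET = 40.0      # total shipping budget

    def sample_capacity(dist):
        r, acc = random.random(), 0.0
        for units, prob in dist:
            acc += prob
            if r <= acc:
                return units
        return dist[-1][0]

    def system_reliability(n_trials=100_000):
        """Estimate P{intact delivered >= DEMAND and shipping cost <= BUDGET}."""
        success = 0
        for _ in range(n_trials):
            delivered, cost = 0.0, 0.0
            for c in CARRIERS:
                cap = sample_capacity(c["capacity"])
                delivered += cap * (1.0 - c["spoilage"])   # goods surviving transit
                cost += cap * c["cost"]
            if delivered >= DEMAND and cost <= BUDGET:
                success += 1
        return success / n_trials

    print(f"estimated system reliability: {system_reliability():.3f}")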
Refueling Strategies for a Team of Cooperating AUVs
2011-01-01
manager, and thus the constraint a centrally managed underwater network imposes on the mission. Task management utilizing Robust Decentralized Task ... the computational complexity. A bid-based approach to task management has also been studied as a possible means of decentralization of group task ... currently performing another task. In [18], ground robots perform distributed task allocation using the ASyMTRy-D algorithm, which is based on CNP
The United States Postal Service (USPS) in cooperation with EPA's National Risk Management Research Laboratory (NRMRL) is engaged in an effort to integrate Waste prevention and recycling activities into the waste management programs at Postal facilities. In this report, the findi...
NASA Astrophysics Data System (ADS)
Cheng, Wei-Chen; Hsu, Nien-Sheng; Cheng, Wen-Ming; Yeh, William W.-G.
2011-10-01
This paper develops alternative strategies for European call options for water purchase under hydrological uncertainties that can be used by water resources managers for decision making. Each alternative strategy maximizes its own objective over a selected sequence of future hydrology that is characterized by exceedance probability. Water trade provides flexibility and enhances water distribution system reliability. However, water trade between two parties in a regional water distribution system involves many issues, such as delivery network, reservoir operation rules, storage space, demand, water availability, uncertainty, and any existing contracts. An option is a security giving the right to buy or sell an asset; in our case, the asset is water. We extend a flow path-based water distribution model to include reservoir operation rules. The model simultaneously considers both the physical distribution network and the relationships between water sellers and buyers. We first test the model extension. Then we apply the proposed optimization model for European call options to the Tainan water distribution system in southern Taiwan. The formulation lends itself to a mixed integer linear programming model. We use the weighting method to formulate a composite objective function for the multiobjective problem. The proposed methodology provides water resources managers with an overall picture of water trade strategies and the consequences of each strategy. The results from the case study indicate that the strategy associated with a streamflow exceedance probability of 50% or smaller should be adopted as the reference strategy for the Tainan water distribution system.
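The weighting method mentioned above collapses several objectives into a single composite function by assigning each objective a weight. The sketch below is a generic illustration with made-up objective terms (expected shortage and option cost), not the paper's actual formulation; sweeping the weights traces out different trade-off solutions.

    # Generic weighting-method composite objective (illustrative objectives only).
    def expected_shortage(decision):
        """Hypothetical stand-in for a simulation of unmet water demand."""
        return max(0.0, 100.0 - decision["option_volume"])

    def option_cost(decision):
        """Hypothetical stand-in for the cost of purchasing call options."""
        return 0.4 * decision["option_volume"]

    def composite_objective(decision, weights=(0.7, 0.3)):
        """Combine two conflicting objectives into a single scalar to minimise."""
        w1, w2 = weights
        return w1 * expected_shortage(decision) + w2 * option_cost(decision)

    # Sweeping the weights yields different reference strategies.
    for w1 in (0.9, 0.7, 0.5, 0.3):
        best = min(({"option_volume": v} for v in range(0, 201, 10)),
                   key=lambda d: composite_objective(d, (w1, 1.0 - w1)))
        print(f"w1={w1}: best option volume = {best['option_volume']}")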
A network approach to decentralized coordination of energy production-consumption grids
Arenas, Alex
2018-01-01
Energy grids are facing a relatively new paradigm consisting of the formation of local distributed energy sources and loads that can operate in parallel and independently from the main power grid (usually called microgrids). One of the main challenges in the management of microgrid-like networks is that of self-adapting to production and demand in a decentralized, coordinated way. Here, we propose a stylized model that allows one to analytically predict the coordination of the elements in the network, depending on the network topology. Surprisingly, almost global coordination is attained when users interact locally, with a small neighborhood, instead of through the obvious but more costly all-to-all coordination. We compute analytically the optimal value of coordinated users in random homogeneous networks. The proposed methodology opens a new way of approaching the analysis of energy demand-side management in networked systems. PMID:29364962
NASA Astrophysics Data System (ADS)
Li, Ming; Yin, Hongxi; Xing, Fangyuan; Wang, Jingchao; Wang, Honghuan
2016-02-01
With its features of network virtualization and resource programmability, the Software Defined Optical Network (SDON) is considered the future development trend of optical networks, providing more flexible, efficient and open network functions and supporting the intraconnection and interconnection of data centers. Meanwhile, cloud platforms can provide powerful computing, storage and management capabilities. In this paper, through the coordination of SDON and the cloud platform, a multi-domain SDON architecture based on a cloud control plane is proposed, which is composed of data centers with a database (DB), a path computation element (PCE), an SDON controller and an orchestrator. In addition, the structures of the multi-domain SDON orchestrator and the OpenFlow-enabled optical node are proposed to realize an effective management and control platform that combines centralized and distributed approaches. Finally, the functional verification and demonstration are performed through our optical experiment network.
Tengku Hashim, Tengku Juhana; Mohamed, Azah
2017-01-01
The growing interest in distributed generation (DG) in recent years has led to a number of generators being connected to the distribution system. The integration of DGs in a distribution system has resulted in a network known as an active distribution network due to the existence of bidirectional power flow in the system. The voltage rise issue is one of the most important technical issues to be addressed when DGs exist in an active distribution network. This paper presents the application of the backtracking search algorithm (BSA), which is a relatively new optimisation technique, to determine the optimal settings of coordinated voltage control in a distribution system. The coordinated voltage control considers power factor, on-load tap-changer and generation curtailment control to manage the voltage rise issue. A multi-objective function is formulated to minimise total losses and voltage deviation in a distribution system. The proposed BSA is compared with particle swarm optimisation (PSO) so as to evaluate its effectiveness in determining the optimal settings of power factor, tap-changer and percentage of active power generation to be curtailed. The load flow algorithm from MATPOWER is integrated in the MATLAB environment to solve the multi-objective optimisation problem. Both the BSA and PSO optimisation techniques have been tested on a radial 13-bus distribution system, and the results show that the BSA performs better than PSO by providing a better fitness value and convergence rate. PMID:28991919
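A minimal sketch of the kind of multi-objective fitness such a search would evaluate is given below, combining total losses and voltage deviation with weights. The candidate encoding (power factor, tap position, curtailment fraction) and the placeholder evaluator are assumptions; the paper obtains losses and voltages from a MATPOWER load flow.

    # Minimal sketch of the fitness evaluated by BSA/PSO for coordinated voltage
    # control.  A real implementation would run a load flow for each candidate;
    # here a simple placeholder evaluator stands in for it.
    V_NOMINAL = 1.0   # per-unit

    def run_load_flow(power_factor, tap_position, curtailment):
        """Placeholder load flow: returns (total_losses_MW, bus_voltages_pu)."""
        losses = 0.12 + 0.05 * curtailment + 0.01 * abs(tap_position)
        voltages = [1.0 + 0.04 * (1.0 - curtailment) - 0.01 * tap_position
                    - 0.02 * (1.0 - power_factor) for _ in range(13)]
        return losses, voltages

    def fitness(candidate, w_loss=0.5, w_volt=0.5):
        """Weighted sum of total losses and total voltage deviation (to minimise)."""
        losses, voltages = run_load_flow(*candidate)
        voltage_deviation = sum(abs(v - V_NOMINAL) for v in voltages)
        return w_loss * losses + w_volt * voltage_deviation

    # Example candidate: power factor 0.95, tap at +2, 10% active power curtailed.
    print(fitness((0.95, 2, 0.10)))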
Mass-storage management for distributed image/video archives
NASA Astrophysics Data System (ADS)
Franchi, Santina; Guarda, Roberto; Prampolini, Franco
1993-04-01
The realization of an image/video database requires a specific design for both database structures and mass storage management. This issue was addressed in the project of the digital image/video database system designed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog image/video coding techniques with their related parameters, and to describe image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server. Because of their large size, they are stored outside the database on network devices. The database contains the pointers to the image/video files and the description of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management: they allow devices to be cataloged and device status and network location to be modified. The medium level manages image/video files on a physical basis; it handles file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to meet delivery/visualization requirements and to reduce archiving costs.
Network based management for multiplexed electric vehicle charging
Gadh, Rajit; Chung, Ching Yen; Qui, Li
2017-04-11
A system for multiplexing charging of electric vehicles, comprising a server coupled to a plurality of charging control modules over a network. Each of said charging modules being connected to a voltage source such that each charging control module is configured to regulate distribution of voltage from the voltage source to an electric vehicle coupled to the charging control module. Data collection and control software is provided on the server for identifying a plurality of electric vehicles coupled to the plurality of charging control modules and selectively distributing charging of the plurality of charging control modules to multiplex distribution of voltage to the plurality of electric vehicles.
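The multiplexing idea in the claim can be pictured with a toy server-side scheduler that rotates a limited number of active charging slots among connected vehicles. The slot count, time quantum and vehicle list are arbitrary illustrative choices, not details from the patent.

    from collections import deque

    # Toy round-robin multiplexer: rotate a limited number of charging slots
    # among connected vehicles in fixed time quanta (all parameters invented).
    def multiplex_charging(vehicles, active_slots=2, quantum_min=15, total_min=60):
        queue = deque(vehicles)
        schedule = []                      # (start_minute, vehicles charging)
        for start in range(0, total_min, quantum_min):
            active = [queue[i] for i in range(min(active_slots, len(queue)))]
            schedule.append((start, list(active)))
            queue.rotate(-len(active))     # move served vehicles to the back
        return schedule

    for start, active in multiplex_charging(["EV-1", "EV-2", "EV-3", "EV-4", "EV-5"]):
        print(f"t={start:02d} min: charging {', '.join(active)}")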
Phenology for science, resource management, decision making, and education
Nolan, V.P.; Weltzin, J.F.
2011-01-01
Fourth USA National Phenology Network (USA-NPN) Research Coordination Network (RCN) Annual Meeting and Stakeholders Workshop; Milwaukee, Wisconsin, 21-22 September 2010; Phenology, the study of recurring plant and animal life cycle events, is rapidly emerging as a fundamental approach for understanding how ecological systems respond to environmental variation and climate change. The USA National Phenology Network (USA-NPN; http://www.usanpn.org) is a large-scale network of governmental and nongovernmental organizations, academic institutions, resource management agencies, and tribes. The network is dedicated to conducting and promoting repeated and integrated plant and animal phenological observations, identifying linkages with other relevant biological and physical data sources, and developing and distributing the tools to analyze these data at local to national scales. The primary goal of the USA-NPN is to improve the ability of decision makers to design strategies for climate adaptation.
Interconnecting heterogeneous database management systems
NASA Technical Reports Server (NTRS)
Gligor, V. D.; Luckenbaugh, G. L.
1984-01-01
It is pointed out that there is still a great need for the development of improved communication between remote, heterogeneous database management systems (DBMS). Problems regarding the effective communication between distributed DBMSs are primarily related to significant differences between local data managers, local data models and representations, and local transaction managers. A system of interconnected DBMSs which exhibit such differences is called a network of distributed, heterogeneous DBMSs. In order to achieve effective interconnection of remote, heterogeneous DBMSs, the users must have uniform, integrated access to the different DBMSs. The present investigation is mainly concerned with an analysis of the existing approaches to interconnecting heterogeneous DBMSs, taking into account four experimental DBMS projects.
Potential and challenges in use of thermal imaging for humid region irrigation system management
USDA-ARS?s Scientific Manuscript database
Thermal imaging has shown potential to assist with many aspects of irrigation management including scheduling water application, detecting leaky irrigation canals, and gauging the overall effectiveness of water distribution networks used in furrow irrigation. Many challenges exist for the use of the...
Distributed Agent-Based Networks in Support of Advanced Marine Corps Command and Control Concept
2012-09-01
clusters of managers and clients that form a hierarchical management framework (Figure 14). However, since it is SNMP-based, due to the size and ... that are much less computationally intensive than other proposed approaches such as multivariate calculations of Pareto boundaries (Bordetsky and
Layer 1 VPN services in distributed next-generation SONET/SDH networks with inverse multiplexing
NASA Astrophysics Data System (ADS)
Ghani, N.; Muthalaly, M. V.; Benhaddou, D.; Alanqar, W.
2006-05-01
Advances in next-generation SONET/SDH along with GMPLS control architectures have enabled many new service provisioning capabilities. In particular, a key services paradigm is the emergent Layer 1 virtual private network (L1 VPN) framework, which allows multiple clients to utilize a common physical infrastructure and provision their own 'virtualized' circuit-switched networks. This precludes expensive infrastructure builds and increases resource utilization for carriers. Along these lines, a novel L1 VPN services resource management scheme for next-generation SONET/SDH networks is proposed that fully leverages advanced virtual concatenation and inverse multiplexing features. Additionally, both centralized and distributed GMPLS-based implementations are also tabled to support the proposed L1 VPN services model. Detailed performance analysis results are presented along with avenues for future research.
The UK DNA banking network: a “fair access” biobank
Dixon, Katherine; Platt, Andrew; Pullum, Simon; Lewis, David; Hall, Alistair; Ollier, William
2009-01-01
The UK DNA Banking Network (UDBN) is a secondary biobank: it aggregates and manages resources (samples and data) originated by others. The network comprises, on the one hand, investigator groups led by clinicians each with a distinct disease specialism and, on the other hand, a research infrastructure to manage samples and data. The infrastructure addresses the problem of providing secure quality-assured accrual, storage, replenishment and distribution capacities for samples and of facilitating access to DNA aliquots and data for new peer-reviewed studies in genetic epidemiology. ‘Fair access’ principles and practices have been pragmatically developed that, unlike open access policies in this area, are not cumbersome but, rather, are fit for the purpose of expediting new study designs and their implementation. UDBN has so far distributed >60,000 samples for major genotyping studies yielding >10 billion genotypes. It provides a working model that can inform progress in biobanking nationally, across Europe and internationally. PMID:19672698
Distributed information system (water fact sheet)
Harbaugh, A.W.
1986-01-01
During 1982-85, the Water Resources Division (WRD) of the U.S. Geological Survey (USGS) installed over 70 large minicomputers in offices across the country to support its mission in the science of hydrology. These computers are connected by a communications network that allows information to be shared among computers in each office. The computers and network together are known as the Distributed Information System (DIS). The computers are accessed through more than 1500 terminals and minicomputers. The WRD has three fundamentally different needs for computing: data management; hydrologic analysis; and administration. Data management accounts for 50% of the computational workload of WRD because hydrologic data are collected in all 50 states, Puerto Rico, and the Pacific trust territories. Hydrologic analysis accounts for 40% of the computational workload of WRD. Cost accounting, payroll, personnel records, and planning for WRD programs occupy an estimated 10% of the computer workload. The DIS communications network is shown on a map. (Lantz-PTT)
Data Aggregation in Multi-Agent Systems in the Presence of Hybrid Faults
ERIC Educational Resources Information Center
Srinivasan, Satish Mahadevan
2010-01-01
Data Aggregation (DA) is a set of functions that provide components of a distributed system access to global information for purposes of network management and user services. With the diverse new capabilities that networks can provide, applicability of DA is growing. DA is useful in dealing with multi-value domain information and often requires…
NASA Astrophysics Data System (ADS)
Mohamed, Ahmed
Efficient and reliable techniques for power delivery and utilization are needed to account for the increased penetration of renewable energy sources in electric power systems. Such methods are also required for current and future demands of plug-in electric vehicles and high-power electronic loads. Distributed control and optimal power network architectures will lead to viable solutions to the energy management issue with a high level of reliability and security. This dissertation is aimed at developing and verifying new techniques for distributed control by deploying DC microgrids, involving distributed renewable generation and energy storage, through the operating AC power system. To achieve the findings of this dissertation, an energy system architecture was developed involving AC and DC networks, both with distributed generation and demands. The various components of the DC microgrid were designed and built, including DC-DC converters, voltage source inverters (VSIs) and AC-DC rectifiers, featuring novel designs developed by the candidate. New control techniques were developed and implemented to maximize the operating range of the power conditioning units used for integrating renewable energy into the DC bus. The control and operation of the DC microgrids in the hybrid AC/DC system involve intelligent energy management. Real-time energy management algorithms were developed and experimentally verified. These algorithms are based on intelligent decision-making elements along with an optimization process. This was aimed at enhancing the overall performance of the power system and mitigating the effect of heavy non-linear loads with variable intensity and duration. The developed algorithms were also used for managing the charging/discharging process of plug-in electric vehicle emulators. The protection of the proposed hybrid AC/DC power system was studied. Fault analysis, protection schemes and coordination, in addition to ideas on how to retrofit currently available protection concepts and devices for AC systems in a DC network, were presented. A study was also conducted on how changing the distribution architecture and distributing the storage assets across the various zones of the network affect the system's dynamic security and stability. A practical shipboard power system was studied as an example of a hybrid AC/DC power system involving pulsed loads. The proposed hybrid AC/DC power system, along with most of the ideas, controls and algorithms presented in this dissertation, was experimentally verified at the Smart Grid Testbed, Energy Systems Research Laboratory.
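As a rough illustration of what one decision element of such a real-time energy management algorithm might look like, the rule-based sketch below decides whether a DC-bus storage unit should charge or discharge based on bus voltage, renewable surplus and state of charge. All thresholds and signal names are assumptions, not values from the dissertation.

    # Illustrative rule-based energy management step for a DC microgrid storage
    # unit.  All thresholds and signals are assumed values for the sketch.
    def storage_decision(bus_voltage, renewable_kw, load_kw, soc,
                         v_low=370.0, v_high=390.0, soc_min=0.2, soc_max=0.9):
        """Return ('charge'|'discharge'|'idle', power_kw) for the next interval."""
        surplus = renewable_kw - load_kw
        if bus_voltage < v_low and soc > soc_min:
            return "discharge", min(abs(surplus), 10.0)   # support a sagging bus
        if bus_voltage > v_high and soc < soc_max and surplus > 0:
            return "charge", min(surplus, 10.0)           # absorb renewable surplus
        return "idle", 0.0

    # Example: high bus voltage with PV surplus and room in the battery.
    print(storage_decision(bus_voltage=395.0, renewable_kw=12.0, load_kw=7.0, soc=0.55))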
Multicast for savings in cache-based video distribution
NASA Astrophysics Data System (ADS)
Griwodz, Carsten; Zink, Michael; Liepert, Michael; On, Giwon; Steinmetz, Ralf
1999-12-01
Internet video-on-demand (VoD) today streams videos directly from server to clients, because re-distribution is not yet established. Intranet solutions exist but are typically managed centrally. Caching may overcome these management needs; however, existing web caching strategies are not applicable because they were designed for different conditions. We propose movie distribution by means of caching and study its feasibility from the service provider's point of view. We introduce the combination of our reliable multicast protocol LCRTP for caching hierarchies with our enhancement of the patching technique for bandwidth-friendly true VoD, which does not depend on network resource guarantees.
Grethe, Jeffrey S; Baru, Chaitan; Gupta, Amarnath; James, Mark; Ludaescher, Bertram; Martone, Maryann E; Papadopoulos, Philip M; Peltier, Steven T; Rajasekar, Arcot; Santini, Simone; Zaslavsky, Ilya N; Ellisman, Mark H
2005-01-01
Through support from the National Institutes of Health's National Center for Research Resources, the Biomedical Informatics Research Network (BIRN) is pioneering the use of advanced cyberinfrastructure for medical research. By synchronizing developments in advanced wide area networking, distributed computing, distributed database federation, and other emerging capabilities of e-science, the BIRN has created a collaborative environment that is paving the way for biomedical research and clinical information management. The BIRN Coordinating Center (BIRN-CC) is orchestrating the development and deployment of key infrastructure components for immediate and long-range support of biomedical and clinical research being pursued by domain scientists in three neuroimaging test beds.
Managing Network Partitions in Structured P2P Networks
NASA Astrophysics Data System (ADS)
Shafaat, Tallat M.; Ghodsi, Ali; Haridi, Seif
Structured overlay networks form a major class of peer-to-peer systems, which are touted for their abilities to scale, tolerate failures, and self-manage. Any long-lived Internet-scale distributed system is destined to face network partitions. Consequently, the problem of network partitions and mergers is highly related to fault-tolerance and self-management in large-scale systems, making resilience to network partitions a crucial requirement for building any structured peer-to-peer system. Despite this, the problem has hardly been studied in the context of structured peer-to-peer systems. Structured overlays have mainly been studied under churn (frequent joins/failures), which as a side effect solves the problem of network partitions, as it is similar to massive node failures. Yet, the crucial aspect of network mergers has been ignored. In fact, it has been claimed that ring-based structured overlay networks, which constitute the majority of the structured overlays, are intrinsically ill-suited for merging rings. In this chapter, we motivate the problem of network partitions and mergers in structured overlays. We discuss how a structured overlay can automatically detect a network partition and merger. We present an algorithm for merging multiple similar ring-based overlays when the underlying network merges. We examine the solution in dynamic conditions, showing how our solution is resilient to churn during the merger, something widely believed to be difficult or impossible. We evaluate the algorithm for various scenarios and show that even when falsely detecting a merger, the algorithm quickly terminates and does not clutter the network with many messages. The algorithm is flexible as the tradeoff between message complexity and time complexity can be adjusted by a parameter.
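One plausible way to picture the detection step is a node keeping a passive list of peers it once knew but now considers failed and periodically probing them; a reply from a supposedly dead peer suggests the underlying partition has healed and a ring merge should be attempted. The sketch below is a simplified heuristic in that spirit, not the chapter's actual algorithm.

    # Simplified partition-merger detection heuristic for a ring overlay node.
    # `alive` simulates reachability; a real overlay would send a network ping.
    class RingNode:
        def __init__(self, node_id, successor, passive_list):
            self.node_id = node_id
            self.successor = successor            # current successor on the ring
            self.passive_list = set(passive_list) # peers believed failed/unreachable

        def periodic_probe(self, alive):
            """Probe peers believed failed; if one answers, the underlying network
            partition has likely healed -> trigger a ring merge with that peer."""
            for peer in sorted(self.passive_list):
                if alive(peer):
                    self.passive_list.discard(peer)
                    return ("start_ring_merge_with", peer)
            return None

    # Example: node 17 lost contact with 42 during a partition; the partition heals.
    node = RingNode(node_id=17, successor=23, passive_list=[42, 88])
    reachable_again = {42}
    print(node.periodic_probe(alive=lambda p: p in reachable_again))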
1979-09-30
University, Pittsburgh, Pennsylvania (1976). 14. R. L. Kirby, "ULISP for PDP-11s with Memory Management," Report MCS-76-23763, University of Maryland ... teletype or graphics output. The ... must also monitor its own command queue and acknowledge commands sent to it by the user interface ... kernel. By a network kernel we mean a multicomputer distributed operating system kernel that includes processor schedulers, "core" memory managers, and
1991-03-01
management methodologies claim to be "expert systems" with security intelligence built into them to derive a body of both facts and speculative data ... Data Administration considerations ... Artificial Intelligence ... Description of Technologies ... as intelligent gateways, wide area networks, and distributed databases for the distribution of logistics products. The integrity of CALS data and the
Bhat, Shirish; Motz, Louis H; Pathak, Chandra; Kuebler, Laura
2015-01-01
A geostatistical method was applied to optimize an existing groundwater-level monitoring network in the Upper Floridan aquifer for the South Florida Water Management District in the southeastern United States. Analyses were performed to determine suitable numbers and locations of monitoring wells that will provide equivalent or better quality groundwater-level data compared to an existing monitoring network. Ambient, unadjusted groundwater heads were expressed as salinity-adjusted heads based on the density of freshwater, well screen elevations, and temperature-dependent saline groundwater density. The optimization of the numbers and locations of monitoring wells is based on a pre-defined groundwater-level prediction error. The newly developed network combines an existing network with the addition of new wells that will result in a spatial distribution of groundwater monitoring wells that better defines the regional potentiometric surface of the Upper Floridan aquifer in the study area. The network yields groundwater-level predictions that differ significantly from those produced using the existing network. The newly designed network will reduce the mean prediction standard error by 43% compared to the existing network. The adoption of a hexagonal grid network for the South Florida Water Management District is recommended to achieve both a uniform level of information about groundwater levels and the minimum required accuracy. It is customary to install more monitoring wells for observing groundwater levels and groundwater quality as groundwater development progresses. However, budget constraints often force water managers to implement cost-effective monitoring networks. In this regard, this study provides guidelines to water managers concerned with groundwater planning and monitoring.
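To make the notion of designing to a pre-defined prediction error concrete, the sketch below greedily adds candidate wells so as to reduce the mean simple-kriging prediction variance over a grid, under an assumed Gaussian covariance model. Coordinates, sill and range are invented, and the study's actual workflow (salinity-adjusted heads, hexagonal grids) is considerably richer.

    import numpy as np

    # Assumed Gaussian covariance model for heads (sill and range are invented).
    SILL, RANGE = 1.0, 20.0   # variance and correlation length (arbitrary units)

    def cov(a, b):
        d = np.linalg.norm(np.asarray(a, float) - np.asarray(b, float))
        return SILL * np.exp(-(d / RANGE) ** 2)

    def mean_kriging_variance(wells, grid):
        """Mean simple-kriging prediction variance over `grid` given `wells`."""
        K = np.array([[cov(a, b) for b in wells] for a in wells])
        K += 1e-9 * np.eye(len(wells))            # numerical stabilisation
        K_inv = np.linalg.inv(K)
        variances = []
        for p in grid:
            k = np.array([cov(p, w) for w in wells])
            variances.append(SILL - k @ K_inv @ k)
        return float(np.mean(variances))

    def greedy_select(existing, candidates, grid, n_new):
        """Repeatedly add the candidate that most reduces mean prediction variance."""
        selected = list(existing)
        for _ in range(n_new):
            best = min(candidates,
                       key=lambda c: mean_kriging_variance(selected + [c], grid))
            selected.append(best)
            candidates = [c for c in candidates if c != best]
        return selected

    rng = np.random.default_rng(0)
    grid = [(x, y) for x in range(0, 100, 10) for y in range(0, 100, 10)]
    existing = [(10, 10), (80, 20), (50, 90)]
    candidates = [tuple(map(float, rng.uniform(0, 100, 2))) for _ in range(30)]
    network = greedy_select(existing, candidates, grid, n_new=3)
    print("added wells:", network[len(existing):])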
Complex Dynamics in Information Sharing Networks
NASA Astrophysics Data System (ADS)
Cronin, Bruce
This study examines the roll-out of an electronic knowledge base in a medium-sized professional services firm over a six year period. The efficiency of such implementation is a key business problem in IT systems of this type. Data from usage logs provides the basis for analysis of the dynamic evolution of social networks around the depository during this time. The adoption pattern follows an "s-curve" and usage exhibits something of a power law distribution, both attributable to network effects, and network position is associated with organisational performance on a number of indicators. But periodicity in usage is evident and the usage distribution displays an exponential cut-off. Further analysis provides some evidence of mathematical complexity in the periodicity. Some implications of complex patterns in social network data for research and management are discussed. The study provides a case study demonstrating the utility of the broad methodological approach.
Network analysis of a financial market based on genuine correlation and threshold method
NASA Astrophysics Data System (ADS)
Namaki, A.; Shirazi, A. H.; Raei, R.; Jafari, G. R.
2011-10-01
A financial market is an example of an adaptive complex network consisting of many interacting units. This network reflects the market's behavior. In this paper, we use the Random Matrix Theory (RMT) notion of specifying the largest eigenvector of the correlation matrix as the market mode of the stock network. For better risk management, we clean the correlation matrix by removing the market mode from the data and then construct this matrix based on the residuals. We show that this technique has an important effect on the correlation coefficient distribution by applying it to the Dow Jones Industrial Average (DJIA). To study the topological structure of a network, we apply the market-mode removal technique and the threshold method to the Tehran Stock Exchange (TSE) as an example. We show that this network follows a power-law model in certain intervals. We also show the behavior of the clustering coefficients and component numbers of this network for different thresholds. These outputs are useful for both theoretical and practical purposes such as asset allocation and risk management.
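A compact sketch of the cleaning-and-threshold procedure: compute the return correlation matrix, identify the market mode as the eigenvector of the largest eigenvalue, regress it out of each stock's returns, rebuild the correlation matrix from the residuals, and keep only links whose correlation magnitude exceeds a threshold. The synthetic returns below stand in for real DJIA/TSE price data.

    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic daily returns: a common market factor plus idiosyncratic noise.
    n_stocks, n_days = 20, 500
    market = rng.normal(0, 0.01, n_days)
    betas = rng.uniform(0.5, 1.5, n_stocks)
    returns = np.outer(betas, market) + rng.normal(0, 0.01, (n_stocks, n_days))

    def standardize(x):
        return (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)

    R = standardize(returns)
    C = (R @ R.T) / n_days                     # correlation matrix

    # Market mode = eigenvector associated with the largest eigenvalue of C.
    eigvals, eigvecs = np.linalg.eigh(C)
    v_market = eigvecs[:, -1]
    market_mode = v_market @ R                 # market-mode time series

    # Remove the market mode from each stock by regression, then re-correlate.
    betas_hat = (R @ market_mode) / (market_mode @ market_mode)
    residuals = standardize(R - np.outer(betas_hat, market_mode))
    C_clean = (residuals @ residuals.T) / n_days

    # Threshold method: connect stocks whose residual correlation exceeds theta.
    theta = 0.1
    adjacency = (np.abs(C_clean) > theta) & ~np.eye(n_stocks, dtype=bool)
    print("edges in thresholded network:", int(adjacency.sum()) // 2)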
Energy-efficient sensing in wireless sensor networks using compressed sensing.
Razzaque, Mohammad Abdur; Dobson, Simon
2014-02-12
Sensing of the application environment is the main purpose of a wireless sensor network. Most existing energy management strategies and compression techniques assume that the sensing operation consumes significantly less energy than radio transmission and reception. This assumption does not hold in a number of practical applications. Sensing energy consumption in these applications may be comparable to, or even greater than, that of the radio. In this work, we support this claim by a quantitative analysis of the main operational energy costs of popular sensors, radios and sensor motes. In light of the importance of sensing level energy costs, especially for power hungry sensors, we consider compressed sensing and distributed compressed sensing as potential approaches to provide energy efficient sensing in wireless sensor networks. Numerical experiments investigating the effectiveness of compressed sensing and distributed compressed sensing using real datasets show their potential for efficient utilization of sensing and overall energy costs in wireless sensor networks. It is shown that, for some applications, compressed sensing and distributed compressed sensing can provide greater energy efficiency than transform coding and model-based adaptive sensing in wireless sensor networks.
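To make the compressed-sensing idea concrete, the sketch below acquires a sparse signal through a small number of random projections and reconstructs it with orthogonal matching pursuit. It is a generic illustration of CS, not the specific sensing pipelines or datasets evaluated in the paper.

    import numpy as np

    rng = np.random.default_rng(2)

    # Sparse "sensor field" signal: N samples, only k of them nonzero.
    N, k, M = 256, 8, 64                       # ambient dim, sparsity, measurements
    x = np.zeros(N)
    x[rng.choice(N, k, replace=False)] = rng.normal(0, 1, k)

    # Random Gaussian measurement matrix: only M << N values are acquired/sent.
    A = rng.normal(0, 1.0 / np.sqrt(M), (M, N))
    y = A @ x

    def omp(A, y, sparsity):
        """Orthogonal matching pursuit: greedy sparse recovery of x from y = A x."""
        residual, support = y.copy(), []
        for _ in range(sparsity):
            j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
            if j not in support:
                support.append(j)
            coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coeffs
        x_hat = np.zeros(A.shape[1])
        x_hat[support] = coeffs
        return x_hat

    x_hat = omp(A, y, k)
    print("relative reconstruction error:",
          float(np.linalg.norm(x_hat - x) / np.linalg.norm(x)))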
Predicting the distribution of bed material accumulation using river network sediment budgets
NASA Astrophysics Data System (ADS)
Wilkinson, Scott N.; Prosser, Ian P.; Hughes, Andrew O.
2006-10-01
Assessing the spatial distribution of bed material accumulation in river networks is important for determining the impacts of erosion on downstream channel form and habitat and for planning erosion and sediment management. A model that constructs spatially distributed budgets of bed material sediment is developed to predict the locations of accumulation following land use change. For each link in the river network, GIS algorithms are used to predict bed material supply from gullies, river banks, and upstream tributaries and to compare total supply with transport capacity. The model is tested in the 29,000 km2 Murrumbidgee River catchment in southeast Australia. It correctly predicts the presence or absence of accumulation in 71% of river links, which is significantly better performance than previous models, which do not account for spatial variability in sediment supply and transport capacity. Representing transient sediment storage is important for predicting smaller accumulations. Bed material accumulation is predicted in 25% of the river network, indicating its importance as an environmental problem in Australia.
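The link-level budgeting logic can be illustrated in a few lines: for each river link, compare the bed material supplied from local erosion and upstream links with the link's transport capacity; the surplus is deposited and the remainder is passed downstream. The example network and loads are invented.

    # Minimal link-by-link bed material budget (all loads/capacities invented, t/yr).
    # Each link lists its upstream links, local supply (gully + bank erosion) and
    # bed material transport capacity; links appear in upstream-to-downstream order.
    LINKS = {
        "A": {"upstream": [],         "local_supply": 120.0, "capacity": 100.0},
        "B": {"upstream": [],         "local_supply": 40.0,  "capacity": 90.0},
        "C": {"upstream": ["A", "B"], "local_supply": 30.0,  "capacity": 70.0},
    }

    def route_bed_material(links):
        outflow, deposited = {}, {}
        for name, link in links.items():           # assumes topological order
            supply = link["local_supply"] + sum(outflow[u] for u in link["upstream"])
            transported = min(supply, link["capacity"])
            deposited[name] = supply - transported  # accumulation on the bed
            outflow[name] = transported             # delivered to the next link
        return deposited

    for link, dep in route_bed_material(LINKS).items():
        status = "accumulating" if dep > 0 else "supply-limited"
        print(f"link {link}: deposition {dep:.0f} t/yr ({status})")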
Performance evaluation of the NASA/KSC CAD/CAE and office automation LAN's
NASA Technical Reports Server (NTRS)
Zobrist, George W.
1994-01-01
This study's objective is the performance evaluation of the existing CAD/CAE (Computer Aided Design/Computer Aided Engineering) network at NASA/KSC. This evaluation also includes a similar study of the Office Automation network, since it is being planned to integrate this network into the CAD/CAE network. The Microsoft mail facility which is presently on the CAD/CAE network was monitored to determine its present usage. This performance evaluation of the various networks will aid the NASA/KSC network managers in planning for the integration of future workload requirements into the CAD/CAE network and determining the effectiveness of the planned FDDI (Fiber Distributed Data Interface) migration.
NASA Astrophysics Data System (ADS)
Huo, Xianxu; Li, Guodong; Jiang, Ling; Wang, Xudong
2017-08-01
With the development of the electricity market, distributed generation (DG) technology and related policies, regional energy suppliers are encouraged to build DG. Against this background, the concept of the active distribution network (ADN) has been put forward. In this paper, a bi-level model for intermittent DG that considers the benefit of regional energy suppliers is proposed. The objective of the upper level is the maximization of the benefit of regional energy suppliers. On this basis, the lower level is optimized for each scenario. The uncertainties of DG output and user load, as well as four active management measures, which include demand-side management, curtailing the output power of DG, regulating the reactive power compensation capacity and regulating the on-load tap changer, are considered. The harmony search algorithm and particle swarm optimization are combined as a hybrid strategy to solve the model. The model and strategy are tested on the IEEE 33-node system, and the results of the case study indicate that they successfully increase the DG capacity and the benefit of regional energy suppliers.
Pan, Jeng-Jong; Nahm, Meredith; Wakim, Paul; Cushing, Carol; Poole, Lori; Tai, Betty; Pieper, Carl F
2009-02-01
Clinical trial networks (CTNs) were created to provide a sustaining infrastructure for the conduct of multisite clinical trials. As such, they must withstand changes in membership. Centralization of infrastructure including knowledge management, portfolio management, information management, process automation, work policies, and procedures in clinical research networks facilitates consistency and ultimately research. In 2005, the National Institute on Drug Abuse (NIDA) CTN transitioned from a distributed data management model to a centralized informatics infrastructure to support the network's trial activities and administration. We describe the centralized informatics infrastructure and discuss our challenges to inform others considering such an endeavor. During the migration of a clinical trial network from a decentralized to a centralized data center model, descriptive data were captured and are presented here to assess the impact of centralization. We present the framework for the informatics infrastructure and evaluative metrics. The network has decreased the time from last patient-last visit to database lock from an average of 7.6 months to 2.8 months. The average database error rate decreased from 0.8% to 0.2%, with a corresponding decrease in the interquartile range from 0.04%-1.0% before centralization to 0.01-0.27% after centralization. Centralization has provided the CTN with integrated trial status reporting and the first standards-based public data share. A preliminary cost-benefit analysis showed a 50% reduction in data management cost per study participant over the life of a trial. A single clinical trial network comprising addiction researchers and community treatment programs was assessed. The findings may not be applicable to other research settings. The identified informatics components provide the information and infrastructure needed for our clinical trial network. Post centralization data management operations are more efficient and less costly, with higher data quality.
78 FR 65963 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-04
... Branch seeks to collect data on distribution networks and business practices from fish processors that... analysis on the economic impacts of potential fishery management actions, consistent with the Magnuson...
Parallel processing for scientific computations
NASA Technical Reports Server (NTRS)
Alkhatib, Hasan S.
1991-01-01
The main contribution of the effort over the last two years is the introduction of the MOPPS system. After an extensive literature search, we introduced the system, which is described next. MOPPS employs a new solution to the problem of managing programs which solve scientific and engineering applications in a distributed processing environment. With this solution, autonomous computers cooperate efficiently in solving large scientific problems. MOPPS has the advantage of not assuming the presence of any particular network topology or configuration, computer architecture, or operating system. It imposes little overhead on network and processor resources while efficiently managing programs concurrently. The core of MOPPS is an intelligent program manager that builds a knowledge base of the execution performance of the parallel programs it is managing under various conditions. The manager applies this knowledge to improve the performance of future runs; in effect, the program manager learns from experience.
NASA Astrophysics Data System (ADS)
Deuerlein, Jochen; Meyer-Harries, Lea; Guth, Nicolai
2017-07-01
Drinking water distribution networks are part of critical infrastructures and are exposed to a number of different risks. One of them is the risk of unintended or deliberate contamination of the drinking water within the pipe network. Over the past decade research has focused on the development of new sensors that are able to detect malicious substances in the network and early warning systems for contamination. In addition to the optimal placement of sensors, the automatic identification of the source of a contamination is an important component of an early warning and event management system for security enhancement of water supply networks. Many publications deal with the algorithmic development; however, only little information exists about the integration within a comprehensive real-time event detection and management system. In the following the analytical solution and the software implementation of a real-time source identification module and its integration within a web-based event management system are described. The development was part of the SAFEWATER project, which was funded under FP 7 of the European Commission.
Security and Efficiency Concerns With Distributed Collaborative Networking Environments
2003-09-01
have the ability to access Web communications services of the WebEx MediaTone Network from a single login. [24] WebEx provides a range of secure ... Web. WebEx services enable secure data, voice and video communications through the browser and are supported by the WebEx MediaTone Network, a global ... designed to host large-scale, structured events and conferences, featuring a Q&A Manager that allows multiple moderators to handle questions while
On Distributed PV Hosting Capacity Estimation, Sensitivity Study, and Improvement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Fei; Mather, Barry
This paper first studies the estimated distributed PV hosting capacities of seventeen utility distribution feeders using Monte Carlo simulation-based stochastic analysis, and then analyzes the sensitivity of PV hosting capacity to both feeder and photovoltaic system characteristics. Furthermore, an active distribution network management approach is proposed to maximize PV hosting capacity by optimally switching capacitors, adjusting voltage regulator taps, managing controllable branch switches and controlling smart PV inverters. The approach is formulated as a mixed-integer nonlinear optimization problem and a genetic algorithm is developed to obtain the solution. Multiple simulation cases are studied and the effectiveness of the proposed approach on increasing PV hosting capacity is demonstrated.
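A stripped-down illustration of the Monte Carlo hosting-capacity idea: randomly place and grow PV on a small radial feeder, approximate voltage rise with a linearised R·P/V term per line section, and record the total PV at which an overvoltage limit is first violated. The feeder data, limits and the linear approximation are assumptions for the sketch; the study uses full feeder models and additional controls.

    import random

    # Toy radial feeder: per-section resistance (ohm) and per-bus load (kW).
    SECTION_R = [0.5, 0.5, 0.5, 0.5, 0.5]          # section i connects bus i-1 to bus i
    BUS_LOAD_KW = [50.0, 50.0, 50.0, 50.0, 50.0]
    V_NOM = 7200.0                                 # volts, line-to-neutral (assumed)
    V_LIMIT = 1.05                                 # per-unit overvoltage limit

    def max_voltage_pu(pv_kw):
        """Linearised voltage profile: dV_k = sum_{j<=k} R_j * P_flow_j / V_nom."""
        n = len(SECTION_R)
        net_injection = [pv_kw[i] - BUS_LOAD_KW[i] for i in range(n)]   # kW per bus
        v, v_max = V_NOM, V_NOM
        for j in range(n):
            flow_kw = sum(net_injection[j:])       # reverse power through section j
            v += SECTION_R[j] * flow_kw * 1000.0 / V_NOM
            v_max = max(v_max, v)
        return v_max / V_NOM

    def hosting_capacity_trial():
        """Grow randomly placed PV until the overvoltage limit is first violated."""
        pv = [0.0] * len(SECTION_R)
        while max_voltage_pu(pv) <= V_LIMIT:
            pv[random.randrange(len(pv))] += 25.0  # add a 25 kW system somewhere
        return sum(pv) - 25.0                      # last total that was still feasible

    trials = [hosting_capacity_trial() for _ in range(200)]
    print(f"hosting capacity: min {min(trials):.0f} kW, "
          f"mean {sum(trials)/len(trials):.0f} kW, max {max(trials):.0f} kW")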
An eConsent-based System Architecture Supporting Cooperation in Integrated Healthcare Networks.
Bergmann, Joachim; Bott, Oliver J; Hoffmann, Ina; Pretschner, Dietrich P
2005-01-01
The economic need for efficient healthcare leads to cooperative shared care networks. A virtual electronic health record is required, which integrates patient-related information but reflects the distributed infrastructure and restricts access to those health professionals involved in the care process. Our work aims at the specification and development of a system architecture fulfilling these requirements, to be used in concrete regional pilot studies. Methodical analysis and specification have been performed in a healthcare network using the formal method and modelling tool MOSAIK-M. The complexity of the application field was reduced by focusing on the scenario of thyroid disease care, which still involves varied interdisciplinary cooperation. The result is an architecture for a secure distributed electronic health record for integrated care networks, specified in terms of a MOSAIK-M-based system model. The architecture proposes business processes, application services, and a sophisticated security concept, providing a platform for distributed, document-based, patient-centred, and secure cooperation. A corresponding system prototype has been developed for pilot studies, using advanced application server technologies. The architecture combines consolidated patient-centred document management with a decentralized system structure without the need for replication management. An eConsent-based approach ensures that access to the distributed health record remains under the control of the patient. The proposed architecture replaces message-based communication approaches, because it implements a virtual health record providing complete and current information. Acceptance of the new communication services depends on compatibility with the clinical routine. Unique and cross-institutional identification of a patient is also a challenge, but it will lose significance as common patient cards are established.
ERIC Educational Resources Information Center
Naito, Eisuke
This paper discusses knowledge management (KM) in relation to a shared cataloging system in Japanese university libraries. The first section describes the Japanese scene related to knowledge management and the working environment, including the SECI (Socialization, Externalization, Combination, Internalization) model, the context of knowledge, and…
Simple Random Sampling-Based Probe Station Selection for Fault Detection in Wireless Sensor Networks
Huang, Rimao; Qiu, Xuesong; Rui, Lanlan
2011-01-01
Fault detection for wireless sensor networks (WSNs) has been studied intensively in recent years. Most existing works statically choose the manager nodes as probe stations and probe the network at a fixed frequency. This straightforward solution, however, leads to several deficiencies. Firstly, by assigning the fault detection task only to the manager node, the whole network is out of balance, and this quickly overloads the already heavily burdened manager node, which in turn ultimately shortens the lifetime of the whole network. Secondly, probing at a fixed frequency often generates too much useless network traffic, which results in a waste of the limited network energy. Thirdly, the traditional algorithm for choosing a probing node is too complicated to be used in energy-critical wireless sensor networks. In this paper, we study the distribution characteristics of the faulty nodes in wireless sensor networks and validate the Pareto principle that a small number of clusters contain most of the faults. We then present a Simple Random Sampling-based algorithm to dynamically choose sensor nodes as probe stations. A dynamic adjustment rule for the probing frequency is also proposed to reduce the number of useless probing packets. The simulation experiments demonstrate that the algorithm and adjustment rule we present can effectively prolong the lifetime of a wireless sensor network without decreasing the fault detection rate. PMID:22163789
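A small sketch of the two ideas highlighted above: pick probe stations by simple random sampling over the cluster heads rather than always burdening the manager node, and lengthen the probing interval while no faults are observed. Cluster identifiers, interval bounds and the back-off rule are invented for illustration.

    import random

    # Simple-random-sampling choice of probe stations among cluster heads.
    def select_probe_stations(cluster_heads, sample_size, seed=None):
        rng = random.Random(seed)
        return rng.sample(cluster_heads, sample_size)   # SRS without replacement

    # Adaptive probing interval: back off while the network looks healthy,
    # probe faster as soon as a fault is detected (bounds/factors are invented).
    def next_interval(current_s, fault_detected, lo=10.0, hi=300.0, backoff=1.5):
        return lo if fault_detected else min(hi, current_s * backoff)

    cluster_heads = [f"CH-{i}" for i in range(20)]
    probes = select_probe_stations(cluster_heads, sample_size=4, seed=7)
    print("probe stations this round:", probes)

    interval = 10.0
    for fault in [False, False, False, True, False]:
        interval = next_interval(interval, fault)
        print(f"fault={fault} -> next probing interval {interval:.0f} s")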
PILOT: An intelligent distributed operations support system
NASA Technical Reports Server (NTRS)
Rasmussen, Arthur N.
1993-01-01
The Real-Time Data System (RTDS) project is exploring the application of advanced technologies to the real-time flight operations environment of the Mission Control Centers at NASA's Johnson Space Center. The system, based on a network of engineering workstations, provides services such as delivery of real time telemetry data to flight control applications. To automate the operation of this complex distributed environment, a facility called PILOT (Process Integrity Level and Operation Tracker) is being developed. PILOT comprises a set of distributed agents cooperating with a rule-based expert system; together they monitor process operation and data flows throughout the RTDS network. The goal of PILOT is to provide unattended management and automated operation under user control.
Optimal Water-Power Flow Problem: Formulation and Distributed Optimal Solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall-Anese, Emiliano; Zhao, Changhong; Zamzam, Admed S.
This paper formalizes an optimal water-power flow (OWPF) problem to optimize the use of controllable assets across power and water systems while accounting for the couplings between the two infrastructures. Tanks and pumps are optimally managed to satisfy water demand while improving power grid operations; for the power network, an AC optimal power flow formulation is augmented to accommodate the controllability of water pumps. Unfortunately, the physics governing the operation of the two infrastructures and coupling constraints lead to a nonconvex (and, in fact, NP-hard) problem; however, after reformulating OWPF as a nonconvex, quadratically-constrained quadratic problem, a feasible point pursuit-successive convex approximation approach is used to identify feasible and optimal solutions. In addition, a distributed solver based on the alternating direction method of multipliers enables water and power operators to pursue individual objectives while respecting the couplings between the two networks. The merits of the proposed approach are demonstrated for the case of a distribution feeder coupled with a municipal water distribution network.
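The distributed solver mentioned above uses the alternating direction method of multipliers; the sketch below shows the generic consensus-ADMM update pattern on a toy problem in which a 'water' agent and a 'power' agent each hold a simple quadratic cost over a shared coupling variable. The costs and penalty parameter are placeholders, not the paper's OWPF formulation.

    import numpy as np

    # Toy consensus ADMM: two agents (water, power) agree on a shared scalar
    # coupling variable (e.g., pump power), each minimising its own quadratic cost
    # f_i(x) = 0.5 * a_i * (x - c_i)^2.  All coefficients are invented.
    agents = [{"a": 2.0, "c": 3.0},    # "water" operator's preferred operating point
              {"a": 1.0, "c": 8.0}]    # "power" operator's preferred operating point
    rho = 1.0                          # ADMM penalty parameter

    x = np.zeros(len(agents))          # local copies of the coupling variable
    u = np.zeros(len(agents))          # scaled dual variables
    z = 0.0                            # consensus variable

    for it in range(50):
        # Local x-updates (closed form for quadratic costs), done independently.
        for i, ag in enumerate(agents):
            x[i] = (ag["a"] * ag["c"] + rho * (z - u[i])) / (ag["a"] + rho)
        # Consensus z-update and dual updates (only averages are exchanged).
        z = float(np.mean(x + u))
        u += x - z

    # Optimum of the summed cost is the a-weighted mean of the c_i: (2*3 + 1*8)/3.
    print(f"consensus value: {z:.4f} (analytic optimum {14/3:.4f})")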
ERIC Educational Resources Information Center
Morag, Azriel
Libraries faced with the challenge of cooperative cataloging must maintain a high degree of unification within the library network (consortium) without compromising local libraries' independence. This paper compares a traditional model for cooperative catalogs achieved by means of a Union Catalog that depends entirely on replication of data…
Wireless Distribution Systems To Support Medical Response to Disasters
Arisoylu, Mustafa; Mishra, Rajesh; Rao, Ramesh; Lenert, Leslie A.
2005-01-01
We discuss the design of multi-hop access networks with multiple gateways that support medical response to disasters. We examine and implement protocols to ensure high-bandwidth, robust, self-healing and secure wireless multi-hop access networks for extreme conditions. Address management, path setup, gateway discovery and selection protocols are described. Future directions and plans are also considered. PMID:16779171
TTCN-3 Based Conformance Testing of Mobile Broadcast Business Management System in 3G Networks
NASA Astrophysics Data System (ADS)
Wang, Zhiliang; Yin, Xia; Xiang, Yang; Zhu, Ruiping; Gao, Shirui; Wu, Xin; Liu, Shijian; Gao, Song; Zhou, Li; Li, Peng
Mobile broadcast service is one of the most important emerging services in 3G networks. To better operate and manage mobile broadcast services, a mobile broadcast business management system (MBBMS) should be designed and developed. Such a system, with its distributed nature, complicated XML data and security mechanisms, faces many challenges in testing. In this paper, we study the conformance testing methodology of MBBMS, and design and implement an MBBMS protocol conformance testing tool based on TTCN-3, a standardized test description language that can be used in black-box testing of reactive and distributed systems. In this methodology and testing tool, we present a semi-automatic XML test data generation method for the TTCN-3 test suite and use the HMSC model to aid the design of the test suite. In addition, we propose an integrated testing method for the hierarchical MBBMS security architecture. The testing tool has been used in industrial-level testing.
Managing RFID Sensors Networks with a General Purpose RFID Middleware
Abad, Ismael; Cerrada, Carlos; Cerrada, Jose A.; Heradio, Rubén; Valero, Enrique
2012-01-01
RFID middleware is anticipated to be one of the main research areas in the field of RFID applications in the near future. The Data EPC Acquisition System (DEPCAS) is an original proposal designed by our group to transfer and apply fundamental ideas from Supervisory Control and Data Acquisition (SCADA) systems to the areas of RFID acquisition, processing and distribution systems. In this paper we focus on how to organize and manage generic RFID sensors (edge readers, readers, PLCs, etc.) inside the DEPCAS middleware. We denote by RFID Sensors Networks Management (RSNM) this part of DEPCAS, which is built on top of two new concepts introduced and developed in this work: MARC (Minimum Access Reader Command) and RRTL (RFID Reader Topology Language). MARC is an abstraction layer used to hide heterogeneous devices inside a homogeneous acquisition network. RRTL is a language to define RFID reader networks and to describe the relationships between them (concentrator, peer-to-peer, master/submaster). PMID:22969370
Description of a MIL-STD-1553B Data Bus Ada Driver for the LeRC EPS Testbed
NASA Technical Reports Server (NTRS)
Mackin, Michael A.
1995-01-01
This document describes the software designed to provide communication between control computers in the NASA Lewis Research Center Electrical Power System Testbed using MIL-STD-1553B. The software drivers are coded in the Ada programming language and were developed on an MS-DOS-based computer workstation. The Electrical Power System (EPS) Testbed is a reduced-scale prototype space station electrical power system. The power system manages and distributes electrical power from the sources (batteries or photovoltaic arrays) to the end-user loads. The primary electrical system operates at 120 volts DC, and the secondary system operates at 28 volts DC. The devices which direct the flow of electrical power are controlled by a network of six control computers. Data and control messages are passed between the computers over the MIL-STD-1553B network. One of the computers, the Power Management Controller (PMC), controls the primary power distribution and another, the Load Management Controller (LMC), controls the secondary power distribution. Each of these computers communicates with two other computers which act as subsidiary controllers. These subsidiary controllers are, in turn, connected to the devices which directly control the flow of electrical power.
Adaptive Management of Computing and Network Resources for Spacecraft Systems
NASA Technical Reports Server (NTRS)
Pfarr, Barbara; Welch, Lonnie R.; Detter, Ryan; Tjaden, Brett; Huh, Eui-Nam; Szczur, Martha R. (Technical Monitor)
2000-01-01
It is likely that NASA's future spacecraft systems will consist of distributed processes which will handle dynamically varying workloads in response to perceived scientific events, the spacecraft environment, spacecraft anomalies and user commands. Since all situations and possible uses of sensors cannot be anticipated during pre-deployment phases, an approach for dynamically adapting the allocation of distributed computational and communication resources is needed. To address this, we are evolving the DeSiDeRaTa adaptive resource management approach to enable reconfigurable ground and space information systems. The DeSiDeRaTa approach embodies a set of middleware mechanisms for adapting resource allocations, and a framework for reasoning about the real-time performance of distributed application systems. The framework and middleware will be extended to accommodate (1) the dynamic aspects of intra-constellation network topologies, and (2) the complete real-time path from the instrument to the user. We are developing a ground-based testbed that will enable NASA to perform early evaluation of adaptive resource management techniques without the expense of first deploying them in space. The benefits of the proposed effort are numerous, including the ability to use sensors in new ways not anticipated at design time; the production of information technology that ties the sensor web together; the accommodation of greater numbers of missions with fewer resources; and the opportunity to leverage the DeSiDeRaTa project's expertise, infrastructure and models for adaptive resource management for distributed real-time systems.
NASA Technical Reports Server (NTRS)
Brunstrom, Anna; Leutenegger, Scott T.; Simha, Rahul
1995-01-01
Traditionally, allocation of data in distributed database management systems has been determined by off-line analysis and optimization. This technique works well for static database access patterns but is often inadequate for frequently changing workloads. In this paper we address how to dynamically reallocate data for partitionable distributed databases with changing access patterns. Rather than complicated and expensive optimization algorithms, a simple heuristic is presented and shown, via an implementation study, to improve system throughput by 30 percent in a local area network based system. Based on artificial wide area network delays, we show that dynamic reallocation can improve system throughput by a factor of two and a half for wide area networks. We also show that individual site load must be taken into consideration when reallocating data, and provide a simple policy that incorporates load in the reallocation decision.
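A minimal illustration of a load-aware reallocation policy in the spirit described above (not the paper's heuristic; the scoring weights, threshold and function names are assumptions):

```python
def choose_new_site(fragment_access_counts, site_load, current_site,
                    load_penalty=0.5, improvement_threshold=1.2):
    """Illustrative reallocation policy: move a data fragment toward the site that
    accesses it most, discounted by how loaded that site already is, and only
    migrate when the candidate clearly beats staying put."""
    best_site, best_score = current_site, 0.0
    for site, accesses in fragment_access_counts.items():
        score = accesses - load_penalty * site_load.get(site, 0.0)
        if score > best_score:
            best_site, best_score = site, score
    current_score = (fragment_access_counts.get(current_site, 0)
                     - load_penalty * site_load.get(current_site, 0.0))
    if current_score > 0 and best_score < improvement_threshold * current_score:
        return current_site          # gain too small: avoid needless migration
    return best_site

print(choose_new_site({"A": 120, "B": 300, "C": 40},
                      {"A": 10, "B": 500, "C": 5}, current_site="A"))
```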
NASA Astrophysics Data System (ADS)
Moneta, Diana; Mora, Paolo; Viganò, Giacomo; Alimonti, Gianluca
2014-12-01
The diffusion of Distributed Generation (DG) based on Renewable Energy Sources (RES) requires new strategies to ensure reliable and economic operation of the distribution networks and to support the diffusion of DG itself. An advanced algorithm (DISCoVER - DIStribution Company VoltagE Regulator) is being developed to optimize the operation of active networks by means of advanced voltage control based on several regulations. Starting from forecasted load and generation, real on-field measurements, technical constraints and costs for each resource, the algorithm generates, for each time period, a set of commands for controllable resources that guarantees achievement of the technical goals while minimizing the overall cost. Before integrating the controller into the telecontrol system of real networks, and in order to validate the proper behaviour of the algorithm and to identify possible critical conditions, a complete simulation phase has started. The first step concerns the definition of a wide range of "case studies", that is, combinations of network topology, technical constraints and targets, load and generation profiles, and "costs" of resources that define a valid context in which to test the algorithm, with particular focus on battery and RES management. First results achieved from the simulation activity on test networks (based on real MV grids) and actual battery characteristics are given, together with prospective performance on real case applications.
A Distributed Data Base Version of INGRES.
ERIC Educational Resources Information Center
Stonebraker, Michael; Neuhold, Eric
Extensions are required to the currently operational INGRES data base system for it to manage a data base distributed over multiple machines in a computer network running the UNIX operating system. Three possible user views include: (1) each relation in a unique machine, (2) a user interaction with the data base which can only span relations at a…
[Research on Zhejiang blood information network and management system].
Yan, Li-Xing; Xu, Yan; Meng, Zhong-Hua; Kong, Chang-Hong; Wang, Jian-Min; Jin, Zhen-Liang; Wu, Shi-Ding; Chen, Chang-Shui; Luo, Ling-Fei
2007-02-01
This research aimed to develop the first provincial-level centralized blood information database and real-time communication network in China. Multiple technologies were used, including separate operation of local area network databases, a real-time data concentration and distribution mechanism, off-site backup, and an optical fiber virtual private network (VPN). As a result, the centralized blood information database and management system were successfully constructed, covering all of Zhejiang province, and real-time exchange of blood data was realized. In conclusion, its implementation promotes volunteer blood donation and ensures blood safety in Zhejiang, and in particular strengthens the quick response to public health emergencies. This project lays the foundation for centralized testing and allotment among blood banks in Zhejiang, and can serve as a reference for contemporary blood bank information systems in China.
Network interface unit design options performance analysis
NASA Technical Reports Server (NTRS)
Miller, Frank W.
1991-01-01
An analysis is presented of three design options for the Space Station Freedom (SSF) onboard Data Management System (DMS) Network Interface Unit (NIU). The NIU provides the interface from the Fiber Distributed Data Interface (FDDI) local area network (LAN) to the DMS processing elements. The FDDI LAN provides the primary means for command and control and low and medium rate telemetry data transfers on board the SSF. The results of this analysis provide the basis for the implementation of the NIU.
Extremely high data-rate, reliable network systems research
NASA Technical Reports Server (NTRS)
Foudriat, E. C.; Maly, Kurt J.; Mukkamala, R.; Murray, Nicholas D.; Overstreet, C. Michael
1990-01-01
Significant progress was made over the year in the four focus areas of this research group: gigabit protocols, extensions of metropolitan protocols, parallel protocols, and distributed simulations. Two activities, a network management tool and the Carrier Sensed Multiple Access Collision Detection (CSMA/CD) protocol, have developed to the point that a patent is being applied for in the next year; a tool set for distributed simulation using the language SIMSCRIPT also has commercial potential and is to be further refined. The year's results for each of these areas are summarized and next year's activities are described.
Castillo-Cagigal, Manuel; Matallanas, Eduardo; Gutiérrez, Alvaro; Monasterio-Huelin, Félix; Caamaño-Martín, Estefaná; Masa-Bote, Daniel; Jiménez-Leube, Javier
2011-01-01
In this paper we present a heterogeneous collaborative sensor network for electrical management in the residential sector. Improving demand-side management is very important in distributed energy generation applications. Sensing and control are the foundations of the "Smart Grid" which is the future of large-scale energy management. The system presented in this paper has been developed on a self-sufficient solar house called "MagicBox" equipped with grid connection, PV generation, lead-acid batteries, controllable appliances and smart metering. Therefore, there is a large number of energy variables to be monitored that allow us to precisely manage the energy performance of the house by means of collaborative sensors. The experimental results, performed on a real house, demonstrate the feasibility of the proposed collaborative system to reduce the consumption of electrical power and to increase energy efficiency.
NASA Astrophysics Data System (ADS)
Benson, R. B.; Ahern, T. K.; Trabant, C.
2006-12-01
The IRIS Data Management System has long supported international collaboration for seismology by both deploying a global network of seismometers and creating and maintaining an open and accessible archive in Seattle, WA, known as the Data Management Center (DMC). With sensors distributed on a global scale spanning more than 30 years of digital data, the DMC provides a rich repository of observations across broad time and space domains. Primary seismological data types include strong motion and broadband seismometers, conventional and superconducting gravimeters, tilt and creep meters, GPS measurements, along with other similar sensors that record accurate and calibrated ground motion. What may not be as well understood is the volume of environmental data that accompanies typical seismological data these days. This poster will review the types of time-series data that are currently being collected, how they are collected, and made freely available for download at the IRIS DMC. Environmental sensor data that is often co-located with geophysical data sensors include temperature, barometric pressure, wind direction and speed, humidity, insolation, rain gauge, and sometimes hydrological data like water current, level, temperature and depth. As the primary archival institution of the International Federation of Digital Seismograph Networks (FDSN), the IRIS DMC collects approximately 13,600 channels of real-time data from 69 different networks, from close to 1600 individual stations, currently averaging 10Tb per year in total. A major contribution to the IRIS archive currently is the EarthScope project data, a ten-year science undertaking that is collecting data from a high-resolution, multi-variate sensor network. Data types include magnetotelluric, high-sample rate seismics from a borehole drilled into the San Andreas fault (SAFOD) and various types of strain data from the Plate Boundary Observatory (PBO). In addition to the DMC, data centers located in other countries are networked seamlessly, and are providing access for researchers to these data from national networks around the world utilizing the IRIS developed Data Handling Interface (DHI) system. This poster will highlight some of the DHI enabled clients that allow geophysical information to be directly transferred to the clients. This ability allows one to construct a virtual network of data centers providing the illusion of a single virtual observatory. Furthermore, some of the features that will be shown include direct connections to MATLAB and the ability to access globally distributed sensor data in real time. We encourage discussion and participation from network operators who would like to leverage existing technology, as well as enabling collaboration.
Telecommunications: Opportunities in the emerging technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schultheis, R.W.
1994-12-31
A series of slides present opportunities for Utility Telecommunications. The following aspects are covered: (1) Technology in a period of revolution; (2) Technology Management by the Energy Utility, (3) Contemporary Telecommunication Network Architectures, (4) Opportunity Management, (5) Strategic Planning for Profits and Growth, (6) Energy Industry in a period of challenge. Management topics and applications are presented in a matrix for generation, transmission, distribution, customer service and new business revenue growth subjects.
Network Policy Languages: A Survey and a New Approach
2000-08-01
go, throwing a community into economic chaos. These events could result from discontinued funding from Silicon Valley investors who became aware of...prevented. C. POLICY HIERARCHIES FOR DISTRIBUTED SYSTEMS MANAGEMENT In [23], Moffett and Sloman form a policy hierarchy by refining general high...management behavior of a system, without coding the behavior into the manager agents. Lupu and Sloman focus on techniques and tool support for off-line
Channel and Interface Management in a Heterogeneous Multi-Channel Multi-Radio Wireless Network
2009-03-01
its channel is changed, the change is advertised to neighbors. An R-interface is also used for transmitting packets that are to be sent on its...Agrawal, S. Banerjee, and S. Ganguly, "Distributed channel management in uncoordinated wireless environments," in MOBICOM, 2006, pp. 170–181. [18] W. Wang
1981-06-01
independently on the same network. Given this reduction in scale, the projected implementation of a widely distributed, fully replicated, synchronized database design...Manager that "owns" other resources. This strategy requires minimum synchronization while providing advantages in reliability and robustness. ...interactive tools on TENEX, transparent file motion and translation, and a primitive set of project management functions. This demonstration confirmed that
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katipamula, Srinivas; Gowri, Krishnan; Hernandez, George
This paper describes one such reference process that can be deployed to provide continuous, automated, condition-based maintenance management for buildings that have BIM, a building automation system (BAS) and a computerized maintenance management system (CMMS). The process can be deployed using an open source transactional network platform, VOLTTRON™, designed for distributed sensing and controls, which supports both energy efficiency and grid services.
Game theoretic sensor management for target tracking
NASA Astrophysics Data System (ADS)
Shen, Dan; Chen, Genshe; Blasch, Erik; Pham, Khanh; Douville, Philip; Yang, Chun; Kadar, Ivan
2010-04-01
This paper develops and evaluates a game-theoretic approach to distributed sensor-network management for target tracking via sensor-based negotiation. We present a distributed sensor-based negotiation game model of sensor management for multi-sensor multi-target tracking situations. In our negotiation framework, each negotiation agent represents a sensor, and each sensor maximizes its utility using a game approach. The greediness of each sensor is limited by the fact that the sensor-to-target assignment efficiency decreases if too many sensor resources are assigned to the same target. This is similar to market behavior in the real world, such as agreements between buyers and sellers in an auction market. Sensors are willing to switch targets so that they obtain their highest utility and apply their resources in the most efficient way. Our subgame perfect equilibrium-based negotiation strategies assign sensors to targets dynamically and in a distributed manner. Numerical simulations are performed to demonstrate our sensor-based negotiation approach for distributed sensor management.
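The negotiation idea can be sketched as follows (an illustrative toy, not the paper's game model; the crowding penalty and stopping rule are assumptions): each sensor repeatedly picks the target with the best marginal utility, discounted as more sensors crowd onto the same target.

```python
def negotiate_assignments(utilities, capacity_penalty=0.3, rounds=10):
    """Illustrative sensor-to-target negotiation with diminishing per-target utility.

    utilities: dict sensor -> dict target -> base utility of covering that target
    """
    assignment = {s: None for s in utilities}
    for _ in range(rounds):
        changed = False
        for sensor, prefs in utilities.items():
            # count how many other sensors currently cover each target
            crowd = {}
            for s, t in assignment.items():
                if s != sensor and t is not None:
                    crowd[t] = crowd.get(t, 0) + 1
            best = max(prefs, key=lambda t: prefs[t] - capacity_penalty * crowd.get(t, 0))
            if best != assignment[sensor]:
                assignment[sensor] = best
                changed = True
        if not changed:          # no sensor wants to switch: a stable assignment
            break
    return assignment

print(negotiate_assignments({"s1": {"t1": 1.0, "t2": 0.8},
                             "s2": {"t1": 0.9, "t2": 0.7},
                             "s3": {"t1": 0.9, "t2": 0.85}}))
```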
NASA Astrophysics Data System (ADS)
Izadi, Arman; Kimiagari, Ali Mohammad
2014-01-01
Distribution network design, as a strategic decision, has a long-term effect on tactical and operational supply chain management. In this research, the location-allocation problem is studied under demand uncertainty. The purposes of this study were to specify the optimal number and location of distribution centers and to determine the allocation of customer demands to distribution centers. The main feature of this research is solving the model with an unknown demand function, which suits real-world problems. To consider the uncertainty, a set of possible scenarios for customer demands is created based on Monte Carlo simulation. The coefficient of variation of costs is used as a measure of risk, and the most stable structure for the firm's distribution network is defined based on the concept of robust optimization. The best structure is identified using genetic algorithms, and a 14% reduction in total supply chain costs is the outcome. Moreover, it imposes the least cost variation created by fluctuations in customer demands (such as an epidemic disease outbreak in some areas of the country) on the logistical system. It is noteworthy that this research was done in one of the largest pharmaceutical distribution firms in Iran.
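A brief sketch of the scenario-based robustness evaluation described above (assumptions: normally distributed demand perturbations and a user-supplied network cost function; neither is specified in the abstract):

```python
import random, statistics

def generate_demand_scenarios(base_demand, n_scenarios=1000, sigma_frac=0.2, seed=0):
    """Monte Carlo demand scenarios: perturb each customer's demand around its
    base value (normal noise is an assumption, not the paper's distribution)."""
    rng = random.Random(seed)
    return [{c: max(0.0, rng.gauss(d, sigma_frac * d)) for c, d in base_demand.items()}
            for _ in range(n_scenarios)]

def robustness_score(network_cost_fn, scenarios):
    """Evaluate one candidate network design: cost under every scenario, then the
    coefficient of variation (std/mean) as the risk measure."""
    costs = [network_cost_fn(s) for s in scenarios]
    mean = statistics.fmean(costs)
    return mean, statistics.pstdev(costs) / mean   # (expected cost, CoV)

scenarios = generate_demand_scenarios({"c1": 100.0, "c2": 250.0})
print(robustness_score(lambda s: 5.0 * sum(s.values()) + 1000.0, scenarios))
```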
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Ravindra; Reilly, James T.; Wang, Jianhui
Deregulation of the electric utility industry, environmental concerns associated with traditional fossil fuel-based power plants, volatility of electric energy costs, Federal and State regulatory support of “green” energy, and rapid technological developments all support the growth of Distributed Energy Resources (DERs) in electric utility systems and ensure an important role for DERs in the smart grid and other aspects of modern utilities. DERs include distributed generation (DG) systems, such as renewables; controllable loads (also known as demand response); and energy storage systems. This report describes the role of aggregators of DERs in providing optimal services to distribution networks, through DER monitoring and control systems—collectively referred to as a Distributed Energy Resource Management System (DERMS)—and microgrids in various configurations.
Design of a QoS-controlled ATM-based communications system in chorus
NASA Astrophysics Data System (ADS)
Coulson, Geoff; Campbell, Andrew; Robin, Philippe; Blair, Gordon; Papathomas, Michael; Shepherd, Doug
1995-05-01
We describe the design of an application platform able to run distributed real-time and multimedia applications alongside conventional UNIX programs. The platform is embedded in a microkernel/PC environment and supported by an ATM-based, QoS-driven communications stack. In particular, we focus on resource-management aspects of the design and deal with CPU scheduling, network resource-management and memory-management issues. An architecture is presented that guarantees QoS levels of both communications and processing with varying degrees of commitment as specified by user-level QoS parameters. The architecture uses admission tests to determine whether or not new activities can be accepted and includes modules to translate user-level QoS parameters into representations usable by the scheduling, network, and memory-management subsystems.
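A minimal sketch of the admission-test idea described above (resource names, units and the reservation bookkeeping are illustrative, not the platform's API):

```python
def admit(activity, committed, capacity):
    """Accept a new activity only if its requested CPU, bandwidth and memory fit
    within the remaining capacity; reserve the resources on acceptance."""
    for resource in ("cpu", "bandwidth", "memory"):
        if committed.get(resource, 0.0) + activity[resource] > capacity[resource]:
            return False
    for resource in ("cpu", "bandwidth", "memory"):
        committed[resource] = committed.get(resource, 0.0) + activity[resource]
    return True

capacity  = {"cpu": 1.0, "bandwidth": 100e6, "memory": 512e6}
committed = {}
print(admit({"cpu": 0.3, "bandwidth": 20e6, "memory": 64e6}, committed, capacity))  # True
print(admit({"cpu": 0.9, "bandwidth": 10e6, "memory": 64e6}, committed, capacity))  # False
```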
1983-07-01
Distributed Computing Systems Impact ... on Software Quality. ... "C3I Application", "Space Systems Network", "Need for Distributed Database Management", and "Adaptive Routing". This is discussed in the last paragraph ... data reduction, buffering, encryption, and error detection and correction functions. Examples of such data streams include imagery data, video
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, S.J.Ben; Lauer, Gregory S.
Extreme-science drives the need for distributed exascale processing and communications that are carefully, yet flexibly, managed. Exponential growth of data for scientific simulations, experimental data, collaborative data analyses, remote visualization and GRID computing requirements of scientists in fields as diverse as high energy physics, climate change, genomics, fusion, synchrotron radiation, material science, medicine, and other scientific disciplines cannot be accommodated by simply applying existing transport protocols to faster pipes. Further, scientific challenges today demand diverse research teams, heightening the need for and increasing the complexity of collaboration. To address these issues within the network layer and physical layer, we have performed a number of research activities surrounding effective allocation and management of elastic optical network (EON) resources, particularly focusing on FlexGrid transponders. FlexGrid transponders support the opportunity to build Layer-1 connections at a wide range of bandwidths and to reconfigure them rapidly. The new flexibility supports complex new ways of using the physical layer that must be carefully managed and hidden from the scientist end-users. FlexGrid networks utilize flexible (or elastic) spectral bandwidths for each data link without using fixed wavelength grids. The flexibility in spectrum allocation brings many appealing features to network operations. Current networks are designed for the worst case impairments in transmission performance and the assigned spectrum is over-provisioned. In contrast, FlexGrid networks can operate with the highest spectral efficiency and minimum bandwidth for the given traffic demand while meeting the minimum quality of transmission (QoT) requirement. Two primary focuses of our research are: (1) resource and spectrum allocation (RSA) for IP traffic over EONs, and (2) RSA for cross-domain optical networks. Previous work concentrates primarily on large file transfers within a single domain. Adding support for IP traffic changes the nature of the RSA problem: instead of choosing to accept or deny each request for network support, IP traffic is inherently elastic and thus lends itself to a bandwidth maximization formulation. We developed a number of algorithms that could be easily deployed within existing and new FlexGrid networks, leading to networks that better support scientific collaboration. Cross-domain RSA research is essential to support large-scale FlexGrid networks, since configuration information is generally not shared or coordinated across domains. The results presented here are in their early stages. They are technically feasible and practical, but still require coordination among organizations and equipment owners and a higher-layer framework for managing network requests.
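As a small illustration of the spectrum-allocation step in RSA (a first-fit sketch on a single link; routing, spectrum continuity across links and modulation choice are omitted, and nothing here is taken from the project's code):

```python
def first_fit_spectrum(link_slots, demand_slots):
    """Find the first run of contiguous free frequency slots wide enough for the
    demand (spectrum contiguity constraint). Returns the starting slot index or
    None, reserving the slots on success."""
    free_run, start = 0, None
    for i, occupied in enumerate(link_slots):
        if not occupied:
            if free_run == 0:
                start = i
            free_run += 1
            if free_run == demand_slots:
                for j in range(start, start + demand_slots):
                    link_slots[j] = True      # reserve the slots
                return start
        else:
            free_run, start = 0, None
    return None

link = [False] * 16
print(first_fit_spectrum(link, 4))   # 0
print(first_fit_spectrum(link, 3))   # 4
```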
Matthew J. Macander; Tricia L. Wurtz
2007-01-01
Alaska has relatively few invasive plants, and most of them are found only along the state's limited road system. Melilotus alba, or sweetclover, is one of the most widely distributed invasives in the state. Melilotus has recently moved from roadsides to the flood plains of at least three glacial rivers. We developed a network...
2016-12-02
Quantum Computing, University of Waterloo, Waterloo ON, N2L 3G1, Canada (Dated: December 1, 2016). Continuous variable (CV) quantum key distribution (QKD)... Networking with QUantum operationally-Secure Technology for Maritime Deployment (CONQUEST). Contract Period of Performance: 2 September 2016 – 1 September... Raytheon BBN Technologies. Program Manager: Kathryn Carson, Quantum Information Processing.
The SysMan monitoring service and its management environment
NASA Astrophysics Data System (ADS)
Debski, Andrzej; Janas, Ekkehard
1996-06-01
Management of modern information systems is becoming more and more complex. There is a growing need for powerful, flexible and affordable management tools to assist system managers in maintaining such systems. It is at the same time evident that effective management should integrate network management, system management and application management in a uniform way. Object oriented OSI management architecture with its four basic modelling concepts (information, organization, communication and functional models) together with widely accepted distribution platforms such as ANSA/CORBA, constitutes a reliable and modern framework for the implementation of a management toolset. This paper focuses on the presentation of concepts and implementation results of an object oriented management toolset developed and implemented within the framework of the ESPRIT project 7026 SysMan. An overview is given of the implemented SysMan management services including the System Management Service, Monitoring Service, Network Management Service, Knowledge Service, Domain and Policy Service, and the User Interface. Special attention is paid to the Monitoring Service which incorporates the architectural key entity responsible for event management. Its architecture and building components, especially filters, are emphasized and presented in detail.
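A minimal sketch of the event-filtering idea behind the Monitoring Service (loosely in the spirit of OSI event forwarding discriminators; the field names and severity scale are assumptions, not the SysMan implementation):

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str        # managed object that emitted the event
    severity: int      # e.g. 0=info ... 3=critical
    kind: str          # event type, e.g. "linkDown", "cpuOverload"

def make_filter(min_severity=1, kinds=None, sources=None):
    """Build a discriminator-style event filter: only events matching the given
    severity threshold, kinds and sources are forwarded to subscribers."""
    def accept(ev: Event) -> bool:
        if ev.severity < min_severity:
            return False
        if kinds is not None and ev.kind not in kinds:
            return False
        if sources is not None and ev.source not in sources:
            return False
        return True
    return accept

f = make_filter(min_severity=2, kinds={"linkDown"})
print(f(Event("router7", 3, "linkDown")))   # True
print(f(Event("router7", 1, "linkDown")))   # False
```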
2008-06-01
numbers—into inventory, sales, purchasing, marketing, and similar database systems distributed throughout an enterprise (Sweeney, 2005). It can be seen as...the following: • Data sharing, both inside and outside of an enterprise. • Efficient management of massive data produced by an RFID system...matrix can be read omni-directionally and can be scaled down so that it can be affixed to small items. The DoD brokered an agreement with EAN/UCC, the
Influences of VSAT network on the economical and industrial development
NASA Astrophysics Data System (ADS)
Lancrenon, B.; Lorent, P.
1990-10-01
The adaptable, rapidly assembled and operational VSAT (very small aperture terminal) satellite network is a tool which rapidly provides the essential digital infrastructure for business communication networks in order to support and stimulate the development of modern industry. A market analysis is given for VSATs, discussing such topics as applications of the product, retail and distribution, banking and finance, and manufacturing industry. The centralized booking of the tourism transport sector is also investigated. The network, including the earth stations, the satellite, the systems aspects, and the network management, is described in detail and diagrams are provided. Some estimates of space channel cost per year are given.
Study and Application of Remote Data Moving Transmission under the Network Convergence
NASA Astrophysics Data System (ADS)
Zhiguo, Meng; Du, Zhou
Data transmission is an important problem in remote applications. Advances in network convergence have made it easier to select and use a data transmission model. An embedded system and a data management platform are the keys of the design. With a communication module, interface technology and a transceiver with independent intellectual property rights, the broadband network and the mobile network are connected seamlessly. By using the distribution system of mobile base stations to realize wireless transmission and public networks to carry the data, the remote information system breaks through geographic restrictions and realizes transmission of moving data; this approach has been fully recognized in long-distance medical care applications.
Dynamic waste management (DWM): towards an evolutionary decision-making approach.
Rojo, Gabriel; Glaus, Mathias; Laforest, Valerie; Laforest, Valérie; Bourgois, Jacques; Bourgeois, Jacques; Hausler, Robert
2013-12-01
To guarantee sustainable and dynamic waste management, the dynamic waste management (DWM) approach suggests a new evolutionary approach that maintains a constant flow towards the most favourable waste treatment processes (facilities) within a system. To that end, DWM is based on the law of conservation of energy, which allows the balancing of a network while considering the constraints of incoming (h1) and outgoing (h2) loads, as well as the distribution network (ΔH) characteristics. The developed approach relies on the identification of a prioritization index (PI) for waste generators (by analogy to h1), a global allocation index for each of the treatment processes (by analogy to h2) and a linear load-loss index (ΔH) associated with waste transport. To demonstrate the scope of DWM, we outline this approach and then present an example of its application. The case study shows that the variable monthly waste from the three considered sources is dynamically distributed, in priority, to the more favourable processes. Moreover, the reserve (stock) helps temporarily store waste in order to ease the global load of the network and favour constant feeding of the treatment processes. The DWM approach serves as a decision-making tool by evaluating new waste treatment processes, as well as their location and new means of transport for waste.
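A hedged rendering of the hydraulic analogy in LaTeX (the balance below is inferred from the description of h1, h2 and ΔH above; the symbol GAI for the global allocation index is introduced here purely for illustration and is not the paper's notation):

```latex
% Assumed Bernoulli-style balance: the "head" available at a waste generator must
% cover the head required by the treatment process plus the loss along the route.
\[
  h_1 \;=\; h_2 \;+\; \Delta H
  \qquad\Longrightarrow\qquad
  \mathrm{PI}_i \;\ge\; \mathrm{GAI}_j \;+\; \Delta H_{ij}
\]
% A generator i would then be routed to a treatment process j that remains feasible
% under this balance, preferring the largest margin PI_i - (GAI_j + \Delta H_{ij}).
```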
An IEEE 1451.1 Architecture for ISHM Applications
NASA Technical Reports Server (NTRS)
Morris, Jon A.; Turowski, Mark; Schmalzel, John L.; Figueroa, Jorge F.
2007-01-01
The IEEE 1451.1 Standard for a Smart Transducer Interface defines a common network information model for connecting and managing smart elements in control and data acquisition networks using network-capable application processors (NCAPs). The Standard is a network-neutral design model that is easily ported across operating systems and physical networks for implementing complex acquisition and control applications by simply plugging in the appropriate network level drivers. To simplify configuration and tracking of transducer and actuator details, the family of 1451 standards defines a Transducer Electronic Data Sheet (TEDS) that is associated with each physical element. The TEDS contains all of the pertinent information about the physical operations of a transducer (such as operating regions, calibration tables, and manufacturer information), which the NCAP uses to configure the system to support a specific transducer. The Integrated Systems Health Management (ISHM) group at NASA's John C. Stennis Space Center (SSC) has been developing an ISHM architecture that utilizes IEEE 1451.1 as the primary configuration and data acquisition mechanism for managing and collecting information from a network of distributed intelligent sensing elements. This work has involved collaboration with other NASA centers, universities and aerospace industries to develop IEEE 1451.1 compliant sensors and interfaces tailored to support health assessment of complex systems. This paper and presentation describe the development and implementation of an interface for the configuration, management and communication of data, information and knowledge generated by a distributed system of IEEE 1451.1 intelligent elements monitoring a rocket engine test system. In this context, an intelligent element is defined as one incorporating support for the IEEE 1451.x standards and additional ISHM functions. Our implementation supports real-time collection of both measurement data (raw ADC counts and converted engineering units) and health statistics produced by each intelligent element. The handling of configuration, calibration and health information is automated by using the TEDS in combination with other electronic data sheets extensions to convey health parameters. By integrating the IEEE 1451.1 Standard for a Smart Transducer Interface with ISHM technologies, each element within a complex system becomes a highly flexible computation engine capable of self-validation and performing other measures of the quality of information it is producing.
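To make the TEDS-driven configuration idea concrete (a simplified illustrative structure, not the binary TEDS layout defined by IEEE 1451; the fields, the linear calibration and the health extension are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class TransducerTEDS:
    """Simplified TEDS-style record: enough for an NCAP-like host to configure a
    channel and for ISHM extensions to carry health parameters."""
    manufacturer: str
    model: str
    serial_number: str
    units: str                     # engineering units of the converted reading
    min_range: float
    max_range: float
    calibration: list = field(default_factory=lambda: [0.0, 1.0])  # offset, gain
    health: dict = field(default_factory=dict)   # e.g. drift or self-validation flags

    def convert(self, raw_counts: int) -> float:
        """Apply the linear calibration to a raw ADC count."""
        offset, gain = self.calibration
        return offset + gain * raw_counts

teds = TransducerTEDS("Acme", "PT-100", "SN123", "kPa", 0.0, 7000.0,
                      calibration=[0.0, 0.214])
print(teds.convert(1024))   # converted engineering-unit value
```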
Open-source framework for power system transmission and distribution dynamics co-simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Renke; Fan, Rui; Daily, Jeff
The promise of the smart grid entails more interactions between the transmission and distribution networks, and there is an immediate need for tools to provide the comprehensive modelling and simulation required to integrate operations at both transmission and distribution levels. Existing electromagnetic transient simulators can perform simulations with integration of transmission and distribution systems, but the computational burden is high for large-scale system analysis. For transient stability analysis, currently there are only separate tools for simulating transient dynamics of the transmission and distribution systems. In this paper, we introduce an open source co-simulation framework, "Framework for Network Co-Simulation" (FNCS), together with the decoupled simulation approach that links existing transmission and distribution dynamic simulators through FNCS. FNCS is a middleware interface and framework that manages the interaction and synchronization of the transmission and distribution simulators. Preliminary testing results show the validity and capability of the proposed open-source co-simulation framework and the decoupled co-simulation methodology.
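The decoupled co-simulation loop can be sketched as follows (a time-stepped toy broker loop; the function names and exchanged quantities are placeholders, not the FNCS API):

```python
def cosimulate(transmission_step, distribution_step, exchange, t_end, dt):
    """Advance both simulators one step at a time, exchanging boundary values
    (e.g. substation voltage down, aggregate load back up) between steps."""
    t = 0.0
    boundary = {"voltage_pu": 1.0, "load_mw": 0.0}
    while t < t_end:
        boundary["voltage_pu"] = transmission_step(t, boundary["load_mw"])
        boundary["load_mw"]    = distribution_step(t, boundary["voltage_pu"])
        exchange(t, boundary)          # broker publishes/synchronizes the values
        t += dt
    return boundary

final = cosimulate(transmission_step=lambda t, load: 1.0 - 0.0005 * load,
                   distribution_step=lambda t, v: 40.0 * v,
                   exchange=lambda t, b: None, t_end=1.0, dt=0.25)
print(final)
```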
Lim, Han Chuen; Yoshizawa, Akio; Tsuchida, Hidemi; Kikuchi, Kazuro
2008-09-15
We present a theoretical model for the distribution of polarization-entangled photon-pairs produced via spontaneous parametric down-conversion within a local-area fiber network. This model allows an entanglement distributor who plays the role of a service provider to determine the photon-pair generation rate giving highest two-photon interference fringe visibility for any pair of users, when given user-specific parameters. Usefulness of this model is illustrated in an example and confirmed in an experiment, where polarization-entangled photon-pairs are distributed over 82 km and 132 km of dispersion-managed optical fiber. Experimentally observed visibilities and entanglement fidelities are in good agreement with theoretically predicted values.
A Spectrum Sensing Network for Cognitive PMSE Systems
NASA Astrophysics Data System (ADS)
Brendel, Johannes; Riess, Steffen; Stoeckle, Andreas; Rummel, Rafael; Fischer, Georg
2012-09-01
This article is about a Spectrum Sensing Network (SSN) which generates an accurate radio environment map (e.g. power over frequency, time, and location) from a given application area. It is intended to be used in combination with cognitive Program Making and Special Events (PMSE) devices (e.g. wireless microphones) to improve their operation reliability. The SSN consists of a distributed network of multiple scanning radio receivers and a central data management and storage unit. The parts of the SSN are presented in detail and the advantages and use cases of such a sensing network structure will be outlined.
Airame, S.; Dugan, J.E.; Lafferty, K.D.; Leslie, H.; McArdle, D.A.; Warner, R.R.
2003-01-01
Using ecological criteria as a theoretical framework, we describe the steps involved in designing a network of marine reserves for conservation and fisheries management. Although we describe the case study of the Channel Islands, the approach to marine reserve design may be effective in other regions where traditional management alone does not sustain marine resources. A group of agencies, organizations, and individuals established clear goals for marine reserves in the Channel Islands, including conservation of ecosystem biodiversity, sustainable fisheries, economic viability, natural and cultural heritage, and education. Given the constraints of risk management, experimental design, monitoring, and enforcement, scientists recommended at least one, but no more than four, reserves in each biogeographic region. In general, the percentage of an area to be included in a reserve network depends on the goals. In the Channel Islands, after consideration of both conservation goals and the risk from human threats and natural catastrophes, scientists recommended reserving an area of 30-50% of all representative habitats in each biogeographic region. For most species of concern, except pinnipeds and seabirds, information about distributions, dispersal, and population growth was limited. As an alternative to species distribution information, suitable habitats for species of concern were used to locate potential reserve sites. We used a simulated annealing algorithm to identify potential reserve network scenarios that would represent all habitats within the smallest area possible. The analysis produced an array of potential reserve network scenarios that all met the established goals.
Developing Use Cases for Evaluation of ADMS Applications to Accelerate Technology Adoption: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veda, Santosh; Wu, Hongyu; Martin, Maurice
Grid modernization for distribution systems comprises the ability to effectively monitor and manage unplanned events while ensuring reliable operations. Integration of Distributed Energy Resources (DERs) and the proliferation of autonomous smart controllers such as microgrids and smart inverters in distribution networks challenge the status quo of distribution system operations. Advanced Distribution Management System (ADMS) technologies are being increasingly deployed to manage the complexities of operating distribution systems. The ability to evaluate ADMS applications in specific utility environments and for future scenarios will accelerate wider adoption of the ADMS and will lower the risks and costs of their implementation. This paper addresses the first step - identifying and defining the use cases for evaluating these applications. The applications selected for this discussion include Volt-VAr Optimization (VVO), Fault Location Isolation and Service Restoration (FLISR), Online Power Flow (OLPF)/Distribution System State Estimation (DSSE) and Market Participation. A technical description and general operational requirements for each of these applications are presented. The test scenarios that are most relevant to the utility challenges are also addressed.
Optimal Power Scheduling for a Medium Voltage AC/DC Hybrid Distribution Network
Zhu, Zhenshan; Liu, Dichen; Liao, Qingfen; ...
2018-01-26
With the great increase of renewable generation as well as DC loads in the distribution network, DC distribution technology is receiving more attention, since a DC distribution network can improve operating efficiency and power quality by reducing the number of energy conversion stages. This paper presents a new architecture for the medium voltage AC/DC hybrid distribution network, where the AC and DC subgrids are looped by a normally closed AC soft open point (ACSOP) and DC soft open point (DCSOP), respectively. The proposed AC/DC hybrid distribution systems contain renewable generation (i.e., wind power and photovoltaic (PV) generation), energy storage systems (ESSs), soft open points (SOPs), and both AC and DC flexible demands. An energy management strategy for the hybrid system is presented based on the dynamic optimal power flow (DOPF) method. The main objective of the proposed power scheduling strategy is to minimize the operating cost and reduce the curtailment of renewable generation while meeting operational and technical constraints. The proposed approach is verified in five scenarios: a pure AC system, a hybrid AC/DC system, a hybrid system with an interlinking converter, a hybrid system with DC flexible demand, and a hybrid system with SOPs. Results show that the proposed scheduling method can successfully dispatch the controllable elements, and that the presented architecture for the AC/DC hybrid distribution system is beneficial for reducing operating cost and renewable generation curtailment.
RICIS Software Engineering 90 Symposium: Aerospace Applications and Research Directions Proceedings
NASA Technical Reports Server (NTRS)
1990-01-01
Papers presented at RICIS Software Engineering Symposium are compiled. The following subject areas are covered: synthesis - integrating product and process; Serpent - a user interface management system; prototyping distributed simulation networks; and software reuse.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-12
...) Not to exceed 3000 positions that require unique cyber security skills and knowledge to perform cyber..., distributed control systems security, cyber incident response, cyber exercise facilitation and management, cyber vulnerability detection and assessment, network and systems engineering, enterprise architecture...
Secure Large-Scale Airport Simulations Using Distributed Computational Resources
NASA Technical Reports Server (NTRS)
McDermott, William J.; Maluf, David A.; Gawdiak, Yuri; Tran, Peter; Clancy, Dan (Technical Monitor)
2001-01-01
To fully conduct research that will support the far-term concepts, technologies and methods required to improve the safety of air transportation, a simulation environment of the requisite degree of fidelity must first be in place. The Virtual National Airspace Simulation (VNAS) will provide the underlying infrastructure necessary for such a simulation system. Aerospace-specific knowledge management services, such as intelligent data-integration middleware, will support the management of information associated with this complex and critically important operational environment. This simulation environment, in conjunction with a distributed network of supercomputers and high-speed network connections to aircraft and to Federal Aviation Administration (FAA), airline and other data sources, will provide the capability to continuously monitor and measure operational performance against expected performance. The VNAS will also provide the tools to use this performance baseline to obtain a perspective of what is happening today and of the potential impact of proposed changes before they are introduced into the system.
Small Aircraft Data Distribution System
NASA Technical Reports Server (NTRS)
Chazanoff, Seth L.; Dinardo, Steven J.
2012-01-01
The CARVE Small Aircraft Data Distribution System acquires the aircraft location and attitude data that is required by the various programs running on a distributed network. This system distributes the data it acquires to the data acquisition programs for inclusion in their data files. It uses UDP (User Datagram Protocol) to broadcast data over a LAN (Local Area Network) to any programs that might have a use for the data. The program is easily adaptable to acquire additional data and log that data to disk. The current version also drives displays using precision pitch and roll information to aid the pilot in maintaining a level-level attitude for radar/radiometer mapping beyond the degree available by flying visually or using a standard gyro-driven attitude indicator. The software is designed to acquire an array of data to help the mission manager make real-time decisions as to the effectiveness of the flight. This data is displayed for the mission manager and broadcast to the other experiments on the aircraft for inclusion in their data files. The program also drives real-time precision pitch and roll displays for the pilot and copilot to aid them in maintaining the desired attitude, when required, during data acquisition on mapping lines.
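A minimal sketch of the UDP broadcast mechanism described above (the port number and message fields are illustrative, not the CARVE system's actual format):

```python
import json, socket, time

BROADCAST_ADDR = ("255.255.255.255", 5005)   # illustrative port, not from the paper

def broadcast_attitude(sock, lat, lon, alt_m, pitch_deg, roll_deg):
    """Send one UDP datagram with the aircraft state so any listener on the LAN
    (other experiments' data loggers, pilot displays) can pick it up."""
    msg = {"t": time.time(), "lat": lat, "lon": lon,
           "alt_m": alt_m, "pitch_deg": pitch_deg, "roll_deg": roll_deg}
    sock.sendto(json.dumps(msg).encode("utf-8"), BROADCAST_ADDR)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
broadcast_attitude(sock, 64.84, -147.72, 3200.0, 0.4, -1.1)
```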
Romolini, Michele; Morgan Grove, J; Ventriss, Curtis L; Koliba, Christopher J; Krymkowski, Daniel H
2016-08-01
Efforts to create more sustainable cities are evident in the proliferation of sustainability policies in cities worldwide. It has become widely proposed that the success of these urban sustainability initiatives will require city agencies to partner with, and even cede authority to, organizations from other sectors and levels of government. Yet the resulting collaborative networks are often poorly understood, and the study of large whole networks has been a challenge for researchers. We believe that a better understanding of citywide environmental governance networks can inform evaluations of their effectiveness, thus contributing to improved environmental management. Through two citywide surveys in Baltimore and Seattle, we collected data on the attributes of environmental stewardship organizations and their network relationships. We applied missing data treatment approaches and conducted social network and comparative analyses to examine (a) the organizational composition of the network, and (b) how information and knowledge are shared throughout the network. Findings revealed similarities in the number of actors and their distribution across sectors, but considerable variation in the types and locations of environmental stewardship activities, and in the number and distribution of network ties in the networks of each city. We discuss the results and potential implications of network research for urban sustainability governance.
NASA Astrophysics Data System (ADS)
Marcus, Kelvin
2014-06-01
The U.S. Army Research Laboratory (ARL) has built a "Network Science Research Lab" to support research that aims to improve their ability to analyze, predict, design, and govern complex systems that interweave the social/cognitive, information, and communication network genres. Researchers at ARL and the Network Science Collaborative Technology Alliance (NS-CTA), a collaborative research alliance funded by ARL, conducted experimentation to determine if automated network monitoring tools and task-aware agents deployed within an emulated tactical wireless network could potentially increase the retrieval of relevant data from heterogeneous distributed information nodes. ARL and NS-CTA required the capability to perform this experimentation over clusters of heterogeneous nodes with emulated wireless tactical networks where each node could contain different operating systems, application sets, and physical hardware attributes. Researchers utilized the Dynamically Allocated Virtual Clustering Management System (DAVC) to address each of the infrastructure support requirements necessary in conducting their experimentation. The DAVC is an experimentation infrastructure that provides the means to dynamically create, deploy, and manage virtual clusters of heterogeneous nodes within a cloud computing environment based upon resource utilization such as CPU load, available RAM and hard disk space. The DAVC uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex private networks. Clusters created by the DAVC system can be utilized for software development, experimentation, and integration with existing hardware and software. The goal of this paper is to explore how ARL and the NS-CTA leveraged the DAVC to create, deploy and manage multiple experimentation clusters to support their experimentation goals.
Gonzalez, Dennis; Tjandraatmadja, Grace; Barry, Karen; Vanderzalm, Joanne; Kaksonen, Anna H; Dillon, Peter; Puzon, Geoff J; Sidhu, Jatinder; Wylie, Jason; Goodman, Nigel; Low, Jason
2016-11-15
The injection of stormwater into aquifers for storage and recovery during high water demand periods is a promising technology for augmenting conventional water reserves. Limited information exists regarding the potential impact of aquifer-treated stormwater on distribution system infrastructure. This study describes a one-year pilot distribution pipe network trial to determine the biofouling potential for cement, copper and polyvinyl chloride pipe materials exposed to stormwater stored in a limestone aquifer, compared to an identical drinking water rig. Median alkalinity (123 mg/L) and colour (12 HU) in stormwater were significantly higher than in drinking water (82 mg/L and 1 HU), and pipe discolouration was more evident for stormwater samples. X-ray diffraction and fluorescence analyses confirmed this was driven by the presence of iron-rich amorphous compounds in more thickly deposited sediments, also consistent with significantly higher median levels of iron (∼0.56 mg/L) in stormwater compared to drinking water (∼0.17 mg/L). Water type did not influence biofilm development as determined by microbial density, but faecal indicators were significantly higher for polyvinyl chloride and cement exposed to stormwater. Treatment to remove iron through aeration and filtration would reduce the potential for sediment accumulation. Operational and verification monitoring parameters to manage scaling, corrosion, colour, turbidity and microbial growth in recycled stormwater distribution networks are discussed.
NASA Astrophysics Data System (ADS)
Nourifar, Raheleh; Mahdavi, Iraj; Mahdavi-Amiri, Nezam; Paydar, Mohammad Mahdi
2017-09-01
Decentralized supply chain management is found to be significantly relevant in today's competitive markets. Production and distribution planning is posed as an important optimization problem in supply chain networks. Here, we propose a multi-period decentralized supply chain network model under uncertainty. The imprecision related to uncertain parameters such as demand and price of the final product is represented by stochastic and fuzzy numbers. We provide a mathematical formulation of the problem as a bi-level mixed integer linear programming model. Due to the problem's complexity, a solution structure is developed that incorporates a novel heuristic algorithm based on the Kth-best algorithm, a fuzzy approach and a chance constraint approach. Ultimately, a numerical example is constructed and worked through to demonstrate the applicability of the optimization model. A sensitivity analysis is also made.
Industrial application for global quantum communication
NASA Astrophysics Data System (ADS)
Mirza, A.; Petruccione, F.
2012-09-01
In the last decade the quantum communication community has witnessed great advances in photonic quantum cryptography technology with the research, development and commercialization of automated Quantum Key Distribution (QKD) devices. These first generation devices are however bottlenecked by the achievable spatial coverage. This is due to the intrinsic absorption of the quantum particle in the communication medium. As QKD is of paramount importance in the future ICT landscape, various innovative solutions have been developed and tested to expand the spatial coverage of these networks, such as the Quantum City initiative in Durban, South Africa. To expand this further into a global QKD-secured network, recent efforts have focussed on high-altitude free-space techniques through the use of satellites. This couples the QKD-secured Metropolitan Area Networks (MANs) with secured ground-to-satellite links as access points to a global network. Such a solution, however, has critical limitations that reduce its commercial feasibility. As a parallel step to the development of satellite-based global QKD networks, we investigate the use of the commercial aircraft network as a secure transport mechanism in a global QKD network. This QKD-secured global network will provide a robust infrastructure to create, distribute and manage encryption keys between the MANs of the participating cities.
Optimizing Cluster Heads for Energy Efficiency in Large-Scale Heterogeneous Wireless Sensor Networks
Gu, Yi; Wu, Qishi; Rao, Nageswara S. V.
2010-01-01
Many complex sensor network applications require deploying a large number of inexpensive and small sensors in a vast geographical region to achieve quality through quantity. Hierarchical clustering is generally considered as an efficient and scalable way to facilitate the management and operation of such large-scale networks and minimize the total energy consumption for prolonged lifetime. Judicious selection of cluster heads for data integration and communication is critical to the success of applications based on hierarchical sensor networks organized as layered clusters. We investigate the problem of selecting sensor nodes in a predeployed sensor network to be the cluster heads to minimize the total energy needed for data gathering. We rigorously derive an analytical formula to optimize the number of cluster heads in sensor networks under uniform node distribution, and propose a Distance-based Crowdedness Clustering algorithm to determine the cluster heads in sensor networks under general node distribution. The results from an extensive set of experiments on a large number of simulated sensor networks illustrate the performance superiority of the proposed solution over the clustering schemes based on the k-means algorithm.
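To give a sense of what such an analytical optimum looks like, a widely cited result of the same type (from LEACH-style first-order radio energy models, not the formula derived in this paper) for N nodes uniformly distributed over an M x M field is shown below.

```latex
% Illustrative only: optimal number of cluster heads under a first-order radio
% energy model (Heinzelman et al.-style analysis), not this paper's derivation.
k_{\mathrm{opt}} \;=\; \sqrt{\frac{N}{2\pi}}\;
  \sqrt{\frac{\varepsilon_{\mathrm{fs}}}{\varepsilon_{\mathrm{mp}}}}\;
  \frac{M}{d_{\mathrm{toBS}}^{2}}
```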
Chip-set for quality of service support in passive optical networks
NASA Astrophysics Data System (ADS)
Ringoot, Edwin; Hoebeke, Rudy; Slabbinck, B. Hans; Verhaert, Michel
1998-10-01
In this paper the design of a chip-set for QoS provisioning in ATM-based Passive Optical Networks is discussed. The implementation of a general-purpose switch chip on the Optical Network Unit is presented, with focus on the design of the cell scheduling and buffer management logic. The cell scheduling logic supports 'colored' grants, priority jumping and weighted round-robin scheduling. The switch chip offers powerful buffer management capabilities enabling the efficient support of GFR and UBR services. Multicast forwarding is also supported. In addition, the architecture of a MAC controller chip developed for a SuperPON access network is introduced. In particular, the permit scheduling logic and its implementation on the Optical Line Termination will be discussed. The chip-set enables the efficient support of services with different service requirements on the SuperPON. The permit scheduling logic built into the MAC controller chip in combination with the cell scheduling and buffer management capabilities of the switch chip can be used by network operators to offer guaranteed service performance to delay sensitive services, and to efficiently and fairly distribute any spare capacity to delay insensitive services.
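A minimal software sketch of weighted round-robin scheduling, the discipline named above, to make the idea concrete. The queue names, weights, and cell labels are illustrative assumptions and bear no relation to the chip-set's actual configuration; the hardware additionally supports colored grants and priority jumping, which are not modelled here.

```python
# Weighted round-robin sketch: each queue is served up to its weight per round.
from collections import deque

def weighted_round_robin(queues, weights):
    """Yield (queue_name, cell) pairs, serving up to weights[name] cells per round."""
    while any(queues.values()):
        for name, q in queues.items():
            for _ in range(weights[name]):
                if q:
                    yield name, q.popleft()

queues = {"delay_sensitive": deque(["c1", "c2", "c3"]),
          "best_effort":     deque(["b1", "b2", "b3", "b4"])}
weights = {"delay_sensitive": 3, "best_effort": 1}

for qname, cell in weighted_round_robin(queues, weights):
    print(qname, cell)
```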
Neel, Maile; Tumas, Hayley R; Marsden, Brittany W
2014-01-01
We apply a comprehensive suite of graph theoretic metrics to illustrate how landscape connectivity can be effectively incorporated into conservation status assessments and in setting conservation objectives. These metrics allow conservation practitioners to evaluate and quantify connectivity in terms of representation, resiliency, and redundancy and the approach can be applied in spite of incomplete knowledge of species-specific biology and dispersal processes. We demonstrate utility of the graph metrics by evaluating changes in distribution and connectivity that would result from implementing two conservation plans for three endangered plant species (Erigeron parishii, Acanthoscyphus parishii var. goodmaniana, and Eriogonum ovalifolium var. vineum) relative to connectivity under current conditions. Although distributions of the species differ from one another in terms of extent and specific location of occupied patches within the study landscape, the spatial scale of potential connectivity in existing networks was strikingly similar for Erigeron and Eriogonum, but differed for Acanthoscyphus. Specifically, patches of the first two species were more regularly distributed whereas subsets of patches of Acanthoscyphus were clustered into more isolated components. Reserves based on US Fish and Wildlife Service critical habitat designation would not greatly reduce connectivity; they include 83-91% of the extant occurrences and >92% of the areal extent of each species. Effective connectivity remains within 10% of that in the whole network for all species. A Forest Service habitat management strategy excluded up to 40% of the occupied habitat of each species, resulting in both range reductions and loss of occurrences from the central portions of each species' distribution. Overall effective network connectivity was reduced to 62-74% of the full networks. The distance at which each CHMS network first became fully connected was reduced relative to the full network in Erigeron and Acanthoscyphus due to exclusion of peripheral patches, but was slightly increased for Eriogonum. Distances at which networks were sensitive to loss of connectivity due to the presence of non-redundant connections were affected mostly for Acanthoscyphus. Of most concern was that the range of distances at which lack of redundancy yielded high risk was much greater than in the full network. Through this in-depth example evaluating connectivity using a comprehensive suite of developed graph theoretic metrics, we establish an approach as well as provide sample interpretations of subtle variations in connectivity that conservation managers can incorporate into planning.
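A small worked illustration of one of the graph-theoretic questions used above: at what inter-patch distance does a patch network first become fully connected (a single component)? The patch coordinates below are made-up values for illustration, not data for any of the study species.

```python
# Toy connectivity sketch: distance-threshold graph over habitat patches, and the
# smallest threshold at which the network forms one connected component.
import itertools, math

patches = {"A": (0.0, 0.0), "B": (1.0, 0.5), "C": (5.0, 0.2), "D": (5.5, 1.0)}

def n_components(threshold):
    parent = {p: p for p in patches}
    def find(p):                          # union-find with path halving
        while parent[p] != p:
            parent[p] = parent[parent[p]]
            p = parent[p]
        return p
    for p, q in itertools.combinations(patches, 2):
        if math.dist(patches[p], patches[q]) <= threshold:
            parent[find(p)] = find(q)
    return len({find(p) for p in patches})

dists = sorted(math.dist(patches[p], patches[q])
               for p, q in itertools.combinations(patches, 2))
full_connect = next(d for d in dists if n_components(d) == 1)
print(round(full_connect, 2))             # distance at which the network is fully connected
```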
Tumas, Hayley R.; Marsden, Brittany W.
2014-01-01
We apply a comprehensive suite of graph theoretic metrics to illustrate how landscape connectivity can be effectively incorporated into conservation status assessments and in setting conservation objectives. These metrics allow conservation practitioners to evaluate and quantify connectivity in terms of representation, resiliency, and redundancy and the approach can be applied in spite of incomplete knowledge of species-specific biology and dispersal processes. We demonstrate utility of the graph metrics by evaluating changes in distribution and connectivity that would result from implementing two conservation plans for three endangered plant species (Erigeron parishii, Acanthoscyphus parishii var. goodmaniana, and Eriogonum ovalifolium var. vineum) relative to connectivity under current conditions. Although distributions of the species differ from one another in terms of extent and specific location of occupied patches within the study landscape, the spatial scale of potential connectivity in existing networks was strikingly similar for Erigeron and Eriogonum, but differed for Acanthoscyphus. Specifically, patches of the first two species were more regularly distributed whereas subsets of patches of Acanthoscyphus were clustered into more isolated components. Reserves based on US Fish and Wildlife Service critical habitat designation would not greatly reduce connectivity; they include 83–91% of the extant occurrences and >92% of the areal extent of each species. Effective connectivity remains within 10% of that in the whole network for all species. A Forest Service habitat management strategy excluded up to 40% of the occupied habitat of each species, resulting in both range reductions and loss of occurrences from the central portions of each species’ distribution. Overall effective network connectivity was reduced to 62–74% of the full networks. The distance at which each CHMS network first became fully connected was reduced relative to the full network in Erigeron and Acanthoscyphus due to exclusion of peripheral patches, but was slightly increased for Eriogonum. Distances at which networks were sensitive to loss of connectivity due to the presence of non-redundant connections were affected mostly for Acanthoscyphus. Of most concern was that the range of distances at which lack of redundancy yielded high risk was much greater than in the full network. Through this in-depth example evaluating connectivity using a comprehensive suite of developed graph theoretic metrics, we establish an approach as well as provide sample interpretations of subtle variations in connectivity that conservation managers can incorporate into planning. PMID:25320685
Castillo-Cagigal, Manuel; Matallanas, Eduardo; Gutiérrez, Álvaro; Monasterio-Huelin, Félix; Caamaño-Martín, Estefanía; Masa-Bote, Daniel; Jiménez-Leube, Javier
2011-01-01
In this paper we present a heterogeneous collaborative sensor network for electrical management in the residential sector. Improving demand-side management is very important in distributed energy generation applications. Sensing and control are the foundations of the “Smart Grid” which is the future of large-scale energy management. The system presented in this paper has been developed on a self-sufficient solar house called “MagicBox” equipped with grid connection, PV generation, lead-acid batteries, controllable appliances and smart metering. Therefore, there is a large number of energy variables to be monitored that allow us to precisely manage the energy performance of the house by means of collaborative sensors. The experimental results, performed on a real house, demonstrate the feasibility of the proposed collaborative system to reduce the consumption of electrical power and to increase energy efficiency. PMID:22247680
Technologies for Networked Enabled Operations
NASA Technical Reports Server (NTRS)
Glass, B.; Levine, J.
2005-01-01
Current point-to-point data links will not scale to support future integration of surveillance, security, and globally-distributed air traffic data, and already hinder efficiency and capacity. While the FAA and industry focus on a transition to initial system-wide information management (SWIM) capabilities, this paper describes a set of initial studies of NAS network-enabled operations technology gaps targeted for maturity in later SWIM spirals (2015-2020 timeframe).
Effective Network Management via System-Wide Coordination and Optimization
2010-08-01
Sharma and Byers propose scalable coordination techniques for distributed network monitoring based on Bloom filters; minimizing redundant measurements is a common high-level theme between cSamp and their approach.
Use of EPANET solver to manage water distribution in Smart City
NASA Astrophysics Data System (ADS)
Antonowicz, A.; Brodziak, R.; Bylka, J.; Mazurkiewicz, J.; Wojtecki, S.; Zakrzewski, P.
2018-02-01
The paper presents a method of using the EPANET solver to support management of a water distribution system in a Smart City. The main task is to develop an application that allows remote access to the simulation model of the water distribution network developed in the EPANET environment. The application allows both single and cyclic simulations to be performed, with a specified step for changing the values of selected process variables. The architecture of the application is shown in the paper. The application supports the selection of the best device control algorithm using optimization methods. Optimization procedures are available with the following methods: brute force, SLSQP (Sequential Least SQuares Programming), and the Modified Powell Method. The article is supplemented by an example of using the developed computer tool.
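A sketch of how one of the listed methods (SLSQP) can tune a control variable against simulation output. The function `simulate_network` is a toy stand-in for a call to the EPANET engine through the application's remote interface; its cost and pressure relations, the 30 m pressure requirement, and the pump-speed bounds are illustrative assumptions only.

```python
# Illustrative SLSQP search for a pump-speed setting; simulate_network is a
# placeholder surrogate so the example runs stand-alone without EPANET.
import numpy as np
from scipy.optimize import minimize

def simulate_network(pump_speed):
    """Toy surrogate: returns (energy_cost, minimum_node_pressure)."""
    energy = 50.0 * pump_speed**2
    min_pressure = 10.0 + 35.0 * pump_speed      # rises with pump speed
    return energy, min_pressure

REQUIRED_PRESSURE = 30.0                         # assumed service level [m]

def objective(x):
    energy, _ = simulate_network(x[0])
    return energy

def pressure_constraint(x):
    _, p_min = simulate_network(x[0])
    return p_min - REQUIRED_PRESSURE             # must be >= 0

res = minimize(objective, x0=np.array([1.0]), method="SLSQP",
               bounds=[(0.1, 1.5)],
               constraints=[{"type": "ineq", "fun": pressure_constraint}])
print(res.x, res.fun)                            # ~0.571 speed, ~16.3 cost units
```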
Filtered Push: Annotating Distributed Data for Quality Control and Fitness for Use Analysis
NASA Astrophysics Data System (ADS)
Morris, P. J.; Kelly, M. A.; Lowery, D. B.; Macklin, J. A.; Morris, R. A.; Tremonte, D.; Wang, Z.
2009-12-01
The single greatest problem with the federation of scientific data is the assessment of the quality and validity of the aggregated data in the context of particular research problems, that is, its fitness for use. There are three critical data quality issues in networks of distributed natural science collections data, as in all scientific data: identifying and correcting errors, maintaining currency, and assessing fitness for use. To this end, we have designed and implemented a prototype network in the domain of natural science collections. This prototype is built over the open source Map-Reduce platform Hadoop with a network client in the open source collections management system Specify 6. We call this network “Filtered Push” as, at its core, annotations are pushed from the network edges to relevant authoritative repositories, where humans and software filter the annotations before accepting them as changes to the authoritative data. The Filtered Push software is a domain-neutral framework for originating, distributing, and analyzing record-level annotations. Network participants can subscribe to notifications arising from ontology-based analyses of new annotations or of purpose-built queries against the network's global history of annotations. Quality and fitness for use of distributed natural science collections data can be addressed with Filtered Push software by implementing a network that allows data providers and consumers to define potential errors in data, develop metrics for those errors, specify workflows to analyze distributed data to detect potential errors, and close the quality management cycle by providing a network architecture for pushing assertions about data quality, such as corrections, back to the curators of the participating data sets. Quality issues in distributed scientific data have several things in common: (1) Statements about data quality should be regarded as hypotheses about inconsistencies between perhaps several records, data sets, or practices of science. (2) Data quality problems often cannot be detected only from internal statistical correlations or logical analysis, but may need the application of defined workflows that signal illogical output. (3) Changes in scientific theory or practice over time can result in changes of what QC tests should be applied to legacy data. (4) The frequency of some classes of error in a data set may be identifiable without the ability to assert that a particular record is in error. To address these issues requires, as does science itself, framing QC hypotheses against data that may be anywhere and may arise at any time in the future. In short, QC for science data is a never-ending process. It must provide for notice to an agent (human or software) that a given dataset supports a hypothesis of inconsistency with a current scientific resource or model, or with potential generalizations of the concepts in a metadata ontology. Like quality control in general, quality control of distributed data is a repeated cyclical process. In implementing a Filtered Push network for quality control, we have a model in which the cost of performing QC indefinitely is not substantially greater than that of performing it once.
Real-Time Alpine Measurement System Using Wireless Sensor Networks
2017-01-01
Monitoring the snow pack is crucial for many stakeholders, whether for hydro-power optimization, water management or flood control. Traditional forecasting relies on regression methods, which often results in snow melt runoff predictions of low accuracy in non-average years. Existing ground-based real-time measurement systems do not cover enough physiographic variability and are mostly installed at low elevations. We present the hardware and software design of a state-of-the-art distributed Wireless Sensor Network (WSN)-based autonomous measurement system with real-time remote data transmission that gathers data of snow depth, air temperature, air relative humidity, soil moisture, soil temperature, and solar radiation in physiographically representative locations. Elevation, aspect, slope and vegetation are used to select network locations, and distribute sensors throughout a given network location, since they govern snow pack variability at various scales. Three WSNs were installed in the Sierra Nevada of Northern California throughout the North Fork of the Feather River, upstream of the Oroville dam and multiple powerhouses along the river. The WSNs gathered hydrologic variables and network health statistics throughout the 2017 water year, one of northern Sierra’s wettest years on record. These networks leverage an ultra-low-power wireless technology to interconnect their components and offer recovery features, resilience to data loss due to weather and wildlife disturbances and real-time topological visualizations of the network health. Data show considerable spatial variability of snow depth, even within a 1 km² network location. Combined with existing systems, these WSNs can better detect precipitation timing and phase, monitor sub-daily dynamics of infiltration and surface runoff during precipitation or snow melt, and inform hydro-power managers about actual ablation and end-of-season date across the landscape. PMID:29120376
Real-Time Alpine Measurement System Using Wireless Sensor Networks.
Malek, Sami A; Avanzi, Francesco; Brun-Laguna, Keoma; Maurer, Tessa; Oroza, Carlos A; Hartsough, Peter C; Watteyne, Thomas; Glaser, Steven D
2017-11-09
Monitoring the snow pack is crucial for many stakeholders, whether for hydro-power optimization, water management or flood control. Traditional forecasting relies on regression methods, which often results in snow melt runoff predictions of low accuracy in non-average years. Existing ground-based real-time measurement systems do not cover enough physiographic variability and are mostly installed at low elevations. We present the hardware and software design of a state-of-the-art distributed Wireless Sensor Network (WSN)-based autonomous measurement system with real-time remote data transmission that gathers data of snow depth, air temperature, air relative humidity, soil moisture, soil temperature, and solar radiation in physiographically representative locations. Elevation, aspect, slope and vegetation are used to select network locations, and distribute sensors throughout a given network location, since they govern snow pack variability at various scales. Three WSNs were installed in the Sierra Nevada of Northern California throughout the North Fork of the Feather River, upstream of the Oroville dam and multiple powerhouses along the river. The WSNs gathered hydrologic variables and network health statistics throughout the 2017 water year, one of northern Sierra's wettest years on record. These networks leverage an ultra-low-power wireless technology to interconnect their components and offer recovery features, resilience to data loss due to weather and wildlife disturbances and real-time topological visualizations of the network health. Data show considerable spatial variability of snow depth, even within a 1 km² network location. Combined with existing systems, these WSNs can better detect precipitation timing and phase, monitor sub-daily dynamics of infiltration and surface runoff during precipitation or snow melt, and inform hydro-power managers about actual ablation and end-of-season date across the landscape.
NSI directed to continue SPAN's functions
NASA Technical Reports Server (NTRS)
Rounds, Fred
1991-01-01
During a series of network management retreats in June and July 1990, representatives from NASA Headquarters Codes O and S agreed on networking roles and responsibilities for their respective organizations. The representatives decided that NASA Science Internet (NSI) will assume management of both the Space Physics Analysis Network (SPAN) and the NASA Science Network (NSN). SPAN is now known as the NSI/DECnet, and NSN is now known as the NSI/IP. Some management functions will be distributed between Ames Research Center (ARC) and Goddard Space Flight Center (GSFC). NSI at ARC has the lead role for requirements generation and networking engineering. Advanced Applications and the Network Information Center are being developed at GSFC. GSFC will lead the NSI User Services, but NSI at Ames will continue to provide the User Services during the transition. The transition will be made as transparent as possible for the users. DECnet service will continue, but is now directly managed by NSI at Ames. NSI will continue to work closely with routing center managers at other NASA centers, and has formed a transition team to address the change in management. An NSI/DECnet working group had also been formed as a separate engineering group within NSI to plan the transition to Phase 5, DECnet's approach to Open Systems Interconnection (OSI). Transition is not expected for a year or more due to delays in product releases. Plans to upgrade speeds in tail circuits and the backbone are underway. The proposed baseline service for new connections is up to 56 Kbps; 9.6 Kbps lines will gradually be upgraded as requirements dictate. NSI is in the process of consolidating protocol traffic, tail circuits, and the backbone. Currently NSI's backbone is fractional T1; NSI will go to full T1 service as soon as it is feasible.
Reverse logistics in the Brazilian construction industry.
Nunes, K R A; Mahler, C F; Valle, R A
2009-09-01
In Brazil most Construction and Demolition Waste (C&D waste) is not recycled. This situation is expected to change significantly, since new federal regulations oblige municipalities to create and implement sustainable C&D waste management plans which assign an important role to recycling activities. The recycling organizational network and its flows and components are fundamental to C&D waste recycling feasibility. Organizational networks, flows and components involve reverse logistics. The aim of this work is to introduce the concepts of reverse logistics and reverse distribution channel networks and to study the Brazilian C&D waste case.
NASA Astrophysics Data System (ADS)
Leon, Barbara D.; Heller, Paul R.
1987-05-01
A surveillance network is a group of multiplatform sensors cooperating to improve network performance. Network control is distributed as a measure to decrease vulnerability to enemy threat. The network may contain diverse sensor types such as radar, ESM (Electronic Support Measures), IRST (Infrared Search and Track) and E-O (Electro-Optical). Each platform may contain a single sensor or a suite of sensors. In a surveillance network it is desirable to control sensors to make the overall system more effective. This problem has come to be known as sensor management and control (SM&C). Two major facets of network performance are surveillance and survivability. In a netted environment, surveillance can be enhanced if information from all sensors is combined and sensor operating conditions are controlled to provide a synergistic effect. In contrast, when survivability is the main concern for the network, the best operating status for all sensors would be passive or off. Of course, improving survivability tends to degrade surveillance. Hence, the objective of SM&C is to optimize surveillance and survivability of the network. The large volumes of data in various formats and the required quick response time are two characteristics of this problem that make it an ideal application for Artificial Intelligence. A solution to the SM&C problem, in the form of a computer simulation, is presented in this paper. The simulation is a hybrid production system written in LISP and FORTRAN. It combines the latest conventional computer programming methods with Artificial Intelligence techniques to produce a flexible state-of-the-art tool to evaluate network performance. The event-driven simulation contains environment models coupled with an expert system. These environment models include sensor (track-while-scan and agile beam) and target models, local tracking, and system tracking. These models are used to generate the environment for the sensor management and control expert system. The expert system, driven by a forward chaining inference engine, makes decisions based on the global database. The global database contains current track and sensor information supplied by the simulation. At present, the rule base emphasizes the surveillance features with rules grouped into three main categories: maintaining and enhancing track on prioritized targets; filling coverage holes and countering jamming; and evaluating sensor status. The paper will describe the architecture used for the expert system and the reasons for selecting the chosen methods. The SM&C simulation produces a graphical representation of sensors and their associated tracks such that the benefits of the sensor management and control expert system are evident. Jammer locations are also part of the display. The paper will describe results from several scenarios that best illustrate the sensor management and control concepts.
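A generic forward-chaining sketch to illustrate the inference style described above; this is not the paper's LISP/FORTRAN system, and the facts and rules (track priorities, sensor requests) are invented stand-ins for the kind of track and sensor status held in the global database.

```python
# Minimal forward chaining: rules fire against a fact base until nothing new is added.
facts = {"track(T1, priority_high)", "sensor(R1, passive)", "jamming_detected"}

rules = [
    # (name, condition over the fact base, fact to assert when it fires)
    ("activate_radar",
     lambda f: "track(T1, priority_high)" in f and "sensor(R1, passive)" in f,
     "request(R1, active)"),
    ("counter_jamming",
     lambda f: "jamming_detected" in f,
     "request(frequency_agility)"),
]

changed = True
while changed:
    changed = False
    for name, condition, conclusion in rules:
        if condition(facts) and conclusion not in facts:
            facts.add(conclusion)
            print(f"rule {name} fired -> {conclusion}")
            changed = True
```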
A Minimax Network Flow Model for Characterizing the Impact of Slot Restrictions
NASA Technical Reports Server (NTRS)
Lee, Douglas W.; Patek, Stephen D.; Alexandrov, Natalia; Bass, Ellen J.; Kincaid, Rex K.
2010-01-01
This paper proposes a model for evaluating long-term measures to reduce congestion at airports in the National Airspace System (NAS). This model is constructed with the goal of assessing the global impacts of congestion management strategies, specifically slot restrictions. We develop the Minimax Node Throughput Problem (MINNTHRU), a multicommodity network flow model that provides insight into air traffic patterns when one minimizes the worst-case operation across all airports in a given network. MINNTHRU is thus formulated as a model where congestion arises from network topology. It reflects not market-driven airline objectives, but those of a regulatory authority seeking a distribution of air traffic beneficial to all airports, in response to congestion management measures. After discussing an algorithm for solving MINNTHRU for moderate-sized (30 nodes) and larger networks, we use this model to study the impacts of slot restrictions on the operation of an entire hub-spoke airport network. For both a small example network and a medium-sized network based on 30 airports in the NAS, we use MINNTHRU to demonstrate that increasing the severity of slot restrictions increases the traffic around unconstrained hub airports as well as the worst-case level of operation over all airports.
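One plausible reading of a minimax node-throughput model is sketched below as a linear program; it is illustrative only and not the paper's exact MINNTHRU formulation, which additionally encodes slot-restriction scenarios on a hub-spoke topology.

```latex
% Illustrative minimax multicommodity flow: route all demands b^k while keeping
% the busiest node (airport) as lightly loaded as possible.
\begin{align*}
\min_{f,\,t}\; & t\\
\text{s.t.}\;  & \sum_{k}\sum_{u:(u,v)\in E} f^{k}_{uv} \;\le\; t
                 && \forall\, v \quad\text{(throughput of node $v$)}\\
               & \sum_{u:(v,u)\in E} f^{k}_{vu} \;-\; \sum_{u:(u,v)\in E} f^{k}_{uv} \;=\; b^{k}_{v}
                 && \forall\, v,\ k \quad\text{(flow conservation)}\\
               & 0 \;\le\; f^{k}_{uv} \;\le\; c_{uv}
                 && \forall\,(u,v)\in E,\ k
\end{align*}
```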
Driving Innovation in Optical Networking
NASA Astrophysics Data System (ADS)
Colizzi, Ernesto
Over the past 30 years, network applications have changed with the advent of innovative services spanning from high-speed broadband access to mobile data communications and to video signal distribution. To support this service evolution, optical transport infrastructures have changed their role. Innovations in optical networking have not only allowed the pure "bandwidth per fiber" increase, but also the realization of highly dependable and easy-to-manage networks. This article analyzes the innovations that have characterized the optical networking solutions from different perspectives, with a specific focus on the advancements introduced by Alcatel-Lucent's research and development laboratories located in Italy. The advancements of optical networking will be explored and discussed through Alcatel-Lucent's optical products to contextualize each innovation with the market evolution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Xiaohui; Liu, Cheng; Kim, Hoe Kyoung
2011-01-01
The variation of household attributes such as income, travel distance, age, household member, and education for different residential areas may generate different market penetration rates for plug-in hybrid electric vehicle (PHEV). Residential areas with higher PHEV ownership could increase peak electric demand locally and require utilities to upgrade the electric distribution infrastructure even though the capacity of the regional power grid is under-utilized. Estimating the future PHEV ownership distribution at the residential household level can help us understand the impact of PHEV fleet on power line congestion, transformer overload and other unforeseen problems at the local residential distribution network level. It can also help utilities manage the timing of recharging demand to maximize load factors and utilization of existing distribution resources. This paper presents a multi agent-based simulation framework for 1) modeling spatial distribution of PHEV ownership at local residential household level, 2) discovering PHEV hot zones where PHEV ownership may quickly increase in the near future, and 3) estimating the impacts of the increasing PHEV ownership on the local electric distribution network with different charging strategies. In this paper, we use Knox County, TN as a case study to show the simulation results of the agent-based model (ABM) framework. However, the framework can be easily applied to other local areas in the US.
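A toy agent-based sketch of the idea: estimate per-household adoption probability from household attributes, then aggregate coincident charging load on one transformer. All coefficients, attribute distributions, and the 3.3 kW charger rating are illustrative assumptions, not values from the study.

```python
# Hypothetical PHEV-adoption and charging-load sketch (not the paper's ABM).
import math, random

random.seed(1)

def adoption_probability(income_k, commute_km):
    # Logistic model with made-up coefficients.
    z = -4.0 + 0.03 * income_k + 0.02 * commute_km
    return 1.0 / (1.0 + math.exp(-z))

def transformer_peak_kw(households, charger_kw=3.3):
    owners = sum(random.random() < adoption_probability(h["income_k"], h["commute_km"])
                 for h in households)
    return owners * charger_kw        # worst case: all owners charge at once

feeder = [{"income_k": random.gauss(80, 25), "commute_km": random.gauss(30, 10)}
          for _ in range(40)]          # 40 households on one transformer
print(f"estimated coincident charging peak: {transformer_peak_kw(feeder):.1f} kW")
```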
Distributed Cognition on the road: Using EAST to explore future road transportation systems.
Banks, Victoria A; Stanton, Neville A; Burnett, Gary; Hermawati, Setia
2018-04-01
Connected and Autonomous Vehicles (CAV) are set to revolutionise the way in which we use our transportation system. However, we do not fully understand how the integration of wireless and autonomous technology into the road transportation network affects overall network dynamism. This paper uses the theoretical principles underlying Distributed Cognition to explore the dependencies and interdependencies that exist between system agents located within the road environment, traffic management centres and other external agencies in both non-connected and connected transportation systems. This represents a significant step forward in modelling complex sociotechnical systems as it shows that the principles underlying Distributed Cognition can be applied to macro-level systems using the visual representations afforded by the Event Analysis of Systemic Teamwork (EAST) method. Copyright © 2017 Elsevier Ltd. All rights reserved.
The future of PanDA in ATLAS distributed computing
NASA Astrophysics Data System (ADS)
De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.
2015-12-01
Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.
NASA Astrophysics Data System (ADS)
Fuentes, Daniel; Pérez-Luque, Antonio J.; Bonet García, Francisco J.; Moreno-LLorca, Ricardo A.; Sánchez-Cano, Francisco M.; Suárez-Muñoz, María
2017-04-01
The Long Term Ecological Research (LTER) network aims to provide the scientific community, policy makers, and society with the knowledge and predictive understanding necessary to conserve, protect, and manage the ecosystems. LTER is organized into networks ranging from the global to national scale. At the top of the network, the International Long Term Ecological Research (ILTER) Network coordinates ecological researchers and LTER research networks at local, regional, and global scales. In Spain, the Spanish Long Term Ecological Research (LTER-Spain) network was built to foster collaboration and coordination among long-term ecological researchers and networks on a local scale. Currently composed of nine nodes, this network facilitates data exchange, documentation, and preservation, encouraging the development of cross-disciplinary work. However, most nodes have no specific information systems, tools or qualified personnel to manage their data for continued conservation and there are no harmonized methodologies for long-term monitoring protocols. Hence, the main challenge is to place each node in its correct position in the network, providing the best tools that allow them to manage their data autonomously and make it easier for them to access information and knowledge in the network. This work proposes a connected structure composed of four LTER nodes located in southern Spain. The structure is built considering a hierarchical approach: nodes that create information which is documented using metadata standards (such as Ecological Metadata Language, EML); and other nodes that gather metadata and information. We also take into account the capacity of each node to manage their own data and the premise that the data and metadata must be maintained where they are generated. The current state of the nodes is as follows: two of them have their own information management system (Sierra Nevada-Granada and Doñana Long-Term Socio-ecological Research Platform) and another has no infrastructure to maintain their data (The Arid Iberian South East LTSER Platform). The last one (Environmental Information Network of Andalusia-REDIAM) acts as the coordinator, providing physical and logical support to the other nodes, and also gathers and distributes the information "uphill" to the rest of the network (LTER Europe and ILTER). The development of the network has been divided in three stages. First, existing resources and data management requirements are identified in each node. Second, the necessary software tools and interoperable standards to manage and exchange the data have been selected, installed and configured in each participant. Finally, once the network has been set up completely, it is expected to expand across Spain with new nodes and to connect to other LTER and similar networks. This research has been funded by ADAPTAMED (Protection of key ecosystem services by adaptive management of Climate Change endangered Mediterranean socioecosystems) Life EU project, Sierra Nevada Global Change Observatory (LTER-site) and eLTER (Integrated European Long Term Ecosystem & Socio-Ecological Research Infrastructure).
Smartphone technologies and Bayesian networks to assess shorebird habitat selection
Zeigler, Sara; Thieler, E. Robert; Gutierrez, Ben; Plant, Nathaniel G.; Hines, Megan K.; Fraser, James D.; Catlin, Daniel H.; Karpanty, Sarah M.
2017-01-01
Understanding patterns of habitat selection across a species’ geographic distribution can be critical for adequately managing populations and planning for habitat loss and related threats. However, studies of habitat selection can be time consuming and expensive over broad spatial scales, and a lack of standardized monitoring targets or methods can impede the generalization of site-based studies. Our objective was to collaborate with natural resource managers to define available nesting habitat for piping plovers (Charadrius melodus) throughout their U.S. Atlantic coast distribution from Maine to North Carolina, with a goal of providing science that could inform habitat management in response to sea-level rise. We characterized a data collection and analysis approach as being effective if it provided low-cost collection of standardized habitat-selection data across the species’ breeding range within 1–2 nesting seasons and accurate nesting location predictions. In the method developed, >30 managers and conservation practitioners from government agencies and private organizations used a smartphone application, “iPlover,” to collect data on landcover characteristics at piping plover nest locations and random points on 83 beaches and barrier islands in 2014 and 2015. We analyzed these data with a Bayesian network that predicted the probability a specific combination of landcover variables would be associated with a nesting site. Although we focused on a shorebird, our approach can be modified for other taxa. Results showed that the Bayesian network performed well in predicting habitat availability and confirmed predicted habitat preferences across the Atlantic coast breeding range of the piping plover. We used the Bayesian network to map areas with a high probability of containing nesting habitat on the Rockaway Peninsula in New York, USA, as an example application. Our approach facilitated the collation of evidence-based information on habitat selection from many locations and sources, which can be used in management and decision-making applications.
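A worked numerical illustration of the underlying inference: given how often a landcover combination is seen at nest sites versus random points, Bayes' rule yields the probability that a cell with that combination is nesting habitat. The probabilities below are hypothetical; the study's Bayesian network uses many more variables and conditional probabilities learned from the iPlover data.

```python
# Hypothetical two-outcome Bayes' rule calculation, not the study's fitted network.
p_nest = 0.2                      # prior: fraction of candidate sites used for nesting
p_cover_given_nest = 0.55         # P(landcover combination | nest site)
p_cover_given_random = 0.15       # P(same combination | random point)

posterior = (p_cover_given_nest * p_nest) / (
    p_cover_given_nest * p_nest + p_cover_given_random * (1 - p_nest))
print(f"P(nest | landcover) = {posterior:.2f}")   # -> 0.48
```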
Crist, Michele R.; Knick, Steven T.; Hanser, Steven E.
2015-09-08
The network of areas delineated in 11 Western States for prioritizing management of greater sage-grouse (Centrocercus urophasianus) represents a grand experiment in conservation biology and reserve design. We used centrality metrics from social network theory to gain insights into how this priority area network might function. The network was highly centralized. Twenty of 188 priority areas accounted for 80 percent of the total centrality scores. These priority areas, characterized by large size and a central location in the range-wide distribution, are strongholds for greater sage-grouse populations and also might function as sources. Mid-ranking priority areas may serve as stepping stones because of their location between large central and smaller peripheral priority areas. The current network design and conservation strategy has risks. The contribution of almost one-half (n = 93) of the priority areas combined for less than 1 percent of the cumulative centrality scores for the network. These priority areas individually are likely too small to support viable sage-grouse populations within their boundary. Without habitat corridors to connect small priority areas either to larger priority areas or as a clustered group within the network, their isolation could lead to loss of sage-grouse within these regions of the network.
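A toy illustration of the centrality idea (priority areas as nodes, potential connections as edges), using a simple degree-share measure; the study applied a fuller suite of social-network centrality metrics to 188 priority areas, and the node names and edges below are invented.

```python
# Degree-centrality sketch over a made-up priority-area graph.
edges = [("Core1", "Core2"), ("Core1", "Mid1"), ("Core2", "Mid1"),
         ("Mid1", "Edge1"), ("Mid1", "Edge2"), ("Core2", "Edge3")]

degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1

total = sum(degree.values())
for node, d in sorted(degree.items(), key=lambda kv: -kv[1]):
    print(f"{node:6s} degree={d}  share of total centrality={d/total:.0%}")
# A few central nodes (here Mid1 and Core2) carry most of the score, mirroring
# the highly centralized structure reported for the priority-area network.
```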
NASA Astrophysics Data System (ADS)
Schneider, J.; Klein, A.; Mannweiler, C.; Schotten, H. D.
2011-08-01
In the future, sensors will enable a large variety of new services in different domains. Important application areas are service adaptations in fixed and mobile environments, ambient assisted living, home automation, traffic management, as well as management of smart grids. All these applications will share a common property, the usage of networked sensors and actuators. To ensure an efficient deployment of such sensor-actuator networks, concepts and frameworks for managing and distributing sensor data as well as for triggering actuators need to be developed. In this paper, we present an architecture for integrating sensors and actuators into the future Internet. In our concept, all sensors and actuators are connected via gateways to the Internet, which will be used as a comprehensive transport medium. Additionally, an entity is needed for registering all sensors and actuators, and managing sensor data requests. We decided to use a hierarchical structure, comparable to the Domain Name System (DNS). This approach realizes a cost-efficient architecture that provides "plug and play" capabilities and accounts for privacy issues.
Performance evaluation of NASA/KSC CAD/CAE graphics local area network
NASA Technical Reports Server (NTRS)
Zobrist, George
1988-01-01
This study had as an objective the performance evaluation of the existing CAD/CAE graphics network at NASA/KSC. This evaluation will also aid in projecting planned expansions, such as the Space Station project, onto the existing CAD/CAE network. The objectives were achieved by collecting packet traffic on the various integrated sub-networks. This included items such as the total number of packets on the various subnetworks, source/destination of packets, percent utilization of network capacity, peak traffic rates, and packet size distribution. The NASA/KSC LAN was stressed to determine the usable bandwidth of the Ethernet network, and an average design-station workload was used to project the increased traffic on the existing network and the planned T1 link. This performance evaluation of the network will aid the NASA/KSC network managers in planning for the integration of future workload requirements into the existing network.
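A short sketch of the kind of figures such an evaluation produces: percent utilization of an Ethernet segment and a packet-size distribution computed from captured counts. The capture numbers are illustrative, and the 10 Mb/s capacity is an assumption (classic shared Ethernet of that era), not the KSC measurement.

```python
# Link-utilization and packet-size-distribution arithmetic on made-up capture data.
LINK_CAPACITY_BPS = 10_000_000            # assumed 10 Mb/s Ethernet
INTERVAL_S = 60                           # capture interval in seconds

packets = {64: 41000, 256: 9500, 1024: 3200, 1518: 1800}   # size (bytes) -> count

bits_seen = sum(size * 8 * count for size, count in packets.items())
utilization = bits_seen / (LINK_CAPACITY_BPS * INTERVAL_S)
print(f"utilization over the interval: {utilization:.1%}")  # ~14.8%

total_pkts = sum(packets.values())
for size, count in sorted(packets.items()):
    print(f"{size:5d} B : {count / total_pkts:.1%} of packets")
```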
The Feasibility of Implementing Multicommand Software Functions on a Microcomputer Network.
1979-10-01
This report presents the results of a study of design considerations for hybrid monitor systems for distributed microcomputer networks. The objective of the study was to determine the feasibility of such monitors. The study was one task on a project entitled "The Feasibility of Implementing Multicommand Software Functions on a Microcomputer Network."
Power Hardware-in-the-Loop Testing of a Smart Distribution System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mendoza Carrillo, Ismael; Breaden, Craig; Medley, Paige
This paper presents the results of the third and final phase of the National Renewable Energy Lab (NREL) INTEGRATE demonstration: Smart Distribution. For this demonstration, high penetrations of solar PV and wind energy systems were simulated in a power hardware-in-the-loop set-up using a smart distribution test feeder. Simulated and real DERs were controlled by a real-time control platform, which manages grid constraints under high clean energy deployment levels. The power HIL testing, conducted at NREL's ESIF smart power lab, demonstrated how dynamically managing DER increases the grid's hosting capacity by leveraging active network management's (ANM) safe and reliable control framework. Results are presented for how ANM's real-time monitoring, automation, and control can be used to manage multiple DERs and multiple constraints associated with high penetrations of DER on a distribution grid. The project also successfully demonstrated the importance of escalating control actions given how ANM enables operation of grid equipment closer to their actual physical limit in the presence of very high levels of intermittent DER.
Clinical results of HIS, RIS, PACS integration using data integration CASE tools
NASA Astrophysics Data System (ADS)
Taira, Ricky K.; Chan, Hing-Ming; Breant, Claudine M.; Huang, Lu J.; Valentino, Daniel J.
1995-05-01
Current infrastructure research in PACS is dominated by the development of communication networks (local area networks, teleradiology, ATM networks, etc.), multimedia display workstations, and hierarchical image storage architectures. However, limited work has been performed on developing flexible, expansible, and intelligent information processing architectures for the vast decentralized image and text data repositories prevalent in healthcare environments. Patient information is often distributed among multiple data management systems. Current large-scale efforts to integrate medical information and knowledge sources have been costly, with limited retrieval functionality. Software integration strategies to unify distributed data and knowledge sources are still lacking commercially. Systems heterogeneity (i.e., differences in hardware platforms, communication protocols, database management software, nomenclature, etc.) is at the heart of the problem and is unlikely to be standardized in the near future. In this paper, we demonstrate the use of newly available CASE (computer-aided software engineering) tools to rapidly integrate HIS, RIS, and PACS information systems. The advantages of these tools include fast development time (low-level code is generated from graphical specifications), and easy system maintenance (excellent documentation, easy to perform changes, and a centralized code repository in an object-oriented database). The CASE tools are used to develop and manage the 'middleware' in our client-mediator-server architecture for systems integration. Our architecture is scalable and can accommodate heterogeneous database and communication protocols.
Wang, Liangmin
2018-01-01
Today, the IoT integrates thousands of interconnected networks and sensing devices, e.g., vehicular networks, which are considered challenging due to their high speed and network dynamics. The goal of future vehicular networks is to improve road safety, promote commercial or infotainment products and to reduce traffic accidents. All these applications are based on the information exchange among nodes, so not only reliable data delivery but also the authenticity and credibility of the data itself are prerequisites. To cope with the aforementioned problem, trust management has emerged as a promising candidate for conducting node transaction and interaction management, which requires the cooperation of distributed mobile nodes to achieve design goals. In this paper, we propose a trust-based routing protocol, i.e., 3VSR (Three-Valued Secure Routing), which extends the widely used AODV (Ad hoc On-demand Distance Vector) routing protocol and employs the idea of a Sensing Logic-based trust model to enhance the security solution of VANET (Vehicular Ad-Hoc Network). Existing routing protocols are mostly based on key- or signature-based schemes, which of course increase computation overhead. In our proposed 3VSR, trust among entities is updated frequently by means of opinions derived from sensing logic, owing to the vehicles' random topologies. In 3VSR, the theoretical capabilities are based on the Dirichlet distribution, considering the prior and posterior uncertainty of the event in question. Also, by using trust recommendation message exchange, nodes are able to reduce computation and routing overhead. The simulation results show that the proposed scheme is secure and practical. PMID:29538314
Sohail, Muhammad; Wang, Liangmin
2018-03-14
Today, the IoT integrates thousands of interconnected networks and sensing devices, e.g., vehicular networks, which are considered challenging due to their high speed and network dynamics. The goal of future vehicular networks is to improve road safety, promote commercial or infotainment products and to reduce traffic accidents. All these applications are based on the information exchange among nodes, so not only reliable data delivery but also the authenticity and credibility of the data itself are prerequisites. To cope with the aforementioned problem, trust management has emerged as a promising candidate for conducting node transaction and interaction management, which requires the cooperation of distributed mobile nodes to achieve design goals. In this paper, we propose a trust-based routing protocol, i.e., 3VSR (Three-Valued Secure Routing), which extends the widely used AODV (Ad hoc On-demand Distance Vector) routing protocol and employs the idea of a Sensing Logic-based trust model to enhance the security solution of VANET (Vehicular Ad-Hoc Network). Existing routing protocols are mostly based on key- or signature-based schemes, which of course increase computation overhead. In our proposed 3VSR, trust among entities is updated frequently by means of opinions derived from sensing logic, owing to the vehicles' random topologies. In 3VSR, the theoretical capabilities are based on the Dirichlet distribution, considering the prior and posterior uncertainty of the event in question. Also, by using trust recommendation message exchange, nodes are able to reduce computation and routing overhead. The simulation results show that the proposed scheme is secure and practical.
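A generic subjective-logic-style sketch of the trust bookkeeping such protocols rely on: counts of positive and negative interactions give a belief/disbelief/uncertainty opinion and an expected trust value for ranking next-hop candidates. The equations below are the standard two-outcome (Beta/Dirichlet special case) form with assumed base rate and prior weight, not necessarily 3VSR's exact formulation.

```python
# Opinion from interaction counts: r positive, s negative observations.
def opinion(r, s, base_rate=0.5, prior_weight=2.0):
    total = r + s + prior_weight
    belief, disbelief, uncertainty = r / total, s / total, prior_weight / total
    expected = belief + base_rate * uncertainty   # expected trust value
    return belief, disbelief, uncertainty, expected

for node, (r, s) in {"V1": (9, 1), "V2": (2, 6), "V3": (0, 0)}.items():
    b, d, u, e = opinion(r, s)
    print(f"{node}: belief={b:.2f} disbelief={d:.2f} uncertainty={u:.2f} E[trust]={e:.2f}")
```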
Using posts to an online social network to assess fishing effort
Martin, Dustin R.; Chizinski, Christopher J.; Eskridge, Kent M.; Pope, Kevin L
2014-01-01
Fisheries management has evolved from reservoir to watershed management, creating a need to simultaneously gather information within and across interacting reservoirs. However, costs to gather information on the fishing effort on multiple reservoirs using traditional creel methodology are often prohibitive. Angler posts about reservoirs online provide a unique medium to test hypotheses on the distribution of fishing pressure. We show that the activity on an online fishing social network is related to fishing effort and can be used to facilitate management goals. We searched the Nebraska Fish and Game Association Fishing Forum for all references from April 2009 to December 2010 to 19 reservoirs that comprise the Salt Valley regional fishery in southeastern Nebraska. The number of posts was positively related to monthly fishing effort on a regional scale, and the reservoirs with the most annual posts also had the most annual fishing effort. Furthermore, this relationship held temporally. Online fishing social networks provide the potential to assess effort on larger spatial scales than currently feasible.
The CASA Dallas Fort Worth Remote Sensing Network ICT for Urban Disaster Mitigation
NASA Astrophysics Data System (ADS)
Chandrasekar, Venkatachalam; Chen, Haonan; Philips, Brenda; Seo, Dong-jun; Junyent, Francesc; Bajaj, Apoorva; Zink, Mike; Mcenery, John; Sukheswalla, Zubin; Cannon, Amy; Lyons, Eric; Westbrook, David
2013-04-01
The dual-polarization X-band radar network developed by the U.S. National Science Foundation Engineering Center for Collaborative Adaptive Sensing of the Atmosphere (CASA) has shown great advantages for observation and prediction of hazardous weather events in the lower atmosphere (1-3 km above ground level). The network operates through a scanning methodology called DCAS (distributed collaborative adaptive sensing), which is designed to focus on regions of particular interest in the atmosphere and disseminate information for decision-making to multiple end-users, such as emergency managers and policy analysts. Since spring 2012, CASA and the North Central Texas Council of Governments (NCTCOG) have embarked on the development of the Dallas Fort Worth (DFW) urban remote sensing network, including an 8-node network of dual-polarization X-band radars, in the populous DFW Metroplex (pop. 6.3 million in 2010). The main goal of the CASA DFW urban demonstration network is to protect the safety and prosperity of humans and ecosystems through research activities that include: 1) demonstrating the DCAS operation paradigm developed by CASA; 2) creating high-resolution, three-dimensional mapping of the meteorological conditions; 3) helping the local emergency managers issue impacts-based warnings and forecasts for severe wind, tornado, hail, and flash flood hazards. The products of this radar network will include single and multi-radar data, vector wind retrieval, quantitative precipitation estimation and nowcasting, and numerical weather predictions. In addition, the high spatial and temporal resolution rainfall products from CASA can serve as a reliable data input for distributed hydrological models in urban areas. This paper presents the information and communication links between the radars, rainfall product generation, the hydrologic model, and the end-user community in the Dallas Fort Worth urban network. Specific details of the Information and Communication Technologies (ICT) between the various subsystems are presented.
Mandellos, George J; Koutelakis, George V; Panagiotakopoulos, Theodor C; Koukias, Andreas M; Koukias, Mixalis N; Lymberopoulos, Dimitrios K
2008-01-01
Early and specialized pre-hospital patient treatment improves outcomes in terms of mortality and morbidity in emergency cases. This paper focuses on the design and implementation of a telemedicine system that supports diverse types of endpoints, including moving transports (MT) (ambulances, ships, planes, etc.), handheld devices, and fixed units, using diverse communication networks. The target of this telemedicine system is pre-hospital patient treatment. While vital-sign transmission takes priority over the other services provided by the telemedicine system (videoconference, remote management, voice calls, etc.), a predefined algorithm controls the provision and quality of those services. A distributed database system controlled by a central server aims to manage patient attributes, exams, and incidents handled by different Telemedicine Coordination Centers (TCC).
The Management and Security Expert (MASE)
NASA Technical Reports Server (NTRS)
Miller, Mark D.; Barr, Stanley J.; Gryphon, Coranth D.; Keegan, Jeff; Kniker, Catherine A.; Krolak, Patrick D.
1991-01-01
The Management and Security Expert (MASE) is a distributed expert system that monitors the operating systems and applications of a network. It is capable of gleaning the information provided by the different operating systems in order to optimize hardware and software performance; recognize potential hardware and/or software failure, and either repair the problem before it becomes an emergency, or notify the systems manager of the problem; and monitor applications and known security holes for indications of an intruder or virus. MASE can eradicate much of the guess work of system management.
NASA Astrophysics Data System (ADS)
Benson, R. B.
2007-05-01
The IRIS Data Management Center, located in Seattle, WA, is the largest openly accessible geophysical archive in the world, and has a unique perspective on data management and operational practices that gets the most out of your network. Networks span broad domains in time and space, from finite needs to monitor bridges and dams to national and international networks like the GSN and the FDSN that establish a baseline for global monitoring and research. The requirements that go into creating a well-tuned DMC archive treat these the same, building a collaborative network of networks that generations of users rely on and that adds value to the data. Funded by the National Science Foundation through the Division of Earth Sciences, IRIS is operated through member universities and in cooperation with the USGS, and the DMS facility is a bridge between a globally distributed collaboration of seismic networks and an equally distributed network of users that demand a high standard for data quality, completeness, and ease of access. I will describe the role that a perpetual archive has in the life cycle of data, and how hosting real-time data performs a dual role of being a hub for continuous data from approximately 59 real-time networks, and distributing these (along with other data from the 40-year library of available time-series data) to researchers, while simultaneously providing shared data back to networks in real time that benefits monitoring activities. I will describe aspects of our quality-assurance framework that are both passively and actively performed on 1100 seismic stations, generating over 6,000 channels of regularly sampled data arriving daily, that data providers can use as aids in operating their network, and users can likewise use when requesting suitable data for research purposes. The goal of the DMC is to eliminate bottlenecks in data discovery and to shorten the steps leading to analysis. This involves many challenges, including keeping metadata current, tools for evaluating and viewing them, along with measuring and creating databases of other performance metrics and how monitoring them closer to real time helps reduce operation costs, creates a richer repository, and eliminates problems over generations of duty cycles of data usage. I will describe a new resource, called the Nominal Response Library, which hopes to provide accurate and representative examples of sensor and data logger configurations that are hosted at the DMC and constitute a high-graded subset for crafting your own metadata. Finally, I want to encourage all network operators who do not currently submit SEED format data to an archive to consider these benefits, and briefly discuss how robust transfer mechanisms that include Earthworm, LISS, Antelope, NRTS and SeisComp, to name a few, can assist you in contributing your network data and help create this enabling virtual network of networks. In this era of high performance Internet capacity, the process that enables others to share your data and allows you to utilize external sources of data is nearly seamless with your current mission of network operation.
Strategic rehabilitation planning of piped water networks using multi-criteria decision analysis.
Scholten, Lisa; Scheidegger, Andreas; Reichert, Peter; Maurer, Max; Lienert, Judit
2014-02-01
To overcome the difficulties of strategic asset management of water distribution networks, a pipe failure and a rehabilitation model are combined to predict the long-term performance of rehabilitation strategies. Bayesian parameter estimation is performed to calibrate the failure and replacement model based on a prior distribution inferred from three large water utilities in Switzerland. Multi-criteria decision analysis (MCDA) and scenario planning build the framework for evaluating 18 strategic rehabilitation alternatives under future uncertainty. Outcomes for three fundamental objectives (low costs, high reliability, and high intergenerational equity) are assessed. Exploitation of stochastic dominance concepts helps to identify twelve non-dominated alternatives and local sensitivity analysis of stakeholder preferences is used to rank them under four scenarios. Strategies with annual replacement of 1.5-2% of the network perform reasonably well under all scenarios. In contrast, the commonly used reactive replacement is not recommendable unless cost is the only relevant objective. Exemplified for a small Swiss water utility, this approach can readily be adapted to support strategic asset management for any utility size and based on objectives and preferences that matter to the respective decision makers. Copyright © 2013 Elsevier Ltd. All rights reserved.
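As an illustration of ranking rehabilitation alternatives against multiple objectives, the sketch below applies a simple additive value model under one scenario. The objective names, weights, and scores are hypothetical and stand in for the study's MCDA outcomes; they are not taken from the paper.

```python
# Illustrative sketch only: a simple additive value model for ranking
# rehabilitation alternatives under one scenario. Objective names,
# weights, and scores are hypothetical, not from the study.

def rank_alternatives(scores, weights):
    """scores: {alternative: {objective: value in [0, 1]}} (1 = best).
    weights: {objective: weight}, weights summing to 1."""
    totals = {
        alt: sum(weights[obj] * val for obj, val in objs.items())
        for alt, objs in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

weights = {"low_cost": 0.4, "reliability": 0.4, "equity": 0.2}
scores = {
    "reactive_replacement": {"low_cost": 0.9, "reliability": 0.3, "equity": 0.2},
    "replace_1.5pct_per_year": {"low_cost": 0.6, "reliability": 0.8, "equity": 0.7},
    "replace_2pct_per_year": {"low_cost": 0.5, "reliability": 0.9, "equity": 0.8},
}
for alt, value in rank_alternatives(scores, weights):
    print(f"{alt}: {value:.2f}")
```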
Final Report for File System Support for Burst Buffers on HPC Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, W.; Mohror, K.
Distributed burst buffers are a promising storage architecture for handling I/O workloads for exascale computing. As they are being deployed on more supercomputers, a file system that efficiently manages these burst buffers for fast I/O operations carries great consequence. Over the past year, the FSU team has undertaken several efforts to design, prototype and evaluate distributed file systems for burst buffers on HPC systems. These include MetaKV, a key-value store for metadata management of distributed burst buffers; a user-level file system with multiple backends; and a specialized file system for large datasets of deep neural networks. Our progress on these respective efforts is elaborated further in this report.
Economic, health, and environmental impacts of cyanobacteria and associated harmful algal blooms are increasingly recognized by policymakers, managers, and scientific researchers. However, spatially-distributed, long-term data on cyanobacteria blooms are largely unavailable. The ...
The future of managed care organization.
Robinson, J C
1999-01-01
This paper analyzes the transformation of the central organization in the managed care system: the multiproduct, multimarket health plan. It examines vertical disintegration, the shift from ownership to contractual linkages between plans and provider organizations, and horizontal integration--the consolidation of erstwhile indemnity carriers, Blue Cross plans, health maintenance organizations (HMOs), and specialty networks. Health care consumers differ widely in their preferences and willingness to pay for particular products and network characteristics, while providers differ widely in their willingness to adopt particular organization and financing structures. This heterogeneity creates an enduring role for health plans that are diversified into multiple networks, benefit products, distribution channels, and geographic regions. Diversification now is driving health plans toward being national, full-service corporations and away from being local, single-product organizations linked to particular providers and selling to particular consumer niches.
Markstrom, Steven L.
2012-01-01
A software program called P2S has been developed that couples the daily stream temperature simulation capabilities of the U.S. Geological Survey Stream Network Temperature model with the watershed hydrology simulation capabilities of the U.S. Geological Survey Precipitation-Runoff Modeling System. The Precipitation-Runoff Modeling System is a modular, deterministic, distributed-parameter, physical-process watershed model that simulates hydrologic response to various combinations of climate and land use. Stream Network Temperature was developed to help aquatic biologists and engineers predict the effects of changes in hydrology and energy on water temperatures. P2S will allow scientists and watershed managers to evaluate the effects of historical climate and projected climate change, landscape evolution, and resource management scenarios on watershed hydrology and in-stream water temperature.
NASA Astrophysics Data System (ADS)
Xiao, Jian; Zhang, Mingqiang; Tian, Haiping; Huang, Bo; Fu, Wenlong
2018-02-01
In this paper, a novel prognostics and health management (PHM) system architecture for hydropower plant equipment is proposed based on fog computing and Docker containers. Fog nodes are employed to improve the real-time processing ability of cloud-based prognostics and health management systems and to overcome problems such as long delay times and network congestion. Storm-based stream processing at the fog node is then presented, which can calculate the health index at the edge of the network. Moreover, a distributed micro-service and Docker container architecture for hydropower plant equipment prognostics and health management is also proposed. Using the micro-service architecture proposed in this paper, the hydropower unit can achieve business intercommunication and seamless integration across different equipment and different manufacturers. Finally, a real application case is given in this paper.
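To illustrate the idea of computing a health index at the network edge, here is a minimal sketch that tracks an exponentially weighted deviation of a sensed signal from its nominal value. The class name, nominal value, tolerance, and smoothing factor are assumptions for illustration; the paper's Storm-based pipeline is not reproduced.

```python
# Minimal sketch of an edge-side health-index calculation, assuming the
# index is an exponentially weighted deviation of a sensed signal from
# its nominal value. Names and thresholds are illustrative only.

class HealthIndex:
    def __init__(self, nominal, tolerance, alpha=0.1):
        self.nominal = nominal      # expected value of the monitored signal
        self.tolerance = tolerance  # deviation treated as fully degraded
        self.alpha = alpha          # smoothing factor for the stream
        self.ewma_dev = 0.0

    def update(self, reading):
        dev = abs(reading - self.nominal)
        self.ewma_dev = self.alpha * dev + (1 - self.alpha) * self.ewma_dev
        # 1.0 = healthy, 0.0 = fully degraded
        return max(0.0, 1.0 - self.ewma_dev / self.tolerance)

hi = HealthIndex(nominal=50.0, tolerance=5.0)
for vibration in [50.1, 50.3, 52.0, 54.5, 55.2]:
    print(round(hi.update(vibration), 3))
```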
C3PO - A Dynamic Data Placement Agent for ATLAS Distributed Data Management
NASA Astrophysics Data System (ADS)
Beermann, T.; Lassnig, M.; Barisits, M.; Serfon, C.; Garonne, V.; ATLAS Collaboration
2017-10-01
This paper introduces a new dynamic data placement agent for the ATLAS distributed data management system. This agent is designed to pre-place potentially popular data to make it more widely available. It therefore incorporates information from a variety of sources: input datasets and site workload information from the ATLAS workload management system, network metrics from different sources such as FTS and PerfSonar, historical popularity data collected through a tracer mechanism, and more. With this data it decides if, when and where to place new replicas, which can then be used by the WMS to distribute the workload more evenly over available computing resources and ultimately reduce job waiting times. This paper gives an overview of the architecture and the final implementation of this new agent. The paper also includes an evaluation of the placement algorithm by comparing the transfer times and the new replica usage.
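The sketch below shows the flavor of such a placement decision: candidate sites are scored by dataset popularity, free capacity, and a network metric, and the best-scoring site receives the replica. The weights and field names are assumptions, not the agent's actual algorithm.

```python
# Hypothetical sketch of a replica-placement decision in the spirit of a
# dynamic placement agent: score candidate sites by dataset popularity,
# current free capacity, and a network metric. Weights and fields are
# assumptions, not the actual C3PO algorithm.

def choose_site(dataset_popularity, sites, w_pop=0.5, w_load=0.3, w_net=0.2):
    """sites: list of dicts with 'name', 'free_slots' (0-1), 'net_quality' (0-1)."""
    best, best_score = None, float("-inf")
    for site in sites:
        score = (w_pop * dataset_popularity
                 + w_load * site["free_slots"]
                 + w_net * site["net_quality"])
        if score > best_score:
            best, best_score = site["name"], score
    return best, best_score

sites = [
    {"name": "SITE_A", "free_slots": 0.8, "net_quality": 0.6},
    {"name": "SITE_B", "free_slots": 0.3, "net_quality": 0.9},
]
print(choose_site(dataset_popularity=0.7, sites=sites))
```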
Karanovic, Marinko; Muffels, Christopher T.; Tonkin, Matthew J.; Hunt, Randall J.
2012-01-01
Models of environmental systems have become increasingly complex, incorporating increasingly large numbers of parameters in an effort to represent physical processes on a scale approaching that at which they occur in nature. Consequently, the inverse problem of parameter estimation (specifically, model calibration) and subsequent uncertainty analysis have become increasingly computation-intensive endeavors. Fortunately, advances in computing have made computational power equivalent to that of dozens to hundreds of desktop computers accessible through a variety of alternate means: modelers have various possibilities, ranging from traditional Local Area Networks (LANs) to cloud computing. Commonly used parameter estimation software is well suited to take advantage of the availability of such increased computing power. Unfortunately, logistical issues become increasingly important as an increasing number and variety of computers are brought to bear on the inverse problem. To facilitate efficient access to disparate computer resources, the PESTCommander program documented herein has been developed to provide a Graphical User Interface (GUI) that facilitates the management of model files ("file management") and remote launching and termination of "slave" computers across a distributed network of computers ("run management"). In version 1.0 described here, PESTCommander can access and ascertain resources across traditional Windows LANs: however, the architecture of PESTCommander has been developed with the intent that future releases will be able to access computing resources (1) via trusted domains established in Wide Area Networks (WANs) in multiple remote locations and (2) via heterogeneous networks of Windows- and Unix-based operating systems. The design of PESTCommander also makes it suitable for extension to other computational resources, such as those that are available via cloud computing. Version 1.0 of PESTCommander was developed primarily to work with the parameter estimation software PEST; the discussion presented in this report focuses on the use of the PESTCommander together with Parallel PEST. However, PESTCommander can be used with a wide variety of programs and models that require management, distribution, and cleanup of files before or after model execution. In addition to its use with the Parallel PEST program suite, discussion is also included in this report regarding the use of PESTCommander with the Global Run Manager GENIE, which was developed simultaneously with PESTCommander.
Critical behaviour in charging of electric vehicles
NASA Astrophysics Data System (ADS)
Carvalho, Rui; Buzna, Lubos; Gibbens, Richard; Kelly, Frank
2015-09-01
The increasing penetration of electric vehicles over the coming decades, taken together with the high cost to upgrade local distribution networks and consumer demand for home charging, suggest that managing congestion on low voltage networks will be a crucial component of the electric vehicle revolution and the move away from fossil fuels in transportation. Here, we model the max-flow and proportional fairness protocols for the control of congestion caused by a fleet of vehicles charging on two real-world distribution networks. We show that the system undergoes a continuous phase transition to a congested state as a function of the rate of vehicles plugging into the network to charge. We focus on the order parameter and its fluctuations close to the phase transition, and show that the critical point depends on the choice of congestion protocol. Finally, we analyse the inequality in the charging times as the vehicle arrival rate increases, and show that charging times are considerably more equitable in proportional fairness than in max-flow.
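For readers unfamiliar with proportional fairness, the sketch below solves the standard formulation (maximize the sum of log rates subject to link capacities) on a toy two-link feeder; the topology and capacities are illustrative, not the real distribution networks studied in the paper.

```python
# Sketch of proportional-fairness rate allocation on a toy two-link feeder,
# solved as maximizing sum(log(x_i)) subject to link capacities. The network
# and capacities are illustrative, not the networks used in the paper.
import numpy as np
from scipy.optimize import minimize

# Routes: vehicle 0 uses links 0 and 1; vehicle 1 uses link 0; vehicle 2 uses link 1.
A = np.array([[1, 1, 0],    # link 0
              [1, 0, 1]])   # link 1
capacity = np.array([1.0, 1.0])

def neg_log_utility(x):
    return -np.sum(np.log(x))

constraints = [{"type": "ineq", "fun": lambda x: capacity - A @ x}]
x0 = np.full(3, 0.1)
res = minimize(neg_log_utility, x0, bounds=[(1e-6, None)] * 3,
               constraints=constraints, method="SLSQP")
print(np.round(res.x, 3))   # expected roughly [0.333, 0.667, 0.667]
```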
A multi-agent intelligent environment for medical knowledge.
Vicari, Rosa M; Flores, Cecilia D; Silvestre, André M; Seixas, Louise J; Ladeira, Marcelo; Coelho, Helder
2003-03-01
AMPLIA is a multi-agent intelligent learning environment designed to support training of diagnostic reasoning and modelling of domains with complex and uncertain knowledge. AMPLIA focuses on the medical area. It is a system that deals with uncertainty under the Bayesian network approach, where the learner-modelling task consists of creating a Bayesian network for a problem the system presents. The construction of a network involves qualitative and quantitative aspects. The qualitative part concerns the network topology, that is, the causal relations among the domain variables. Once it is ready, the quantitative part is specified; it consists of the conditional probability distributions of the represented variables. A negotiation process (managed by an intelligent MediatorAgent) treats the differences in topology and probability distributions between the model the learner built and the one built into the system. That negotiation process occurs between the agent that represents the expert knowledge domain (DomainAgent) and the agent that represents the learner's knowledge (LearnerAgent).
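To make the qualitative/quantitative distinction concrete, here is a minimal sketch of a two-node network (Disease causes Symptom) with its conditional probability tables and a posterior computed by Bayes' rule. The numbers are illustrative and are not taken from AMPLIA.

```python
# Minimal sketch of the two parts of a Bayesian network for a toy
# diagnostic problem: the qualitative part (Disease -> Symptom) and the
# quantitative part (conditional probability tables). Numbers are
# illustrative, not from AMPLIA.

p_disease = 0.01                       # prior P(Disease = true)
p_symptom_given = {True: 0.9,          # P(Symptom | Disease = true)
                   False: 0.05}        # P(Symptom | Disease = false)

# Posterior P(Disease | Symptom observed) via Bayes' rule.
num = p_symptom_given[True] * p_disease
den = num + p_symptom_given[False] * (1 - p_disease)
print(f"P(Disease | Symptom) = {num / den:.3f}")   # ~0.154
```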
Design of nodes for embedded and ultra low-power wireless sensor networks
NASA Astrophysics Data System (ADS)
Xu, Jun; You, Bo; Cui, Juan; Ma, Jing; Li, Xin
2008-10-01
A sensor network integrates sensor technology, MEMS (Micro-Electro-Mechanical Systems) technology, embedded computing, wireless communication technology and distributed information management technology. It is of great value in places that are difficult for humans to reach. Power consumption and size are the most important considerations when nodes are designed for distributed WSNs (wireless sensor networks). Consequently, it is of great importance to decrease the size of a node, reduce its power consumption and extend its life in the network. WSN nodes have been designed using the JN5121-Z01-M01 module produced by Jennic and IEEE 802.15.4/ZigBee technology. New features include support for CPU sleep modes and a long-term ultra-low-power sleep mode for the entire node. In the low-power configuration the node resembles existing small low-power nodes. An embedded temperature sensor node has been developed to verify and explore our architecture. The experimental results indicate that the WSN has high reliability, good stability and ultra-low power consumption.
Implementation of medical monitor system based on networks
NASA Astrophysics Data System (ADS)
Yu, Hui; Cao, Yuzhen; Zhang, Lixin; Ding, Mingshi
2006-11-01
In this paper, the development trend of medical monitor systems is analyzed; portability and network functionality are becoming more and more popular among all kinds of medical monitor devices. The architecture of a networked medical monitor system solution is provided, and the design and implementation details of the medical monitor terminal, the monitor center software, the distributed medical database and two kinds of medical information terminal are discussed in particular. The Rabbit3000 system is used in the medical monitor terminal to implement security administration of data transfer on the network, the human-machine interface, power management and the DSP interface, while the TMS5402 DSP chip is used for signal analysis and data compression. A distributed medical database is designed for the hospital center according to the DICOM information model and the HL7 standard. A pocket medical information terminal based on an ARM9 embedded platform is also developed to interact with the center database over the network. Two kernels based on WinCE are customized and the corresponding terminal software is developed for nurses' routine care and doctors' auxiliary diagnosis. An invention patent for the monitor terminal has been approved, manufacturing and clinical test plans are scheduled, and patent applications are also being arranged for the two medical information terminals.
Coverage maximization under resource constraints using a nonuniform proliferating random walk.
Saha, Sudipta; Ganguly, Niloy
2013-02-01
Information management services on networks, such as search and dissemination, play a key role in any large-scale distributed system. One of the most desirable features of these services is the maximization of the coverage, i.e., the number of distinctly visited nodes under constraints of network resources as well as time. However, redundant visits of nodes by different message packets (modeled, e.g., as walkers) initiated by the underlying algorithms for these services cause wastage of network resources. In this work, using results from analytical studies done in the past on a K-random-walk-based algorithm, we identify that redundancy quickly increases with an increase in the density of the walkers. Based on this postulate, we design a very simple distributed algorithm which dynamically estimates the density of the walkers and thereby carefully proliferates walkers in sparse regions. We use extensive computer simulations to test our algorithm in various kinds of network topologies whereby we find it to be performing particularly well in networks that are highly clustered as well as sparse.
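The sketch below illustrates the proliferation idea (not the authors' exact algorithm): walkers on a graph estimate the local walker density from their neighborhood and spawn an extra walker when the region looks sparse, under a soft walker budget. The graph, thresholds, and bookkeeping are illustrative assumptions.

```python
# Illustrative sketch (not the authors' exact algorithm): random walkers
# on a graph proliferate when the locally estimated walker density is
# low, aiming to raise coverage under a fixed walker budget.
import random
import networkx as nx

def proliferating_walk(G, start, steps=200, budget=50, sparse_threshold=0.1):
    walkers = [start]
    visited = {start}
    occupied = {start}                              # nodes currently holding a walker
    for _ in range(steps):
        new_positions = []
        for node in walkers:
            nxt = random.choice(list(G.neighbors(node)))
            visited.add(nxt)
            new_positions.append(nxt)
            # crude local density estimate: fraction of neighbours holding a walker
            nbrs = list(G.neighbors(nxt))
            density = sum(n in occupied for n in nbrs) / len(nbrs)
            if density < sparse_threshold and len(new_positions) < budget:
                new_positions.append(nxt)           # proliferate in a sparse region
        walkers = new_positions
        occupied = set(walkers)
    return len(visited)

G = nx.erdos_renyi_graph(500, 0.02, seed=1)
start = max(G.nodes, key=G.degree)                  # pick a well-connected start node
print("distinct nodes visited:", proliferating_walk(G, start))
```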
Reusable Cryogenic Tank VHM Using Fiber Optic Distributed Sensing Technology
NASA Technical Reports Server (NTRS)
Bodan-Sanders, Patricia; Bouvier, Carl
1998-01-01
The reusable oxygen and hydrogen tanks are key systems for both the X-33 (sub-scale, sub-orbital technology demonstrator) and the commercial Reusable Launch Vehicle (RLV). The backbone of the X-33 Reusable Cryogenic Tank Vehicle Health Management (VHM) system lies in the optical network of distributed strain, temperature, and hydrogen sensors. This network of fiber sensors will create a global strain and temperature map for monitoring the health of the tank structure, cryogenic insulation, and Thermal Protection System. Lockheed Martin (Sanders and LMMSS) and NASA Langley have developed this sensor technology for the X-33 and have addressed several technical issues such as fiber bonding and laser performance in this harsh environment.
A portal for the ocean biogeographic information system
Zhang, Yunqing; Grassle, J. F.
2002-01-01
Since its inception in 1999 the Ocean Biogeographic Information System (OBIS) has developed into an international science program as well as a globally distributed network of biogeographic databases. An OBIS portal at Rutgers University provides the links and functional interoperability among member database systems. Protocols and standards have been established to support effective communication between the portal and these functional units. The portal provides distributed data searching, a taxonomy name service, a GIS with access to relevant environmental data, biological modeling, and education modules for mariners, students, environmental managers, and scientists. The portal will integrate Census of Marine Life field projects, national data archives, and other functional modules, and provides for network-wide analyses and modeling tools.
Federal Emergency Management Information System (FEMIS) system administration guide, version 1.4.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arp, J.A.; Burnett, R.A.; Carter, R.J.
The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the US Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment. The UNIX server provides Oracle relational database management system (RDBMS) services, ARC/INFO GIS (optional) capabilities, and basic file management services. PNNL-developed utilities that reside on the server include the Notification Service, the Command Service that executes the evacuation model, and AutoRecovery. To operate FEMIS, the Application Software must have access to a site-specific FEMIS emergency management database. Data that pertains to an individual EOC's jurisdiction is stored on the EOC's local server. Information that needs to be accessible to all EOCs is automatically distributed by the FEMIS database to the other EOCs at the site.
Distributed Prognostics and Health Management with a Wireless Network Architecture
NASA Technical Reports Server (NTRS)
Goebel, Kai; Saha, Sankalita; Sha, Bhaskar
2013-01-01
A heterogeneous set of system components monitored by a varied suite of sensors and a particle-filtering (PF) framework, with the power and the flexibility to adapt to the different diagnostic and prognostic needs, has been developed. Both the diagnostic and prognostic tasks are formulated as a particle-filtering problem in order to explicitly represent and manage uncertainties in state estimation and remaining life estimation. Current state-of-the-art prognostic health management (PHM) systems are mostly centralized in nature, where all the processing relies on a single processor. This can lead to a loss in functionality in case of a crash of the central processor or monitor. Furthermore, with increases in the volume of sensor data as well as the complexity of algorithms, traditional centralized systems become, for a number of reasons, unwieldy to deploy successfully, and efficient distributed architectures can be more beneficial. The distributed health management architecture is composed of a network of smart sensor devices. These devices monitor the health of various subsystems or modules. They perform diagnostic operations and trigger prognostic operations based on user-defined thresholds and rules. The sensor devices, called computing elements (CEs), consist of a sensor, or set of sensors, and a communication device (i.e., a wireless transceiver alongside an embedded processing element). A CE runs in either a diagnostic or prognostic operating mode. The diagnostic mode is the default mode, in which a CE monitors a given subsystem or component through a lightweight diagnostic algorithm. If a CE detects a critical condition during monitoring, it raises a flag. Depending on availability of resources, a networked local cluster of CEs is then formed that carries out prognostics and fault mitigation by efficient distribution of the tasks. It should be noted that the CEs are expected not to suspend their previous tasks while in the prognostic mode. When the prognostic task is over, and after appropriate actions have been taken, all CEs return to their original default configuration. A wireless implementation ensures more flexibility in terms of sensor placement. It also allows more sensors to be deployed because the weight overhead of wired systems is not present. Distributed architectures are furthermore generally robust with regard to recovery from node failures.
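A minimal sketch of the described mode switching is shown below: a computing element stays in its default diagnostic mode and raises a flag to enter the prognostic mode when a monitored value crosses a user-defined threshold. The class name, threshold, and sensor interface are assumptions for illustration.

```python
# Sketch of a computing element (CE) that defaults to a lightweight
# diagnostic mode and switches to a prognostic mode when a monitored
# value crosses a user-defined threshold. Names and the threshold are
# illustrative assumptions.

class ComputingElement:
    DIAGNOSTIC, PROGNOSTIC = "diagnostic", "prognostic"

    def __init__(self, threshold):
        self.threshold = threshold
        self.mode = self.DIAGNOSTIC

    def step(self, reading):
        if self.mode == self.DIAGNOSTIC and reading > self.threshold:
            self.mode = self.PROGNOSTIC     # flag raised: join a prognostic cluster
        elif self.mode == self.PROGNOSTIC and reading <= self.threshold:
            self.mode = self.DIAGNOSTIC     # prognostics done: return to default
        return self.mode

ce = ComputingElement(threshold=0.8)
for r in [0.2, 0.5, 0.9, 0.95, 0.6]:
    print(r, "->", ce.step(r))
```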
Communication Optimizations for a Wireless Distributed Prognostic Framework
NASA Technical Reports Server (NTRS)
Saha, Sankalita; Saha, Bhaskar; Goebel, Kai
2009-01-01
A distributed architecture for prognostics is an essential step in prognostic research in order to enable feasible real-time system health management. Communication overhead is an important design problem for such systems. In this paper we focus on communication issues faced in the distributed implementation of an important class of algorithms for prognostics: particle filters. In spite of being computation and memory intensive, particle filters lend themselves well to distributed implementation except for one significant step: resampling. We propose a new resampling scheme called parameterized resampling that attempts to reduce communication between collaborating nodes in a distributed wireless sensor network. Analysis and comparison with relevant resampling schemes are also presented. A battery health management system is used as a target application. A new resampling scheme for the distributed implementation of particle filters is discussed in this paper. Analysis and comparison of this new scheme with existing resampling schemes in the context of minimizing communication overhead are also discussed. Our proposed resampling scheme performs significantly better than other schemes by reducing both the communication message length and the total number of communication messages exchanged, while not compromising prediction accuracy and precision. Future work will explore the effects of the new resampling scheme on the overall computational performance of the whole system as well as full implementation of the new schemes on the Sun SPOT devices. Exploring different network architectures for efficient communication is an important future research direction as well.
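For context, the sketch below shows a standard systematic resampling step, i.e., the baseline that a communication-aware scheme such as parameterized resampling would modify; the paper's exact scheme is not reproduced here, and the weights are made up.

```python
# Baseline systematic resampling for a particle filter, shown only as the
# step a distributed, communication-aware scheme would modify; the paper's
# parameterized resampling is not reproduced here.
import numpy as np

def systematic_resample(weights, rng=None):
    rng = rng or np.random.default_rng()
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                   # guard against floating-point rounding
    return np.searchsorted(cumulative, positions)

w = np.array([0.05, 0.05, 0.10, 0.60, 0.20])
print(systematic_resample(w))              # indices of the particles kept
```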
Do managed care plans' tiered networks lead to inequities in care for minority patients?
Brennan, Troyen A; Spettell, Claire M; Fernandes, Joaquim; Downey, Roberta L; Carrara, Lisa M
2008-01-01
This study examined whether specialists designated as meeting efficiency thresholds in an insurance company's performance network were less likely than non-designated specialists to treat minority patients insured by that company. Claims data were used to identify patients treated by specialists. Claimants' race/ethnicity status was self-reported to the insurer at enrollment. In large part, minority patients appeared to be evenly distributed across the performance network, with the exception of Asian/Pacific Islanders, who appeared to be more likely to be treated by non-designated physicians than by designated "good-performing" specialists.
Joint Experiment on Scalable Parallel Processors (JESPP) Parallel Data Management
2006-05-01
management and analysis tool, called Simulation Data Grid (SDG). The design principles driving the design of SDG are: 1) minimize network communication ... In this report, an initial prototype implementation of this system is described. This project follows on earlier research, primarily ... The distributed logging system had some limitations. These limitations will be described in this report, along with how the SDG addresses them.
INFORM Lab: a testbed for high-level information fusion and resource management
NASA Astrophysics Data System (ADS)
Valin, Pierre; Guitouni, Adel; Bossé, Eloi; Wehn, Hans; Happe, Jens
2011-05-01
DRDC Valcartier and MDA have created an advanced simulation testbed for the purpose of evaluating the effectiveness of Network Enabled Operations in a Coastal Wide Area Surveillance situation, with algorithms provided by several universities. This INFORM Lab testbed allows experimenting with high-level distributed information fusion, dynamic resource management and configuration management, given multiple constraints on the resources and their communications networks. This paper describes the architecture of INFORM Lab, the essential concepts of goals and situation evidence, a selected set of algorithms for distributed information fusion and dynamic resource management, as well as auto-configurable information fusion architectures. The testbed provides general services which include a multilayer plug-and-play architecture, and a general multi-agent framework based on John Boyd's OODA loop. The testbed's performance is demonstrated on 2 types of scenarios/vignettes for 1) cooperative search-and-rescue efforts, and 2) a noncooperative smuggling scenario involving many target ships and various methods of deceit. For each mission, an appropriate subset of Canadian airborne and naval platforms are dispatched to collect situation evidence, which is fused, and then used to modify the platform trajectories for the most efficient collection of further situation evidence. These platforms are fusion nodes which obey a Command and Control node hierarchy.
CyAN satellite-derived Cyanobacteria products in support of Public Health Protection
The timely distribution of satellite-derived cyanoHAB data is necessary for adaptive water quality management decision-making and for targeted deployment of existing government and non-government water quality monitoring resources. The Cyanobacteria Assessment Network (CyAN) is a...
Digital watermarking for secure and adaptive teleconferencing
NASA Astrophysics Data System (ADS)
Vorbrueggen, Jan C.; Thorwirth, Niels
2002-04-01
The EC-sponsored project ANDROID aims to develop a management system for secure active networks. Active network means allowing the network's customers to execute code (Java-based so-called proxylets) on parts of the network infrastructure. Secure means that the network operator nonetheless retains full control over the network and its resources, and that proxylets use ANDROID-developed facilities to provide secure applications. Management is based on policies and allows autonomous, distributed decisions and actions to be taken. Proxylets interface with the system via policies; among actions they can take is controlling execution of other proxylets or redirection of network traffic. Secure teleconferencing is used as the application to demonstrate the approach's advantages. A way to control a teleconference's data streams is to use digital watermarking of the video, audio and/or shared-whiteboard streams, providing an imperceptible and inseparable side channel that delivers information from originating or intermediate stations to downstream stations. Depending on the information carried by the watermark, these stations can take many different actions. Examples are forwarding decisions based on security classifications (possibly time-varying) at security boundaries, set-up and tear-down of virtual private networks, intelligent and adaptive transcoding, recorder or playback control (e.g., speaking off the record), copyright protection, and sender authentication.
Unstructured P2P Network Load Balance Strategy Based on Multilevel Partitioning of Hypergraph
NASA Astrophysics Data System (ADS)
Feng, Lv; Chunlin, Gao; Kaiyang, Ma
2017-05-01
With the rapid development of computer performance and distributed technology, P2P-based resource sharing plays an important role in the Internet. The number of P2P network users continues to increase, and the highly dynamic characteristics of the system make it difficult for a node to obtain the load of other nodes. Therefore, a dynamic load-balance strategy based on hypergraphs is proposed in this article. The scheme builds on the idea of multilevel hypergraph partitioning. It adopts optimized multilevel partitioning algorithms to partition the P2P network into several small areas and assigns each area a supernode for managing and transferring the load of the nodes in that area. Where global scheduling is difficult to achieve, load balancing within a number of small regions can be prioritized first. Through node load balancing in each small area, the whole network can achieve relative load balance. The experiments indicate that the load distribution of network nodes in our scheme is noticeably more compact. It effectively solves the load imbalance problems in P2P networks and also improves the scalability and bandwidth utilization of the system.
Distributed process manager for an engineering network computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gait, J.
1987-08-01
MP is a manager for systems of cooperating processes in a local area network of engineering workstations. MP supports transparent continuation by maintaining multiple copies of each process on different workstations. Computational bandwidth is optimized by executing processes in parallel on different workstations. Responsiveness is high because workstations compete among themselves to respond to requests. The technique is to select a master from among a set of replicates of a process by a competitive election between the copies. Migration of the master when a fault occurs or when response slows down is effected by inducing the election of a new master. Competitive response stabilizes system behavior under load, so MP exhibits real-time behavior.
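The following is an illustrative sketch of a competitive election among replicas: each replica reports a response time, the fastest responder becomes master, and a new election is induced when the master fails or slows down. Replica names, timings, and the election criterion are assumptions, not MP's actual mechanism.

```python
# Illustrative sketch of a competitive election among process replicas:
# the fastest responder becomes master; the master's failure induces a
# re-election. Names and timings are made up.

def elect(replicas, measure):
    """replicas: list of names; measure: name -> response time in seconds, or None if down."""
    times = {r: measure(r) for r in replicas}
    alive = {r: t for r, t in times.items() if t is not None}
    return min(alive, key=alive.get) if alive else None

replicas = ["ws-1", "ws-2", "ws-3"]
latency = {"ws-1": 0.12, "ws-2": 0.05, "ws-3": 0.30}

master = elect(replicas, lambda r: latency.get(r))
print("master:", master)                 # ws-2 responds fastest

latency["ws-2"] = None                   # master fails -> induce re-election
master = elect(replicas, lambda r: latency.get(r))
print("new master:", master)             # ws-1 takes over
```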
Location-Aware Dynamic Session-Key Management for Grid-Based Wireless Sensor Networks
Chen, Chin-Ling; Lin, I-Hsien
2010-01-01
Security is a critical issue for sensor networks used in hostile environments. When wireless sensor nodes in a wireless sensor network are distributed in an insecure hostile environment, the sensor nodes must be protected: a secret key must be used to protect the nodes transmitting messages. If the nodes are not protected and become compromised, many types of attacks against the network may result. Such is the case with existing schemes, which are vulnerable to attacks because they mostly provide a hop-by-hop paradigm, which is insufficient to defend against known attacks. We propose a location-aware dynamic session-key management protocol for grid-based wireless sensor networks. The proposed protocol improves the security of a secret key. The proposed scheme also includes a key that is dynamically updated. This dynamic update can lower the probability of the key being guessed correctly. Thus, currently known attacks can be defended against. By utilizing the local information, the proposed scheme can also limit the flooding region in order to reduce the energy that is consumed in discovering routing paths. PMID:22163606
Location-aware dynamic session-key management for grid-based Wireless Sensor Networks.
Chen, Chin-Ling; Lin, I-Hsien
2010-01-01
Security is a critical issue for sensor networks used in hostile environments. When wireless sensor nodes in a wireless sensor network are distributed in an insecure hostile environment, the sensor nodes must be protected: a secret key must be used to protect the nodes transmitting messages. If the nodes are not protected and become compromised, many types of attacks against the network may result. Such is the case with existing schemes, which are vulnerable to attacks because they mostly provide a hop-by-hop paradigm, which is insufficient to defend against known attacks. We propose a location-aware dynamic session-key management protocol for grid-based wireless sensor networks. The proposed protocol improves the security of a secret key. The proposed scheme also includes a key that is dynamically updated. This dynamic update can lower the probability of the key being guessed correctly. Thus, currently known attacks can be defended against. By utilizing the local information, the proposed scheme can also limit the flooding region in order to reduce the energy that is consumed in discovering routing paths.
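As a rough illustration of a location-bound, dynamically updated session key (not the protocol proposed in the paper), the sketch below derives a key with HMAC over a pre-shared secret, the node's grid cell, and an update counter, so each update yields a different key.

```python
# Illustrative sketch (not the paper's protocol): derive a location-bound
# session key from a pre-shared secret, the node's grid cell, and an
# update counter, so the key changes on every update.
import hmac
import hashlib

def session_key(master_secret: bytes, grid_cell: tuple, counter: int) -> bytes:
    msg = f"{grid_cell[0]},{grid_cell[1]}|{counter}".encode()
    return hmac.new(master_secret, msg, hashlib.sha256).digest()

k0 = session_key(b"pre-shared-secret", grid_cell=(3, 7), counter=0)
k1 = session_key(b"pre-shared-secret", grid_cell=(3, 7), counter=1)
print(k0.hex()[:16], k1.hex()[:16])   # keys differ after each update
```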
Experimental testing and modeling analysis of solute mixing at water distribution pipe junctions.
Shao, Yu; Jeffrey Yang, Y; Jiang, Lijie; Yu, Tingchao; Shen, Cheng
2014-06-01
Flow dynamics at a pipe junction controls particle trajectories, solute mixing and concentrations in downstream pipes. The effect can lead to different outcomes of water quality modeling and, hence, drinking water management in a distribution network. Here we have investigated solute mixing behavior in pipe junctions of five hydraulic types, for which flow distribution factors and analytical equations for network modeling are proposed. First, based on experiments, the degree of mixing at a cross is found to be a function of flow momentum ratio that defines a junction flow distribution pattern and the degree of departure from complete mixing. Corresponding analytical solutions are also validated using computational-fluid-dynamics (CFD) simulations. Second, the analytical mixing model is further extended to double-Tee junctions. Correspondingly the flow distribution factor is modified to account for hydraulic departure from a cross configuration. For a double-Tee(A) junction, CFD simulations show that the solute mixing depends on flow momentum ratio and connection pipe length, whereas the mixing at double-Tee(B) is well represented by two independent single-Tee junctions with a potential water stagnation zone in between. Notably, double-Tee junctions differ significantly from a cross in solute mixing and transport. However, it is noted that these pipe connections are widely, but incorrectly, simplified as cross junctions of assumed complete solute mixing in network skeletonization and water quality modeling. For the studied pipe junction types, analytical solutions are proposed to characterize the incomplete mixing and hence may allow better water quality simulation in a distribution network. Published by Elsevier Ltd.
DAISY-DAMP: A distributed AI system for the dynamic allocation and management of power
NASA Technical Reports Server (NTRS)
Hall, Steven B.; Ohler, Peter C.
1988-01-01
One of the critical parameters that must be addressed when designing a loosely coupled Distributed AI SYstem (DAISY) has to do with the degree to which authority is centralized or decentralized. The decision to implement the Dynamic Allocation and Management of Power (DAMP) system as a network of cooperating agents mandated this study. The DAISY-DAMP problem is described; the component agents of the system are characterized; and the system's communication protocols are elucidated. The motivations and advantages of designing the system with decentralized authority are discussed. Progress in the area of Speech Act theory is proposed as playing a role in constructing decentralized systems.
Chesapeake Bay Program Water Quality Database
The Chesapeake Information Management System (CIMS), designed in 1996, is an integrated, accessible information management system for the Chesapeake Bay Region. CIMS is an organized, distributed library of information and software tools designed to increase basin-wide public access to Chesapeake Bay information. The information delivered by CIMS includes technical and public information, educational material, environmental indicators, policy documents, and scientific data. Through the use of relational databases, web-based programming, and web-based GIS, a large number of Internet resources have been established. These resources include multiple distributed on-line databases, on-demand graphing and mapping of environmental data, and geographic searching tools for environmental information. Also available are baseline monitoring data, summarized data, and environmental indicators that document ecosystem status and trends and that confirm linkages between water quality, habitat quality and abundance, and the distribution and integrity of biological populations. One of the major features of the CIMS network is the Chesapeake Bay Program's Data Hub, which provides users access to a suite of long-term water quality and living resources databases. Chesapeake Bay mainstem and tidal tributary water quality, benthic macroinvertebrate, toxics, plankton, and fluorescence data can be obtained for a network of over 800 monitoring stations.
A jazz-based approach for optimal setting of pressure reducing valves in water distribution networks
NASA Astrophysics Data System (ADS)
De Paola, Francesco; Galdiero, Enzo; Giugni, Maurizio
2016-05-01
This study presents a model for valve setting in water distribution networks (WDNs), with the aim of reducing the level of leakage. The approach is based on the harmony search (HS) optimization algorithm. The HS mimics a jazz improvisation process able to find the best solutions, in this case corresponding to valve settings in a WDN. The model also interfaces with the improved version of a popular hydraulic simulator, EPANET 2.0, to check the hydraulic constraints and to evaluate the performances of the solutions. Penalties are introduced in the objective function in case of violation of the hydraulic constraints. The model is applied to two case studies, and the obtained results in terms of pressure reductions are comparable with those of competitive metaheuristic algorithms (e.g. genetic algorithms). The results demonstrate the suitability of the HS algorithm for water network management and optimization.
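A compact harmony search sketch is given below to illustrate the improvisation mechanism (harmony memory consideration, pitch adjustment, random improvisation). The sphere function stands in for the EPANET-based leakage objective and the decision variables stand in for valve settings; HMCR, PAR, and bounds are illustrative choices, not the study's calibration.

```python
# Compact harmony search sketch. The sphere function stands in for the
# EPANET-based leakage objective and the decision variables stand in for
# valve settings; HMCR, PAR, bandwidth, and bounds are illustrative.
import random

def harmony_search(objective, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iterations=2000, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iterations):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                 # take a value from memory
                x = rng.choice(memory)[d]
                if rng.random() < par:              # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                                   # random improvisation
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        worst = max(range(hms), key=lambda i: scores[i])
        s = objective(new)
        if s < scores[worst]:                       # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

sphere = lambda x: sum(v * v for v in x)
print(harmony_search(sphere, dim=4, bounds=(-5.0, 5.0)))
```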
Distributed wireless sensing for methane leak detection technology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klein, Levente; van Kessel, Theodore
Large scale environmental monitoring requires dynamic optimization of data transmission, power management, and distribution of the computational load. In this work, we demonstrate the use of a wireless sensor network for detection of chemical leaks on gas and oil well pads. The sensor network consists of chemi-resistive and wind sensors and aggregates all the data and transmits it to the cloud for further analytics processing. The sensor network data is integrated with an inversion model to identify leak location and quantify leak rates. We characterize the sensitivity and accuracy of such a system under multiple well-controlled methane release experiments. It is demonstrated that even a 1 hour measurement with 10 sensors localizes leaks within 1 m and determines the leak rate with an accuracy of 40%. This integrated sensing and analytics solution is currently being refined into a robust system for long term remote monitoring of methane leaks, generation of alarms, and tracking regulatory compliance.
Distributed wireless sensing for fugitive methane leak detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klein, Levente J.; van Kessel, Theodore; Nair, Dhruv
Large scale environmental monitoring requires dynamic optimization of data transmission, power management, and distribution of the computational load. In this work, we demonstrate the use of a wireless sensor network for detection of chemical leaks on gas and oil well pads. The sensor network consists of chemi-resistive and wind sensors and aggregates all the data and transmits it to the cloud for further analytics processing. The sensor network data is integrated with an inversion model to identify leak location and quantify leak rates. We characterize the sensitivity and accuracy of such a system under multiple well-controlled methane release experiments. It is demonstrated that even a 1 hour measurement with 10 sensors localizes leaks within 1 m and determines the leak rate with an accuracy of 40%. This integrated sensing and analytics solution is currently being refined into a robust system for long term remote monitoring of methane leaks, generation of alarms, and tracking regulatory compliance.
Distributed wireless sensing for fugitive methane leak detection
Klein, Levente J.; van Kessel, Theodore; Nair, Dhruv; ...
2017-12-11
Large scale environmental monitoring requires dynamic optimization of data transmission, power management, and distribution of the computational load. In this work, we demonstrate the use of a wireless sensor network for detection of chemical leaks on gas and oil well pads. The sensor network consists of chemi-resistive and wind sensors and aggregates all the data and transmits it to the cloud for further analytics processing. The sensor network data is integrated with an inversion model to identify leak location and quantify leak rates. We characterize the sensitivity and accuracy of such a system under multiple well-controlled methane release experiments. It is demonstrated that even a 1 hour measurement with 10 sensors localizes leaks within 1 m and determines the leak rate with an accuracy of 40%. This integrated sensing and analytics solution is currently being refined into a robust system for long term remote monitoring of methane leaks, generation of alarms, and tracking regulatory compliance.
NASA Technical Reports Server (NTRS)
Shyy, Dong-Jye; Redman, Wayne
1993-01-01
For the next-generation packet-switched communications satellite system with onboard processing and spot-beam operation, a reliable onboard fast packet switch is essential to route packets from different uplink beams to different downlink beams. The rapid emergence of point-to-point services such as video distribution, and the large demand for video conferencing, distributed data processing, and network management, make the multicast function essential to a fast packet switch (FPS). The satellite's inherent broadcast capability gives the satellite network an advantage over the terrestrial network in providing multicast services. This report evaluates alternative multicast FPS architectures for onboard baseband switching applications and selects a candidate for subsequent breadboard development. Architecture evaluation and selection will be based on the study performed in phase 1, 'Onboard B-ISDN Fast Packet Switching Architectures', and on other switch architectures which have become commercially available as large-scale integration (LSI) devices.
Abreu, Liliana; Nunes, João Arriscado; Taylor, Peter; Silva, Susana
2018-01-01
This study embraces a patient-centred and narrative-oriented notion of health literacy, exploring how social networks and personal experiences constitute distributed health literacy (DHL) by mapping out each individual's health literacy mediators and how they enable self-management skills and knowledge of health conditions. Semi-structured interviews with 26 patients with type 2 diabetes were conducted in a Primary Care Center in Porto (Portugal) from October 2014 to December 2015. Data were collected based on the McGill Illness Narrative Interview (MINI). Following grounded theory, interviews were analysed in a case-based, process-tracing-oriented manner. Three awareness narratives emerged: (i) a narrative of minimisation, revealing minimal impact of diabetes on patients' lives and daily routines, resignation towards the "inevitable" consequences of the diagnosis, and dependence on a large network of health literacy mediators; (ii) a narrative of empathy, where patients tended to mention readjustments in their lives by following medical recommendations regarding medication without criticism and with few health literacy mediators; (iii) a narrative of disruption, with patients highlighting the huge impact of diabetes on their lives and their individual responsibility and autonomy with respect to the management of diabetes and the search for alternatives to medication, relying on a very restricted network of mediators. Exploring the meanings given to diagnosis, identifying health literacy mediators and analysing the structure of social networks can contribute to understanding the distributed nature of health literacy. Assessing DHL can assist health professionals and those providing care in the community in promoting health literacy and providing models for a more patient-centred health system. © 2017 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Kolosionis, Konstantinos; Papadopoulou, Maria P.
2017-04-01
Monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation due to extensive agricultural activities. In this work, a simulation-optimization framework is developed based on heuristic optimization methodologies and geostatistical modeling approaches to obtain an optimal design for a groundwater quality monitoring network. Groundwater quantity and quality data obtained from 43 existing observation locations during 3 different hydrological periods in the Mires basin in Crete, Greece, are used in the proposed framework, in terms of regression kriging, to develop the spatial distribution of nitrate concentration in the aquifer of interest. Based on the existing groundwater quality mapping, the proposed optimization tool determines a cost-effective observation well network that contributes significant information to water managers and authorities. The elimination of observation wells that add little or no beneficial information to the groundwater level and quality mapping of the area can be achieved using estimation uncertainty and statistical error metrics without affecting the assessment of the groundwater quality. Given the high maintenance cost of groundwater monitoring networks, the proposed tool could be used by water regulators in the decision-making process to obtain an efficient network design.
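The following sketch conveys the well-elimination idea in a much simplified form: wells are dropped greedily when their removal least degrades leave-one-out interpolation error. Inverse-distance weighting stands in for the regression-kriging model, and the coordinates and concentrations are synthetic assumptions, not the Mires basin data.

```python
# Illustrative sketch only: greedily drop observation wells whose removal
# least degrades leave-one-out interpolation error of the nitrate field.
# Inverse-distance weighting stands in for regression kriging; the well
# coordinates and concentrations are synthetic.
import numpy as np

def idw_predict(xy_obs, z_obs, xy_target, power=2.0):
    d = np.linalg.norm(xy_obs - xy_target, axis=1)
    if np.any(d < 1e-9):
        return z_obs[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * z_obs) / np.sum(w)

def loo_rmse(xy, z):
    errs = []
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        errs.append(idw_predict(xy[mask], z[mask], xy[i]) - z[i])
    return float(np.sqrt(np.mean(np.square(errs))))

def greedy_reduce(xy, z, keep):
    active = list(range(len(z)))
    while len(active) > keep:
        # drop the well whose removal increases the LOO error the least
        best_err, drop = float("inf"), None
        for i in active:
            trial = [j for j in active if j != i]
            err = loo_rmse(xy[trial], z[trial])
            if err < best_err:
                best_err, drop = err, i
        active.remove(drop)
    return active, loo_rmse(xy[active], z[active])

rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(43, 2))          # 43 synthetic well locations
z = 20 + 2 * xy[:, 0] + rng.normal(0, 1, 43)   # synthetic nitrate field
print(greedy_reduce(xy, z, keep=25))
```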
Documentary of MFENET, a national computer network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shuttleworth, B.O.
1977-06-01
The national Magnetic Fusion Energy Computer Network (MFENET) is a newly operational star network of geographically separated heterogeneous hosts and a communications subnetwork of PDP-11 processors. Host processors interfaced to the subnetwork currently include a CDC 7600 at the Central Computer Center (CCC) and several DECsystem-10's at User Service Centers (USC's). The network was funded by a U.S. government agency (ERDA) to provide in an economical manner the needed computational resources to magnetic confinement fusion researchers. Phase I operation of MFENET distributed the processing power of the CDC 7600 among the USC's through the provision of file transport between any two hosts and remote job entry to the 7600. Extending the capabilities of Phase I, MFENET Phase II provided interactive terminal access to the CDC 7600 from the USC's. A file management system is maintained at the CCC for all network users. The history and development of MFENET are discussed, with emphasis on the protocols used to link the host computers and the USC software. Comparisons are made of MFENET versus ARPANET (Advanced Research Projects Agency Computer Network) and DECNET (Digital Distributed Network Architecture). DECNET and MFENET host-to-host, host-to-CCP, and link protocols are discussed in detail. The USC-CCP interface is described briefly. 43 figures, 2 tables.
NASA Astrophysics Data System (ADS)
Bertrand, Sophie; Díaz, Erich; Lengaigne, Matthieu
2008-10-01
Peruvian anchovy ( Engraulis ringens) stock abundance is tightly driven by the high and unpredictable variability of the Humboldt Current Ecosystem. Management of the fishery therefore cannot rely on mid- or long-term management policy alone but needs to be adaptive at relatively short time scales. Regular acoustic surveys are performed on the stock at intervals of 2 to 4 times a year, but there is a need for more time continuous monitoring indicators to ensure that management can respond at suitable time scales. Existing literature suggests that spatially explicit data on the location of fishing activities could be used as a proxy for target stock distribution. Spatially explicit commercial fishing data could therefore guide adaptive management decisions at shorter time scales than is possible through scientific stock surveys. In this study we therefore aim to (1) estimate the position of fishing operations for the entire fleet of Peruvian anchovy purse-seiners using the Peruvian satellite vessel monitoring system (VMS), and (2) quantify the extent to which the distribution of purse-seine sets describes anchovy distribution. To estimate fishing set positions from vessel tracks derived from VMS data we developed a methodology based on artificial neural networks (ANN) trained on a sample of fishing trips with known fishing set positions (exact fishing positions are known for approximately 1.5% of the fleet from an at-sea observer program). The ANN correctly identified 83% of the real fishing sets and largely outperformed comparative linear models. This network is then used to forecast fishing operations for those trips where no observers were onboard. To quantify the extent to which fishing set distribution was correlated to stock distribution we compared three metrics describing features of the distributions (the mean distance to the coast, the total area of distribution, and a clustering index) for concomitant acoustic survey observations and fishing set positions identified from VMS. For two of these metrics (mean distance to the coast and clustering index), fishing and survey data were significantly correlated. We conclude that the location of purse-seine fishing sets yields significant and valuable information on the distribution of the Peruvian anchovy stock and ultimately on its vulnerability to the fishery. For example, a high concentration of sets in the near coastal zone could potentially be used as a warning signal of high levels of stock vulnerability and trigger appropriate management measures aimed at reducing fishing effort.
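To illustrate the classification idea, the sketch below separates fishing sets from transit points using simple kinematic features (speed and turning angle) and a small feedforward network. The features, synthetic data, and the scikit-learn model are assumptions for illustration; the study's ANN was trained on observer-verified fishing set positions from VMS tracks.

```python
# Hypothetical sketch of the idea: classify VMS track points as fishing
# sets or transit from simple kinematic features (speed, turning angle)
# with a small feedforward network. Features, synthetic data, and the
# scikit-learn model are assumptions, not the study's trained ANN.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 500
# Fishing sets: low speed, sharp turns; transit: high speed, small turns.
speed = np.concatenate([rng.normal(1.5, 0.5, n), rng.normal(9.0, 1.5, n)])
turn = np.concatenate([rng.normal(90, 30, n), rng.normal(10, 8, n)])
X = np.column_stack([speed, turn])
y = np.concatenate([np.ones(n), np.zeros(n)])     # 1 = fishing set

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict([[2.0, 80.0], [8.5, 5.0]]))     # expect [1., 0.]
```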
NASA Technical Reports Server (NTRS)
Hart, Andrew F.; Verma, Rishi; Mattmann, Chris A.; Crichton, Daniel J.; Kelly, Sean; Kincaid, Heather; Hughes, Steven; Ramirez, Paul; Goodale, Cameron; Anton, Kristen;
2012-01-01
For the past decade, the NASA Jet Propulsion Laboratory, in collaboration with Dartmouth University has served as the center for informatics for the Early Detection Research Network (EDRN). The EDRN is a multi-institution research effort funded by the U.S. National Cancer Institute (NCI) and tasked with identifying and validating biomarkers for the early detection of cancer. As the distributed network has grown, increasingly formal processes have been developed for the acquisition, curation, storage, and dissemination of heterogeneous research information assets, and an informatics infrastructure has emerged. In this paper we discuss the evolution of EDRN informatics, its success as a mechanism for distributed information integration, and the potential sustainability and reuse benefits of emerging efforts to make the platform components themselves open source. We describe our experience transitioning a large closed-source software system to a community driven, open source project at the Apache Software Foundation, and point to lessons learned that will guide our present efforts to promote the reuse of the EDRN informatics infrastructure by a broader community.
Radiation detection and situation management by distributed sensor networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frigo, Jan; Mielke, Angela; Cai, D. Michael
Detection of radioactive materials in an urban environment usually requires large, portal-monitor-style radiation detectors. However, this may not be a practical solution in many transport scenarios. Alternatively, a distributed sensor network (DSN) could complement portal-style detection of radiological materials through the implementation of arrays of low cost, small heterogeneous sensors with the ability to detect the presence of radioactive materials in a moving vehicle over a specific region. In this paper, we report on the use of a heterogeneous, wireless, distributed sensor network for traffic monitoring in a field demonstration. Through wireless communications, the energy spectra from different radiation detectors are combined to improve the detection confidence. In addition, the DSN exploits other sensor technologies and algorithms to provide additional information about the vehicle, such as its speed, location, class (e.g. car, truck), and license plate number. The sensors are in-situ and data is processed in real-time at each node. Relevant information from each node is sent to a base station computer which is used to assess the movement of radioactive materials.
Parallel Computation of Unsteady Flows on a Network of Workstations
NASA Technical Reports Server (NTRS)
1997-01-01
Parallel computation of unsteady flows requires significant computational resources. The utilization of a network of workstations seems an efficient solution, allowing large problems to be treated at a reasonable cost. This approach requires the solution of several problems: 1) the partitioning and distribution of the problem over a network of workstations, 2) efficient communication tools, 3) managing the system efficiently for a given problem. There is also the question of the efficiency of any given numerical algorithm on such a computing system. The NPARC code was chosen as a sample application. For the explicit version of the NPARC code both two- and three-dimensional problems were studied, and both steady and unsteady problems were investigated. The issues studied as part of the research program were: 1) how to distribute the data between the workstations, 2) how to compute and communicate efficiently at each node, 3) how to balance the load distribution. In the following, a summary of these activities is presented. Details of the work have been presented and published as referenced.
Cybercare 2.0: meeting the challenge of the global burden of disease in 2030.
Rosen, Joseph M; Kun, Luis; Mosher, Robyn E; Grigg, Elliott; Merrell, Ronald C; Macedonia, Christian; Klaudt-Moreau, Julien; Price-Smith, Andrew; Geiling, James
In this paper, we propose to advance and transform today's healthcare system using a model of networked health care called Cybercare. Cybercare means "health care in cyberspace" - for example, doctors consulting with patients via videoconferencing across a distributed network; or patients receiving care locally - in neighborhoods, "minute clinics," and homes - using information technologies such as telemedicine, smartphones, and wearable sensors to link to tertiary medical specialists. This model contrasts with traditional health care, in which patients travel (often a great distance) to receive care from providers in a central hospital. The Cybercare model shifts health care provision from hospital to home; from specialist to generalist; and from treatment to prevention. Cybercare employs advanced technology to deliver services efficiently across the distributed network - for example, using telemedicine, wearable sensors and cell phones to link patients to specialists and upload their medical data in near-real time; using information technology (IT) to rapidly detect, track, and contain the spread of a global pandemic; or using cell phones to manage medical care in a disaster situation. Cybercare uses seven "pillars" of technology to provide medical care: genomics; telemedicine; robotics; simulation, including virtual and augmented reality; artificial intelligence (AI), including intelligent agents; the electronic medical record (EMR); and smartphones. All these technologies are evolving and blending. The technologies are integrated functionally because they underlie the Cybercare network, and/or form part of the care for patients using that distributed network. Moving health care provision to a networked, distributed model will save money, improve outcomes, facilitate access, improve security, increase patient and provider satisfaction, and may mitigate the international global burden of disease. In this paper we discuss how Cybercare is being implemented now, and envision its growth by 2030.
[Research of regional medical consumables reagent logistics system in the modern hospital].
Wu, Jingjiong; Zhang, Yanwen; Luo, Xiaochen; Zhang, Qing; Zhu, Jianxin
2013-09-01
The aim of this work is to explore the management of a regional logistics system for medical consumables and reagents in the modern hospital. Drawing on the characteristics of regional logistics, medical institutions within the region cooperate to organize a wide range of specialized logistics activities so that the regional logistics of medical consumables and reagents is handled rationally. A regional management system, a dynamic management system, a supply chain information management system, an after-sales service system and an assessment system were established. Based on research into the existing medical market and medical resources, a regional directory of medical consumables and reagents and the initial data were compiled. The emphasis is on centralized dispatch of medical consumables and reagents, introducing a qualified logistics company for distribution in order to improve the efficiency of modern hospital management and to reduce costs. The regional medical center and the regional community health service centers constitute a regional logistics network; the introduction of logistics services for medical consumables and reagents fully embodies the integrity, hierarchy, relevance, purposefulness and environmental adaptability of regional logistics distribution. A modern logistics distribution system can increase the efficiency of regional management of medical consumables and reagents and reduce costs.
How does network design constrain optimal operation of intermittent water supply?
NASA Astrophysics Data System (ADS)
Lieb, Anna; Wilkening, Jon; Rycroft, Chris
2015-11-01
Urban water distribution systems do not always supply water continuously or reliably. As pipes fill and empty, pressure transients may contribute to degraded infrastructure and poor water quality. To help understand and manage this undesirable side effect of intermittent water supply--a phenomenon affecting hundreds of millions of people in cities around the world--we study the relative contributions of fixed versus dynamic properties of the network. Using a dynamical model of unsteady transition pipe flow, we study how different elements of network design, such as network geometry, pipe material, and pipe slope, contribute to undesirable pressure transients. Using an optimization framework, we then investigate to what extent network operation decisions such as supply timing and inflow rate may mitigate these effects. We characterize some aspects of network design that make them more or less amenable to operational optimization.
Bao, Xu; Li, Haijian; Qin, Lingqiao; Xu, Dongwei; Ran, Bin; Rong, Jian
2016-10-27
To obtain adequate traffic information, the density of traffic sensors should be sufficiently high to cover the entire transportation network. However, deploying sensors densely over the entire network may not be realistic in practice due to the budgetary constraints of traffic management agencies. This paper describes several possible spatial distributions of traffic information credibility and proposes corresponding sensor information credibility functions to describe these spatial distribution properties. A maximum benefit model and a simplified version of it are proposed to solve the traffic sensor location problem. The relationships between the benefit and the number of sensors are formulated with the different sensor information credibility functions. The models are then extended and solved analytically. For each case, the maximum benefit and the optimal number and spacing of sensors are obtained, and analytic formulations of the optimal sensor locations are derived as well. Finally, a numerical example is presented to verify the validity and applicability of the proposed models for solving a network sensor location problem. The results show that the optimal number of sensors for segments with different model parameters in an entire freeway network can be calculated. It can also be concluded that the optimal sensor spacing is independent of the end restrictions but depends on the values of the model parameters that represent the physical conditions of the sensors and roads.
Bao, Xu; Li, Haijian; Qin, Lingqiao; Xu, Dongwei; Ran, Bin; Rong, Jian
2016-01-01
To obtain adequate traffic information, the density of traffic sensors should be sufficiently high to cover the entire transportation network. However, deploying sensors densely over the entire network may not be realistic in practice due to the budgetary constraints of traffic management agencies. This paper describes several possible spatial distributions of traffic information credibility and proposes corresponding sensor information credibility functions to describe these spatial distribution properties. A maximum benefit model and a simplified version of it are proposed to solve the traffic sensor location problem. The relationships between the benefit and the number of sensors are formulated with the different sensor information credibility functions. The models are then extended and solved analytically. For each case, the maximum benefit and the optimal number and spacing of sensors are obtained, and analytic formulations of the optimal sensor locations are derived as well. Finally, a numerical example is presented to verify the validity and applicability of the proposed models for solving a network sensor location problem. The results show that the optimal number of sensors for segments with different model parameters in an entire freeway network can be calculated. It can also be concluded that the optimal sensor spacing is independent of the end restrictions but depends on the values of the model parameters that represent the physical conditions of the sensors and roads. PMID:27801794
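A minimal sketch of the kind of benefit-versus-cost trade-off the abstract describes, not the paper's maximum benefit model: credibility is assumed to decay exponentially with distance from a sensor, sensors are assumed evenly spaced on a single segment, and the sensor count maximising net benefit within a budget is found by enumeration. All parameter values are invented for illustration.

```python
# Sketch of a sensor-location trade-off under an assumed exponential credibility decay.
import numpy as np

def segment_benefit(length_km, n_sensors, decay_per_km):
    """Approximate benefit as mean credibility along the segment times its length."""
    if n_sensors == 0:
        return 0.0
    x = np.linspace(0.0, length_km, 2001)              # sample points along the segment
    spacing = length_km / n_sensors
    centers = spacing * (np.arange(n_sensors) + 0.5)   # evenly spaced sensor locations
    dist = np.abs(x[:, None] - centers[None, :])       # distance from each point to each sensor
    credibility = np.exp(-decay_per_km * dist).max(axis=1)
    return credibility.mean() * length_km

def best_sensor_count(length_km, decay_per_km, cost_per_sensor, budget):
    """Pick the sensor count maximising benefit minus cost within the budget."""
    best_n, best_value = 0, 0.0
    for n in range(int(budget // cost_per_sensor) + 1):
        value = segment_benefit(length_km, n, decay_per_km) - cost_per_sensor * n
        if value > best_value:
            best_n, best_value = n, value
    return best_n, best_value

if __name__ == "__main__":
    n, value = best_sensor_count(length_km=20.0, decay_per_km=0.8,
                                 cost_per_sensor=1.5, budget=15.0)
    print(f"optimal sensor count: {n}, net benefit: {value:.2f}")
```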
A distributed monitoring system for photovoltaic arrays based on a two-level wireless sensor network
NASA Astrophysics Data System (ADS)
Su, F. P.; Chen, Z. C.; Zhou, H. F.; Wu, L. J.; Lin, P. J.; Cheng, S. Y.; Li, Y. F.
2017-11-01
In this paper, a distributed on-line monitoring system based on a two-level wireless sensor network (WSN) is proposed for real time status monitoring of photovoltaic (PV) arrays to support the fine management and maintenance of PV power plants. The system includes the sensing nodes installed on PV modules (PVM), sensing and routing nodes installed on combiner boxes of PV sub-arrays (PVA), a sink node and a data management centre (DMC) running on a host computer. The first level WSN is implemented by the low-cost wireless transceiver nRF24L01, and it is used to achieve single hop communication between the PVM nodes and their corresponding PVA nodes. The second level WSN is realized by the CC2530 based ZigBee network for multi-hop communication among PVA nodes and the sink node. The PVM nodes are used to monitor the PVM working voltage and backplane temperature, and they send the acquired data to their PVA node via the nRF24L01 based first level WSN. The PVA nodes are used to monitor the array voltage, PV string current and environment irradiance, and they send the acquired and received data to the DMC via the ZigBee based second level WSN. The DMC is designed using the MATLAB GUIDE and MySQL database. Laboratory experiment results show that the system can effectively acquire, display, store and manage the operating and environment parameters of PVA in real time.
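A minimal sketch, in Python rather than node firmware, of how the two-level data flow described above could be organised: PVM readings collected over the first-level link are buffered at a PVA node and packaged with array-level measurements for forwarding over the second-level network. The packet fields, node identifiers and aggregation step are assumptions for illustration, not the system's actual protocol.

```python
# Illustrative sketch of the two-level data flow only; not the monitoring system's firmware.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ModuleReading:            # sent by a PVM node over the first-level (nRF24L01) link
    module_id: str
    voltage_v: float
    backplane_temp_c: float

@dataclass
class ArrayReport:              # assembled by a PVA node and forwarded over ZigBee
    array_id: str
    array_voltage_v: float
    string_current_a: float
    irradiance_w_m2: float
    modules: List[ModuleReading] = field(default_factory=list)

class PvaNode:
    """Combiner-box node: collects module readings, attaches array-level measurements."""
    def __init__(self, array_id: str):
        self.array_id = array_id
        self.buffer: Dict[str, ModuleReading] = {}

    def on_module_packet(self, reading: ModuleReading) -> None:
        self.buffer[reading.module_id] = reading          # keep the latest reading per module

    def build_report(self, array_voltage_v, string_current_a, irradiance_w_m2) -> ArrayReport:
        report = ArrayReport(self.array_id, array_voltage_v, string_current_a,
                             irradiance_w_m2, list(self.buffer.values()))
        self.buffer.clear()                               # start a new acquisition cycle
        return report

if __name__ == "__main__":
    pva = PvaNode("A1")
    pva.on_module_packet(ModuleReading("A1-M01", 31.8, 41.2))
    pva.on_module_packet(ModuleReading("A1-M02", 31.5, 42.0))
    print(pva.build_report(array_voltage_v=635.0, string_current_a=7.9, irradiance_w_m2=812.0))
```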
Virtual Sensor Web Architecture
NASA Astrophysics Data System (ADS)
Bose, P.; Zimdars, A.; Hurlburt, N.; Doug, S.
2006-12-01
NASA envisions the development of smart sensor webs: intelligent and integrated observation networks that harness distributed sensing assets, their associated continuous and complex data sets, and predictive observation processing mechanisms for timely, collaborative hazard mitigation and enhanced science productivity and reliability. This paper presents the Virtual Sensor Web Infrastructure for Collaborative Science (VSICS) architecture for sustained coordination of (numerical and distributed) model-based processing, closed-loop resource allocation, and observation planning. VSICS's key ideas include i) rich descriptions of sensors as services based on semantic markup languages like OWL and SensorML; ii) service-oriented workflow composition and repair for simple and ensemble models; iii) event-driven workflow execution based on event-based and distributed workflow management mechanisms; and iv) development of autonomous model interaction management capabilities providing closed-loop control of collection resources driven by competing targeted observation needs. We present results from initial work on collaborative science processing involving distributed services (the COSEC framework) that is being extended to create VSICS.
Hybrid evolutionary computing model for mobile agents of wireless Internet multimedia
NASA Astrophysics Data System (ADS)
Hortos, William S.
2001-03-01
The ecosystem is used as an evolutionary paradigm of natural laws for distributed information retrieval via mobile agents, allowing computational load to be shifted to the server nodes of wireless networks while reducing the traffic on communication links. Based on the Food Web model, a set of computational rules of natural balance forms the outer stage to control the evolution of mobile agents providing multimedia services with a wireless Internet protocol (WIP). The evolutionary model shows how mobile agents should behave with the WIP, in particular how mobile agents can cooperate, compete and learn from each other, based on an underlying competition for radio network resources to establish the wireless connections that support the quality of service (QoS) of user requests. Mobile agents are also allowed to clone themselves, propagate and communicate with other agents. A two-layer model is proposed for agent evolution: the outer layer is based on the law of natural balancing, while the inner layer is based on a discrete version of a Kohonen self-organizing feature map (SOFM) used to distribute network resources to meet QoS requirements. The former is embedded in the higher OSI layers of the WIP, while the latter is used in the resource management procedures of Layers 2 and 3 of the protocol. Algorithms for the distributed computation of mobile agent evolutionary behavior are developed by adding a learning state to the agent evolution state diagram. When an agent is in an indeterminate state, it can communicate with other agents, and computing models can be replicated from other agents. The agent then transitions to the mutating state to wait for a new information-retrieval goal. When a wireless terminal or station lacks a network resource, an agent in the suspending state can change its policy to submit to the environment before it transitions to the searching state. Agents learn from the agent state information entered into an external database. In the cloning process, two agents on a host station sharing a common goal can be merged, or married, to compose a new agent. The two-layer set of algorithms for mobile agent evolution, executed in a distributed processing environment, is applied to the QoS management functions of the IP multimedia (IM) subnetwork of the third-generation (3G) Wideband Code-Division Multiple Access (W-CDMA) wireless network.
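As a concrete illustration of the inner-layer mechanism mentioned above, the following is a minimal one-dimensional Kohonen self-organizing feature map in Python. It is a generic SOFM trained on toy QoS-request features, not the discrete variant embedded in the protocol layers described in the abstract; the feature encoding and all parameters are assumptions.

```python
# Minimal 1-D Kohonen self-organizing feature map (generic SOFM, toy data).
import numpy as np

def train_sofm(samples, n_units=8, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """samples: (n, d) array, e.g. normalised (bandwidth demand, delay bound) pairs."""
    rng = np.random.default_rng(seed)
    weights = rng.random((n_units, samples.shape[1]))
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                  # decaying learning rate
        sigma = max(sigma0 * (1.0 - epoch / epochs), 0.5)  # decaying neighbourhood width
        for x in samples:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
            dist = np.abs(np.arange(n_units) - bmu)                # 1-D lattice distance
            h = np.exp(-(dist ** 2) / (2.0 * sigma ** 2))          # neighbourhood function
            weights += lr * h[:, None] * (x - weights)             # pull units toward x
    return weights

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    demands = rng.random((200, 2))          # toy QoS request features in [0, 1]^2
    prototypes = train_sofm(demands)
    print(np.round(prototypes, 2))          # learned resource-allocation prototypes
```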
NASA Technical Reports Server (NTRS)
Srivastava, Priyaka; Kraus, Jeff; Murawski, Robert; Golden, Bertsel, Jr.
2015-01-01
NASA's Space Communications and Navigation (SCaN) program manages three active networks: the Near Earth Network, the Space Network, and the Deep Space Network. These networks simultaneously support NASA missions and provide communications services to customers worldwide. To efficiently manage these resources and their capabilities, a team of student interns at the NASA Glenn Research Center is developing a distributed system to model the SCaN networks. Once complete, the system shall provide a platform that enables users to perform capacity modeling of current and prospective missions with finer-grained control of information between several simulation and modeling tools. This will enable the SCaN program to access a holistic view of its networks and simulate the effects of modifications in order to provide NASA with decisional information. The development of this capacity modeling system is managed by NASA's Strategic Center for Education, Networking, Integration, and Communication (SCENIC). Three primary third-party software tools offer their unique abilities in different stages of the simulation process. MagicDraw provides UML/SysML modeling, AGI's Systems Tool Kit simulates the physical transmission parameters and de-conflicts scheduled communication, and Riverbed Modeler (formerly OPNET) simulates communication protocols and packet-based networking. SCENIC developers are building custom software extensions to integrate these components in an end-to-end space communications modeling platform. A central control module acts as the hub for report-based messaging between client wrappers. Backend databases provide information related to mission parameters and ground station configurations, while the end user defines scenario-specific attributes for the model. The eight SCENIC interns are working under the direction of their mentors to complete an initial version of this capacity modeling system during the summer of 2015. The intern team is composed of four students in Computer Science, two in Computer Engineering, one in Electrical Engineering, and one studying Space Systems Engineering.
Open source system OpenVPN in a function of Virtual Private Network
NASA Astrophysics Data System (ADS)
Skendzic, A.; Kovacic, B.
2017-05-01
The use of Virtual Private Networks (VPNs) can establish a high level of security in network communication. VPN technology enables secure networking over distributed or public network infrastructure, applying different security and management rules inside the network. It can be set up over different communication channels, such as the Internet or a separate ISP communication infrastructure. A VPN creates a secure communication channel over a public network between two endpoints (computers). OpenVPN is an open source software product released under the GNU General Public License (GPL) that can be used to establish VPN communication between two computers inside a business local network over public communication infrastructure. It uses dedicated security protocols and 256-bit encryption, and it is capable of traversing network address translators (NATs) and firewalls. It allows computers to authenticate each other using a pre-shared secret key, certificates, or a username and password. This work gives a review of VPN technology with a special focus on OpenVPN, and also presents a comparison and the financial benefits of using open source VPN software in a business environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angel, L.K.; Bower, J.C.; Burnett, R.A.
1999-06-29
The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the U.S. Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment.
NASA Astrophysics Data System (ADS)
Niknam, Taher; Kavousifard, Abdollah; Tabatabaei, Sajad; Aghaei, Jamshid
2011-10-01
In this paper a new multiobjective modified honey bee mating optimization (MHBMO) algorithm is presented to investigate the distribution feeder reconfiguration (DFR) problem considering renewable energy sources (RESs) (photovoltaics, fuel cell and wind energy) connected to the distribution network. The objective functions of the problem to be minimized are the electrical active power losses, the voltage deviations, the total electrical energy costs and the total emissions of RESs and substations. During the optimization process, the proposed algorithm finds a set of non-dominated (Pareto) optimal solutions which are stored in an external memory called repository. Since the objective functions investigated are not the same, a fuzzy clustering algorithm is utilized to handle the size of the repository in the specified limits. Moreover, a fuzzy-based decision maker is adopted to select the 'best' compromised solution among the non-dominated optimal solutions of multiobjective optimization problem. In order to see the feasibility and effectiveness of the proposed algorithm, two standard distribution test systems are used as case studies.
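Two generic building blocks that surround algorithms of this kind are non-dominated (Pareto) filtering of candidate solutions and a fuzzy-membership choice of a compromise solution. The sketch below shows both in Python on toy objective vectors; it is not the MHBMO algorithm itself, and the objective values are invented.

```python
# Pareto filtering and a simple fuzzy compromise choice over toy minimisation objectives.
import numpy as np

def non_dominated(objectives):
    """objectives: (n, m) array of minimisation objectives; returns a boolean keep-mask."""
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if (i != j and np.all(objectives[j] <= objectives[i])
                    and np.any(objectives[j] < objectives[i])):
                keep[i] = False      # solution i is dominated by solution j
                break
    return keep

def fuzzy_compromise(pareto_objectives):
    """Linear membership per objective; pick the solution with the best worst membership."""
    f = np.asarray(pareto_objectives, dtype=float)
    f_min, f_max = f.min(axis=0), f.max(axis=0)
    mu = (f_max - f) / np.where(f_max > f_min, f_max - f_min, 1.0)   # 1 = best, 0 = worst
    return int(np.argmax(mu.min(axis=1)))

if __name__ == "__main__":
    # toy candidates: (losses, voltage deviation, cost, emissions)
    cand = np.array([[1.2, 0.03, 10.0, 5.0],
                     [1.0, 0.05, 11.0, 4.5],
                     [1.5, 0.02,  9.5, 6.0],
                     [1.3, 0.04, 12.0, 5.5]])
    pareto = cand[non_dominated(cand)]
    print("Pareto set:\n", pareto)
    print("compromise index:", fuzzy_compromise(pareto))
```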
Competitive energy consumption under transmission constraints in a multi-supplier power grid system
NASA Astrophysics Data System (ADS)
Popov, Ivan; Krylatov, Alexander; Zakharov, Victor; Ivanov, Dmitry
2017-04-01
Power grid architectures need to be revised in order to manage the increasing number of producers and, more generally, the decentralisation of energy production and distribution. In this work, we describe a multi-supplier multi-consumer congestion model of a power grid, where the costs of consumers depend on the congestion in nodes and arcs of the power supply network. The consumer goal is both to meet their energy demand and to minimise the costs. We show that the methods of non-atomic routing can be applied in this model in order to describe current distribution in the network. We formulate a consumer cost minimisation game for this setting, and discuss the challenges arising in equilibrium search for this game.
Integrating Data Distribution and Data Assimilation Between the OOI CI and the NOAA DIF
NASA Astrophysics Data System (ADS)
Meisinger, M.; Arrott, M.; Clemesha, A.; Farcas, C.; Farcas, E.; Im, T.; Schofield, O.; Krueger, I.; Klacansky, I.; Orcutt, J.; Peach, C.; Chave, A.; Raymer, D.; Vernon, F.
2008-12-01
The Ocean Observatories Initiative (OOI) is an NSF-funded program to establish the ocean observing infrastructure of the 21st century benefiting research and education. It is currently approaching final design and promises to deliver cyber and physical observatory infrastructure components as well as substantial core instrumentation to study environmental processes of the ocean at various scales, from coastal shelf-slope exchange processes to the deep ocean. The OOI's data distribution network lies at the heart of its cyberinfrastructure, which enables a multitude of science and education applications, ranging from data analysis to processing, visualization and ontology-supported query and mediation. In addition, it fundamentally supports a class of applications exploiting the knowledge gained from analyzing observational data for objective-driven ocean observing applications, such as automatically triggered response to episodic environmental events and interactive instrument tasking and control. The U.S. Department of Commerce through NOAA operates the Integrated Ocean Observing System (IOOS) providing continuous data in various formats, rates and scales on open oceans and coastal waters to scientists, managers, businesses, governments, and the public to support research and inform decision-making. The NOAA IOOS program initiated development of the Data Integration Framework (DIF) to improve management and delivery of an initial subset of ocean observations with the expectation of achieving improvements in a select set of NOAA's decision-support tools. Both OOI and NOAA through DIF collaborate on an effort to integrate the data distribution, access and analysis needs of both programs. We present details and early findings from this collaboration; one part of it is the development of a demonstrator combining web-based user access to oceanographic data through ERDDAP, efficient science data distribution, and scalable, self-healing deployment in a cloud computing environment. ERDDAP is a web-based front-end application integrating oceanographic data sources of various formats, for instance CDF data files as aggregated through NcML or presented using a THREDDS server. The OOI-designed data distribution network provides global traffic management and computational load balancing for observatory resources; it makes use of the OpenDAP Data Access Protocol (DAP) for efficient canonical science data distribution over the network. A cloud computing strategy is the basis for scalable, self-healing organization of an observatory's computing and storage resources, independent of the physical location and technical implementation of these resources.
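As a small illustration of the ERDDAP access pattern mentioned above, the sketch below builds a tabledap CSV request and loads it with pandas. The server URL, dataset ID and variable names are placeholders, not actual OOI or NOAA IOOS endpoints, and constraint operators may need percent-encoding depending on the client.

```python
# Sketch of pulling tabular observations from an ERDDAP server via its tabledap CSV interface.
import pandas as pd

BASE = "https://example-erddap-server.org/erddap/tabledap"   # hypothetical ERDDAP server
DATASET = "exampleBuoyDataset"                               # hypothetical dataset ID

def fetch(variables, constraints):
    """Build a tabledap .csv request and load the result into a DataFrame."""
    # Tabledap URL form: {base}/{datasetID}.csv?var1,var2&constraint1&constraint2
    url = f"{BASE}/{DATASET}.csv?" + ",".join(variables) + "".join(constraints)
    # ERDDAP's CSV output carries column names on line 1 and units on line 2.
    return pd.read_csv(url, skiprows=[1])

if __name__ == "__main__":
    df = fetch(["time", "latitude", "longitude", "sea_water_temperature"],
               ["&time>=2008-01-01T00:00:00Z", "&time<2008-01-08T00:00:00Z"])
    print(df.head())
```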
Digital Libraries Are Much More than Digitized Collections.
ERIC Educational Resources Information Center
Peters, Peter Evan
1995-01-01
The digital library encompasses the application of high-performance computers and networks to the production, distribution, management, and use of knowledge in research and education. A joint project by three federal agencies, which is investing in digital library initiatives at six universities, is discussed. A sidebar provides issues to consider…
Mitigating Distributed Denial of Service Attacks with Dynamic Resource Pricing
2001-10-01
should be nearly comparable to a system that does not use the payment mechanisms. There is prior work on how pricing can be used to influence consumer ... behavior, and how to integrate pricing mechanisms with OS and network resource management mechanisms. In this paper, we instead focus on how pricing
Architectural and Functional Design of an Environmental Information Network.
1984-04-30
This study was accomplished under contract F08635-83-C-013, Task 83-2, for Headquarters Air Force Engineering and Services Center. Figure and section titles recoverable from the report's front matter include: ...election Procedure; General Architecture of Distributed Data Management System; Schema Architecture; MULTIBASE Component Architecture.
Links, Amanda E.; Draper, David; Lee, Elizabeth; Guzman, Jessica; Valivullah, Zaheer; Maduro, Valerie; Lebedev, Vlad; Didenko, Maxim; Tomlin, Garrick; Brudno, Michael; Girdea, Marta; Dumitriu, Sergiu; Haendel, Melissa A.; Mungall, Christopher J.; Smedley, Damian; Hochheiser, Harry; Arnold, Andrew M.; Coessens, Bert; Verhoeven, Steven; Bone, William; Adams, David; Boerkoel, Cornelius F.; Gahl, William A.; Sincan, Murat
2016-01-01
The National Institutes of Health Undiagnosed Diseases Program (NIH UDP) applies translational research systematically to diagnose patients with undiagnosed diseases. The challenge is to implement an information system enabling scalable translational research. The authors hypothesized that similar complex problems are resolvable through process management and the distributed cognition of communities. The team, therefore, built the NIH UDP integrated collaboration system (UDPICS) to form virtual collaborative multidisciplinary research networks or communities. UDPICS supports these communities through integrated process management, ontology-based phenotyping, biospecimen management, cloud-based genomic analysis, and an electronic laboratory notebook. UDPICS provided a mechanism for efficient, transparent, and scalable translational research and thereby addressed many of the complex and diverse research and logistical problems of the NIH UDP. Full definition of the strengths and deficiencies of UDPICS will require formal qualitative and quantitative usability and process improvement measurement. PMID:27785453
Imaging structural and functional brain networks in temporal lobe epilepsy.
Bernhardt, Boris C; Hong, Seokjun; Bernasconi, Andrea; Bernasconi, Neda
2013-10-01
Early imaging studies in temporal lobe epilepsy (TLE) focused on the search for mesial temporal sclerosis, as its surgical removal results in clinically meaningful improvement in about 70% of patients. Nevertheless, a considerable subgroup of patients continues to suffer from post-operative seizures. Although the reasons for surgical failure are not fully understood, electrophysiological and imaging data suggest that anomalies extending beyond the temporal lobe may have negative impact on outcome. This hypothesis has revived the concept of human epilepsy as a disorder of distributed brain networks. Recent methodological advances in non-invasive neuroimaging have led to quantify structural and functional networks in vivo. While structural networks can be inferred from diffusion MRI tractography and inter-regional covariance patterns of structural measures such as cortical thickness, functional connectivity is generally computed based on statistical dependencies of neurophysiological time-series, measured through functional MRI or electroencephalographic techniques. This review considers the application of advanced analytical methods in structural and functional connectivity analyses in TLE. We will specifically highlight findings from graph-theoretical analysis that allow assessing the topological organization of brain networks. These studies have provided compelling evidence that TLE is a system disorder with profound alterations in local and distributed networks. In addition, there is emerging evidence for the utility of network properties as clinical diagnostic markers. Nowadays, a network perspective is considered to be essential to the understanding of the development, progression, and management of epilepsy.
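For readers unfamiliar with the graph-theoretical measures referred to above, the following toy Python example computes mean degree, average clustering and characteristic path length with networkx on synthetic small-world graphs; it stands in for, and does not reproduce, any patient-derived connectivity data.

```python
# Toy illustration of basic graph-theoretical network measures on synthetic graphs.
import networkx as nx

def summarize(G):
    return {
        "nodes": G.number_of_nodes(),
        "edges": G.number_of_edges(),
        "mean_degree": sum(dict(G.degree()).values()) / G.number_of_nodes(),
        "clustering": nx.average_clustering(G),
        "char_path_length": nx.average_shortest_path_length(G),
    }

if __name__ == "__main__":
    # A small-world graph as a stand-in for a thresholded structural connectivity matrix.
    reference_like = nx.connected_watts_strogatz_graph(n=90, k=6, p=0.1, seed=0)
    # Heavier rewiring crudely mimics reduced local (segregated) organisation.
    disrupted_like = nx.connected_watts_strogatz_graph(n=90, k=6, p=0.6, seed=0)
    print("reference :", summarize(reference_like))
    print("disrupted :", summarize(disrupted_like))
```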
Technology Developments Integrating a Space Network Communications Testbed
NASA Technical Reports Server (NTRS)
Kwong, Winston; Jennings, Esther; Clare, Loren; Leang, Dee
2006-01-01
As future manned and robotic space explorations missions involve more complex systems, it is essential to verify, validate, and optimize such systems through simulation and emulation in a low cost testbed environment. The goal of such a testbed is to perform detailed testing of advanced space and ground communications networks, technologies, and client applications that are essential for future space exploration missions. We describe the development of new technologies enhancing our Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE) that enable its integration in a distributed space communications testbed. MACHETE combines orbital modeling, link analysis, and protocol and service modeling to quantify system performance based on comprehensive considerations of different aspects of space missions. It can simulate entire networks and can interface with external (testbed) systems. The key technology developments enabling the integration of MACHETE into a distributed testbed are the Monitor and Control module and the QualNet IP Network Emulator module. Specifically, the Monitor and Control module establishes a standard interface mechanism to centralize the management of each testbed component. The QualNet IP Network Emulator module allows externally generated network traffic to be passed through MACHETE to experience simulated network behaviors such as propagation delay, data loss, orbital effects and other communications characteristics, including entire network behaviors. We report a successful integration of MACHETE with a space communication testbed modeling a lunar exploration scenario. This document is the viewgraph slides of the presentation.
Distributed health care imaging information systems
NASA Astrophysics Data System (ADS)
Thompson, Mary R.; Johnston, William E.; Guojun, Jin; Lee, Jason; Tierney, Brian; Terdiman, Joseph F.
1997-05-01
We have developed an ATM network-based system to collect and catalogue cardio-angiogram videos from the source at a Kaiser central facility and make them available for viewing by doctors at primary care Kaiser facilities. This is an example of the general problem of diagnostic data being generated at tertiary facilities, while the images, or other large data objects they produce, need to be used from a variety of other locations such as doctors' offices or local hospitals. We describe the use of a highly distributed computing and storage architecture to provide all aspects of collecting, storing, analyzing, and accessing such large data-objects in a metropolitan area ATM network. Our large data-object management system provides the network interface between the object sources, the data management system and the users of the data. As the data is being stored, a cataloguing system automatically creates and stores condensed versions of the data, textual metadata and pointers to the original data. The catalogue system provides a Web-based graphical interface to the data. The user is able to view the low-resolution data with a standard Internet connection and Web browser. If high resolution is required, a high-speed connection and special application programs can be used to view the high-resolution original data.
Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing.
Zhang, Nan; Yang, Xiaolong; Zhang, Min; Sun, Yan
2016-01-01
Mobile cloud computing, which integrates the cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, with only a few spare resources, cannot support some resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate the distributed devices together as the resource provider of mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we may allocate the revenues based on their attributions according to the concept of the "Shapley value" to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network.
Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing
Zhang, Min; Sun, Yan
2016-01-01
Mobile cloud computing, which integrates the cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, with only a few spare resources, cannot support some resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate the distributed devices together as the resource provider of mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we may allocate the revenues based on their attributions according to the concept of the "Shapley value" to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network. PMID:28030553
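A minimal sketch of the Shapley-value revenue split referred to above, for a toy coalition game in which pooled spare resources either do or do not cover an application's demand. The characteristic function and all numbers are illustrative assumptions, not the paper's revenue model.

```python
# Exact Shapley values for a small, assumed coalition game over spare device resources.
from itertools import permutations

resources = {"A": 4, "B": 3, "C": 2}     # spare resource units offered by each device
DEMAND, REVENUE = 6, 90.0                # application needs 6 units, pays 90 when served

def value(coalition):
    """Characteristic function v(S): revenue if the coalition can serve the demand."""
    return REVENUE if sum(resources[m] for m in coalition) >= DEMAND else 0.0

def shapley(players):
    """Average each player's marginal contribution over all join orderings."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = []
        for p in order:
            before = value(coalition)
            coalition.append(p)
            phi[p] += value(coalition) - before
    return {p: phi[p] / len(orders) for p in players}

if __name__ == "__main__":
    print(shapley(list(resources)))   # larger contributors receive larger shares
```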
BIO-Plex Information System Concept
NASA Technical Reports Server (NTRS)
Jones, Harry; Boulanger, Richard; Arnold, James O. (Technical Monitor)
1999-01-01
This paper describes a suggested design for an integrated information system for the proposed BIO-Plex (Bioregenerative Planetary Life Support Systems Test Complex) at Johnson Space Center (JSC), including distributed control systems, central control, networks, database servers, personal computers and workstations, applications software, and external communications. The system will have an open commercial computing and networking, architecture. The network will provide automatic real-time transfer of information to database server computers which perform data collection and validation. This information system will support integrated, data sharing applications for everything, from system alarms to management summaries. Most existing complex process control systems have information gaps between the different real time subsystems, between these subsystems and central controller, between the central controller and system level planning and analysis application software, and between the system level applications and management overview reporting. An integrated information system is vitally necessary as the basis for the integration of planning, scheduling, modeling, monitoring, and control, which will allow improved monitoring and control based on timely, accurate and complete data. Data describing the system configuration and the real time processes can be collected, checked and reconciled, analyzed and stored in database servers that can be accessed by all applications. The required technology is available. The only opportunity to design a distributed, nonredundant, integrated system is before it is built. Retrofit is extremely difficult and costly.
NASA Astrophysics Data System (ADS)
Yu, Xu; Shao, Quanqin; Zhu, Yunhai; Deng, Yuejin; Yang, Haijun
2006-10-01
With the development of informatization and the separation between data management departments and application departments, spatial data sharing has become one of the most important objectives in building a spatial information infrastructure, and spatial metadata management, secure data transmission and data compression are the key technologies for realizing it. This paper discusses the key metadata technologies for data interoperability and examines data compression algorithms such as adaptive Huffman coding and the LZ77 and LZ78 algorithms. It also studies the application of digital signature techniques to spatial data, which can not only identify the transmitter of the data but also detect in a timely manner whether the data have been tampered with during network transmission. Based on an analysis of the symmetric encryption algorithms 3DES and AES and the asymmetric encryption algorithm RSA, combined with hash algorithms, an improved hybrid encryption method for spatial data is presented. Digital signature technology and digital watermarking technology are also discussed. A new solution for spatial data network distribution is then put forward, which adopts a three-layer architecture. Based on this framework, we present a spatial data network distribution system that is efficient and secure, and we demonstrate the feasibility and validity of the proposed solution.
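As a compact illustration of the compression family discussed above, the following Python sketch builds a static Huffman code from symbol frequencies. Note that the paper studies the adaptive variant, which updates the code while encoding; this sketch only shows the static tree construction.

```python
# Compact static Huffman coder built with a heap of (frequency, tie-breaker, code-table) entries.
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict:
    """Return a bit-string code for each byte value occurring in data."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two least-frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

if __name__ == "__main__":
    sample = b"spatial data sharing and spatial data distribution"
    codes = huffman_codes(sample)
    encoded_bits = sum(len(codes[b]) for b in sample)
    print(f"original: {len(sample) * 8} bits, Huffman-coded: {encoded_bits} bits")
```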
Multiscale analysis of river networks using the R package linbin
Welty, Ethan Z.; Torgersen, Christian E.; Brenkman, Samuel J.; Duda, Jeffrey J.; Armstrong, Jonathan B.
2015-01-01
Analytical tools are needed in riverine science and management to bridge the gap between GIS and statistical packages that were not designed for the directional and dendritic structure of streams. We introduce linbin, an R package developed for the analysis of riverscapes at multiple scales. With this software, riverine data on aquatic habitat and species distribution can be scaled and plotted automatically with respect to their position in the stream network or—in the case of temporal data—their position in time. The linbin package aggregates data into bins of different sizes as specified by the user. We provide case studies illustrating the use of the software for (1) exploring patterns at different scales by aggregating variables at a range of bin sizes, (2) comparing repeat observations by aggregating surveys into bins of common coverage, and (3) tailoring analysis to data with custom bin designs. Furthermore, we demonstrate the utility of linbin for summarizing patterns throughout an entire stream network, and we analyze the diel and seasonal movements of tagged fish past a stationary receiver to illustrate how linbin can be used with temporal data. In short, linbin enables more rapid analysis of complex data sets by fisheries managers and stream ecologists and can reveal underlying spatial and temporal patterns of fish distribution and habitat throughout a riverscape.
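A rough Python analogue of the multi-scale binning idea (linbin itself is an R package): observations positioned along a river axis are aggregated into bins of several sizes. The positions and counts below are synthetic.

```python
# Aggregate point observations along a river distance axis into bins of several sizes.
import numpy as np

def bin_sums(positions_m, values, bin_size_m):
    """Sum values within contiguous bins of the given size along the river axis."""
    positions_m = np.asarray(positions_m, dtype=float)
    values = np.asarray(values, dtype=float)
    edges = np.arange(0.0, positions_m.max() + bin_size_m, bin_size_m)
    idx = np.digitize(positions_m, edges) - 1          # bin index for each observation
    idx = np.clip(idx, 0, len(edges) - 2)              # guard the upper boundary case
    sums = np.zeros(len(edges) - 1)
    np.add.at(sums, idx, values)                       # accumulate counts per bin
    return edges, sums

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    pos = rng.uniform(0, 10_000, 500)                  # observation positions upstream (m)
    counts = rng.poisson(2, 500)                       # e.g. fish counts at each position
    for size in (500, 1000, 2500):                     # the same data at three bin sizes
        edges, sums = bin_sums(pos, counts, size)
        print(f"bin size {size:>5} m -> {len(sums)} bins, max per bin {sums.max():.0f}")
```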
Flood quantile estimation at ungauged sites by Bayesian networks
NASA Astrophysics Data System (ADS)
Mediero, L.; Santillán, D.; Garrote, L.
2012-04-01
Estimating flood quantiles at a site for which no observed measurements are available is essential for water resources planning and management. Ungauged sites have no observations of flood magnitudes, but some site and basin characteristics are known. The most common technique used is multiple regression analysis, which relates physical and climatic basin characteristics to flood quantiles. Regression equations are fitted from flood frequency data and basin characteristics at gauged sites. Regression equations are a rigid technique: they assume linear relationships between variables and cannot take measurement errors into account. In addition, the prediction intervals are estimated in a very simplistic way from the variance of the residuals of the estimated model. Bayesian networks are a probabilistic computational structure taken from the field of Artificial Intelligence; they have been widely and successfully applied to many scientific fields, like medicine and informatics, but their application to hydrology is recent. Bayesian networks infer the joint probability distribution of several related variables from observations through nodes, which represent random variables, and links, which represent causal dependencies between them. A Bayesian network is more flexible than a regression equation, as it captures non-linear relationships between variables. In addition, the probabilistic nature of Bayesian networks allows the different sources of estimation uncertainty to be taken into account, as the result is a probability distribution. A homogeneous region in the Tagus Basin was selected as the case study. A regression equation was fitted taking the basin area, the annual maximum 24-hour rainfall for a given recurrence interval and the mean height as explanatory variables. Flood quantiles at ungauged sites were then estimated by Bayesian networks. Bayesian networks need to be learnt from a sufficiently large data set; as observational data are limited, a stochastic generator of synthetic data was developed. Synthetic basin characteristics were randomised, keeping the statistical properties of the observed physical and climatic variables in the homogeneous region, and the synthetic flood quantiles were stochastically generated taking the regression equation as a basis. The learnt Bayesian network was validated by the reliability diagram, the Brier score and the ROC diagram, which are common measures used in the validation of probabilistic forecasts. In summary, flood quantile estimation through Bayesian networks supplies information about the prediction uncertainty, since the result is a probability distribution function of discharges. The Bayesian network model therefore has application as a decision support tool for water resources planning and management.
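The regional regression baseline described above can be sketched in a few lines: a log-linear model relating a flood quantile to basin area, design rainfall and mean height, fitted by ordinary least squares on synthetic data. The coefficients and data are illustrative, not the Tagus-basin values, and the Bayesian-network step itself is not reproduced here.

```python
# Log-linear regional regression of a flood quantile on synthetic basin characteristics.
import numpy as np

rng = np.random.default_rng(42)
n = 200
area_km2 = rng.uniform(50, 2000, n)
rain_mm  = rng.uniform(40, 160, n)      # annual-maximum 24-h rainfall for a return period
height_m = rng.uniform(300, 1500, n)

# Assumed "true" generating model for the synthetic quantiles (lognormal noise).
q100 = 0.05 * area_km2**0.75 * rain_mm**1.1 * height_m**-0.2 * rng.lognormal(0.0, 0.25, n)

# Fit log Q = b0 + b1 log A + b2 log P + b3 log H by ordinary least squares.
X = np.column_stack([np.ones(n), np.log(area_km2), np.log(rain_mm), np.log(height_m)])
coef, *_ = np.linalg.lstsq(X, np.log(q100), rcond=None)
print("fitted exponents (A, P, H):", np.round(coef[1:], 2))

# Point estimate for an "ungauged" basin with known characteristics.
x_new = np.array([1.0, np.log(800.0), np.log(95.0), np.log(700.0)])
print("estimated Q100 (m^3/s):", round(float(np.exp(x_new @ coef)), 1))
```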
Liukko, Jyri; Kuuva, Niina
2017-07-01
This article explores which concrete factors hinder or facilitate the cooperation of return-to-work (RTW) professionals in a complex system of multiple stakeholders. The empirical material consists of in-depth interviews with 24 RTW professionals from various organizations involved in work disability management in Finland. The interviews were analyzed using thematic content analysis. The study revealed several kinds of challenges in the cooperation of the professionals. These were related to two partly interrelated themes: communication and distribution of responsibility. The most difficult problems were connected to the cooperation between public employment offices and other stakeholders. However, the study distinguished notable regional differences depending primarily on the scale of the local network. The main areas of improvement proposed by the interviewees were related to better networking of case managers and expansion of expertise. The article argues for the importance of systematic networking and stresses the role of public employment services in the multi-actor management of work disabilities. The article contributes to existing work disability case management models by suggesting the employment administration system as an important component in addition to health care, workplace and insurance systems. The study also highlights the need for expansion of expertise in the field. Implications for Rehabilitation Cooperation between RTW professionals in public employment offices and other organizations involved in work disability management was considered inadequate. In order to improve the cooperation of RTW professionals, the stakeholders need to create more systematic ways of communication and networking with professionals in other organizations. There is a need to expand the expertise in work disability management and rehabilitation, partly by increasing the role of other professionals than physicians.
Distributed Virtual System (DIVIRS) Project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, B. Clifford
1993-01-01
As outlined in our continuation proposal 92-ISI-50R (revised) on contract NCC 2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to program parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the virtual system model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
DIstributed VIRtual System (DIVIRS) project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, B. Clifford
1994-01-01
As outlined in our continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the Virtual System Model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
DIstributed VIRtual System (DIVIRS) project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, Clifford B.
1995-01-01
As outlined in our continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the Virtual System Model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
Popularity Prediction Tool for ATLAS Distributed Data Management
NASA Astrophysics Data System (ADS)
Beermann, T.; Maettig, P.; Stewart, G.; Lassnig, M.; Garonne, V.; Barisits, M.; Vigne, R.; Serfon, C.; Goossens, L.; Nairz, A.; Molfetas, A.; Atlas Collaboration
2014-06-01
This paper describes a popularity prediction tool for data-intensive data management systems, such as ATLAS distributed data management (DDM). It is fed by the DDM popularity system, which produces historical reports about ATLAS data usage, providing information about files, datasets, users and sites where data was accessed. The tool described in this contribution uses this historical information to make a prediction about the future popularity of data. It finds trends in the usage of data using a set of neural networks and a set of input parameters, and predicts the number of accesses in the near-term future. This information can then be used in a second step to improve the distribution of replicas at sites, taking into account the cost of creating new replicas (bandwidth and load on the storage system) compared to the gain of having new ones (faster access to data for analysis). To evaluate the benefit of the redistribution, a grid simulator is introduced that is able to replay real workloads on different data distributions. This article describes the popularity prediction method and the simulator that is used to evaluate the redistribution.
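A toy version of the prediction step only: a small neural-network regressor trained on sliding windows of past access counts to predict the next period's accesses. The features, architecture and data are illustrative; they are not the actual ATLAS DDM popularity model.

```python
# Sliding-window access-count prediction with a small MLP regressor (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
weeks = 156
# Synthetic weekly access counts for one dataset: trend + seasonality + noise.
accesses = np.clip(50 + 0.3 * np.arange(weeks)
                   + 20 * np.sin(np.arange(weeks) / 6.0)
                   + rng.normal(0, 8, weeks), 0, None)

window = 8
X = np.array([accesses[i:i + window] for i in range(weeks - window)])
y = accesses[window:]                               # next-week accesses to predict

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mae = np.mean(np.abs(pred - y[split:]))
print(f"mean absolute error on held-out weeks: {mae:.1f} accesses")
```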
Distributed Virtual System (DIVIRS) project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, B. Clifford
1993-01-01
As outlined in the continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC 2-539, the investigators are developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; developing communications routines that support the abstractions implemented; continuing the development of file and information systems based on the Virtual System Model; and incorporating appropriate security measures to allow the mechanisms developed to be used on an open network. The goal throughout the work is to provide a uniform model that can be applied to both parallel and distributed systems. The authors believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. The work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
Baldziki, Mike; Brown, Jeff; Chan, Hungching; Cheetham, T Craig; Conn, Thomas; Daniel, Gregory W; Hendrickson, Mark; Hilbrich, Lutz; Johnson, Ayanna; Miller, Steven B; Moore, Tom; Motheral, Brenda; Priddy, Sarah A; Raebel, Marsha A; Randhawa, Gurvaneet; Surratt, Penny; Walraven, Cheryl; White, T Jeff; Bruns, Kevin; Carden, Mary Jo; Dragovich, Charlie; Eichelberger, Bernadette; Rosato, Edith; Sega, Todd
2015-01-01
The Biologics Price Competition and Innovation Act, introduced as part of the Affordable Care Act, directed the FDA to create an approval pathway for biologic products shown to be biosimilar or interchangeable with an FDA-approved innovator drug. These biosimilars will not be chemically identical to the reference agent. Investigational studies conducted with biosimilar agents will likely provide limited real-world evidence of their effectiveness and safety. How do we best monitor effectiveness and safety of biosimilar products once approved by the FDA and used more extensively by patients? To determine the feasibility of developing a distributed research network that will use health insurance plan and health delivery system data to detect biosimilar safety and effectiveness signals early and be able to answer important managed care pharmacy questions from both the government and managed care organizations. Twenty-one members of the AMCP Task Force on Biosimilar Collective Intelligence Systems met November 12, 2013, to discuss issues involved in designing this consortium and to explore next steps. The task force concluded that a managed care biosimilars research consortium would be of significant value. Task force members agreed that it is best to use a distributed research network structurally similar to existing DARTNet, HMO Research Network, and Mini-Sentinel consortia. However, for some surveillance projects that it undertakes, the task force recognizes it may need supplemental data from managed care and other sources (i.e., a "hybrid" structure model). The task force believes that AMCP is well positioned to lead the biosimilar-monitoring effort and that the next step to developing a biosimilar-innovator collective intelligence system is to convene an advisory council to address organizational governance.
Telescience - Optimizing aerospace science return through geographically distributed operations
NASA Technical Reports Server (NTRS)
Rasmussen, Daryl N.; Mian, Arshad M.
1990-01-01
The paper examines the objectives and requirements of teleoperations, defined as the means and process for scientists, NASA operations personnel, and astronauts to conduct payload operations as if these were colocated. This process is described in terms of Space Station era platforms. Some of the enabling technologies are discussed, including open architecture workstations, distributed computing, transaction management, expert systems, and high-speed networks. Recent testbedding experiments are surveyed to highlight some of the human factors requirements.
Enterprise Management Network Architecture Distributed Knowledge Base Support
1990-11-01
Potentially, this makes a distributed system more powerful than a conventional, centralized one in two ways: first, it can be more reliable...does not completely apply [35]. The grain size of the processors measures the individual problem-solving power of the agents. In this definition...problem-solving power amounts to the conceptual size of a single action taken by an agent that is visible to the other agents in the system. If the grain is coarse
Co-evolution of electric and telecommunications networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivkin, S.R.
1998-05-01
There are potentially significant societal benefits in co-evolution between electricity and telecommunications in the areas of common infrastructure, accelerated deployment of distributed energy, tighter integration of information flow for energy management and distribution, and improved customer care. With due regard for natural processes that are more potent than any regulation and more real than any ideology, the gains from co-evolution would far outweigh the attenuated and speculative savings from restructuring of electricity that is too simplistic.
Towards a Cloud Based Smart Traffic Management Framework
NASA Astrophysics Data System (ADS)
Rahimi, M. M.; Hakimpour, F.
2017-09-01
Traffic big data has brought many opportunities for traffic management applications. However, several challenges, such as the heterogeneity, storage, management, processing and analysis of traffic big data, may hinder their efficient and real-time application. These challenges call for a well-adapted distributed framework for smart traffic management that can efficiently handle big traffic data integration, indexing, query processing, mining and analysis. In this paper, we present a novel, distributed, scalable and efficient framework for traffic management applications. The proposed cloud-computing-based framework addresses the technical challenges of efficient and real-time storage, management, processing and analysis of traffic big data. For the evaluation of the framework, we used OpenStreetMap (OSM) real trajectories and road networks in a distributed environment. Our evaluation results indicate that the speed of importing data into this framework exceeds 8000 records per second when the dataset size is near 5 million records. We also evaluated the performance of data retrieval in the proposed framework: the retrieval speed exceeds 15000 records per second at the same dataset size. We further evaluated the scalability and performance of the framework by parallelising a critical pre-analysis step in transportation applications. The results show that the proposed framework achieves considerable performance and efficiency in traffic management applications.
Use of DHCP to provide essential information for care and management of HIV patients.
Pfeil, C N; Ivey, J L; Hoffman, J D; Kuhn, I M
1991-01-01
The Department of Veterans' Affairs (VA) has reported over 10,000 Acquired Immune Deficiency Syndrome (AIDS) cases since the beginning of the epidemic. These cases were distributed throughout 152 of the VA's network of 172 medical centers and outpatient clinics. This network of health care facilities presents a unique opportunity to provide computer based information systems for clinical care and resource monitoring for these patients. The VA further facilitates such a venture through its commitment to the Decentralized Hospital Computer Program (DHCP). This paper describes a new application within DHCP known as the VA's HIV Registry. This project addresses the need to support clinical information as well as the added need to manage the resources necessary to care for HIV patients.
NASA Astrophysics Data System (ADS)
Samsinar, Riza; Suseno, Jatmiko Endro; Widodo, Catur Edi
2018-02-01
The distribution network is the part of the power grid closest to the customers of electric service providers such as PT. PLN. The dispatching center of a power grid company is also the data center of the grid, where a great amount of operating information is gathered. The valuable information contained in these data means a lot for power grid operating management. Data warehousing and online analytical processing (OLAP) techniques have been used to manage and analyse this large volume of data. The specific outputs of the online analytics information system resulting from data warehouse processing with OLAP are chart and query reporting. The chart reporting consists of load distribution charts over repeating time periods, load distribution charts by area, substation region charts and electric load usage charts. The results of the OLAP process show the development of the electric load distribution, provide analysis of electric power consumption, and offer an alternative way of presenting information related to peak load.
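As a rough illustration of the kind of chart-oriented roll-ups described above, the following pandas sketch aggregates load records along time, area and substation dimensions. The column names and sample values are hypothetical, not taken from the paper, and the paper's actual OLAP implementation sits on a data warehouse rather than an in-memory DataFrame.

```python
# Illustrative sketch only: a pandas stand-in for the OLAP-style roll-ups the
# abstract describes. Column names (timestamp, area, substation, load_kw) are
# hypothetical and not taken from the paper.
import pandas as pd

def load_rollups(df: pd.DataFrame) -> dict:
    """Aggregate distribution-network load records along common OLAP dimensions."""
    df = df.copy()
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    df["hour"] = df["timestamp"].dt.hour
    return {
        # load distribution over repeating time periods (hour of day)
        "by_hour": df.groupby("hour")["load_kw"].mean(),
        # load distribution by service area
        "by_area": df.groupby("area")["load_kw"].sum(),
        # load per substation region
        "by_substation": df.groupby("substation")["load_kw"].sum(),
        # simple peak-load indicator per area
        "peak_by_area": df.groupby("area")["load_kw"].max(),
    }

if __name__ == "__main__":
    sample = pd.DataFrame({
        "timestamp": ["2018-01-01 09:00", "2018-01-01 19:00", "2018-01-01 19:30"],
        "area": ["North", "North", "South"],
        "substation": ["SS-1", "SS-1", "SS-2"],
        "load_kw": [120.0, 310.0, 275.0],
    })
    for name, series in load_rollups(sample).items():
        print(name, series.to_dict())
```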
A prototype Infrastructure for Cloud-based distributed services in High Availability over WAN
NASA Astrophysics Data System (ADS)
Bulfon, C.; Carlino, G.; De Salvo, A.; Doria, A.; Graziosi, C.; Pardi, S.; Sanchez, A.; Carboni, M.; Bolletta, P.; Puccio, L.; Capone, V.; Merola, L.
2015-12-01
In this work we present the architectural and performance studies concerning a prototype of a distributed Tier2 infrastructure for HEP, instantiated between the two Italian sites of INFN-Roma1 and INFN-Napoli. The network infrastructure is based on a Layer-2 geographical link, provided by the Italian NREN (GARR), directly connecting the two remote LANs of the named sites. By exploiting the possibilities offered by new distributed file systems, a shared storage area with synchronous copy has been set up. The computing infrastructure, based on an OpenStack facility, uses a set of distributed hypervisors installed at both sites. The main parameter to be taken into account when managing two remote sites with a single framework is the effect of latency, due to the distance and the end-to-end service overhead. In order to understand the capabilities and limits of our setup, the impact of latency has been investigated by means of a set of stress tests, including data I/O throughput, metadata access performance evaluation and network occupancy, during the life cycle of a Virtual Machine. A set of resilience tests has also been performed, in order to verify the stability of the system in the event of hardware or software faults. The results of this work show that the reliability and robustness of the chosen architecture are sufficient to build a production system and to provide common services. This prototype can also be extended to multiple sites with small changes to the network topology, thus creating a National Network of Cloud-based distributed services in HA over WAN.
A multi-period distribution network design model under demand uncertainty
NASA Astrophysics Data System (ADS)
Tabrizi, Babak H.; Razmi, Jafar
2013-05-01
Supply chain management is considered an inseparable component of satisfying customers' requirements. This paper deals with the distribution network design (DND) problem, which is a critical issue in achieving supply chain accomplishments: a capable DND can underpin the performance of the entire network. On the one hand, there are many factors that can cause fluctuations in the input data determining market treatment, with respect to short-term planning; on the other hand, network performance may be threatened by the changes that take place within the practicing periods, with respect to long-term planning. Thus, in order to bring both kinds of change under control, we consider a new multi-period, multi-commodity, multi-source DND problem in circumstances where the network encounters uncertain demands. Fuzzy logic is applied here as an efficient tool for controlling the risk associated with potential customers' demand. The defuzzifying framework allows practitioners and decision-makers to interact with the solution procedure continuously. The fuzzy model is then validated by a sensitivity analysis test, and a typical problem is solved in order to illustrate the implementation steps. Finally, the formulation is tested on problems of different sizes to show its overall performance.
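The abstract does not give the exact fuzzy formulation, so the following is only a minimal sketch of the general idea: an uncertain demand represented as a triangular fuzzy number is reduced to a crisp value (here by a simple centroid rule) before it enters a deterministic design model. The field names and values are assumptions for illustration.

```python
# Minimal sketch, not the paper's exact formulation: representing an uncertain
# customer demand as a triangular fuzzy number (low, mode, high) and reducing it
# to a crisp value by the centroid (centre-of-gravity) rule before feeding it to
# a deterministic network-design model.
from dataclasses import dataclass

@dataclass
class TriangularFuzzyDemand:
    low: float    # pessimistic demand estimate
    mode: float   # most likely demand
    high: float   # optimistic demand estimate

    def defuzzify(self) -> float:
        # Centroid of a triangular membership function.
        return (self.low + self.mode + self.high) / 3.0

if __name__ == "__main__":
    demand = TriangularFuzzyDemand(low=800.0, mode=1000.0, high=1350.0)
    print(f"crisp demand used by the planner: {demand.defuzzify():.1f}")
```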
NASA Astrophysics Data System (ADS)
Balcas, J.; Hendricks, T. W.; Kcira, D.; Mughal, A.; Newman, H.; Spiropulu, M.; Vlimant, J. R.
2017-10-01
The SDN Next Generation Integrated Architecture (SDN-NGenIA) project addresses some of the key challenges facing the present and next generations of science programs in HEP, astrophysics, and other fields, whose potential discoveries depend on their ability to distribute, process and analyze globally distributed Petascale to Exascale datasets. The SDN-NGenIA system under development by Caltech and partner HEP and network teams focuses on the coordinated use of network, computing and storage infrastructures. It builds on the experience gained in previous and recently completed projects that use dynamic circuits with bandwidth guarantees to support major network flows, as demonstrated across the LHC Open Network Environment [1] and in large-scale demonstrations over the last three years, and recently integrated with the PhEDEx and Asynchronous Stage Out data management applications of the CMS experiment at the Large Hadron Collider. In addition to the general program goal of supporting the network needs of the LHC and other science programs with similar needs, a recent focus is the use of the Leadership Computing Facility at Argonne National Laboratory (ALCF) for data-intensive applications.
El-Chakhtoura, Joline; Prest, Emmanuelle; Saikaly, Pascal; van Loosdrecht, Mark; Hammes, Frederik; Vrouwenvelder, Hans
2015-05-01
Understanding the biological stability of drinking water distribution systems is imperative in the framework of process control and risk management. The objective of this research was to examine the dynamics of the bacterial community during drinking water distribution at high temporal resolution. Water samples (156 in total) were collected over short time-scales (minutes/hours/days) from the outlet of a treatment plant and a location in its corresponding distribution network. The drinking water is treated by biofiltration and disinfectant residuals are absent during distribution. The community was analyzed by 16S rRNA gene pyrosequencing and flow cytometry as well as conventional, culture-based methods. Despite a random dramatic event (detected with pyrosequencing and flow cytometry but not with plate counts), the bacterial community profile at the two locations did not vary significantly over time. A diverse core microbiome was shared between the two locations (58-65% of the taxa and 86-91% of the sequences) and found to be dependent on the treatment strategy. The bacterial community structure changed during distribution, with greater richness detected in the network and phyla such as Acidobacteria and Gemmatimonadetes becoming abundant. The rare taxa displayed the highest dynamicity, causing the major change during water distribution. This change did not have hygienic implications and is contingent on the sensitivity of the applied methods. The concept of biological stability therefore needs to be revised. Biostability is generally desired in drinking water guidelines but may be difficult to achieve in large-scale complex distribution systems that are inherently dynamic. Copyright © 2015 Elsevier Ltd. All rights reserved.
Internet Portal For A Distributed Management of Groundwater
NASA Astrophysics Data System (ADS)
Meissner, U. F.; Rueppel, U.; Gutzke, T.; Seewald, G.; Petersen, M.
The management of groundwater resources for the supply of German cities and suburban areas has become a matter of public interest during the last years. Negative headlines in the Rhein-Main area dealt with cracks in buildings as well as damaged woodlands and inundated agricultural areas as an effect of varying groundwater levels. Usually a holistic management of groundwater resources does not exist because of the complexity of the geological system, the large number of involved groups with their divergent interests, and a lack of essential information. The development of a network-based information system for an efficient groundwater management was the target of the project "Grundwasser-Online" [1]. The management of groundwater resources has to take into account various hydrogeological, climatic, water-economical, chemical and biological interrelations [2]. Thus, the traditional approaches to information retrieval, which are characterised by high personnel and time expenditure, are not sufficient. Furthermore, the efficient control of groundwater cultivation requires direct communication between the different water supply companies, the consultant engineers, the scientists, the governmental agencies and the public, using computer networks. The presented groundwater information system consists of different components, especially for the collection, storage, evaluation and visualisation of groundwater-relevant information. Network-based technologies are used [3]. For the collection of time-dependent groundwater-relevant information, modern Mobile Computing technologies have been analysed in order to provide an integrated approach to the management of large groundwater systems. The aggregated information is stored within a distributed geo-scientific database system which enables a direct integration of simulation programs for the evaluation of interactions in groundwater systems. Thus, even a prognosis for the evolution of groundwater states can be given. In order to generate reports automatically, technologies are utilised. The visualisation of geo-scientific databases in the internet, considering their geographic reference, is performed with internet map servers. For the communication of the map server with the underlying geo-scientific database, it is necessary that the requested data can be filtered interactively in the internet browser using chronological and logical criteria. With regard to public use, the security aspects within the described distributed system are of major importance. Therefore, security methods for the modelling of access rights in combination with digital signatures have been analysed and implemented in order to provide secure data exchange and communication between the different partners in the network.
Resistance Genes in Global Crop Breeding Networks.
Garrett, K A; Andersen, K F; Asche, F; Bowden, R L; Forbes, G A; Kulakow, P A; Zhou, B
2017-10-01
Resistance genes are a major tool for managing crop diseases. The networks of crop breeders who exchange resistance genes and deploy them in varieties help to determine the global landscape of resistance and epidemics, an important system for maintaining food security. These networks function as a complex adaptive system, with associated strengths and vulnerabilities, and implications for policies to support resistance gene deployment strategies. Extensions of epidemic network analysis can be used to evaluate the multilayer agricultural networks that support and influence crop breeding networks. Here, we evaluate the general structure of crop breeding networks for cassava, potato, rice, and wheat. All four are clustered due to phytosanitary and intellectual property regulations, and linked through CGIAR hubs. Cassava networks primarily include public breeding groups, whereas others are more mixed. These systems must adapt to global change in climate and land use, the emergence of new diseases, and disruptive breeding technologies. Research priorities to support policy include how best to maintain both diversity and redundancy in the roles played by individual crop breeding groups (public versus private and global versus local), and how best to manage connectivity to optimize resistance gene deployment while avoiding risks to the useful life of resistance genes. Copyright © 2017 The Author(s). This is an open access article distributed under the CC BY 4.0 International license.
NASA Technical Reports Server (NTRS)
Birman, Kenneth; Cooper, Robert; Marzullo, Keith
1990-01-01
The ISIS project has developed a new methodology, virtual synchrony, for writing robust distributed software. High-performance multicast, large-scale applications, and wide-area networks are the focus of interest. Several interesting applications that exploit the strengths of ISIS, including an NFS-compatible replicated file system, are being developed. The META project addresses distributed control in a soft real-time environment incorporating feedback. This domain encompasses examples as diverse as monitoring inventory and consumption on a factory floor, and performing load balancing on a distributed computing system. One of the first uses of META is for distributed application management: the tasks of configuring a distributed program, dynamically adapting to failures, and monitoring its performance. Recent progress and current plans are reported.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This report contains papers on the following topics: NREN Security Issues: Policies and Technologies; Layer Wars: Protect the Internet with Network Layer Security; Electronic Commission Management; Workflow 2000 - Electronic Document Authorization in Practice; Security Issues of a UNIX PEM Implementation; Implementing Privacy Enhanced Mail on VMS; Distributed Public Key Certificate Management; Protecting the Integrity of Privacy-enhanced Electronic Mail; Practical Authorization in Large Heterogeneous Distributed Systems; Security Issues in the Truffles File System; Issues surrounding the use of Cryptographic Algorithms and Smart Card Applications; Smart Card Augmentation of Kerberos; and An Overview of the Advanced Smart Card Access Control System. Selected papers were processed separately for inclusion in the Energy Science and Technology Database.
NASA Astrophysics Data System (ADS)
Pop, Florin; Dobre, Ciprian; Mocanu, Bogdan-Costel; Citoteanu, Oana-Maria; Xhafa, Fatos
2016-11-01
Managing the large volumes of data processed in distributed systems formed by datacentres and mobile devices has become a challenging issue with an important impact on the end-user. The management of such systems can therefore be achieved efficiently by using uniform overlay networks, interconnected through secure and efficient routing protocols. The aim of this article is to advance our previous work with a novel trust model based on a reputation metric that actively uses the social links between users and the model of interaction between them. We present and evaluate an adaptive model for trust management in structured overlay networks, based on a Mobile Cloud architecture and considering a honeycomb overlay. Such a model can be useful for supporting advanced mobile market-share e-Commerce platforms, where users collaborate and exchange reliable information about, for example, products of interest, and for supporting ad-hoc business campaigns.
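As a hedged illustration of a reputation metric that exploits social links, the sketch below blends a node's direct interaction history with peer opinions weighted by social-tie strength. The weighting scheme, the parameter alpha and the field names are assumptions for illustration, not the authors' formulas.

```python
# Hedged illustration of a reputation metric that blends a node's own interaction
# history with opinions propagated over social links, in the spirit of the trust
# model described above. The weighting scheme and field names are assumptions,
# not the authors' formulas.
from typing import Dict

def reputation(direct_score: float,
               peer_opinions: Dict[str, float],
               social_strength: Dict[str, float],
               alpha: float = 0.6) -> float:
    """Blend direct experience with socially weighted peer feedback.

    direct_score    -- fraction of successful past interactions with the target (0..1)
    peer_opinions   -- peer id -> that peer's trust in the target (0..1)
    social_strength -- peer id -> strength of our social link to that peer (0..1)
    alpha           -- weight given to direct experience
    """
    total_strength = sum(social_strength.get(p, 0.0) for p in peer_opinions)
    if total_strength == 0:
        social_component = direct_score  # fall back to direct experience only
    else:
        social_component = sum(
            opinion * social_strength.get(peer, 0.0)
            for peer, opinion in peer_opinions.items()
        ) / total_strength
    return alpha * direct_score + (1.0 - alpha) * social_component

if __name__ == "__main__":
    print(reputation(0.9, {"friend_a": 0.8, "stranger_b": 0.2},
                     {"friend_a": 1.0, "stranger_b": 0.1}))
```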
Enterprise-wide worklist management.
Locko, Roberta C; Blume, Hartwig; Goble, John C
2002-01-01
Radiologists in multi-facility health care delivery networks must serve not only their own departments but also the departments of associated clinical facilities. We describe our experience with a picture archiving and communication system (PACS) implementation that provides a dynamic view of the relevant radiological workload across multiple facilities. We implemented a distributed query system that permits management of enterprise worklists based on modality, body part, exam status, and other criteria spanning multiple compatible PACSs. Dynamic worklists, with less flexibility, can be constructed if the incompatible PACSs support specific DICOM functionality. Enterprise-wide worklists were implemented across the Generations Plus/Northern Manhattan Health Network, linking the radiology departments of three hospitals (Harlem, Lincoln, and Metropolitan) with 1465 beds and 4260 ambulatory patients per day. Enterprise-wide, dynamic worklist management improves the utilization of radiologists and enhances the quality of care across large multi-facility health care delivery organizations. Integration of other workflow-related components remains a significant challenge.
Fault-tolerant battery system employing intra-battery network architecture
Hagen, Ronald A.; Chen, Kenneth W.; Comte, Christophe; Knudson, Orlin B.; Rouillard, Jean
2000-01-01
A distributed energy storing system employing a communications network is disclosed. A distributed battery system includes a number of energy storing modules, each of which includes a processor and communications interface. In a network mode of operation, a battery computer communicates with each of the module processors over an intra-battery network and cooperates with individual module processors to coordinate module monitoring and control operations. The battery computer monitors a number of battery and module conditions, including the potential and current state of the battery and individual modules, and the conditions of the battery's thermal management system. An over-discharge protection system, equalization adjustment system, and communications system are also controlled by the battery computer. The battery computer logs and reports various status data on battery level conditions which may be reported to a separate system platform computer. A module transitions to a stand-alone mode of operation if the module detects an absence of communication connectivity with the battery computer. A module which operates in a stand-alone mode performs various monitoring and control functions locally within the module to ensure safe and continued operation.
Jurdak, Raja
2013-01-01
Understanding the drivers of urban mobility is vital for epidemiology, urban planning, and communication networks. Human movements have so far been studied by observing people's positions in a given space and time, though most recent models only implicitly account for expected costs and returns for movements. This paper explores the explicit impact of cost and network topology on mobility dynamics, using data from 2 city-wide public bicycle share systems in the USA. User mobility is characterized through the distribution of trip durations, while network topology is characterized through the pairwise distances between stations and the popularity of stations and routes. Despite significant differences in station density and physical layout between the 2 cities, trip durations follow remarkably similar distributions that exhibit cost sensitive trends around pricing point boundaries, particularly with long-term users of the system. Based on the results, recommendations for dynamic pricing and incentive schemes are provided to positively influence mobility patterns and guide improved planning and management of public bicycle systems to increase uptake.
Drug procurement and management.
Salhotra, V S
2003-03-01
A strong drug procurement and management system under the RNTCP is critical to programme success. Significant improvements in manufacturing, inspection, supply, storage and quality control practices and procedures have been achieved through an intensive RNTCP network. The drugs used in the RNTCP are rifampicin, isoniazid, ethambutol, pyrazinamide and streptomycin. TB patients are categorised into categories I, II and III, and each category has a different standardised treatment regimen. The procurement, distribution and quality assurance of drugs are described briefly in this article.
Smart Grid Constraint Violation Management for Balancing and Regulating Purposes
Bhattarai, Bishnu; Kouzelis, Konstantinos; Mendaza, Iker; ...
2017-03-29
The gradual penetration of active loads in low-voltage distribution grids is expected to challenge their network capacity in the near future. Distribution system operators should for this reason resort either to costly grid reinforcements or to demand side management mechanisms. Since demand side management implementation is usually cheaper, it is also the favorable solution. To this end, this article presents a framework for handling grid limit violations, both voltage and current, to ensure a secure and qualitative operation of the distribution grid. This framework consists of two steps, namely a proactive centralized and subsequently a reactive decentralized control scheme. The former is employed to balance the one-hour-ahead load, while the latter aims at regulating consumption in real time. In both cases, the importance of fair use of electricity demand flexibility is emphasized. It is demonstrated that this methodology aids in keeping the grid status within preset limits while utilizing flexibility from all flexibility participants.
NASA Astrophysics Data System (ADS)
Cristofoletti, P.; Esposito, A.; Anzidei, M.
2003-04-01
This paper presents the methodologies and issues involved in the use of GIS techniques to manage geodetic information derived from networks in seismic and volcanic areas. The organization and manipulation of different geodetic, geological and seismic databases gives us a new challenge in the interpretation of information that has several dimensions, including spatial and temporal variations; the flexibility and broad range of tools available in GeoNetGIS also make it an attractive platform for earthquake risk assessment. During the last decade the use of geodetic networks based on the Global Positioning System, devoted to geophysical applications and especially to crustal deformation monitoring in seismic and volcanic areas, has increased dramatically. The large amount of data provided by these networks, combined with different and independent observations, such as the epicentre distribution of recent and historical earthquakes, geological and structural data, and photo interpretation of aerial and satellite images, can aid in the detection and parameterization of seismogenic sources. In particular, we applied our geodetically oriented GIS to a new GPS network recently set up and surveyed in the Central Apennine region: the CA-GeoNet. GeoNetGIS is designed to analyze GPS sources in three and four dimensions and to improve crustal deformation analysis and its interpretation in relation to tectonic structures and seismicity. It manages several databases (DBMS) consisting of different classes, such as Geodesy, Topography, Seismicity, Geology, Geography and Raster Images, administered according to Thematic Layers. GeoNetGIS represents a powerful research tool that joins the analysis of all data layers and integrates the different databases, which helps to identify the activity of known faults or structures and suggests new evidence of active tectonics. The new approach to data integration given by GeoNetGIS allows us to create and deliver a wide range of maps and digital, 3-dimensional data analysis applications for geophysical users and civil defense companies, distributing them on the World Wide Web or over wireless connections realized by PDA computers. It runs on a powerful PC platform under the Windows 2000 Professional operating system and is based on ESRI ArcGIS 8.2 software.
Limitations of demand- and pressure-driven modeling for large deficient networks
NASA Astrophysics Data System (ADS)
Braun, Mathias; Piller, Olivier; Deuerlein, Jochen; Mortazavi, Iraj
2017-10-01
The calculation of hydraulic state variables for a network is an important task in managing the distribution of potable water. Over the years the mathematical modeling process has been improved by numerous researchers for utilization in new computer applications and for more realistic modeling of water distribution networks. But, in spite of these continuous advances, there are still a number of physical phenomena that may not be handled correctly by current models. This paper takes a closer look at the two modeling paradigms given by demand- and pressure-driven modeling. The basic equations are introduced and parallels are drawn with optimization formulations from electrical engineering. These formulations guarantee the existence and uniqueness of the solution. One of the central questions of the French-German research project ResiWater is the investigation of network resilience in the case of extreme events or disasters. Under such extraordinary conditions, where models are pushed beyond their limits, we talk about deficient network models. Examples of deficient networks are given by highly regulated flow, leakage or pipe bursts, and cases where the pressure falls below the vapor pressure of water. These examples are presented and analyzed with respect to the solvability and physical correctness of the solutions under demand- and pressure-driven models.
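For readers unfamiliar with the two paradigms, the following LaTeX sketch shows one commonly cited pressure-outflow relation (often attributed to Wagner et al.); the thresholds and exponent are modelling choices, and the paper's own formulation may differ.

```latex
% Demand-driven modelling fixes the nodal outflow at the requested demand,
% d_i = d_i^{req}, regardless of the available pressure. A widely used
% pressure-driven alternative makes the delivered demand a function of the
% nodal pressure head h_i (sketch; thresholds and exponent are model choices):
\[
d_i(h_i) =
\begin{cases}
0, & h_i \le h_i^{\min},\\[4pt]
d_i^{req}\left(\dfrac{h_i - h_i^{\min}}{h_i^{ser} - h_i^{\min}}\right)^{1/2},
  & h_i^{\min} < h_i < h_i^{ser},\\[8pt]
d_i^{req}, & h_i \ge h_i^{ser},
\end{cases}
\]
% where h_i^{min} is the head below which no water can be delivered and
% h_i^{ser} is the service head at which the full demand is satisfied.
```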
Assessing urban strategies for reducing the impacts of extreme weather on infrastructure networks.
Pregnolato, Maria; Ford, Alistair; Robson, Craig; Glenis, Vassilis; Barr, Stuart; Dawson, Richard
2016-05-01
Critical infrastructure networks, including transport, are crucial to the social and economic function of urban areas but are at increasing risk from natural hazards. Minimizing disruption to these networks should form part of a strategy to increase urban resilience. A framework for assessing the disruption from flood events to transport systems is presented that couples a high-resolution urban flood model with transport modelling and network analytics to assess the impacts of extreme rainfall events, and to quantify the resilience value of different adaptation options. A case study in Newcastle upon Tyne in the UK shows that both green roof infrastructure and traditional engineering interventions such as culverts or flood walls can reduce transport disruption from flooding. The magnitude of these benefits depends on the flood event and adaptation strategy, but for the scenarios considered here 3-22% improvements in city-wide travel times are achieved. The network metric of betweenness centrality, weighted by travel time, is shown to provide a rapid approach to identify and prioritize the most critical locations for flood risk management intervention. Protecting just the top ranked critical location from flooding provides an 11% reduction in person delays. A city-wide deployment of green roofs achieves a 26% reduction, and although key routes still flood, the benefits of this strategy are more evenly distributed across the transport network as flood depths are reduced across the model domain. Both options should form part of an urban flood risk management strategy, but this method can be used to optimize investment and target limited resources at critical locations, enabling green infrastructure strategies to be gradually implemented over the longer term to provide city-wide benefits. This framework provides a means of prioritizing limited financial resources to improve resilience. This is particularly important as flood management investments must typically exceed a far higher benefit-cost threshold than transport infrastructure investments. By capturing the value to the transport network from flood management interventions, it is possible to create new business models that provide benefits to, and enhance the resilience of, both transport and flood risk management infrastructures. Further work will develop the framework to consider other hazards and infrastructure networks.
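A minimal sketch of the centrality computation mentioned above is given below, using networkx and travel time as the edge weight; the edge list and values are hypothetical, and this is not the authors' code.

```python
# Illustrative sketch (not the authors' code): ranking road junctions by
# betweenness centrality weighted by link travel time, the metric the study
# uses to prioritise locations for flood-risk interventions. Edge attributes
# and example values are hypothetical.
import networkx as nx

def rank_critical_junctions(edges):
    """edges: iterable of (u, v, travel_time_minutes)."""
    G = nx.Graph()
    G.add_weighted_edges_from(edges, weight="travel_time")
    # Shortest paths are computed with travel time as the edge cost.
    centrality = nx.betweenness_centrality(G, weight="travel_time", normalized=True)
    return sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    example_network = [
        ("A", "B", 4.0), ("B", "C", 2.5), ("C", "D", 3.0),
        ("A", "C", 7.0), ("B", "D", 6.0),
    ]
    for node, score in rank_critical_junctions(example_network):
        print(f"{node}: {score:.3f}")
```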
Assessing urban strategies for reducing the impacts of extreme weather on infrastructure networks
Pregnolato, Maria; Ford, Alistair; Robson, Craig; Glenis, Vassilis; Barr, Stuart; Dawson, Richard
2016-01-01
Critical infrastructure networks, including transport, are crucial to the social and economic function of urban areas but are at increasing risk from natural hazards. Minimizing disruption to these networks should form part of a strategy to increase urban resilience. A framework for assessing the disruption from flood events to transport systems is presented that couples a high-resolution urban flood model with transport modelling and network analytics to assess the impacts of extreme rainfall events, and to quantify the resilience value of different adaptation options. A case study in Newcastle upon Tyne in the UK shows that both green roof infrastructure and traditional engineering interventions such as culverts or flood walls can reduce transport disruption from flooding. The magnitude of these benefits depends on the flood event and adaptation strategy, but for the scenarios considered here 3–22% improvements in city-wide travel times are achieved. The network metric of betweenness centrality, weighted by travel time, is shown to provide a rapid approach to identify and prioritize the most critical locations for flood risk management intervention. Protecting just the top ranked critical location from flooding provides an 11% reduction in person delays. A city-wide deployment of green roofs achieves a 26% reduction, and although key routes still flood, the benefits of this strategy are more evenly distributed across the transport network as flood depths are reduced across the model domain. Both options should form part of an urban flood risk management strategy, but this method can be used to optimize investment and target limited resources at critical locations, enabling green infrastructure strategies to be gradually implemented over the longer term to provide city-wide benefits. This framework provides a means of prioritizing limited financial resources to improve resilience. This is particularly important as flood management investments must typically exceed a far higher benefit–cost threshold than transport infrastructure investments. By capturing the value to the transport network from flood management interventions, it is possible to create new business models that provide benefits to, and enhance the resilience of, both transport and flood risk management infrastructures. Further work will develop the framework to consider other hazards and infrastructure networks. PMID:27293781
NASA Technical Reports Server (NTRS)
Price, Kent M.; Holdridge, Mark; Odubiyi, Jide; Jaworski, Allan; Morgan, Herbert K.
1991-01-01
The results of an unattended network operations technology assessment study for the Space Exploration Initiative (SEI) are summarized. The scope of the work included: (1) identifying possible enhancements due to the proposed Mars communications network; (2) identifying network operations on Mars; (3) performing a technology assessment of possible supporting technologies based on current and future approaches to network operations; and (4) developing a plan for the testing and development of these technologies. The most important results obtained are as follows: (1) the addition of a third Mars Relay Satellite (MRS) and MRS cross-link capabilities will enhance the network's fault tolerance through improved connectivity; (2) network functions can be divided into the six basic ISO network functional groups; (3) distributed artificial intelligence technologies will augment more traditional network management technologies to form the technological infrastructure of a virtually unattended network; and (4) a great effort is required to bring current network technology for manned space communications up to the level needed for an automated, fault-tolerant Mars communications network.
Application of new type of distributed multimedia databases to networked electronic museum
NASA Astrophysics Data System (ADS)
Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki
1999-01-01
Recently, various kinds of multimedia application systems have actively been developed based on advanced high-speed communication networks, computer processing technologies, and digital content-handling technologies. Against this background, this paper proposes a new distributed multimedia database system which can effectively perform a new function of cooperative retrieval among distributed databases. The proposed system introduces a new concept of a 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of distributed databases as one logical database. The logical database dynamically generates and executes a preferred combination of retrieval parameters on the basis of both directory data and the system environment. Moreover, the concept of a 'domain' is defined in the system as a managing unit of retrieval, and retrieval can be performed effectively by cooperative processing among multiple domains. A communication language and protocols are also defined in the system; these are used in every communication action in the system. A language interpreter in each machine translates the communication language into the internal language used by that machine. Using the language interpreter, internal modules such as the DBMS and the user interface modules can be selected freely. The concept of a 'content-set' is also introduced. A content-set is defined as a package of contents that are related to each other, and the system handles a content-set as one object. The user terminal can effectively control the display of retrieved contents by referring to data indicating the relations among the contents in the content-set. In order to verify the functioning of the proposed system, a networked electronic museum was built experimentally. The results of this experiment indicate that the proposed system can effectively retrieve the target contents under the control of a number of distributed domains, and that the system can work effectively even if it becomes large.
Building AN International Polar Data Coordination Network
NASA Astrophysics Data System (ADS)
Pulsifer, P. L.; Yarmey, L.; Manley, W. F.; Gaylord, A. G.; Tweedie, C. E.
2013-12-01
In the spirit of the World Data Center system developed to manage data resulting from the International Geophysical Year of 1957-58, the International Polar Year 2007-2009 (IPY) resulted in significant progress towards establishing an international polar data management network. However, a sustained international network is still evolving. In this paper we argue that the fundamental building blocks for such a network exist and that the time is right to move forward. We focus on the Arctic component of such a network, with linkages to Antarctic network-building activities. A review of an important set of Network building blocks is presented: i) the legacy of the IPY data and information service; ii) global data management services with a polar component (e.g. World Data System); iii) regional systems (e.g. Arctic Observing Viewer); iv) nationally focused programs (e.g. Arctic Observing Viewer, Advanced Cooperative Arctic Data and Information Service, Polar Data Catalogue, Inuit Knowledge Centre); v) programs focused on the local (e.g. Exchange for Local Observations and Knowledge of the Arctic, Geomatics and Cartographic Research Centre). We discuss current activities and results with respect to three priority areas needed to establish a strong and effective Network. First, a summary of network-building activities reports on a series of productive meetings, including the Arctic Observing Summit and the Polar Data Forum, that have resulted in a core set of Network nodes and participants and a refined vision for the Network. Second, we recognize that interoperability for information sharing fundamentally relies on the creation and adoption of community-based data description standards and data delivery mechanisms. There is a broad range of interoperability frameworks and specifications available; however, these need to be adapted to polar community needs. Progress towards Network interoperability is reviewed, and a prototype distributed data system is demonstrated. We discuss remaining challenges. Lastly, establishing a sustainable Arctic Data Coordination Network (ADCN) as part of a broader polar Network will require adequate continued resources. We conclude by outlining proposed business models for the emerging Arctic Data Coordination Network and a broader polar Network.
A Scalable QoS-Aware VoD Resource Sharing Scheme for Next Generation Networks
NASA Astrophysics Data System (ADS)
Huang, Chenn-Jung; Luo, Yun-Cheng; Chen, Chun-Hua; Hu, Kai-Wen
In the network-aware concept, applications are aware of network conditions and adaptable to the varying environment in order to achieve acceptable and predictable performance. In this work, a solution for video-on-demand service that integrates wireless and wired networks using network-aware concepts is proposed to reduce the blocking probability and dropping probability of mobile requests. A fuzzy logic inference system is employed to select appropriate cache relay nodes to cache published video streams and distribute them to different peers through a service-oriented architecture (SOA). A SIP-based control protocol and the IMS standard are adopted to ensure the possibility of heterogeneous communication and to provide a framework for delivering real-time multimedia services over an IP-based network, ensuring interoperability, roaming, and end-to-end session management. The experimental results demonstrate the effectiveness and practicability of the proposed work.
Evaluation of AL-FEC performance for IP television services QoS
NASA Astrophysics Data System (ADS)
Mammi, E.; Russo, G.; Neri, A.
2010-01-01
The quality of IP television services is a critical issue because of the nature of the transport infrastructure. Packet loss is the main cause of service degradation on such network platforms. The use of forward error correction (FEC) techniques at the application layer (AL-FEC), between the source of the TV service (video server) and the user terminal, seems to be an efficient strategy to counteract packet losses, alternatively or in addition to suitable traffic management policies (only feasible in "managed networks"). A number of AL-FEC techniques have been discussed in the literature and proposed for inclusion in TV-over-IP international standards. In this paper a performance evaluation of the AL-FEC defined in the SMPTE 2022-1 standard is presented. Different typical events occurring in IP networks, causing different types (in terms of statistical distribution) of IP packet losses, have been studied, and the AL-FEC performance in counteracting these kinds of losses has been evaluated. The analysis has been carried out in view of fulfilling TV service QoS requirements, which are usually very demanding. For managed networks, this paper envisages a strategy to combine the use of AL-FEC with the set-up of a transport quality based on FEC packet prioritization. Promising results regarding this kind of strategy have been obtained.
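To make the mechanism concrete, the sketch below shows the XOR-parity idea behind SMPTE 2022-1-style matrix FEC: media packets are arranged in an L-by-D block, one parity packet is generated per column, and a single loss per column can be rebuilt. It is an illustration only, not a standard-compliant implementation.

```python
# Simplified sketch of the XOR-parity idea behind SMPTE 2022-1-style AL-FEC:
# media packets are arranged in an L-column by D-row matrix and one parity
# packet is generated per column, so any single loss in a column can be
# rebuilt. This is an illustration, not a standard-compliant implementation
# (real FEC packets carry RTP/FEC headers, sequence numbers, etc.).
from functools import reduce

def xor_bytes(packets):
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets))

def column_parity(block, L, D):
    """block: list of L*D equal-length packets, row-major order."""
    assert len(block) == L * D
    return [xor_bytes([block[r * L + c] for r in range(D)]) for c in range(L)]

def recover(column_packets, parity):
    """Rebuild the single missing packet (None) in a column, if any."""
    missing = [i for i, p in enumerate(column_packets) if p is None]
    if len(missing) != 1:
        return column_packets  # zero losses, or too many to repair with one parity
    known = [p for p in column_packets if p is not None] + [parity]
    column_packets[missing[0]] = xor_bytes(known)
    return column_packets

if __name__ == "__main__":
    L, D = 2, 3
    block = [bytes([i] * 4) for i in range(L * D)]
    parity = column_parity(block, L, D)
    col0 = [block[r * L + 0] for r in range(D)]
    col0[1] = None  # simulate a lost packet
    print(recover(col0, parity[0]))
```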
A spatial exploration of informal trail networks within Great Falls Park, VA
Wimpey, Jeremy; Marion, Jeffrey L.
2011-01-01
Informal (visitor-created) trails represent a threat to the natural resources of protected natural areas around the globe. These trails can remove vegetation, displace wildlife, alter hydrology, alter habitat, spread invasive species, and fragment landscapes. This study examines informal and formal trails within Great Falls Park, VA, a sub-unit of the George Washington Memorial Parkway, managed by the U.S. National Park Service. This study sought to answer three specific questions: 1) Are the physical characteristics and topographic alignments of informal trails significantly different from those of formal trails? 2) Can landscape fragmentation metrics be used to summarize the relative impacts of formal and informal trail networks on a protected natural area? 3) What can we learn from examining the spatial distribution of informal trails within protected natural areas? Statistical comparisons between formal and informal trails in this park indicate that informal trails have less sustainable topographic alignments than their formal counterparts. Spatial summaries of the lineal and areal extent of, and fragmentation associated with, the trail networks by park management zone compare park management goals with the assessed attributes. Hot spot analyses highlight areas of high trail density within the park, and the findings provide insights regarding potential causes for the development of dense informal trail networks.
A Multi-Agent System Architecture for Sensor Networks
Fuentes-Fernández, Rubén; Guijarro, María; Pajares, Gonzalo
2009-01-01
The design of the control systems for sensor networks presents important challenges. Besides the traditional problems about how to process the sensor data to obtain the target information, engineers need to consider additional aspects such as the heterogeneity and high number of sensors, and the flexibility of these networks regarding topologies and the sensors in them. Although there are partial approaches for resolving these issues, their integration relies on ad hoc solutions requiring important development efforts. In order to provide an effective approach for this integration, this paper proposes an architecture based on the multi-agent system paradigm with a clear separation of concerns. The architecture considers sensors as devices used by an upper layer of manager agents. These agents are able to communicate and negotiate services to achieve the required functionality. Activities are organized according to roles related with the different aspects to integrate, mainly sensor management, data processing, communication and adaptation to changes in the available devices and their capabilities. This organization largely isolates and decouples the data management from the changing network, while encouraging reuse of solutions. The use of the architecture is facilitated by a specific modelling language developed through metamodelling. A case study concerning a generic distributed system for fire fighting illustrates the approach and the comparison with related work. PMID:22303172
A multi-agent system architecture for sensor networks.
Fuentes-Fernández, Rubén; Guijarro, María; Pajares, Gonzalo
2009-01-01
The design of the control systems for sensor networks presents important challenges. Besides the traditional problems about how to process the sensor data to obtain the target information, engineers need to consider additional aspects such as the heterogeneity and high number of sensors, and the flexibility of these networks regarding topologies and the sensors in them. Although there are partial approaches for resolving these issues, their integration relies on ad hoc solutions requiring important development efforts. In order to provide an effective approach for this integration, this paper proposes an architecture based on the multi-agent system paradigm with a clear separation of concerns. The architecture considers sensors as devices used by an upper layer of manager agents. These agents are able to communicate and negotiate services to achieve the required functionality. Activities are organized according to roles related with the different aspects to integrate, mainly sensor management, data processing, communication and adaptation to changes in the available devices and their capabilities. This organization largely isolates and decouples the data management from the changing network, while encouraging reuse of solutions. The use of the architecture is facilitated by a specific modelling language developed through metamodelling. A case study concerning a generic distributed system for fire fighting illustrates the approach and the comparison with related work.
Choi, Inyoung; Choi, Ran; Lee, Jonghyun
2010-01-01
Objectives: The objective of this research is to introduce the unique approach of the Catholic Medical Center (CMC) to integrating network hospitals, together with the organizational and technical methodologies adopted for seamless implementation. Methods: The Catholic Medical Center has developed a new hospital information system to connect network hospitals and has adopted a new information technology architecture which uses a single source for multiple distributed hospital systems. Results: The hospital information system of the CMC was developed to integrate network hospitals by adopting new system development principles: one source, one route and one management. This information architecture has reduced the cost of system development and operation, and has enhanced the efficiency of the management process. Conclusions: Integrating network hospitals through an information system was not simple; it was much more complicated than a single-organization implementation. We are still looking for a more efficient communication channel and decision-making process, and we believe that our new system architecture will be able to improve the CMC health care system and provide much better quality of health care service to patients and customers. PMID:21818432
Leveraging socially networked mobile ICT platforms for the last-mile delivery problem.
Suh, Kyo; Smith, Timothy; Linhoff, Michelle
2012-09-04
Increasing numbers of people are managing their social networks on mobile information and communication technology (ICT) platforms. This study materializes these social relationships by leveraging spatial and networked information for sharing excess capacity to reduce the environmental impacts associated with "last-mile" package delivery systems from online purchases, particularly in low population density settings. Alternative package pickup location systems (PLS), such as a kiosk on a public transit platform or in a grocery store, have been suggested as effective strategies for reducing package travel miles and greenhouse gas emissions, compared to current door-to-door delivery models (CDS). However, our results suggest that a pickup location delivery system operating in a suburban setting may actually increase travel miles and emissions. Only once a social network is employed to assist in package pickup (SPLS) are significant reductions in the last-mile delivery distance and carbon emissions observed across both urban and suburban settings. Implications for logistics management's decades-long focus on improving efficiencies of dedicated distribution systems through specialization, as well as for public policy targeting carbon emissions of the transport sector are discussed.
Publications of the Jet Propulsion Laboratory, 1981
NASA Technical Reports Server (NTRS)
1982-01-01
Over 500 externally distributed technical reports released during 1981 that resulted from scientific and engineering work performed, or managed by Jet Propulsion Laboratory are listed by primary author. Of the total number of entries, 311 are from the bimonthly Deep Space Network Progress Report, and its successor, the Telecommunications and Data Acquisition Progress Report.
2010-01-01
service) High assurance software; distributed network-based battle management; high performance computing supporting uniform and nonuniform memory...; VNIR, MWIR, and LWIR high-resolution systems; wideband SAR systems; RF and laser data links; high-speed, high-power photodetector characterization...; Antimonide (InSb) imaging system; long-wave infrared (LWIR) quantum well IR photodetector (QWIP) imaging system; Research and Development Services
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-05
... Science Center's Social Sciences Branch seeks to collect data on distribution networks and business... also seeks to collect data on business disruptions due to Hurricane Sandy for those firms. The data collected will improve research and analysis on the economic impacts of potential fishery management actions...
System-wide power management control via clock distribution network
Coteus, Paul W.; Gara, Alan; Gooding, Thomas M.; Haring, Rudolf A.; Kopcsay, Gerard V.; Liebsch, Thomas A.; Reed, Don D.
2015-05-19
An apparatus, method and computer program product for automatically controlling the power dissipation of a parallel computing system that includes a plurality of processors. A computing device issues a command to the parallel computing system. A clock pulse-width modulator encodes the command in a system clock signal to be distributed to the plurality of processors. The processors in the parallel computing system receive the system clock signal including the encoded command, and adjust their power dissipation according to the encoded command.
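The following toy model illustrates the general idea of carrying a command on a clock by modulating pulse widths (a narrow pulse for a 0 bit, a wide pulse for a 1 bit). It is purely illustrative and is not the mechanism claimed in the patent.

```python
# Toy model of the general idea of piggybacking a management command on a
# distributed clock by modulating pulse widths (e.g., a narrow pulse encodes a
# 0 bit, a wide pulse a 1 bit). This is purely illustrative and is not the
# mechanism claimed in the patent.
def encode_command(bits, period=8, narrow=2, wide=6):
    """Return a sampled clock waveform (list of 0/1 samples) carrying the bits."""
    wave = []
    for bit in bits:
        high = wide if bit else narrow
        wave.extend([1] * high + [0] * (period - high))
    return wave

def decode_command(wave, period=8, threshold=4):
    """Recover the bits by measuring the high time of each clock period."""
    bits = []
    for start in range(0, len(wave), period):
        high_time = sum(wave[start:start + period])
        bits.append(1 if high_time >= threshold else 0)
    return bits

if __name__ == "__main__":
    command = [1, 0, 1, 1]  # hypothetical opcode, e.g. "reduce power"
    assert decode_command(encode_command(command)) == command
    print("round-trip ok:", command)
```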
Addressing tomorrow's DMO technical challenges today
NASA Astrophysics Data System (ADS)
Milligan, James R.
2009-05-01
Distributed Mission Operations (DMO) is essentially a type of networked training that pulls in participants from all the armed services and, increasingly, allies to permit them to "game" and rehearse highly complex campaigns, using a mix of local, distant, and virtual players. The United States Air Force Research Laboratory (AFRL) is pursuing Science and Technology (S&T) solutions to address technical challenges associated with distributed communications and information management as DMO continues to progressively scale up the number, diversity, and geographic dispersal of participants in training and rehearsal exercises.
Fiber-optic technology for transport aircraft
NASA Astrophysics Data System (ADS)
1993-07-01
A development status evaluation is presented for fiber-optic devices that are advantageously applicable to commercial aircraft. Current development efforts at a major U.S. military and commercial aircraft manufacturer encompass installation techniques and data distribution practices, as well as the definition and refinement of an optical propulsion management interface system, environmental sensing systems, and component-qualification criteria. Data distribution, in the form of onboard local-area networks for intercomputer connections and passenger entertainment, is the fiber-optic technology closest to near-term implementation aboard commercial aircraft.
Fisher, Jason C.
2013-01-01
Long-term groundwater monitoring networks can provide essential information for the planning and management of water resources. Budget constraints in water resource management agencies often mean a reduction in the number of observation wells included in a monitoring network. A network design tool, distributed as an R package, was developed to determine which wells to exclude from a monitoring network because they add little or no beneficial information. A kriging-based genetic algorithm method was used to optimize the monitoring network. The algorithm was used to find the set of wells whose removal leads to the smallest increase in the weighted sum of the (1) mean standard error at all nodes in the kriging grid where the water table is estimated, (2) root-mean-squared-error between the measured and estimated water-level elevation at the removed sites, (3) mean standard deviation of measurements across time at the removed sites, and (4) mean measurement error of wells in the reduced network. The solution to the optimization problem (the best wells to retain in the monitoring network) depends on the total number of wells removed; this number is a management decision. The network design tool was applied to optimize two observation well networks monitoring the water table of the eastern Snake River Plain aquifer, Idaho; these networks include the 2008 Federal-State Cooperative water-level monitoring network (Co-op network) with 166 observation wells, and the 2008 U.S. Geological Survey-Idaho National Laboratory water-level monitoring network (USGS-INL network) with 171 wells. Each water-level monitoring network was optimized five times: by removing (1) 10, (2) 20, (3) 40, (4) 60, and (5) 80 observation wells from the original network. An examination of the trade-offs associated with changes in the number of wells to remove indicates that 20 wells can be removed from the Co-op network with a relatively small degradation of the estimated water table map, and 40 wells can be removed from the USGS-INL network before the water table map degradation accelerates. The optimal network designs indicate the robustness of the network design tool. Observation wells were removed from high well-density areas of the network while retaining the spatial pattern of the existing water-table map.
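A hedged sketch of the weighted objective is shown below, assuming the four error components have already been computed for a candidate set of removed wells (for example by an external kriging routine); the weights and names are placeholders, and the R package cited in the abstract implements the complete method.

```python
# Hedged sketch of the weighted objective described above, assuming the four
# error components have already been computed for a candidate set of removed
# wells (e.g., from an external kriging routine). The weights are placeholders;
# the R package referenced in the abstract implements the full method.
from dataclasses import dataclass

@dataclass
class CandidateRemoval:
    mean_kriging_se: float         # (1) mean standard error over the kriging grid
    rmse_removed: float            # (2) RMSE of estimates at the removed wells
    mean_temporal_sd: float        # (3) mean std. dev. over time at removed wells
    mean_measurement_error: float  # (4) mean measurement error of retained wells

def objective(c: CandidateRemoval, w=(1.0, 1.0, 1.0, 1.0)) -> float:
    """Weighted sum to be minimised when choosing which wells to drop."""
    return (w[0] * c.mean_kriging_se
            + w[1] * c.rmse_removed
            + w[2] * c.mean_temporal_sd
            + w[3] * c.mean_measurement_error)

if __name__ == "__main__":
    a = CandidateRemoval(0.42, 0.30, 0.15, 0.05)
    b = CandidateRemoval(0.55, 0.10, 0.20, 0.05)
    best = min((a, b), key=objective)
    print("preferred candidate:", best)
```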
Water leakage management by district metered areas at water distribution networks.
Özdemir, Özgür
2018-03-01
The aim of this study is to design a district metered area (DMA) in a water distribution network (WDN) for the determination and reduction of water losses in the city of Malatya, Turkey. In the application area, a pilot DMA zone was built by analysing the existing WDN, the topographic map, pipe lengths, the number of customers, service connections, and valves. In the DMA, the International Water Association standard water balance was calculated considering inflow rates and billing records. The ratio of water losses in the DMA was determined to be 82%. Moreover, 3124 water meters belonging to 2805 customers were examined, and 50% of the water meters were found to be faulty. This study revealed that DMA application is useful for determining the water loss rate in WDNs and for identifying a cost-effective leakage reduction program.
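A minimal sketch of the top level of such a water balance is shown below; the volumes are illustrative and are not the Malatya measurements.

```python
# Minimal sketch of the top level of an IWA-style water balance for a district
# metered area: water losses are the difference between the metered inflow and
# the authorised (billed) consumption, expressed here as a fraction of inflow.
# The volumes below are illustrative, not the Malatya DMA measurements.
def water_loss_ratio(system_input_m3: float, billed_consumption_m3: float) -> float:
    losses = system_input_m3 - billed_consumption_m3
    return losses / system_input_m3

if __name__ == "__main__":
    ratio = water_loss_ratio(system_input_m3=10_000.0, billed_consumption_m3=1_800.0)
    print(f"water losses: {ratio:.0%} of system input")
```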
Network Communication as a Service-Oriented Capability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, William; Johnston, William; Metzger, Joe
2008-01-08
In widely distributed systems generally, and in science-oriented Grids in particular, software, CPU time, storage, etc., are treated as "services" -- they can be allocated and used with service guarantees that allow them to be integrated into systems that perform complex tasks. Network communication is currently not a service -- it is provided, in general, as a "best effort" capability with no guarantees and only statistical predictability. In order for Grids (and most types of systems with widely distributed components) to be successful in performing the sustained, complex tasks of large-scale science -- e.g., the multi-disciplinary simulation of next-generation climate modeling, and the management and analysis of the petabytes of data that will come from the next generation of scientific instruments (which is very soon for the LHC at CERN) -- networks must provide communication capability that is service-oriented: that is, it must be configurable, schedulable, predictable, and reliable. In order to accomplish this, the research and education network community is undertaking a strategy that involves changes in network architecture to support multiple classes of service; development and deployment of service-oriented communication services; and monitoring and reporting in a form that is directly useful to the application-oriented system so that it may adapt to communications failures. In this paper we describe ESnet's approach to each of these -- an approach that is part of an international community effort to base intra-distributed-system communication on a service-oriented capability.
NaradaBrokering as Middleware Fabric for Grid-based Remote Visualization Services
NASA Astrophysics Data System (ADS)
Pallickara, S.; Erlebacher, G.; Yuen, D.; Fox, G.; Pierce, M.
2003-12-01
Remote Visualization Services (RVS) have tended to rely on approaches based on the client-server paradigm. The simplicity of these approaches is offset by problems such as single points of failure, scaling and availability. Furthermore, as the complexity, scale and scope of the services hosted on this paradigm increase, the approach becomes increasingly unsuitable. We propose a scheme built on top of a distributed brokering infrastructure, NaradaBrokering, which comprises a distributed network of broker nodes. These broker nodes are organized in a cluster-based architecture that can scale to very large sizes. The broker network is resilient to broker failures and efficiently routes interactions to entities that expressed an interest in them. In our approach to RVS, services advertise their capabilities to the broker network, which manages these service advertisements. Among the services considered within our system are those that perform graphic transformations, those that mediate access to specialized datasets and, finally, those that manage the execution of specified tasks. There can be multiple instances of each of these services, and the system ensures that the load for a given service is distributed efficiently over these service instances. Among the features provided in our approach are efficient discovery of services and asynchronous interactions between services and service requestors (which may themselves be other services). Entities need not be online during the execution of the service request. The system also ensures that entities can be notified about task executions, partial results and failures that might have taken place during service execution. The system also facilitates the specification of task overrides, and the distribution of execution results to alternate devices (which were not used to originally request service execution) and to multiple users. These RVS services could be either OGSA (Open Grid Services Architecture)-based Grid services or traditional Web services. The brokering infrastructure manages the service advertisements and the invocation of these services. This scheme ensures that the fundamental Grid computing concept is met: provide the computing capabilities of those that are willing to supply them to those that seek them. [1] The NaradaBrokering Project: http://www.naradabrokering.org
Hughes, Jane; Scrimshire, Ashley; Steinberg, Laura; Yiannoullou, Petros; Newton, Katherine; Hall, Claire; Pearce, Lyndsay; Macdonald, Andrew
2017-05-01
The management of blunt splenic injuries (BSI) has evolved toward strategies that avoid splenectomy. There is growing adoption of interventional radiology (IR) techniques in non-operative management of BSI, with evidence suggesting a corresponding reduction in emergency laparotomy requirements and increased splenic preservation rates. Currently there are no UK national guidelines for the management of blunt splenic injury. This may lead to variations in management, despite the reorganisation of trauma services in England in 2012. A survey was distributed through the British Society of Interventional Radiologists to all UK members aiming to identify availability of IR services in England, radiologists' practice, and attitudes toward management of BSI. 116 responses from respondents working in 23 of the 26 Regional Trauma Networks in England were received. 79% provide a single dedicated IR service but over 50% cover more than one hospital within the network. All offer arterial embolisation for BSI. Only 25% follow guidelines. In haemodynamically stable patients, an increasing trend for embolisation was seen as grade of splenic injury increased from 1 to 4 (12.5%-82.14%, p<0.01). In unstable patients or those with radiological evidence of bleeding, significantly more respondents offer embolisation for grade 1-3 injuries (p<0.01), compared to stable patients. Significantly fewer respondents offer embolisation for grade 5 versus 4 injuries in unstable patients or with evidence of bleeding. Splenic embolisation is offered for a variety of injury grades, providing the patient remains stable. Variation in interventional radiology services remains despite the introduction of regional trauma networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
Caballero, Víctor; Vernet, David; Zaballos, Agustín; Corral, Guiomar
2018-01-30
Sensor networks and the Internet of Things have driven the evolution of traditional electric power distribution networks towards a new paradigm referred to as Smart Grid. However, the different elements that compose the Information and Communication Technologies (ICTs) layer of a Smart Grid are usually conceived as isolated systems that typically result in rigid hardware architectures which are hard to interoperate, manage, and to adapt to new situations. If the Smart Grid paradigm is to be presented as a solution to the demand for a distributed and intelligent energy management system, it is necessary to deploy innovative IT infrastructures to support these smart functions. One of the main issues of Smart Grids is the heterogeneity of communication protocols used by the smart sensor devices that integrate them. The use of the concept of the Web of Things is proposed in this work to tackle this problem. More specifically, the implementation of a Smart Grid's Web of Things, coined the Web of Energy, is introduced. The purpose of this paper is to propose the usage of the Web of Energy by means of the Actor Model paradigm to address the latent deployment and management limitations of Smart Grids. Smart Grid designers can use the Actor Model as a design model for an infrastructure that supports the intelligent functions demanded and is capable of grouping and converting the heterogeneity of traditional infrastructures into the homogeneity feature of the Web of Things. The experiments conducted endorse the feasibility of this solution and encourage practitioners to point their efforts in this direction.
Vernet, David; Corral, Guiomar
2018-01-01
Sensor networks and the Internet of Things have driven the evolution of traditional electric power distribution networks towards a new paradigm referred to as Smart Grid. However, the different elements that compose the Information and Communication Technologies (ICTs) layer of a Smart Grid are usually conceived as isolated systems that typically result in rigid hardware architectures which are hard to interoperate, manage, and to adapt to new situations. If the Smart Grid paradigm is to be presented as a solution to the demand for a distributed and intelligent energy management system, it is necessary to deploy innovative IT infrastructures to support these smart functions. One of the main issues of Smart Grids is the heterogeneity of communication protocols used by the smart sensor devices that integrate them. The use of the concept of the Web of Things is proposed in this work to tackle this problem. More specifically, the implementation of a Smart Grid’s Web of Things, coined the Web of Energy, is introduced. The purpose of this paper is to propose the usage of the Web of Energy by means of the Actor Model paradigm to address the latent deployment and management limitations of Smart Grids. Smart Grid designers can use the Actor Model as a design model for an infrastructure that supports the intelligent functions demanded and is capable of grouping and converting the heterogeneity of traditional infrastructures into the homogeneity feature of the Web of Things. The experiments conducted endorse the feasibility of this solution and encourage practitioners to point their efforts in this direction. PMID:29385748
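As a rough illustration of the Actor Model idea the Web of Energy builds on (isolated actors that interact only through asynchronous messages, so heterogeneous device protocols can be hidden behind a uniform mailbox), here is a minimal sketch. The classes and message fields are assumptions for illustration, not the authors' implementation.

```python
# Minimal actor sketch: each actor owns a mailbox and a worker thread, and it
# handles one message at a time; device heterogeneity hides behind receive().
import queue, threading, time

class Actor:
    def __init__(self, name):
        self.name = name
        self.mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        self.mailbox.put(msg)              # asynchronous, non-blocking delivery

    def _run(self):
        while True:
            self.receive(self.mailbox.get())

    def receive(self, msg):                # override per device type
        print(f"{self.name} received {msg}")

class SmartMeterActor(Actor):
    def receive(self, msg):
        if msg.get("cmd") == "read":       # hypothetical message format
            msg["reply_to"].send({"meter": self.name, "kwh": 12.3})

# Usage sketch: a logger actor asks a meter actor for a reading
logger = Actor("logger")
meter = SmartMeterActor("meter-07")
meter.send({"cmd": "read", "reply_to": logger})
time.sleep(0.2)                            # give the daemon threads time to run
```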
Coordinated single-phase control scheme for voltage unbalance reduction in low voltage network.
Pullaguram, Deepak; Mishra, Sukumar; Senroy, Nilanjan
2017-08-13
Low voltage (LV) distribution systems are typically unbalanced in nature due to unbalanced loading and unsymmetrical line configuration. This situation is further aggravated by single-phase power injections. A coordinated control scheme is proposed for single-phase sources, to reduce voltage unbalance. A consensus-based coordination is achieved using a multi-agent system, where each agent estimates the averaged global voltage and current magnitudes of individual phases in the LV network. These estimated values are used to modify the reference power of individual single-phase sources, to ensure system-wide balanced voltages and proper power sharing among sources connected to the same phase. Further, the high X/R ratio of the filter, used in the inverter of the single-phase source, enables control of reactive power, to minimize voltage unbalance locally. The proposed scheme is validated by simulating a LV distribution network with multiple single-phase sources subjected to various perturbations. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).
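The consensus step described here can be sketched as a standard distributed averaging iteration, in which each agent repeatedly mixes its estimate of the phase-wide average voltage with its neighbours' estimates. The topology, gain, and measurements below are hypothetical; this is a generic illustration, not the authors' controller.

```python
# Distributed average consensus sketch: each agent i updates its estimate of the
# phase-average voltage magnitude using only its neighbours' estimates.
import numpy as np

def consensus_average(x0, neighbours, eps=0.2, iters=200):
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x_new = x.copy()
        for i, nbrs in neighbours.items():
            x_new[i] = x[i] + eps * sum(x[j] - x[i] for j in nbrs)
        x = x_new
    return x

# Hypothetical per-node voltage magnitudes (p.u.) on one phase of a small LV feeder
v_meas = [1.02, 0.97, 0.99, 1.04, 0.95]
# Ring communication topology between the five agents
nbrs = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(consensus_average(v_meas, nbrs))   # all entries converge to ~0.994
```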
Microprocessor control and networking for the AMPS breadboard
NASA Technical Reports Server (NTRS)
Floyd, Stephen A.
1987-01-01
Future space missions will require more sophisticated power systems, implying higher costs and more extensive crew and ground support involvement. To decrease this human involvement, as well as to protect and most efficiently utilize this important resource, NASA has undertaken major efforts to promote progress in the design and development of autonomously managed power systems. Two areas being actively pursued are autonomous power system (APS) breadboards and knowledge-based expert system (KBES) applications. The former are viewed as a requirement for the timely development of the latter. Not only will they serve as final testbeds for the various KBES applications, but they will also play a major role in the knowledge engineering phase of their development. The current power system breadboard designs are of a distributed microprocessor nature. The distributed nature, plus the need to connect various external computer capabilities (i.e., conventional host computers and symbolic processors), places major emphasis on effective networking. The communications and networking technologies for the first power system breadboard/test facility are described.
Manes, Gianfranco; Collodi, Giovanni; Gelpi, Leonardo; Fusco, Rosanna; Ricci, Giuseppe; Manes, Antonio; Passafiume, Marco
2016-01-20
This paper describes a distributed point-source monitoring platform for gas level and leakage detection in hazardous environments. The platform, based on a wireless sensor network (WSN) architecture, is organised into sub-networks to be positioned in the plant's critical areas; each sub-net includes a gateway unit wirelessly connected to the WSN nodes, hence providing an easily deployable, stand-alone infrastructure featuring a high degree of scalability and reconfigurability. Furthermore, the system provides automated calibration routines which can be accomplished by non-specialized maintenance operators without compromising system reliability. Internet connectivity is provided via TCP/IP over GPRS (Internet standard protocols over mobile networks) gateways at a one-minute sampling rate. Environmental and process data are forwarded to a remote server and made available to authenticated users through a user interface that provides data rendering in various formats and multi-sensor data fusion. The platform is able to provide real-time plant management with an effective, accurate tool for immediate warning in case of critical events.
ROADNET: A Real-time Data Aware System for Earth, Oceanographic, and Environmental Applications
NASA Astrophysics Data System (ADS)
Vernon, F.; Hansen, T.; Lindquist, K.; Ludascher, B.; Orcutt, J.; Rajasekar, A.
2003-12-01
The Real-time Observatories, Application, and Data management Network (ROADNet) Program aims to develop an integrated, seamless, and transparent environmental information network that will deliver geophysical, oceanographic, hydrological, ecological, and physical data to a variety of users in real-time. ROADNet is a multidisciplinary, multinational partnership of researchers, policymakers, natural resource managers, educators, and students who aim to use the data to advance our understanding and management of coastal, ocean, riparian, and terrestrial Earth systems in Southern California, Mexico, and well offshore. To date, project activity and funding have focused on the design and deployment of network linkages and on the exploratory development of the real-time data management system. We are currently adapting powerful "Data Grid" technologies to the unique challenges associated with the management and manipulation of real-time data. Current "Grid" projects deal with static data files, and significant technical innovation is required to address fundamental problems of real-time data processing, integration, and distribution. The technologies developed through this research will create a system that dynamically adapts downstream processing, cataloging, and data access interfaces when sensors are added or removed from the system; provides for real-time processing and monitoring of data streams--detecting events, and triggering computations, sensor and logger modifications, and other actions; integrates heterogeneous data from multiple (signal) domains; and provides for large-scale archival and querying of "consolidated" data. The software tools which must be developed do not exist, although limited prototype systems are available. This research has implications for the success of large-scale NSF initiatives in the Earth sciences (EarthScope), ocean sciences (OOI - Ocean Observatories Initiative), biological sciences (NEON - National Ecological Observatory Network) and civil engineering (NEES - Network for Earthquake Engineering Simulation). Each of these large scale initiatives aims to collect real-time data from thousands of sensors, and each will require new technologies to process, manage, and communicate real-time multidisciplinary environmental data on regional, national, and global scales.
Autonomic Intelligent Cyber Sensor to Support Industrial Control Network Awareness
Vollmer, Todd; Manic, Milos; Linda, Ondrej
2013-06-01
The proliferation of digital devices in a networked industrial ecosystem, along with an exponential growth in complexity and scope, has resulted in elevated security concerns and management complexity issues. This paper describes a novel architecture utilizing concepts of Autonomic computing and a SOAP-based IF-MAP external communication layer to create a network security sensor. This approach simplifies integration of legacy software and supports a secure, scalable, self-managed framework. The contribution of this paper is two-fold: 1) A flexible two-level communication layer based on Autonomic computing and Service Oriented Architecture is detailed, and 2) Three complementary modules that dynamically reconfigure in response to a changing environment are presented. One module utilizes clustering and fuzzy logic to monitor traffic for abnormal behavior. Another module passively monitors network traffic and deploys deceptive virtual network hosts. These components of the sensor system were implemented in C++ and PERL and utilize a common internal D-Bus communication mechanism. A proof of concept prototype was deployed on a mixed-use test network showing the possible real world applicability. In testing, 45 of the 46 network attached devices were recognized and 10 of the 12 emulated devices were created with specific Operating System and port configurations. Additionally the anomaly detection algorithm achieved a 99.9% recognition rate. All output from the modules was correctly distributed using the common communication structure.
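A minimal sketch of the clustering idea behind the traffic-monitoring module: learn clusters of "normal" flow features and flag flows that fall far from every centroid. This uses plain k-means with a distance threshold rather than the paper's fuzzy-logic classifier, and all feature values are made up for illustration.

```python
# Sketch of clustering-based anomaly detection on network flow features.
# Plain k-means plus a distance threshold; not the paper's fuzzy-logic module.
import numpy as np

def kmeans(X, k=2, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centroids, axis=2), axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return centroids

def is_anomalous(flow, centroids, threshold):
    # A flow is suspicious if it is far from every learned "normal" centroid.
    return min(np.linalg.norm(flow - c) for c in centroids) > threshold

# Hypothetical per-flow features: [packets per second, mean packet size in bytes]
normal = np.array([[20, 300], [22, 310], [18, 290], [21, 305],
                   [19, 295], [60, 900], [62, 880]], dtype=float)
centroids = kmeans(normal, k=2)
print(is_anomalous(np.array([500.0, 64.0]), centroids, threshold=150))  # True: scan-like burst
print(is_anomalous(np.array([21.0, 300.0]), centroids, threshold=150))  # False: looks normal
```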
Overview of NASA Glenn Aero/Mobile Communications Demonstrations
NASA Technical Reports Server (NTRS)
Brooks, David; Hoder, Doug; Wilkins, Ryan
2004-01-01
The Glenn Research Center at Lewis Field (GRC) has been involved with several other NASA field centers on various networking and RF communications demonstrations and experiments since 1998. These collaborative experiments investigated communications technologies new to aviation, such as wideband Ku satcom, L-band narrowband satcom, and IP (Internet Protocol), using commercial off-the-shelf (COTS) components. These technologies can be used to distribute weather and hazard data, air traffic management and airline fleet management information, and passenger cabin Internet service.
Overview of NASA Glenn Aero/Mobile Communication Demonstrations
NASA Technical Reports Server (NTRS)
Brooks, David; Hoder, Doug; Wilkins, Ryan
2004-01-01
The Glenn Research Center at Lewis Field (GRC) has been involved with several other NASA field centers on various networking and RF communications demonstrations and experiments since 1998. These collaborative experiments investigated communications technologies new to aviation, such as wideband Ku satcom, L-band narrowband satcom, and IP (Internet Protocol), using commercial off-the-shelf (COTS) components. These technologies can be used to distribute weather and hazard data, air traffic management and airline fleet management information, and passenger cabin Internet service.
Planning for Information Resource Management by OSD.
1987-12-01
[Fragmentary budget table recovered from the report: OSD information-resource objectives and projects (Objective 1.2 - DSS; Objective 1.3 - Computing; Objective 1.4 - Distribute messages; Office Automation Computer Systems; Network Performance and Capital Budget; Requirements Analysis; Defense Personnel Analysis System; Mobilization Training Base; Manpower Management/Program Review) with associated funding figures; the original table layout is not recoverable.]
Tools for the IDL widget set within the X-windows environment
NASA Technical Reports Server (NTRS)
Turgeon, B.; Aston, A.
1992-01-01
New tools using the IDL widget set are presented. In particular, a utility allowing the easy creation and update of slide presentations, XSlideManager, is explained in detail and examples of its application are shown. In addition to XSlideManager, other mini-utilities are discussed. These various pieces of software follow the philosophy of the X-Windows distribution system and are made available to anyone within the Internet network. Acquisition procedures through anonymous ftp are clearly explained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arp, J.A.; Bower, J.C.; Burnett, R.A.
The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the U.S. Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment.
Federal Emergency Management Information System (FEMIS), Installation Guide for FEMIS 1.4.6
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arp, J.A.; Burnett, R.A.; Carter, R.J.
The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the U.S. Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment.
Services supporting collaborative alignment of engineering networks
NASA Astrophysics Data System (ADS)
Jansson, Kim; Uoti, Mikko; Karvonen, Iris
2015-08-01
Large-scale facilities such as power plants, process factories, ships and communication infrastructures are often engineered and delivered through geographically distributed operations. The competencies required are usually distributed across several contributing organisations. In these complicated projects, it is of key importance that all partners work coherently towards a common goal. VTT and a number of industrial organisations in the marine sector have participated in a national collaborative research programme addressing these needs. The main output of this programme was development of the Innovation and Engineering Maturity Model for Marine-Industry Networks. The recently completed European Union Framework Programme 7 project COIN developed innovative solutions and software services for enterprise collaboration and enterprise interoperability. One area of focus in that work was services for collaborative project management. This article first addresses a number of central underlying research themes and previous research results that have influenced the development work mentioned above. This article presents two approaches for the development of services that support distributed engineering work. Experience from use of the services is analysed, and potential for development is identified. This article concludes with a proposal for consolidation of the two above-mentioned methodologies. This article outlines the characteristics and requirements of future services supporting collaborative alignment of engineering networks.
Research on mixed network architecture collaborative application model
NASA Astrophysics Data System (ADS)
Jing, Changfeng; Zhao, Xi'an; Liang, Song
2009-10-01
When facing the complex requirements of city development, ever-growing spatial data, the rapid development of geographical business and increasing business complexity, collaboration between multiple users and departments is urgently needed; however, conventional GIS software (such as Client/Server or Browser/Server models) does not support this well. Collaborative applications are one good resolution. A collaborative application has four main problems to resolve: consistency and co-editing conflicts, real-time responsiveness, unconstrained operation, and spatial data recoverability. In this paper, an application model called AMCM is put forward, based on agents and a multi-level cache. AMCM can be used in a mixed network structure and supports distributed collaboration. An agent is an autonomous, interactive, proactive and reactive computing entity in a distributed environment. Agents have been used in many fields such as computer science and automation. Agents bring new methods for cooperation and for access to spatial data. A multi-level cache holds a part of the full dataset. It reduces the network load and improves the access and handling of spatial data, especially when editing the spatial data. With agent technology, we make full use of its intelligence for managing the cache and for cooperative editing, which brings a new method for distributed cooperation and improves efficiency.
NASA Astrophysics Data System (ADS)
Chaerani, D.; Lesmana, E.; Tressiana, N.
2018-03-01
In this paper, an application of Robust Optimization to an agricultural water resource management problem under gross margin and water demand uncertainty is presented. Water resource management is a series of activities that includes planning, developing, distributing and managing the use of water resources optimally. Water resource management for agriculture can be one of the efforts to optimize the benefits of agricultural output. The objective of the agricultural water resource management problem is to maximize total benefits by allocating water to the agricultural areas covered by the irrigation network over the planning horizon. Due to gross margin and water demand uncertainty, we assume that the uncertain data lie within an ellipsoidal uncertainty set. We employ the robust counterpart methodology to obtain the robust optimal solution.
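As a sketch of the robust counterpart construction the abstract refers to (a generic form under an assumed ellipsoidal set, not the paper's specific model): a linear allocation constraint with an uncertain coefficient vector becomes a second-order cone constraint.

```latex
% Generic robust counterpart under ellipsoidal uncertainty (illustrative notation,
% not the paper's model). Uncertain constraint: a^T x <= b for all a in the set U.
\mathcal{U} = \{\, \bar{a} + P u \;:\; \lVert u \rVert_2 \le 1 \,\}, \qquad
\max_{a \in \mathcal{U}} a^{\top} x \le b
\;\;\Longleftrightarrow\;\;
\bar{a}^{\top} x + \lVert P^{\top} x \rVert_2 \le b .
```

Applying this reformulation to the uncertain gross-margin and demand coefficients turns the nominal allocation problem into a tractable conic program.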
QoS Challenges and Opportunities in Wireless Sensor/Actuator Networks
Xia, Feng
2008-01-01
A wireless sensor/actuator network (WSAN) is a group of sensors and actuators that are geographically distributed and interconnected by wireless networks. Sensors gather information about the state of the physical world. Actuators react to this information by performing appropriate actions. WSANs thus enable cyber systems to monitor and manipulate the behavior of the physical world. WSANs are growing at a tremendous pace, just like the exploding evolution of the Internet. Supporting quality of service (QoS) will be of critical importance for pervasive WSANs that serve as the network infrastructure of diverse applications. To spark new research and development interests in this field, this paper examines and discusses the requirements, critical challenges, and open research issues on QoS management in WSANs. A brief overview of recent progress is given. PMID:27879755
Holve, Erin; Segal, Courtney
2014-11-01
The 11 big health data networks participating in the AcademyHealth Electronic Data Methods Forum represent cutting-edge efforts to harness the power of big health data for research and quality improvement. This paper is a comparative case study based on site visits conducted with a subset of these large infrastructure grants funded through the Recovery Act, in which four key issues emerge that can inform the evolution of learning health systems, including the importance of acknowledging the challenges of scaling specialized expertise needed to manage and run CER networks; the delicate balance between privacy protections and the utility of distributed networks; emerging community engagement strategies; and the complexities of developing a robust business model for multi-use networks.
Imaging structural and functional brain networks in temporal lobe epilepsy
Bernhardt, Boris C.; Hong, SeokJun; Bernasconi, Andrea; Bernasconi, Neda
2013-01-01
Early imaging studies in temporal lobe epilepsy (TLE) focused on the search for mesial temporal sclerosis, as its surgical removal results in clinically meaningful improvement in about 70% of patients. Nevertheless, a considerable subgroup of patients continues to suffer from post-operative seizures. Although the reasons for surgical failure are not fully understood, electrophysiological and imaging data suggest that anomalies extending beyond the temporal lobe may have a negative impact on outcome. This hypothesis has revived the concept of human epilepsy as a disorder of distributed brain networks. Recent methodological advances in non-invasive neuroimaging have made it possible to quantify structural and functional networks in vivo. While structural networks can be inferred from diffusion MRI tractography and inter-regional covariance patterns of structural measures such as cortical thickness, functional connectivity is generally computed based on statistical dependencies of neurophysiological time-series, measured through functional MRI or electroencephalographic techniques. This review considers the application of advanced analytical methods in structural and functional connectivity analyses in TLE. We will specifically highlight findings from graph-theoretical analysis that allow assessing the topological organization of brain networks. These studies have provided compelling evidence that TLE is a system disorder with profound alterations in local and distributed networks. In addition, there is emerging evidence for the utility of network properties as clinical diagnostic markers. Nowadays, a network perspective is considered to be essential to the understanding of the development, progression, and management of epilepsy. PMID:24098281
Impact of Network Activity Levels on the Performance of Passive Network Service Dependency Discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carroll, Thomas E.; Chikkagoudar, Satish; Arthur-Durett, Kristine M.
Network services often do not operate alone, but instead depend on other services distributed throughout a network to function correctly. If a service fails, is disrupted, or is degraded, it is likely to impair other services. The web of dependencies can be surprisingly complex---especially within a large enterprise network---and evolve with time. Acquiring, maintaining, and understanding dependency knowledge is critical for many network management and cyber defense activities. While automation can improve situation awareness for network operators and cyber practitioners, poor detection accuracy reduces their confidence and can complicate their roles. In this paper we rigorously study the effects of network activity levels on the detection accuracy of passive network-based service dependency discovery methods. The accuracy of all except one method was inversely proportional to network activity levels. Our proposed cross correlation method was particularly robust to the influence of network activity. The proposed experimental treatment will further advance a more scientific evaluation of methods and provide the ability to determine their operational boundaries.
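The cross-correlation idea can be illustrated with a toy version: bin each service's observed request activity into a time series and look for a strong peak in the normalised cross-correlation at a small positive lag, suggesting that activity on one service tends to be followed by activity on another. The data below are synthetic and the method is only a sketch of the principle, not the paper's detector.

```python
# Sketch: flag a candidate dependency A -> B when B's activity is strongly
# correlated with A's activity at a small positive lag.
import numpy as np

def lagged_corr(a, b, max_lag=5):
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    best = max(range(1, max_lag + 1),
               key=lambda L: np.dot(a[:-L], b[L:]) / len(a[:-L]))
    score = np.dot(a[:-best], b[best:]) / len(a[:-best])
    return best, score

rng = np.random.default_rng(1)
web = rng.poisson(3, 500).astype(float)        # requests/s seen at a web tier
db = np.roll(web, 2) + rng.poisson(1, 500)     # database activity follows ~2 s later
lag, score = lagged_corr(web, db)
print(lag, round(score, 2))                    # expect lag 2 with a high score
```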
Roost networks of northern myotis (Myotis septentrionalis) in a managed landscape
Johnson, J.B.; Mark, Ford W.; Edwards, J.W.
2012-01-01
Maternity groups of many bat species conform to fission-fusion models in their movements among diurnal roost trees, and individual bats belonging to these groups use networks of roost trees. Forest disturbances may alter roost networks and characteristics of roost trees. Therefore, at the Fernow Experimental Forest in West Virginia, we examined roost tree networks of northern myotis (Myotis septentrionalis) in forest stands subjected to prescribed fire and in unmanipulated control treatments in 2008 and 2009. Northern myotis formed social groups whose roost areas and roost tree networks overlapped to some extent. Roost tree networks largely resembled scale-free network models, as 61% had a single central node roost tree. In control treatments, central node roost trees were in early stages of decay and surrounded by greater basal area than other trees within the networks. In prescribed fire treatments, central node roost trees were small in diameter, low in the forest canopy, and surrounded by low basal area compared to other trees in networks. Our results indicate that forest disturbances, including prescribed fire, can affect availability and distribution of roosts within roost tree networks. © 2011 Elsevier B.V.
Body area network--a key infrastructure element for patient-centered telemedicine.
Norgall, Thomas; Schmidt, Robert; von der Grün, Thomas
2004-01-01
The Body Area Network (BAN) extends the range of existing wireless network technologies by an ultra-low range, ultra-low power network solution optimised for long-term or continuous healthcare applications. It enables wireless radio communication between several miniaturised, intelligent Body Sensor (or actor) Units (BSU) and a single Body Central Unit (BCU) worn on the human body. A separate wireless transmission link from the BCU to a network access point--using different technology--provides for online access to BAN components via the usual network infrastructure. The BAN network protocol maintains dynamic ad-hoc network configuration scenarios and co-existence of multiple networks. BAN is expected to become a basic infrastructure element for electronic health services: by integrating patient-attached sensors, mobile actor units, and distributed information and data processing systems, the range of medical workflows can be extended to include applications like wireless multi-parameter patient monitoring and therapy support. Beyond clinical use and professional disease management environments, private personal health assistance scenarios (without financial reimbursement by health agencies/insurance companies) enable a wide range of applications and services in future pervasive computing and networking environments.
Towards an integrated EU data system within AtlantOS project
NASA Astrophysics Data System (ADS)
Pouliquen, Sylvie; Harscoat, Valerie; Waldmann, Christoph; Koop-Jakobsen, ketill
2017-04-01
The H2020 AtlantOS project started in June 2015 and aims to optimise and enhance the Integrated Atlantic Ocean Observing Systems (IAOOS). One goal is to ensure that data from different and diverse in-situ observing networks are readily accessible and useable to the wider community, the international ocean science community and other stakeholders in this field. To achieve that, the strategy is to move towards an integrated data system within AtlantOS that harmonises work flows, data processing and distribution across the in-situ observing network systems, and integrates in-situ observations into existing European and international data infrastructures (Copernicus marine service, SeaDataNet NODCs, EMODnet, OBIS, GEOSS), the so-called Integrators. The targeted integrated system will deal with data management challenges for efficient and reliable data service to users: • Quality control commons for heterogeneous and near real time data • Standardisation of mandatory metadata for efficient data exchange • Interoperability of network and integrator data management systems. Presently the situation is that the data acquired by the different in situ observing networks contributing to the AtlantOS project are processed and distributed using different methodologies and means. Depending on the network data management organization, the data are either processed following recommendations elaborated by the network teams and accessible through a unique portal (FTP or Web), or are processed by individual scientific researchers and made available through National Data Centres or directly at institution level. Some datasets are available through Integrators, such as Copernicus or EMODnet, but connected through ad-hoc links. To facilitate access to the Atlantic observations and avoid "mixing pears with apples", it has been necessary to agree on (1) the EOVs list and definition across the Networks, (2) a minimum set of common vocabularies for metadata and data description to be used by all the Networks, and (3) a minimum level of Near Real Time Quality Control Procedures for selected EOVs. Then a data exchange backbone has been defined and is being set up to facilitate discovery, viewing and downloading by the users. Some tools will be recommended to help Networks plug their data into this backbone and facilitate integration into the Integrators. Finally, existing services to the users for data discovery, viewing and downloading will be enhanced to ease access to existing observations. An initial working phase relying on existing international standards and protocols, involving data providers, both Networks and Integrators, and dealing with data harmonisation and integration objectives, has led to agreements and recommendations. The setup phase has started, both on the Networks and Integrators sides, to adapt the existing systems in order to move towards this integrated EU data system within AtlantOS, as well as collaboration with international partners around the Atlantic Ocean.
Use of DHCP to provide essential information for care and management of HIV patients.
Pfeil, C. N.; Ivey, J. L.; Hoffman, J. D.; Kuhn, I. M.
1991-01-01
The Department of Veterans' Affairs (VA) has reported over 10,000 Acquired Immune Deficiency Syndrome (AIDS) cases since the beginning of the epidemic. These cases were distributed throughout 152 of the VA's network of 172 medical centers and outpatient clinics. This network of health care facilities presents a unique opportunity to provide computer based information systems for clinical care and resource monitoring for these patients. The VA further facilitates such a venture through its commitment to the Decentralized Hospital Computer Program (DHCP). This paper describes a new application within DHCP known as the VA's HIV Registry. This project addresses the need to support clinical information as well as the added need to manage the resources necessary to care for HIV patients. PMID:1807575
Wireless Sensor Network Handles Image Data
NASA Technical Reports Server (NTRS)
2008-01-01
To relay data from remote locations for NASA's Earth sciences research, Goddard Space Flight Center contributed to the development of "microservers" (wireless sensor network nodes), which are now used commercially as a quick and affordable means to capture and distribute geographical information, including rich sets of aerial and street-level imagery. NASA began this work out of a necessity for real-time recovery of remote sensor data. These microservers work much like a wireless office network, relaying information between devices. The key difference, however, is that instead of linking workstations within one office, the interconnected microservers operate miles away from one another. This attribute traces back to the technology's original use: the microservers were originally designed for seismology on remote glaciers and ice streams in Alaska, Greenland, and Antarctica, acquiring, storing, and relaying data wirelessly between ground sensors. The microservers boast three key attributes. First, a researcher in the field can establish a "managed network" of microservers and rapidly see the data streams (recovered wirelessly) on a field computer. This rapid feedback permits the researcher to reconfigure the network for different purposes over the course of a field campaign. Second, through careful power management, the microservers can dwell unsupervised in the field for up to 2 years, collecting tremendous amounts of data at a research location. The third attribute is the exciting potential to deploy a microserver network that works in synchrony with robotic explorers (e.g., providing ground truth validation for satellites, supporting rovers as they traverse the local environment). Managed networks of remote microservers that relay data unsupervised for up to 2 years can drastically reduce the costs of field instrumentation and data recovery.
Networks of Firms and the Ridge in the Production Space
NASA Astrophysics Data System (ADS)
Souma, Wataru
We develop complex networks that represent activities in the economy. The network in this study is constructed from firms and the relationships between firms, i.e., shareholding, interlocking directors, transactions, and joint applications for patents. Thus, the network is regarded as a multigraph and also as a weighted network. By calculating various network indices, we clarify the characteristics of the network. We also consider the dynamics of firms in the production space that is characterized by capital stock, employment, and profit. Each firm moves within this space to maximize its profit by controlling capital stock and employment. We show that the dynamics of rational firms can be described using a ridge equation. We solve this equation analytically by assuming an extensive Cobb-Douglas production function, and thereby obtain a solution. By comparing the distribution of firms with this solution, we find that almost all of the 1,100 firms listed on the first section of the Tokyo stock exchange and belonging to the manufacturing sector are managed efficiently.
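For reference, a minimal statement of the ingredients mentioned here, using assumed notation rather than the paper's: a Cobb-Douglas technology, the resulting profit, and the first-order conditions that the firms' adjustments of capital and employment act on.

```latex
% Illustrative forms only; the symbols, exponents, and prices are assumptions.
Y = A\,K^{\alpha} L^{\beta}, \qquad
\pi = p\,A\,K^{\alpha} L^{\beta} - rK - wL, \qquad
\frac{\partial \pi}{\partial K} = \alpha \frac{pY}{K} - r, \quad
\frac{\partial \pi}{\partial L} = \beta \frac{pY}{L} - w .
```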
Dynamic spectrum management: an impact on EW systems
NASA Astrophysics Data System (ADS)
Gajewski, P.; Łopatka, J.; Suchanski, M.
2017-04-01
The rapid evolution of wireless systems has caused an enormous growth in the data streams transmitted through networks and, as a consequence, an accompanying demand for spectrum resources (SR). Avoiding undesirable disturbances is one of the main demands in military communications. To solve the interference problems, dynamic spectrum management (DSM) techniques can be used. Two main techniques are possible: centralized Coordinated Dynamic Spectrum Access (CDSA) and distributed Opportunistic Spectrum Access (OSA). CDSA enables the automation of wireless network planning and a system's dynamic reaction to random changes in the Radio Environment (RE). For OSA, cognitive radio (CR) is the most promising technology; it enables avoidance of interference with other spectrum users through adaptation of the CR's transmission parameters to the current radio situation, according to predefined Radio Policy rules. If DSM techniques are used, corresponding changes in EW systems are also needed. On the one hand, new jamming techniques should be elaborated; on the other hand, rules and protocols for cooperation between communication networks and EW systems should be developed.
NASA Technical Reports Server (NTRS)
Shariq, Syed Z.; Kutler, Paul (Technical Monitor)
1997-01-01
The emergence of rapidly expanding technologies for distribution and dissemination of information and knowledge has brought into focus the opportunities for development of knowledge-based networks, knowledge dissemination and knowledge management technologies and their potential applications for enhancing the productivity of knowledge work. The challenging and complex problems of the future can be best addressed by developing knowledge management as a new discipline based on an integrative synthesis of hard and soft sciences. A knowledge management professional society can provide a framework for catalyzing the development of the proposed synthesis as well as serve as a focal point for coordination of professional activities in the strategic areas of education, research and technology development. Preliminary concepts for the development of the knowledge management discipline and the professional society are explored. Within this context of the knowledge management discipline and the professional society, potential opportunities for application of information technologies for more effectively delivering or transferring information and knowledge (i.e., resulting from NASA's Mission to Planet Earth) for the development of policy options in critical areas of national and global importance (i.e., policy decisions in economic and environmental areas) can be explored, particularly for those policy areas where a global collaborative knowledge network is likely to be critical to the acceptance of the policies.
Ancillary study management systems: a review of needs
2013-01-01
Background The valuable clinical data, specimens, and assay results collected during a primary clinical trial or observational study can enable researchers to answer additional, pressing questions with relatively small investments in new measurements. However, management of such follow-on, “ancillary” studies is complex. It requires coordinating across institutions, sites, repositories, and approval boards, as well as distributing, integrating, and analyzing diverse data types. General-purpose software systems that simplify the management of ancillary studies have not yet been explored in the research literature. Methods We have identified requirements for ancillary study management primarily as part of our ongoing work with a number of large research consortia. These organizations include the Center for HIV/AIDS Vaccine Immunology (CHAVI), the Immune Tolerance Network (ITN), the HIV Vaccine Trials Network (HVTN), the U.S. Military HIV Research Program (MHRP), and the Network for Pancreatic Organ Donors with Diabetes (nPOD). We also consulted with researchers at a range of other disease research organizations regarding their workflows and data management strategies. Lastly, to enhance breadth, we reviewed process documents for ancillary study management from other organizations. Results By exploring characteristics of ancillary studies, we identify differentiating requirements and scenarios for ancillary study management systems (ASMSs). Distinguishing characteristics of ancillary studies may include the collection of additional measurements (particularly new analyses of existing specimens); the initiation of studies by investigators unaffiliated with the original study; cross-protocol data pooling and analysis; pre-existing participant consent; and pre-existing data context and provenance. For an ASMS to address these characteristics, it would need to address both operational requirements (e.g., allocating existing specimens) and data management requirements (e.g., securely distributing and integrating primary and ancillary data). Conclusions The scenarios and requirements we describe can help guide the development of systems that make conducting ancillary studies easier, less expensive, and less error-prone. Given the relatively consistent characteristics and challenges of ancillary study management, general-purpose ASMSs are likely to be useful to a wide range of organizations. Using the requirements identified in this paper, we are currently developing an open-source, general-purpose ASMS based on LabKey Server (http://www.labkey.org) in collaboration with CHAVI, the ITN and nPOD. PMID:23294514
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan
A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Chase Qishi; Zhu, Michelle Mengxia
The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific workflows with the convenience of a few mouse clicks while hiding the implementation and technical details from end users. Particularly, we will consider two types of applications with distinct performance requirements: data-centric and service-centric applications. For data-centric applications, the main workflow task involves large-volume data generation, catalog, storage, and movement, typically from supercomputers or experimental facilities to a team of geographically distributed users; while for service-centric applications, the main focus of the workflow is on data archiving, preprocessing, filtering, synthesis, visualization, and other application-specific analysis. We will conduct a comprehensive comparison of existing workflow systems and choose the best suited one with open-source code, a flexible system structure, and a large user base as the starting point for our development. Based on the chosen system, we will develop and integrate new components including a black box design of computing modules, performance monitoring and prediction, and workflow optimization and reconfiguration, which are missing from existing workflow systems. A modular design for separating specification, execution, and monitoring aspects will be adopted to establish a common generic infrastructure suited for a wide spectrum of science applications.
We will further design and develop efficient workflow mapping and scheduling algorithms to optimize the workflow performance in terms of minimum end-to-end delay, maximum frame rate, and highest reliability. We will develop and demonstrate the SWAMP system in a local environment, the grid network, and the 100 Gbps Advanced Network Initiative (ANI) testbed. The demonstration will target scientific applications in climate modeling and high energy physics, and the functions to be demonstrated include workflow deployment, execution, steering, and reconfiguration. Throughout the project period, we will work closely with the science communities in the fields of climate modeling and high energy physics, including the Spallation Neutron Source (SNS) and Large Hadron Collider (LHC) projects, to mature the system for production use.
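The workflow mapping problem sketched in this project description can be illustrated with a toy earliest-finish-time list scheduler: order the task DAG topologically and greedily place each task on the resource that completes it soonest, accounting for data transfers between resources. The task names, speeds, and transfer costs below are hypothetical, and this is only an illustration of the general technique, not the SWAMP scheduler.

```python
# Toy workflow mapping sketch: earliest-finish-time list scheduling of a task DAG
# onto heterogeneous resources. All numbers are hypothetical.
from graphlib import TopologicalSorter

tasks = {"acquire": 40, "filter": 25, "simulate": 90, "visualize": 15}   # work units
deps = {"filter": {"acquire"}, "simulate": {"filter"}, "visualize": {"simulate"}}
speed = {"cluster": 4.0, "workstation": 1.0}        # work units per second
transfer = 3.0                                      # seconds to move data between resources

def schedule(tasks, deps, speed):
    ready_at = {r: 0.0 for r in speed}              # when each resource frees up
    finish, placed = {}, {}
    for t in TopologicalSorter(deps).static_order():
        best = None
        for r, s in speed.items():
            # A task starts once the resource is free and all inputs have arrived.
            arrivals = [finish[d] + (0 if placed[d] == r else transfer)
                        for d in deps.get(t, ())]
            start = max([ready_at[r]] + arrivals)
            f = start + tasks[t] / s
            if best is None or f < best[0]:
                best = (f, r)
        finish[t], placed[t] = best
        ready_at[best[1]] = best[0]
    return placed, finish

print(schedule(tasks, deps, speed))   # task -> resource, task -> finish time
```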
A knowledge base architecture for distributed knowledge agents
NASA Technical Reports Server (NTRS)
Riedesel, Joel; Walls, Bryan
1990-01-01
A tuple space based object oriented model for knowledge base representation and interpretation is presented. An architecture for managing distributed knowledge agents is then implemented within the model. The general model is based upon a database implementation of a tuple space. Objects are then defined as an additional layer upon the database. The tuple space may or may not be distributed depending upon the database implementation. A language for representing knowledge and inference strategy is defined whose implementation takes advantage of the tuple space. The general model may then be instantiated in many different forms, each of which may be a distinct knowledge agent. Knowledge agents may communicate using tuple space mechanisms as in the LINDA model as well as using more well known message passing mechanisms. An implementation of the model is presented describing strategies used to keep inference tractable without giving up expressivity. An example applied to a power management and distribution network for Space Station Freedom is given.
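A minimal tuple-space sketch in the spirit of the LINDA-style coordination mentioned above; the operations and matching rule are a toy illustration, not the paper's implementation.

```python
# Toy tuple space: out() deposits tuples, rd() reads a matching tuple without
# removing it, and in_() removes it (named in_ because "in" is a Python keyword).
# None in a pattern acts as a wildcard, in the spirit of LINDA-style matching.
class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, tup):
        self.tuples.append(tup)

    def _match(self, pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == v for p, v in zip(pattern, tup))

    def rd(self, pattern):
        return next((t for t in self.tuples if self._match(pattern, t)), None)

    def in_(self, pattern):
        t = self.rd(pattern)
        if t is not None:
            self.tuples.remove(t)
        return t

# Usage sketch: two knowledge agents coordinating through the space
space = TupleSpace()
space.out(("fault", "bus-7", "overcurrent"))   # diagnosis agent posts a fact
print(space.rd(("fault", None, None)))         # planner reads it without consuming
print(space.in_(("fault", "bus-7", None)))     # recovery agent consumes it
```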
Advanced Energy Storage Management in Distribution Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Guodong; Ceylan, Oguzhan; Xiao, Bailu
2016-01-01
With increasing penetration of distributed generation (DG) in the distribution networks (DN), the secure and optimal operation of DN has become an important concern. In this paper, an iterative mixed-integer quadratically constrained quadratic programming model to optimize the operation of a three-phase unbalanced distribution system with high penetration of Photovoltaic (PV) panels, DG and energy storage (ES) is developed. The proposed model minimizes not only the operating cost, including fuel cost and purchasing cost, but also voltage deviations and power loss. The optimization model is based on the linearized sensitivity coefficients between state variables (e.g., node voltages) and control variables (e.g., real and reactive power injections of DG and ES). To avoid slow convergence when close to the optimum, a golden search method is introduced to control the step size and accelerate the convergence. The proposed algorithm is demonstrated on modified IEEE 13-node test feeders with multiple PV panels, DG and ES. Numerical simulation results validate the proposed algorithm. Various scenarios of system configuration are studied and some critical findings are concluded.
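The golden search step-size control mentioned in the abstract can be illustrated with a standard golden-section line search over the step length; the one-dimensional objective below is a placeholder standing in for re-evaluating the linearised cost-plus-penalty model, not the paper's actual objective.

```python
# Golden-section search sketch for picking the step size along a computed
# update direction. f() stands in for re-evaluating the linearised objective.
import math

def golden_section(f, a, b, tol=1e-4):
    inv_phi = (math.sqrt(5) - 1) / 2              # ~0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                           # minimum lies in [a, d_old]
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d                           # minimum lies in [c_old, b]
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Placeholder objective: cost falls, then a voltage-deviation penalty takes over.
f = lambda step: (step - 0.7) ** 2 + 0.05
print(round(golden_section(f, 0.0, 1.0), 3))      # ~0.7
```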
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elizondo, Marcelo A.; Samaan, Nader A.; Makarov, Yuri V.
Voltage and reactive power system control is generally performed following usual patterns of loads, based on off-line studies for daily and seasonal operations. This practice is currently challenged by the inclusion of distributed renewable generation, such as solar. There has been focus on resolving this problem at the distribution level; however, the transmission and sub-transmission levels have received less attention. This paper provides a literature review of proposed methods and solution approaches to coordinate and optimize voltage control and reactive power management, with an emphasis on applications at the transmission and sub-transmission levels. The conclusion drawn from the survey is that additional research is needed in the areas of optimizing switched shunt actions and coordinating all available resources to deal with uncertain patterns from increasing distributed renewable generation in the operational time frame. These topics are not deeply explored in the literature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattarai, Bishnu; Kouzelis, Konstantinos; Mendaza, Iker
The gradual penetration of active loads in low voltage distribution grids is expected to challenge their network capacity in the near future. Distribution system operators should for this reason resort either to costly grid reinforcements or to demand side management mechanisms. Since demand side management implementation is usually cheaper, it is also the preferable solution. To this end, this article presents a framework for handling grid limit violations, both voltage and current, to ensure secure, high-quality operation of the distribution grid. This framework consists of two steps, namely a proactive centralized and subsequently a reactive decentralized control scheme. The former is employed to balance the load one hour ahead, while the latter aims at regulating the consumption in real time. In both cases, the importance of fair use of electricity demand flexibility is emphasized. Thus, it is demonstrated that this methodology aids in keeping the grid status within preset limits while utilizing flexibility from all flexibility participants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnstad, H.
The purpose of this meeting is to discuss the current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the Meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards.
A Routing Mechanism for Cloud Outsourcing of Medical Imaging Repositories.
Godinho, Tiago Marques; Viana-Ferreira, Carlos; Bastião Silva, Luís A; Costa, Carlos
2016-01-01
Web-based technologies have been increasingly used in picture archive and communication systems (PACS), in services related to storage, distribution, and visualization of medical images. Nowadays, many healthcare institutions are outsourcing their repositories to the cloud. However, managing communications between multiple geo-distributed locations is still challenging due to the complexity of dealing with huge volumes of data and bandwidth requirements. Moreover, standard methodologies still do not take full advantage of outsourced archives, namely because their integration with other in-house solutions is troublesome. In order to improve the performance of distributed medical imaging networks, a smart routing mechanism was developed. This includes an innovative cache system based on splitting and dynamic management of Digital Imaging and Communications in Medicine (DICOM) objects. The proposed solution was successfully deployed in a regional PACS archive. The results obtained show that it outperforms conventional approaches, as it reduces both remote access latency and the required cache storage space.
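The split-and-cache idea can be illustrated with a toy fragment cache: studies are split into fixed-size chunks and an LRU policy keeps the most recently used chunks close to the requesting site. The chunk size, the eviction policy, and the in-memory "archive" below are illustrative assumptions, not the paper's design.

```python
# Toy fragment cache: objects are split into fixed-size chunks; an LRU policy
# keeps recently used chunks locally and pulls the rest from the cloud archive.
from collections import OrderedDict

CHUNK = 4                                          # bytes per chunk (tiny, for illustration)

class FragmentCache:
    def __init__(self, capacity_chunks, remote_store):
        self.cache = OrderedDict()                 # (object_id, chunk_index) -> bytes
        self.capacity = capacity_chunks
        self.remote = remote_store

    def get_object(self, object_id):
        data, blob = [], self.remote[object_id]
        for i in range(0, len(blob), CHUNK):
            key = (object_id, i // CHUNK)
            if key in self.cache:
                self.cache.move_to_end(key)        # hit: refresh recency
            else:
                self.cache[key] = blob[i:i + CHUNK]   # miss: fetch chunk from remote
                if len(self.cache) > self.capacity:
                    self.cache.popitem(last=False)    # evict least recently used
            data.append(self.cache[key])
        return b"".join(data)

# Usage sketch with a dict standing in for the outsourced archive
archive = {"CT-001": b"0123456789abcdef"}
cache = FragmentCache(capacity_chunks=3, remote_store=archive)
print(cache.get_object("CT-001"))
```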
A Content Markup Language for Data Services
NASA Astrophysics Data System (ADS)
Noviello, C.; Acampa, P.; Mango Furnari, M.
Network content delivery and document sharing are possible using a variety of technologies, such as distributed databases, service-oriented applications, and so forth. The development of such systems is a complex job, because the document life cycle involves strong cooperation between domain experts and software developers. Furthermore, emerging software methodologies, such as service-oriented architecture and knowledge organization (e.g., the semantic web), have not really solved the problems faced in a real distributed and cooperative setting. In this chapter the authors' efforts to design and deploy a distributed and cooperative content management system are described. The main features of the system are a user-configurable document type definition and a management middleware layer, which allows CMS developers to orchestrate the composition of specialized software components around the structure of a document. The chapter also reports some of the experience gained in deploying the framework in a cultural heritage dissemination setting.
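Since the system's key feature is a user-configurable document type definition that the middleware uses to orchestrate specialized components, a small sketch helps fix the idea. The type name, fields, and handler registry below are hypothetical illustrations, not taken from the authors' framework:

```python
# Hypothetical illustration: a document type definition drives which
# specialized components the middleware invokes for each part of a document.

DOCUMENT_TYPES = {
    "museum_record": {          # user-configurable type definition (assumed)
        "title":    "text_renderer",
        "artwork":  "image_service",
        "location": "map_service",
    },
}

HANDLERS = {                    # specialized components registered with the middleware
    "text_renderer": lambda value: f"<p>{value}</p>",
    "image_service": lambda value: f"<img src='{value}'/>",
    "map_service":   lambda value: f"<map ref='{value}'/>",
}

def render(doc_type, document):
    """Orchestrate component calls according to the document type definition."""
    schema = DOCUMENT_TYPES[doc_type]
    return "\n".join(HANDLERS[schema[field]](value)
                     for field, value in document.items() if field in schema)

print(render("museum_record",
             {"title": "Votive statue", "artwork": "statue.jpg", "location": "Naples"}))
```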
NASA Astrophysics Data System (ADS)
Kaniyantethu, Shaji
2011-06-01
This paper discusses the many features and constituent technologies of Firestorm™, a distributed collaborative fires and effects software system. Modern response management systems capitalize on the capabilities of a plethora of sensors and their outputs for situational awareness. Firestorm utilizes a unique networked lethality approach by integrating unmanned air and ground vehicles to provide target handoff and sharing of data between humans and sensors. The system employs Bayesian networks for track management of sensor data, and distributed auction algorithms for allocating targets and delivering the right effect without information overload to the Warfighter. The Firestorm Networked Effects Component provides joint weapon-target pairing, attack guidance, target selection standards, and other fires and effects components. Moreover, the open and modular architecture allows for easy integration with new data sources. The versatility and adaptability of the application enable it to devise and dispense a suitable response to a wide variety of scenarios. Recently, this application was used for detecting and countering a vehicle intruder with the help of a radio frequency spotter sensor, command-driven cameras, a remote weapon system, a portable vehicle arresting barrier, and an unmanned aerial vehicle, which confirmed the presence of the intruder and provided lethal/non-lethal response and battle damage assessment. The completed demonstrations have proved Firestorm™'s validity and feasibility for predicting, detecting, and neutralizing a variety of possible threats and for protecting key assets and areas against them. The sensors and responding assets can be deployed in numerous configurations to cover various terrain and environmental conditions, and can be integrated with a number of platforms.
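The abstract mentions distributed auction algorithms for allocating targets to effectors. A toy single-round greedy auction in Python, with made-up asset names and utility scores (not the Firestorm implementation), illustrates the mechanism:

```python
# Toy single-round auction for target allocation (illustrative only):
# each effector bids its utility for every target; highest bids win.

def auction_round(bids):
    """bids: {effector: {target: utility}}. Returns {target: winning effector},
    letting each effector win at most one target (greedy by bid value)."""
    offers = sorted(
        ((utility, effector, target)
         for effector, targets in bids.items()
         for target, utility in targets.items()),
        reverse=True)
    assignment, used = {}, set()
    for utility, effector, target in offers:
        if target not in assignment and effector not in used:
            assignment[target] = effector
            used.add(effector)
    return assignment

bids = {
    "uav_1": {"intruder_vehicle": 0.9, "perimeter_breach": 0.4},
    "rws_1": {"intruder_vehicle": 0.7, "perimeter_breach": 0.8},
}
print(auction_round(bids))
# -> {'intruder_vehicle': 'uav_1', 'perimeter_breach': 'rws_1'}
```

A real distributed auction would iterate with price updates and message passing between nodes; this sketch only shows the bid-and-assign core.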
Citizen Science to Support Community-based Flood Early Warning and Resilience Building
NASA Astrophysics Data System (ADS)
Paul, J. D.; Buytaert, W.; Allen, S.; Ballesteros-Cánovas, J. A.; Bhusal, J.; Cieslik, K.; Clark, J.; Dewulf, A.; Dhital, M. R.; Hannah, D. M.; Liu, W.; Nayaval, J. L.; Schiller, A.; Smith, P. J.; Stoffel, M.; Supper, R.
2017-12-01
In Disaster Risk Management, an emerging shift has been noted from broad-scale, top-down assessments towards more participatory, community-based, bottom-up approaches. Combined with technologies for robust and low-cost sensor networks, a citizen science approach has recently emerged as a promising direction in the provision of extensive, real-time information for flood early warning systems. Here we present the framework and initial results of a major new international project, Landslide EVO, aimed at increasing local resilience against hydrologically induced disasters in western Nepal by exploiting participatory approaches to knowledge generation and risk governance. We identify three major technological developments that strongly support our approach to flood early warning and resilience building in Nepal. First, distributed sensor networks, participatory monitoring, and citizen science hold great promise in complementing official monitoring networks and remote sensing by generating site-specific information with local buy-in, especially in data-scarce regions. Secondly, the emergence of open source, cloud-based risk analysis platforms supports the construction of a modular, distributed, and potentially decentralised data processing workflow. Finally, linking data analysis platforms to social computer networks and ICT (e.g. mobile phones, tablets) allows tailored interfaces and people-centred decision- and policy-support systems to be built. Our proposition is that maximum impact is created if end-users are involved not only in data collection, but also over the entire project life-cycle, including the analysis and provision of results. In this context, citizen science complements more traditional knowledge generation practices, and also enhances multi-directional information provision, risk management, early-warning systems and local resilience building.
Guest Editors' Introduction
NASA Astrophysics Data System (ADS)
Guerraoui, Rachid; Vinoski, Steve
1997-09-01
The organization of a distributed system can have a tremendous impact on its capabilities, its performance, and its ability to evolve to meet changing requirements. For example, the client-server organization model has proven to be adequate for organizing a distributed system as a number of distributed servers that offer various functions to client processes across the network. However, it lacks peer-to-peer capabilities, and experience with the model has been predominantly in the context of local networks. To achieve peer-to-peer cooperation in a more global context, systems issues of scale, heterogeneity, configuration management, accounting and sharing are crucial, and the complexity of migrating from locally distributed to more global systems demands new tools and techniques. An emphasis on interfaces and modules leads to the modelling of a complex distributed system as a collection of interacting objects that communicate with each other only using requests sent to well-defined interfaces. Although object granularity typically varies at different levels of a system architecture, the same object abstraction can be applied to various levels of a computing architecture. Since 1989, the Object Management Group (OMG), an international software consortium, has been defining an architecture for distributed object systems called the Object Management Architecture (OMA). At the core of the OMA is a 'software bus' called an Object Request Broker (ORB), which is specified by the OMG Common Object Request Broker Architecture (CORBA) specification. The OMA distributed object model fits the structure of heterogeneous distributed applications, and is applied in all layers of the OMA. For example, each of the OMG Object Services, such as the OMG Naming Service, is structured as a set of distributed objects that communicate using the ORB. Similarly, higher-level OMA components such as Common Facilities and Domain Interfaces are also organized as distributed objects that can be layered over both Object Services and the ORB. The OMG creates specifications, not code, but the interfaces it standardizes are always derived from demonstrated technology submitted by member companies. The specified interfaces are written in a neutral Interface Definition Language (IDL) that defines contractual interfaces with potential clients. Interfaces written in IDL can be translated to a number of programming languages via OMG standard language mappings so that they can be used to develop components. The resulting components can transparently communicate with other components written in different languages and running on different operating systems and machine types. The ORB is responsible for providing the illusion of 'virtual homogeneity' regardless of the programming languages, tools, operating systems and networks used to realize and support these components. With the adoption of the CORBA 2.0 specification in 1995, these components are able to interoperate across multi-vendor CORBA-based products. More than 700 member companies have joined the OMG, including Hewlett-Packard, Digital, Siemens, IONA Technologies, Netscape, Sun Microsystems, Microsoft and IBM, which makes it the largest standards body in existence. These companies continue to work together within the OMG to refine and enhance the OMA and its components.
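For readers unfamiliar with the broker idea, the following is a deliberately simplified Python analogy, not the CORBA API or any OMG-specified interface: an object registers a named interface with a broker, and clients reach it only through requests sent to that interface name, never through a direct reference.

```python
# Simplified, language-level analogy of an Object Request Broker (not CORBA):
# clients address objects only by interface name and operation.

class Broker:
    def __init__(self):
        self._objects = {}

    def register(self, interface_name, implementation):
        self._objects[interface_name] = implementation

    def request(self, interface_name, operation, *args):
        """Dispatch a request to whatever object implements the named interface."""
        target = self._objects[interface_name]
        return getattr(target, operation)(*args)

class NamingServiceImpl:
    """Stand-in for a service reached only through the broker."""
    def __init__(self):
        self._bindings = {}
    def bind(self, name, reference):
        self._bindings[name] = reference
    def resolve(self, name):
        return self._bindings[name]

orb = Broker()
orb.register("NamingService", NamingServiceImpl())
orb.request("NamingService", "bind", "printer", "object_ref_42")
print(orb.request("NamingService", "resolve", "printer"))   # object_ref_42
```

In CORBA the equivalent contract would be declared in IDL and mapped to each implementation language; the point of the sketch is only the indirection through well-defined interfaces.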
This special issue of Distributed Systems Engineering publishes five papers that were originally presented at the 'Distributed Object-Based Platforms' track of the 30th Hawaii International Conference on System Sciences (HICSS), which was held in Wailea on Maui on 6-10 January 1997. The papers, which were selected based on their quality and the range of topics they cover, address different aspects of CORBA, including advanced aspects such as fault tolerance and transactions. These papers discuss the use of CORBA and evaluate CORBA-based development for different types of distributed object systems and architectures. The first paper, by S Rahkila and S Stenberg, discusses the application of CORBA to telecommunication management networks. In the second paper, P Narasimhan, L E Moser and P M Melliar-Smith present a fault-tolerant extension of an ORB. The third paper, by J Liang, S Sédillot and B Traverson, provides an overview of the CORBA Transaction Service and its integration with the ISO Distributed Transaction Processing protocol. In the fourth paper, D Sherer, T Murer and A Würtz discuss the evolution of a cooperative software engineering infrastructure to a CORBA-based framework. The fifth paper, by R Fatoohi, evaluates the communication performance of a commercially available Object Request Broker (Orbix from IONA Technologies) on several networks, and compares the performance with that of more traditional communication primitives (e.g., BSD UNIX sockets and PVM). We wish to thank both the referees and the authors of these papers, as their cooperation was fundamental in ensuring timely publication.
Pearce, Christopher; Shearer, Marianne; Gardner, Karina; Kelly, Jill; Xu, Tony Baixian
2012-01-01
This paper describes how the Melbourne East General Practice Network supports general practice to enable quality of care; it also describes the challenges and enablers of change and the evidence of practice capacity building and improved quality of care. Primary care is well known as a place where quality, relatively inexpensive medical care occurs. General practice is made up of multiple small sites with fragmented systems and a funding system that challenges a whole-of-practice approach to clinical care. General Practice Networks support GPs to synthesise complexity and crystallise solutions that enhance general practice beyond current capacity. Through a culture of change management, GP Networks create the link between the practice and the big picture of the whole health system and reduce the isolation of general practice. They distribute information (evidence-based learning and resources) and provide individualised support, responding to practice need and capacity.
Cloud-based image sharing network for collaborative imaging diagnosis and consultation
NASA Astrophysics Data System (ADS)
Yang, Yuanyuan; Gu, Yiping; Wang, Mingqing; Sun, Jianyong; Li, Ming; Zhang, Weiqiang; Zhang, Jianguo
2018-03-01
In this presentation, we present a new approach to designing a cloud-based image sharing network for collaborative imaging diagnosis and consultation over the Internet, which enables radiologists, specialists, and physicians located at different sites to perform imaging diagnosis or consultation collaboratively and interactively for difficult or emergency cases. The designed network combines a regional RIS, grid-based image distribution management, an integrated video conferencing system, and multi-platform interactive image display devices, together with secure messaging and data communication. There are three kinds of components in the network: edge servers, a grid-based imaging document registry and repository, and multi-platform display devices. The network has been deployed on Alibaba's public cloud platform and accessed through the Internet since March 2017, and has been used for small lung nodule and early-stage lung cancer diagnosis services between the radiology departments of Huadong Hospital in Shanghai and the First Hospital of Jiaxing in Zhejiang Province.
Overview of the Smart Network Element Architecture and Recent Innovations
NASA Technical Reports Server (NTRS)
Perotti, Jose M.; Mata, Carlos T.; Oostdyk, Rebecca L.
2008-01-01
In industrial environments, system operators rely on the availability and accuracy of sensors to monitor processes and detect failures of components and/or processes. The sensors must be networked in such a way that their data is reported to a central human interface, where operators are tasked with making real-time decisions based on the state of the sensors and the components that are being monitored. Incorporating health management functions at this central location aids the operator by automating the decision-making process to suggest, and sometimes perform, the action required by current operating conditions. Integrated Systems Health Management (ISHM) aims to incorporate data from many sources, including real-time and historical data and user input, and extract information and knowledge from that data to diagnose failures and predict future failures of the system. Distributing health management processing to lower levels of the architecture reduces the bandwidth required for ISHM, enhances data fusion, makes systems and processes more robust, and improves the resolution with which failures in a system, subsystem, component, or process can be detected and isolated. The Smart Network Element (SNE) has been developed at NASA Kennedy Space Center to perform intelligent functions at the sensor and actuator level in support of ISHM.
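To illustrate why pushing health management down to the sensor level saves bandwidth, here is a small hypothetical Python sketch (not the SNE firmware): the element evaluates its limit checks locally and publishes only a compact health status instead of every raw sample. The sensor name, limits, and summary window are assumptions.

```python
# Hypothetical smart-element sketch: local limit checking and status reporting,
# so only small health summaries, not raw samples, travel up the network.

from statistics import mean

class SmartElement:
    def __init__(self, sensor_id, low, high, publish):
        self.sensor_id, self.low, self.high = sensor_id, low, high
        self.publish = publish            # callable that sends a small status message
        self.samples = []

    def on_sample(self, value):
        self.samples.append(value)
        if len(self.samples) == 10:       # summarize every 10 samples (assumed window)
            avg = mean(self.samples)
            status = "OK" if self.low <= avg <= self.high else "OUT_OF_RANGE"
            self.publish({"sensor": self.sensor_id, "avg": round(avg, 2),
                          "status": status})
            self.samples.clear()

element = SmartElement("pressure_01", low=90.0, high=110.0, publish=print)
for reading in [101, 99, 103, 140, 138, 150, 97, 96, 100, 98]:
    element.on_sample(reading)
# Only one compact message is published for the ten raw readings.
```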
NASA Technical Reports Server (NTRS)
Falke, Stefan; Husar, Rudolf
2011-01-01
The goal of this REASoN applications and technology project is to deliver and use Earth Science Enterprise (ESE) data and tools in support of air quality management. Its scope falls within the domain of air quality management and aims to develop a federated air quality information sharing network that includes data from NASA, EPA, US states and others. Project goals were achieved through access to satellite and ground observation data, web services information technology, interoperability standards, and air quality community collaboration. In contributing to a network of NASA ESE data in support of particulate air quality management, the project developed access to distributed data, built Web infrastructure, and created tools for data processing and analysis. The key technologies used in the project include emerging web services for developing self-describing and modular data access and processing tools, and a service-oriented architecture for chaining web services together to assemble customized air quality management applications. The technology and tools required for this project were developed within DataFed.net, a shared infrastructure that supports collaborative atmospheric data sharing and processing web services. Much of the collaboration was facilitated through community interactions in the Federation of Earth Science Information Partners (ESIP) Air Quality Workgroup. The main activities that successfully advanced DataFed, enabled air quality applications, and established community-oriented infrastructures were: developing access to distributed data (surface and satellite); building Web infrastructure to support data access, processing, and analysis; creating tools for data processing and analysis; and fostering air quality community collaboration and interoperability.
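The service-chaining approach described here can be illustrated with a short Python sketch. The endpoint URL and function names are hypothetical placeholders, not the actual DataFed.net API; the point is only the access-process-analyze composition pattern.

```python
# Illustrative chaining of data-access and processing steps into one workflow.
# The endpoint and functions are placeholders, not the DataFed.net services.

import json
import urllib.request

def fetch_observations(url):
    """Data-access step: retrieve observation records as JSON from an endpoint."""
    with urllib.request.urlopen(url) as response:
        return json.load(response)

def filter_by_pollutant(records, pollutant="PM2.5"):
    """Processing step: keep only records for the pollutant of interest."""
    return [r for r in records if r.get("pollutant") == pollutant]

def daily_average(records):
    """Analysis step: aggregate concentrations into a single mean value."""
    values = [r["value"] for r in records]
    return sum(values) / len(values) if values else None

def run_workflow(url):
    # Chain the steps: access -> process -> analyze, as in a service-oriented pipeline.
    return daily_average(filter_by_pollutant(fetch_observations(url)))
```

In a real deployment each step would be a registered web service invoked over HTTP rather than a local function, but the chaining idea is the same.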
Plant conservation progress in the United States
Kayri Havens; Andrea Kramer; Ed. Guerrant
2017-01-01
Effective national plant conservation has several basic needs, including: 1) accessible, up-to-date information on species distribution and rarity; 2) research and management capacity to mitigate the impact of threats that make plants rare; 3) effective networks for conserving species in situ and ex situ; 4) education and training to make sure the right people are...
Naval Research Laboratory Fact Book 2012
2012-11-01
Distributed network-based battle management High performance computing supporting uniform and nonuniform memory access with single and multithreaded...hyperspectral systems VNIR, MWIR, and LWIR high-resolution systems Wideband SAR systems RF and laser data links High-speed, high-power...hyperspectral imaging system Long-wave infrared (LWIR) quantum well IR photodetector (QWIP) imaging system Research and Development Services Division
2008-01-01
Distributed network-based battle management High performance computing supporting uniform and nonuniform memory access with single and multithreaded...pallet Airborne EO/IR and radar sensors VNIR through SWIR hyperspectral systems VNIR, MWIR, and LWIR high-resolution systems Wideband SAR systems...meteorological sensors Hyperspectral sensor systems (PHILLS) Mid-wave infrared (MWIR) Indium Antimonide (InSb) imaging system Long-wave infrared (LWIR
An Efficient Resource Management System for a Streaming Media Distribution Network
ERIC Educational Resources Information Center
Cahill, Adrian J.; Sreenan, Cormac J.
2006-01-01
This paper examines the design and evaluation of a TV on Demand (TVoD) system, consisting of a globally accessible storage architecture where all TV content broadcast over a period of time is made available for streaming. The proposed architecture consists of idle Internet Service Provider (ISP) servers that can be rented and released dynamically…
Automatic, time-interval traffic counts for recreation area management planning
D. L. Erickson; C. J. Liu; H. K. Cordell
1980-01-01
Automatic, time-interval recorders were used to count directional vehicular traffic on a multiple entry/exit road network in the Red River Gorge Geological Area, Daniel Boone National Forest. Hourly counts of entering and exiting traffic differed according to recorder location, but an aggregated distribution showed a delayed peak in exiting traffic thought to be...
Enterprise-class Digital Imaging and Communications in Medicine (DICOM) image infrastructure.
York, G; Wortmann, J; Atanasiu, R
2001-06-01
Most current picture archiving and communication systems (PACS) are designed for a single department or a single modality. Few PACS installations have been deployed that support the needs of the hospital or the entire Integrated Delivery Network (IDN). The authors propose a new image management architecture that can support a large, distributed enterprise.