A Study of Quality of Service Communication for High-Speed Packet-Switching Computer Sub-Networks
NASA Technical Reports Server (NTRS)
Cui, Zhenqian
1999-01-01
With the development of high-speed networking technology, computer networks, including local-area networks (LANs), wide-area networks (WANs) and the Internet, are extending their traditional roles of carrying computer data. They are being used for Internet telephony, multimedia applications such as conferencing and video on demand, distributed simulations, and other real-time applications. LANs are even used for distributed real-time process control and computing as a cost-effective approach. Unlike traditional data transfer, these new classes of high-speed network applications (video, audio, real-time process control, and others) are delay sensitive. The usefulness of data depends not only on the correctness of received data, but also on the time at which the data are received. In other words, these new classes of applications require networks to provide guaranteed services or quality of service (QoS). Quality of service can be defined by a set of parameters and reflects a user's expectation about the underlying network's behavior. Traditionally, distinct services are provided by different kinds of networks. Voice services are provided by telephone networks, video services are provided by cable networks, and data transfer services are provided by computer networks. A single network providing different services is called an integrated-services network.
IMPROVING THE QUALITY OF SERVICE AND SECURITY OF MILITARY NETWORKS WITH A NETWORK TASKING ORDER PROCESS
2010-09-01
AFIT/DCS/ENG/10-09, USAF, September 2010. Approved for public release; distribution unlimited.
Integrated Distributed Directory Service for KSC
NASA Technical Reports Server (NTRS)
Ghansah, Isaac
1997-01-01
This paper describes an integrated distributed directory services (DDS) architecture as a fundamental component of KSC distributed computing systems. Specifically, an architecture for an integrated directory service based on DNS and X.500/LDAP is suggested. The architecture supports using DNS in its traditional role as a name service and X.500 for other services. Specific designs address the integration of X.500 DDS for Public Key Certificates, Kerberos Security Services, Network-wide Login, Electronic Mail, WWW URLs, Servers, and other diverse network objects. Issues involved in incorporating the emerging Microsoft Active Directory Service (MADS) into KSC's X.500 are also discussed.
A novel strategy for load balancing of distributed medical applications.
Logeswaran, Rajasvaran; Chen, Li-Choo
2012-04-01
Current trends in medicine, specifically in the electronic handling of medical applications ranging from digital imaging, paperless hospital administration, electronic medical records and telemedicine to computer-aided diagnosis, create a burden on the network. Distributed Service Architectures, such as the Intelligent Network (IN), Telecommunication Information Networking Architecture (TINA) and Open Service Access (OSA), are able to meet this new challenge. Distribution enables computational tasks to be spread among multiple processors; hence, performance is an important issue. This paper proposes a novel approach to load balancing, the Random Sender Initiated Algorithm, for distributing tasks among several nodes sharing the same computational object (CO) instances in Distributed Service Architectures. Simulations illustrate that the proposed algorithm produces better network performance than the benchmark load-balancing algorithms, the Random Node Selection Algorithm and the Shortest Queue Algorithm, especially under medium and heavily loaded conditions.
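The abstract names two benchmark policies (Random Node Selection and Shortest Queue) but does not spell out the Random Sender Initiated Algorithm itself, so only the benchmarks are illustrated here. The following Python sketch, with hypothetical arrival and service parameters, shows the kind of slotted-queue simulation against which such load-balancing policies are typically compared; it is not the authors' simulator.

```python
import random
from statistics import mean

def simulate(policy, n_nodes=10, arrival_rate=8.5, service_prob=1.0, slots=20000, seed=1):
    """Slotted simulation: each slot, roughly Poisson(arrival_rate) tasks arrive and are
    assigned to a node; each node completes at most one task per slot."""
    rng = random.Random(seed)
    queues = [0] * n_nodes
    samples = []
    for _ in range(slots):
        # arrivals: Binomial(100, rate/100) as a simple Poisson approximation
        arrivals = sum(rng.random() < arrival_rate / 100.0 for _ in range(100))
        for _ in range(arrivals):
            if policy == "random":
                i = rng.randrange(n_nodes)          # Random Node Selection benchmark
            else:
                i = min(range(n_nodes), key=lambda j: queues[j])  # Shortest Queue benchmark
            queues[i] += 1
        for j in range(n_nodes):                    # service completions
            if queues[j] and rng.random() < service_prob:
                queues[j] -= 1
        samples.append(mean(queues))
    return mean(samples)

for policy in ("random", "shortest_queue"):
    print(policy, "mean queue length:", round(simulate(policy), 2))
```

Under the roughly 85% load used here, the shortest-queue policy keeps mean queue lengths noticeably lower than random assignment, which is the gap the proposed algorithm is measured against.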
The NUONCE engine for LEO networks
NASA Technical Reports Server (NTRS)
Lo, Martin W.; Estabrook, Polly
1995-01-01
Typical LEO networks use constellations that provide uniform coverage. However, the demand for telecom service is dynamic and unevenly distributed around the world. We examine a more efficient and cost-effective design by matching the satellite coverage with the cyclical demand for service around the world. Our approach is to use a non-uniform satellite distribution for the network. We have named this constellation design NUONCE, for Non-Uniform Optimal Network Communications Engine.
75 FR 9343 - Nomenclature Change Relating to the Network Distribution Center Transition
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-02
... POSTAL SERVICE 39 CFR Parts 111 and 121 Nomenclature Change Relating to the Network Distribution... (BMC) to network distribution centers (NDC), by replacing all text references to "BMC" with "NDC"...: Background: The BMC network was established in the 1970s to process Parcel Post®, Bound Printed Matter...
An evolving model for the lodging-service network in a tourism destination
NASA Astrophysics Data System (ADS)
Hernández, Juan M.; González-Martel, Christian
2017-09-01
Tourism is a complex dynamic system including multiple actors that are related to each other, composing an evolving social network. This paper presents a growing model that explains how part of the supply components in a tourism system forms a social network. Specifically, the lodgings and services in a destination are the network nodes, and a link between them appears if a representative tourist hosted in the lodging visits/consumes the service during his/her stay. The specific links between the two categories are determined by a random and preferential attachment rule. The analytic results show that the long-term degree distribution of services follows a shifted power-law distribution. The numerical simulations show slight disagreements with the theoretical results in the case of the one-mode degree distribution of services, due to the low order of convergence to zero of X-motifs. The model predictions are compared with real data coming from a popular tourist destination in Gran Canaria, Spain, showing a good agreement between analytical and empirical data for the degree distribution of services. The theoretical model was validated assuming four types of perturbations in the real data.
Layer 1 VPN services in distributed next-generation SONET/SDH networks with inverse multiplexing
NASA Astrophysics Data System (ADS)
Ghani, N.; Muthalaly, M. V.; Benhaddou, D.; Alanqar, W.
2006-05-01
Advances in next-generation SONET/SDH along with GMPLS control architectures have enabled many new service provisioning capabilities. In particular, a key services paradigm is the emergent Layer 1 virtual private network (L1 VPN) framework, which allows multiple clients to utilize a common physical infrastructure and provision their own 'virtualized' circuit-switched networks. This avoids the need for expensive infrastructure builds and increases resource utilization for carriers. Along these lines, a novel L1 VPN services resource management scheme for next-generation SONET/SDH networks is proposed that fully leverages advanced virtual concatenation and inverse multiplexing features. Additionally, both centralized and distributed GMPLS-based implementations are also described to support the proposed L1 VPN services model. Detailed performance analysis results are presented along with avenues for future research.
Final Report - Cloud-Based Management Platform for Distributed, Multi-Domain Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chowdhury, Pulak; Mukherjee, Biswanath
2017-11-03
In this Department of Energy (DOE) Small Business Innovation Research (SBIR) Phase II project final report, Ennetix presents the development of a solution for end-to-end monitoring, analysis, and visualization of network performance for distributed networks. This solution benefits enterprises of all sizes, operators of distributed and federated networks, and service providers.
NASA Astrophysics Data System (ADS)
Ma, Xuejiao; Gan, Chaoqin; Deng, Shiqi; Huang, Yan
2011-11-01
A survivable wavelength division multiplexing passive optical network enabling both point-to-point service and broadcast service is presented and demonstrated. This architecture provides automatic traffic recovery against both feeder and distribution fiber link failures. In addition, it also simplifies the protection design for multiple-service transmission in wavelength division multiplexing passive optical networks.
Ethernet access network based on free-space optic deployment technology
NASA Astrophysics Data System (ADS)
Gebhart, Michael; Leitgeb, Erich; Birnbacher, Ulla; Schrotter, Peter
2004-06-01
The satisfaction of all communication needs of single households and business companies over a single access infrastructure is probably the most challenging topic in communications technology today. But even though the so-called "Last Mile Access Bottleneck" has been well known for more than ten years and many distribution technologies have been tried out, the optimal solution has not yet been found, and commercially viable access networks offering all service classes are still rare today. Conventional services like telephone, radio and TV, as well as new and emerging services like email, web browsing, online gaming, video conferences, business data transfer and external data storage, can all be transmitted over the well-known and cost-effective Ethernet networking protocol standard. Key requirements for the deployment technology, driven by the different services, are high data rates to the single customer, security, moderate deployment costs, good scalability to the number and density of users, quick and flexible deployment without legal impediments, and high availability, referring to the properties of optical and wireless communication. We demonstrate all elements of an Ethernet Access Network based on Free Space Optic distribution technology. Main physical parts are Central Office, Distribution Network and Customer Equipment. Transmission of different services, as well as configuration, service upgrades and remote control of the network, are handled by networking features over one FSO connection. All parts of the network use proven, commercially available, state-of-the-art technology. The setup is flexible and can be adapted to more specific needs if required.
Regulation of distribution network business
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roman, J.; Gomez, T.; Munoz, A.
1999-04-01
The traditional distribution function actually comprises two separate activities: distribution network and retailing. Retailing, which is also termed supply, consists of trading electricity at the wholesale level and selling it to the end users. The distribution network business, or merely distribution, is a natural monopoly and it must be regulated. Increasing attention is presently being paid to the regulation of distribution pricing. Distribution pricing comprises two major tasks: global remuneration of the distribution utility and tariff setting by allocation of the total costs among all the users of the network services. In this paper, the basic concepts for establishing the global remuneration of a distribution utility are presented. A remuneration scheme which recognizes adequate investment and operation costs, promotes loss reduction and incentivizes control of the quality-of-service level is proposed. Efficient investment and operation costs are calculated by using different types of strategic planning and regression analysis models. Application examples that have been used during the distribution regulation process in Spain are also presented.
Network Communication as a Service-Oriented Capability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, William; Johnston, William; Metzger, Joe
2008-01-08
In widely distributed systems generally, and in science-oriented Grids in particular, software, CPU time, storage, etc., are treated as "services" -- they can be allocated and used with service guarantees that allow them to be integrated into systems that perform complex tasks. Network communication is currently not a service -- it is provided, in general, as a "best effort" capability with no guarantees and only statistical predictability. In order for Grids (and most types of systems with widely distributed components) to be successful in performing the sustained, complex tasks of large-scale science -- e.g., the multi-disciplinary simulation of next-generation climate modeling and the management and analysis of the petabytes of data that will come from the next generation of scientific instruments (which is very soon for the LHC at CERN) -- networks must provide communication capability that is service-oriented: that is, it must be configurable, schedulable, predictable, and reliable. In order to accomplish this, the research and education network community is undertaking a strategy that involves changes in network architecture to support multiple classes of service; development and deployment of service-oriented communication services; and monitoring and reporting in a form that is directly useful to the application-oriented system so that it may adapt to communications failures. In this paper we describe ESnet's approach to each of these -- an approach that is part of an international community effort to have intra-distributed system communication be based on a service-oriented capability.
The Johns Hopkins Medical Institutions' Premise Distribution Plan
Barta, Wendy; Buckholtz, Howard; Johnston, Mark; Lenhard, Raymond; Tolchin, Stephen; Vienne, Donald
1987-01-01
A Premise Distribution Plan is being developed to address the growing voice and data communications needs at Johns Hopkins Medical Institutions. More specifically, the use of a rapidly expanding Ethernet computer network and a new Integrated Services Digital Network (ISDN) Digital Centrex system must be planned to provide easy, reliable and cost-effective data and voice communications services. Existing Premise Distribution Systems are compared along with voice and data technologies which would use them.
Performance Evaluation of a SLA Negotiation Control Protocol for Grid Networks
NASA Astrophysics Data System (ADS)
Cergol, Igor; Mirchandani, Vinod; Verchere, Dominique
A framework for an autonomous negotiation control protocol for service delivery is crucial to enable the support of heterogeneous service level agreements (SLAs) that will exist in distributed environments. We first give a gist of our augmented service negotiation protocol to support distinct service elements. The augmentations also encompass related composition of the services and negotiation with several service providers simultaneously. All the incorporated augmentations make it possible to consolidate the service negotiation operations for telecom networks, which are evolving towards Grid networks. Furthermore, our autonomous negotiation protocol is based on a distributed multi-agent framework to create an open market for Grid services. Second, we concisely present key simulation results of our work in progress. The results exhibit the usefulness of our negotiation protocol for realistic scenarios that involve different background traffic loads, message sizes and traffic-flow asymmetry between background and negotiation traffic.
NASA Astrophysics Data System (ADS)
Xiang, Min; Qu, Qinqin; Chen, Cheng; Tian, Li; Zeng, Lingkang
2017-11-01
To improve the reliability of communication service in the smart distribution grid (SDG), an access selection algorithm based on dynamic network status and different service types for heterogeneous wireless networks is proposed. The network performance index values are obtained in real time by a multimode terminal, and the variation trend of the index values is analyzed by the growth matrix. The index weights are calculated by the entropy-weight method and then modified by rough set theory to obtain the final weights. Grey relational analysis is then used to rank the candidate networks, and the optimal communication network is selected. Simulation results show that the proposed algorithm can dynamically implement access selection in the heterogeneous wireless networks of the SDG effectively and reduce the network blocking probability.
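The abstract combines an entropy-weight step, a rough-set adjustment and grey relational ranking; the rough-set modification and the growth-matrix trend analysis are not reproduced below. The following Python sketch illustrates only a generic entropy-weight plus grey-relational-analysis ranking on a hypothetical decision matrix of candidate networks; all attribute values, network names and the distinguishing coefficient are illustrative.

```python
import numpy as np

# Hypothetical decision matrix: rows = candidate networks, columns = attributes
# (bandwidth Mbit/s, delay ms, packet loss %, cost per MB). Only bandwidth is
# a "larger is better" attribute; the others are "smaller is better".
X = np.array([
    [54.0, 40.0, 1.0, 0.8],   # WLAN
    [2.0,  60.0, 2.0, 0.3],   # WiMAX-like
    [10.0, 80.0, 3.0, 0.5],   # cellular
])
benefit = np.array([True, False, False, False])

# Normalize each attribute to [0, 1] so that larger is always better.
lo, hi = X.min(axis=0), X.max(axis=0)
R = np.where(benefit, (X - lo) / (hi - lo + 1e-12), (hi - X) / (hi - lo + 1e-12))

# Entropy-weight method: attributes with more dispersion receive larger weights.
P = (R + 1e-12) / (R + 1e-12).sum(axis=0)
e = -(P * np.log(P)).sum(axis=0) / np.log(len(X))
w = (1 - e) / (1 - e).sum()

# Grey relational analysis against the ideal (all-ones) reference sequence.
delta = np.abs(1.0 - R)
rho = 0.5  # distinguishing coefficient
xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
grade = (xi * w).sum(axis=1)

names = ["WLAN", "WiMAX", "cellular"]
for name, g in zip(names, grade):
    print(f"{name}: grey relational grade = {g:.3f}")
print("selected network:", names[int(np.argmax(grade))])
```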
Delivery of video-on-demand services using local storages within passive optical networks.
Abeywickrama, Sandu; Wong, Elaine
2013-01-28
At present, distributed storage systems have been widely studied to alleviate Internet traffic build-up caused by high-bandwidth, on-demand applications. Distributed storage arrays located locally within the passive optical network were previously proposed to deliver Video-on-Demand services. As an added feature, a popularity-aware caching algorithm was also proposed to dynamically maintain the most popular videos in the storage arrays of such local storages. In this paper, we present a new dynamic bandwidth allocation algorithm to improve Video-on-Demand services over passive optical networks using local storages. The algorithm exploits the use of standard control packets to reduce the time taken for the initial request communication between the customer and the central office, and to maintain the set of popular movies in the local storage. We conduct packet level simulations to perform a comparative analysis of the Quality-of-Service attributes between two passive optical networks, namely the conventional passive optical network and one that is equipped with a local storage. Results from our analysis highlight that strategic placement of a local storage inside the network enables the services to be delivered with improved Quality-of-Service to the customer. We further formulate power consumption models of both architectures to examine the trade-off between enhanced Quality-of-Service performance versus the increased power requirement from implementing a local storage within the network.
Planning Considerations for Secure Network Protocols
1999-03-01
...distribution/management) requirements needed to support network security services are examined. The thesis concludes by identifying tactical user network requirements and suggesting security issues to be considered in concert with network...
Optimal service distribution in WSN service system subject to data security constraints.
Wu, Zhao; Xiong, Naixue; Huang, Yannong; Gu, Qiong
2014-08-04
Services composition technology provides a flexible approach to building Wireless Sensor Network (WSN) Service Applications (WSA) in a service-oriented tasking system for WSN. Maintaining the data security of WSA is one of the most important goals in sensor network research. In this paper, we consider a WSN service-oriented tasking system in which the WSN Services Broker (WSB), as the resource management center, can map the service request from a user into a set of atom-services (AS) and send them to some independent sensor nodes (SN) for parallel execution. The distribution of ASs among these SNs affects the data security as well as the reliability and performance of WSA, because these SNs can be of different and independent specifications. Through the optimal partition of the service into ASs and their distribution among SNs, the WSB can provide the maximum possible service reliability and/or expected performance subject to data security constraints. This paper proposes an algorithm of optimal service partition and distribution based on the universal generating function (UGF) and the genetic algorithm (GA) approach. The experimental analysis is presented to demonstrate the feasibility of the suggested algorithm.
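As a rough illustration of the universal generating function idea referenced above (the GA-based search over partitions is omitted), the sketch below composes u-functions for a few hypothetical atom-services executing in parallel on independent sensor nodes and reads off the service reliability and expected completion time. The failure probabilities and execution times are invented for the example.

```python
from itertools import product

def compose(u1, u2, op):
    """Combine two u-functions {performance: probability} with structure operator `op`."""
    out = {}
    for (g1, p1), (g2, p2) in product(u1.items(), u2.items()):
        g = op(g1, g2)
        out[g] = out.get(g, 0.0) + p1 * p2
    return out

def atom_service(time_s, p_success):
    """Hypothetical atom-service on one sensor node: completes in time_s seconds
    with probability p_success, otherwise fails (performance None)."""
    return {time_s: p_success, None: 1.0 - p_success}

def parallel(g1, g2):
    """Parallel execution: the service fails if any AS fails, else time is the max."""
    if g1 is None or g2 is None:
        return None
    return max(g1, g2)

u = atom_service(2.0, 0.95)
for t, p in [(3.0, 0.90), (1.5, 0.99)]:
    u = compose(u, atom_service(t, p), parallel)

reliability = sum(pr for g, pr in u.items() if g is not None)
expected_time = sum(g * pr for g, pr in u.items() if g is not None) / reliability
print(f"service reliability = {reliability:.4f}")
print(f"expected completion time (given success) = {expected_time:.3f} s")
```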
Optimal Service Distribution in WSN Service System Subject to Data Security Constraints
Wu, Zhao; Xiong, Naixue; Huang, Yannong; Gu, Qiong
2014-01-01
Services composition technology provides a flexible approach to building Wireless Sensor Network (WSN) Service Applications (WSA) in a service-oriented tasking system for WSN. Maintaining the data security of WSA is one of the most important goals in sensor network research. In this paper, we consider a WSN service-oriented tasking system in which the WSN Services Broker (WSB), as the resource management center, can map the service request from a user into a set of atom-services (AS) and send them to some independent sensor nodes (SN) for parallel execution. The distribution of ASs among these SNs affects the data security as well as the reliability and performance of WSA, because these SNs can be of different and independent specifications. Through the optimal partition of the service into ASs and their distribution among SNs, the WSB can provide the maximum possible service reliability and/or expected performance subject to data security constraints. This paper proposes an algorithm of optimal service partition and distribution based on the universal generating function (UGF) and the genetic algorithm (GA) approach. The experimental analysis is presented to demonstrate the feasibility of the suggested algorithm. PMID:25093346
Distributed run of a one-dimensional model in a regional application using SOAP-based web services
NASA Astrophysics Data System (ADS)
Smiatek, Gerhard
This article describes the setup of a distributed computing system in Perl. It facilitates the parallel run of a one-dimensional environmental model on a number of simple network PC hosts. The system uses Simple Object Access Protocol (SOAP) driven web services offering the model run on remote hosts and a multi-thread environment distributing the work and accessing the web services. Its application is demonstrated in a regional run of a process-oriented biogenic emission model for the area of Germany. Within a network consisting of up to seven web services implemented on Linux and MS-Windows hosts, a performance increase of approximately 400% has been reached compared to a model run on the fastest single host.
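The original system was written in Perl with SOAP-driven web services; as a language-neutral illustration of the same work-distribution pattern, the Python sketch below farms out per-grid-cell model runs to a pool of worker threads. The host names and the run_model_remote stand-in are hypothetical; in the real setup the call would be a SOAP request to the remote model service.

```python
from concurrent.futures import ThreadPoolExecutor

HOSTS = ["node1.example.org", "node2.example.org", "node3.example.org"]  # hypothetical

def run_model_remote(host, cell_id):
    """Placeholder for a web-service call, e.g. a SOAP/HTTP request that starts the
    one-dimensional model for one grid cell on `host` and returns its output."""
    # In the real system this would be something like:
    #   client = SoapClient(f"http://{host}/model?wsdl"); return client.run(cell_id)
    return {"cell": cell_id, "host": host, "emission": 0.0}

def distribute(cells, hosts=HOSTS, per_host_threads=2):
    """Hand one model run per grid cell to the remote hosts in round-robin order,
    keeping a few concurrent requests per host, and collect the results."""
    with ThreadPoolExecutor(max_workers=len(hosts) * per_host_threads) as pool:
        futures = [pool.submit(run_model_remote, hosts[i % len(hosts)], c)
                   for i, c in enumerate(cells)]
        return [f.result() for f in futures]

if __name__ == "__main__":
    print(len(distribute(range(100))), "grid cells completed")
```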
Distributed Common Ground System-Navy Increment 2 (DCGS-N Inc 2)
2016-03-01
Enter and be Managed in the Network: 15 minutes (reference: SvcV-7, Consolidated Afloat Networks and Enterprise Services (CANES) CDD, DCGS-N Inc 2)... Red, White, Gray data and tracks to Command and Control System; continuous stream from SCI Common Intelligence Picture to General Service (GENSER)... Acronyms: AIS - Automatic Information System; AOC - Air Operations Command; CANES - Consolidated Afloat Networks and Enterprise Services; CID - Center for...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boero, Riccardo; Backhaus, Scott N.; Edwards, Brian K.
Here, we develop a microeconomic model of a distribution-level electricity market that takes explicit account of residential photovoltaics (PV) adoption. The model allows us to study the consequences of most tariffs on PV adoption and the consequences of increased residential PV adoption under the assumption of economic sustainability for electric utilities. We also validate the model using U.S. data and extend it to consider different pricing schemes for operation and maintenance costs of the distribution network and for ancillary services. Results show that net metering promotes more environmental benefits and social welfare than other tariffs. But, if costs to operate the distribution network increase, net metering will amplify the unequal distribution of surplus among households. In conclusion, maintaining the economic sustainability of electric utilities under net metering may become extremely difficult unless the uneven distribution of surplus is legitimated by environmental benefits.
Boero, Riccardo; Backhaus, Scott N.; Edwards, Brian K.
2016-11-12
Here, we develop a microeconomic model of a distribution-level electricity market that takes explicit account of residential photovoltaics (PV) adoption. The model allows us to study the consequences of most tariffs on PV adoption and the consequences of increased residential PV adoption under the assumption of economic sustainability for electric utilities. We also validate the model using U.S. data and extend it to consider different pricing schemes for operation and maintenance costs of the distribution network and for ancillary services. Results show that net metering promotes more environmental benefits and social welfare than other tariffs. But, if costs to operate the distribution network increase, net metering will amplify the unequal distribution of surplus among households. In conclusion, maintaining the economic sustainability of electric utilities under net metering may become extremely difficult unless the uneven distribution of surplus is legitimated by environmental benefits.
Large-scale P2P network based distributed virtual geographic environment (DVGE)
NASA Astrophysics Data System (ADS)
Tan, Xicheng; Yu, Liang; Bian, Fuling
2007-06-01
The Virtual Geographic Environment (VGE) has attracted considerable attention as a kind of software information system that helps us understand and analyze the real geographic environment, and it has also expanded into an application service system in distributed environments, the distributed virtual geographic environment system (DVGE), with some notable achievements. However, limited by the massive data volumes of VGE, network bandwidth, the large number of requests, economic factors, and so on, DVGE still faces challenges and problems that prevent the current DVGE from providing the public with high-quality service under the current network mode. The rapid development of peer-to-peer network technology offers new approaches to solving the current challenges and problems of DVGE. Peer-to-peer network technology is able to effectively publish and search network resources so as to realize efficient sharing of information. Accordingly, this paper proposes the large-scale peer-to-peer network extension of DVGE and presents an in-depth study of the network framework, routing mechanism, and DVGE data management on a P2P network.
Wireless cellular networks with Pareto-distributed call holding times
NASA Astrophysics Data System (ADS)
Rodriguez-Dagnino, Ramon M.; Takagi, Hideaki
2001-07-01
Nowadays, there is a growing interest in providing Internet access to mobile users. For instance, NTT DoCoMo in Japan deploys a major mobile phone network that offers the Internet service named 'i-mode' to more than 17 million subscribers. Internet traffic measurements show that session durations, or Call Holding Times (CHT), have probability distributions with heavy tails, which tells us that they depart significantly from the traffic statistics of traditional voice services. In this environment, it is particularly important to know the number of handovers during a call so that a network designer can appropriately dimension the virtual circuits for a wireless cell. The handover traffic has a direct impact on the Quality of Service (QoS); e.g., the service disruption due to handover failure may significantly degrade the specified QoS of time-constrained services. In this paper, we first study the random behavior of the number of handovers during a call, where we assume that the CHT are Pareto distributed (a heavy-tailed distribution) and the Cell Residence Times (CRT) are exponentially distributed. Our approach is based on renewal theory arguments. We present closed-form formulae for the probability mass function (pmf) of the number of handovers during a Pareto-distributed CHT, and obtain the probability of call completion as well as handover rates. Most of the formulae are expressed in terms of Whittaker's function. We compare the Pareto case with the cases of k-Erlang and hyperexponential distributions for the CHT.
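The paper derives closed-form expressions (in terms of Whittaker's function) for the handover pmf; those formulae are not reproduced here. As a cross-check of the modeling assumptions, the short Monte Carlo sketch below samples a Pareto (Lomax-form) call holding time and exponential cell residence times, then estimates the handover distribution and the probability that a call completes without any handover; all parameter values are illustrative.

```python
import random
from collections import Counter

def pareto_lomax(alpha, scale, rng):
    """Pareto (Lomax) call holding time with tail P(T > t) = (scale / (scale + t))**alpha."""
    u = 1.0 - rng.random()                    # u in (0, 1]
    return scale * (u ** (-1.0 / alpha) - 1.0)

def simulate_handovers(alpha=2.5, mean_cht=180.0, mean_crt=120.0, calls=100_000, seed=7):
    rng = random.Random(seed)
    scale = mean_cht * (alpha - 1.0)          # Lomax mean = scale / (alpha - 1)
    mu = 1.0 / mean_crt                       # exponential cell-residence rate
    counts = Counter()
    for _ in range(calls):
        t = pareto_lomax(alpha, scale, rng)
        elapsed, handovers = rng.expovariate(mu), 0
        while elapsed < t:                    # each cell-boundary crossing is a handover
            handovers += 1
            elapsed += rng.expovariate(mu)
        counts[handovers] += 1
    return {k: v / calls for k, v in sorted(counts.items())}

pmf = simulate_handovers()
print("P(call completes with no handover) =", round(pmf.get(0, 0.0), 4))
for k in range(1, 6):
    print(f"P({k} handovers) =", round(pmf.get(k, 0.0), 4))
```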
Distributed Computer Networks in Support of Complex Group Practices
Wess, Bernard P.
1978-01-01
The economics of medical computer networks are presented in context with the patient care and administrative goals of medical networks. Design alternatives and network topologies are discussed with an emphasis on medical network design requirements in distributed data base design, telecommunications, satellite systems, and software engineering. The success of the medical computer networking technology is predicated on the ability of medical and data processing professionals to design comprehensive, efficient, and virtually impenetrable security systems to protect data bases, network access and services, and patient confidentiality.
A global distributed storage architecture
NASA Technical Reports Server (NTRS)
Lionikis, Nemo M.; Shields, Michael F.
1996-01-01
NSA architects and planners have come to realize that to gain the maximum benefit from, and keep pace with, emerging technologies, we must move to a radically different computing architecture. The compute complex of the future will be a distributed heterogeneous environment, where, to a much greater extent than today, network-based services are invoked to obtain resources. Among the rewards of implementing the services-based view are that it insulates the user from much of the complexity of our multi-platform, networked, computer and storage environment and hides its diverse underlying implementation details. In this paper, we will describe one of the fundamental services being built in our envisioned infrastructure; a global, distributed archive with near-real-time access characteristics. Our approach for adapting mass storage services to this infrastructure will become clear as the service is discussed.
Services supporting collaborative alignment of engineering networks
NASA Astrophysics Data System (ADS)
Jansson, Kim; Uoti, Mikko; Karvonen, Iris
2015-08-01
Large-scale facilities such as power plants, process factories, ships and communication infrastructures are often engineered and delivered through geographically distributed operations. The competencies required are usually distributed across several contributing organisations. In these complicated projects, it is of key importance that all partners work coherently towards a common goal. VTT and a number of industrial organisations in the marine sector have participated in a national collaborative research programme addressing these needs. The main output of this programme was development of the Innovation and Engineering Maturity Model for Marine-Industry Networks. The recently completed European Union Framework Programme 7 project COIN developed innovative solutions and software services for enterprise collaboration and enterprise interoperability. One area of focus in that work was services for collaborative project management. This article first addresses a number of central underlying research themes and previous research results that have influenced the development work mentioned above. This article presents two approaches for the development of services that support distributed engineering work. Experience from use of the services is analysed, and potential for development is identified. This article concludes with a proposal for consolidation of the two above-mentioned methodologies. This article outlines the characteristics and requirements of future services supporting collaborative alignment of engineering networks.
Impact of Network Activity Levels on the Performance of Passive Network Service Dependency Discovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carroll, Thomas E.; Chikkagoudar, Satish; Arthur-Durett, Kristine M.
Network services often do not operate alone but instead depend on other services distributed throughout a network to function correctly. If a service fails, is disrupted, or is degraded, it is likely to impair other services. The web of dependencies can be surprisingly complex---especially within a large enterprise network---and evolve with time. Acquiring, maintaining, and understanding dependency knowledge is critical for many network management and cyber defense activities. While automation can improve situation awareness for network operators and cyber practitioners, poor detection accuracy reduces their confidence and can complicate their roles. In this paper we rigorously study the effects of network activity levels on the detection accuracy of passive network-based service dependency discovery methods. The accuracy of all but one method was inversely proportional to network activity levels. Our proposed cross-correlation method was particularly robust to the influence of network activity. The proposed experimental treatment will further advance a more scientific evaluation of methods and provide the ability to determine their operational boundaries.
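The exact cross-correlation method evaluated in the paper is not reproduced here; the sketch below only illustrates the general idea of scoring a candidate dependency by the peak lagged cross-correlation between two services' per-interval request counts, using synthetic traffic. The lags, series names and synthetic data are illustrative.

```python
import numpy as np

def dependency_score(series_a, series_b, max_lag=10):
    """Peak normalized cross-correlation over small positive lags, under the idea
    that a request arriving at one service triggers, shortly afterwards, traffic
    to the services it depends on."""
    a = (series_a - series_a.mean()) / (series_a.std() + 1e-12)
    b = (series_b - series_b.mean()) / (series_b.std() + 1e-12)
    n = len(a)
    scores = [np.dot(a[:n - lag], b[lag:]) / (n - lag) for lag in range(1, max_lag + 1)]
    return max(scores)

rng = np.random.default_rng(0)
web = rng.poisson(5.0, 2000).astype(float)             # front-end request counts per interval
db = np.roll(web, 2) + rng.poisson(1.0, 2000)           # back-end activity lags by ~2 intervals
noise = rng.poisson(5.0, 2000).astype(float)            # unrelated background service

print("web -> db   :", round(dependency_score(web, db), 3))     # high score: dependency
print("web -> noise:", round(dependency_score(web, noise), 3))  # near zero: no dependency
```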
NASA Astrophysics Data System (ADS)
Du, Jian; Sheng, Wanxing; Lin, Tao; Lv, Guangxian
2018-05-01
Nowadays, the smart distribution network has made tremendous progress, and business visualization has become even more significant and indispensable. Based on a summary of traditional visualization technologies and the demands of the smart distribution network, a panoramic visualization application is proposed in this paper. The overall architecture, integrated architecture and service architecture of the panoramic visualization application are first presented. Then, the architecture design and main functions of the panoramic visualization system are elaborated in depth. In addition, the key technologies related to the application are discussed briefly. At last, two typical visualization scenarios in the smart distribution network, risk warning and fault self-healing, demonstrate that the panoramic visualization application is valuable for the operation and maintenance of the distribution network.
A component-based, distributed object services architecture for a clinical workstation.
Chueh, H C; Raila, W F; Pappas, J J; Ford, M; Zatsman, P; Tu, J; Barnett, G O
1996-01-01
Attention to an architectural framework in the development of clinical applications can promote reusability of both legacy systems as well as newly designed software. We describe one approach to an architecture for a clinical workstation application which is based on a critical middle tier of distributed object-oriented services. This tier of network-based services provides flexibility in the creation of both the user interface and the database tiers. We developed a clinical workstation for ambulatory care using this architecture, defining a number of core services including those for vocabulary, patient index, documents, charting, security, and encounter management. These services can be implemented through proprietary or more standard distributed object interfaces such as CORBA and OLE. Services are accessed over the network by a collection of user interface components which can be mixed and matched to form a variety of interface styles. These services have also been reused with several applications based on World Wide Web browser interfaces.
A component-based, distributed object services architecture for a clinical workstation.
Chueh, H. C.; Raila, W. F.; Pappas, J. J.; Ford, M.; Zatsman, P.; Tu, J.; Barnett, G. O.
1996-01-01
Attention to an architectural framework in the development of clinical applications can promote reusability of both legacy systems as well as newly designed software. We describe one approach to an architecture for a clinical workstation application which is based on a critical middle tier of distributed object-oriented services. This tier of network-based services provides flexibility in the creation of both the user interface and the database tiers. We developed a clinical workstation for ambulatory care using this architecture, defining a number of core services including those for vocabulary, patient index, documents, charting, security, and encounter management. These services can be implemented through proprietary or more standard distributed object interfaces such as CORBA and OLE. Services are accessed over the network by a collection of user interface components which can be mixed and matched to form a variety of interface styles. These services have also been reused with several applications based on World Wide Web browser interfaces. PMID:8947744
ERIC Educational Resources Information Center
Maule, R. William
1993-01-01
Discussion of the role of new computer communications technologies in education focuses on modern networking systems, including Fiber Distributed Data Interface and Integrated Services Digital Network; strategies for implementing network-based communication; and public online information resources for the classroom, including Bitnet, Internet,…
Fuzzy-Neural Controller in Service Requests Distribution Broker for SOA-Based Systems
NASA Astrophysics Data System (ADS)
Fras, Mariusz; Zatwarnicka, Anna; Zatwarnicki, Krzysztof
The evolution of software architectures led to the rising importance of the Service Oriented Architecture (SOA) concept. This architecture paradigm supports building flexible distributed service systems. In the paper, the architecture of a service request distribution broker designed for use in SOA-based systems is proposed. The broker is built around the idea of fuzzy control. The functional and non-functional request requirements, in conjunction with monitoring of execution and communication links, are used to distribute requests. Decisions are made with the use of a fuzzy-neural network.
SDN architecture for optical packet and circuit integrated networks
NASA Astrophysics Data System (ADS)
Furukawa, Hideaki; Miyazawa, Takaya
2016-02-01
We have been developing an optical packet and circuit integrated (OPCI) network, which realizes dynamic optical paths, high-density packet multiplexing, and flexible wavelength resource allocation. In the OPCI networks, a best-effort service and a QoS-guaranteed service are provided by employing optical packet switching (OPS) and optical circuit switching (OCS), respectively, and users can select these services. Different wavelength resources are assigned to OPS and OCS links, and the amount of their wavelength resources is dynamically changed in accordance with the service usage conditions. To apply OPCI networks to wide-area (core/metro) networks, we have developed an OPCI node with a distributed control mechanism. Moreover, our OPCI node works with a centralized control mechanism as well as a distributed one. It is therefore possible to realize SDN-based OPCI networks, where resource requests and a centralized configuration are carried out. In this paper, we show our SDN architecture for an OPS system that configures mapping tables between IP addresses and optical packet addresses, as well as switching tables, according to the requests from multiple users via a web interface, while the OpenFlow-based centralized control protocol is coming into widespread use, especially for single-administrative, small-area (LAN/data-center) networks. Here, we also show an interworking mechanism between OpenFlow-based networks (OFNs) and the OPCI network for constructing a wide-area network, and a control method of wavelength resource selection to automatically transfer diversified flows from OFNs to the OPCI network.
On service differentiation in mobile Ad Hoc networks.
Zhang, Shun-liang; Ye, Cheng-qing
2004-09-01
A network model is proposed to support service differentiation for mobile ad hoc networks by combining a fully distributed admission control approach and the DIFS-based differentiation mechanism of IEEE 802.11. It can provide different kinds of QoS (Quality of Service) for various applications. Admission controllers determine a committed bandwidth based on the reserved bandwidth of flows and the resource utilization of the network. Packets are marked on entering the network by markers according to the committed rate. Based on the mark in the packet header, intermediate nodes handle received packets in different manners to provide applications with the QoS corresponding to the pre-negotiated profile. Extensive simulation experiments showed that the proposed mechanism can provide QoS guarantees to assured-service traffic and increase the channel utilization of the network.
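The abstract does not specify the marker algorithm beyond "marked according to the committed rate"; a conventional way to realize such a marker is a token bucket, sketched below in Python with illustrative rate and burst values. Downstream handling of IN/OUT marks (e.g., dropping OUT-of-profile packets first under congestion) is outside the sketch.

```python
class CommittedRateMarker:
    """Minimal token-bucket marker: packets within the committed rate are marked
    IN-profile, the rest OUT-of-profile.  The rate and burst size are illustrative."""

    def __init__(self, committed_rate_bps, burst_bytes):
        self.rate = committed_rate_bps / 8.0    # token refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = 0.0

    def mark(self, arrival_time, packet_bytes):
        # refill tokens for the time elapsed since the last packet, capped at the burst size
        self.tokens = min(self.capacity, self.tokens + (arrival_time - self.last) * self.rate)
        self.last = arrival_time
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return "IN"      # within the pre-negotiated profile
        return "OUT"         # exceeds the committed rate

marker = CommittedRateMarker(committed_rate_bps=1_000_000, burst_bytes=3_000)
t = 0.0
for i in range(10):
    t += 0.005               # a 1000-byte packet every 5 ms = 1.6 Mbit/s offered load
    print(i, marker.mark(t, 1000))
```

With a 1 Mbit/s committed rate and 1.6 Mbit/s offered load, the burst is absorbed at first and later packets are marked OUT once the bucket empties.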
Advanced Optical Burst Switched Network Concepts
NASA Astrophysics Data System (ADS)
Nejabati, Reza; Aracil, Javier; Castoldi, Piero; de Leenheer, Marc; Simeonidou, Dimitra; Valcarenghi, Luca; Zervas, Georgios; Wu, Jian
In recent years, as the bandwidth and the speed of networks have increased significantly, a new generation of network-based applications using the concept of distributed computing and collaborative services is emerging (e.g., Grid computing applications). The use of the available fiber and DWDM infrastructure for these applications is a logical choice offering huge amounts of cheap bandwidth and ensuring global reach of computing resources [230]. Currently, there is a great deal of interest in deploying optical circuit (wavelength) switched network infrastructure for distributed computing applications that require long-lived wavelength paths and address the specific needs of a small number of well-known users. Typical users are particle physicists who, due to their international collaborations and experiments, generate enormous amounts of data (Petabytes per year). These users require a network infrastructure that can support processing and analysis of large datasets through globally distributed computing resources [230]. However, providing wavelength granularity bandwidth services is not an efficient and scalable solution for applications and services that address a wider base of user communities with different traffic profiles and connectivity requirements. Examples of such applications may be: scientific collaboration on a smaller scale (e.g., bioinformatics, environmental research), distributed virtual laboratories (e.g., remote instrumentation), e-health, national security and defense, personalized learning environments and digital libraries, evolving broadband user services (e.g., high-resolution home video editing, real-time rendering, high-definition interactive TV). As a specific example, in e-health services and in particular mammography applications, due to the size and quantity of images produced by remote mammography, stringent network requirements are necessary. Initial calculations have shown that for 100 patients to be screened remotely, the network would have to securely transport 1.2 GB of data every 30 s [230]. According to the above explanation, it is clear that these types of applications need a new network infrastructure and transport technology that makes large amounts of bandwidth at subwavelength granularity, storage, computation, and visualization resources potentially available to a wide user base for specified time durations. As these types of collaborative and network-based applications evolve to address a wide range and large number of users, it is infeasible to build dedicated networks for each application type or category. Consequently, there should be an adaptive network infrastructure able to support all application types, each with their own access, network, and resource usage patterns. This infrastructure should offer flexible and intelligent network elements and control mechanisms able to deploy new applications quickly and efficiently.
ERIC Educational Resources Information Center
Severiens, Thomas; Hohlfeld, Michael; Zimmermann, Kerstin; Hilf, Eberhard R.; von Ossietzky, Carl; Weibel, Stuart L.; Koch, Traugott; Hughes, Carol Ann; Bearman, David
2000-01-01
Includes four articles that discuss a variety of topics, including a distributed network of physics institutions documents called PhysDocs, which harvests information from the local Web servers of professional physics institutions; the Dublin Core metadata initiative; information services for higher education in a competitive environment; and…
First Lessons From The Biarritz Trial Network
NASA Astrophysics Data System (ADS)
Touyarot, P.; Marc, B.; de Panafieu, A.
1986-07-01
Opened for commercial operation in 1984, the trial optical fiber network at Biarritz in south-west France gives 1,500 subscribers access to a whole range of broadband services - videophony, audiovisual databases, TV and stereo sound program distribution, and an on-line TV program library - in addition to conventional narrow-band services like telephony and videotex. The Biarritz network is an outstanding technology and engineering testbed. It is also a sociological testing ground for new services, unique in the world, with results of particular relevance to the interactive cable TV and visual communications networks of the future.
Hammoudeh, Mohammad; Newman, Robert; Dennett, Christopher; Mount, Sarah; Aldabbas, Omar
2015-01-01
This paper presents a distributed information extraction and visualisation service, called the mapping service, for maximising information return from large-scale wireless sensor networks. Such a service would greatly simplify the production of higher-level, information-rich representations suitable for informing other network services and the delivery of field information visualisations. The mapping service utilises a blend of inductive and deductive models to map sense data accurately using externally available knowledge. It utilises the special characteristics of the application domain to render visualisations in a map format that are a precise reflection of the concrete reality. This service is suitable for visualising an arbitrary number of sense modalities. It is capable of visualising multiple independent types of sense data to overcome the limitations of generating visualisations from a single type of sense modality. Furthermore, the mapping service responds dynamically to changes in the environmental conditions that may affect the visualisation performance, by continuously updating the application domain model in a distributed manner. Finally, a distributed self-adaptation function is proposed with the goal of saving more power and generating more accurate data visualisation. We conduct comprehensive experimentation to evaluate the performance of our mapping service and show that it achieves low communication overhead, produces maps of high fidelity, and further minimises the mapping predictive error dynamically through integrating the application domain model in the mapping service. PMID:26378539
Ni, Jianhua; Qian, Tianlu; Xi, Changbai; Rui, Yikang; Wang, Jiechen
2016-08-18
The spatial distribution of urban service facilities is largely constrained by the road network. In this study, network point pattern analysis and correlation analysis were used to analyze the relationship between the road network and healthcare facility distribution. The weighted network kernel density estimation method proposed in this study identifies significant differences between the areas outside and inside the Ming city wall. The results of network K-function analysis show that private hospitals are more evenly distributed than public hospitals, and pharmacy stores tend to cluster around hospitals along the road network. After performing correlation analysis between the different categories of hospitals and street centrality, we find that the distribution of these hospitals correlates highly with the street centralities, and that the correlations are higher for private and small hospitals than for public and large hospitals. The comprehensive analysis results could help examine the reasonableness of the existing urban healthcare facility distribution and optimize the location of new healthcare facilities.
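The paper's weighted network kernel density estimation is defined over the road network rather than Euclidean space; the minimal networkx-based sketch below conveys the idea by accumulating kernel weight over shortest-path distances on a toy graph. The graph, facility weights, bandwidth and the triangular kernel are illustrative choices, not the published specification.

```python
import networkx as nx

def network_kde(graph, facilities, bandwidth=600.0):
    """Estimate a facility density value at every node of a road graph by summing
    kernel contributions over shortest-path (network) distances, weighted by a
    facility attribute such as size.  The published method evaluates densities
    along small segments (lixels) of the road edges themselves."""
    density = {n: 0.0 for n in graph.nodes}
    for node, weight in facilities:
        dist = nx.single_source_dijkstra_path_length(graph, node, cutoff=bandwidth,
                                                      weight="length")
        for n, d in dist.items():
            density[n] += weight * (1.0 - d / bandwidth)   # triangular kernel
    return density

# Toy road network: edge attribute "length" in metres.
G = nx.Graph()
edges = [("a", "b", 200), ("b", "c", 300), ("c", "d", 250), ("b", "e", 400), ("e", "f", 350)]
G.add_weighted_edges_from(edges, weight="length")

# Hypothetical facilities: (nearest network node, weight such as bed count).
hospitals = [("b", 3.0), ("e", 1.0)]
for node, value in sorted(network_kde(G, hospitals).items(), key=lambda kv: -kv[1]):
    print(node, round(value, 2))
```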
A prototype Infrastructure for Cloud-based distributed services in High Availability over WAN
NASA Astrophysics Data System (ADS)
Bulfon, C.; Carlino, G.; De Salvo, A.; Doria, A.; Graziosi, C.; Pardi, S.; Sanchez, A.; Carboni, M.; Bolletta, P.; Puccio, L.; Capone, V.; Merola, L.
2015-12-01
In this work we present the architectural and performance studies concerning a prototype of a distributed Tier2 infrastructure for HEP, instantiated between the two Italian sites of INFN-Roma1 and INFN-Napoli. The network infrastructure is based on a Layer-2 geographical link, provided by the Italian NREN (GARR), directly connecting the two remote LANs of the named sites. By exploiting the possibilities offered by the new distributed file systems, a shared storage area with synchronous copy has been set up. The computing infrastructure, based on an OpenStack facility, uses a set of distributed hypervisors installed at both sites. The main parameter to be taken into account when managing two remote sites within a single framework is the effect of the latency, due to the distance and the end-to-end service overhead. In order to understand the capabilities and limits of our setup, the impact of latency has been investigated by means of a set of stress tests, including data I/O throughput, metadata access performance evaluation and network occupancy, during the life cycle of a Virtual Machine. A set of resilience tests has also been performed, in order to verify the stability of the system in the event of hardware or software faults. The results of this work show that the reliability and robustness of the chosen architecture are effective enough to build a production system and to provide common services. This prototype can also be extended to multiple sites with small changes of the network topology, thus creating a National Network of Cloud-based distributed services, in HA over WAN.
Meeting the Challenge of Distributed Real-Time & Embedded (DRE) Systems
2012-05-10
[Briefing figure residue; recoverable content: a layered DRE system stack (Hardware & Networks; Operating Systems & Protocols; Middleware; Middleware Services; DRE Applications) built on COTS and standards-based middleware, language, OS, network, and hardware platforms, including Real-time CORBA (TAO) middleware and ADAPTIVE Communication...; software product lines (SPLs) spanning F-15, AV-8B, F/A-18, and UCAV product variants over common hardware (CPU, memory, I/O) and OS.]
Yi, Meng; Chen, Qingkui; Xiong, Neal N
2016-11-03
This paper considers the distributed access and control problem of the data access center of massive wireless sensor networks for the Internet of Things, which is an extension of wireless sensor networks and an element of its topology structure. In the context of the arrival of massive service access requests at a virtual data center, this paper designs a massive sensing data access and control mechanism to improve the access efficiency of service requests and make full use of the available resources at the data access center for the Internet of Things. Firstly, this paper proposes a synergistically distributed buffer access model, which separates the information of resource and location. Secondly, the paper divides the service access requests into multiple virtual groups based on their characteristics and locations using an optimized self-organizing feature map neural network. Furthermore, this paper designs an optimal scheduling algorithm for group migration based on a combination of the artificial bee colony algorithm and chaos searching theory. Finally, the experimental results demonstrate that this mechanism outperforms the existing schemes in enhancing the accessibility of service requests, reducing network delay, and achieving higher load-balancing capacity and a higher resource utilization rate.
NASA Astrophysics Data System (ADS)
Han, Keesook J.; Hodge, Matthew; Ross, Virginia W.
2011-06-01
For monitoring network traffic, there is an enormous cost in collecting, storing, and analyzing network traffic datasets. Data-mining-based network traffic analysis is attracting growing interest in the cyber security community, but is computationally expensive for finding correlations between attributes in massive network traffic datasets. To lower the cost and reduce computational complexity, it is desirable to perform feasible statistical processing on effective reduced datasets instead of on the original full datasets. Because of the dynamic behavior of network traffic, traffic traces exhibit mixtures of heavy-tailed statistical distributions, or overdispersion. Heavy-tailed network traffic characterization and visualization are important and essential tasks for measuring network performance for Quality of Service. However, heavy-tailed distributions are limited in their ability to characterize real-time network traffic due to the difficulty of parameter estimation. The Entropy-Based Heavy Tailed Distribution Transformation (EHTDT) was developed to convert the heavy-tailed distribution into a transformed distribution in order to find a linear approximation. The EHTDT linearization has the advantage of being amenable to characterizing and aggregating the overdispersion of network traffic in real time. Results of applying the EHTDT for innovative visual analytics to real network traffic data are presented.
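The EHTDT itself is not specified in the abstract, so it is not reproduced here. The sketch below only illustrates the kind of linearization the abstract alludes to: for a Pareto-like tail the empirical CCDF is close to a straight line on log-log axes, so a linear fit to the upper tail recovers the tail index. The synthetic data and fitting range are illustrative.

```python
import numpy as np

# Synthetic "flow size" samples from a Pareto tail (alpha = 1.8, x_m = 1.0),
# drawn by inverse transform: X = x_m * (1 - U) ** (-1 / alpha).
rng = np.random.default_rng(42)
alpha_true = 1.8
samples = (1.0 - rng.random(100_000)) ** (-1.0 / alpha_true)

# Empirical CCDF and a straight-line fit in log-log space: for a Pareto tail,
# log P(X > x) ~ const - alpha * log x, so the slope estimates the tail index.
x = np.sort(samples)
ccdf = 1.0 - np.arange(1, len(x) + 1) / len(x)
mask = (x > np.quantile(x, 0.50)) & (ccdf > 0)     # fit on the upper tail only
slope, intercept = np.polyfit(np.log(x[mask]), np.log(ccdf[mask]), 1)
print(f"estimated tail index: {-slope:.2f} (true value {alpha_true})")
```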
Overview of hybrid fiber-coaxial network deployment in the deregulated UK environment
NASA Astrophysics Data System (ADS)
Cox, Alan L.
1995-11-01
Cable operators in the U.K. enjoy unprecedented license to construct networks and operate cable TV and telecommunications services within their franchise areas. In general, operators have built hybrid-fiber-coax (HFC) networks for cable TV in parallel with fiber-copper-pair networks for telephony. The commonly used network architectures are reviewed, together with their present and future capacities. Despite this dual-technology approach, there is considerable interest in the integration of telephony services onto the HFC network and the development of new interactive services for which HFC may be more suitable than copper pairs. Certain technological and commercial developments may have considerable significance for HFC networks and their operators. These include the digitalization of TV distribution and the rising demand for high-rate digital access lines. Possible scenarios are discussed.
Cascaded clocks measurement and simulation findings
NASA Technical Reports Server (NTRS)
Chislow, Don; Zampetti, George
1994-01-01
This paper will examine aspects related to network synchronization distribution and the cascading of timing elements. Methods of timing distribution have become a much debated topic in standards forums and among network service providers (both domestically and internationally). Essentially these concerns focus on the need to migrate their existing network synchronization plans (and capabilities) to those required for the next generation of transport technologies (namely, the Synchronous Digital Hierarchy (SDH), Synchronous Optical Network (SONET), and Asynchronous Transfer Mode (ATM)). The particular choices for synchronization distribution network architectures are now being evaluated and are demonstrating that they can indeed have a profound effect on the overall service performance levels that will be delivered to the customer. The salient aspects of these concerns reduce to the following: (1) identifying that the devil is in the details of the timing element specifications and the distribution of timing information (i.e., small design choices can have a large performance impact); (2) developing a standardized method of performance verification that will yield unambiguous results; and (3) presentation of those results. Specifically, this will be done for two general cases: an ideal input, and a noisy input to a cascaded chain of slave clocks.
NaradaBrokering as Middleware Fabric for Grid-based Remote Visualization Services
NASA Astrophysics Data System (ADS)
Pallickara, S.; Erlebacher, G.; Yuen, D.; Fox, G.; Pierce, M.
2003-12-01
Remote Visualization Services (RVS) have tended to rely on approaches based on the client-server paradigm. The simplicity of these approaches is offset by problems such as single points of failure, scaling and availability. Furthermore, as the complexity, scale and scope of the services hosted on this paradigm increase, this approach becomes increasingly unsuitable. We propose a scheme built on top of a distributed brokering infrastructure, NaradaBrokering, which comprises a distributed network of broker nodes. These broker nodes are organized in a cluster-based architecture that can scale to very large sizes. The broker network is resilient to broker failures and efficiently routes interactions to entities that expressed an interest in them. In our approach to RVS, services advertise their capabilities to the broker network, which manages these service advertisements. Among the services considered within our system are those that perform graphic transformations, mediate access to specialized datasets and, finally, manage the execution of specified tasks. There could be multiple instances of each of these services, and the system ensures that load for a given service is distributed efficiently over these service instances. Among the features provided in our approach are efficient discovery of services and asynchronous interactions between services and service requestors (which could themselves be other services). Entities need not be online during the execution of the service request. The system also ensures that entities can be notified about task executions, partial results and failures that might have taken place during service execution. The system also facilitates specification of task overrides, distribution of execution results to alternate devices (which were not used to originally request service execution) and to multiple users. These RVS services could of course be either OGSA (Open Grid Services Architecture) based Grid services or traditional Web services. The brokering infrastructure will manage the service advertisements and the invocation of these services. This scheme ensures that the fundamental Grid computing concept is met: providing the computing capabilities of those that are willing to provide them to those that seek them. [1] The NaradaBrokering Project: http://www.naradabrokering.org
Survey: National Environmental Satellite Service
NASA Technical Reports Server (NTRS)
1977-01-01
The National Environmental Satellite Service (NESS) receives data at periodic intervals from satellites of the Synchronous Meteorological Satellite/Geostationary Operational Environmental Satellite series and from the Improved TIROS (Television Infrared Observational Satellite) Operational Satellite. Within the conterminous United States, direct readout and processed products are distributed to users over facsimile networks from a central processing and data distribution facility. In addition, the NESS Satellite Field Stations analyze, interpret, and distribute processed geostationary satellite products to regional weather service activities.
Resource Management In Peer-To-Peer Networks: A Nadse Approach
NASA Astrophysics Data System (ADS)
Patel, R. B.; Garg, Vishal
2011-12-01
This article presents a common solution to Peer-to-Peer (P2P) network problems and distributed computing with the help of the "Neighbor Assisted Distributed and Scalable Environment" (NADSE). NADSE supports both device and code mobility. In this article we focus mainly on the NADSE-based resource management technique and on how information dissemination and searching are sped up when the NADSE service provider node is used in a large network. Results show that the performance of the NADSE network is better than that of Gnutella and Freenet.
Research on the exponential growth effect on network topology: Theoretical and empirical analysis
NASA Astrophysics Data System (ADS)
Li, Shouwei; You, Zongjun
An integrated circuit (IC) industry network has been built in the Yangtze River Delta with the constant expansion of the IC industry. The IC industry network grows exponentially with the establishment of new companies and of their contacts with existing firms. Based on preferential attachment and exponential growth, the paper presents analytical results in which the vertex degree of the scale-free network follows the power-law distribution p(k) ∼ k^(-γ), with γ = 2β + 1 and parameter β satisfying 0.5 ≤ β ≤ 1. At the same time, we find that preferential attachment takes place in a dynamic local world and that the size of this dynamic local world is in direct proportion to the size of the whole network. The paper also gives analytical results for exponential growth without preferential attachment on random networks. Computer simulations of the model illustrate these analytical results. Based on surveys of the enterprises, the paper first presents the distribution of the IC industry and the composition of its industrial chain and service chain. Then, the correlation networks of the industrial chain and the service chain are presented and analyzed, together with a correlation analysis of the whole IC industry. Based on the theory of complex networks, an analysis and comparison of the industrial chain network and the service chain network in the Yangtze River Delta are provided in the paper.
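A toy simulation of the growth mechanism described above can be sketched as follows; the growth rate, number of steps, and attachment count m are illustrative assumptions, and the sketch omits the paper's dynamic local-world refinement. It grows a network in which the number of newly arriving nodes increases exponentially per step and each arrival attaches with probability proportional to degree, then prints the degree histogram, whose tail should thin out in a roughly power-law fashion.

```python
# Illustrative sketch: exponentially growing arrivals plus degree-proportional
# (preferential) attachment. Parameters are assumed values, not the paper's.
import collections
import random

random.seed(1)

def grow(steps=12, growth=1.5, m=2):
    degree = collections.Counter({0: m, 1: m})   # tiny seed network
    targets = [0, 1] * m                         # degree-proportional sampling pool
    next_id = 2
    for t in range(steps):
        arrivals = max(1, int(growth ** t))      # exponential growth in arrivals
        for _ in range(arrivals):
            new = next_id
            next_id += 1
            chosen = set()
            while len(chosen) < min(m, len(degree)):
                chosen.add(random.choice(targets))   # preferential attachment
            for v in chosen:
                degree[v] += 1
                degree[new] += 1
                targets.extend([v, new])
    return degree

deg = grow()
hist = collections.Counter(deg.values())
for k in sorted(hist):
    print(f"degree {k:3d}: {hist[k]} nodes")
```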
Distributed network scheduling
NASA Technical Reports Server (NTRS)
Clement, Bradley J.; Schaffer, Steven R.
2004-01-01
Distributed Network Scheduling is the scheduling of future communications of a network by nodes in the network. This report details software for doing this onboard spacecraft in a remote network. While prior work on distributed scheduling has been applied to remote spacecraft networks, the software reported here focuses on modeling communication activities in greater detail and on including quality of service constraints. Our main results are based on a Mars network of spacecraft and include identifying a maximum opportunity to improve the traverse exploration rate by a factor of three; a simulation showing a reduction in one-way delivery times from a rover to Earth from as much as 5 hours to 1.5 hours; simulated response to unexpected events averaging under an hour onboard; and ground schedule generation times ranging from seconds to 50 minutes for 15 to 100 communication goals.
Coverage maximization under resource constraints using a nonuniform proliferating random walk.
Saha, Sudipta; Ganguly, Niloy
2013-02-01
Information management services on networks, such as search and dissemination, play a key role in any large-scale distributed system. One of the most desirable features of these services is the maximization of the coverage, i.e., the number of distinctly visited nodes under constraints of network resources as well as time. However, redundant visits of nodes by different message packets (modeled, e.g., as walkers) initiated by the underlying algorithms for these services cause wastage of network resources. In this work, using results from analytical studies done in the past on a K-random-walk-based algorithm, we identify that redundancy quickly increases with an increase in the density of the walkers. Based on this postulate, we design a very simple distributed algorithm which dynamically estimates the density of the walkers and thereby carefully proliferates walkers in sparse regions. We use extensive computer simulations to test our algorithm in various kinds of network topologies whereby we find it to be performing particularly well in networks that are highly clustered as well as sparse.
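A simplified sketch of a density-aware proliferating walk is shown below. It is an illustrative variant rather than the authors' exact algorithm: each walker estimates the local walker density from the fraction of its node's neighbours that currently host walkers and spawns an extra walker only when that density falls below a threshold, so proliferation happens mainly in sparse regions. The topology, threshold, and message budget are assumptions.

```python
# Density-aware proliferating random walk (illustrative variant, not the paper's
# exact algorithm). Coverage = number of distinct nodes visited per message.
import random
import networkx as nx

random.seed(7)
G = nx.watts_strogatz_graph(n=2000, k=6, p=0.05)   # clustered, sparse test topology

def proliferating_walk(G, start, budget=4000, density_threshold=0.2):
    walkers = [start]
    visited = {start}
    messages = 0
    while walkers and messages < budget:
        next_walkers = []
        occupied = set(walkers)
        for node in walkers:
            nbrs = list(G.neighbors(node))
            if not nbrs:
                continue
            step = random.choice(nbrs)
            visited.add(step)
            messages += 1
            next_walkers.append(step)
            # local density estimate: fraction of neighbours currently holding a walker
            density = sum(1 for v in nbrs if v in occupied) / len(nbrs)
            if density < density_threshold and messages < budget:
                extra = random.choice(nbrs)          # proliferate in sparse regions
                visited.add(extra)
                messages += 1
                next_walkers.append(extra)
        walkers = next_walkers
    return len(visited), messages

covered, used = proliferating_walk(G, start=0)
print(f"covered {covered} distinct nodes using {used} messages")
```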
DNET: A communications facility for distributed heterogeneous computing
NASA Technical Reports Server (NTRS)
Tole, John; Nagappan, S.; Clayton, J.; Ruotolo, P.; Williamson, C.; Solow, H.
1989-01-01
This document describes DNET, a heterogeneous data communications networking facility. DNET allows programs operating on hosts on dissimilar networks to communicate with one another without concern for computer hardware, network protocol, or operating system differences. The overall DNET network is defined as the collection of host machines/networks on which the DNET software is operating. Each underlying network is considered a DNET 'domain'. Data communications service is provided between any two processes on any two hosts on any of the networks (domains) that may be reached via DNET. DNET provides protocol-transparent, reliable, streaming data transmission between hosts (restricted initially to DECnet and TCP/IP networks). DNET also provides a variable-length datagram service with optional return receipts.
Newland, Jamee; Newman, Christy; Treloar, Carla
2016-08-01
In Australia, sterile needles and syringes are distributed to people who inject drugs (PWID) through formal services for the purposes of preventing blood borne viruses (BBV). Peer distribution involves people acquiring needles from formal services and redistributing them to others. This paper investigates the dynamics of the distribution of sterile injecting equipment among networks of people who inject drugs in four sites in New South Wales (NSW), Australia. Qualitative data exploring the practice of peer distribution were collected through in-depth, semi-structured interviews and participatory social network mapping. These interviews explored injecting equipment demand, access to services, relationship pathways through which peer distribution occurred, an estimate of the size of the different peer distribution roles and participants' understanding of the illegality of peer distribution in NSW. Data were collected from 32 participants, and 31 (98%) reported participating in peer distribution in the months prior to interview. Of those 31 participants, five reported large-scale formal distribution, with an estimated volume of 34,970 needles and syringes annually. Twenty-two participated in reciprocal exchange, where equipment was distributed and received on an informal basis that appeared dependent on context and circumstance and four participants reported recipient peer distribution as their only access to sterile injecting equipment. Most (n=27) were unaware that it was illegal to distribute injecting equipment to their peers. Peer distribution was almost ubiquitous amongst the PWID participating in the study, and although five participants reported taking part in the highly organised, large-scale distribution of injecting equipment for altruistic reasons, peer distribution was more commonly reported to take place in small networks of friends and/or partners for reasons of convenience. The law regarding the illegality of peer distribution needs to change so that NSPs can capitalise on peer distribution to increase the options available to PWID and to acknowledge PWID as essential harm reduction agents in the prevention of BBVs. Copyright © 2016 Elsevier B.V. All rights reserved.
Evaluating Implementations of Service Oriented Architecture for Sensor Network via Simulation
2011-04-01
Subject: COMPUTER SCIENCE. Approved: Boleslaw Szymanski, Thesis Adviser. Rensselaer Polytechnic Institute, Troy, New York, April 2011 (For Graduation May 2011) ... simulation supports distributed and centralized composition with a type hierarchy and multiple-service statically-located nodes in a 2-dimensional space. The second simulation
Introducing high performance distributed logging service for ACS
NASA Astrophysics Data System (ADS)
Avarias, Jorge A.; López, Joao S.; Maureira, Cristián; Sommer, Heiko; Chiozzi, Gianluca
2010-07-01
The ALMA Common Software (ACS) is a software framework that provides the infrastructure for the Atacama Large Millimeter Array and other projects. ACS, based on CORBA, offers basic services and common design patterns for distributed software. Every properly built system needs to be able to log status and error information. Logging in a single computer scenario can be as easy as using fprintf statements. However, in a distributed system, it must provide a way to centralize all logging data in a single place without overloading the network or complicating the applications. ACS provides a complete logging service infrastructure in which every log has an associated priority and timestamp, allowing filtering at different levels of the system (application, service, and client). Currently the ACS logging service uses an implementation of the CORBA Telecom Log Service in a customized way, using only a minimal subset of the features provided by the standard. The most relevant feature used by ACS is the ability to treat the logs as event data that gets distributed over the network in a publisher-subscriber paradigm. For this purpose the CORBA Notification Service, which is resource intensive, is used. On the other hand, the Data Distribution Service (DDS) provides an alternative standard for publisher-subscriber communication for real-time systems, offering better performance and featuring decentralized message processing. The current document describes how the new high performance logging service of ACS has been modeled and developed using DDS, replacing the Telecom Log Service. Benefits and drawbacks are analyzed. A benchmark is presented comparing the differences between the implementations.
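The publish-subscribe pattern underlying the logging service can be illustrated with a minimal in-process sketch. This is not the ACS, CORBA Telecom Log Service, or DDS API; the broker, record fields, and priority levels below are assumptions chosen only to show how log records carrying a priority and timestamp can be filtered on the subscriber side rather than at the publisher.

```python
# Minimal publish-subscribe logging sketch (assumed layout; not an ACS/DDS API).
import time
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class LogRecord:
    priority: int          # e.g. 0 = debug ... 4 = critical
    source: str
    message: str
    timestamp: float = field(default_factory=time.time)

class LogBroker:
    """Central broker: publishers push records, subscribers receive filtered copies."""
    def __init__(self) -> None:
        self._subs: List[Tuple[int, Callable[[LogRecord], None]]] = []

    def subscribe(self, min_priority: int, callback: Callable[[LogRecord], None]) -> None:
        self._subs.append((min_priority, callback))

    def publish(self, record: LogRecord) -> None:
        for min_priority, callback in self._subs:
            if record.priority >= min_priority:   # per-subscriber priority filtering
                callback(record)

broker = LogBroker()
broker.subscribe(0, lambda r: print(f"[archive] {r.timestamp:.3f} {r.source}: {r.message}"))
broker.subscribe(3, lambda r: print(f"[ALARM ] {r.source}: {r.message}"))

broker.publish(LogRecord(priority=1, source="antenna42", message="pointing update ok"))
broker.publish(LogRecord(priority=4, source="correlator", message="queue overflow"))
```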
NASA Technical Reports Server (NTRS)
Ziegler, C.; Schilling, D. L.
1977-01-01
Two networks consisting of single server queues, each with a constant service time, are considered. The external inputs to each network are assumed to follow some general probability distribution. Several interesting equivalencies that exist between the two networks considered are derived. This leads to the introduction of an important concept in delay decomposition. It is shown that the waiting time experienced by a customer can be decomposed into two basic components called self delay and interference delay.
Distributed Multiple Access Control for the Wireless Mesh Personal Area Networks
NASA Astrophysics Data System (ADS)
Park, Moo Sung; Lee, Byungjoo; Rhee, Seung Hyong
Mesh networking technologies for both high-rate and low-rate wireless personal area networks (WPANs) are under development by several standardization bodies. They are considering adopting distributed TDMA MAC protocols to provide seamless user mobility as well as good peer-to-peer QoS in WPAN mesh. It has been pointed out, however, that the absence of a central controller in a wireless TDMA MAC may cause severe performance degradation: e.g., fair allocation, service differentiation, and admission control may be hard to achieve or cannot be provided. In this paper, we suggest a new framework of resource allocation for distributed MAC protocols in WPANs. Simulation results show that our algorithm achieves both fair resource allocation and flexible service differentiation in a fully distributed way for mesh WPANs where the devices have high mobility and various requirements. We also provide an analytical model to discuss its unique equilibrium and to compute the lengths of reserved time slots at the stable point.
Security and Efficiency Concerns With Distributed Collaborative Networking Environments
2003-09-01
have the ability to access Web communications services of the WebEx MediaTone Network from a single login. [24] WebEx provides a range of secure...Web. WebEx services enable secure data, voice and video communications through the browser and are supported by the WebEx MediaTone Network, a global...designed to host large-scale, structured events and conferences, featuring a Q&A Manager that allows multiple moderators to handle questions while
Vulnerability of water supply systems to cyber-physical attacks
NASA Astrophysics Data System (ADS)
Galelli, Stefano; Taormina, Riccardo; Tippenhauer, Nils; Salomons, Elad; Ostfeld, Avi
2016-04-01
The adoption of smart meters, distributed sensor networks and industrial control systems has largely improved the level of service provided by modern water supply systems. Yet, the progressive computerization exposes these critical infrastructures to cyber-physical attacks, which are generally aimed at stealing critical information (cyber-espionage) or causing service disruption (denial-of-service). Recent statistics show that water and power utilities are undergoing frequent attacks, such as the December power outage in Ukraine, attracting the interest of operators and security agencies. Taking the security of Water Distribution Networks (WDNs) as the domain of study, our work seeks to characterize the vulnerability of WDNs to cyber-physical attacks, so as to conceive adequate defense mechanisms. We extend the functionality of EPANET, which models hydraulic and water quality processes in pressurized pipe networks, to include a cyber layer vulnerable to repeated attacks. Simulation results on a medium-scale network show that several hydraulic actuators (valves and pumps, for example) can be easily attacked, causing both service disruption (i.e., water spillage and loss of pressure) and structural damage (e.g., pipe bursts). Our work highlights the need for adequate countermeasures, such as attack detection and reactive control systems.
Scaling and correlations in three bus-transport networks of China
NASA Astrophysics Data System (ADS)
Xu, Xinping; Hu, Junhui; Liu, Feng; Liu, Lianshou
2007-01-01
We report the statistical properties of three bus-transport networks (BTN) in three different cities of China. These networks are composed of a set of bus lines and the stations served by these lines. Network properties, including the degree distribution, clustering and average path length, are studied under different definitions of network topology. We explore scaling laws and correlations that may govern intrinsic features of such networks. In addition, we create a weighted network representation of the BTN, with lines mapped to nodes and the number of common stations mapped to edge weights between lines. In such a representation, the distributions of degree, strength and weight are investigated. A linear relation between strength and degree, s(k) ∼ k, is also observed.
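The weighted line-to-line representation described above can be illustrated with a few lines of code; the line-to-station mapping below is toy data, not one of the Chinese BTN datasets. Lines become nodes, the weight of an edge between two lines is the number of stations they share, and the per-line strength is the sum of adjacent edge weights.

```python
# Weighted projection of a bus-transport network onto its lines (toy data).
from itertools import combinations

# hypothetical line -> stations mapping
lines = {
    "L1": {"A", "B", "C", "D"},
    "L2": {"C", "D", "E"},
    "L3": {"E", "F"},
    "L4": {"A", "D", "F", "G"},
}

weights = {}
for (a, sa), (b, sb) in combinations(lines.items(), 2):
    common = len(sa & sb)                 # number of shared stations
    if common:
        weights[(a, b)] = common

degree = {line: 0 for line in lines}
strength = {line: 0 for line in lines}    # strength s = sum of adjacent edge weights
for (a, b), w in weights.items():
    degree[a] += 1
    degree[b] += 1
    strength[a] += w
    strength[b] += w

print("edge weights:", weights)
print("degree:", degree)
print("strength:", strength)
```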
A bipartite fitness model for online music streaming services
NASA Astrophysics Data System (ADS)
Pongnumkul, Suchit; Motohashi, Kazuyuki
2018-01-01
This paper proposes an evolution model and an analysis of the behavior of music consumers on online music streaming services. While previous studies have observed power-law degree distributions of usage in online music streaming services, the underlying behavior of users has not been well understood. Users and songs can be described using a bipartite network where an edge exists between a user node and a song node when the user has listened to that song. The growth mechanism of bipartite networks has been used to understand the evolution of online bipartite networks (Zhang et al., 2013). Existing bipartite models are based on a preferential attachment mechanism (Barabási and Albert, 1999) in which the probability that a user listens to a song is proportional to its current popularity. This mechanism does not capture two types of real-world phenomena. First, a newly released song with high quality sometimes quickly gains popularity. Second, the popularity of songs normally decreases as time goes by. Therefore, this paper proposes a new model that is more suitable for online music services by adding fitness and aging functions to the song nodes of the bipartite network proposed by Zhang et al. (2013). Theoretical analyses are performed for the degree distribution of songs. Empirical data from an online streaming service, Last.fm, are used to confirm the degree distribution of the object nodes. Simulation results show improvements over a previous model. Finally, to illustrate the application of the proposed model, a simplified royalty cost model for online music services is used to demonstrate how changes in the proposed parameters can affect the costs for online music streaming providers. Managerial implications are also discussed.
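The fitness-and-aging extension can be sketched as a small simulation; this is an illustrative simplification rather than the authors' calibrated model, and the release probability, aging rate, and event count are assumptions. A user listens to a song with probability proportional to (current plays + 1) multiplied by the song's fitness and an exponentially decaying aging factor.

```python
# Bipartite growth with fitness and aging (illustrative simplification).
import collections
import math
import random

random.seed(3)

def simulate(n_events=10000, new_song_prob=0.02, aging_rate=0.002):
    songs = []                                    # (song_id, fitness, birth_event)
    plays = collections.Counter()
    for t in range(n_events):
        if not songs or random.random() < new_song_prob:
            songs.append((len(songs), random.random(), t))     # new song released
        weights = [
            (plays[sid] + 1) * fit * math.exp(-aging_rate * (t - birth))
            for sid, fit, birth in songs
        ]
        total = sum(weights)
        r, acc = random.random() * total, 0.0
        for (sid, _, _), w in zip(songs, weights):
            acc += w
            if acc >= r:
                plays[sid] += 1                   # the chosen song gains one play
                break
    return plays

plays = simulate()
print("most played songs (id, plays):", plays.most_common(5))
```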
NASA Astrophysics Data System (ADS)
Zhang, Fan; Zhou, Zude; Liu, Quan; Xu, Wenjun
2017-02-01
Due to the advantages of being able to function under harsh environmental conditions and serving as a distributed condition information source in a networked monitoring system, the fibre Bragg grating (FBG) sensor network has attracted considerable attention for equipment online condition monitoring. To provide an overall conditional view of the mechanical equipment operation, a networked service-oriented condition monitoring framework based on FBG sensing is proposed, together with an intelligent matching method for supporting monitoring service management. In the novel framework, three classes of progressive service matching approaches, including service-chain knowledge database service matching, multi-objective constrained service matching and workflow-driven human-interactive service matching, are developed and integrated with an enhanced particle swarm optimisation (PSO) algorithm as well as a workflow-driven mechanism. Moreover, the manufacturing domain ontology, FBG sensor network structure and monitoring object are considered to facilitate the automatic matching of condition monitoring services to overcome the limitations of traditional service processing methods. The experimental results demonstrate that FBG monitoring services can be selected intelligently, and the developed condition monitoring system can be re-built rapidly as new equipment joins the framework. The effectiveness of the service matching method is also verified by implementing a prototype system together with its performance analysis.
Delay decomposition at a single server queue with constant service time and multiple inputs
NASA Technical Reports Server (NTRS)
Ziegler, C.; Schilling, D. L.
1978-01-01
Two networks consisting of single server queues, each with a constant service time, are considered. The external inputs to each network are assumed to follow some general probability distribution. Several interesting equivalencies that exist between the two networks considered are derived. This leads to the introduction of an important concept in delay decomposition. It is shown that the waiting time experienced by a customer can be decomposed into two basic components called self-delay and interference delay.
NASA Astrophysics Data System (ADS)
The present conference on global telecommunications discusses topics in the fields of Integrated Services Digital Network (ISDN) technology field trial planning and results to date, motion video coding, ISDN networking, future network communications security, flexible and intelligent voice/data networks, Asian and Pacific lightwave and radio systems, subscriber radio systems, the performance of distributed systems, signal processing theory, satellite communications modulation and coding, and terminals for the handicapped. Also discussed are knowledge-based technologies for communications systems, future satellite transmissions, high quality image services, novel digital signal processors, broadband network access interface, traffic engineering for ISDN design and planning, telecommunications software, coherent optical communications, multimedia terminal systems, advanced speech coding, portable and mobile radio communications, multi-Gbit/second lightwave transmission systems, enhanced capability digital terminals, communications network reliability, advanced antimultipath fading techniques, undersea lightwave transmission, image coding, modulation and synchronization, adaptive signal processing, integrated optical devices, VLSI technologies for ISDN, field performance of packet switching, CSMA protocols, optical transport system architectures for broadband ISDN, mobile satellite communications, indoor wireless communication, echo cancellation in communications, and distributed network algorithms.
NASA Astrophysics Data System (ADS)
Allison, M.; Gundersen, L. C.; Richard, S. M.; Dickinson, T. L.
2008-12-01
A coalition of the state geological surveys (AASG), the U.S. Geological Survey (USGS), and partners will receive NSF funding over 3 years under the INTEROP solicitation to start building the Geoscience Information Network (GIN; www.geoinformatics.info/gin), a distributed, interoperable data network. The GIN project will develop standardized services to link existing and in-progress components using a few standards and protocols, and will work with data providers to implement these services. The key components of this network are 1) catalog system(s) for data discovery; 2) service definitions for interfaces for searching catalogs and accessing resources; 3) shared interchange formats to encode information for transmission (e.g. various XML markup languages); 4) data providers that publish information using standardized services defined by the network; and 5) client applications adapted to use information resources provided by the network. The GIN will integrate and use catalog resources that currently exist or are in development. We are working with the USGS National Geologic Map Database's existing map catalog, with the USGS National Geological and Geophysical Data Preservation Program, which is developing a metadata catalog (the National Digital Catalog) for geoscience information resource discovery, and with the GEON catalog. Existing interchange formats will be used, such as GeoSciML, ChemML, and the Open Geospatial Consortium sensor, observation and measurement markup languages. Client application development will be fostered by collaboration with industry and academic partners. The GIN project will focus on the remaining aspects of the system (service definitions, assistance to data providers in implementing the services and bringing content online) and on system integration of the modules. Initial formal collaborators include the OneGeology-Europe consortium of 27 nations that is building a comparable network under the EU INSPIRE initiative, GEON, EarthChem, and the GIS software company ESRI. OneGeology-Europe and GIN have agreed to integrate their networks, effectively adopting global standards among geological surveys that are available across the entire field. ESRI is creating a Geology Data Model for ArcGIS software to be compatible with GIN, and other companies are expressing interest in adapting their services, applications, and clients to take advantage of the large data resources planned to become available through GIN.
A model for the distribution of watermarked digital content on mobile networks
NASA Astrophysics Data System (ADS)
Frattolillo, Franco; D'Onofrio, Salvatore
2006-10-01
Although digital watermarking can be considered one of the key technologies to implement the copyright protection of digital contents distributed on the Internet, most of the content distribution models based on watermarking protocols proposed in literature have been purposely designed for fixed networks and cannot be easily adapted to mobile networks. On the contrary, the use of mobile devices currently enables new types of services and business models, and this makes the development of new content distribution models for mobile environments strategic in the current scenario of the Internet. This paper presents and discusses a distribution model of watermarked digital contents for such environments able to achieve a trade-off between the needs of efficiency and security.
An economic analysis on optical Ethernet in the access network
NASA Astrophysics Data System (ADS)
Kim, Sung Hwi; Nam, Dohyun; Yoo, Gunil; Kim, WoonHa
2004-04-01
Broadband service subscribers in Korea have increased exponentially and the market is now almost saturated. Several types of broadband access solutions have been deployed in the field; most subscribers are served by xDSL services such as ADSL or VDSL. Subscribers living in apartments are usually served over KT's Ntopia network, an FTTC structure that is, from an economic point of view, a dormant (underutilized) network. In a competitive telecom environment with new services such as video, we are faced with the need to expand or rebuild portions of our access networks and are looking for ways to provide any service that competitors might offer now or in the near future. To pursue new business models such as FTTH service, we consider deploying an optical access network. In spite of the numerous benefits of PON, we cannot yet conclude that PON is the best solution in Korea, because we have already deployed an optical access network with ring-type feeder cable and have a dense population of subscribers, mainly distributed within 6 km of the central office. We therefore try to utilize the existing Ntopia network for FTTH service in the optical access environment, while also deploying PON solutions in the field as FTTC or FTTH architectures. Accordingly, we analyze the PON structure in comparison with the AON structure in order to identify the optimal structure for Korea. First, we briefly describe the existing optical access networks and network architecture. Second, we investigate the cost of building optical access networks by modeling cost functions for Ethernet-based AON and PON structures, and analyze the two network architectures under different deployment scenarios: urban, small town, and rural. Finally, we suggest the most economical PON-based solution, optimized for KT's optical access environment.
Decentralized Online Social Networks
NASA Astrophysics Data System (ADS)
Datta, Anwitaman; Buchegger, Sonja; Vu, Le-Hung; Strufe, Thorsten; Rzadca, Krzysztof
Current online social networks (OSNs) are web services run on logically centralized infrastructure. Large OSN sites use content distribution networks and thus distribute some of the load by caching for performance reasons; nevertheless, there is a central repository for user and application data. This centralized nature of OSNs has several drawbacks, including scalability, privacy, dependence on a provider, the need to be online for every transaction, and a lack of locality. There have thus been several efforts toward decentralizing OSNs while retaining the functionalities offered by centralized OSNs. A decentralized online social network (DOSN) is a distributed system for social networking with no or limited dependency on any dedicated central infrastructure. In this chapter we explore the various motivations of a decentralized approach to online social networking, discuss several concrete proposals and types of DOSN as well as challenges and opportunities associated with decentralization.
Real-time distributed scheduling algorithm for supporting QoS over WDM networks
NASA Astrophysics Data System (ADS)
Kam, Anthony C.; Siu, Kai-Yeung
1998-10-01
Most existing or proposed WDM networks employ circuit switching, typically with one session having exclusive use of one entire wavelength. Consequently they are not suitable for data applications involving bursty traffic patterns. The MIT AON Consortium has developed an all-optical LAN/MAN testbed which provides time-slotted WDM service and employs fast-tunable transceivers in each optical terminal. In this paper, we explore extensions of this service to achieve fine-grained statistical multiplexing with different virtual circuits time-sharing the wavelengths in a fair manner. In particular, we develop a real-time distributed protocol for best-effort traffic over this time-slotted WDM service with near-optimal fairness and throughput characteristics. As an additional design feature, our protocol supports the allocation of guaranteed bandwidths to selected connections. This feature acts as a first step towards supporting integrated services and quality-of-service guarantees over WDM networks. To achieve high throughput, our approach is based on scheduling transmissions, as opposed to collision-based schemes. Our distributed protocol involves one MAN scheduler and several LAN schedulers (one per LAN) in a master-slave arrangement. Because of propagation delays and limits on control channel capacities, all schedulers are designed to work with partial, delayed traffic information. Our distributed protocol is of the `greedy' type to ensure fast execution in real time in response to dynamic traffic changes. It employs a hybrid form of rate and credit control for resource allocation. We have performed extensive simulations, which show that our protocol allocates resources (transmitters, receivers, wavelengths) fairly with high throughput, and supports bandwidth guarantees.
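The slot-by-slot scheduling idea can be illustrated with a simplified greedy sketch; it is not the paper's protocol and ignores propagation delays and the MAN/LAN master-slave split. In each time slot, requests are served in decreasing order of outstanding credits (so guaranteed-bandwidth sessions are favoured), subject to the constraint that each transmitter, each receiver, and each wavelength is used at most once per slot.

```python
# Greedy per-slot scheduling sketch (illustrative simplification of the idea).
from collections import namedtuple

Request = namedtuple("Request", "session src dst credits")

def schedule_slot(requests, n_wavelengths):
    busy_tx, busy_rx = set(), set()
    assignments = []
    # higher credit balance first -> approximates rate/credit-based fairness
    for req in sorted(requests, key=lambda r: -r.credits):
        if len(assignments) == n_wavelengths:
            break                                  # all wavelengths used this slot
        if req.src in busy_tx or req.dst in busy_rx:
            continue                               # tunable transmitter/receiver busy
        wavelength = len(assignments)              # any free wavelength will do
        assignments.append((req.session, req.src, req.dst, wavelength))
        busy_tx.add(req.src)
        busy_rx.add(req.dst)
    return assignments

pending = [
    Request("guaranteed-A", src=1, dst=4, credits=5),
    Request("best-effort-B", src=2, dst=4, credits=1),   # blocked: receiver 4 busy
    Request("best-effort-C", src=2, dst=5, credits=2),
    Request("best-effort-D", src=3, dst=6, credits=0),
]
for a in schedule_slot(pending, n_wavelengths=3):
    print("slot assignment:", a)
```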
Development of a Coordinated National Soil Moisture Network: A Pilot Study
NASA Astrophysics Data System (ADS)
Lucido, J. M.; Quiring, S. M.; Verdin, J. P.; Pulwarty, R. S.; Baker, B.; Cosgrove, B.; Escobar, V. M.; Strobel, M.
2014-12-01
Soil moisture data is critical for accurate drought prediction, flood forecasting, climate modeling, prediction of crop yields and water budgeting. However, soil moisture data are collected by many agencies and organizations in the United States using a variety of instruments and methods for varying applications. These data are often distributed and represented in disparate formats, posing significant challenges for use. In recognition of these challenges, the President's Climate Action Plan articulated the need for a coordinated national soil moisture network. In response to this action plan, a team led by the National Integrated Drought Information System has begun to develop a framework for this network and has instituted a proof-of-concept pilot study. This pilot is located in the south-central plains of the US, and will serve as a reference architecture for the requisite data systems and inform the design of the national network. The pilot comprises both in-situ and modeled soil moisture datasets (historical and real-time) and will serve the following use cases: operational drought monitoring, experimental land surface modeling, and operational hydrological modeling. The pilot will be implemented using a distributed network design in order to serve dispersed data in real-time directly from data providers. Standard service protocols will be used to enable future integration with external clients. The pilot network will additionally contain a catalog of data sets and web service endpoints, which will be used to broker web service calls. A mediation and aggregation service will then intelligently request, compile, and transform the distributed datasets from their native formats into a standardized output. This mediation framework allows data to be hosted and maintained locally by the data owners while simplifying access through a single service interface. These data services will then be used to create visualizations, for example, views of the current soil moisture conditions compared to historical baselines via a map-based web application. This talk will comprise an overview of the pilot design and implementation, a discussion of strategies for integrating in-situ and modeled soil moisture data sets as well as lessons learned during the course of the pilot.
NASA Astrophysics Data System (ADS)
Felix, J.
The management center and new circuit switching services offered by the French Telecom I network are described. Attention is focused on business services. The satellite has a 125 Mbit/sec capability distributed over 5 frequency bands, yielding the equivalent of 1800 channels. Data are transmitted in digitized bursts with TDMA techniques. Besides the management center, Telecom I interfaces with 310 local network antennas with access managed by the center through a reservation service and protocol assignment. The center logs and supervises alarms and network events, monitors traffic, logs taxation charges and manages the man-machine dialog for TDMA and terrestrial operations. Time slots are arranged in terms of minimal 10 min segments. The reservations can be directly accessed by up to 1000 terminals. All traffic is handled on a call-by-call basis.
Dyadic Interactions in Service Encounter: Bayesian SEM Approach
NASA Astrophysics Data System (ADS)
Sagan, Adam; Kowalska-Musiał, Magdalena
Dyadic interactions are an important aspect of service encounters. They may be observed in B2B distribution channels, professional services, buying centers, family decision making or WOM communications. The networks consist of dyadic bonds that form dense but weak ties among the actors.
A Scalable QoS-Aware VoD Resource Sharing Scheme for Next Generation Networks
NASA Astrophysics Data System (ADS)
Huang, Chenn-Jung; Luo, Yun-Cheng; Chen, Chun-Hua; Hu, Kai-Wen
In the network-aware concept, applications are aware of network conditions and adapt to the varying environment to achieve acceptable and predictable performance. In this work, a solution for video-on-demand service that integrates wireless and wired networks using network-aware concepts is proposed to reduce the blocking probability and dropping probability of mobile requests. A fuzzy logic inference system is employed to select appropriate cache relay nodes to cache published video streams and distribute them to different peers through a service oriented architecture (SOA). A SIP-based control protocol and the IMS standard are adopted to ensure the possibility of heterogeneous communication and to provide a framework for delivering real-time multimedia services over an IP-based network, ensuring interoperability, roaming, and end-to-end session management. The experimental results demonstrate the effectiveness and practicability of the proposed work.
DGs for Service Restoration to Critical Loads in a Secondary Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Yin; Liu, Chen-Ching; Wang, Zhiwen
During a major outage in a secondary network distribution system, distributed generators (DGs) connected to the primary feeders as well as the secondary network can be used to serve critical loads. This paper proposes a resilience-oriented method to determine restoration strategies for secondary network distribution systems after a major disaster. Technical issues associated with the restoration process are analyzed, including the operation of network protectors, inrush currents caused by the energization of network transformers, synchronization of DGs to the network, and circulating currents among DGs. A look-ahead load restoration framework is proposed, incorporating technical issues associated with secondary networks, limits on DG capacity and generation resources, dynamic constraints, and operational limits. The entire outage duration is divided into a sequence of periods. Restoration strategies can be adjusted at the beginning of each period using the latest information. Finally, numerical simulation of the modified IEEE 342-node low voltage networked test system is performed to validate the effectiveness of the proposed method.
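A highly simplified sketch of the period-by-period, look-ahead idea is given below; it is an assumption-laden illustration, not the paper's optimization, and it ignores network protectors, inrush currents, synchronization, and circulating currents. In each period, critical loads are picked up in priority order as long as total DG power capacity and remaining generation energy allow, and the plan can be recomputed at the start of the next period.

```python
# Illustrative only: greedy pickup of critical loads per period under assumed
# DG power-capacity and energy (generation resource) limits.
loads = [
    ("hospital", 300.0, 3),          # (name, demand in kW, priority)
    ("water pump", 200.0, 2),
    ("shelter", 150.0, 2),
    ("residential feeder", 400.0, 1),
]

def restore(loads, dg_capacity_kw=600.0, dg_energy_kwh=3000.0,
            n_periods=4, period_h=2.0):
    """Return the loads served in each period and the leftover DG energy."""
    schedule = []
    for _ in range(n_periods):
        picked, power = [], 0.0
        for name, kw, _priority in sorted(loads, key=lambda l: -l[2]):
            fits_capacity = power + kw <= dg_capacity_kw
            fits_energy = (power + kw) * period_h <= dg_energy_kwh
            if fits_capacity and fits_energy:
                picked.append(name)
                power += kw
        dg_energy_kwh -= power * period_h        # consume stored generation resource
        schedule.append(picked)
    return schedule, dg_energy_kwh

plan, energy_left = restore(loads)
for i, served in enumerate(plan):
    print(f"period {i}: restore {served}")
print(f"remaining DG energy: {energy_left} kWh")
```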
Maritime Operations in Disconnected, Intermittent, and Low-Bandwidth Environments
2013-06-01
of a Dynamic Distributed Database (DDD) is a core element enabling the distributed operation of networks and applications, as described in this ... document. The DDD is a database containing all the relevant information required to reconfigure the applications, routing, and other network services ... optimize application configuration. Figure 5 gives a snapshot of entries in the DDD. In current testing, the DDD is replicated using Domino
1994-03-01
... Specification and Implementation. RFC-1119, Network Working Group, September ... Network Time Protocol (NTP) over the OSI Remote Operations Service. RFC- ... ISIS implements a powerful model of distributed computation known as ... approximately 94% of the total computation time. Timing results are ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, Qi; Al-Shaer, Ehab; Chatterjee, Samrat
The Infrastructure Distributed Denial of Service (IDDoS) attacks continue to be one of the most devastating challenges facing cyber systems. The new generation of IDDoS attacks exploits the inherent weaknesses of cyber infrastructure, including the deterministic nature of routes, the skewed distribution of flows, and Internet ossification, to discover the network's critical links and launch highly stealthy flooding attacks that are not observable at the victim end. In this paper, first, we propose a new metric to quantitatively measure the potential susceptibility of any arbitrary target server or domain to stealthy IDDoS attacks, and estimate the impact of such susceptibility on enterprises. Second, we develop a proactive route mutation technique to minimize the susceptibility to these attacks by dynamically changing the flow paths periodically to invalidate the adversary's knowledge about the network and avoid targeted critical links. Our proposed approach actively changes these network paths while satisfying security and quality of service requirements. We present an integrated approach to proactive route mutation that combines both infrastructure-based mutation, based on reconfiguration of switches and routers, and a middle-box approach that uses an overlay of end-point proxies to construct a virtual network path free of critical links to reach a destination. We implemented the proactive path mutation technique on a Software Defined Network using the OpenDaylight controller to demonstrate a feasible deployment of this approach. Our evaluation validates the correctness, effectiveness, and scalability of the proposed approaches.
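A minimal sketch of the route mutation step is shown below; it is illustrative only and does not use the OpenDaylight or SDN controller APIs mentioned in the abstract. Every mutation interval, a fresh path from source to destination is drawn at random from the set of short paths that avoid links marked as critical, so knowledge an adversary gathered about the previous route becomes stale. The topology, critical links, and hop cutoff are assumptions.

```python
# Illustrative route-mutation sketch (no SDN controller APIs involved).
import random
import networkx as nx

random.seed(11)
G = nx.connected_watts_strogatz_graph(n=30, k=4, p=0.3, seed=11)
critical_links = {(0, 1), (5, 9)}                # links presumed targetable by an attacker

def mutated_path(G, src, dst, critical, cutoff=8):
    usable = G.copy()
    usable.remove_edges_from(critical)           # never route over critical links
    candidates = list(nx.all_simple_paths(usable, src, dst, cutoff=cutoff))
    if not candidates:
        raise RuntimeError("no critical-link-free path within the hop cutoff")
    return random.choice(candidates)             # randomize among feasible short paths

for epoch in range(3):                           # one mutation per interval
    path = mutated_path(G, src=2, dst=20, critical=critical_links)
    print(f"mutation interval {epoch}: route {path}")
```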
A Novel Trust Service Provider for Internet Based Commerce Applications.
ERIC Educational Resources Information Center
Siyal, M. Y.; Barkat, B.
2002-01-01
Presents a framework for enhancing trust in Internet commerce. Shows how trust can be provided through a network of Trust Service Providers (TSp). Identifies a set of services that should be offered by a TSp. Presents a distributed object-oriented implementation of trust services using CORBA, JAVA and XML. (Author/AEF)
Distributed network management in the flat structured mobile communities
NASA Astrophysics Data System (ADS)
Balandina, Elena
2005-10-01
Delivering proper management into flat structured mobile communities is crucial for improving the user experience and increasing application diversity in mobile networks. Available P2P applications perform application-centric management, but this cannot replace network-wide management, especially when a number of different applications are used simultaneously in the network. Network-wide management is the key element required for a smooth transition from standalone P2P applications to self-organizing mobile communities that maintain various services with quality and security guarantees. Classical centralized network management solutions are not applicable in flat structured mobile communities due to the decentralized nature and high mobility of the underlying networks. The basic network management tasks also have to be revised to take into account the specifics of flat structured mobile communities. Network performance management becomes more dependent on the nodes' current context, which also requires an extension of the configuration management functionality. Fault management has to take into account the high mobility of the network nodes. Performance and accounting management are mainly targeted at maintaining efficient and fair access to resources within the community; however, they also allow unbalanced resource use by nodes that explicitly permit it, e.g. as a voluntary donation to the community or for professional (commercial) reasons. Security management must implement new trust models, which are based on community feedback, professional authorization, or a mix of both. To address these and other specifics of flat structured mobile communities, a new network management solution is required. The paper presents a distributed network management solution for flat structured mobile communities and points out possible network management roles for the different parties (e.g. operators, service-providing hubs/super nodes, etc.) involved in the service providing chain.
High Speed Computing, LANs, and WAMs
NASA Technical Reports Server (NTRS)
Bergman, Larry A.; Monacos, Steve
1994-01-01
Optical fiber networks may one day offer potential capacities exceeding 10 terabits/sec. This paper describes present gigabit network techniques for distributed computing as illustrated by the CASA gigabit testbed, and then explores future all-optic network architectures that offer increased capacity, more optimized level of service for a given application, high fault tolerance, and dynamic reconfigurability.
Open solutions to distributed control in ground tracking stations
NASA Technical Reports Server (NTRS)
Heuser, William Randy
1994-01-01
The advent of high speed local area networks has made it possible to interconnect small, powerful computers to function together as a single large computer. Today, distributed computer systems are the new paradigm for large scale computing systems. However, the communications provided by the local area network is only one part of the solution. The services and protocols used by the application programs to communicate across the network are as indispensable as the local area network. And the selection of services and protocols that do not match the system requirements will limit the capabilities, performance, and expansion of the system. Proprietary solutions are available but are usually limited to a select set of equipment. However, there are two solutions based on 'open' standards. The question that must be answered is 'which one is the best one for my job?' This paper examines a model for tracking stations and their requirements for interprocessor communications in the next century. The model and requirements are matched with the model and services provided by the five different software architectures and supporting protocol solutions. Several key services are examined in detail to determine which services and protocols most closely match the requirements for the tracking station environment. The study reveals that the protocols are tailored to the problem domains for which they were originally designed. Further, the study reveals that the process control model is the closest match to the tracking station model.
Chip-set for quality of service support in passive optical networks
NASA Astrophysics Data System (ADS)
Ringoot, Edwin; Hoebeke, Rudy; Slabbinck, B. Hans; Verhaert, Michel
1998-10-01
In this paper the design of a chip-set for QoS provisioning in ATM-based Passive Optical Networks is discussed. The implementation of a general-purpose switch chip on the Optical Network Unit is presented, with focus on the design of the cell scheduling and buffer management logic. The cell scheduling logic supports `colored' grants, priority jumping and weighted round-robin scheduling. The switch chip offers powerful buffer management capabilities enabling the efficient support of GFR and UBR services. Multicast forwarding is also supported. In addition, the architecture of a MAC controller chip developed for a SuperPON access network is introduced. In particular, the permit scheduling logic and its implementation on the Optical Line Termination will be discussed. The chip-set enables the efficient support of services with different service requirements on the SuperPON. The permit scheduling logic built into the MAC controller chip in combination with the cell scheduling and buffer management capabilities of the switch chip can be used by network operators to offer guaranteed service performance to delay sensitive services, and to efficiently and fairly distribute any spare capacity to delay insensitive services.
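The cell scheduling behaviour described above (weighted round-robin with priority jumping) can be illustrated with a small software sketch; it is a generic illustration, not the chip's logic, and the class names, weights, and priorities are assumptions. Within one round, each class may send up to its weight in cells, but a non-empty higher-priority class always jumps ahead of lower-priority ones.

```python
# Generic illustration of weighted round-robin with priority jumping.
from collections import deque

queues = {
    "delay-sensitive": {"weight": 4, "priority": 0, "q": deque(f"ds{i}" for i in range(3))},
    "GFR":             {"weight": 2, "priority": 1, "q": deque(f"gfr{i}" for i in range(5))},
    "UBR":             {"weight": 1, "priority": 2, "q": deque(f"ubr{i}" for i in range(5))},
}

def schedule_round(queues):
    """One WRR round: each class may send up to `weight` cells, and a non-empty
    higher-priority class always jumps ahead of lower-priority ones."""
    order = sorted(queues.items(), key=lambda kv: kv[1]["priority"])
    quota = {name: info["weight"] for name, info in order}
    sent = []
    while any(quota[name] > 0 and info["q"] for name, info in order):
        for name, info in order:                 # priority jumping
            if quota[name] > 0 and info["q"]:
                sent.append((name, info["q"].popleft()))
                quota[name] -= 1
                break
    return sent

# e.g. 3 delay-sensitive cells (queue exhausted), then 2 GFR cells, then 1 UBR cell
print(schedule_round(queues))
```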
Novel mechanism of network protection against the new generation of cyber attacks
NASA Astrophysics Data System (ADS)
Milovanov, Alexander; Bukshpun, Leonid; Pradhan, Ranjit
2012-06-01
A new intelligent mechanism is presented to protect networks against the new generation of cyber attacks. This mechanism integrates TCP/UDP/IP protocol stack protection and attacker/intruder deception to eliminate existing TCP/UDP/IP protocol stack vulnerabilities. It allows the detection of currently undetectable, highly distributed, low-frequency attacks such as distributed denial-of-service (DDoS) attacks, coordinated attacks, botnets, and stealth network reconnaissance. The mechanism also allows the attacker/intruder to be isolated from the network and the attack to be redirected to a simulated network acting as a decoy. As a result, network security personnel gain sufficient time to defend the network and collect the attack information. The presented approach can be incorporated into wireless or wired networks that require protection against known attacks and the new generation of cyber attacks.
One network metric datastore to track them all: the OSG network metric service
NASA Astrophysics Data System (ADS)
Quick, Robert; Babik, Marian; Fajardo, Edgar M.; Gross, Kyle; Hayashi, Soichi; Krenz, Marina; Lee, Thomas; McKee, Shawn; Pipes, Christopher; Teige, Scott
2017-10-01
The Open Science Grid (OSG) relies upon the network as a critical part of the distributed infrastructures it enables. In 2012, OSG added a new focus area in networking with a goal of becoming the primary source of network information for its members and collaborators. This includes gathering, organizing, and providing network metrics to guarantee effective network usage and prompt detection and resolution of any network issues, including connection failures, congestion, and traffic routing. In September of 2015, this service was deployed into the OSG production environment. We will report on the creation, implementation, testing, and deployment of the OSG Networking Service. Starting from organizing the deployment of perfSONAR toolkits within OSG and its partners, to the challenges of orchestrating regular testing between sites, to reliably gathering the resulting network metrics and making them available for users, virtual organizations, and higher level services, all aspects of implementation will be reviewed. In particular, several higher-level services were developed to bring the OSG network service to its full potential. These include a web-based mesh configuration system, which allows central scheduling and management of all the network tests performed by the instances; a set of probes to continually gather metrics from the remote instances and publish it to different sources; a central network datastore (esmond), which provides interfaces to access the network monitoring information in close to real time and historically (up to a year) giving the state of the tests; and a perfSONAR infrastructure monitor system, ensuring the current perfSONAR instances are correctly configured and operating as intended. We will also describe the challenges we encountered in ongoing operations of the network service and how we have evolved our procedures to address those challenges. Finally we will describe our plans for future extensions and improvements to the service.
Quality of service routing in the differentiated services framework
NASA Astrophysics Data System (ADS)
Oliveira, Marilia C.; Melo, Bruno; Quadros, Goncalo; Monteiro, Edmundo
2001-02-01
In this paper we present a quality of service routing strategy for networks where traffic differentiation follows the class-based paradigm, as in the Differentiated Services framework. This routing strategy is based on a quality of service metric that represents the impact that the delay and losses observed at each router in the network have on application performance. Based on this metric, a path is selected for each class according to the class's sensitivity to delay and losses. The distribution of the metric is triggered by a relative criterion with two thresholds, and the values advertised are the moving average of the last values measured.
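The two-threshold, moving-average advertisement trigger can be sketched as follows; the weights used to combine delay and loss into an impact value and the threshold constants are assumptions, not the paper's calibration. A router keeps a moving average of its per-class metric and advertises a new value only when the relative change since the last advertisement crosses one of the thresholds.

```python
# Sketch of threshold-triggered, moving-average metric advertisement (assumed values).
from collections import deque

WINDOW = 8
LOW_THRESHOLD, HIGH_THRESHOLD = 0.10, 0.30       # relative-change triggers

class ClassMetric:
    """Per-class QoS metric kept at a router."""
    def __init__(self):
        self.samples = deque(maxlen=WINDOW)
        self.last_advertised = None

    def update(self, delay_ms, loss_ratio, w_delay=0.7, w_loss=0.3):
        impact = w_delay * delay_ms + w_loss * 1000.0 * loss_ratio   # toy impact metric
        self.samples.append(impact)
        avg = sum(self.samples) / len(self.samples)                  # moving average
        if self.last_advertised is None:
            self.last_advertised = avg
            return avg                                               # first advertisement
        change = abs(avg - self.last_advertised) / self.last_advertised
        urgent = change >= HIGH_THRESHOLD
        steady = change >= LOW_THRESHOLD and len(self.samples) == WINDOW
        if urgent or steady:
            self.last_advertised = avg
            return avg                                               # advertise new value
        return None                                                  # stay silent

metric = ClassMetric()
for delay, loss in [(10, 0.001), (11, 0.001), (30, 0.010), (35, 0.020), (12, 0.001)]:
    advertised = metric.update(delay, loss)
    print(f"delay={delay} ms loss={loss}: advertise -> {advertised}")
```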
Going the Extra Mile: Enabling Joint Logistics for the Tactical War Fighter
2010-05-04
few of the links when relocating hubs. Chains vs. Networks: supply chains are too brittle (long CPL, low clustering, simple pattern, simple control) ... Mass Service Perspective: efficiency, highly optimized, brittle, rigid supply chains vs. networked cross-service mutual support, cross-enterprise ... Storage and Distribution Center," Army Logistician 39, no. 6 (November-December 2007): 40.
Adaptive Resource Utilization Prediction System for Infrastructure as a Service Cloud.
Zia Ullah, Qazi; Hassan, Shahzad; Khan, Gul Muhammad
2017-01-01
Infrastructure as a Service (IaaS) cloud provides resources as a service from a pool of compute, network, and storage resources. Cloud providers can manage their resource usage by knowing future usage demand from the current and past usage patterns of resources. Resource usage prediction is of great importance for dynamic scaling of cloud resources to achieve efficiency in terms of cost and energy consumption while keeping quality of service. The purpose of this paper is to present a real-time resource usage prediction system. The system takes real-time utilization of resources and feeds the utilization values into several buffers based on the type of resource and the time span size. The buffers are read by an R-language-based statistical system, and each buffer's data are checked to determine whether they follow a Gaussian distribution. If the data follow a Gaussian distribution, an Autoregressive Integrated Moving Average (ARIMA) model is applied; otherwise an Autoregressive Neural Network (AR-NN) is applied. In the ARIMA process, a model is selected based on the minimum Akaike Information Criterion (AIC) value. Similarly, in the AR-NN process, the network with the lowest Network Information Criterion (NIC) value is selected. We have evaluated our system with real traces of CPU utilization of an IaaS cloud of one hundred and twenty servers.
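The prediction branch described above can be sketched in a few lines; the authors use an R-based system, so the Python sketch below is only an illustration of the logic, and the lag count, candidate ARIMA orders, and network size are assumptions (a Network Information Criterion step is replaced by a fixed small network).

```python
# Illustrative prediction branch: Gaussian check -> ARIMA (min AIC), else AR-NN.
import numpy as np
from scipy.stats import shapiro
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def forecast_next(samples, lags=5):
    samples = np.asarray(samples, dtype=float)
    _, p_value = shapiro(samples)                      # Gaussian-ness check
    if p_value > 0.05:
        best = None
        for order in [(1, 0, 0), (1, 1, 1), (2, 1, 1)]:
            fit = ARIMA(samples, order=order).fit()
            if best is None or fit.aic < best.aic:     # keep the minimum-AIC model
                best = fit
        return "ARIMA", float(best.forecast(1)[0])
    # otherwise: simple autoregressive neural network on lagged values
    X = np.array([samples[i:i + lags] for i in range(len(samples) - lags)])
    y = samples[lags:]
    nn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)
    return "AR-NN", float(nn.predict(samples[-lags:].reshape(1, -1))[0])

cpu = [32, 35, 31, 40, 38, 42, 45, 43, 47, 50, 48, 52, 55, 53, 57, 60]
model, prediction = forecast_next(cpu)
print(f"{model} predicts next CPU utilization ~ {prediction:.1f}%")
```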
Yi, Meng; Chen, Qingkui; Xiong, Neal N.
2016-01-01
This paper considers the distributed access and control problem of massive wireless sensor networks’ data access center for the Internet of Things, which is an extension of wireless sensor networks and an element of its topology structure. In the context of the arrival of massive service access requests at a virtual data center, this paper designs a massive sensing data access and control mechanism to improve the access efficiency of service requests and makes full use of the available resources at the data access center for the Internet of things. Firstly, this paper proposes a synergistically distributed buffer access model, which separates the information of resource and location. Secondly, the paper divides the service access requests into multiple virtual groups based on their characteristics and locations using an optimized self-organizing feature map neural network. Furthermore, this paper designs an optimal scheduling algorithm of group migration based on the combination scheme between the artificial bee colony algorithm and chaos searching theory. Finally, the experimental results demonstrate that this mechanism outperforms the existing schemes in terms of enhancing the accessibility of service requests effectively, reducing network delay, and has higher load balancing capacity and higher resource utility rate. PMID:27827878
Workflow management in large distributed systems
NASA Astrophysics Data System (ADS)
Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.
2011-12-01
The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.
NASA Astrophysics Data System (ADS)
Park, Soomyung; Joo, Seong-Soon; Yae, Byung-Ho; Lee, Jong-Hyun
2002-07-01
In this paper, we present the Optical Cross-Connect (OXC) management control system architecture, which offers scalability and robust maintenance and provides a distributed managing environment in the optical transport network. The OXC system we are developing, divided into hardware and internal and external software, is made up of the OXC subsystem with the Optical Transport Network (OTN) sub-layer hardware and the optical switch control system, the signaling control protocol subsystem performing User-to-Network Interface (UNI) and Network-to-Network Interface (NNI) signaling control, the Operation Administration Maintenance & Provisioning (OAM&P) subsystem, and the network management subsystem. The OXC management control system has features that support flexible expansion of the optical transport network, provide connectivity to heterogeneous external network elements, allow components to be added or deleted without interrupting OAM&P services, allow remote operation, provide a global view and detailed information for network planners and operators, and offer a Common Object Request Broker Architecture (CORBA)-based open system architecture that makes it easy to add and delete intelligent service networking functions in the future. To meet these considerations, we adopt an object-oriented development method in all development steps of system analysis, design, and implementation to build an OXC management control system with scalability, maintainability, and a distributed managing environment. As a consequence, the componentification of the OXC operation management functions of each subsystem makes maintenance robust and increases code reusability. Also, the component-based OXC management control system architecture will have flexibility and scalability in nature.
Software-defined Radio Based Measurement Platform for Wireless Networks
Chao, I-Chun; Lee, Kang B.; Candell, Richard; Proctor, Frederick; Shen, Chien-Chung; Lin, Shinn-Yan
2015-01-01
End-to-end latency is critical to many distributed applications and services that are based on computer networks. There has been a dramatic push to adopt wireless networking technologies and protocols (such as WiFi, ZigBee, WirelessHART, Bluetooth, ISA100.11a, etc.) into time-critical applications. Examples of such applications include industrial automation, telecommunications, power utility, and financial services. While performance measurement of wired networks has been extensively studied, measuring and quantifying the performance of wireless networks face new challenges and demand different approaches and techniques. In this paper, we describe the design of a measurement platform based on the technologies of software-defined radio (SDR) and IEEE 1588 Precision Time Protocol (PTP) for evaluating the performance of wireless networks. PMID:27891210
Driving Innovation in Optical Networking
NASA Astrophysics Data System (ADS)
Colizzi, Ernesto
Over the past 30 years, network applications have changed with the advent of innovative services spanning from high-speed broadband access to mobile data communications and to video signal distribution. To support this service evolution, optical transport infrastructures have changed their role. Innovations in optical networking have not only allowed the pure "bandwidth per fiber" increase, but also the realization of highly dependable and easy-to-manage networks. This article analyzes the innovations that have characterized the optical networking solutions from different perspectives, with a specific focus on the advancements introduced by Alcatel-Lucent's research and development laboratories located in Italy. The advancements of optical networking will be explored and discussed through Alcatel-Lucent's optical products to contextualize each innovation with the market evolution.
GeoNetwork powered GI-cat: a geoportal hybrid solution
NASA Astrophysics Data System (ADS)
Baldini, Alessio; Boldrini, Enrico; Santoro, Mattia; Mazzetti, Paolo
2010-05-01
For the aim of setting up a Spatial Data Infrastructure (SDI), the creation of a system for metadata management and discovery plays a fundamental role. An effective solution is the use of a geoportal (e.g. the FAO/ESA geoportal), which has the important benefit of being accessible from a web browser. With this work we present a solution based on integrating two of the available frameworks: GeoNetwork and GI-cat. GeoNetwork is open-source software designed to improve the accessibility of a wide variety of data together with the associated ancillary information (metadata), at different scales and from multidisciplinary sources; data are organized and documented in a standard and consistent way. GeoNetwork implements both the Portal and Catalog components of a Spatial Data Infrastructure (SDI) as defined in the OGC Reference Architecture. It provides tools for managing and publishing metadata on spatial data and related services. GeoNetwork allows harvesting of various types of web data sources, e.g. OGC Web Services (CSW, WCS, WMS). GI-cat is a distributed catalog based on a service-oriented framework of modular components and can be customized and tailored to support different deployment scenarios. It can federate a multiplicity of catalog services, as well as inventory and access services, in order to discover and access heterogeneous ESS resources. The federated resources are exposed by GI-cat through several standard catalog interfaces (e.g. OGC CSW AP ISO, OpenSearch, etc.) and through the GI-cat extended interface. Specific components, called accessors, implement mediation services for interfacing heterogeneous service providers, each of which exposes a specific standard specification. These mediating components resolve the multiplicity of provider data models by mapping them onto the GI-cat internal data model, which implements the ISO 19115 Core profile. Accessors also implement the query protocol mapping; they translate query requests expressed according to the interface protocols exposed by GI-cat into the multiple query dialects spoken by the resource service providers. Currently, a number of well-accepted catalog and inventory services are supported, including several OGC Web Services, the THREDDS Data Server, the SeaDataNet Common Data Index, GBIF, and OpenSearch engines. A GeoNetwork-powered GI-cat has been developed in order to exploit the best of the two frameworks. The new system uses a modified version of the GeoNetwork web interface in order to add the capability of querying a specified GI-cat catalog and not only the GeoNetwork internal database. The resulting system consists of a geoportal in which GI-cat plays the role of the search engine. This new system makes it possible to distribute the query over the different types of data sources linked to a GI-cat. The metadata results of the query are then visualized by the GeoNetwork web interface. This configuration was tested in the framework of GIIDA, a project of the Italian National Research Council (CNR) focused on data accessibility and interoperability. A second advantage of this solution is achieved by setting up a GeoNetwork catalog among the accessors of the GI-cat instance. Such a configuration in turn allows GI-cat to run the query against the internal GeoNetwork database. This provides both the harvesting and metadata-editing functionalities of GeoNetwork and the distributed search functionality of GI-cat, available in a consistent way through the same web interface.
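A rough sketch of the kind of CSW query such a system could issue, using the OWSLib client; the endpoint URL and search term are placeholders, not actual GI-cat or GeoNetwork addresses:

```python
# Sketch of a CSW search such as the one a GeoNetwork front end could issue
# against a GI-cat endpoint. The URL and search term are placeholders.
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

csw = CatalogueServiceWeb("http://example.org/gi-cat/services/cswiso")  # hypothetical endpoint
query = PropertyIsLike("csw:AnyText", "%temperature%")
csw.getrecords2(constraints=[query], maxrecords=10, esn="summary")

for record_id, record in csw.records.items():
    print(record_id, record.title)
```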
39 CFR 121.4 - Package Services.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Center Facility (SCF) turnaround Package Services mail accepted at the origin SCF before the day zero... origin before the day-zero Critical Entry Time is 3 days, for each remaining (non-intra-SCF) 3-digit ZIP... intra-Network Distribution Center (NDC) Package Services mail accepted at origin before the day-zero...
39 CFR 121.4 - Package Services.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Center Facility (SCF) turnaround Package Services mail accepted at the origin SCF before the day zero... origin before the day-zero Critical Entry Time is 3 days, for each remaining (non-intra-SCF) 3-digit ZIP... intra-Network Distribution Center (NDC) Package Services mail accepted at origin before the day-zero...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-28
... Networks of California, Inc.'s Distributed Antenna Systems and Other ``Small-Cell'' Solutions AGENCY.... Petitioner states that it provides telecommunications service via Distributed Antenna Systems (DAS) and other...
Cross-layer restoration with software defined networking based on IP over optical transport networks
NASA Astrophysics Data System (ADS)
Yang, Hui; Cheng, Lei; Deng, Junni; Zhao, Yongli; Zhang, Jie; Lee, Young
2015-10-01
The IP over optical transport network is a very promising networking architecture for interconnecting geographically distributed data centers because it guarantees low delay, huge bandwidth, and high reliability at low cost. It can enable efficient resource utilization and support heterogeneous bandwidth demands in a highly available, cost-effective, and energy-effective manner. In case of a cross-layer link failure, ensuring a high level of quality of service (QoS) for user requests after the failure has become a research focus. In this paper, we propose a novel cross-layer restoration scheme for data center services with software-defined networking based on IP over optical networks. The cross-layer restoration scheme enables joint optimization of IP network and optical network resources, and enhances the responsiveness of data center service restoration to dynamic end-to-end service demands. We quantitatively evaluate the feasibility and performance through simulation under a heavy traffic load scenario in terms of path blocking probability and path restoration latency. Numerical results show that the cross-layer restoration scheme improves the recovery success rate and minimizes the overall recovery time.
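A toy sketch of the cross-layer restoration idea, recomputing a path over a combined IP/optical graph with networkx after a link failure; the topology, weights, and layer tags are invented for illustration:

```python
# Toy sketch of cross-layer restoration: after a link fails, a new path is
# recomputed over the combined IP/optical graph.
import networkx as nx

g = nx.Graph()
g.add_edge("ip1", "ip2", weight=1, layer="ip")
g.add_edge("ip2", "ip3", weight=1, layer="ip")
g.add_edge("ip1", "oxc1", weight=2, layer="optical")
g.add_edge("oxc1", "oxc2", weight=2, layer="optical")
g.add_edge("oxc2", "ip3", weight=2, layer="optical")

def restore(graph, src, dst, failed_edge):
    survivors = graph.copy()
    survivors.remove_edge(*failed_edge)          # cross-layer link failure
    return nx.shortest_path(survivors, src, dst, weight="weight")

print(restore(g, "ip1", "ip3", ("ip2", "ip3")))  # reroutes over the optical layer
```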
A Large Scale Code Resolution Service Network in the Internet of Things
Yu, Haining; Zhang, Hongli; Fang, Binxing; Yu, Xiangzhan
2012-01-01
In the Internet of Things a code resolution service provides a discovery mechanism for a requester to obtain the information resources associated with a particular product code immediately. In large-scale application scenarios a code resolution service faces serious issues involving heterogeneity, big data, and data ownership. A code resolution service network is required to address these issues. Firstly, a list of requirements for the network architecture and code resolution services is proposed. Secondly, in order to eliminate code resolution conflicts and code resolution overloads, a code structure is presented to create a uniform namespace for code resolution records. Thirdly, we propose a loosely coupled distributed network consisting of heterogeneous, independent, collaborating code resolution services and a SkipNet-based code resolution service named SkipNet-OCRS, which not only inherits DHT's advantages, but also supports administrative control and autonomy. For the external behaviors of SkipNet-OCRS, a novel external behavior mode named QRRA mode is proposed to enhance security and reduce requester complexity. For the internal behaviors of SkipNet-OCRS, an improved query algorithm is proposed to increase query efficiency. Analysis shows that integrating SkipNet-OCRS into our resolution service network can meet the proposed requirements. Finally, simulation experiments verify the excellent performance of SkipNet-OCRS. PMID:23202207
31 CFR 1025.100 - Definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULES FOR INSURANCE COMPANIES Definitions § 1025.100... and/or service representative of an insurance company. The term “insurance agent” encompasses any person that sells, markets, distributes, or services an insurance company's covered products, including...
31 CFR 1025.100 - Definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULES FOR INSURANCE COMPANIES Definitions § 1025.100... and/or service representative of an insurance company. The term “insurance agent” encompasses any person that sells, markets, distributes, or services an insurance company's covered products, including...
31 CFR 1025.100 - Definitions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULES FOR INSURANCE COMPANIES Definitions § 1025.100... and/or service representative of an insurance company. The term “insurance agent” encompasses any person that sells, markets, distributes, or services an insurance company's covered products, including...
Rudi, Gabrielle; Bailly, Jean-Stéphane; Vinatier, Fabrice
2018-01-01
To optimize ecosystem services provided by agricultural drainage networks (ditches) in headwater catchments, we need to manage the spatial distribution of plant species living in these networks. Geomorphological variables have been shown to be important predictors of plant distribution in other ecosystems because they control the water regime, the sediment deposition rates and the sun exposure in the ditches. Whether such variables may be used to predict plant distribution in agricultural drainage networks is unknown. We collected presence and absence data for 10 herbaceous plant species in a subset of a network of drainage ditches (35 km long) within a Mediterranean agricultural catchment. We simulated their spatial distribution with GLM and Maxent models using geomorphological variables and distance to natural lands and roads. Models were validated using k-fold cross-validation. We then compared the mean Area Under the Curve (AUC) values obtained for each model and other metrics issued from the confusion matrices between observed and predicted variables. Based on the results of all metrics, the models were efficient at predicting the distribution of seven species out of ten, confirming the relevance of geomorphological variables and distance to natural lands and roads to explain the occurrence of plant species in this Mediterranean catchment. In particular, the importance of the landscape geomorphological variables, i.e., the importance of the geomorphological features encompassing a broad environment around the ditch, has been highlighted. This suggests that agro-ecological measures for managing ecosystem services provided by ditch plants should focus on the control of the hydrological and sedimentological connectivity at the catchment scale. For example, the density of the ditch network could be modified or the spatial distribution of vegetative filter strips used for sediment trapping could be optimized. In addition, the vegetative filter strips could constitute new seed bank sources for species that are affected by the distance to natural lands and roads. PMID:29360857
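A minimal sketch of the GLM part of this workflow with scikit-learn, scoring a logistic regression by k-fold cross-validated AUC; the predictors and data are placeholders, not the study's dataset:

```python
# Sketch: logistic regression (GLM) on geomorphological predictors scored by
# 5-fold cross-validated AUC. Data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((200, 3))          # e.g. slope, upstream area, distance to natural land
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.3, 200) > 0.9).astype(int)  # presence/absence

glm = LogisticRegression(max_iter=1000)
auc = cross_val_score(glm, X, y, cv=5, scoring="roc_auc")
print("mean AUC:", auc.mean())
```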
SSC Tenant Meeting: NASA Near Earth Network (NEN) Overview
NASA Technical Reports Server (NTRS)
Carter, David; Larsen, David; Baldwin, Philip; Wilson, Cristy; Ruley, LaMont
2018-01-01
The Near Earth Network (NEN) consists of globally distributed tracking stations strategically located throughout the world that provide Telemetry, Tracking, and Commanding (TTC) services to a variety of orbital and suborbital flight missions, including Low Earth Orbit (LEO), Geosynchronous Earth Orbit (GEO), highly elliptical, and lunar orbits. Swedish Space Corporation (SSC), one of the NEN's commercial service providers, has provided the NEN with TTC services from its stations in Alaska, Hawaii, Chile, and Sweden. The presentation will give an overview of the NEN and its support from SSC.
NASA Astrophysics Data System (ADS)
Sana, Ajaz; Saddawi, Samir; Moghaddassi, Jalil; Hussain, Shahab; Zaidi, Syed R.
2010-01-01
In this research paper we propose a novel Passive Optical Network (PON) based Mobile Worldwide Interoperability for Microwave Access (WiMAX) access network architecture to provide high-capacity, high-performance multimedia services to mobile WiMAX users. Passive Optical Networks (PONs) do not require powered equipment; hence they cost less and need less network management. WiMAX technology emerges as a viable candidate for the last-mile solution. In conventional WiMAX access networks, the base stations and Multiple Input Multiple Output (MIMO) antennas are connected by point-to-point lines. In theory, the maximum WiMAX bandwidth is 70 Mbit/s over 31 miles. In reality, WiMAX can only provide one or the other: when operating over the maximum range, the bit error rate increases and a lower bit rate must be used, while lowering the range allows a device to operate at higher bit rates. Our focus in this research paper is to increase both range and bit rate by utilizing distributed clusters of MIMO antennas connected to WiMAX base stations with PON-based topologies. A novel quality of service (QoS) algorithm is also proposed to provide admission control and scheduling to serve classified traffic. The proposed architecture presents a flexible and scalable system design with different performance requirements and complexity.
Flood impacts on a water distribution network
NASA Astrophysics Data System (ADS)
Arrighi, Chiara; Tarani, Fabio; Vicario, Enrico; Castelli, Fabio
2017-12-01
Floods cause damage to people, buildings and infrastructures. Water distribution systems are particularly exposed, since water treatment plants are often located next to the rivers. Failure of the system leads to both direct losses, for instance damage to equipment and pipework contamination, and indirect impact, since it may lead to service disruption and thus affect populations far from the event through the functional dependencies of the network. In this work, we present an analysis of direct and indirect damages on a drinking water supply system, considering the hazard of riverine flooding as well as the exposure and vulnerability of active system components. The method is based on interweaving, through a semi-automated GIS procedure, a flood model and an EPANET-based pipe network model with a pressure-driven demand approach, which is needed when modelling water distribution networks in highly off-design conditions. Impact measures are defined and estimated so as to quantify service outage and potential pipe contamination. The method is applied to the water supply system of the city of Florence, Italy, serving approximately 380 000 inhabitants. The evaluation of flood impact on the water distribution network is carried out for different events with assigned recurrence intervals. Vulnerable elements exposed to the flood are identified and analysed in order to estimate their residual functionality and to simulate failure scenarios. Results show that in the worst failure scenario (no residual functionality of the lifting station and a 500-year flood), 420 km of pipework would require disinfection with an estimated cost of EUR 21 million, which is about 0.5 % of the direct flood losses evaluated for buildings and contents. Moreover, if flood impacts on the water distribution network are considered, the population affected by the flood is up to 3 times the population directly flooded.
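A sketch of how such a failure scenario could be set up with the open-source WNTR package, which wraps EPANET-style models and supports a pressure-driven demand formulation; the input file name, failed-pump ID, and pressure threshold are assumptions, not the Florence model:

```python
# Sketch of a failure scenario on a water distribution model with WNTR using
# a pressure-driven demand (PDD) formulation, as needed in off-design conditions.
import wntr

wn = wntr.network.WaterNetworkModel("network.inp")   # hypothetical EPANET input file
wn.options.hydraulic.demand_model = "PDD"            # pressure-driven demand

wn.remove_link("LiftingStationPump")                 # assumed ID of the flooded pump

sim = wntr.sim.WNTRSimulator(wn)
results = sim.run_sim()

pressure = results.node["pressure"]
served = (pressure > 20).mean()                      # share of time steps above 20 m head, per node
print("fraction of node-times with adequate pressure:", float(served.mean()))
```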
A Logically Centralized Approach for Control and Management of Large Computer Networks
ERIC Educational Resources Information Center
Iqbal, Hammad A.
2012-01-01
Management of large enterprise and Internet service provider networks is a complex, error-prone, and costly challenge. It is widely accepted that the key contributors to this complexity are the bundling of control and data forwarding in traditional routers and the use of fully distributed protocols for network control. To address these…
Motion/imagery secure cloud enterprise architecture analysis
NASA Astrophysics Data System (ADS)
DeLay, John L.
2012-06-01
Cloud computing with storage virtualization and new service-oriented architectures brings a new perspective to the aspect of a distributed motion imagery and persistent surveillance enterprise. Our existing research is focused mainly on content management, distributed analytics, and WAN distributed cloud networking performance issues of cloud-based technologies. The potential of leveraging cloud-based technologies for hosting motion imagery, imagery, and analytics workflows for DOD and security applications is relatively unexplored. This paper will examine technologies for managing, storing, processing and disseminating motion imagery and imagery within a distributed network environment. Finally, we propose areas for future research in the area of distributed cloud content management enterprises.
KeyWare: an open wireless distributed computing environment
NASA Astrophysics Data System (ADS)
Shpantzer, Isaac; Schoenfeld, Larry; Grindahl, Merv; Kelman, Vladimir
1995-12-01
Deployment of distributed applications in the wireless domain lacks the equivalent tools, methodologies, architectures, and network management that exist in LAN-based applications. A wireless distributed computing environment (KeyWareTM) based on intelligent agents within a multiple-client multiple-server scheme was developed to resolve this problem. KeyWare renders concurrent application services to wireline and wireless client nodes encapsulated in multiple paradigms such as message delivery, database access, e-mail, and file transfer. These services and paradigms are optimized to cope with temporal and spatial radio coverage, high latency, limited throughput and transmission costs. A unified network management paradigm for both wireless and wireline facilitates seamless extension of LAN-based management tools to include wireless nodes. A set of object-oriented tools and methodologies enables direct asynchronous invocation of agent-based services supplemented by tool-sets matched to supported KeyWare paradigms. The open architecture embodiment of KeyWare enables a wide selection of client node computing platforms, operating systems, transport protocols, radio modems and infrastructures while maintaining application portability.
NASA Astrophysics Data System (ADS)
Baru, C.; Lin, K.
2009-04-01
The Geosciences Network project (www.geongrid.org) has been developing cyberinfrastructure for data sharing in the Earth Science community based on a service-oriented architecture. The project defines a standard "software stack", which includes a standardized set of software modules and corresponding service interfaces. The system employs Grid certificates for distributed user authentication. The GEON Portal provides online access to these services via a set of portlets. This service-oriented approach has enabled the GEON network to easily expand to new sites and deploy the same infrastructure in new projects. To facilitate interoperation with other distributed geoinformatics environments, service standards are being defined and implemented for catalog services and federated search across distributed catalogs. The need arises because there may be multiple metadata catalogs in a distributed system, for example, for each institution, agency, geographic region, and/or country. Ideally, a geoinformatics user should be able to search across all such catalogs by making a single search request. In this paper, we describe our implementation for such a search capability across federated metadata catalogs in the GEON service-oriented architecture. The GEON catalog can be searched using spatial, temporal, and other metadata-based search criteria. The search can be invoked as a Web service and, thus, can be imbedded in any software application. The need for federated catalogs in GEON arises because, (i) GEON collaborators at the University of Hyderabad, India have deployed their own catalog, as part of the iGEON-India effort, to register information about local resources for broader access across the network, (ii) GEON collaborators in the GEO Grid (Global Earth Observations Grid) project at AIST, Japan have implemented a catalog for their ASTER data products, and (iii) we have recently deployed a search service to access all data products from the EarthScope project in the US (http://es-portal.geongrid.org), which are distributed across data archives at IRIS in Seattle, Washington, UNAVCO in Boulder, Colorado, and at the ICDP archives in GFZ, Potsdam, Germany. This service implements a "virtual" catalog--the actual/"physical" catalogs and data are stored at each of the remote locations. A federated search across all these catalogs would enable GEON users to discover data across all of these environments with a single search request. Our objective is to implement this search service via the OGC Catalog Services for the Web (CS-W) standard by providing appropriate CSW "wrappers" for each metadata catalog, as necessary. This paper will discuss technical issues in designing and deploying such a multi-catalog search service in GEON and describe an initial prototype of the federated search capability.
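A minimal sketch of a federated fan-out search, issuing one request to several catalog endpoints concurrently and merging the results; the endpoint URLs and the JSON response format are assumptions for illustration, not the actual GEON service interfaces:

```python
# Sketch of a federated search: one request fanned out to several catalog
# endpoints, results merged. URLs and response format are placeholders.
import concurrent.futures
import requests

CATALOGS = [
    "http://example.org/geon/csw",          # hypothetical GEON catalog
    "http://example.org/igeon-india/csw",
    "http://example.org/geogrid-aster/csw",
]

def search(endpoint, keyword):
    resp = requests.get(endpoint, params={"q": keyword, "maxrecords": 10}, timeout=30)
    resp.raise_for_status()
    return resp.json()                      # assumed to be a JSON list of records

def federated_search(keyword):
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(search, url, keyword) for url in CATALOGS]
        records = []
        for fut in concurrent.futures.as_completed(futures):
            records.extend(fut.result())
    return records

print(len(federated_search("seismic")))
```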
NASA Astrophysics Data System (ADS)
Eschenbächer, Jens; Seifert, Marcus; Thoben, Klaus-Dieter
Distributed innovation processes are considered a new option for handling both the complexity and the speed with which new products and services need to be prepared. Indeed, most research on innovation processes has focused on multinational companies with an intra-organisational perspective. The phenomena of innovation processes in networks - with an inter-organisational perspective - have been almost neglected. Collaborative networks present a perfect playground for such distributed innovation processes, and the authors highlight Virtual Organisations in particular because of their dynamic behaviour. Research activities supporting distributed innovation processes in Virtual Organisations are rather new, so little knowledge about the management of such processes is available. The presentation of the collaborative network relationship analysis addresses this gap. It will be shown that qualitative planning of collaboration intensities can support real business cases by providing knowledge and planning data.
Multiobjective assessment of distributed energy storage location in electricity networks
NASA Astrophysics Data System (ADS)
Ribeiro Gonçalves, José António; Neves, Luís Pires; Martins, António Gomes
2017-07-01
This paper presents a methodology to provide information to a decision maker on the associated impacts, both of economic and technical nature, of possible management schemes of storage units for choosing the best location of distributed storage devices, with a multiobjective optimisation approach based on genetic algorithms. The methodology was applied to a case study, a known distribution network model in which the installation of distributed storage units was tested, using lithium-ion batteries. The obtained results show a significant influence of the charging/discharging profile of batteries on the choice of their best location, as well as the relevance that these choices may have for the different network management objectives, for example, for reducing network energy losses or minimising voltage deviations. Results also show a difficult cost-effectiveness of an energy-only service, with the tested systems, both due to capital cost and due to the efficiency of conversion.
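A toy sketch of the multiobjective assessment idea: candidate storage locations are evaluated on two objectives (energy losses and voltage deviation) and the Pareto-optimal set is kept; the objective values here are random stand-ins for a real power-flow evaluation:

```python
# Toy sketch: keep the Pareto-optimal candidate storage locations over two
# objectives. Objective values are placeholders for a power-flow study.
import random

random.seed(0)
candidates = [f"bus_{i}" for i in range(20)]
# (energy losses in MWh, max voltage deviation in p.u.) -- placeholder evaluation
scores = {bus: (random.uniform(1.0, 2.0), random.uniform(0.01, 0.10)) for bus in candidates}

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

pareto = [bus for bus in candidates
          if not any(dominates(scores[other], scores[bus]) for other in candidates)]

for bus in pareto:
    print(bus, scores[bus])
```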
Gopalakrishnan, Ravichandran C; Karunakaran, Manivannan
2014-01-01
Nowadays, quality of service (QoS) is very popular in various research areas like distributed systems, real-time multimedia applications, and networking. The requirements of these systems are to satisfy reliability, uptime, security constraints, and throughput, as well as application-specific requirements. Real-time multimedia applications are commonly distributed over the network and must meet various time constraints across networks without creating any intervention over control flows. In particular, video compressors produce variable-bit-rate streams that mismatch the constant-bit-rate channels typically provided by classical real-time protocols, severely reducing the efficiency of network utilization. Thus, it is necessary to enlarge the communication bandwidth to transfer the compressed multimedia streams using the Flexible Time-Triggered Enhanced Switched Ethernet (FTT-ESE) protocol. FTT-ESE provides automation to calculate the compression level and change the bandwidth of the stream. This paper focuses on low-latency multimedia transmission over Ethernet with dynamic quality-of-service (QoS) management. The proposed framework deals with dynamic QoS for multimedia transmission over Ethernet with the FTT-ESE protocol. This paper also presents distinct QoS metrics based both on image quality and on network features. Experiments with recorded and live video streams show the advantages of the proposed framework. To validate the solution we have designed and implemented a simulator based on Matlab/Simulink, which evaluates different network architectures using Simulink blocks.
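A minimal sketch of the adaptation step this implies: choose the weakest compression level whose expected bit rate still fits the bandwidth currently granted to the stream; the rate table and level numbering are illustrative assumptions:

```python
# Sketch of the QoS adaptation step: pick the lowest compression level whose
# expected bit rate fits the granted bandwidth. Rates/levels are illustrative.
def choose_compression(granted_kbps, rate_by_level):
    # rate_by_level: compression level -> expected bit rate (kbit/s);
    # lower levels mean better image quality and higher rate.
    for level in sorted(rate_by_level):
        if rate_by_level[level] <= granted_kbps:
            return level
    return max(rate_by_level)  # fall back to the strongest compression

rates = {1: 4000, 2: 2500, 3: 1500, 4: 800}   # assumed encoder profile
print(choose_compression(granted_kbps=2000, rate_by_level=rates))  # -> 3
```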
DOE Office of Scientific and Technical Information (OSTI.GOV)
Postigo Marcos, Fernando E.; Domingo, Carlos Mateo; San Roman, Tomas Gomez
Under the increasing penetration of distributed energy resources and new smart network technologies, distribution utilities face new challenges and opportunities to ensure reliable operations, manage service quality, and reduce operational and investment costs. Simultaneously, the research community is developing algorithms for advanced controls and distribution automation that can help to address some of these challenges. However, there is a shortage of realistic test systems that are publicly available for development, testing, and evaluation of such new algorithms. Concerns around revealing critical infrastructure details and customer privacy have severely limited the number of actual networks published and available for testing. In recent decades, several distribution test feeders and US-featured representative networks have been published, but the scale, complexity, and control data vary widely. This paper presents a first-of-a-kind structured literature review of published distribution test networks with a special emphasis on classifying their main characteristics and identifying the types of studies for which they have been used. As a result, this both aids researchers in choosing suitable test networks for their needs and highlights the opportunities and directions for further test system development. In particular, we highlight the need for building large-scale synthetic networks to overcome the identified drawbacks of current distribution test feeders.
An evolving model of online bipartite networks
NASA Astrophysics Data System (ADS)
Zhang, Chu-Xu; Zhang, Zi-Ke; Liu, Chuang
2013-12-01
Understanding the structure and evolution of online bipartite networks is a significant task since they play a crucial role in various e-commerce services nowadays. Recently, various attempts have been made to propose different models, resulting in either power-law or exponential degree distributions. However, many empirical results show that the user degree distribution actually follows a shifted power-law distribution, the so-called Mandelbrot's law, which cannot be fully described by previous models. In this paper, we propose an evolving model considering two different user behaviors: random and preferential attachment. Extensive empirical results on two real bipartite networks, Delicious and CiteULike, show that the theoretical model can well characterize the structure of real networks for both user and object degree distributions. In addition, we introduce a structural parameter p to demonstrate that the hybrid user behavior leads to the shifted power-law degree distribution, and the region of the power-law tail increases with the increment of p. The proposed model might shed some light on understanding the underlying laws governing the structure of real online bipartite networks.
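A minimal simulation of the hybrid attachment rule described above: with probability p a new link attaches to a user preferentially by degree, otherwise to a user chosen uniformly at random; the parameters are illustrative, not fitted to Delicious or CiteULike:

```python
# Minimal simulation of the hybrid random/preferential attachment rule.
import random
from collections import Counter

def simulate(num_users=2000, num_links=20000, p=0.7, seed=0):
    random.seed(seed)
    degree = Counter({u: 1 for u in range(num_users)})  # start every user with one link
    endpoints = list(range(num_users))                  # one entry per existing link end
    for _ in range(num_links):
        if random.random() < p:
            user = random.choice(endpoints)             # preferential attachment
        else:
            user = random.randrange(num_users)          # uniform random attachment
        degree[user] += 1
        endpoints.append(user)
    return degree

dist = Counter(simulate().values())                     # degree -> number of users
print(sorted(dist.items())[:10])
```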
Quality of service policy control in virtual private networks
NASA Astrophysics Data System (ADS)
Yu, Yiqing; Wang, Hongbin; Zhou, Zhi; Zhou, Dongru
2004-04-01
This paper studies the QoS of VPNs in an environment where the public network prices connection-oriented services based on source, destination, and grade of service, and advertises these prices to its VPN customers (users). Since different QoS technologies produce different QoS, there are correspondingly different traffic classification rules and priority rules. The Internet service provider (ISP) may need to build complex mechanisms separately for each node. In order to reduce the burden of network configuration, we need to design policy control technologies. We consider mainly the directory server, policy server, policy manager, and policy enforcers. The policy decision point (PDP) decides its control actions according to policy rules. In the network, the policy enforcement point (PEP) determines the network unit it controls. For IntServ and DiffServ, we adopt different policy control methods as follows: (1) In IntServ, traffic uses the Resource Reservation Protocol (RSVP) to guarantee network resources. (2) In DiffServ, the policy server controls the DiffServ code points and per-hop behavior (PHB), and its PDP distributes information to each network node. The policy server performs the following functions: information searching, decision making, decision delivering, and auto-configuration. In order to prove the effectiveness of QoS policy control, we carry out corresponding simulations.
Composition and structure of a large online social network in The Netherlands.
Corten, Rense
2012-01-01
Limitations in data collection have long been an obstacle in research on friendship networks. Most earlier studies use either a sample of ego-networks, or complete network data on a relatively small group (e.g., a single organization). The rise of online social networking services such as Friendster and Facebook, however, provides researchers with opportunities to study friendship networks on a much larger scale. This study uses complete network data from Hyves, a popular online social networking service in The Netherlands, comprising over eight million members and over 400 million online friendship relations. In the first study of its kind for The Netherlands, I examine the structure of this network in terms of the degree distribution, characteristic path length, clustering, and degree assortativity. Results indicate that this network shares features of other large complex networks, but also deviates in other respects. In addition, a comparison with other online social networks shows that these networks show remarkable similarities.
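A sketch of the structural measures used in the study, computed here with networkx on a small synthetic graph (the actual Hyves data are not public):

```python
# Sketch: degree distribution summary, clustering, assortativity and
# characteristic path length on a synthetic graph standing in for real data.
import networkx as nx

g = nx.barabasi_albert_graph(n=1000, m=3, seed=42)

degrees = [d for _, d in g.degree()]
print("mean degree:", sum(degrees) / len(degrees))
print("clustering:", nx.average_clustering(g))
print("assortativity:", nx.degree_assortativity_coefficient(g))
print("characteristic path length:", nx.average_shortest_path_length(g))
```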
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-14
... advanced communications services are offered for sale or otherwise distributed in interstate commerce by... individuals with disabilities are able to fully utilize advanced communications services (ACS) and equipment and networks used for such services. Specifically, we seek comment on ways to implement the CVAA's...
An object-based storage model for distributed remote sensing images
NASA Astrophysics Data System (ADS)
Yu, Zhanwu; Li, Zhongmin; Zheng, Sheng
2006-10-01
It is very difficult to design an integrated storage solution for distributed remote sensing images that offers high-performance network storage services and secure data sharing across platforms using current network storage models such as direct attached storage, network attached storage, and storage area networks. Object-based storage, a new generation of network storage technology that has emerged recently, separates the data path, the control path, and the management path, which solves the metadata bottleneck problem that exists in traditional storage models, and has the characteristics of parallel data access, data sharing across platforms, intelligence of storage devices, and security of data access. We use object-based storage in the storage management of remote sensing images to construct an object-based storage model for distributed remote sensing images. In the storage model, remote sensing images are organized as remote sensing objects stored in the object-based storage devices. According to the storage model, we present the architecture of a distributed remote sensing image application system based on object-based storage, and give some test results comparing the write performance of the traditional network storage model and the object-based storage model.
Grids, virtualization, and clouds at Fermilab
Timm, S.; Chadwick, K.; Garzoglio, G.; ...
2014-06-11
Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud 6 GPCF) and core computing (Virtual Services). Lastly, this work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.
A Fuzzy analytical hierarchy process approach in irrigation networks maintenance
NASA Astrophysics Data System (ADS)
Riza Permana, Angga; Rintis Hadiani, Rr.; Syafi'i
2017-11-01
Ponorogo Regency has 440 irrigation areas with a total area of 17,950 ha. The limited budget and lack of maintenance have caused decreased functioning of the irrigation networks. The aim of this study is to develop an appropriate system to determine the weighted indices of the rank prioritization criteria for irrigation network maintenance using a fuzzy-based methodology. The criteria used are the physical condition of the irrigation networks, the area of service, the estimated maintenance cost, and the efficiency of irrigation water distribution. Twenty-six experts in the field of water resources in the Dinas Pekerjaan Umum were asked to fill out the questionnaire, and the results are used as a benchmark to determine the rank of irrigation network maintenance priority. The results demonstrate that the physical condition of irrigation networks criterion (W1) = 0.279 has the greatest impact on the assessment process. The area of service (W2) = 0.270, efficiency of irrigation water distribution (W4) = 0.249, and estimated maintenance cost (W3) = 0.202 criteria rank next in effectiveness, respectively. The proposed methodology deals with uncertainty and vague data using triangular fuzzy numbers, and, moreover, it provides a comprehensive decision-making technique to assess maintenance priority on irrigation networks.
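A sketch of deriving crisp criterion weights from triangular fuzzy pairwise comparisons with the geometric-mean (Buckley) method, one common fuzzy-AHP variant; the 4x4 comparison matrix below is invented for illustration, not the experts' questionnaire data:

```python
# Sketch of fuzzy AHP weights via the geometric-mean (Buckley) method.
# Criteria: physical condition, service area, maintenance cost, distribution efficiency.
import numpy as np

# Triangular fuzzy numbers (l, m, u); reciprocal matrix invented for illustration.
M = np.array([
    [[1, 1, 1],       [1, 2, 3],     [2, 3, 4],   [1, 2, 3]],
    [[1/3, 1/2, 1],   [1, 1, 1],     [1, 2, 3],   [1, 1, 2]],
    [[1/4, 1/3, 1/2], [1/3, 1/2, 1], [1, 1, 1],   [1/3, 1/2, 1]],
    [[1/3, 1/2, 1],   [1/2, 1, 1],   [1, 2, 3],   [1, 1, 1]],
], dtype=float)

geo_mean = M.prod(axis=1) ** (1 / M.shape[0])   # fuzzy geometric mean per row
total = geo_mean.sum(axis=0)                    # (sum_l, sum_m, sum_u)
fuzzy_w = geo_mean / total[::-1]                # multiply by (1/u, 1/m, 1/l)
crisp = fuzzy_w.mean(axis=1)                    # defuzzify (centre of the triangle)
weights = crisp / crisp.sum()
print(np.round(weights, 3))
```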
NASA Technical Reports Server (NTRS)
Shyy, Dong-Jye; Redman, Wayne
1993-01-01
For the next-generation packet switched communications satellite system with onboard processing and spot-beam operation, a reliable onboard fast packet switch is essential to route packets from different uplink beams to different downlink beams. The rapid emergence of point-to-point services such as video distribution, and the large demand for video conferencing, distributed data processing, and network management make the multicast function essential to a fast packet switch (FPS). The satellite's inherent broadcast features give the satellite network an advantage over the terrestrial network in providing multicast services. This report evaluates alternate multicast FPS architectures for onboard baseband switching applications and selects a candidate for subsequent breadboard development. Architecture evaluation and selection will be based on the study performed in phase 1, 'Onboard B-ISDN Fast Packet Switching Architectures', and other switch architectures which have become commercially available as large scale integration (LSI) devices.
Towards Integrating Distributed Energy Resources and Storage Devices in Smart Grid.
Xu, Guobin; Yu, Wei; Griffith, David; Golmie, Nada; Moulema, Paul
2017-02-01
Internet of Things (IoT) provides a generic infrastructure for different applications to integrate information communication techniques with physical components to achieve automatic data collection, transmission, exchange, and computation. The smart grid, one of the typical applications supported by IoT and a re-engineering and modernization of the traditional power grid, aims to provide reliable, secure, and efficient energy transmission and distribution to consumers. How to effectively integrate distributed (renewable) energy resources and storage devices to satisfy the energy service requirements of users, while minimizing the power generation and transmission cost, remains a highly pressing challenge in the smart grid. To address this challenge and assess the effectiveness of integrating distributed energy resources and storage devices, in this paper we develop a theoretical framework to model and analyze three types of power grid systems: the power grid with only bulk energy generators, the power grid with distributed energy resources, and the power grid with both distributed energy resources and storage devices. Based on the metrics of the power cumulative cost and the service reliability to users, we formally model and analyze the impact of integrating distributed energy resources and storage devices in the power grid. We also use the concept of network calculus, which has been traditionally used for carrying out traffic engineering in computer networks, to derive the bounds of both power supply and user demand to achieve a high service reliability to users. Through an extensive performance evaluation, our data show that integrating distributed energy resources conjointly with energy storage devices can reduce generation costs, smooth the curve of bulk power generation over time, reduce bulk power generation and power distribution losses, and provide a sustainable service reliability to users in the power grid.
Communications and media services
NASA Technical Reports Server (NTRS)
Mcculla, James W.; Kukowski, James F.
1990-01-01
NASA's internal and external communication methods are reviewed. NASA information services for the media, for the public, and for employees are discussed. Consideration is given to electronic information distribution, the NASA TV-audio system, the NASA broadcast news service, astronaut appearances, technology and information exhibits, speaker services, and NASA news reports for internal communications. Also, the NASA worldwide electronic mail network is described and trends for future NASA communications and media services are outlined.
Serving by local consensus in the public service location game.
Sun, Yi-Fan; Zhou, Hai-Jun
2016-09-02
We discuss the issue of distributed and cooperative decision-making in a network game of public service location. Each node of the network can decide to host a certain public service, incurring a construction cost and serving all the neighboring nodes and itself. A pure consumer node has to pay a tax, and the collected tax is evenly distributed to all the hosting nodes to remedy their construction costs. If all nodes make individual best-response decisions, the system gets trapped in an inefficient situation of high tax level. Here we introduce a decentralized local-consensus selection mechanism which requires nodes to recommend their neighbors of highest local impact as candidate servers, and a node may become a server only if all its non-server neighbors give their assent. We demonstrate that although this mechanism involves only information exchange among neighboring nodes, it leads to socially efficient solutions with tax level approaching the lowest possible value. Our results may help in understanding and improving collective problem-solving in various networked social and robotic systems.
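A sketch of one possible reading of this local-consensus rule on a networkx graph, approximating a node's "impact" by its degree; this is an illustrative interpretation, not the authors' simulation code:

```python
# Sketch of a local-consensus server selection: a node becomes a server only
# if it is the highest-impact candidate in its own neighbourhood and every
# non-server neighbour also recommends it. Impact is approximated by degree.
import networkx as nx

def local_consensus_servers(g, rounds=50):
    servers = set()
    for _ in range(rounds):
        changed = False
        for node in g.nodes():
            if node in servers:
                continue
            # each node recommends the highest-impact member of its closed neighbourhood
            recommended = max([node] + list(g.neighbors(node)), key=g.degree)
            if recommended != node:
                continue
            neighbours = [v for v in g.neighbors(node) if v not in servers]
            if all(max([v] + list(g.neighbors(v)), key=g.degree) == node for v in neighbours):
                servers.add(node)
                changed = True
        if not changed:
            break
    return servers

g = nx.erdos_renyi_graph(200, 0.03, seed=7)
servers = local_consensus_servers(g)
covered = set().union(*(set(g[s]) | {s} for s in servers)) if servers else set()
print(len(servers), "servers cover", len(covered), "of", g.number_of_nodes(), "nodes")
```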
Packet-aware transport for video distribution [Invited]
NASA Astrophysics Data System (ADS)
Aguirre-Torres, Luis; Rosenfeld, Gady; Bruckman, Leon; O'Connor, Mannix
2006-05-01
We describe a solution based on resilient packet rings (RPR) for the distribution of broadcast video and video-on-demand (VoD) content over a packet-aware transport network. The proposed solution is based on our experience in the design and deployment of nationwide Triple Play networks and relies on technologies such as RPR, multiprotocol label switching (MPLS), and virtual private LAN service (VPLS) to provide the most efficient solution in terms of utilization, scalability, and availability.
Authenticated IGMP for Controlling Access to Multicast Distribution Tree
NASA Astrophysics Data System (ADS)
Park, Chang-Seop; Kang, Hyun-Sun
A receiver access control scheme is proposed to protect the multicast distribution tree from DoS attack induced by unauthorized use of IGMP, by extending the security-related functionality of IGMP. Based on a specific network and business model adopted for commercial deployment of IP multicast applications, a key management scheme is also presented for bootstrapping the proposed access control as well as accounting and billing for CP (Content Provider), NSP (Network Service Provider), and group members.
Research on virtual network load balancing based on OpenFlow
NASA Astrophysics Data System (ADS)
Peng, Rong; Ding, Lei
2017-08-01
Networks based on OpenFlow technology separate the control module from the data forwarding module. Global deployment of a load-balancing strategy through the control plane's network view is fast and highly efficient. This paper proposes a Weighted Round-Robin Scheduling algorithm for virtual networks and a load-balancing plan for server load based on OpenFlow. The load of service nodes and the distribution algorithm for load-balancing tasks are taken into account.
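As a rough illustration of the scheduling idea mentioned above, here is a minimal smooth weighted round-robin selector in Python. The smooth-weighting variant and the example weights are assumptions made for illustration; the paper's exact algorithm and its OpenFlow integration are not reproduced here.

```python
class SmoothWeightedRoundRobin:
    """Illustrative smooth weighted round-robin server selector.

    Servers with larger weights are picked proportionally more often,
    and picks are interleaved rather than bursty.
    """
    def __init__(self, weights):
        # weights: dict mapping server id -> positive integer weight
        self.weights = dict(weights)
        self.current = {s: 0 for s in weights}

    def next_server(self):
        total = sum(self.weights.values())
        # Every server gains its weight, the winner loses the total weight.
        for s, w in self.weights.items():
            self.current[s] += w
        best = max(self.current, key=self.current.get)
        self.current[best] -= total
        return best

# Usage: three virtual servers with weights 5, 1, 1.
wrr = SmoothWeightedRoundRobin({"s1": 5, "s2": 1, "s3": 1})
print([wrr.next_server() for _ in range(7)])  # "s1" appears 5 times out of 7
```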
ERIC Educational Resources Information Center
Johnson, Bette; Swinton, Olivia
The purpose of this unit is to investigate a simple energy network and to make an analogy with similar mutually supporting networks in the natural and man-made worlds. The lessons in this unit develop the network idea around a simple electrical distribution system that we depend on and extend it into further consideration of electrical energy itself.…
New forms of data communication; Symposium, Munich, West Germany, July 1, 2, 1980, Proceedings
NASA Astrophysics Data System (ADS)
Seegmueller, G.
1981-11-01
An overview is provided of functions and realizations of value-added networks, and new developments in packet communications are considered. A description is presented of wide-band digital communication for the 80's, satellite-integrated communication networks, and the risks and benefits which the new communication services represent for a nation. Attention is given to the impact of the new communication services on data processing with conventional distribution, the effects of the new communication technologies on the structure and the flow of activities in an enterprise, and status and future planning regarding new telecommunication services in Europe.
On effectiveness of network sensor-based defense framework
NASA Astrophysics Data System (ADS)
Zhang, Difan; Zhang, Hanlin; Ge, Linqiang; Yu, Wei; Lu, Chao; Chen, Genshe; Pham, Khanh
2012-06-01
Cyber attacks are increasing in frequency, impact, and complexity, which demonstrates extensive network vulnerabilities with the potential for serious damage. Defending against cyber attacks calls for distributed collaborative monitoring, detection, and mitigation. To this end, we develop a network sensor-based defense framework with the aim of handling network security awareness, mitigation, and prediction. We implement the prototypical system and show its effectiveness in detecting known attacks, such as port-scanning and distributed denial-of-service (DDoS). Based on this framework, we also implement statistical-based and sequential testing-based detection techniques and compare their respective detection performance. Future defensive algorithms can be provisioned in our proposed framework for combating cyber attacks.
A heuristic method for consumable resource allocation in multi-class dynamic PERT networks
NASA Astrophysics Data System (ADS)
Yaghoubi, Saeed; Noori, Siamak; Mazdeh, Mohammad Mahdavi
2013-06-01
This investigation presents a heuristic method for the consumable resource allocation problem in multi-class dynamic Project Evaluation and Review Technique (PERT) networks, where new projects from different classes (types) arrive at the system according to independent Poisson processes with different arrival rates. Each activity of any project is performed at a dedicated service station located at a node of the network, with an exponentially distributed duration according to its class. Each project arrives at the first service station and continues its routing according to the precedence network of its class. Such a system can be represented as a queuing network in which the queue discipline is first come, first served. On the basis of the presented method, the multi-class system is decomposed into several single-class dynamic PERT networks, where each class is considered separately as a minisystem. In modeling the single-class dynamic PERT network, we use a Markov process and a multi-objective model investigated by Azaron and Tavakkoli-Moghaddam in 2007. Then, after obtaining the resources allocated to service stations in every minisystem, the final resources allocated to activities are calculated by the proposed method.
Information Systems Should Be Both Useful and Used: The Benetton Experience.
ERIC Educational Resources Information Center
Zuccaro, Bruno
1990-01-01
Describes the information systems strategy and network development of the Benetton clothing business. Applications in the areas of manufacturing, scheduling, centralized distribution, and centralized cash flow are discussed; the GEIS managed network service is described; and internal and external electronic data interchange (EDI) is explained.…
Clock Synchronization for Multihop Wireless Sensor Networks
ERIC Educational Resources Information Center
Solis Robles, Roberto
2009-01-01
In wireless sensor networks, more so generally than in other types of distributed systems, clock synchronization is crucial since by having this service available, several applications such as media access protocols, object tracking, or data fusion, would improve their performance. In this dissertation, we propose a set of algorithms to achieve…
NASA Technical Reports Server (NTRS)
Stevens, Grady H.
1992-01-01
The Data Distribution Satellite (DDS), operating in conjunction with the planned space network, the National Research and Education Network and its commercial derivatives, would play a key role in networking the emerging supercomputing facilities, national archives, academic, industrial, and government institutions. Centrally located over the United States in geostationary orbit, DDS would carry sophisticated on-board switching and make use of advanced antennas to provide an array of special services. Institutions needing continuous high data rate service would be networked together by use of a microwave switching matrix and electronically steered hopping beams. Simultaneously, DDS would use other beams and on-board processing to interconnect other institutions with lesser, low rate, intermittent needs. Dedicated links to White Sands and other facilities would enable direct access to space payloads and sensor data. Intersatellite links to a second generation ATDRS, called Advanced Space Data Acquisition and Communications System (ASDACS), would eliminate one satellite hop and enhance controllability of experimental payloads by reducing path delay. Similarly, direct access would be available to the supercomputing facilities and national data archives. Economies with DDS would be derived from its ability to switch high rate facilities amongst users as needed. At the same time, having a CONUS view, DDS would interconnect with any institution regardless of how remote. Whether one needed high rate service or low rate service would be immaterial. With the capability to assign resources on demand, DDS would need to carry only a portion of the resources that would be needed if dedicated facilities were used. Efficiently switching resources to users as needed, DDS would become a very feasible spacecraft, even though it would tie together the space network, the terrestrial network, remote sites, thousands of small users, and those few who need very large data links intermittently.
Expeditionary Oblong Mezzanine
2016-03-01
...providing infrastructure as a service (IaaS) and software as a service (SaaS) cloud computing technologies. IaaS is a way of providing computing services...such as servers, storage, and network equipment services (Mell & Grance, 2009). SaaS is a means of providing software and applications as an on
NASA Astrophysics Data System (ADS)
Okamoto, Satoru; Sato, Takehiro; Yamanaka, Naoaki
2017-01-01
In this paper, flexible and highly reliable metro and access integrated networks with network virtualization and software defined networking technologies will be presented. Logical optical line terminal (L-OLT) technologies and active optical distribution networks (ODNs) are the key to introducing flexibility and high reliability into the metro and access integrated networks. In the Elastic Lambda Aggregation Network (EλAN) project, which was started in 2012, the concept of a programmable optical line terminal (P-OLT) has been proposed. A role of the P-OLT is to provide multiple network services that have different protocols and quality-of-service requirements with a single OLT box. Accommodated services include Internet access, mobile front-haul/back-haul, data-center access, and leased lines. L-OLTs are configured within the P-OLT box to support the functions required for each network service. Multiple P-OLTs and programmable optical network units (P-ONUs) are connected by the active ODN. Optical access paths with flexible capacity are set up on the ODN to provide network services from L-OLTs to logical ONUs (L-ONUs). The L-OLT to L-ONU path on the active ODN provides a logical connection. Therefore, introducing virtualization technologies becomes possible. One example is moving an L-OLT from one P-OLT to another P-OLT like a virtual machine. This movement is called L-OLT migration. L-OLT migration provides flexible and reliable network functions such as energy saving by aggregating L-OLTs into a limited number of P-OLTs, and network-wide optical access path restoration. Other L-OLT virtualization technologies and experimental results will also be discussed in the paper.
Dynamic Evolution Model Based on Social Network Services
NASA Astrophysics Data System (ADS)
Xiong, Xi; Gou, Zhi-Jian; Zhang, Shi-Bin; Zhao, Wen
2013-11-01
Based on the analysis of evolutionary characteristics of public opinion in social networking services (SNS), in this paper we propose a dynamic evolution model in which opinions are coupled with topology. This model shows the clustering phenomenon of opinions in dynamic network evolution. The simulation results show that the model can fit the data from a social network site. The dynamic evolution of networks accelerates opinion separation and aggregation. The scale and the number of clusters are influenced by the confidence limit and the rewiring probability. Dynamic changes of the topology reduce the number of isolated nodes, while the increased confidence limit allows nodes to communicate more fully. The two effects make the distribution of opinion more neutral. The dynamic evolution of networks generates central clusters with high connectivity and high betweenness, which makes it difficult to control public opinions in SNS.
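The following Python sketch illustrates the general class of model the abstract describes: bounded-confidence opinion updates coupled with topology rewiring. The update rule, the confidence limit eps, and the rewiring probability p_rewire are generic illustrative choices, not the paper's exact equations.

```python
import random

def evolve_opinions(n=100, steps=5000, eps=0.3, p_rewire=0.1, seed=1):
    """Toy bounded-confidence opinion model with opinion-coupled rewiring.

    eps      : confidence limit; only neighbors closer than eps interact.
    p_rewire : probability of rewiring a link whose endpoints disagree.
    """
    rng = random.Random(seed)
    opinion = [rng.random() for _ in range(n)]
    # Start from a sparse random graph stored as neighbor sets.
    neighbors = [set() for _ in range(n)]
    for i in range(n):
        for _ in range(3):
            j = rng.randrange(n)
            if j != i:
                neighbors[i].add(j)
                neighbors[j].add(i)

    for _ in range(steps):
        i = rng.randrange(n)
        if not neighbors[i]:
            continue
        j = rng.choice(sorted(neighbors[i]))
        if abs(opinion[i] - opinion[j]) < eps:
            # Within the confidence limit: opinions move toward each other.
            mid = (opinion[i] + opinion[j]) / 2
            opinion[i], opinion[j] = mid, mid
        elif rng.random() < p_rewire:
            # A disagreeing link may be rewired toward a like-minded node.
            neighbors[i].discard(j)
            neighbors[j].discard(i)
            k = min(range(n), key=lambda m: abs(opinion[i] - opinion[m]) if m != i else 2.0)
            neighbors[i].add(k)
            neighbors[k].add(i)
    return opinion, neighbors

opinions, graph = evolve_opinions()
print(round(min(opinions), 2), round(max(opinions), 2))
```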
Peer-to-Peer Science Data Environment
NASA Astrophysics Data System (ADS)
Byrnes, J. B.; Holland, M. P.
2004-12-01
The goal of P2PSDE is to provide a convenient and extensible Peer-to-Peer (P2P) network architecture that allows distributed science-data services, seamlessly incorporating collaborative value-added services with search-oriented access to remote science data. P2PSDE features the real-time discovery of data-serving peers (plus peer-groups and peer-group services), in addition to the searching for and transferring of science data. These features are implemented using "Project JXTA", the first and only standardized set of open, generalized P2P protocols that allow arbitrary network devices to communicate and collaborate as peers. The JXTA protocols standardize the manner in which peers discover each other, self-organize into peer groups, advertise and discover network services, and securely communicate with and monitor each other, even across network firewalls. The key benefits include: potential for dramatic improvements in science-data dissemination; real-time-discoverable, potentially redundant (reliable) science-data services; openness/extensibility; decentralized use of small, inexpensive, readily-available desktop machines; and inherent security, with the ability to create variable levels of security by group.
The research and application of Ethernet over RPR technology
NASA Astrophysics Data System (ADS)
Feng, Xiancheng; Yun, Xiang
2008-11-01
With intensifying service competition among carriers and rising customer expectations for service experience, MAN technology is being pushed to develop further. While core-layer and distribution-layer technologies are mature, various reliability technologies have been proposed for the MAN access layer. EoRPR is one of the reliability technologies for MAN access network service protection. This paper elaborates the many advantages of Ethernet over RPR technology by analyzing its basic principle, address learning, and key technologies. EoRPR offers faster protection switching, plug and play, stronger QoS capability, convenient service deployment, fair bandwidth sharing, and so on. The paper also proposes Ethernet over RPR solutions for MAN, NGN, and enterprise private networks. Thus, among the many MAN access network technologies, EoRPR combines higher reliability and manageability with the high effectiveness and low cost of Ethernet. It is not only suitable for enterprise interconnection, BTV, and NGN access services, but can also meet carriers' requirements to reduce CAPEX and OPEX and increase return on investment.
Internal services simulation control in 220/110kV power transformer station Mintia
NASA Astrophysics Data System (ADS)
Ciulica, D.; Rob, R.
2018-01-01
The main objectives in developing the electricity transmission and distribution network infrastructure are satisfying the electric energy demand, ensuring the continuity of supply to customers, and minimizing electricity losses in the transmission and distribution networks of public interest. This paper presents simulations of the operation of the 400/230 V AC internal services system in the 220/110 kV power transformer station Mintia. For the Visual Basic simulations, the following premises are taken into consideration. All the AC consumers of the 220/110 kV power transformer station Mintia are supplied by three 400/230 V internal-services transformers that can serve as mutual reserves. If one transformer is damaged, the others are able to assume the entire consumption through automatic switching of reserves. The simulation program studies three variants in which the continuity of supply to customers is ensured. All the operating situations are also analyzed in detail through simulation.
Service Migration from Cloud to Multi-tier Fog Nodes for Multimedia Dissemination with QoE Support
Camargo, João; Rochol, Juergen; Gerla, Mario
2018-01-01
A wide range of multimedia services is expected to be offered for mobile users via various wireless access networks. Even the integration of Cloud Computing in such networks does not support an adequate Quality of Experience (QoE) in areas with high demands for multimedia contents. Fog computing has been conceptualized to facilitate the deployment of new services that cloud computing cannot provide, particularly those demanding QoE guarantees. These services are provided using fog nodes located at the network edge, which are capable of virtualizing their functions/applications. Service migration from the cloud to fog nodes can be actuated by request patterns and timing issues. To the best of our knowledge, existing works on fog computing focus on architecture and fog node deployment issues. In this article, we describe the operational impacts and benefits associated with service migration from the cloud to multi-tier fog computing for video distribution with QoE support. Besides that, we perform the evaluation of such service migration of video services. Finally, we present potential research challenges and trends. PMID:29364172
Service Migration from Cloud to Multi-tier Fog Nodes for Multimedia Dissemination with QoE Support.
Rosário, Denis; Schimuneck, Matias; Camargo, João; Nobre, Jéferson; Both, Cristiano; Rochol, Juergen; Gerla, Mario
2018-01-24
A wide range of multimedia services is expected to be offered for mobile users via various wireless access networks. Even the integration of Cloud Computing in such networks does not support an adequate Quality of Experience (QoE) in areas with high demands for multimedia contents. Fog computing has been conceptualized to facilitate the deployment of new services that cloud computing cannot provide, particularly those demanding QoE guarantees. These services are provided using fog nodes located at the network edge, which are capable of virtualizing their functions/applications. Service migration from the cloud to fog nodes can be actuated by request patterns and timing issues. To the best of our knowledge, existing works on fog computing focus on architecture and fog node deployment issues. In this article, we describe the operational impacts and benefits associated with service migration from the cloud to multi-tier fog computing for video distribution with QoE support. Besides that, we perform the evaluation of such service migration of video services. Finally, we present potential research challenges and trends.
Harris, Scott H.; Johnson, Joel A.; Neiswanger, Jeffery R.; Twitchell, Kevin E.
2004-03-09
The present invention includes systems configured to distribute a telephone call, communication systems, communication methods and methods of routing a telephone call to a customer service representative. In one embodiment of the invention, a system configured to distribute a telephone call within a network includes a distributor adapted to connect with a telephone system, the distributor being configured to connect a telephone call using the telephone system and output the telephone call and associated data of the telephone call; and a plurality of customer service representative terminals connected with the distributor and a selected customer service representative terminal being configured to receive the telephone call and the associated data, the distributor and the selected customer service representative terminal being configured to synchronize application of the telephone call and associated data from the distributor to the selected customer service representative terminal.
The United States Postal Service (USPS) in cooperation with EPA's National Risk Management Research Laboratory (NRMRL) is engaged in an effort to integrate Waste prevention and recycling activities into the waste management programs at Postal facilities. In this report, the findi...
Interarrival times of message propagation on directed networks.
Mihaljev, Tamara; de Arcangelis, Lucilla; Herrmann, Hans J
2011-08-01
One of the challenges in fighting cybercrime is to understand the dynamics of message propagation on botnets, networks of infected computers used to send viruses, unsolicited commercial emails (SPAM) or denial of service attacks. We map this problem to the propagation of multiple random walkers on directed networks and we evaluate the interarrival time distribution between successive walkers arriving at a target. We show that the temporal organization of this process, which models information propagation on unstructured peer to peer networks, has the same features as SPAM reaching a single user. We study the behavior of the message interarrival time distribution on three different network topologies using two different rules for sending messages. In all networks the propagation is not a pure Poisson process. It shows universal features on Poissonian networks and a more complex behavior on scale free networks. Results open the possibility to indirectly learn about the process of sending messages on networks with unknown topologies, by studying interarrival times at any node of the network.
Interarrival times of message propagation on directed networks
NASA Astrophysics Data System (ADS)
Mihaljev, Tamara; de Arcangelis, Lucilla; Herrmann, Hans J.
2011-08-01
One of the challenges in fighting cybercrime is to understand the dynamics of message propagation on botnets, networks of infected computers used to send viruses, unsolicited commercial emails (SPAM) or denial of service attacks. We map this problem to the propagation of multiple random walkers on directed networks and we evaluate the interarrival time distribution between successive walkers arriving at a target. We show that the temporal organization of this process, which models information propagation on unstructured peer to peer networks, has the same features as SPAM reaching a single user. We study the behavior of the message interarrival time distribution on three different network topologies using two different rules for sending messages. In all networks the propagation is not a pure Poisson process. It shows universal features on Poissonian networks and a more complex behavior on scale free networks. Results open the possibility to indirectly learn about the process of sending messages on networks with unknown topologies, by studying interarrival times at any node of the network.
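A minimal Python sketch of the picture studied above: independent random walkers hop along the out-links of a directed graph, and interarrival times are recorded at a target node. The random graph generator and the parameters are illustrative assumptions, not the topologies or message-sending rules analyzed in the paper.

```python
import random

def interarrival_times(out_links, target, n_walkers=50, steps=2000, seed=7):
    """Simulate independent random walkers on a directed graph and return
    the interarrival times of successive walker arrivals at `target`.

    out_links: dict node -> list of nodes reachable by one directed hop.
    """
    rng = random.Random(seed)
    nodes = list(out_links)
    walkers = [rng.choice(nodes) for _ in range(n_walkers)]
    arrivals = []
    for t in range(steps):
        for w in range(n_walkers):
            succ = out_links[walkers[w]]
            if succ:
                walkers[w] = rng.choice(succ)
                if walkers[w] == target:
                    arrivals.append(t)
    # Differences between successive arrival times at the target.
    return [b - a for a, b in zip(arrivals, arrivals[1:])]

# Usage: a small random directed graph; node 0 plays the role of the target.
rng = random.Random(0)
g = {i: [rng.randrange(20) for _ in range(3)] for i in range(20)}
taus = interarrival_times(g, target=0)
print(len(taus), sum(taus) / max(len(taus), 1))  # count and mean interarrival time
```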
Wireless intelligent network: infrastructure before services?
NASA Astrophysics Data System (ADS)
Chu, Narisa N.
1996-01-01
The Wireless Intelligent Network (WIN) intends to take advantage of the Advanced Intelligent Network (AIN) concepts and products developed from wireline communications. However, progress of the AIN deployment has been slow due to the many barriers that exist in the traditional wireline carriers' deployment procedures and infrastructure. The success of AIN has not been truly demonstrated. The AIN objectives and directions are applicable to the wireless industry although the plans and implementations could be significantly different. This paper points out WIN characteristics in architecture, flexibility, deployment, and value to customers. In order to succeed, the technology-driven AIN concept has to be reinforced by market-driven WIN services. An infrastructure suitable for the WIN will contain elements that are foreign to the wireline network. The deployment process is expected to be seeded with revenue-generating services. Standardization will be achieved by simplifying and incorporating the IS-41C, AIN, and Intelligent Network CS-1 recommendations. Integration of the existing and future systems poses the biggest challenge of all. Service creation has to be complemented with a service deployment process, which heavily impacts the carriers' infrastructure. WIN deployment will likely start from an Intelligent Peripheral, a Service Control Point and migrate to a Service Node when sufficient triggers are implemented in the mobile switch for distributed call control. The struggle to move forward will not be based on technology, but rather on the impact to existing infrastructure.
NASA Technical Reports Server (NTRS)
Wang, Ray (Inventor)
2009-01-01
A method and system for spatial data manipulation input and distribution via an adaptive wireless transceiver. The method and system include a wireless transceiver for automatically and adaptively controlling wireless transmissions using a Waveform-DNA method. The wireless transceiver can operate simultaneously over both the short and long distances. The wireless transceiver is automatically adaptive and wireless devices can send and receive wireless digital and analog data from various sources rapidly in real-time via available networks and network services.
Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju
2015-01-01
Indoor location-based services (iLBS) are extremely dynamic and changeable, and include numerous resources and mobile devices. In particular, the network infrastructure requires support for high scalability in the indoor environment, and various resource lookups are requested concurrently and frequently from several locations based on the dynamic network environment. A traditional map-based centralized approach for iLBSs has several disadvantages: it requires global knowledge to maintain a complete geographic indoor map; the central server is a single point of failure; it can also cause low scalability and traffic congestion; and it is hard to adapt to a change of service area in real time. This paper proposes a self-organizing and fully distributed platform for iLBSs. The proposed self-organizing distributed platform provides a dynamic reconfiguration of locality accuracy and service coverage by expanding and contracting dynamically. In order to verify the suggested platform, scalability performance according to the number of inserted or deleted nodes composing the dynamic infrastructure was evaluated through a simulation similar to the real environment. PMID:26016908
Lai, Chin-Feng; Chen, Min; Pan, Jeng-Shyang; Youn, Chan-Hyun; Chao, Han-Chieh
2014-03-01
As cloud computing and wireless body sensor network technologies gradually mature, ubiquitous healthcare services can prevent accidents instantly and effectively, as well as provide relevant information to reduce related processing time and cost. This study proposes a co-processing intermediary framework that integrates cloud and wireless body sensor networks, which is mainly applied to fall detection and 3-D motion reconstruction. The main focuses of this study include distributed computing and resource allocation for processing sensing data over the computing architecture, network conditions, and performance evaluation. Through this framework, the transmission and computing time of sensing data are reduced to enhance overall performance for fall event detection and 3-D motion reconstruction services.
Identifying PHM market and network opportunities.
Grube, Mark E; Krishnaswamy, Anand; Poziemski, John; York, Robert W
2015-11-01
Two key processes for healthcare organizations seeking to assume a financially sustainable role in population health management (PHM), after laying the groundwork for the effort, are to identify potential PHM market opportunities and determine the scope of the PHM network. Key variables organizations should consider with respect to market opportunities include the patient population, the overall insurance/employer market, and available types of insurance products. Regarding the network's scope, organizations should consider both traditional strategic criteria for a viable network and at least five additional criteria: network essentiality and PHM care continuum, network adequacy, service distribution right-sizing, network growth strategy, and organizational agility.
Recommendations for a service framework to access astronomical archives
NASA Technical Reports Server (NTRS)
Travisano, J. J.; Pollizzi, J.
1992-01-01
There are a large number of astronomical archives and catalogs on-line for network access, with many different user interfaces and features. Some systems are moving towards distributed access, supplying users with client software for their home sites which connects to servers at the archive site. Many of the issues involved in defining a standard framework of services that archive/catalog suppliers can use to achieve a basic level of interoperability are described. Such a framework would simplify the development of client and server programs to access the wide variety of astronomical archive systems. The primary services that are supplied by current systems include: catalog browsing, dataset retrieval, name resolution, and data analysis. The following issues (and probably more) need to be considered in establishing a standard set of client/server interfaces and protocols: Archive Access - dataset retrieval, delivery, file formats, data browsing, analysis, etc.; Catalog Access - database management systems, query languages, data formats, synchronous/asynchronous mode of operation, etc.; Interoperability - transaction/message protocols, distributed processing mechanisms (DCE, ONC/SunRPC, etc), networking protocols, etc.; Security - user registration, authorization/authentication mechanisms, etc.; Service Directory - service registration, lookup, port/task mapping, parameters, etc.; Software - public vs proprietary, client/server software, standard interfaces to client/server functions, software distribution, operating system portability, data portability, etc. Several archive/catalog groups, notably the Astrophysics Data System (ADS), are already working in many of these areas. In the process of developing StarView, which is the user interface to the Space Telescope Data Archive and Distribution Service (ST-DADS), these issues and the work of others were analyzed. A framework of standard interfaces for accessing services on any archive system which would benefit archive user and supplier alike is proposed.
Architectural and Functional Design of an Environmental Information Network.
1984-04-30
study was accomplished under contract F08635-83-C-013..., Task 83-2, for Headquarters Air Force Engineering and Services Center, Engineering and Services...election Procedure; General Architecture of Distributed Data Management System; Schema Architecture; MULTIBASE Component Architecture
An autonomous recovery mechanism against optical distribution network failures in EPON
NASA Astrophysics Data System (ADS)
Liem, Andrew Tanny; Hwang, I.-Shyan; Nikoukar, AliAkbar
2014-10-01
Ethernet Passive Optical Network (EPON) is chosen for servicing diverse applications with higher bandwidth and Quality-of-Service (QoS), starting from Fiber-To-The-Home (FTTH), FTTB (business/building) and FTTO (office). Typically, a single OLT can provide services to both residential and business customers on the same Optical Line Terminal (OLT) port; thus, any failures in the system will cause a great loss for both network operators and customers. Network operators are looking for low-cost and high service availability mechanisms that focus on the failures that occur within the drop fiber section because the majority of faults are in this particular section. Therefore, in this paper, we propose an autonomous recovery mechanism that provides protection and recovery against Drop Distribution Fiber (DDF) link faults or transceiver failure at the ONU(s) in EPON systems. In the proposed mechanism, the ONU can automatically detect any signal anomalies in the physical layer or transceiver failure, switching the working line to the protection line and sending the critical event alarm to OLT via its neighbor. Each ONU has a protection line, which is connected to the nearest neighbor ONU, and therefore, when failure occurs, the ONU can still transmit and receive data via the neighbor ONU. Lastly, the Fault Dynamic Bandwidth Allocation for recovery mechanism is presented. Simulation results show that our proposed autonomous recovery mechanism is able to maintain the overall QoS performance in terms of mean packet delay, system throughput, packet loss and EF jitter.
Service Oriented Architecture for Wireless Sensor Networks in Agriculture
NASA Astrophysics Data System (ADS)
Sawant, S. A.; Adinarayana, J.; Durbha, S. S.; Tripathy, A. K.; Sudharsan, D.
2012-08-01
Rapid advances in Wireless Sensor Networks (WSNs) for agricultural applications have provided a platform for better decision making in crop planning and management, particularly in precision agriculture. Due to the ever-increasing spread of WSNs there is a need for standards, i.e. a set of specifications and encodings that bring multiple sensor networks onto a common platform. Distributed sensor systems, when brought together, can facilitate better decision making in the agricultural domain. The Open Geospatial Consortium (OGC), through Sensor Web Enablement (SWE), provides guidelines for semantic and syntactic standardization of sensor networks. In this work two distributed sensing systems (Agrisens and FieldServer) were selected to implement OGC SWE standards through a Service Oriented Architecture (SOA) approach. Online interoperable data processing was developed through SWE components such as Sensor Model Language (SensorML) and Sensor Observation Service (SOS). An integrated web client was developed to visualize the sensor observations and measurements, enabling the retrieval of crop water resource availability and requirements in a systematic manner for both sensing devices. Further, the client also has the ability to operate in an interoperable manner with any other OGC-standardized WSN system. The study of WSN systems has shown that there is a need to augment the operations/processing capabilities of SOS in order to better understand collected sensor data and implement modelling services. Also, given the very low-cost availability of WSN systems in the future, it will be possible to implement the OGC-standardized SWE framework for agricultural applications with open source software tools.
An air-quality forecasting (AQF) system based on the National Weather Service (NWS) National Centers for Environmental Prediction's (NCEP's) Eta model and the U.S. EPA's Community Multiscale Air Quality (CMAQ) Modeling System is used to simulate the distributions of tropospheric ...
A Simple XML Producer-Consumer Protocol
NASA Technical Reports Server (NTRS)
Smith, Warren; Gunter, Dan; Quesnel, Darcy; Biegel, Bryan (Technical Monitor)
2001-01-01
There are many different projects from government, academia, and industry that provide services for delivering events in distributed environments. The problem with these event services is that they are not general enough to support all uses and they speak different protocols so that they cannot interoperate. We require such interoperability when we, for example, wish to analyze the performance of an application in a distributed environment. Such an analysis might require performance information from the application, computer systems, networks, and scientific instruments. In this work we propose and evaluate a standard XML-based protocol for the transmission of events in distributed systems. One recent trend in government and academic research is the development and deployment of computational grids. Computational grids are large-scale distributed systems that typically consist of high-performance compute, storage, and networking resources. Examples of such computational grids are the DOE Science Grid, the NASA Information Power Grid (IPG), and the NSF Partnerships for Advanced Computing Infrastructure (PACIs). The major effort to deploy these grids is in the area of developing the software services to allow users to execute applications on these large and diverse sets of resources. These services include security, execution of remote applications, managing remote data, access to information about resources and services, and so on. There are several toolkits for providing these services such as Globus, Legion, and Condor. As part of these efforts to develop computational grids, the Global Grid Forum is working to standardize the protocols and APIs used by various grid services. This standardization will allow interoperability between the client and server software of the toolkits that are providing the grid services. The goal of the Performance Working Group of the Grid Forum is to standardize protocols and representations related to the storage and distribution of performance data. These standard protocols and representations must support tasks such as profiling parallel applications, monitoring the status of computers and networks, and monitoring the performance of services provided by a computational grid. This paper describes a proposed protocol and data representation for the exchange of events in a distributed system. The protocol exchanges messages formatted in XML and it can be layered atop any low-level communication protocol such as TCP or UDP. Further, we describe Java and C++ implementations of this protocol and discuss their performance. The next section will provide some further background information. Section 3 describes the main communication patterns of our protocol. Section 4 describes how we represent events and related information using XML. Section 5 describes our protocol and Section 6 discusses the performance of two implementations of the protocol. Finally, an appendix provides the XML Schema definition of our protocol and event information.
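As a hedged illustration of the kind of XML-formatted event message and producer-side transmission the abstract discusses, the Python sketch below builds a small XML event and pushes it over TCP with a length prefix. The element names, the length-prefix framing, and the port number are illustrative assumptions, not the schema or protocol actually proposed in the paper.

```python
import socket
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def make_event_xml(source, event_type, value):
    """Build a small XML event message (illustrative structure only)."""
    event = ET.Element("event")
    ET.SubElement(event, "source").text = source
    ET.SubElement(event, "type").text = event_type
    ET.SubElement(event, "timestamp").text = datetime.now(timezone.utc).isoformat()
    ET.SubElement(event, "value").text = str(value)
    return ET.tostring(event, encoding="utf-8")

def send_event(host, port, payload):
    """Producer side: push one length-prefixed XML event over TCP."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(len(payload).to_bytes(4, "big") + payload)

msg = make_event_xml("node-42", "cpu.load", 0.73)
print(msg.decode("utf-8"))
# send_event("localhost", 9000, msg)  # requires a consumer listening on port 9000
```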
NASA Astrophysics Data System (ADS)
Ninsawat, Sarawut; Yamamoto, Hirokazu; Kamei, Akihide; Nakamura, Ryosuke; Tsuchida, Satoshi; Maeda, Takahisa
2010-05-01
With the availability of network-enabled sensing devices, the volume of information being collected by networked sensors has increased dramatically in recent years. Over 100 physical, chemical and biological properties can be sensed using in-situ or remote sensing technology. A collection of these sensor nodes forms a sensor network, which is easily deployable to provide a high degree of visibility into real-world physical processes as events unfold. The sensor observation network could allow the gathering of diverse types of data at greater spatial and temporal resolution through the use of wired or wireless network infrastructure; thus real-time or near-real-time data from sensor observation networks allow researchers and decision-makers to respond speedily to events. However, in the case of environmental monitoring, the capability to acquire in-situ data periodically is not sufficient on its own; the management and proper utilization of the data also need careful consideration. This requires the implementation of database and IT solutions that are robust, scalable and able to interoperate between different and distributed stakeholders to provide lucid, timely and accurate updates to researchers, planners and citizens. The GEO (Global Earth Observation) Grid is primarily aiming at providing an e-Science infrastructure for the earth science community. The GEO Grid is designed to integrate various kinds of data related to earth observation using grid technology, which is developed for sharing data, storage, and the computational power of high performance computing, and is accessible as a set of services. A comprehensive web-based system for integrating field sensor data and satellite images based on various open standards of the OGC (Open Geospatial Consortium) specifications has been developed. Web Processing Service (WPS), which is most likely the future direction of Web-GIS, performs the computation of spatial data from distributed data sources and returns the outcome in a standard format. The interoperability capabilities and Service Oriented Architecture (SOA) of web services allow sensor network measurements available from the Sensor Observation Service (SOS) and satellite remote sensing data from the Web Mapping Service (WMS) to be incorporated as distributed data sources for WPS. Various applications have been developed to demonstrate the efficacy of integrating heterogeneous data sources. One example is the validation of the MODIS aerosol products (MOD08_D3, the Level-3 MODIS Atmosphere Daily Global Product) by ground-based measurements using the sunphotometer (skyradiometer, Prede POM-02) installed at Phenological Eyes Network (PEN) sites in Japan. Furthermore, a web-based framework system for studying the relationship between the Vegetation Index calculated from MODIS satellite surface reflectance (MOD09GA, the Surface Reflectance Daily L2G Global 1km and 500m Product) and Gross Primary Production (GPP) field measurements at flux tower sites in Thailand and Japan has also been developed. The success of both applications will contribute to maximizing data utilization and improving the accuracy of information by validating MODIS satellite products against field measurements of high accuracy and temporal resolution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McHenry, Mark P.; Johnson, Jay; Hightower, Mike
The increasing pressure for network operators to meet distribution network power quality standards with increasing peak loads, renewable energy targets, and advances in automated distributed power electronics and communications is forcing policy-makers to understand new means to distribute costs and benefits within electricity markets. Discussions surrounding how distributed generation (DG) exhibits active voltage regulation and power factor/reactive power control and other power quality capabilities are complicated by uncertainties of baseline local distribution network power quality and to whom and how costs and benefits of improved electricity infrastructure will be allocated. DG providing ancillary services that dynamically respond to the network characteristics could lead to major network improvements. With proper market structures renewable energy systems could greatly improve power quality on distribution systems with nearly no additional cost to the grid operators. Renewable DG does have variability challenges, though this issue can be overcome with energy storage, forecasting, and advanced inverter functionality. This paper presents real data from a large-scale grid-connected PV array with large-scale storage and explores effective mitigation measures for PV system variability. As a result, we discuss useful inverter technical knowledge for policy-makers to mitigate ongoing inflation of electricity network tariff components by new DG interconnection requirements or electricity markets which value power quality and control.
McHenry, Mark P.; Johnson, Jay; Hightower, Mike
2016-01-01
The increasing pressure for network operators to meet distribution network power quality standards with increasing peak loads, renewable energy targets, and advances in automated distributed power electronics and communications is forcing policy-makers to understand new means to distribute costs and benefits within electricity markets. Discussions surrounding how distributed generation (DG) exhibits active voltage regulation and power factor/reactive power control and other power quality capabilities are complicated by uncertainties of baseline local distribution network power quality and to whom and how costs and benefits of improved electricity infrastructure will be allocated. DG providing ancillary services that dynamically respond to the network characteristics could lead to major network improvements. With proper market structures renewable energy systems could greatly improve power quality on distribution systems with nearly no additional cost to the grid operators. Renewable DG does have variability challenges, though this issue can be overcome with energy storage, forecasting, and advanced inverter functionality. This paper presents real data from a large-scale grid-connected PV array with large-scale storage and explores effective mitigation measures for PV system variability. As a result, we discuss useful inverter technical knowledge for policy-makers to mitigate ongoing inflation of electricity network tariff components by new DG interconnection requirements or electricity markets which value power quality and control.
Collaborative video caching scheme over OFDM-based long-reach passive optical networks
NASA Astrophysics Data System (ADS)
Li, Yan; Dai, Shifang; Chang, Xiangmao
2018-07-01
Long-reach passive optical networks (LR-PONs) are now considered a desirable access solution for cost-efficiently delivering broadband services by integrating the metro network with the access network, among which orthogonal frequency division multiplexing (OFDM)-based LR-PONs attract greater research interest due to their good robustness and high spectrum efficiency. In such attractive OFDM-based LR-PONs, however, it is still challenging to effectively provide video service, one of the most popular and profitable broadband services, to end users. Given that more video requesters (i.e., end users) far away from the optical line terminal (OLT) are served in OFDM-based LR-PONs, it is inefficient to use the traditional video delivery model, which relies on the OLT to transmit videos to requesters, because that model incurs not only larger video playback delay but also higher downstream bandwidth consumption. In this paper, we propose a novel video caching scheme that collaboratively caches videos on distributed optical network units (ONUs), which are closer to end users, so that ONUs can provide videos to requesters in a timely and cost-efficient manner over OFDM-based LR-PONs. We first construct an OFDM-based LR-PON architecture to enable cooperation among ONUs while caching videos. Given the limited storage capacity of each ONU, we then propose collaborative approaches to cache videos on ONUs with the aim of maximizing the local video hit ratio (LVHR), i.e., the proportion of video requests that can be directly satisfied by ONUs, under diverse resource requirements and request distributions of videos. Simulations are finally conducted to evaluate the efficiency of our proposed scheme.
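To illustrate the caching objective described above, here is a minimal greedy placement sketch in Python that assigns videos to ONU caches so as to raise the local video hit ratio under per-ONU storage limits. The popularity-per-size ranking and single-copy placement are illustrative assumptions, not the paper's collaborative algorithm.

```python
def place_videos(videos, onus):
    """Greedy collaborative placement of videos onto ONU caches.

    videos: list of (video_id, size, expected_requests)
    onus:   dict onu_id -> remaining storage capacity
    Returns a mapping onu_id -> list of cached video_ids; each video is
    cached on at most one ONU and served locally over the PON to neighbors.
    """
    placement = {onu: [] for onu in onus}
    capacity = dict(onus)
    # Favor videos that save the most upstream traffic per unit of storage.
    ranked = sorted(videos, key=lambda v: v[2] / v[1], reverse=True)
    for vid, size, _ in ranked:
        # Put each video on the ONU with the most remaining room that fits it.
        candidates = [o for o in capacity if capacity[o] >= size]
        if not candidates:
            continue
        best = max(candidates, key=lambda o: capacity[o])
        placement[best].append(vid)
        capacity[best] -= size
    return placement

# Usage: three ONUs with 10 GB each, a small catalog of (id, GB, daily requests).
catalog = [("v1", 4, 900), ("v2", 6, 500), ("v3", 3, 450), ("v4", 8, 300)]
print(place_videos(catalog, {"onu1": 10, "onu2": 10, "onu3": 10}))
```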
Intelligent distributed medical image management
NASA Astrophysics Data System (ADS)
Garcia, Hong-Mei C.; Yun, David Y.
1995-05-01
The rapid advancements in high performance global communication have accelerated cooperative image-based medical services to a new frontier. Traditional image-based medical services such as radiology and diagnostic consultation can now fully utilize multimedia technologies in order to provide novel services, including remote cooperative medical triage, distributed virtual simulation of operations, as well as cross-country collaborative medical research and training. Fast (efficient) and easy (flexible) retrieval of relevant images remains a critical requirement for the provision of remote medical services. This paper describes the database system requirements, identifies technological building blocks for meeting the requirements, and presents a system architecture for our target image database system, MISSION-DBS, which has been designed to fulfill the goals of Project MISSION (medical imaging support via satellite integrated optical network) -- an experimental high performance gigabit satellite communication network with access to remote supercomputing power, medical image databases, and 3D visualization capabilities in addition to medical expertise anywhere and anytime around the country. The MISSION-DBS design employs a synergistic fusion of techniques in distributed databases (DDB) and artificial intelligence (AI) for storing, migrating, accessing, and exploring images. The efficient storage and retrieval of voluminous image information is achieved by integrating DDB modeling and AI techniques for image processing while the flexible retrieval mechanisms are accomplished by combining attribute- based and content-based retrievals.
An Architecture for Integrated Regional Health Telematics Networks
2001-10-25
that enables informed citizens to have an impact on the healthcare system and to be more concerned and care for their own health. The current...resource, educational, integrated electronic health record (I-EHR), and added-value services [2]. These classes of telematic services are applica...cally distributed clinical information systems. 5) Finally, added-value services (e.g. image processing, information indexing, data pre-fetching
Tools to manage the enterprise-wide picture archiving and communications system environment.
Lannum, L M; Gumpf, S; Piraino, D
2001-06-01
The presentation will focus on the implementation and utilization of a central picture archiving and communications system (PACS) network-monitoring tool that allows for enterprise-wide operations management and support of the image distribution network. The MagicWatch (Siemens, Iselin, NJ) PACS/radiology information system (RIS) monitoring station from Siemens has allowed our organization to create a service support structure that has given us proactive control of our environment and has allowed us to meet the service level performance expectations of the users. The Radiology Help Desk has used the MagicWatch PACS monitoring station as an applications support tool that has allowed the group to monitor network activity and individual systems performance at each node. Fast and timely recognition of the effects of single events within the PACS/RIS environment has allowed the group to proactively recognize possible performance issues and resolve problems. The PACS/operations group performs network management control, image storage management, and software distribution management from a single, central point in the enterprise. The MagicWatch station allows for the complete automation of software distribution, installation, and configuration process across all the nodes in the system. The tool has allowed for the standardization of the workstations and provides a central configuration control for the establishment and maintenance of the system standards. This report will describe the PACS management and operation prior to the implementation of the MagicWatch PACS monitoring station and will highlight the operational benefits of a centralized network and system-monitoring tool.
Data Aggregation in Multi-Agent Systems in the Presence of Hybrid Faults
ERIC Educational Resources Information Center
Srinivasan, Satish Mahadevan
2010-01-01
Data Aggregation (DA) is a set of functions that provide components of a distributed system access to global information for purposes of network management and user services. With the diverse new capabilities that networks can provide, applicability of DA is growing. DA is useful in dealing with multi-value domain information and often requires…
Characterization of attacks on public telephone networks
NASA Astrophysics Data System (ADS)
Lorenz, Gary V.; Manes, Gavin W.; Hale, John C.; Marks, Donald; Davis, Kenneth; Shenoi, Sujeet
2001-02-01
The U.S. Public Telephone Network (PTN) is a massively connected distributed information systems, much like the Internet. PTN signaling, transmission and operations functions must be protected from physical and cyber attacks to ensure the reliable delivery of telecommunications services. The increasing convergence of PTNs with wireless communications systems, computer networks and the Internet itself poses serious threats to our nation's telecommunications infrastructure. Legacy technologies and advanced services encumber well-known and as of yet undiscovered vulnerabilities that render them susceptible to cyber attacks. This paper presents a taxonomy of cyber attacks on PTNs in converged environments that synthesizes exploits in computer and communications network domains. The taxonomy provides an opportunity for the systematic exploration of mitigative and preventive strategies, as well as for the identification and classification of emerging threats.
Body area network--a key infrastructure element for patient-centered telemedicine.
Norgall, Thomas; Schmidt, Robert; von der Grün, Thomas
2004-01-01
The Body Area Network (BAN) extends the range of existing wireless network technologies by an ultra-low range, ultra-low power network solution optimised for long-term or continuous healthcare applications. It enables wireless radio communication between several miniaturised, intelligent Body Sensor (or actor) Units (BSU) and a single Body Central Unit (BCU) worn on the human body. A separate wireless transmission link from the BCU to a network access point, using a different technology, provides for online access to BAN components via the usual network infrastructure. The BAN network protocol maintains dynamic ad-hoc network configuration scenarios and co-existence of multiple networks. BAN is expected to become a basic infrastructure element for electronic health services: by integrating patient-attached sensors, mobile actor units, and distributed information and data processing systems, the range of medical workflow can be extended to include applications like wireless multi-parameter patient monitoring and therapy support. Beyond clinical use and professional disease management environments, private personal health assistance scenarios (without financial reimbursement by health agencies/insurance companies) enable a wide range of applications and services in future pervasive computing and networking environments.
2010-01-01
service) High assurance software; distributed network-based battle management; high-performance computing supporting uniform and nonuniform memory...VNIR, MWIR, and LWIR high-resolution systems; wideband SAR systems; RF and laser data links; high-speed, high-power photodetector characterization...Antimonide (InSb) imaging system; long-wave infrared (LWIR) quantum well IR photodetector (QWIP) imaging system; Research and Development Services
Mission Services Evolution Center Message Bus
NASA Technical Reports Server (NTRS)
Mayorga, Arturo; Bristow, John O.; Butschky, Mike
2011-01-01
The Goddard Mission Services Evolution Center (GMSEC) Message Bus is a robust, lightweight, fault-tolerant middleware implementation that supports all messaging capabilities of the GMSEC API. This architecture is a distributed software system that routes messages based on message subject names and knowledge of the locations in the network of the interested software components.
Digital Talking Books: Planning for the Future.
ERIC Educational Resources Information Center
Cookson, John; Cylke, Frank Kurt; Dixon, Judith; Fistick, Robert E.; Fitzpatrick, Vicki; Kormann, Wells B.; Moodie, Michael M.; Redmond, Linda; Thuronyi, George
This report describes the plans of the National Library Service for the Blind and Physically Handicapped (NLS) to convert their talking books service to a digitally based audio system. The NLS program selects and produces full-length books and magazines in braille and on recorded disc and cassettes and distributes them to a cooperating network of…
NASA Astrophysics Data System (ADS)
Puche, William S.; Sierra, Javier E.; Moreno, Gustavo A.
2014-08-01
The convergence of new technologies in the digital world has caused devices with internet connectivity, such as televisions, smartphones, tablets, Blu-ray players, and game consoles, among others, to proliferate. Therefore, major research centers are working to improve network performance and mitigate the bottleneck phenomenon in capacity and transmission rates for information and data. This includes the implementation of the HbbTV (Hybrid Broadcast Broadband TV) standard and OTT (Over the Top) platforms capable of distributing video, audio, TV, and other Internet services via devices connected directly to the cloud. Therefore, a model based on high-capacity optical networks is proposed to improve the transmission capacity required by content distribution networks (CDNs) for online TV.
Distributed intelligent monitoring and reporting facilities
NASA Astrophysics Data System (ADS)
Pavlou, George; Mykoniatis, George; Sanchez-P, Jorge-A.
1996-06-01
Distributed intelligent monitoring and reporting facilities are of paramount importance in both service and network management as they provide the capability to monitor quality of service and utilization parameters and to notify of degradation so that corrective action can be taken. By intelligent, we refer to the capability of performing the monitoring tasks in a way that has the smallest possible impact on the managed network, facilitates the observation and summarization of information according to a number of criteria, and, in its most advanced form, permits the specification of these criteria dynamically to suit the particular policy at hand. In addition, intelligent monitoring facilities should minimize the design and implementation effort involved in such activities. The ISO/ITU Metric, Summarization and Performance management functions provide models that only partially satisfy the above requirements. This paper describes our extensions to the proposed models to support further capabilities, with the intention of eventually leading to fully dynamically defined monitoring policies. The concept of distributing intelligence is also discussed, including the consideration of security issues and the applicability of the model in ODP-based distributed processing environments.
TTCN-3 Based Conformance Testing of Mobile Broadcast Business Management System in 3G Networks
NASA Astrophysics Data System (ADS)
Wang, Zhiliang; Yin, Xia; Xiang, Yang; Zhu, Ruiping; Gao, Shirui; Wu, Xin; Liu, Shijian; Gao, Song; Zhou, Li; Li, Peng
Mobile broadcast service is one of the most important emerging services in 3G networks. To better operate and manage mobile broadcast services, a mobile broadcast business management system (MBBMS) should be designed and developed. Such a system, with its distributed nature, complicated XML data and security mechanism, faces many challenges in testing technology. In this paper, we study the conformance testing methodology of MBBMS, and design and implement an MBBMS protocol conformance testing tool based on TTCN-3, a standardized test description language that can be used in black-box testing of reactive and distributed systems. In this methodology and testing tool, we present a semi-automatic XML test data generation method for TTCN-3 test suites and use HMSC models to aid the design of test suites. In addition, we also propose an integrated testing method for the hierarchical MBBMS security architecture. This testing tool has been used in industrial-level testing.
An uncertainty-based distributed fault detection mechanism for wireless sensor networks.
Yang, Yang; Gao, Zhipeng; Zhou, Hang; Qiu, Xuesong
2014-04-25
Exchanging too many messages for fault detection will cause not only a degradation of the network quality of service, but also a huge burden on the limited energy of sensors. Therefore, we propose an uncertainty-based distributed fault detection mechanism that relies on the aided judgment of neighbors for wireless sensor networks. The algorithm considers the serious influence of sensing measurement loss and therefore uses Markov decision processes to fill in missing data. Most important of all, fault misjudgments caused by uncertainty conditions are the main drawback of traditional distributed fault detection mechanisms. We draw on evidence fusion rules based on information entropy theory and the degree of disagreement function to increase the accuracy of fault detection. Simulation results demonstrate that our algorithm can effectively reduce the communication energy overhead due to message exchanges and provide a higher detection accuracy ratio.
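A much-simplified sketch of the neighbor-aided idea follows; it omits the paper's Markov decision process for missing data and its entropy-based evidence fusion, and simply flags a node whose reading disagrees with most of its neighbors. The threshold and quorum values are illustrative assumptions.

```python
# Simplified sketch of neighbor-aided fault detection (not the paper's full
# algorithm). A node is flagged as likely faulty when its reading disagrees
# with a majority of its neighbors by more than a threshold.
def likely_faulty(own_reading, neighbor_readings, threshold=2.0, quorum=0.5):
    if not neighbor_readings:
        return False  # no evidence either way
    disagreements = sum(abs(own_reading - r) > threshold for r in neighbor_readings)
    return disagreements / len(neighbor_readings) > quorum

print(likely_faulty(35.0, [21.2, 20.8, 21.5, 20.9]))  # True: reading is an outlier
print(likely_faulty(21.1, [21.2, 20.8, 21.5, 20.9]))  # False
```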
Cross stratum resources protection in fog-computing-based radio over fiber networks for 5G services
NASA Astrophysics Data System (ADS)
Guo, Shaoyong; Shao, Sujie; Wang, Yao; Yang, Hui
2017-09-01
In order to meet the requirements of the internet of things (IoT) and 5G, the cloud radio access network is a paradigm that converges all base station computational resources into a cloud baseband unit (BBU) pool, while the distributed radio frequency signals are collected by remote radio heads (RRHs). A precondition for centralized processing in the BBU pool is an interconnection fronthaul network with high capacity and low delay. However, the interaction between RRH and BBU, and resource scheduling among BBUs in the cloud, have become more complex and frequent. A cloud radio over fiber network has already been proposed in our previous work. In order to overcome the complexity and latency, in this paper we first present a novel cross stratum resources protection (CSRP) architecture in fog-computing-based radio over fiber networks (F-RoFN) for 5G services. Additionally, a cross stratum protection (CSP) scheme considering network survivability is introduced in the proposed architecture. The CSRP with the CSP scheme can effectively pull remote processing resources locally to implement cooperative radio resource management, enhance responsiveness and resilience to dynamic end-to-end 5G service demands, and globally optimize optical network, wireless, and fog resources. The feasibility and efficiency of the proposed architecture with the CSP scheme are verified on our software defined networking testbed in terms of service latency, transmission success rate, resource occupation rate, and blocking probability.
A Wireless Sensor Network approach for distributed in-line chemical analysis of water.
Capella, J V; Bonastre, A; Ors, R; Peris, M
2010-03-15
In this work we propose the implementation of a distributed system based on a Wireless Sensor Network for the control of a chemical analysis system for fresh water. This implementation is presented by describing the nodes that form the distributed system, the communication system based on wireless networks, the control strategies, and so on. Nitrate, ammonium, and chloride are measured in-line using appropriate ion selective electrodes (ISEs), and the results obtained are compared with those provided by the corresponding reference methods. Recovery analyses with ISEs and standard methods, a study of interferences, and an evaluation of the major sensor features have also been carried out. The communication among the nodes that form the distributed system is implemented by means of proprietary wireless networks and secondary data transmission services (GSM or GPRS) provided by a mobile telephone operator. The information is processed, integrated and stored in a control center. These data can be retrieved through the Internet so as to monitor the real-time system status and its evolution. Copyright (c) 2009 Elsevier B.V. All rights reserved.
Vertical equity of healthcare in Taiwan: health services were distributed according to need
2013-01-01
Introduction: To test the hypothesis that distribution of healthcare services according to health need can be achieved under a rather open access system. Methods: The 2001 National Health Interview Survey of Taiwan and National Health Insurance claims data were linked in the study. Health need was defined by self-perceived health status. We used the concentration index to measure need-related inequality in healthcare utilization and expenditure. Results: People with greater health need received more healthcare services, indicating a pro-need character of healthcare distribution, conforming to the meaning of vertical equity. For outpatient services, subjects with the highest health need had a higher proportion of ever use in a year than those who had the least health need and consumed more outpatient visits and expenditures per person per year. Similar patterns were observed for emergency services and hospitalization. The concentration indices of utilization for outpatient services, emergency services, and hospitalization suggest that the distribution of utilization was related to health need, whereas preventive services were less related to need. Conclusions: The universal coverage plus healthcare networking system makes it possible for healthcare to be utilized according to need. Taiwan's experience can serve as a reference for health reform. PMID:23363855
Vertical equity of healthcare in Taiwan: health services were distributed according to need.
Wang, Shiow-Ing; Yaung, Chih-Liang
2013-01-31
To test the hypothesis that distribution of healthcare services according to health need can be achieved under a rather open access system. The 2001 National Health Interview Survey of Taiwan and National Health Insurance claims data were linked in the study. Health need was defined by self-perceived health status. We used the concentration index to measure need-related inequality in healthcare utilization and expenditure. People with greater health need received more healthcare services, indicating a pro-need character of healthcare distribution, conforming to the meaning of vertical equity. For outpatient services, subjects with the highest health need had a higher proportion of ever use in a year than those who had the least health need and consumed more outpatient visits and expenditures per person per year. Similar patterns were observed for emergency services and hospitalization. The concentration indices of utilization for outpatient services, emergency services, and hospitalization suggest that the distribution of utilization was related to health need, whereas preventive services were less related to need. The universal coverage plus healthcare networking system makes it possible for healthcare to be utilized according to need. Taiwan's experience can serve as a reference for health reform.
Output power distributions of mobile radio base stations based on network measurements
NASA Astrophysics Data System (ADS)
Colombi, D.; Thors, B.; Persson, T.; Wirén, N.; Larsson, L.-E.; Törnevik, C.
2013-04-01
In this work, output power distributions of mobile radio base stations have been analyzed for 2G and 3G telecommunication systems. The approach is based on measurements in selected networks using performance surveillance tools that are part of the network Operational Support System (OSS). For the 3G network considered, direct measurements of output power levels were possible, while for the 2G networks, output power levels were estimated from measurements of traffic volumes. Both voice and data services were included in the investigation. Measurements were conducted for large geographical areas, to ensure good overall statistics, as well as for smaller areas to investigate the impact of different environments. For high traffic hours, the 90th percentile of the averaged output power was found to be below 65% and 45% of the available output power for the 2G and 3G systems, respectively.
Complex Dynamics in Information Sharing Networks
NASA Astrophysics Data System (ADS)
Cronin, Bruce
This study examines the roll-out of an electronic knowledge base in a medium-sized professional services firm over a six-year period. The efficiency of such an implementation is a key business problem in IT systems of this type. Data from usage logs provide the basis for analysis of the dynamic evolution of social networks around the repository during this time. The adoption pattern follows an "s-curve" and usage exhibits something of a power law distribution, both attributable to network effects, and network position is associated with organisational performance on a number of indicators. But periodicity in usage is evident and the usage distribution displays an exponential cut-off. Further analysis provides some evidence of mathematical complexity in the periodicity. Some implications of complex patterns in social network data for research and management are discussed. The study provides a case study demonstrating the utility of the broad methodological approach.
The CARFAX road traffic information system
NASA Astrophysics Data System (ADS)
Sandell, R. S.
1984-02-01
A description of the development work and field trials which led to the completion of the dedicated traffic information service "CARFAX" is presented. The system employs a single medium frequency channel and involves a network of low-powered transmitters that operate in time division multiplex to provide traffic announcements. A description of the network distribution, equipment tests, results, and future system utilization is included.
GPS Data Analysis for Earth Orientation at the Jet Propulsion Laboratory
NASA Technical Reports Server (NTRS)
Zumberge, J.; Webb, F.; Lindqwister, U.; Lichten, S.; Jefferson, D.; Ibanez-Meier, R.; Heflin, M.; Freedman, A.; Blewitt, G.
1994-01-01
Beginning June 1992 and continuing indefinitely as part of our contribution to FLINN (Fiducial Laboratories for an International Natural Science Network), DOSE (NASA's Dynamics of the Solid Earth Program), and the IGS (International GPS Geodynamics Service), analysts at the Jet Propulsion Laboratory (JPL) have routinely been reducing data from a globally-distributed network of Rogue Global Positioning System (GPS) receivers.
An eConsent-based System Architecture Supporting Cooperation in Integrated Healthcare Networks.
Bergmann, Joachim; Bott, Oliver J; Hoffmann, Ina; Pretschner, Dietrich P
2005-01-01
The economic need for efficient healthcare leads to cooperative shared care networks. A virtual electronic health record is required, which integrates patient-related information but reflects the distributed infrastructure and restricts access only to those health professionals involved in the care process. Our work aims at the specification and development of a system architecture fulfilling these requirements, to be used in concrete regional pilot studies. Methodical analysis and specification have been performed in a healthcare network using the formal method and modelling tool MOSAIK-M. The complexity of the application field was reduced by focusing on the scenario of thyroid disease care, which still includes various interdisciplinary cooperations. The result is an architecture for a secure distributed electronic health record for integrated care networks, specified in terms of a MOSAIK-M-based system model. The architecture proposes business processes, application services, and a sophisticated security concept, providing a platform for distributed document-based, patient-centred, and secure cooperation. A corresponding system prototype has been developed for pilot studies, using advanced application server technologies. The architecture combines consolidated patient-centred document management with a decentralized system structure without the need for replication management. An eConsent-based approach assures that access to the distributed health record remains under the control of the patient. The proposed architecture replaces message-based communication approaches, because it implements a virtual health record providing complete and current information. Acceptance of the new communication services depends on compatibility with the clinical routine. Unique and cross-institutional identification of a patient is also a challenge, but will lose significance as common patient cards become established.
Differentially Private Distributed Sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fink, Glenn A.
The growth of the Internet of Things (IoT) creates the possibility of decentralized systems of sensing and actuation, potentially on a global scale. IoT devices connected to cloud networks can offer Sensing and Actuation as a Service (SAaaS), enabling networks of sensors to grow to a global scale. But extremely large sensor networks can violate privacy, especially in the case where IoT devices are mobile and connected directly to the behaviors of people. The thesis of this paper is that by adapting differential privacy (adding statistically appropriate noise to query results) to groups of geographically distributed sensors, privacy could be maintained without ever sending all values up to a central curator and without compromising the overall accuracy of the data collected. This paper outlines such a scheme and performs an analysis of differential privacy techniques adapted to edge computing in a simulated sensor network where ground truth is known. The positive and negative outcomes of employing differential privacy in distributed networks of devices are discussed and a brief research agenda is presented.
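The following minimal sketch illustrates the underlying mechanism under stated assumptions: each edge group perturbs its local aggregate with Laplace noise (generated here as the difference of two exponentials) before reporting, so the curator never sees raw readings. The function names and the epsilon value are illustrative, not taken from the paper.

```python
# Minimal sketch of the Laplace mechanism at the edge: each group of sensors
# adds calibrated noise to its local aggregate before reporting, so no raw
# values ever reach the central curator. Names and parameters are illustrative.
import random

def dp_local_sum(values, epsilon, sensitivity=1.0):
    """Return a differentially private sum of one edge group's readings."""
    scale = sensitivity / epsilon
    # The difference of two Exp(1/scale) variates is Laplace(0, scale) noise.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return sum(values) + noise

edge_groups = [[1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 1, 0]]
noisy_total = sum(dp_local_sum(g, epsilon=0.5) for g in edge_groups)
print(noisy_total)  # close to the true total (8) in expectation
```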
NASA Astrophysics Data System (ADS)
The subjects discussed are related to LSI/VLSI-based subscriber transmission and customer access for the Integrated Services Digital Network (ISDN), special applications of fiber optics, ISDN and competitive telecommunication services, technical preparations for the Geostationary-Satellite Orbit Conference, high-capacity statistical switching fabrics, networking and distributed systems software, adaptive arrays and cancelers, synchronization and tracking, speech processing, advances in communication terminals, full-color videotex, and a performance analysis of protocols. Advances in data communications are considered along with transmission network plans and progress, direct broadcast satellite systems, packet radio system aspects, radio (new and developing technologies and applications), the management of software quality, and Open Systems Interconnection (OSI) aspects of telematic services. Attention is given to personal computers and OSI, the role of software reliability measurement in information systems, and an active array antenna for the next-generation direct broadcast satellite.
Rehan, R; Knight, M A; Unger, A J A; Haas, C T
2013-12-15
This paper develops causal loop diagrams and a system dynamics model for the financially sustainable management of urban water distribution networks. The developed causal loop diagrams are a novel contribution in that they illustrate the unique characteristics and feedback loops of financially self-sustaining water distribution networks. The system dynamics model is a mathematical realization of the developed interactions among system variables over time and comprises three sectors, namely the watermains network, the consumer, and finance. This is the first known development of a system dynamics model for a water distribution network. The watermains network sector accounts for the unique characteristics of watermain pipes such as service life, deterioration progression, pipe breaks, and water leakage. The finance sector allows for cash reserving by the utility in addition to the pay-as-you-go and borrowing strategies. The consumer sector includes controls to model water fee growth as a function of service performance and a household's financial burden due to water fees. A series of policy levers is provided that allows the impact of various financing strategies to be evaluated in terms of financial sustainability and household affordability. The model also allows for examination of the impact of different management strategies on the water fee in terms of consistency and stability over time. The paper concludes with a discussion on how the developed system dynamics water model can be used by water utilities to achieve a variety of short- and long-term objectives and to establish realistic and defensible water utility policies. It also discusses how the model can be used by regulatory bodies, government agencies, the financial industry, and researchers. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
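A minimal stock-and-flow sketch of the kind of feedback the model captures (deterioration drives breaks, breaks drive fee growth, fees fund renewal) is given below; every coefficient and the simple linear relations are illustrative assumptions, not the paper's calibrated model.

```python
# Toy stock-and-flow sketch of the feedback loops described above. All
# parameter values are illustrative assumptions, not the calibrated model.
years = 50
network_condition = 1.0      # 1.0 = new, 0.0 = fully deteriorated
fund_balance = 0.0
water_fee = 1.0
for year in range(years):
    deterioration = 0.02 * network_condition
    breaks = 10 * (1.0 - network_condition)           # worse condition -> more breaks
    water_fee *= 1.0 + 0.01 * breaks / 10             # fee grows with poor performance
    revenue = 100 * water_fee
    renewal_spend = min(fund_balance + revenue, 120)  # capped annual renewal program
    fund_balance += revenue - renewal_spend
    network_condition = min(1.0, network_condition - deterioration
                            + 0.0001 * renewal_spend)
print(f"year {years}: condition={network_condition:.2f}, fee={water_fee:.2f}")
```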
PILOT: An intelligent distributed operations support system
NASA Technical Reports Server (NTRS)
Rasmussen, Arthur N.
1993-01-01
The Real-Time Data System (RTDS) project is exploring the application of advanced technologies to the real-time flight operations environment of the Mission Control Centers at NASA's Johnson Space Center. The system, based on a network of engineering workstations, provides services such as delivery of real-time telemetry data to flight control applications. To automate the operation of this complex distributed environment, a facility called PILOT (Process Integrity Level and Operation Tracker) is being developed. PILOT comprises a set of distributed agents cooperating with a rule-based expert system; together they monitor process operation and data flows throughout the RTDS network. The goal of PILOT is to provide unattended management and automated operation under user control.
NASA Technical Reports Server (NTRS)
Boulanger, Richard P., Jr.; Kwauk, Xian-Min; Stagnaro, Mike; Kliss, Mark (Technical Monitor)
1998-01-01
The BIO-Plex control system requires real-time, flexible, and reliable data delivery. There is no simple "off-the-shelf" solution. However, several commercial packages will be evaluated using a testbed at ARC for publish-and-subscribe and client-server communication architectures. A point-to-point communication architecture is not suitable for the real-time BIO-Plex control system. A client-server architecture provides more flexible data delivery; however, it does not provide direct communication among nodes on the network. A publish-and-subscribe implementation allows direct information exchange among nodes on the net, providing the best time-critical communication. In this work, Network Data Delivery Service (NDDS) from Real-Time Innovations, Inc. (RTI) will be used to implement the publish-and-subscribe architecture. It offers update guarantees and deadlines for real-time data delivery. BridgeVIEW, a data acquisition and control software package from National Instruments, will be tested for the client-server arrangement. A microwave incinerator located at ARC will be instrumented with a fieldbus network of control devices. BridgeVIEW will be used to implement an enterprise server. An enterprise network consisting of several nodes at ARC and a WAN connecting ARC and RISC will then be set up to evaluate the proposed control system architectures. Several network configurations will be evaluated for fault tolerance, quality of service, reliability and efficiency. Data acquired from these network evaluation tests will then be used to determine preliminary design criteria for the BIO-Plex distributed control system.
High-Throughput and Low-Latency Network Communication with NetIO
NASA Astrophysics Data System (ADS)
Schumacher, Jörn; Plessl, Christian; Vandelli, Wainer
2017-10-01
HPC network technologies like Infiniband, TrueScale or OmniPath provide low-latency and high-throughput communication between hosts, which makes them attractive options for data-acquisition systems in large-scale high-energy physics experiments. Like HPC networks, DAQ networks are local and include a well-specified number of systems. Unfortunately, traditional network communication APIs for HPC clusters like MPI or PGAS exclusively target the HPC community and are not well suited for DAQ applications. It is possible to build distributed DAQ applications using low-level system APIs like Infiniband Verbs, but it requires a non-negligible effort and expert knowledge. At the same time, message services like ZeroMQ have gained popularity in the HEP community. They make it possible to build distributed applications with a high-level approach and provide good performance. Unfortunately, their usage usually limits developers to TCP/IP-based networks. While it is possible to operate a TCP/IP stack on top of Infiniband and OmniPath, this approach may not be very efficient compared to a direct use of native APIs. NetIO is a simple, novel asynchronous message service that can operate on Ethernet, Infiniband and similar network fabrics. In this paper the design and implementation of NetIO is presented and described, and its use is evaluated in comparison to other approaches. NetIO supports different high-level programming models and typical workloads of HEP applications. The ATLAS FELIX project [1] successfully uses NetIO as its central communication platform. The architecture of NetIO is described in this paper, including the user-level API and the internal data-flow design. The paper includes a performance evaluation of NetIO including throughput and latency measurements. The performance is compared against the state-of-the-art ZeroMQ message service. Performance measurements are performed in a lab environment with Ethernet and FDR Infiniband networks.
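The kind of round-trip latency measurement reported for such message services can be sketched as follows; this is not NetIO code, and a local socket pair stands in for the real network fabric.

```python
# Not NetIO itself: a small sketch of round-trip latency measurement for a
# message exchange, using a local socket pair as a stand-in for the link.
import socket, time

a, b = socket.socketpair()          # in-process "link"
payload = b"x" * 1024               # 1 KiB message
samples = []
for _ in range(1000):
    t0 = time.perf_counter()
    a.sendall(payload)
    b.recv(len(payload))            # "server" side receives and echoes back
    b.sendall(payload)
    a.recv(len(payload))
    samples.append(time.perf_counter() - t0)
samples.sort()
print(f"median RTT: {samples[len(samples)//2]*1e6:.1f} us")
a.close(); b.close()
```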
NASA Astrophysics Data System (ADS)
Huang, Haibin; Guo, Bingli; Li, Xin; Yin, Shan; Zhou, Yu; Huang, Shanguo
2017-12-01
Virtualization of datacenter (DC) infrastructures enables infrastructure providers (InPs) to provide novel services like virtual networks (VNs). Furthermore, optical networks have been employed to connect the metro-scale geographically distributed DCs. The synergistic virtualization of the DC infrastructures and optical networks enables efficient VN service over inter-DC optical networks (inter-DCONs). However, the capacity of the standard single-mode fiber (SSMF) used is limited by its nonlinear characteristics. Thus, mode-division multiplexing (MDM) technology based on few-mode fibers (FMFs) could be employed to increase the capacity of optical networks. Modal crosstalk (XT) introduced by optical fibers and components deployed in MDM optical networks, however, impacts the performance of VN embedding (VNE) over inter-DCONs with FMFs. In this paper, we propose an XT-aware VNE mechanism over inter-DCONs with FMFs. The impact of XT is considered throughout the VNE procedures. The simulation results show that the proposed XT-aware VNE can achieve better blocking probability and spectrum utilization performance compared to conventional VNE mechanisms.
The impact of occupational health service network and reporting system in Taiwan.
Chu, Po-Ching; Fuh, Hwan-Ran; Luo, Jiin-Chyuan; Du, Chung-Li; Chuang, Hung-Yi; Guo, How-Ran; Liu, Chiu-Shong; Su, Chien-Tien; Tang, Feng-Cheng; Chen, Chun-Chieh; Yang, Hsiao-Yu; Guo, Yue Leon
2013-01-01
Underreporting of occupational disease cases has been a long-standing problem in Taiwan, which hinders progress in occupational health and safety. To address this problem, the government has founded the Network of Occupational Diseases and Injuries Service (NODIS) for occupational disease and injury services and established a new Internet-based reporting system. The aims of this study are to analyze the possible influence of the NODIS, comprised of the Center for Occupational Disease and Injury Services and their local network hospitals, on compensable occupational diseases and to describe the distribution of occupational diseases across occupations and industries from 2005 to 2010 in Taiwan. We conducted a secondary analysis of two datasets, including the NODIS reporting dataset and the National Labor Insurance scheme's dataset of compensated cases. For the NODIS dataset, demographics, disease distribution, and the time trends of occupational diseases were analyzed. The data of the Labor Insurance dataset were used to calculate the annual incidence of compensated cases. Furthermore, the annual incidence of reported occupational diseases from the NODIS was compared with the annual incidence of compensable occupational diseases from the compensated dataset during the same period. After the establishment of the NODIS, the annual incidence rates of reported and compensable occupational disease cases increased 1.2-fold and 2.0-fold, respectively, from 2007 to 2010. The reason for this increased reporting may be the implementation of the new government-funded Internet-based system. The reason for the increased compensable cases may be the increasing availability of hospitals and clinics to provide occupational health services. During the 2008-2010 period, the most frequently reported occupational diseases were carpal tunnel syndrome, lumbar disc disorder, upper limb musculoskeletal disorders, and contact dermatitis. The new network and reporting system was successful in providing more occupational health services, providing more workers with compensation for occupational diseases, and reducing underreporting of occupational diseases. Therefore, the experience in Taiwan could serve as an example for other newly developed countries in a similar situation.
Virtual Sensor Web Architecture
NASA Astrophysics Data System (ADS)
Bose, P.; Zimdars, A.; Hurlburt, N.; Doug, S.
2006-12-01
NASA envisions the development of smart sensor webs: intelligent and integrated observation networks that harness distributed sensing assets, their associated continuous and complex data sets, and predictive observation processing mechanisms for timely, collaborative hazard mitigation and enhanced science productivity and reliability. This paper presents the Virtual Sensor Web Infrastructure for Collaborative Science (VSICS) Architecture for sustained coordination of (numerical and distributed) model-based processing, closed-loop resource allocation, and observation planning. VSICS's key ideas include i) rich descriptions of sensors as services based on semantic markup languages like OWL and SensorML; ii) service-oriented workflow composition and repair for simple and ensemble models, with event-driven workflow execution based on event-based and distributed workflow management mechanisms; and iii) development of autonomous model interaction management capabilities providing closed-loop control of collection resources driven by competing targeted observation needs. We present results from initial work on collaborative science processing involving distributed services (the COSEC framework) that is being extended to create VSICS.
pSCANNER: patient-centered Scalable National Network for Effectiveness Research
Ohno-Machado, Lucila; Agha, Zia; Bell, Douglas S; Dahm, Lisa; Day, Michele E; Doctor, Jason N; Gabriel, Davera; Kahlon, Maninder K; Kim, Katherine K; Hogarth, Michael; Matheny, Michael E; Meeker, Daniella; Nebeker, Jonathan R
2014-01-01
This article describes the patient-centered Scalable National Network for Effectiveness Research (pSCANNER), which is part of the recently formed PCORnet, a national network composed of learning healthcare systems and patient-powered research networks funded by the Patient Centered Outcomes Research Institute (PCORI). It is designed to be a stakeholder-governed federated network that uses a distributed architecture to integrate data from three existing networks covering over 21 million patients in all 50 states: (1) VA Informatics and Computing Infrastructure (VINCI), with data from the Veterans Health Administration's 151 inpatient and 909 ambulatory care and community-based outpatient clinics; (2) the University of California Research exchange (UC-ReX) network, with data from UC Davis, Irvine, Los Angeles, San Francisco, and San Diego; and (3) SCANNER, a consortium of UCSD, Tennessee VA, and three federally qualified health systems in the Los Angeles area supplemented with claims and health information exchange data, led by the University of Southern California. Initial use cases will focus on three conditions: (1) congestive heart failure; (2) Kawasaki disease; (3) obesity. Stakeholders, such as patients, clinicians, and health service researchers, will be engaged to prioritize research questions to be answered through the network. We will use a privacy-preserving distributed computation model with synchronous and asynchronous modes. The distributed system will be based on a common data model that allows the construction and evaluation of distributed multivariate models for a variety of statistical analyses. PMID:24780722
Service discovery with routing protocols for MANETs
NASA Astrophysics Data System (ADS)
Gu, Xuemai; Shi, Shuo
2005-11-01
Service discovery is becoming an important topic as its use throughout the Internet becomes more widespread. In Mobile Ad hoc Networks (MANETs), the routing protocol is very important because a MANET is a special kind of network. To find a path to data and destination nodes, nodes send packets to every other node, creating substantial overhead traffic and consuming much time. Even though a variety of routing protocols have been developed for use in MANETs, they are insufficient for reducing overhead traffic and time. In this paper, we propose SDRP: a new service discovery protocol combined with routing policies in MANETs. The protocol operates over a distributed network. We describe a service by a unique ID number and use a group-cast routing policy for advertisements and requests. The group-cast routing policy decreases the traffic in the network and makes it efficient to find the destination node. In addition, the nodes included in the reply path also cache the advertisement information, which means that the next time a node needs to locate the service, it already knows where it is, minimizing the lookup time. Finally, we compare SDRP with both Flood and MAODV in terms of overhead and average delay. Simulation results show that SDRP achieves shorter response times and accommodates even highly mobile network environments.
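The caching behavior on the reply path can be sketched roughly as follows; the field names and the closest-provider policy are assumptions for illustration, not the SDRP specification.

```python
# Rough sketch of reply-path caching: nodes cache service advertisements keyed
# by the service's unique ID, so later lookups can be answered locally. The
# structure and policy here are illustrative assumptions, not SDRP itself.
class ServiceCache:
    def __init__(self):
        self.entries = {}                        # service_id -> (provider, hops)

    def learn(self, service_id, provider, hops):
        # Keep the closest known provider for each service ID.
        current = self.entries.get(service_id)
        if current is None or hops < current[1]:
            self.entries[service_id] = (provider, hops)

    def lookup(self, service_id):
        entry = self.entries.get(service_id)
        return entry[0] if entry else None       # None -> must flood a request

cache = ServiceCache()
cache.learn(service_id=0x17, provider="node-42", hops=3)
cache.learn(service_id=0x17, provider="node-7", hops=1)
print(cache.lookup(0x17))   # node-7, answered locally with no network request
```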
Collaborative Scheduling Using JMS in a Mixed Java and .NET Environment
NASA Technical Reports Server (NTRS)
Wang, Yeou-Fang; Wax, Allan; Lam, Ray; Baldwin, John; Borden, Chet
2006-01-01
A viewgraph presentation to demonstrate collaborative scheduling using Java Message Service (JMS) in a mixed Java and .Net environment is given. The topics include: 1) NASA Deep Space Network scheduling; 2) Collaborative scheduling concept; 3) Distributed computing environment; 4) Platform concerns in a distributed environment; 5) Messaging and data synchronization; and 6) The prototype.
ERIC Educational Resources Information Center
Ahmed, Iftikhar; Sadeq, Muhammad Jafar
2006-01-01
Current distance learning systems are increasingly packing highly data-intensive contents on servers, resulting in the congestion of network and server resources at peak service times. A distributed learning system based on faded information field (FIF) architecture that employs mobile agents (MAs) has been proposed and simulated in this work. The…
NASA Astrophysics Data System (ADS)
Senyel, Muzeyyen Anil
Investments in the urban energy infrastructure for distributing electricity and natural gas are analyzed using (1) property data measuring distribution plant value at the local/tax district level, and (2) system outputs such as sectoral numbers of customers and energy sales, input prices, and company-specific characteristics such as average wages and load factor. Socio-economic and site-specific urban and geographic variables have, however, often been neglected in past studies. The purpose of this research is to incorporate these site-specific characteristics of electricity and natural gas distribution into investment cost model estimations. These local characteristics include (1) socio-economic variables, such as income and wealth; (2) urban-related variables, such as density, land-use, street pattern, and housing pattern; (3) geographic and environmental variables, such as soil, topography, and weather; and (4) company-specific characteristics such as average wages and load factor. The classical output variables include residential and commercial-industrial customers and sales. In contrast to most previous research, only capital investments at the local level are considered. In addition to aggregate cost modeling, the analysis focuses on the investment costs for the system components: overhead conductors, underground conductors, conduits, poles, transformers, services, street lighting, and station equipment for electricity distribution; and mains, services, and regular and industrial measurement and regulation stations for natural gas distribution. The Box-Cox, log-log and additive models are compared to determine the best-fitting cost functions. The Box-Cox form turns out to be superior to the other forms at the aggregate level and for network components. However, a linear additive form provides a better fit for end-user related components. The results show that, in addition to output variables and company-specific variables, various site-specific variables are statistically significant at the aggregate and disaggregate levels. Local electricity and natural gas distribution networks are characterized by a natural monopoly cost structure and economies of scale and density. The results provide evidence for the economies of scale and density for the aggregate electricity and natural gas distribution systems. However, distribution components have varying economic characteristics. The backbones of the networks (overhead conductors for electricity, and mains for natural gas) display economies of scale and density, but services in both systems and street lighting display diseconomies of scale and diseconomies of density. Finally, multi-utility network cost analyses are presented for aggregate and disaggregate electricity and natural gas capital investments. Economies of scope analyses investigate whether providing electricity and natural gas jointly is economically advantageous, as compared to providing these products separately. Significant economies of scope are observed for both the total network and the underground capital investments.
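The comparison of functional forms can be sketched as follows with synthetic placeholder data; it illustrates a log-log fit versus a Box-Cox-transformed fit, not the dissertation's actual estimation or data.

```python
# Sketch of comparing a log-log fit with a Box-Cox transformed fit for a cost
# function. Data are synthetic placeholders, not the study's observations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
customers = rng.uniform(1e3, 1e5, 200)                       # output variable
cost = 50 * customers ** 0.8 * rng.lognormal(0, 0.2, 200)    # synthetic investment cost

# Log-log model: ln(cost) = a + b * ln(customers)
b_log, a_log = np.polyfit(np.log(customers), np.log(cost), 1)

# Box-Cox model: transform the dependent variable with the ML-estimated lambda
cost_bc, lam = stats.boxcox(cost)
b_bc, a_bc = np.polyfit(np.log(customers), cost_bc, 1)

print(f"log-log elasticity: {b_log:.2f}")   # ~0.8 -> evidence of economies of scale
print(f"Box-Cox lambda:     {lam:.2f}")
print(f"Box-Cox slope:      {b_bc:.2f}")
```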
NASA Astrophysics Data System (ADS)
Hortos, William S.
2003-07-01
Mobile ad hoc networking (MANET) supports self-organizing, mobile infrastructures and enables an autonomous network of mobile nodes that can operate without a wired backbone. Ad hoc networks are characterized by multihop, wireless connectivity via packet radios and by the need for efficient dynamic protocols. All routers are mobile and can establish connectivity with other nodes only when they are within transmission range. Importantly, ad hoc wireless nodes are resource-constrained, having limited processing, memory, and battery capacity. Delivery of high quality-of-service (QoS), real-time multimedia services from Internet-based applications over a MANET is a challenge not yet achieved by proposed Internet Engineering Task Force (IETF) ad hoc network protocols in terms of standard performance metrics such as end-to-end throughput, packet error rate, and delay. In the distributed operations of route discovery and maintenance, strong interaction occurs across MANET protocol layers, in particular, the physical, media access control (MAC), network, and application layers. The QoS requirements are specified for the service classes by the application layer. The cross-layer design must also satisfy the battery-limited energy constraints, by minimizing the distributed power consumption at the nodes and of selected routes. Interactions across the layers are modeled in terms of the set of concatenated design parameters including associated energy costs. Functional dependencies of the QoS metrics are described in terms of the concatenated control parameters. New cross-layer designs are sought that optimize layer interdependencies to achieve the "best" QoS available in an energy-constrained, time-varying network. The protocol design, based on a reactive MANET protocol, adapts the provisioned QoS to dynamic network conditions and residual energy capacities. The cross-layer optimization is based on stochastic dynamic programming conditions derived from time-dependent models of MANET packet flows. Regulation of network behavior is modeled by the optimal control of the conditional rates of multivariate point processes (MVPPs); these rates depend on the concatenated control parameters through a change of probability measure. The MVPP models capture behavior of many service applications, e.g., voice, video and the self-similar behavior of Internet data sessions. Performance verification of the cross-layer protocols, derived from the dynamic programming conditions, can be achieved by embedding the conditions in a reactive routing protocol for MANETs, in a simulation environment, such as the wireless extension of ns-2. A canonical MANET scenario consists of a distributed collection of battery-powered laptops or hand-held terminals, capable of hosting multimedia applications. Simulation details and performance tradeoffs, not presented, remain for a sequel to the paper.
Distributed medical services within the ATM-based Berlin regional test bed
NASA Astrophysics Data System (ADS)
Thiel, Andreas; Bernarding, Johannes; Krauss, Manfred; Schulz, Sandra; Tolxdorff, Thomas
1996-05-01
The ATM-based Metropolitan Area Network (MAN) of Berlin connects two university hospitals (Benjamin Franklin University Hospital and Charite) with the computer resources of the Technical University of Berlin (TUB). Distributed new medical services have been implemented and will be evaluated within the high-speed MAN of Berlin. The network with its data transmission rates of up to 155 Mbit/s renders these medical services externally available to practicing physicians. Resource and application sharing is demonstrated by the use of two software systems. The first software system is an interactive 3D reconstruction tool (3D-Medbild), based on a client-server mechanism. This structure allows the use of high-performance computers at the TUB from the low-level workstations in the hospitals. A second software system, RAMSES, utilizes a tissue database of Magnetic Resonance Images. For the remote control of the software, the developed applications use standards such as DICOM 3.0 and features of the World Wide Web. Data security concepts are being tested and integrated for the needs of the sensitive medical data. The high-speed network is the necessary prerequisite for the clinical evaluation of data in a joint teleconference. The transmission of digitized real-time sequences such as video and ultrasound and the interactive manipulation of data are made possible by Multi Media tools.
Evaluation of AL-FEC performance for IP television services QoS
NASA Astrophysics Data System (ADS)
Mammi, E.; Russo, G.; Neri, A.
2010-01-01
The IP television services quality is a critical issue because of the nature of the transport infrastructure. Packet loss is the main cause of service degradation in such network platforms. The use of forward error correction (FEC) techniques in the application layer (AL-FEC), between the source of the TV service (video server) and the user terminal, seems to be an efficient strategy to counteract packet losses, alternatively or in addition to suitable traffic management policies (only feasible in "managed networks"). A number of AL-FEC techniques have been discussed in the literature and proposed for inclusion in TV over IP international standards. In this paper a performance evaluation of the AL-FEC defined in the SMPTE 2022-1 standard is presented. Different typical events occurring in IP networks, causing different types (in terms of statistical distribution) of IP packet losses, have been studied, and AL-FEC performance in counteracting these kinds of losses has been evaluated. The analysis has been carried out in view of fulfilling TV service QoS requirements, which are usually very demanding. For managed networks, this paper envisages a strategy to combine the use of AL-FEC with the set-up of a transport quality based on FEC packet prioritization. Promising results regarding this kind of strategy have been obtained.
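SMPTE 2022-1 protects a matrix of RTP packets with XOR parity packets; the sketch below shows only a one-dimensional version of that idea (one parity packet recovering a single lost packet in a column) and is not the full standard.

```python
# Simplified sketch of XOR-based application-layer FEC in the spirit of
# SMPTE 2022-1: one parity packet protects a group of equal-size media packets
# and can recover exactly one loss within that group.
def xor_packets(packets):
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

media = [bytes([n] * 8) for n in (1, 2, 3, 4)]   # four equal-size packets
fec = xor_packets(media)                         # parity packet sent alongside

received = media[:2] + media[3:]                 # packet with index 2 is lost
recovered = xor_packets(received + [fec])        # XOR of survivors and parity
print(recovered == media[2])                     # True: the lost packet is rebuilt
```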
An Uncertainty-Based Distributed Fault Detection Mechanism for Wireless Sensor Networks
Yang, Yang; Gao, Zhipeng; Zhou, Hang; Qiu, Xuesong
2014-01-01
Exchanging too many messages for fault detection will cause not only a degradation of the network quality of service, but also represents a huge burden on the limited energy of sensors. Therefore, we propose an uncertainty-based distributed fault detection through aided judgment of neighbors for wireless sensor networks. The algorithm considers the serious influence of sensing measurement loss and therefore uses Markov decision processes for filling in missing data. Most important of all, fault misjudgments caused by uncertainty conditions are the main drawbacks of traditional distributed fault detection mechanisms. We draw on the experience of evidence fusion rules based on information entropy theory and the degree of disagreement function to increase the accuracy of fault detection. Simulation results demonstrate our algorithm can effectively reduce communication energy overhead due to message exchanges and provide a higher detection accuracy ratio. PMID:24776937
System Analysis for the Huntsville Operation Support Center, Distributed Computer System
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Massey, D.
1985-01-01
HOSC, as a distributed computing system, is responsible for data acquisition and analysis during Space Shuttle operations. HOSC also provides computing services for Marshall Space Flight Center's nonmission activities. As mission and nonmission activities change, so do the support functions of HOSC, demonstrating the need for some method of simulating activity at HOSC in various configurations. The simulation developed in this work primarily models the HYPERchannel network. The model simulates the activity of a steady-state network, reporting statistics such as transmitted bits, collision statistics, frame sequences transmitted, and average message delay. These statistics are used to evaluate such performance indicators as throughput, utilization, and delay. Thus, the overall performance of the network is evaluated, and possible overload conditions can be predicted.
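The performance indicators named here can be illustrated with textbook M/M/1 queueing formulas; this is not the HOSC HYPERchannel model itself, but it shows how throughput, utilization, and average delay relate to offered load.

```python
# Not the HOSC simulation: a textbook M/M/1 illustration of throughput,
# utilization, and average delay as functions of offered load.
def mm1_metrics(arrival_rate, service_rate):
    """arrival_rate and service_rate in messages per second."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable: offered load exceeds capacity")
    utilization = arrival_rate / service_rate
    avg_delay = 1.0 / (service_rate - arrival_rate)   # mean time in system (s)
    throughput = arrival_rate                          # all offered traffic is carried
    return throughput, utilization, avg_delay

for load in (100, 400, 450):
    thr, rho, w = mm1_metrics(load, service_rate=500)
    print(f"lambda={load}/s  utilization={rho:.0%}  avg delay={w*1000:.1f} ms")
```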
Rocha, T A H; da Silva, N C; Amaral, P V; Barbosa, A C Q; Rocha, J V M; Alvares, V; de Almeida, D G; Thumé, E; Thomaz, E B A F; de Sousa Queiroz, R C; de Souza, M R; Lein, A; Toomey, N; Staton, C A; Vissoci, J R N; Facchini, L A
2017-12-01
Studies of health geography are important in the planning and allocation of emergency health services. The geographical distribution of health facilities is an important factor in timely and quality access to emergency services; therefore, the present study analyzed the emergency health care network in Brazil, focusing the analysis on the role of small hospitals (SHs). Cross-sectional ecological study. Data were collected from 9429 hospitals, of which 3524 were SHs and 5905 were high-complexity centers (HCCs). For analytical purposes, we considered four specialties when examining the proxies of emergency care capability: adult, pediatric, neonatal, and obstetric. We analyzed the spatial distribution of hospitals, identifying municipalities that rely exclusively on SHs and the distance of these cities from HCCs. More than 14 and 30 million people were at least 120 km away from HCCs with an adult intensive care unit (ICU) and a pediatric ICU, respectively. For neonatal care distribution, 12% of the population was more than 120 km away from a health facility with a neonatal ICU. The situation for maternity care is different from the other specialties: 81% of the total Brazilian population was within 1 h or less of such health facilities. Our results highlighted a polarization in the distribution of Brazilian health care facilities. There is a concentration of hospitals in more developed urban areas and access gaps in rural areas and the Amazon region. Our results demonstrate that the distribution of emergency services in Brazil does not facilitate access for the population, due to geographical barriers associated with great distances. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
Using heterogeneous wireless sensor networks in a telemonitoring system for healthcare.
Corchado, Juan M; Bajo, Javier; Tapia, Dante I; Abraham, Ajith
2010-03-01
Ambient intelligence has acquired great importance in recent years and requires the development of new innovative solutions. This paper presents a distributed telemonitoring system, aimed at improving healthcare and assistance to dependent people at their homes. The system implements a service-oriented architecture based platform, which allows heterogeneous wireless sensor networks to communicate in a distributed way independent of time and location restrictions. This approach provides the system with a higher ability to recover from errors and a better flexibility to change their behavior at execution time. Preliminary results are presented in this paper.
Distributed Emulation in Support of Large Networks
2016-06-01
...environment, modifications to a network, protocol, or model can be executed, and the effects measured, without affecting real-world users or services...produce their results when analyzing performance of Long Term Evolution (LTE) gateways [3]. Many research scenarios allow problems to be represented...
Incorporating Human Movement Behavior into the Analysis of Spatially Distributed Infrastructure.
Wu, Lihua; Leung, Henry; Jiang, Hao; Zheng, Hong; Ma, Li
2016-01-01
For the first time in human history, the majority of the world's population resides in urban areas. Therefore, city managers are faced with new challenges related to the efficiency, equity and quality of the supply of resources, such as water, food and energy. Infrastructure in a city can be viewed as service points providing resources. These service points function together as a spatially collaborative system to serve an increasing population. To study the spatial collaboration among service points, we propose a shared network according to humans' collective movement and resource usage based on data usage detail records (UDRs) from the cellular network in a city in western China. This network is shown to be not scale-free, but exhibits an interesting triangular property governed by two types of nodes with very different link patterns. Surprisingly, this feature is consistent with the urban-rural dualistic context of the city. Another feature of the shared network is that it consists of several spatially separated communities that characterize local people's active zones but do not completely overlap with administrative areas. According to these features, we propose the incorporation of human movement into infrastructure classification. The presence of well-defined spatially separated clusters confirms the effectiveness of this approach. In this paper, our findings reveal the spatial structure inside a city, and the proposed approach provides a new perspective on integrating human movement into the study of a spatially distributed system.
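Extracting spatially separated communities from such a shared network can be sketched with a modularity-based method; the toy graph below is a placeholder, and the UDR-based network construction itself is not shown.

```python
# Toy sketch of extracting communities from a shared-usage network, assuming a
# modularity-based method; the graph is a placeholder, not the UDR data.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
# Two dense clusters of service points joined by a single weak link.
G.add_edges_from([("A1", "A2"), ("A2", "A3"), ("A1", "A3"),
                  ("B1", "B2"), ("B2", "B3"), ("B1", "B3"),
                  ("A3", "B1")])
communities = greedy_modularity_communities(G)
for i, c in enumerate(communities):
    print(f"community {i}: {sorted(c)}")
```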
NASA Astrophysics Data System (ADS)
Bokhari, Abdullah
Demarcations between traditional distribution power systems and distributed generation (DG) architectures are increasingly evolving as higher DG penetration is introduced into the system. Accommodating less restrictive interconnection policies in existing electric power systems (EPSs) while maintaining the reliability and performance of power delivery has been the major challenge for DG growth. In this dissertation, the work is aimed at studying power quality, energy saving, and losses in a low-voltage distributed network under various DG penetration cases. A simulation platform suite that includes electric power system, distributed generation, and ZIP load models is implemented to determine the impact of DGs on power system steady-state performance and the voltage profile of the customers/loads in the network under voltage reduction events. The investigation is designed to test the DG impact on the power system starting with one type of DG, then moving on to multiple DG types distributed in a random case and a realistic/balanced case. The functionality of the proposed DG interconnection is designed to meet the basic requirements imposed by the various interconnection standards, most notably IEEE 1547, public service commission, and local utility regulations. It is found that the implementation of DGs on the low-voltage secondary network would improve customers' voltage profiles and system losses and provide significant energy savings and economic benefits for utilities. In a network populated with DGs, the utility would have a uniform voltage profile at the customers' end as the voltage profile becomes more concentrated around the targeted voltage level. The study further reinforced the concept that the behavior of DG in a distributed network would improve voltage regulation, as a certain percentage reduction on the utility side would ensure a uniform percentage reduction seen by all customers and reduce the number of voltage violations.
NASA Astrophysics Data System (ADS)
Garg, Amit Kumar; Madavi, Amresh Ashok; Janyani, Vijay
2017-02-01
A flexible hybrid wavelength division multiplexing-time division multiplexing passive optical network architecture that allows dual-rate signals to be sent at 1 and 10 Gbps to each optical networking unit depending upon the traffic load is proposed. The proposed design allows dynamic wavelength allocation with pay-as-you-grow deployment capability. This architecture is capable of providing up to 40 Gbps of equal data rates to all optical distribution networks (ODNs) and up to 70 Gbps of asymmetrical data rate to a specific ODN. The proposed design provides broadcasting capability with simultaneous point-to-point transmission, which further reduces energy consumption. In this architecture, each module sends a wavelength to each ODN, thus making the architecture fully flexible; this flexibility allows network providers to use only the required OLT components and switch off the others. The design is also resilient to any module or TRx failure and provides services without any service disruption. Dynamic wavelength allocation and pay-as-you-grow deployment support network extensibility and bandwidth scalability to handle future-generation access networks.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-12
...) Not to exceed 3000 positions that require unique cyber security skills and knowledge to perform cyber..., distributed control systems security, cyber incident response, cyber exercise facilitation and management, cyber vulnerability detection and assessment, network and systems engineering, enterprise architecture...
Development of a web service for analysis in a distributed network.
Jiang, Xiaoqian; Wu, Yuan; Marsolo, Keith; Ohno-Machado, Lucila
2014-01-01
We describe functional specifications and practicalities in the software development process for a web service that allows the construction of the multivariate logistic regression model, Grid Logistic Regression (GLORE), by aggregating partial estimates from distributed sites, with no exchange of patient-level data. We recently developed and published a web service for model construction and data analysis in a distributed environment. This recent paper provided an overview of the system that is useful for users, but included very few details that are relevant for biomedical informatics developers or network security personnel who may be interested in implementing this or similar systems. We focus here on how the system was conceived and implemented. We followed a two-stage development approach by first implementing the backbone system and incrementally improving the user experience through interactions with potential users during the development. Our system went through various stages such as concept proof, algorithm validation, user interface development, and system testing. We used the Zoho Project management system to track tasks and milestones. We leveraged Google Code and Apache Subversion to share code among team members, and developed an applet-servlet architecture to support the cross platform deployment. During the development process, we encountered challenges such as Information Technology (IT) infrastructure gaps and limited team experience in user-interface design. We figured out solutions as well as enabling factors to support the translation of an innovative privacy-preserving, distributed modeling technology into a working prototype. Using GLORE (a distributed model that we developed earlier) as a pilot example, we demonstrated the feasibility of building and integrating distributed modeling technology into a usable framework that can support privacy-preserving, distributed data analysis among researchers at geographically dispersed institutes.
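The core idea behind GLORE can be sketched as follows: each site computes only aggregate gradient and Hessian contributions for the logistic model on its own data, and a coordinator combines them in a Newton-Raphson step. This is a simplification for illustration, not the published GLORE implementation or its web-service interface.

```python
# Simplified sketch of the idea behind GLORE (not the published implementation
# or its web-service API): sites share only aggregate gradient and Hessian
# contributions for a logistic model; no patient-level rows leave a site.
import numpy as np

def site_contribution(X, y, beta):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    gradient = X.T @ (y - p)                       # k-vector aggregate
    hessian = -(X.T * (p * (1 - p))) @ X           # k x k aggregate
    return gradient, hessian

def federated_newton_step(site_data, beta):
    grads, hessians = zip(*(site_contribution(X, y, beta) for X, y in site_data))
    return beta - np.linalg.solve(sum(hessians), sum(grads))

rng = np.random.default_rng(1)
sites = []
for _ in range(3):                                  # three sites, local data only
    X = np.hstack([np.ones((200, 1)), rng.normal(size=(200, 2))])
    y = (rng.random(200) < 1 / (1 + np.exp(-(X @ [0.5, 1.0, -2.0])))).astype(float)
    sites.append((X, y))

beta = np.zeros(3)
for _ in range(10):
    beta = federated_newton_step(sites, beta)
print(beta)   # approaches the coefficients of a pooled-data fit
```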
Development of a Web Service for Analysis in a Distributed Network
Jiang, Xiaoqian; Wu, Yuan; Marsolo, Keith; Ohno-Machado, Lucila
2014-01-01
Objective: We describe functional specifications and practicalities in the software development process for a web service that allows the construction of the multivariate logistic regression model, Grid Logistic Regression (GLORE), by aggregating partial estimates from distributed sites, with no exchange of patient-level data. Background: We recently developed and published a web service for model construction and data analysis in a distributed environment. This recent paper provided an overview of the system that is useful for users, but included very few details that are relevant for biomedical informatics developers or network security personnel who may be interested in implementing this or similar systems. We focus here on how the system was conceived and implemented. Methods: We followed a two-stage development approach by first implementing the backbone system and incrementally improving the user experience through interactions with potential users during the development. Our system went through various stages such as concept proof, algorithm validation, user interface development, and system testing. We used the Zoho Project management system to track tasks and milestones. We leveraged Google Code and Apache Subversion to share code among team members, and developed an applet-servlet architecture to support the cross platform deployment. Discussion: During the development process, we encountered challenges such as Information Technology (IT) infrastructure gaps and limited team experience in user-interface design. We figured out solutions as well as enabling factors to support the translation of an innovative privacy-preserving, distributed modeling technology into a working prototype. Conclusion: Using GLORE (a distributed model that we developed earlier) as a pilot example, we demonstrated the feasibility of building and integrating distributed modeling technology into a usable framework that can support privacy-preserving, distributed data analysis among researchers at geographically dispersed institutes. PMID:25848586
Implementation of a Campuswide Distributed Mass Storage Service: the Dream Versus Reality
NASA Technical Reports Server (NTRS)
Prahst, Stephen; Armstead, Betty Jo
1996-01-01
In 1990, a technical team at NASA Lewis Research Center, Cleveland, Ohio, began defining a Mass Storage Service to provide long-term archival storage, short-term storage for very large files, distributed Network File System access, and backup services for critical data that resides on workstations and personal computers. Because of software availability and budgets, the total service was phased in over several years. During the process of building the service from the commercial technologies available, our Mass Storage Team refined the original vision and learned from the problems and mistakes that occurred. We also enhanced some technologies to better meet the needs of users and system administrators. This report describes our team's journey from dream to reality, outlines some of the problem areas that still exist, and suggests some solutions.
Securing electronic mail: The risks and future of electronic mail
NASA Astrophysics Data System (ADS)
Weeber, S. A.
1993-03-01
The network explosion of the past decade has significantly affected how many of us conduct our day to day work. We increasingly rely on network services such as electronic mail, file transfer, and network newsgroups to collect and distribute information. Unfortunately, few of the network services in use today were designed with the security issues of large heterogeneous networks in mind. In particular, electronic mail, although heavily relied upon, is notoriously insecure. Messages can be forged, snooped, and even altered by users with only a moderate level of system proficiency. The level of trust that can be assigned at present to these services needs to be carefully considered. In the past few years, standards and tools have begun to appear addressing the security concerns of electronic mail. Principal among these are RFC's 1421, 1422, 1423, and 1424, which propose Internet standards in the areas of message encipherment, key management, and algorithms for privacy enhanced mail (PEM). Additionally, three PEM systems, offering varying levels of compliance with the PEM RFC's, have also recently emerged: PGP, RIPEM, and TIS/PEM. This paper addresses the motivations and requirements for more secure electronic mail, and evaluates the suitability of the currently available PEM systems.
Furukawa, Hideaki; Miyazawa, Takaya; Wada, Naoya; Harai, Hiroaki
2014-01-13
Optical packet and circuit integrated (OPCI) networks provide both optical packet switching (OPS) and optical circuit switching (OCS) links on the same physical infrastructure using a wavelength multiplexing technique in order to deal with best-effort services and quality-guaranteed services. To immediately respond to changes in user demand for OPS and OCS links, OPCI networks should dynamically adjust the amount of wavelength resources for each link. We propose a resource-adjustable hybrid optical packet/circuit switch and transponder. We also verify that distributed control of resource adjustments can be applied to the OPCI ring network testbed we developed. In cooperation with the resource adjustment mechanism and the hybrid switch and transponder, we demonstrate that automatically allocating a shared resource and moving the wavelength resource boundary between OPS and OCS links can be successfully executed, depending on the number of optical paths in use.
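The boundary-moving idea can be pictured with a toy calculation. The sketch below is only an illustration of shifting shared wavelengths between OCS and OPS as the number of active optical paths changes; it is not the authors' control algorithm, and the parameter names are invented.

```python
# Hedged illustration (not the authors' control algorithm): moving the wavelength
# boundary between circuit (OCS) and packet (OPS) links as the number of optical
# paths in use changes, within a fixed pool of shared wavelengths.
def adjust_boundary(total_wavelengths, paths_in_use, min_ops=1, headroom=1):
    """Give OCS enough wavelengths for its active paths plus headroom;
    everything else stays with OPS (at least min_ops wavelengths)."""
    ocs = min(paths_in_use + headroom, total_wavelengths - min_ops)
    ocs = max(ocs, 0)
    return {"OCS": ocs, "OPS": total_wavelengths - ocs}

print(adjust_boundary(total_wavelengths=8, paths_in_use=3))
# {'OCS': 4, 'OPS': 4}
```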
A Performance Comparison of Tree and Ring Topologies in Distributed System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Min
A distributed system is a collection of computers that are connected via a communication network. Distributed systems have become commonplace due to the wide availability of low-cost, high performance computers and network devices. However, the management infrastructure often does not scale well when distributed systems get very large. Some of the considerations in building a distributed system are the choice of the network topology and the method used to construct the distributed system so as to optimize the scalability and reliability of the system, lower the cost of linking nodes together and minimize the message delay in transmission, and simplify system resource management. We have developed a new distributed management system that is able to handle the dynamic increase of system size, detect and recover from unexpected failures of system services, and manage system resources. The topologies used in the system are the tree-structured network and the ring-structured network. This thesis presents the research background, system components, design, implementation, experiment results and the conclusions of our work. The thesis is organized as follows: the research background is presented in chapter 1. Chapter 2 describes the system components, including the different node types and different connection types used in the system. In chapter 3, we describe the message types and message formats in the system. We discuss the system design and implementation in chapter 4. In chapter 5, we present the test environment and results. Finally, we conclude with a summary and describe our future work in chapter 6.
Inter-computer communication architecture for a mixed redundancy distributed system
NASA Technical Reports Server (NTRS)
Lala, Jaynarayan H.; Adams, Stuart J.
1987-01-01
The triply redundant intercomputer network for the Advanced Information Processing System (AIPS), an architecture developed to serve as the core avionics system for a broad range of aerospace vehicles, is discussed. The AIPS intercomputer network provides a high-speed, Byzantine-fault-resilient communication service between processing sites, even in the presence of arbitrary failures of simplex and duplex processing sites on the IC network. The IC network contention poll has evolved from the Laning Poll. An analysis of the failure modes and effects and a simulation of the AIPS contention poll demonstrate the robustness of the system.
QoS support over ultrafast TDM optical networks
NASA Astrophysics Data System (ADS)
Narvaez, Paolo; Siu, Kai-Yeung; Finn, Steven G.
1999-08-01
HLAN is a promising architecture to realize Tb/s access networks based on ultra-fast optical TDM technologies. This paper presents new research results on efficient algorithms for the support of quality of service over the HLAN network architecture. In particular, we propose a new scheduling algorithm that emulates fair queuing in a distributed manner for bandwidth allocation purposes. The proposed scheduler collects information on the queue of each host on the network and then instructs each host how much data to send. Our new scheduling algorithm ensures full bandwidth utilization, while guaranteeing fairness among all hosts.
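A minimal sketch of the kind of queue-informed, max-min fair grant computation such a scheduler could perform is given below; the host names, byte units, and round capacity are illustrative assumptions, not the HLAN algorithm itself.

```python
# Hedged sketch of centralized-information, distributed-action scheduling: the
# scheduler reads every host's queue backlog and grants each host a max-min fair
# share of the capacity available in this scheduling round.
def max_min_fair_grants(backlogs, capacity):
    """backlogs: bytes queued per host; capacity: bytes available this round."""
    grants = {h: 0 for h in backlogs}
    remaining = dict(backlogs)
    cap = capacity
    while remaining and cap > 0:
        share = cap // len(remaining)
        if share == 0:
            break
        satisfied = []
        for host, need in remaining.items():
            take = min(need, share)
            grants[host] += take
            cap -= take
            remaining[host] -= take
            if remaining[host] == 0:
                satisfied.append(host)
        for host in satisfied:
            del remaining[host]
        if not satisfied:          # everyone still backlogged: stop after this pass
            break
    return grants

print(max_min_fair_grants({"A": 500, "B": 3000, "C": 3000}, capacity=4500))
# {'A': 500, 'B': 2000, 'C': 2000}
```

The small host (A) receives everything it asked for while the two backlogged hosts split the remainder evenly, which is the fairness property the abstract describes emulating in a distributed manner.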
Liu, Lei; Zhao, Jing
2014-01-01
An efficient location-based query algorithm for protecting user privacy in distributed networks is given. This algorithm utilizes the location indexes of the users and multiple parallel threads to quickly search for and select all the candidate anonymous sets with more users and location information with more uniform distribution, in order to accelerate the execution of the temporal-spatial anonymous operations, and it allows the users to configure their custom-made privacy-preserving location query requests. The simulated experiment results show that the proposed algorithm can simultaneously offer location query services to more users, improve the performance of the anonymous server, and satisfy the users' anonymous location requests. PMID:24790579
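For illustration, a minimal sketch of spatial cloaking in this spirit follows: the querying user is grouped with nearby users drawn from a location index, and the query is generalized to the group's bounding box. The grouping rule, k value, and coordinates are invented for the example and do not reproduce the paper's candidate-set selection.

```python
# Hedged sketch (illustrative, not the paper's algorithm): forming a temporal-spatial
# anonymous set for a querying user by picking the k-1 nearest other users from a
# location index and cloaking the query to their bounding box.
import math

def cloak(user_id, locations, k):
    """locations: {user: (x, y)}; returns the anonymous set and its bounding box."""
    ux, uy = locations[user_id]
    others = sorted(
        (u for u in locations if u != user_id),
        key=lambda u: math.dist(locations[u], (ux, uy)),
    )
    group = [user_id] + others[: k - 1]
    xs = [locations[u][0] for u in group]
    ys = [locations[u][1] for u in group]
    return group, (min(xs), min(ys), max(xs), max(ys))

locations = {"u1": (0, 0), "u2": (1, 2), "u3": (5, 5), "u4": (2, 1), "u5": (9, 9)}
print(cloak("u1", locations, k=3))
```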
Lim, Han Chuen; Yoshizawa, Akio; Tsuchida, Hidemi; Kikuchi, Kazuro
2008-09-15
We present a theoretical model for the distribution of polarization-entangled photon-pairs produced via spontaneous parametric down-conversion within a local-area fiber network. This model allows an entanglement distributor who plays the role of a service provider to determine the photon-pair generation rate giving highest two-photon interference fringe visibility for any pair of users, when given user-specific parameters. Usefulness of this model is illustrated in an example and confirmed in an experiment, where polarization-entangled photon-pairs are distributed over 82 km and 132 km of dispersion-managed optical fiber. Experimentally observed visibilities and entanglement fidelities are in good agreement with theoretically predicted values.
Zhong, Cheng; Liu, Lei; Zhao, Jing
2014-01-01
An efficient location-based query algorithm for protecting user privacy in distributed networks is given. This algorithm utilizes the location indexes of the users and multiple parallel threads to quickly search for and select all the candidate anonymous sets with more users and location information with more uniform distribution, in order to accelerate the execution of the temporal-spatial anonymous operations, and it allows the users to configure their custom-made privacy-preserving location query requests. The simulated experiment results show that the proposed algorithm can simultaneously offer location query services to more users, improve the performance of the anonymous server, and satisfy the users' anonymous location requests.
GMPLS-based control plane for optical networks: early implementation experience
NASA Astrophysics Data System (ADS)
Liu, Hang; Pendarakis, Dimitrios; Komaee, Nooshin; Saha, Debanjan
2002-07-01
Generalized Multi-Protocol Label Switching (GMPLS) extends MPLS signaling and Internet routing protocols to provide a scalable, interoperable, distributed control plane, which is applicable to multiple network technologies such as optical cross connects (OXCs), photonic switches, IP routers, ATM switches, SONET and DWDM systems. It is intended to facilitate automatic service provisioning and dynamic neighbor and topology discovery across multi-vendor intelligent transport networks, as well as their clients. Efforts to standardize such a distributed common control plane have reached various stages in several bodies such as the IETF, ITU and OIF. This paper describes the design considerations and architecture of a GMPLS-based control plane that we have prototyped for core optical networks. Functional components of GMPLS signaling and routing are integrated in this architecture with an application layer controller module. Various requirements, including bandwidth, network protection and survivability, traffic engineering, optimal utilization of network resources, and so on, are taken into consideration during path computation and provisioning. Initial experiments with our prototype demonstrate the feasibility and main benefits of GMPLS as a distributed control plane for core optical networks. In addition to such feasibility results, actual adoption and deployment of GMPLS as a common control plane for intelligent transport networks will depend on the successful completion of relevant standardization activities, extensive interoperability testing as well as the strengthening of appropriate business drivers.
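As a rough illustration of how such constraints can enter path computation, the sketch below prunes links that cannot supply the requested bandwidth and then runs a shortest-path search over what remains; the topology, costs, and the pruning-then-Dijkstra strategy are assumptions for the example, not the prototype's actual algorithm.

```python
# Hedged sketch of constraint-aware path computation: links lacking the requested
# bandwidth are pruned, then a shortest path is computed on what remains.
import heapq

def constrained_path(links, src, dst, bandwidth):
    """links: {(a, b): {'cost': c, 'bw': available_bandwidth}} (undirected)."""
    adj = {}
    for (a, b), attrs in links.items():
        if attrs["bw"] >= bandwidth:                      # constraint pruning
            adj.setdefault(a, []).append((b, attrs["cost"]))
            adj.setdefault(b, []).append((a, attrs["cost"]))
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nxt, cost in adj.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(heap, (nd, nxt))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

links = {("A", "B"): {"cost": 1, "bw": 10}, ("B", "C"): {"cost": 1, "bw": 2},
         ("A", "C"): {"cost": 5, "bw": 10}}
print(constrained_path(links, "A", "C", bandwidth=5))   # ['A', 'C']
```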
Solutions for Mining Distributed Scientific Data
NASA Astrophysics Data System (ADS)
Lynnes, C.; Pham, L.; Graves, S.; Ramachandran, R.; Maskey, M.; Keiser, K.
2007-12-01
Researchers at the University of Alabama in Huntsville (UAH) and the Goddard Earth Sciences Data and Information Services Center (GES DISC) are working on approaches and methodologies facilitating the analysis of large amounts of distributed scientific data. Despite the existence of full-featured analysis tools, such as the Algorithm Development and Mining (ADaM) toolkit from UAH, and data repositories, such as the GES DISC, that provide online access to large amounts of data, there remain obstacles to getting the analysis tools and the data together in a workable environment. Does one bring the data to the tools or deploy the tools close to the data? The large size of many current Earth science datasets incurs significant overhead in network transfer for analysis workflows, even with the advanced networking capabilities that are available between many educational and government facilities. The UAH and GES DISC team are developing a capability to define analysis workflows using distributed services and online data resources. We are developing two solutions for this problem that address different analysis scenarios. The first is a Data Center Deployment of the analysis services for large data selections, orchestrated by a remotely defined analysis workflow. The second is a Data Mining Center approach of providing a cohesive analysis solution for smaller subsets of data. The two approaches can be complementary and thus provide flexibility for researchers to exploit the best solution for their data requirements. The Data Center Deployment of the analysis services has been implemented by deploying ADaM web services at the GES DISC so they can access the data directly, without the need of network transfers. Using the Mining Workflow Composer, a user can define an analysis workflow that is then submitted through a Web Services interface to the GES DISC for execution by a processing engine. The workflow definition is composed, maintained and executed at a distributed location, but most of the actual services comprising the workflow are available local to the GES DISC data repository. Additional refinements will ultimately provide a package that is easily implemented and configured at additional data centers for analysis of additional science data sets. Enhancements to the ADaM toolkit allow the staging of distributed data wherever the services are deployed, to support a Data Mining Center that can provide additional computational resources, large storage of output, easier addition and updates to available services, and access to data from multiple repositories. The Data Mining Center case provides researchers more flexibility to quickly try different workflow configurations and refine the process, using smaller amounts of data that may likely be transferred from distributed online repositories. This environment is sufficient for some analyses, but can also be used as an initial sandbox to test and refine a solution before staging the execution at a Data Center Deployment. Detection of airborne dust both over water and land in MODIS imagery using mining services for both solutions will be presented. The dust detection is just one possible example of the mining and analysis capabilities the proposed mining services solutions will provide to the science community. More information about the available services and the current status of this project is available at http://www.itsc.uah.edu/mws/
Wireless Telemetry and Command (T and C) Program
NASA Technical Reports Server (NTRS)
Jiang, Hui; Horan, Stephen
2000-01-01
The purpose of the Wireless Telemetry and Command (T&C) program is to investigate methods of using commercial telecommunications service providers to support command and telemetry services between a remote user and a base station. While the initial development is based on ground networks, the development is being done with an eye towards future space communications needs. Both NASA and the Air Force have indicated a plan to consider the use of commercial telecommunications providers to support their space missions. To do this, there will need to be an understanding of the requirements and limitations of interfacing with the commercial providers. The eventual payoff will be the reduced operations cost and the ability to tap into commercial services being developed by the commercial networks. This should enable easier realization of EP services to the end points, commercial routing of data, and quicker integration of new services into the space mission operations. Therefore, the ultimate goal of this program is not just to provide wireless radio communications for T&C services but to enhance those services through wireless networking and provider enhancements that come with the networks. In the following chapters, the detailed technical procedure will be shown step by step. Chapter 2 will discuss the general idea of simulation as well as the implementation of data acquisition, including sensor array data and GPS data. Chapter 3 will describe how to use LabVIEW and Component Works to do wireless communication simulation and how to distribute the real-time information over the Internet by using Visual Basic and ActiveX controls; it will also cover the test configuration and validation. Chapter 4 will show the test results from both the In-Lab test and the Networking test. Chapter 5 will summarize the whole procedure and give a perspective for future consideration.
PrismTech Data Distribution Service Java API Evaluation
NASA Technical Reports Server (NTRS)
Riggs, Cortney
2008-01-01
My internship duties with Launch Control Systems required me to start performance testing of the Object Management Group's (OMG) Data Distribution Service (DDS) specification implementation by PrismTech Limited through the Java programming language application programming interface (API). DDS is a networking middleware for Real-Time Data Distribution. The performance testing involves latency, redundant publishers, extended duration, redundant failover, and read performance. Time constraints allowed only for a data throughput test. I have designed the testing applications to perform all performance tests when time is allowed. Performance evaluation data such as megabits per second and central processing unit (CPU) time consumption were not easily attainable through the Java programming language; they required new methods and classes created in the test applications. Evaluation of this product showed the rate that data can be sent across the network. Performance rates are better on Linux platforms than AIX and Sun platforms. Compared to the previous C++ programming language API, the performance evaluation also shows the language differences for the implementation. The Java API of the DDS has a lower throughput performance than the C++ API.
Session Initiation Protocol Network Encryption Device Plain Text Domain Discovery Service
2007-12-07
...such as the TACLANE, have developed unique discovery methods to establish Plain Text Domain (PTD) Security Associations (SA). All of these techniques...can include network and host Internet Protocol (IP) addresses, Information System Security Office (ISSO) point of contact information and PTD status
NGI and Internet2: accelerating the creation of tomorrow's internet.
Kratz, M; Ackerman, M; Hanss, T; Corbato, S
2001-01-01
Internet2 is a consortium of leading U.S. universities working in partnership with industry and the U.S. government's Next Generation Internet (NGI) initiative to develop a faster, more reliable Internet for research and education including enhanced, high-performance networking services and the advanced applications that are enabled by those services [1]. By facilitating and coordinating the development, deployment, operation, and technology transfer of advanced, network-based applications and network services, Internet2 and NGI are working together to fundamentally change the way scientists, engineers, clinicians, and others work together. [http://www.internet2.edu] The NGI Program has three tracks: research, network testbeds, and applications. The aim of the research track is to promote experimentation with the next generation of network technologies. The network testbed track aims to develop next generation network testbeds to connect universities and federal research institutions at speeds that are sufficient to demonstrate new technologies and support future research. The aim of the applications track is to demonstrate new applications, enabled by the NGI networks, to meet important national goals and missions [2]. [http://www.ngi.gov/] The Internet2/NGI backbone networks, Abilene and vBNS (very high performance Backbone Network Service), provide the basis of collaboration and development for a new breed of advanced medical applications. Academic medical centers leverage the resources available throughout the Internet2 high-performance networking community for high-capacity broadband and selectable quality of service to make effective use of national repositories. The Internet2 Health Sciences Initiative enables a new generation of emerging medical applications whose architecture and development have been restricted by or are beyond the constraints of traditional Internet environments. These initiatives facilitate a variety of activities to foster the development and deployment of emerging applications that meet the requirements of clinical practice, medical and related biological research, education, and medical awareness throughout the public sector. Medical applications that work with high performance networks and supercomputing capabilities offer exciting new solutions for the medical industry. Internet2 and NGI strive to combine the expertise of their constituents to establish a distributed knowledge system for achieving innovation in research, teaching, learning, and clinical care.
Automation of multi-agent control for complex dynamic systems in heterogeneous computational network
NASA Astrophysics Data System (ADS)
Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan
2017-01-01
The rapid progress of high-performance computing entails new challenges related to solving large scientific problems for various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue, and control systems that operate in networks are especially affected by it. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automation of problem solving. Advantages of the proposed approach are demonstrated on the example of parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automation of multi-agent control of the systems in a parallel mode at various degrees of detail.
Resource Aware Intelligent Network Services (RAINS) Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehman, Tom; Yang, Xi
The Resource Aware Intelligent Network Services (RAINS) project conducted research and developed technologies in the area of cyber infrastructure resource modeling and computation. The goal of this work was to provide a foundation to enable intelligent, software defined services which spanned the network AND the resources which connect to the network. A Multi-Resource Service Plane (MRSP) was defined, which allows resource owners/managers to locate and place themselves from a topology and service availability perspective within the dynamic networked cyberinfrastructure ecosystem. The MRSP enables the presentation of integrated topology views and computation results which can include resources across the spectrum of compute, storage, and networks. The MRSP developed by the RAINS project includes the following key components: i) Multi-Resource Service (MRS) Ontology/Multi-Resource Markup Language (MRML), ii) Resource Computation Engine (RCE), iii) Modular Driver Framework (to allow integration of a variety of external resources). The MRS/MRML is a general and extensible modeling framework that allows resource owners to model, or describe, a wide variety of resource types. All resources are described using three categories of elements: Resources, Services, and Relationships between the elements. This modeling framework defines a common method for the transformation of cyber infrastructure resources into data in the form of MRML models. In order to realize this infrastructure datification, the RAINS project developed a model based computation system, i.e., the "RAINS Computation Engine (RCE)". The RCE has the ability to ingest, process, integrate, and compute based on automatically generated MRML models. The RCE interacts with the resources through system drivers which are specific to the type of external network or resource controller. The RAINS project developed a modular and pluggable driver system which facilitates a variety of resource controllers to automatically generate, maintain, and distribute MRML based resource descriptions. Once all of the resource topologies are absorbed by the RCE, a connected graph of the full distributed system topology is constructed, which forms the basis for computation and workflow processing. The RCE includes a Modular Computation Element (MCE) framework which allows for tailoring of the computation process to the specific set of resources under control, and the services desired. The input and output of an MCE are both model data based on MRS/MRML ontology and schema. Some of the RAINS project accomplishments include: development of a general and extensible multi-resource modeling framework; design of a Resource Computation Engine (RCE) system with key capabilities that include absorbing a variety of multi-resource model types and building integrated models, a novel architecture which uses model based communications across the full stack, flexible provision of abstract or intent based user facing interfaces, and workflow processing based on model descriptions; release of the RCE as open source software; deployment of the RCE in the University of Maryland/Mid-Atlantic Crossroad ScienceDMZ in prototype mode with a plan under way to transition to production; deployment at the Argonne National Laboratory DTN Facility in prototype mode; and selection of the RCE by the DOE SENSE (SDN for End-to-end Networked Science at the Exascale) project as the basis for their orchestration service.
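A much-reduced sketch of the model-composition step is shown below: each resource owner contributes a small description of nodes and links, the engine merges them into one topology graph, and path queries run over the merged graph. The node names (umd:dtn1, anl:dtn2, and so on) and the dictionary format are invented stand-ins for MRML, not the RAINS data model.

```python
# Hedged sketch (not the RAINS code) of the model-composition idea: merge per-owner
# resource descriptions into one graph, then answer path queries over the whole.
from collections import deque

site_models = [
    {"nodes": ["umd:dtn1", "umd:switch"], "links": [("umd:dtn1", "umd:switch")]},
    {"nodes": ["wan:umd-anl"], "links": [("umd:switch", "wan:umd-anl")]},
    {"nodes": ["anl:switch", "anl:dtn2"],
     "links": [("wan:umd-anl", "anl:switch"), ("anl:switch", "anl:dtn2")]},
]

def merge(models):
    graph = {}
    for m in models:
        for n in m["nodes"]:
            graph.setdefault(n, set())
        for a, b in m["links"]:
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set()).add(a)
    return graph

def find_path(graph, src, dst):          # breadth-first search over the merged graph
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

print(find_path(merge(site_models), "umd:dtn1", "anl:dtn2"))
```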
Distributed Power Allocation for Wireless Sensor Network Localization: A Potential Game Approach.
Ke, Mingxing; Li, Ding; Tian, Shiwei; Zhang, Yuli; Tong, Kaixiang; Xu, Yuhua
2018-05-08
The problem of distributed power allocation in wireless sensor network (WSN) localization systems is investigated in this paper, using the game theoretic approach. Existing research focuses on the minimization of the localization errors of individual agent nodes over all anchor nodes subject to power budgets. When the service area and the distribution of target nodes are considered, finding the optimal trade-off between localization accuracy and power consumption is a new critical task. To cope with this issue, we propose a power allocation game where each anchor node minimizes the square position error bound (SPEB) of the service area penalized by its individual power. Meanwhile, it is proven that the power allocation game is an exact potential game which has at least one pure Nash equilibrium (NE). In addition, we also prove the existence of an ϵ-equilibrium point, which is a refinement of NE, and show that the better-response dynamic approach can reach this end solution. Analytical and simulation results demonstrate that: (i) when prior distribution information is available, the proposed strategies have better localization accuracy than the uniform strategies; (ii) when prior distribution information is unknown, the performance of the proposed strategies outperforms power management strategies based on the second-order cone program (SOCP) for particular agent nodes after obtaining the estimated distribution of agent nodes. In addition, the proposed strategies also provide an instructional trade-off between power consumption and localization accuracy.
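The better-response dynamics can be sketched in a few lines. In the toy below, each anchor repeatedly lowers its own cost, taken as a shared localization-error surrogate plus an individual power penalty, until no anchor can improve; the error expression is a deliberately crude stand-in for the SPEB, and the weights, penalty, and power levels are invented.

```python
# Hedged, highly simplified sketch of better-response dynamics in a potential game:
# each anchor's cost is a shared error surrogate plus its own power penalty.
def shared_error(powers, weights):
    return 1.0 / (1.0 + sum(w * p for w, p in zip(weights, powers)))

def anchor_cost(i, powers, weights, penalty):
    return shared_error(powers, weights) + penalty * powers[i]

def better_response_dynamics(n_anchors, weights, penalty, levels, max_rounds=100):
    powers = [0.0] * n_anchors
    for _ in range(max_rounds):
        changed = False
        for i in range(n_anchors):
            current = anchor_cost(i, powers, weights, penalty)
            for p in levels:                          # discrete power levels
                trial = powers[:i] + [p] + powers[i + 1:]
                cost = anchor_cost(i, trial, weights, penalty)
                if cost < current - 1e-12:            # strict improvement for anchor i
                    powers, current, changed = trial, cost, True
        if not changed:                               # no anchor can improve: equilibrium
            return powers
    return powers

print(better_response_dynamics(3, weights=[1.0, 0.5, 0.2], penalty=0.05,
                               levels=[0.0, 0.5, 1.0, 1.5, 2.0]))
```

Because each anchor's cost differs from a common potential (shared error plus the summed power penalties) only through its own power term, every strict improvement decreases that potential, which is why such dynamics terminate at an equilibrium on a finite strategy set.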
Evolution of the Global Space Geodesy Network
NASA Astrophysics Data System (ADS)
Pearlman, Michael R.; Bianco, Giuseppe; Ipatov, Alexander; Ma, Chopo; Neilan, Ruth; Noll, Carey; Park, Jong Uk; Pavlis, Erricos; Wetzel, Scott
2013-04-01
The improvements in the reference frame and other space geodesy data products spelled out in the GGOS 2020 plan will evolve over time as new space geodesy sites enhance the global distribution of the network and new technologies are implemented at the sites thus enabling improved data processing and analysis. The goal of 30 globally distributed core sites with VLBI, SLR, GNSS and DORIS (where available) will take time to materialize. Co-location sites with less than the full core complement will continue to play a very important role in filling out the network while it is evolving and even after full implementation. GGOS through its Call for Participation, bi-lateral and multi-lateral discussions and work through the scientific Services has been encouraging current groups to upgrade and new groups to join the activity. This talk will give an update on the current expansion of the global network and the projection for the network configuration that we forecast over the next 10 years.
The SysMan monitoring service and its management environment
NASA Astrophysics Data System (ADS)
Debski, Andrzej; Janas, Ekkehard
1996-06-01
Management of modern information systems is becoming more and more complex. There is a growing need for powerful, flexible and affordable management tools to assist system managers in maintaining such systems. It is at the same time evident that effective management should integrate network management, system management and application management in a uniform way. Object oriented OSI management architecture with its four basic modelling concepts (information, organization, communication and functional models) together with widely accepted distribution platforms such as ANSA/CORBA, constitutes a reliable and modern framework for the implementation of a management toolset. This paper focuses on the presentation of concepts and implementation results of an object oriented management toolset developed and implemented within the framework of the ESPRIT project 7026 SysMan. An overview is given of the implemented SysMan management services including the System Management Service, Monitoring Service, Network Management Service, Knowledge Service, Domain and Policy Service, and the User Interface. Special attention is paid to the Monitoring Service which incorporates the architectural key entity responsible for event management. Its architecture and building components, especially filters, are emphasized and presented in detail.
A design of wireless sensor networks for a power quality monitoring system.
Lim, Yujin; Kim, Hak-Man; Kang, Sanggil
2010-01-01
Power grids deal with the business of generation, transmission, and distribution of electric power. Recently, interest in power quality in electrical distribution systems has increased rapidly. In Korea, the communication network to deliver voltage, current, and temperature measurements gathered from pole transformers to remote monitoring centers employs cellular mobile technology. Due to high cost of the cellular mobile technology, power quality monitoring measurements are limited and data gathering intervals are large. This causes difficulties in providing the power quality monitoring service. To alleviate the problems, in this paper we present a communication infrastructure to provide low cost, reliable data delivery. The communication infrastructure consists of wired connections between substations and monitoring centers, and wireless connections between pole transformers and substations. For the wireless connection, we employ a wireless sensor network and design its corresponding data forwarding protocol to improve the quality of data delivery. For the design, we adopt a tree-based data forwarding protocol in order to customize the distribution pattern of the power quality information. We verify the performance of the proposed data forwarding protocol quantitatively using the NS-2 network simulator.
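A minimal sketch of tree-based forwarding toward a sink is given below; the parent table, node names, and readings are invented, and a real protocol additionally handles issues such as link failures and duty cycling that the sketch ignores.

```python
# Hedged sketch of tree-based forwarding of the kind such a protocol customizes:
# each pole-transformer node knows only its parent, and a measurement hops
# parent-to-parent until it reaches the substation sink.
parent = {"pole3": "pole2", "pole2": "pole1", "pole1": "substation"}

def forward_to_sink(origin, reading, parent, sink="substation"):
    """Return the path a reading travels from origin to the sink."""
    path, node = [origin], origin
    while node != sink:
        node = parent[node]
        path.append(node)
    return path, reading

print(forward_to_sink("pole3", {"voltage": 229.7, "current": 12.4}, parent))
```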
A Case for Open Network Health Systems: Systems as Networks in Public Mental Health.
Rhodes, Michael Grant; de Vries, Marten W
2017-01-08
Increases in incidents involving so-called confused persons have brought attention to the potential costs of recent changes to public mental health (PMH) services in the Netherlands. Decentralized under the (Community) Participation Act (2014), local governments must find resources to compensate for reduced central funding to such services or "innovate." But innovation, even when pressure for change is intense, is difficult. This perspective paper describes experience during and after an investigation into a particularly violent incident and murder. The aim was to provide recommendations to improve the functioning of local PMH services. The investigation concluded that no specific failure by an individual professional or service provider facility led to the murder. Instead, also as a result of the Participation Act that severed communication lines between individuals and organizations, information sharing failures were likely to have reduced system level capacity to identify risks. The methods and analytical frameworks employed to reach this conclusion also lead to discussion as to the plausibility of an unconventional solution. If improving communication is the primary problem, non-hierarchical information and organizational networks arise as possible and innovative system solutions. The proposal for debate is that traditional "health system" definitions, literature and narratives, and operating assumptions in public (mental) health are 'locked in', constraining technical and organizational innovations. If we view a "health system" as an adaptive system of economic and social "networks," it becomes clear that the current orthodox solution, the so-called integrated health system, typically results in a "centralized hierarchical" or "tree" network. An overlooked alternative that breaks out of the established policy narratives is the view of a 'health system' as a non-hierarchical organizational structure or 'Open Network.' In turn, this opens new technological and organizational possibilities in seeking policy solutions, and suggests an alternative governance model of huge potential value in public health both locally and globally. © 2017 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Health sector reform in Brazil: a case study of inequity.
Almeida, C; Travassos, C; Porto, S; Labra, M E
2000-01-01
Health sector reform in Brazil built the Unified Health System according to a dense body of administrative instruments for organizing decentralized service networks and institutionalizing a complex decision-making arena. This article focuses on the equity in health care services. Equity is defined as a principle governing distributive functions designed to reduce or offset socially unjust inequalities, and it is applied to evaluate the distribution of financial resources and the use of health services. Even though in the Constitution the term "equity" refers to equal opportunity of access for equal needs, the implemented policies have not guaranteed these rights. Underfunding, fiscal stress, and lack of priorities for the sector have contributed to a progressive deterioration of health care services, with continuing regressive tax collection and unequal distribution of financial resources among regions. The data suggest that despite regulatory measures to increase efficiency and reduce inequalities, delivery of health care services remains extremely unequal across the country. People in lower income groups experience more difficulties in getting access to health services. Utilization rates vary greatly by type of service among income groups, positions in the labor market, and levels of education.
Neel, Maile; Tumas, Hayley R; Marsden, Brittany W
2014-01-01
We apply a comprehensive suite of graph theoretic metrics to illustrate how landscape connectivity can be effectively incorporated into conservation status assessments and in setting conservation objectives. These metrics allow conservation practitioners to evaluate and quantify connectivity in terms of representation, resiliency, and redundancy, and the approach can be applied in spite of incomplete knowledge of species-specific biology and dispersal processes. We demonstrate the utility of the graph metrics by evaluating changes in distribution and connectivity that would result from implementing two conservation plans for three endangered plant species (Erigeron parishii, Acanthoscyphus parishii var. goodmaniana, and Eriogonum ovalifolium var. vineum) relative to connectivity under current conditions. Although the distributions of the species differ from one another in terms of extent and specific location of occupied patches within the study landscape, the spatial scale of potential connectivity in existing networks was strikingly similar for Erigeron and Eriogonum, but differed for Acanthoscyphus. Specifically, patches of the first two species were more regularly distributed, whereas subsets of patches of Acanthoscyphus were clustered into more isolated components. Reserves based on US Fish and Wildlife Service critical habitat designation would not greatly contribute to maintaining connectivity; they include 83-91% of the extant occurrences and >92% of the areal extent of each species. Effective connectivity remains within 10% of that in the whole network for all species. A Forest Service habitat management strategy excluded up to 40% of the occupied habitat of each species, resulting in both range reductions and loss of occurrences from the central portions of each species' distribution. Overall effective network connectivity was reduced to 62-74% of the full networks. The distance at which each CHMS network first became fully connected was reduced relative to the full network in Erigeron and Acanthoscyphus due to exclusion of peripheral patches, but was slightly increased for Eriogonum. Distances at which networks were sensitive to loss of connectivity due to the presence of non-redundant connections were affected mostly for Acanthoscyphus. Of most concern was that the range of distances at which lack of redundancy yielded high risk was much greater than in the full network. Through this in-depth example evaluating connectivity using a comprehensive suite of developed graph theoretic metrics, we establish an approach as well as provide sample interpretations of subtle variations in connectivity that conservation managers can incorporate into planning.
Tumas, Hayley R.; Marsden, Brittany W.
2014-01-01
We apply a comprehensive suite of graph theoretic metrics to illustrate how landscape connectivity can be effectively incorporated into conservation status assessments and in setting conservation objectives. These metrics allow conservation practitioners to evaluate and quantify connectivity in terms of representation, resiliency, and redundancy, and the approach can be applied in spite of incomplete knowledge of species-specific biology and dispersal processes. We demonstrate the utility of the graph metrics by evaluating changes in distribution and connectivity that would result from implementing two conservation plans for three endangered plant species (Erigeron parishii, Acanthoscyphus parishii var. goodmaniana, and Eriogonum ovalifolium var. vineum) relative to connectivity under current conditions. Although the distributions of the species differ from one another in terms of extent and specific location of occupied patches within the study landscape, the spatial scale of potential connectivity in existing networks was strikingly similar for Erigeron and Eriogonum, but differed for Acanthoscyphus. Specifically, patches of the first two species were more regularly distributed, whereas subsets of patches of Acanthoscyphus were clustered into more isolated components. Reserves based on US Fish and Wildlife Service critical habitat designation would not greatly contribute to maintaining connectivity; they include 83–91% of the extant occurrences and >92% of the areal extent of each species. Effective connectivity remains within 10% of that in the whole network for all species. A Forest Service habitat management strategy excluded up to 40% of the occupied habitat of each species, resulting in both range reductions and loss of occurrences from the central portions of each species' distribution. Overall effective network connectivity was reduced to 62–74% of the full networks. The distance at which each CHMS network first became fully connected was reduced relative to the full network in Erigeron and Acanthoscyphus due to exclusion of peripheral patches, but was slightly increased for Eriogonum. Distances at which networks were sensitive to loss of connectivity due to the presence of non-redundant connections were affected mostly for Acanthoscyphus. Of most concern was that the range of distances at which lack of redundancy yielded high risk was much greater than in the full network. Through this in-depth example evaluating connectivity using a comprehensive suite of developed graph theoretic metrics, we establish an approach as well as provide sample interpretations of subtle variations in connectivity that conservation managers can incorporate into planning. PMID:25320685
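One of the simplest metrics described, the smallest inter-patch distance at which the network becomes a single connected component, can be sketched as follows; the patch coordinates and the brute-force union-find construction are illustrative assumptions, not the authors' toolchain.

```python
# Hedged sketch: sweep a dispersal distance, connect habitat patches closer than
# that distance, and report the smallest distance at which the patch network
# collapses into a single connected component.
import math

def components(patches, threshold):
    ids = list(patches)
    parent = {p: p for p in ids}
    def find(p):                               # union-find with path halving
        while parent[p] != p:
            parent[p] = parent[parent[p]]
            p = parent[p]
        return p
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if math.dist(patches[a], patches[b]) <= threshold:
                parent[find(a)] = find(b)
    return len({find(p) for p in ids})

def first_fully_connected(patches, distances):
    for d in sorted(distances):
        if components(patches, d) == 1:
            return d
    return None

patches = {"p1": (0, 0), "p2": (3, 0), "p3": (3, 4), "p4": (10, 10)}
print(first_fully_connected(patches, distances=range(1, 15)))
```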
ISLE: Intelligent Selection of Loop Electronics. A CLIPS/C++/INGRES integrated application
NASA Technical Reports Server (NTRS)
Fischer, Lynn; Cary, Judson; Currie, Andrew
1990-01-01
The Intelligent Selection of Loop Electronics (ISLE) system is an integrated knowledge-based system that is used to configure, evaluate, and rank possible network carrier equipment known as Digital Loop Carrier (DLC), which will be used to meet the demands of forecasted telephone services. Determining the best carrier systems and carrier architectures, while minimizing the cost, meeting corporate policies and addressing area service demands, has become a formidable task. Network planners and engineers use the ISLE system to assist them in this task of selecting and configuring the appropriate loop electronics equipment for future telephone services. The ISLE application is an integrated system consisting of a knowledge base, implemented in CLIPS (a planner application), C++, and an object database created from existing INGRES database information. The embeddability, performance, and portability of CLIPS provided us with a tool with which to capture, clarify, and refine corporate knowledge and distribute this knowledge within a larger functional system to network planners and engineers throughout U S WEST.
A Distributed User Information System
1990-03-01
Steven D. Miller, Scott Carson, and Leo Mark; Institute for Advanced Computer Studies and Department of Computer Science, University of Maryland, College Park, MD 20742. Current user information database technology ...
NASA Astrophysics Data System (ADS)
Spinuso, A.; Trani, L.; Rives, S.; Thomy, P.; Euchner, F.; Schorlemmer, D.; Saul, J.; Heinloo, A.; Bossu, R.; van Eck, T.
2009-04-01
The Network of Research Infrastructures for European Seismology (NERIES) is a European Commission (EC) project whose focus is networking together seismological observatories and research institutes into one integrated European infrastructure that provides access to data and data products for research. Seismological institutes and organizations in European and Mediterranean countries maintain large, geographically distributed data archives; this scenario therefore suggested a design approach based on the concept of an internet service oriented architecture (SOA) to establish a cyberinfrastructure for distributed and heterogeneous data streams and services. Moreover, one of the goals of NERIES is to design and develop a Web portal that acts as the uppermost layer of the infrastructure and provides rendering capabilities for the underlying sets of data. The Web services that are currently being designed and implemented will deliver data that has been adapted to appropriate formats. The parametric information about a seismic event is delivered using a seismology-specific Extensible Markup Language (XML) format called QuakeML (https://quake.ethz.ch/quakeml), which has been formalized and implemented in coordination with global earthquake-information agencies. Uniform Resource Identifiers (URIs) are used to assign identifiers to (1) seismic-event parameters described by QuakeML, and (2) generic resources, for example, authorities, location providers, location methods, software adopted, and so on, described by use of a data model constructed with the resource description framework (RDF) and accessible as a service. The European-Mediterranean Seismological Center (EMSC) has implemented a unique event identifier (UNID) that will create the seismic event URI used by the QuakeML data model. Access to data such as broadband waveforms, accelerometric data and station inventories will also be provided through a set of Web services that will wrap the middleware used by the seismological observatory or institute that is supplying the data. Each application of the portal consists of a Java-based JSR-168-standard portlet (often provided with interactive maps for data discovery). In specific cases, it will be possible to distribute the deployment of the portlets among the data providers, such as seismological agencies, because of the adoption, within the distributed architecture of the NERIES portal, of the Web Services for Remote Portlets (WSRP) standard for presentation-oriented web services. The purpose of the portal is to provide the user with an environment where he or she can browse and retrieve the data of interest, offering a set of shopping carts with storage and management facilities. This approach involves having the user interact with dedicated tools in order to compose personalized datasets that can be downloaded or combined with other information available either through the NERIES network of Web services or through the user's own carts. Administrative applications also are provided to perform monitoring tasks such as retrieving service statistics or scheduling submitted data requests. An administrative tool is included that allows the RDF model to be extended, within certain constraints, with new classes and properties.
NASA Astrophysics Data System (ADS)
Spinuso, A.; Trani, L.; Rives, S.; Thomy, P.; Euchner, F.; Schorlemmer, D.; Saul, J.; Heinloo, A.; Bossu, R.; van Eck, T.
2008-12-01
The Network of Research Infrastructures for European Seismology (NERIES) is a European Commission (EC) project whose focus is networking together seismological observatories and research institutes into one integrated European infrastructure that provides access to data and data products for research. Seismological institutes and organizations in European and Mediterranean countries maintain large, geographically distributed data archives; this scenario therefore suggested a design approach based on the concept of an internet service oriented architecture (SOA) to establish a cyberinfrastructure for distributed and heterogeneous data streams and services. Moreover, one of the goals of NERIES is to design and develop a Web portal that acts as the uppermost layer of the infrastructure and provides rendering capabilities for the underlying sets of data. The Web services that are currently being designed and implemented will deliver data that has been adapted to appropriate formats. The parametric information about a seismic event is delivered using a seismology-specific Extensible Markup Language (XML) format called QuakeML (https://quake.ethz.ch/quakeml), which has been formalized and implemented in coordination with global earthquake-information agencies. Uniform Resource Identifiers (URIs) are used to assign identifiers to (1) seismic-event parameters described by QuakeML, and (2) generic resources, for example, authorities, location providers, location methods, software adopted, and so on, described by use of a data model constructed with the resource description framework (RDF) and accessible as a service. The European-Mediterranean Seismological Center (EMSC) has implemented a unique event identifier (UNID) that will create the seismic event URI used by the QuakeML data model. Access to data such as broadband waveforms, accelerometric data and station inventories will also be provided through a set of Web services that will wrap the middleware used by the seismological observatory or institute that is supplying the data. Each application of the portal consists of a Java-based JSR-168-standard portlet (often provided with interactive maps for data discovery). In specific cases, it will be possible to distribute the deployment of the portlets among the data providers, such as seismological agencies, because of the adoption, within the distributed architecture of the NERIES portal, of the Web Services for Remote Portlets (WSRP) standard for presentation-oriented web services. The purpose of the portal is to provide the user with an environment where he or she can browse and retrieve the data of interest, offering a set of shopping carts with storage and management facilities. This approach involves having the user interact with dedicated tools in order to compose personalized datasets that can be downloaded or combined with other information available either through the NERIES network of Web services or through the user's own carts. Administrative applications also are provided to perform monitoring tasks such as retrieving service statistics or scheduling submitted data requests. An administrative tool is included that allows the RDF model to be extended, within certain constraints, with new classes and properties.
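A small sketch of consuming QuakeML on the client side is given below; the inline document is a fabricated miniature example (not an actual EMSC response), and the namespace strings and element paths are assumptions chosen to mirror a typical QuakeML 1.2 layout.

```python
# Hedged sketch of parsing the kind of QuakeML payload such services deliver;
# the sample document and namespaces are illustrative, not a real response.
import xml.etree.ElementTree as ET

SAMPLE = """<q:quakeml xmlns:q="http://quakeml.org/xmlns/quakeml/1.2"
                       xmlns="http://quakeml.org/xmlns/bed/1.2">
  <eventParameters>
    <event publicID="smi:example.org/event/12345">
      <origin><time><value>2008-11-20T12:34:56Z</value></time>
        <latitude><value>38.10</value></latitude>
        <longitude><value>15.02</value></longitude></origin>
      <magnitude><mag><value>4.6</value></mag></magnitude>
    </event>
  </eventParameters>
</q:quakeml>"""

NS = {"bed": "http://quakeml.org/xmlns/bed/1.2"}

def summarize(quakeml_text):
    root = ET.fromstring(quakeml_text)
    events = []
    for ev in root.iter("{%s}event" % NS["bed"]):
        events.append({
            "id": ev.attrib.get("publicID"),
            "time": ev.findtext("bed:origin/bed:time/bed:value", namespaces=NS),
            "latitude": ev.findtext("bed:origin/bed:latitude/bed:value", namespaces=NS),
            "magnitude": ev.findtext("bed:magnitude/bed:mag/bed:value", namespaces=NS),
        })
    return events

print(summarize(SAMPLE))
```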
Limitation of degree information for analyzing the interaction evolution in online social networks
NASA Astrophysics Data System (ADS)
Shang, Ke-Ke; Yan, Wei-Sheng; Xu, Xiao-Ke
2014-04-01
Previously, many studies of online social networks simply analyzed the static topology, in which a friend relationship, once established, never disappears; this kind of static topology may not accurately reflect temporal interactions on online social services. In this study, we define four types of users and interactions in the interaction (dynamic) network. We found that active, disappeared, new and super nodes (users) have obviously different strength distribution properties, and this result can also be revealed by the degree characteristics of the unweighted interaction and friendship (static) networks. However, the active, disappeared, new and super links (interactions) can only be reflected by the strength distribution in the weighted interaction network. This result indicates the limitation of static topology data for analyzing social network evolution. In addition, our study uncovers approximately stable statistics for the dynamic social network, in which there is large variation in users and interaction intensity. Our findings not only verify the correctness of our definitions, but also help to study customer churn and evaluate the commercial value of valuable customers in online social networks.
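The underlying measurement contrast is easy to state concretely: node strength (summed interaction weights) versus plain degree. The toy sketch below computes both from an invented weighted edge list; it is only meant to show why the weighted interaction view can separate users that the unweighted view cannot.

```python
# Hedged sketch: node strength in a weighted interaction network versus degree in
# the unweighted view, computed from an invented edge list (u, v, weight).
from collections import Counter, defaultdict

interactions = [("a", "b", 5), ("a", "c", 1), ("b", "c", 2), ("d", "a", 7)]

strength, degree = defaultdict(int), defaultdict(int)
for u, v, w in interactions:
    for node in (u, v):
        degree[node] += 1
    strength[u] += w
    strength[v] += w

print(dict(strength))                       # weighted view
print(dict(degree))                         # unweighted view
print(Counter(strength.values()))           # strength distribution
```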
PKI security in large-scale healthcare networks.
Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos
2012-06-01
During the past few years, a number of PKIs (Public Key Infrastructures) have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, there is a plethora of challenges in these healthcare PKIs, especially for those deployed over large-scale healthcare networks. In this paper, we propose a PKI infrastructure to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI infrastructure addresses the trust issues that arise in a large-scale healthcare network including multi-domain PKI infrastructures.
Group-oriented coordination models for distributed client-server computing
NASA Technical Reports Server (NTRS)
Adler, Richard M.; Hughes, Craig S.
1994-01-01
This paper describes group-oriented control models for distributed client-server interactions. These models transparently coordinate requests for services that involve multiple servers, such as queries across distributed databases. Specific capabilities include: decomposing and replicating client requests; dispatching request subtasks or copies to independent, networked servers; and combining server results into a single response for the client. The control models were implemented by combining request broker and process group technologies with an object-oriented communication middleware tool. The models are illustrated in the context of a distributed operations support application for space-based systems.
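A compact sketch of the decompose/dispatch/combine pattern is shown below; the in-process server stubs, thread pool, and query form are invented for illustration and stand in for networked request-broker calls.

```python
# Hedged sketch of group-oriented scatter-gather: dispatch one client request to
# several independent servers and combine their partial results into one response.
from concurrent.futures import ThreadPoolExecutor

def make_server(name, rows):
    def query(predicate):                    # stand-in for a networked server call
        return [r for r in rows if predicate(r)]
    return name, query

servers = dict([make_server("east", [1, 4, 9]), make_server("west", [2, 8, 16])])

def distributed_query(predicate):
    """Dispatch the same predicate to every server and combine the partial results."""
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(lambda q: q(predicate), servers.values()))
    combined = []
    for part in partials:
        combined.extend(part)
    return sorted(combined)

print(distributed_query(lambda x: x % 2 == 0))   # [2, 4, 8, 16]
```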
A Content Markup Language for Data Services
NASA Astrophysics Data System (ADS)
Noviello, C.; Acampa, P.; Mango Furnari, M.
Network content delivery and documents sharing is possible using a variety of technologies, such as distributed databases, service-oriented applications, and so forth. The development of such systems is a complex job, because document life cycle involves a strong cooperation between domain experts and software developers. Furthermore, the emerging software methodologies, such as the service-oriented architecture and knowledge organization (e.g., semantic web) did not really solve the problems faced in a real distributed and cooperating settlement. In this chapter the authors' efforts to design and deploy a distribute and cooperating content management system are described. The main features of the system are a user configurable document type definition and a management middleware layer. It allows CMS developers to orchestrate the composition of specialized software components around the structure of a document. In this chapter are also reported some of the experiences gained on deploying the developed framework in a cultural heritage dissemination settlement.
Channel access schemes and fiber optic configurations for integrated-services local area networks
NASA Astrophysics Data System (ADS)
Nassehi, M. Mehdi
1987-03-01
Local Area Networks are in common use for data communications and have enjoyed great success. Recently, there is a growing interest in using a single network to support many applications in addition to traditional data traffic. These additional applications introduce new requirements in terms of volume of traffic and real-time delivery of data which are not met by existing networks. To satisfy these requirements, a high-bandwidth transmission medium, such as fiber optics, and a distributed channel access scheme for the efficient sharing of the bandwidth among the various applications are needed. As far as the throughput-delay requirements of the various applications are concerned, a network structure along with a distributed channel access scheme are proposed which incorporate appropriate scheduling policies for the transmission of outstanding messages on the network. A dynamic scheduling policy was devised which outperforms all existing policies in terms of minimizing the expected cost per message. A broadcast mechanism was devised for the efficient dissemination of all relevant information. Fiber optic technology is considered for the high-bandwidth transmission medium.
NASA Technical Reports Server (NTRS)
Nassehi, M. Mehdi
1987-01-01
Local Area Networks are in common use for data communications and have enjoyed great success. Recently, there is a growing interest in using a single network to support many applications in addition to traditional data traffic. These additional applications introduce new requirements in terms of volume of traffic and real-time delivery of data which are not met by existing networks. To satisfy these requirements, a high-bandwidth transmission medium, such as fiber optics, and a distributed channel access scheme for the efficient sharing of the bandwidth among the various applications are needed. As far as the throughput-delay requirements of the various applications are concerned, a network structure along with a distributed channel access scheme are proposed which incorporate appropriate scheduling policies for the transmission of outstanding messages on the network. A dynamic scheduling policy was devised which outperforms all existing policies in terms of minimizing the expected cost per message. A broadcast mechanism was devised for the efficient dissemination of all relevant information. Fiber optic technology is considered for the high-bandwidth transmission medium.
NSI directed to continue SPAN's functions
NASA Technical Reports Server (NTRS)
Rounds, Fred
1991-01-01
During a series of network management retreats in June and July 1990, representatives from NASA Headquarters Codes O and S agreed on networking roles and responsibilities for their respective organizations. The representatives decided that NASA Science Internet (NSI) will assume management of both the Space Physics Analysis Network (SPAN) and the NASA Science Network (NSN). SPAN is now known as the NSI/DECnet, and NSN is now known as the NSI/IP. Some management functions will be distributed between Ames Research Center (ARC) and Goddard Space Flight Center (GSFC). NSI at ARC has the lead role for requirements generation and network engineering. Advanced Applications and the Network Information Center are being developed at GSFC. GSFC will lead the NSI User Services, but NSI at Ames will continue to provide the User Services during the transition. The transition will be made as transparent as possible for the users. DECnet service will continue, but is now directly managed by NSI at Ames. NSI will continue to work closely with routing center managers at other NASA centers, and has formed a transition team to address the change in management. An NSI/DECnet working group has also been formed as a separate engineering group within NSI to plan the transition to Phase 5, DECnet's approach to Open Systems Interconnection (OSI). Transition is not expected for a year or more due to delays in product releases. Plans to upgrade speeds in tail circuits and the backbone are underway. The proposed baseline service for new connections is up to 56 Kbps; 9.6 Kbps lines will gradually be upgraded as requirements dictate. NSI is in the process of consolidating protocol traffic, tail circuits, and the backbone. Currently NSI's backbone is fractional T1; NSI will go to full T1 service as soon as it is feasible.
Ensuring Data Storage Security in Tree cast Routing Architecture for Sensor Networks
NASA Astrophysics Data System (ADS)
Kumar, K. E. Naresh; Sagar, U. Vidya; Waheed, Mohd. Abdul
2010-10-01
Recent advances in technology have made possible low-cost, low-power wireless sensors with efficient energy consumption. A network of such nodes can coordinate among themselves for distributed sensing and processing of certain data. We propose an architecture, known as Tree Cast, that provides a stateless solution for efficient routing in wireless sensor networks. We propose a unique method of address allocation, building up multiple disjoint trees which are geographically intertwined and rooted at the data sink. Using these trees, it is possible to route messages to and from the sink node without maintaining any routing state in the sensor nodes. In contrast to traditional solutions, where the IT services are under proper physical, logical and personnel controls, this routing architecture moves the application software and databases to large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute, however, poses many new security challenges which have not been well understood. In this paper, we focus on data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in this architecture, we propose an effective and flexible distributed scheme with two salient features, in contrast to its predecessors. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization, i.e., the identification of misbehaving server(s). Unlike most prior works, the new scheme further supports secure and efficient dynamic operations on data blocks, including data update, delete and append. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.
Raghuram, Jayaram; Miller, David J; Kesidis, George
2014-07-01
We propose a method for detecting anomalous domain names, with a focus on algorithmically generated domain names, which are frequently associated with malicious activities such as fast flux service networks, particularly for bot networks (or botnets), malware, and phishing. Our method learns a (null hypothesis) probability model from a large set of domain names that have been whitelisted by some reliable authority. Since these names are mostly assigned by humans, they are pronounceable, tend to have a distribution of characters, words, word lengths, and number of words that is typical of some language (mostly English), and often consist of words drawn from a known lexicon. Algorithmically generated domain names, by contrast, typically have distributions that are quite different from those of human-created domain names. We propose a fully generative model for the probability distribution of benign (whitelisted) domain names which can be used in an anomaly detection setting for identifying putative algorithmically generated domain names. Unlike other methods, our approach can make detections without considering any additional (latency producing) information sources often used to detect fast flux activity. Experiments on a publicly available, large data set of domain names associated with fast flux service networks show encouraging results relative to several baseline methods, with higher detection rates and low false positive rates.
Raghuram, Jayaram; Miller, David J.; Kesidis, George
2014-01-01
We propose a method for detecting anomalous domain names, with a focus on algorithmically generated domain names, which are frequently associated with malicious activities such as fast flux service networks, particularly for bot networks (or botnets), malware, and phishing. Our method learns a (null hypothesis) probability model from a large set of domain names that have been whitelisted by some reliable authority. Since these names are mostly assigned by humans, they are pronounceable, tend to have a distribution of characters, words, word lengths, and number of words that is typical of some language (mostly English), and often consist of words drawn from a known lexicon. Algorithmically generated domain names, by contrast, typically have distributions that are quite different from those of human-created domain names. We propose a fully generative model for the probability distribution of benign (whitelisted) domain names which can be used in an anomaly detection setting for identifying putative algorithmically generated domain names. Unlike other methods, our approach can make detections without considering any additional (latency producing) information sources often used to detect fast flux activity. Experiments on a publicly available, large data set of domain names associated with fast flux service networks show encouraging results relative to several baseline methods, with higher detection rates and low false positive rates. PMID:25685511
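To make the approach above concrete, the following is a minimal sketch and not the authors' full generative model: a character-bigram Markov model is trained on whitelisted names, and new names are scored by their average per-character log-likelihood, with low scores flagged as putative algorithmically generated domains. The example names, smoothing constant, and threshold are illustrative assumptions.

# Minimal sketch (not the authors' full model): a character-bigram Markov model
# trained on whitelisted domain names, used to flag low-likelihood names as
# putatively algorithmically generated. Names, smoothing, and threshold are
# illustrative assumptions.
import math
from collections import defaultdict

def train_bigram_model(benign_names, alpha=0.1):
    """Estimate smoothed P(next_char | prev_char) from whitelisted names."""
    counts = defaultdict(lambda: defaultdict(float))
    alphabet = set()
    for name in benign_names:
        name = "^" + name.lower().strip(".") + "$"   # start/end markers
        alphabet.update(name)
        for prev, nxt in zip(name, name[1:]):
            counts[prev][nxt] += 1.0
    model = {}
    for prev in alphabet:
        total = sum(counts[prev].values()) + alpha * len(alphabet)
        model[prev] = {c: (counts[prev][c] + alpha) / total for c in alphabet}
    return model, alphabet

def per_char_log_likelihood(model, alphabet, name, alpha=0.1):
    """Average log-probability per transition; low values suggest anomalous names."""
    name = "^" + name.lower().strip(".") + "$"
    fallback = 1.0 / len(alphabet)       # uniform probability for unseen characters
    logp, n = 0.0, 0
    for prev, nxt in zip(name, name[1:]):
        p = model.get(prev, {}).get(nxt, fallback)
        logp += math.log(p)
        n += 1
    return logp / max(n, 1)

if __name__ == "__main__":
    benign = ["google", "wikipedia", "weather", "university", "shopping", "library"]
    model, alphabet = train_bigram_model(benign)
    for candidate in ["openlibrary", "xq7zk3vpw9"]:
        score = per_char_log_likelihood(model, alphabet, candidate)
        print(candidate, round(score, 3), "anomalous" if score < -3.0 else "benign-like")

A model along the lines described in the abstract would additionally account for word segmentation, word lengths, and lexicon membership rather than character transitions alone.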
Enhancing the NS-2 Network Simulator for Near Real-Time Control Feedback and Distributed Simulation
2009-03-21
Analysis of a PWM Resonant Buck Chopper for Use as a Ship Service Converter Module
1999-01-01
zonal architecture [2] has a number of advantages over the current radial distribution architecture. The radial network includes generators supplying... Several representative topologies are considered in this section. The literature is replete with soft-switching dc-dc converter topologies and control... differs from a conventional PWM buck by the addition of a resonant network consisting of inductor Lr, capacitor Q, an auxiliary switch Sr, an auxiliary
A mission operations architecture for the 21st century
NASA Technical Reports Server (NTRS)
Tai, W.; Sweetnam, D.
1996-01-01
An operations architecture is proposed for low cost missions beyond the year 2000. The architecture consists of three elements: a service based architecture; a demand access automata; and distributed science hubs. The service based architecture is based on a set of standard multimission services that are defined, packaged and formalized by the deep space network and the advanced multi-mission operations system. The demand access automata is a suite of technologies which reduces the need to be in contact with the spacecraft, and thus reduces operating costs. The beacon signaling, the virtual emergency room, and the high efficiency tracking automata technologies are described. The distributed science hubs provide information system capabilities to the small science oriented flight teams: individual access to all traditional mission functions and services; multimedia intra-team communications, and automated direct transparent communications between the scientists and the instrument.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Chase Qishi; Zhu, Michelle Mengxia
The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows, as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific workflows with the convenience of a few mouse clicks while hiding the implementation and technical details from end users. Particularly, we will consider two types of applications with distinct performance requirements: data-centric and service-centric applications. For data-centric applications, the main workflow task involves large-volume data generation, cataloging, storage, and movement, typically from supercomputers or experimental facilities to a team of geographically distributed users; for service-centric applications, the main focus of the workflow is on data archiving, preprocessing, filtering, synthesis, visualization, and other application-specific analysis. We will conduct a comprehensive comparison of existing workflow systems and choose the best suited one with open-source code, a flexible system structure, and a large user base as the starting point for our development. Based on the chosen system, we will develop and integrate new components including a black-box design of computing modules, performance monitoring and prediction, and workflow optimization and reconfiguration, which are missing from existing workflow systems. A modular design for separating specification, execution, and monitoring aspects will be adopted to establish a common generic infrastructure suited for a wide spectrum of science applications.
We will further design and develop efficient workflow mapping and scheduling algorithms to optimize the workflow performance in terms of minimum end-to-end delay, maximum frame rate, and highest reliability. We will develop and demonstrate the SWAMP system in a local environment, the grid network, and the 100 Gbps Advanced Network Initiative (ANI) testbed. The demonstration will target scientific applications in climate modeling and high energy physics, and the functions to be demonstrated include workflow deployment, execution, steering, and reconfiguration. Throughout the project period, we will work closely with the science communities in the fields of climate modeling and high energy physics, including the Spallation Neutron Source (SNS) and Large Hadron Collider (LHC) projects, to mature the system for production use.
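As a rough illustration of the workflow mapping and scheduling problem described above, and not the SWAMP implementation itself, the following sketch list-schedules a workflow DAG onto networked compute nodes and reports the resulting end-to-end delay. The task names, work units, node speeds, and link delay are illustrative assumptions.

# Minimal sketch (not the SWAMP implementation): greedy list scheduling of a
# workflow DAG onto networked compute nodes to estimate end-to-end delay.
# Task costs, node speeds, and link delays below are illustrative assumptions.
from collections import defaultdict

def schedule(dag, work, speed, link_delay):
    """dag: task -> list of successor tasks; work: task -> abstract work units;
    speed: node -> work units per second; link_delay: seconds per dependency hop."""
    preds = defaultdict(list)
    for t, succs in dag.items():
        for s in succs:
            preds[s].append(t)
    finish, placement, order = {}, {}, []
    ready = [t for t in dag if not preds[t]]
    while ready:                                   # Kahn-style topological traversal
        t = ready.pop(0)
        order.append(t)
        for s in dag[t]:
            if all(p in order for p in preds[s]) and s not in order and s not in ready:
                ready.append(s)
    node_free = {n: 0.0 for n in speed}
    for t in order:
        data_ready = max([finish[p] + link_delay for p in preds[t]], default=0.0)
        # pick the node giving the earliest finish time for this task
        best = min(speed, key=lambda n: max(node_free[n], data_ready) + work[t] / speed[n])
        start = max(node_free[best], data_ready)
        finish[t] = start + work[t] / speed[best]
        node_free[best] = finish[t]
        placement[t] = best
    return placement, max(finish.values())

if __name__ == "__main__":
    dag = {"ingest": ["filter"], "filter": ["analyze", "visualize"],
           "analyze": [], "visualize": []}
    work = {"ingest": 4, "filter": 6, "analyze": 10, "visualize": 3}
    speed = {"clusterA": 2.0, "clusterB": 1.0}
    print(schedule(dag, work, speed, link_delay=0.5))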
NASA Astrophysics Data System (ADS)
Arenaccio, S.; Vernucci, A.; Padovani, R.; Arcidiacono, A.
Results are presented of a detailed comparative performance assessment between two candidate access solutions, FDMA and CDMA, for the provision of European Land-Mobile Satellite Services (LMSS). The design of the CDMA access system and the network architecture, system procedures, network control, operation in fading environments, and implementation aspects of the system are described. The CDMA system is shown to yield superior traffic capability, despite the absence of polarization reuse due to payload design, especially in the second-generation era (multiple spot-beams). In this case, the advantage was found to be largely dependent on the traffic distribution across spot beams. Power control techniques are proposed to cope with the geographical disadvantage suffered by mobile stations located at the beam borders and to compensate for fading.
NASA Technical Reports Server (NTRS)
Kratochvil, D.; Bowyer, J.; Bhushan, C.; Steinnagel, K.; Al-Kinani, G.
1983-01-01
The potential United States domestic telecommunications demand for satellite-provided customer premises voice, data and video services through the year 2000 was forecast, so that this information on service demand would be available to aid in NASA program planning. To accomplish this overall purpose the following objectives were achieved: development of a forecast of the total domestic telecommunications demand, identification of that portion of the telecommunications demand suitable for transmission by satellite systems, identification of that portion of the satellite market addressable by customer premises services (CPS) systems, identification of that portion of the satellite market addressable by Ka-band CPS systems, and postulation of a Ka-band CPS network on a nationwide and local level. The approach employed included the use of a variety of forecasting models, a market distribution model and a network optimization model. Forecasts were developed for 1980, 1990, and 2000; voice, data and video services; terrestrial and satellite delivery modes; and C, Ku and Ka-bands.
NASA Astrophysics Data System (ADS)
Kratochvil, D.; Bowyer, J.; Bhushan, C.; Steinnagel, K.; Al-Kinani, G.
1983-08-01
The potential United States domestic telecommunications demand for satellite-provided customer premises voice, data and video services through the year 2000 was forecast, so that this information on service demand would be available to aid in NASA program planning. To accomplish this overall purpose the following objectives were achieved: development of a forecast of the total domestic telecommunications demand, identification of that portion of the telecommunications demand suitable for transmission by satellite systems, identification of that portion of the satellite market addressable by customer premises services (CPS) systems, identification of that portion of the satellite market addressable by Ka-band CPS systems, and postulation of a Ka-band CPS network on a nationwide and local level. The approach employed included the use of a variety of forecasting models, a market distribution model and a network optimization model. Forecasts were developed for 1980, 1990, and 2000; voice, data and video services; terrestrial and satellite delivery modes; and C, Ku and Ka-bands.
A distributed incentive compatible pricing mechanism for P2P networks
NASA Astrophysics Data System (ADS)
Zhang, Jie; Zhao, Zheng; Xiong, Xiao; Shi, Qingwei
2007-09-01
Peer-to-Peer (P2P) systems are currently receiving considerable interest. However, as experience with P2P networks shows, the selfish behaviors of peers may lead to serious problems in P2P networks, such as free-riding and white-washing. In order to solve these problems, increasing attention has been given to reputation system design in the study of P2P networks. Most existing work uses probabilistic estimation or social networks to evaluate the trustworthiness of a peer to others. However, these models cannot be efficient all the time. In this paper, our aim is to provide a general mechanism, in the style of the Vickrey-Clarke-Groves family, that maximizes the social welfare of a P2P network while assuming every peer is rational and selfish, i.e., concerned only with its own outcome. This mechanism has some desirable properties and uses an O(n) algorithm: (1) incentive compatibility, every peer truthfully reports its connection type; (2) individual rationality; and (3) full decentralization, achieved by designing a multiple-principal multiple-agent model that considers the service provider and service requester individually.
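For readers unfamiliar with the mechanism family invoked above, the following is a generic Vickrey-Clarke-Groves computation over a small outcome set, not the paper's O(n) mechanism: the outcome maximizing reported welfare is chosen and each peer pays the externality it imposes on the others, which is what makes truthful reporting optimal. The peers, outcomes, and valuations are illustrative assumptions.

# Minimal sketch (not the paper's O(n) mechanism): a generic Vickrey-Clarke-Groves
# computation over a small outcome set, illustrating incentive-compatible pricing.
# Peers, outcomes, and reported valuations are illustrative assumptions.

def vcg(outcomes, valuations):
    """outcomes: list of candidate allocations; valuations: peer -> {outcome: value}.
    Returns the welfare-maximizing outcome and each peer's VCG payment."""
    def welfare(outcome, exclude=None):
        return sum(v[outcome] for p, v in valuations.items() if p != exclude)

    best = max(outcomes, key=welfare)
    payments = {}
    for peer in valuations:
        # welfare others would get if this peer were absent ...
        best_without = max(outcomes, key=lambda o: welfare(o, exclude=peer))
        # ... minus welfare others actually get: the externality the peer imposes
        payments[peer] = welfare(best_without, exclude=peer) - welfare(best, exclude=peer)
    return best, payments

if __name__ == "__main__":
    outcomes = ["serve_A", "serve_B", "serve_C"]
    valuations = {"A": {"serve_A": 8, "serve_B": 0, "serve_C": 0},
                  "B": {"serve_A": 0, "serve_B": 5, "serve_C": 0},
                  "C": {"serve_A": 0, "serve_B": 0, "serve_C": 3}}
    print(vcg(outcomes, valuations))   # A is served and pays 5, the next-best welfare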
7 CFR 251.4 - Availability of commodities.
Code of Federal Regulations, 2010 CFR
2010-01-01
... existing food bank networks and other organizations whose ongoing primary function is to facilitate the... Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND POLICIES-FOOD DISTRIBUTION THE EMERGENCY FOOD ASSISTANCE PROGRAM § 251.4...
An Optimization of the Maintenance Assets Distribution Network in the Argentine Air Force
2015-03-26
Air Force (2010). Manual de Conduccion Logistica. Buenos Aires: HQ Argentine Air Force. Argentine Air Force (2012). El vuelo del condor: 1912-2012... recommendation was made to consider organic or private transportation and reduce transportation time in order to improve responsiveness and drive down... determine overall transportation demand and capacity required for a defined level of service, and to evaluate the tradeoffs between costs and service
Telecommunications Services Required by Distributed and Interconnected Office Centers.
1980-07-20
systems and communications management systems which are on the market. It is expected that these systems and the capabilities they offer will be available... saw the possibilities of marketing the service, but was delayed in its implementation because the high capacity communication network to support the...
BACTrack: A Surveillance Technique for Detecting and Locating Bioagent Attacks
2003-06-10
Implementation • Location History – location tracking/storage using the cell-phone network (geo-location mandated by 2006) • Subscription Services... Reporting – user reports symptoms through an automated cell-phone interface using a password; individual reports only releasable with password. Summary... Earlier detection and location relative to medical surveillance • The cell-phone location-based service market can offer a means to implement BACTrack and to distribute its costs
ERIC Educational Resources Information Center
Cavenaugh, David
This document presents a review of the current consumer relations activites of the National Library Service (NLS) for the Blind and Physically Handicapped of the Library of Congress, and an overall plan to improve NLS receipt of user suggestions, comments, opinions, or complaints through libraries which form the nationwide NLS distribution system.…
Latif, Rabia; Abbas, Haider; Assar, Saïd
2014-11-01
Wireless Body Area Networks (WBANs) have emerged as a promising technology that has shown enormous potential in improving the quality of healthcare, and has thus found a broad range of medical applications from ubiquitous health monitoring to emergency medical response systems. The huge amount of highly sensitive data collected and generated by WBAN nodes requires a scalable and secure storage and processing infrastructure. Given the limited resources of WBAN nodes for storage and processing, the integration of WBANs and cloud computing may provide a powerful solution. However, despite the benefits of cloud-assisted WBAN, several security issues and challenges remain. Among these, data availability is the most nagging security issue. The most serious threat to data availability is a distributed denial of service (DDoS) attack that directly affects the all-time availability of a patient's data. The existing solutions for standalone WBANs and sensor networks are not applicable in the cloud. The purpose of this review paper is to identify the most threatening types of DDoS attacks affecting the availability of a cloud-assisted WBAN and review the state-of-the-art detection mechanisms for the identified DDoS attacks.
A comparative study of 11 local health department organizational networks.
Merrill, Jacqueline; Keeling, Jonathan W; Carley, Kathleen M
2010-01-01
Although the nation's local health departments (LHDs) share a common mission, variability in administrative structures is a barrier to identifying common, optimal management strategies. There is a gap in understanding what unifying features LHDs share as organizations that could be leveraged systematically for achieving high performance. To explore sources of commonality and variability in a range of LHDs by comparing intraorganizational networks. We used organizational network analysis to document relationships between employees, tasks, knowledge, and resources within LHDs, which may exist regardless of formal administrative structure. A national sample of 11 LHDs from seven states that differed in size, geographic location, and governance. Relational network data were collected via an on-line survey of all employees in 11 LHDs. A total of 1062 out of 1239 employees responded (84% response rate). Network measurements were compared using coefficient of variation. Measurements were correlated with scores from the National Public Health Performance Assessment and with LHD demographics. Rankings of tasks, knowledge, and resources were correlated across pairs of LHDs. We found that 11 LHDs exhibited compound organizational structures in which centralized hierarchies were coupled with distributed networks at the point of service. Local health departments were distinguished from random networks by a pattern of high centralization and clustering. Network measurements were positively associated with performance for 3 of 10 essential services (r > 0.65). Patterns in the measurements suggest how LHDs adapt to the population served. Shared network patterns across LHDs suggest where common organizational management strategies are feasible. This evidence supports national efforts to promote uniform standards for service delivery to diverse populations.
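As a simple illustration of the kind of intraorganizational network measurements compared in such studies (a sketch only, not the study's analysis pipeline), the following computes Freeman degree centralization and average clustering for a toy employee communication network; the node and edge names are assumptions.

# Minimal sketch (not the study's analysis pipeline): computing clustering and
# degree centralization for an intra-organizational network, as could be compared
# across health departments. The toy employee communication edges are assumptions.
import networkx as nx

def degree_centralization(g):
    """Freeman degree centralization: 1.0 for a star, 0.0 for a fully regular graph."""
    n = g.number_of_nodes()
    if n < 3:
        return 0.0
    degrees = [d for _, d in g.degree()]
    max_d = max(degrees)
    return sum(max_d - d for d in degrees) / float((n - 1) * (n - 2))

if __name__ == "__main__":
    g = nx.Graph()
    g.add_edges_from([("director", "nurse1"), ("director", "nurse2"),
                      ("director", "epidemiologist"), ("nurse1", "nurse2"),
                      ("epidemiologist", "sanitarian")])
    print("centralization:", round(degree_centralization(g), 3))
    print("clustering:", round(nx.average_clustering(g), 3))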
Holding-time-aware asymmetric spectrum allocation in virtual optical networks
NASA Astrophysics Data System (ADS)
Lyu, Chunjian; Li, Hui; Liu, Yuze; Ji, Yuefeng
2017-10-01
Virtual optical networks (VONs) have been considered as a promising solution to support current high-capacity dynamic traffic and achieve rapid application deployment. Since most of the network services (e.g., high-definition video service, cloud computing, distributed storage) in VONs are provisioned by dedicated data centers and need different amounts of bandwidth in the two directions, the network traffic is mostly asymmetric. The common strategy of symmetric traffic provisioning in optical networks leads to a waste of spectrum resources under such traffic patterns. In this paper, we design a holding-time-aware asymmetric spectrum allocation module based on an SDON architecture, and an asymmetric spectrum allocation algorithm based on the module is proposed. To reduce the waste of spectrum resources, the algorithm attempts to reallocate the idle unidirectional spectrum slots in VONs, which arise from the asymmetry of services' bidirectional bandwidth. This part of the resources can be exploited by other requests, such as short-time non-VON requests. We also introduce a two-dimensional asymmetric resource model for maintaining information about idle VON spectrum resources in the spectrum and time domains. Moreover, a simulation is designed to evaluate the performance of the proposed algorithm, and results show that our asymmetric spectrum allocation algorithm reduces resource waste and blocking probability.
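A minimal sketch of the reallocation idea described above (assumptions throughout, not the paper's SDON module): when a VON's bidirectional allocation is asymmetric, the slots idle in the lighter direction can be lent to a short request, provided the request finishes before the VON's holding time expires. The data structure and numbers are illustrative assumptions.

# Minimal sketch: lend idle unidirectional slots of an asymmetric VON to short
# requests whose duration fits within the VON's remaining holding time.
def idle_unidirectional_slots(von, now):
    """von: dict with symmetric 'slots', per-direction 'demand', 'start', 'holding_time'."""
    remaining = von["start"] + von["holding_time"] - now
    idle = {d: von["slots"] - von["demand"][d] for d in ("up", "down")}
    return idle, remaining

def admit_short_request(von, now, direction, slots_needed, duration):
    """Admit a non-VON request onto idle slots only if it ends before the VON reclaims them."""
    idle, remaining = idle_unidirectional_slots(von, now)
    return idle[direction] >= slots_needed and duration <= remaining

if __name__ == "__main__":
    von = {"slots": 8, "demand": {"up": 3, "down": 8}, "start": 0.0, "holding_time": 100.0}
    print(admit_short_request(von, now=20.0, direction="up", slots_needed=4, duration=60.0))  # True
    print(admit_short_request(von, now=20.0, direction="up", slots_needed=4, duration=90.0))  # False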
Network harness: bundles of routes in public transport networks
NASA Astrophysics Data System (ADS)
Berche, B.; von Ferber, C.; Holovatch, T.
2009-12-01
Public transport routes sharing the same grid of streets and tracks are often found to proceed in parallel along shorter or longer sequences of stations. Similar phenomena are observed in other networks built with space consuming links such as cables, vessels, pipes, neurons, etc. In the case of public transport networks (PTNs) this behavior may be easily worked out on the basis of sequences of stations serviced by each route. To quantify this behavior we use the recently introduced notion of network harness. It is described by the harness distribution P(r, s): the number of sequences of s consecutive stations that are serviced by r parallel routes. For certain PTNs that we have analyzed we observe that the harness distribution may be described by power laws. These power laws indicate a certain level of organization and planning which may be driven by the need to minimize the costs of infrastructure and secondly by the fact that points of interest tend to be clustered in certain locations of a city. This effect may be seen as a result of the strong interdependence of the evolutions of both the city and its PTN. To further investigate the significance of the empirical results we have studied one- and two-dimensional models of randomly placed routes modeled by different types of walks. While in one dimension an analytic treatment was successful, the two dimensional case was studied by simulations showing that the empirical results for real PTNs deviate significantly from those expected for randomly placed routes.
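To illustrate how the harness distribution can be computed from route descriptions (a sketch over assumed toy data, not the authors' code), the following counts, for each sequence length s, how many station sequences are serviced by r parallel routes.

# Minimal sketch: estimating the harness distribution P(r, s), the number of
# station sequences of length s serviced by r parallel routes, from routes given
# as ordered station lists (toy data below).
from collections import Counter

def _contains(route, window):
    s = len(window)
    return any(tuple(route[i:i + s]) == window for i in range(len(route) - s + 1))

def harness_distribution(routes, max_s=4):
    """routes: list of ordered station lists. Returns Counter {(r, s): count}."""
    P = Counter()
    for s in range(2, max_s + 1):
        windows = set()
        for route in routes:                        # every length-s window of consecutive stations
            for i in range(len(route) - s + 1):
                windows.add(tuple(route[i:i + s]))
        seen = set()
        for w in windows:
            canon = min(w, tuple(reversed(w)))      # direction-insensitive de-duplication
            if canon in seen:
                continue
            seen.add(canon)
            r = sum(1 for route in routes
                    if _contains(route, w) or _contains(route, tuple(reversed(w))))
            P[(r, s)] += 1
    return P

if __name__ == "__main__":
    routes = [["a", "b", "c", "d", "e"], ["x", "b", "c", "d", "y"], ["a", "b", "q"]]
    for (r, s), n in sorted(harness_distribution(routes).items()):
        print(f"P(r={r}, s={s}) = {n}")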
Santos, Cleuzieli Moraes Dos; Barbieri, Ana Rita; Gonçalves, Crhistinne Cavalheiro Maymone; Tsuha, Daniel Henrique
2017-06-12
In the context of public health policies, healthcare network is a strategy that aims to promote people's equitable access to services and to reduce fragmentation. The aim of this study was to evaluate the degree of development of components in a healthcare network for hypertension. This was an ex-ante, cross-sectional evaluative study focused on the implementation of a healthcare network for persons with chronic diseases, applying a questionnaire to 17 health administrators from the municipalities (counties) comprising the largest health district in Mato Grosso do Sul State, Brazil. The questionnaire consisted of 65 questions covering the five components: Primary Health Care; Specialized Care; Support Systems; Logistics Systems; and Governance. The study conducted descriptive statistical tests and the classification of services provided in each component using the Friedman test, followed by the Student-Newman-Keuls post hoc test, with significance set at 5%. The results were distributed in quartiles and presented in boxplot graphs. Correlations were established between the dimensions. According to the findings, the components are in an intermediate degree of implementation, with low development of the items needed for establishing networks. Primary Health Care does not coordinate the care, and the Specialized Care and Governance components showed the worst results. The findings indicate predominance of installed services that still fall short of the necessary practices for establishing healthcare networks, which can compromise their implementation.
Twitter web-service for soft agent reporting in persistent surveillance systems
NASA Astrophysics Data System (ADS)
Rababaah, Haroun; Shirkhodaie, Amir
2010-04-01
Persistent surveillance is an intricate process requiring monitoring, gathering, processing, tracking, and characterization of many spatiotemporal events occurring concurrently. Data associated with events can be readily attained by networking of hard (physical) sensors. Sensors may have homogeneous or heterogeneous (hybrid) sensing modalities with different communication bandwidth requirements. Complementary to hard sensors are human observers or "soft sensors" that can report occurrences of evolving events via different communication devices (e.g., texting, cell phones, emails, instant messaging, etc.) to the command and control center. However, networking human observers in an ad hoc way is a rather difficult task. In this paper, we present a Twitter web-service for soft agent reporting in persistent surveillance systems (called Web-STARS). The objective of this web-service is to rapidly aggregate multi-source human observations in hybrid sensor networks. With the availability of the Twitter social network, such a human networking concept can not only be realized for large-scale persistent surveillance systems (PSS), but can also be employed, with proper interfaces, to expedite rapid event reporting by human observers. The proposed technique is particularly suitable for large-scale persistent surveillance systems with distributed soft and hard sensor networks. The efficiency and effectiveness of the proposed technique is measured experimentally by conducting several simulated persistent surveillance scenarios. It is demonstrated that fusing information from hard and soft agents improves understanding of the common operating picture and enhances situational awareness.
Dalley, C; Basarir, H; Wright, J G; Fernando, M; Pearson, D; Ward, S E; Thokula, P; Krishnankutty, A; Wilson, G; Dalton, A; Talley, P; Barnett, D; Hughes, D; Porter, N R; Reilly, J T; Snowden, J A
2015-04-01
Specialist Integrated Haematological Malignancy Diagnostic Services (SIHMDS) were introduced as a standard of care within the UK National Health Service to reduce diagnostic error and improve clinical outcomes. Two broad models of service delivery have become established: 'co-located' services operating from a single site and 'networked' services, with geographically separated laboratories linked by common management and information systems. Detailed systematic cost analysis has never been published on any established SIHMDS model. We used Activity Based Costing (ABC) to construct a cost model for our regional 'networked' SIHMDS covering a two-million population based on activity in 2011. Overall estimated annual running costs were £1 056 260 per annum (£733 400 excluding consultant costs), with individual running costs for diagnosis, staging, disease monitoring and end of treatment assessment components of £723 138, £55 302, £184 152 and £94 134 per annum, respectively. The cost distribution by department was 28.5% for haematology, 29.5% for histopathology and 42% for genetics laboratories. Costs of the diagnostic pathways varied considerably; pathways for myelodysplastic syndromes and lymphoma were the most expensive and the pathways for essential thrombocythaemia and polycythaemia vera were the least. ABC analysis enables estimation of running costs of a SIHMDS model comprised of 'networked' laboratories. Similar cost analyses for other SIHMDS models covering varying populations are warranted to optimise quality and cost-effectiveness in delivery of modern haemato-oncology diagnostic services in the UK as well as internationally.
Ahnn, Jong Hoon; Potkonjak, Miodrag
2013-10-01
Although mobile health monitoring, where mobile sensors continuously gather, process, and update sensor readings (e.g., vital signs) from a patient's sensors, is emerging, little effort has been devoted to energy-efficient management of sensor information gathering and processing. Mobile health monitoring with a focus on energy consumption may instead be holistically analyzed and systematically designed as a global solution to optimization subproblems. This paper presents an attempt to decompose the very complex mobile health monitoring system so that each layer in the system corresponds to a decomposed subproblem, and the interfaces between them are quantified as functions of the optimization variables in order to orchestrate the subproblems. We propose a distributed and energy-saving mobile health platform, called mHealthMon, where mobile users publish/access sensor data via a cloud computing-based distributed P2P overlay network. The key objective is to satisfy the mobile health monitoring application's quality of service requirements by modeling each subsystem: mobile clients with medical sensors, the wireless network medium, and distributed cloud services. Through simulations based on experimental data, we show that the proposed system can be up to 10.1 times more energy-efficient and 20.2 times faster than a standalone mobile health monitoring application, in various mobile health monitoring scenarios applying a realistic mobility model.
Huh, S J; Shirato, H; Hashimoto, S; Shimizu, S; Kim, D Y; Ahn, Y C; Choi, D; Miyasaka, K; Mizuno, J
2000-07-01
This study introduces the integrated service digital network (ISDN)-based international teleradiotherapy system (THERAPIS) in radiation oncology between hospitals in Seoul, South Korea and in Sapporo, Japan. THERAPIS has the following functions: (1) exchange of patient's image data, (2) real-time teleconference, and (3) communication of the treatment planning, dose calculation and distribution, and of portal verification images between the remote hospitals. Our preliminary results of applications on eight patients demonstrated that the international telecommunication using THERAPIS was clinically useful and satisfactory with sufficient bandwidth for the transfer of patient data for clinical use in radiation oncology.
Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Yen, Hong-Hsu; Hsieh, Yu-Jen
2013-01-01
One of the important applications in Wireless Sensor Networks (WSNs) is video surveillance that includes the tasks of video data processing and transmission. Processing and transmission of image and video data in WSNs has attracted a lot of attention in recent years. This is known as Wireless Visual Sensor Networks (WVSNs). WVSNs are distributed intelligent systems for collecting image or video data with unique performance, complexity, and quality of service challenges. WVSNs consist of a large number of battery-powered and resource constrained camera nodes. End-to-end delay is a very important Quality of Service (QoS) metric for video surveillance application in WVSNs. How to meet the stringent delay QoS in resource constrained WVSNs is a challenging issue that requires novel distributed and collaborative routing strategies. This paper proposes a Near-Optimal Distributed QoS Constrained (NODQC) routing algorithm to achieve an end-to-end route with lower delay and higher throughput. A Lagrangian Relaxation (LR)-based routing metric that considers the “system perspective” and “user perspective” is proposed to determine the near-optimal routing paths that satisfy end-to-end delay constraints with high system throughput. The empirical results show that the NODQC routing algorithm outperforms others in terms of higher system throughput with lower average end-to-end delay and delay jitter. In this paper, for the first time, the algorithm shows how to meet the delay QoS and at the same time how to achieve higher system throughput in stringently resource constrained WVSNs.
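The following sketch illustrates the general Lagrangian-relaxation idea for delay-constrained routing, not the NODQC algorithm itself: the delay constraint is folded into the link cost with a multiplier, a shortest path is computed on the combined weight, and the multiplier is adjusted until the route meets the delay bound. The topology, link costs, and delay bound are illustrative assumptions.

# Minimal sketch of the Lagrangian-relaxation idea (not the NODQC algorithm):
# fold the end-to-end delay constraint into the link cost with a multiplier lam,
# run shortest-path on the combined weight, and adjust lam by bisection until the
# route is feasible. Topology and delay bound below are illustrative assumptions.
import heapq

def dijkstra(graph, src, dst, lam):
    """graph: node -> list of (neighbor, cost, delay). Weight = cost + lam * delay."""
    pq, best, prev = [(0.0, src)], {src: 0.0}, {}
    while pq:
        w, u = heapq.heappop(pq)
        if u == dst:
            break
        if w > best.get(u, float("inf")):
            continue
        for v, cost, delay in graph[u]:
            nw = w + cost + lam * delay
            if nw < best.get(v, float("inf")):
                best[v], prev[v] = nw, u
                heapq.heappush(pq, (nw, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path))

def path_metrics(graph, path):
    cost = delay = 0.0
    for u, v in zip(path, path[1:]):
        c, d = next((c, d) for n, c, d in graph[u] if n == v)
        cost, delay = cost + c, delay + d
    return cost, delay

def delay_constrained_route(graph, src, dst, delay_bound, iters=20):
    lo, hi = 0.0, 100.0
    route = dijkstra(graph, src, dst, hi)           # start from a delay-heavy weighting
    for _ in range(iters):                          # bisection on the multiplier
        lam = (lo + hi) / 2.0
        cand = dijkstra(graph, src, dst, lam)
        _, delay = path_metrics(graph, cand)
        if delay <= delay_bound:
            route, hi = cand, lam                   # feasible: try to lower cost
        else:
            lo = lam                                # infeasible: weight delay more
    return route

if __name__ == "__main__":
    g = {"cam": [("relay1", 1, 5), ("relay2", 4, 1)],
         "relay1": [("sink", 1, 5)], "relay2": [("sink", 4, 1)], "sink": []}
    print(delay_constrained_route(g, "cam", "sink", delay_bound=4))  # ['cam', 'relay2', 'sink']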
Digital services using quadrature amplitude modulation (QAM) over CATV analog DWDM system
NASA Astrophysics Data System (ADS)
Yeh, JengRong; Selker, Mark D.; Trail, J.; Piehler, David; Levi, Israel
2000-04-01
Dense Wavelength Division Multiplexing (DWDM) has recently gained great popularity as it provides a cost-effective way to increase the transmission capacity of the existing fiber cable plant. For a long time, dense WDM was used exclusively for baseband digital applications, predominantly in terrestrial long-haul networks and in some cases in metropolitan and enterprise networks. Recently, the performance of DWDM components and frequency-stabilized lasers has substantially improved while costs have come down significantly. This makes a variety of new optical network architectures economically viable. The first commercial 8-wavelength DWDM system designed for Hybrid Fiber Coax networks was reported in 1998. This type of DWDM system utilizes Sub-Carrier Multiplexing (SCM) of Quadrature Amplitude Modulated (QAM) signals to transport IP data, digital video broadcast, and Video on Demand on ITU-grid lightwave carriers. The ability of DWDM to provide scalable transmission capacity in the optical layer with SCM granularity is now considered by many to be the most promising technology for future transport and distribution of broadband multimedia services.
Traffic handling capability of a broadband indoor wireless network using CDMA multiple access
NASA Astrophysics Data System (ADS)
Zhang, Chang G.; Hafez, H. M.; Falconer, David D.
1994-05-01
CDMA (code division multiple access) may be an attractive technique for wireless access to broadband services because of its multiple access simplicity and other appealing features. In order to investigate the traffic handling capabilities of a future network providing a variety of integrated services, this paper presents a study of a broadband indoor wireless network supporting high-speed traffic using CDMA multiple access. The results are obtained through simulation of an indoor environment and of the traffic capabilities of wireless access to broadband 155.5 Mb/s ATM-SONET networks using the mm-wave band. A distributed system architecture is employed and the system performance is measured in terms of call blocking probability and dropping probability. The impacts of base station density, traffic load, average holding time, and variable traffic sources on the system performance are examined. The improvement of system performance by implementing various techniques such as handoff, admission control, power control and sectorization is also investigated.
Analytical approach to cross-layer protocol optimization in wireless sensor networks
NASA Astrophysics Data System (ADS)
Hortos, William S.
2008-04-01
In the distributed operations of route discovery and maintenance, strong interaction occurs across mobile ad hoc network (MANET) protocol layers. Quality of service (QoS) requirements of multimedia service classes must be satisfied by the cross-layer protocol, along with minimization of the distributed power consumption at nodes and along routes to battery-limited energy constraints. In previous work by the author, cross-layer interactions in the MANET protocol are modeled in terms of a set of concatenated design parameters and associated resource levels by multivariate point processes (MVPPs). Determination of the "best" cross-layer design is carried out using the optimal control of martingale representations of the MVPPs. In contrast to the competitive interaction among nodes in a MANET for multimedia services using limited resources, the interaction among the nodes of a wireless sensor network (WSN) is distributed and collaborative, based on the processing of data from a variety of sensors at nodes to satisfy common mission objectives. Sensor data originates at the nodes at the periphery of the WSN, is successively transported to other nodes for aggregation based on information-theoretic measures of correlation and ultimately sent as information to one or more destination (decision) nodes. The "multimedia services" in the MANET model are replaced by multiple types of sensors, e.g., audio, seismic, imaging, thermal, etc., at the nodes; the QoS metrics associated with MANETs become those associated with the quality of fused information flow, i.e., throughput, delay, packet error rate, data correlation, etc. Significantly, the essential analytical approach to MANET cross-layer optimization, now based on the MVPPs for discrete random events occurring in the WSN, can be applied to develop the stochastic characteristics and optimality conditions for cross-layer designs of sensor network protocols. Functional dependencies of WSN performance metrics are described in terms of the concatenated protocol parameters. New source-to-destination routes are sought that optimize cross-layer interdependencies to achieve the "best available" performance in the WSN. The protocol design, modified from a known reactive protocol, adapts the achievable performance to the transient network conditions and resource levels. Control of network behavior is realized through the conditional rates of the MVPPs. Optimal cross-layer protocol parameters are determined by stochastic dynamic programming conditions derived from models of transient packetized sensor data flows. Moreover, the defining conditions for WSN configurations, grouping sensor nodes into clusters and establishing data aggregation at processing nodes within those clusters, lead to computationally tractable solutions to the stochastic differential equations that describe network dynamics. Closed-form solution characteristics provide an alternative to the "directed diffusion" methods for resource-efficient WSN protocols published previously by other researchers. Performance verification of the resulting cross-layer designs is found by embedding the optimality conditions for the protocols in actual WSN scenarios replicated in a wireless network simulation environment. Performance tradeoffs among protocol parameters remain for a sequel to the paper.
Integrated Services Digital Network (ISDN).
1985-06-01
The Teletón Centers Of Child Rehabilitation Mexico: 'El amor y la ciencia al servicio del hombre'.
Carbajal, Alejandro Parodi
2008-07-01
This paper describes the foundation and distribution of the Teleton and Oritel networks of rehabilitation. An essential component of the Pan-American network, as reported by Dr. Jorge Hernandez Franco, Teleton follows a humanistic perspective in offering a comprehensive biopsychosocial model of therapy and care. Public engagement is central to the success of Teleton and highlights the importance of increasing public understanding of rehabilitation needs and service delivery.
Brückner, G K; Linnane, S; Diaz, F; Vallat, B
2007-01-01
Two separate questionnaires were distributed to 20 OIE Collaborating Centres and 160 OIE Reference Laboratories to assess the current status of networking and collaboration among OIE Reference Laboratories and between OIE Reference Laboratories and OIE Collaborating Centres. The questionnaire for the OIE Reference Laboratories contained seven sections with questions on networking between laboratories, reporting of information, biosecurity, quality control, and financing. Emphasis was placed on obtaining information on inter-laboratory relationships and exchange of expertise, training needs and sharing of data and information. The questionnaire for the OIE Collaborating Centres contained six sections with the emphasis on aspects related to awareness of services that can be provided, expertise that could be made available, sharing of information and the relationship with the national veterinary services of the countries concerned. The responses to the questionnaires were collated, categorised and statistically evaluated to allow for tentative inferences from the data provided. Valuable information emanated from the data, identifying the current status of networking and indicating possible shortcomings that could be addressed to improve networking.
A portal for the ocean biogeographic information system
Zhang, Yunqing; Grassle, J. F.
2002-01-01
Since its inception in 1999 the Ocean Biogeographic Information System (OBIS) has developed into an international science program as well as a globally distributed network of biogeographic databases. An OBIS portal at Rutgers University provides the links and functional interoperability among member database systems. Protocols and standards have been established to support effective communication between the portal and these functional units. The portal provides distributed data searching, a taxonomy name service, a GIS with access to relevant environmental data, biological modeling, and education modules for mariners, students, environmental managers, and scientists. The portal will integrate Census of Marine Life field projects, national data archives, and other functional modules, and provides for network-wide analyses and modeling tools.
Demonstration and field trial of a resilient hybrid NG-PON test-bed
NASA Astrophysics Data System (ADS)
Prat, Josep; Polo, Victor; Schrenk, Bernhard; Lazaro, Jose A.; Bonada, Francesc; Lopez, Eduardo T.; Omella, Mireia; Saliou, Fabienne; Le, Quang T.; Chanclou, Philippe; Leino, Dmitri; Soila, Risto; Spirou, Spiros; Costa, Liliana; Teixeira, Antonio; Tosi-Beleffi, Giorgio M.; Klonidis, Dimitrios; Tomkos, Ioannis
2014-10-01
A multi-layer next generation PON prototype has been built and tested, to show the feasibility of extended hybrid DWDM/TDM-XGPON FTTH networks with resilient optically-integrated ring-trees architecture, supporting broadband multimedia services. It constitutes a transparent common platform for the coexistence of multiple operators sharing the optical infrastructure of the central metro ring, passively combining the access and the metropolitan network sections. It features 32 wavelength connections at 10 Gbps, up to 1000 users distributed in 16 independent resilient sub-PONs over 100 km. This paper summarizes the network operation, demonstration and field trial results.
Kotevska, Olivera; Kusne, A. Gilad; Samarov, Daniel V.; Lbath, Ahmed; Battou, Abdella
2017-01-01
Today’s cities generate tremendous amounts of data, thanks to a boom in affordable smart devices and sensors. The resulting big data creates opportunities to develop diverse sets of context-aware services and systems, ensuring smart city services are optimized to the dynamic city environment. Critical resources in these smart cities will be more rapidly deployed to regions in need, and those regions predicted to have an imminent or prospective need. For example, crime data analytics may be used to optimize the distribution of police, medical, and emergency services. However, as smart city services become dependent on data, they also become susceptible to disruptions in data streams, such as data loss due to signal quality reduction or due to power loss during data collection. This paper presents a dynamic network model for improving service resilience to data loss. The network model identifies statistically significant shared temporal trends across multivariate spatiotemporal data streams and utilizes these trends to improve data prediction performance in the case of data loss. Dynamics also allow the system to respond to changes in the data streams such as the loss or addition of new information flows. The network model is demonstrated by city-based crime rates reported in Montgomery County, MD, USA. A resilient network is developed utilizing shared temporal trends between cities to provide improved crime rate prediction and robustness to data loss, compared with the use of single city-based auto-regression. A maximum improvement in performance of 7.8% for Silver Spring is found and an average improvement of 5.6% among cities with high crime rates. The model also correctly identifies all the optimal network connections, according to prediction error minimization. City-to-city distance is designated as a predictor of shared temporal trends in crime and weather is shown to be a strong predictor of crime in Montgomery County. PMID:29250476
Kotevska, Olivera; Kusne, A Gilad; Samarov, Daniel V; Lbath, Ahmed; Battou, Abdella
2017-01-01
Today's cities generate tremendous amounts of data, thanks to a boom in affordable smart devices and sensors. The resulting big data creates opportunities to develop diverse sets of context-aware services and systems, ensuring smart city services are optimized to the dynamic city environment. Critical resources in these smart cities will be more rapidly deployed to regions in need, and those regions predicted to have an imminent or prospective need. For example, crime data analytics may be used to optimize the distribution of police, medical, and emergency services. However, as smart city services become dependent on data, they also become susceptible to disruptions in data streams, such as data loss due to signal quality reduction or due to power loss during data collection. This paper presents a dynamic network model for improving service resilience to data loss. The network model identifies statistically significant shared temporal trends across multivariate spatiotemporal data streams and utilizes these trends to improve data prediction performance in the case of data loss. Dynamics also allow the system to respond to changes in the data streams such as the loss or addition of new information flows. The network model is demonstrated by city-based crime rates reported in Montgomery County, MD, USA. A resilient network is developed utilizing shared temporal trends between cities to provide improved crime rate prediction and robustness to data loss, compared with the use of single city-based auto-regression. A maximum improvement in performance of 7.8% for Silver Spring is found and an average improvement of 5.6% among cities with high crime rates. The model also correctly identifies all the optimal network connections, according to prediction error minimization. City-to-city distance is designated as a predictor of shared temporal trends in crime and weather is shown to be a strong predictor of crime in Montgomery County.
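As a minimal illustration of exploiting shared temporal trends between cities (a sketch, not the paper's dynamic network model), the following fits one city's crime series on its own lag plus a correlated neighbor's lag, so the neighbor term can still inform the forecast when local data are lost. The two synthetic series are illustrative assumptions.

# Minimal sketch (not the paper's full dynamic network model): predicting one
# city's crime-rate series from its own lag and the lag of a city sharing a
# temporal trend, so the neighbor term can carry the prediction when local
# data are lost. The two toy series below are illustrative assumptions.
import numpy as np

def fit_cross_city_ar(target, neighbor):
    """Least-squares fit of target[t] ~ a*target[t-1] + b*neighbor[t-1] + c."""
    X = np.column_stack([target[:-1], neighbor[:-1], np.ones(len(target) - 1)])
    coef, *_ = np.linalg.lstsq(X, target[1:], rcond=None)
    return coef

def predict_next(coef, last_target, last_neighbor):
    a, b, c = coef
    return a * last_target + b * last_neighbor + c

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    trend = np.cumsum(rng.normal(0, 1, 60))            # shared temporal trend
    city_a = 50 + trend + rng.normal(0, 0.5, 60)
    city_b = 47 + trend + rng.normal(0, 0.5, 60)
    coef = fit_cross_city_ar(city_a, city_b)
    # if city_a's latest reading were lost, the neighbor's value still informs the forecast
    print("forecast:", round(predict_next(coef, city_a[-1], city_b[-1]), 2))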
Design of Provider-Provisioned Website Protection Scheme against Malware Distribution
NASA Astrophysics Data System (ADS)
Yagi, Takeshi; Tanimoto, Naoto; Hariu, Takeo; Itoh, Mitsutaka
Vulnerabilities in web applications expose computer networks to security threats, and many websites are used by attackers as hopping sites to attack other websites and user terminals. These incidents prevent service providers from constructing secure networking environments. To protect websites from attacks exploiting vulnerabilities in web applications, service providers use web application firewalls (WAFs). WAFs filter accesses from attackers by using signatures, which are generated based on the exploit codes of previous attacks. However, WAFs cannot filter unknown attacks because the signatures cannot reflect new types of attacks. In service provider environments, the number of exploit codes has recently increased rapidly because of the spread of vulnerable web applications that have been developed through cloud computing. Thus, generating signatures for all exploit codes is difficult. To solve these problems, our proposed scheme detects and filters malware downloads that are sent from websites which have already received exploit codes. In addition, to collect information for detecting malware downloads, web honeypots, which automatically extract the communication records of exploit codes, are used. According to the results of experiments using a prototype, our scheme can filter attacks automatically so that service providers can provide secure and cost-effective network environments.
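A minimal sketch of the filtering idea described above (not the authors' system): hosts observed receiving exploit codes in web-honeypot records are collected, and later file downloads served from those hosts are blocked. The log format and hostnames are illustrative assumptions.

# Minimal sketch: hosts seen receiving exploit codes in web-honeypot logs are
# recorded, and later downloads served from those hosts are filtered.
# Log format and hostnames are illustrative assumptions.
from urllib.parse import urlparse

def hosts_seen_exploited(honeypot_log):
    """honeypot_log: iterable of (url, event) pairs recorded by the web honeypot."""
    return {urlparse(url).hostname for url, event in honeypot_log if event == "exploit_code"}

def should_block_download(download_url, exploited_hosts):
    """Block downloads originating from hosts already observed receiving exploit codes."""
    return urlparse(download_url).hostname in exploited_hosts

if __name__ == "__main__":
    log = [("http://site-a.example/upload.php", "exploit_code"),
           ("http://site-b.example/index.html", "normal")]
    bad_hosts = hosts_seen_exploited(log)
    print(should_block_download("http://site-a.example/payload.exe", bad_hosts))  # True
    print(should_block_download("http://site-b.example/report.pdf", bad_hosts))   # False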
Research on information security system of waste terminal disposal process
NASA Astrophysics Data System (ADS)
Zhou, Chao; Wang, Ziying; Guo, Jing; Guo, Yajuan; Huang, Wei
2017-05-01
Informatization has penetrated the whole process of production and operation in electric power enterprises. It not only improves the level of lean management and quality of service, but also faces severe security risks. Internal network terminals form the outermost layer and the most vulnerable nodes of the internal network boundary; they are widely distributed, deeply deployed, and large in number. The technical skill and security awareness of users and of operations and maintenance personnel are uneven, which makes internal network terminals the weakest link in information security. Through the implementation of management, technical, and physical security measures, an internal network terminal security protection system should be established so as to fully protect the information security of internal network terminals.
High-speed digital wireless battlefield network
NASA Astrophysics Data System (ADS)
Dao, Son K.; Zhang, Yongguang; Shek, Eddie C.; van Buer, Darrel
1999-07-01
In the past two years, the Digital Wireless Battlefield Network consortium, which consists of HRL Laboratories, Hughes Network Systems, Raytheon, and Stanford University, has participated in the DARPA TRP program to leverage the development of commercial digital wireless products for use in the 21st century battlefield. The consortium has developed an infrastructure and application testbed to support the digitized battlefield, and has implemented and demonstrated this network system. Each member is currently utilizing much of the technology developed in this program in commercial products and offerings. These new communication hardware/software and the demonstrated networking features will benefit military systems and will be applicable to the commercial communication marketplace for high-speed voice/data multimedia distribution services.
Two-dimensional priority-based dynamic resource allocation algorithm for QoS in WDM/TDM PON networks
NASA Astrophysics Data System (ADS)
Sun, Yixin; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Rao, Lan
2018-01-01
Wavelength division multiplexing/time division multiplexing (WDM/TDM) passive optical networks (PONs) are viewed as a promising solution for delivering multiple services and applications. A hybrid WDM/TDM PON uses a wavelength and bandwidth allocation strategy to control the distribution of the wavelength channels in the uplink direction, so that it can ensure the high bandwidth requirements of multiple Optical Network Units (ONUs) while improving wavelength resource utilization. An investigation of existing dynamic bandwidth allocation algorithms shows that they cannot satisfy the requirements of different levels of service well while adapting to the structural characteristics of a hybrid WDM/TDM PON system. This paper introduces a novel wavelength and bandwidth allocation algorithm to efficiently utilize the bandwidth and support QoS (Quality of Service) guarantees in WDM/TDM PON. Two priority-based polling subcycles are introduced in order to increase system efficiency and improve system performance. The fixed-priority polling subcycle and the dynamic-priority polling subcycle follow different principles to implement wavelength and bandwidth allocation according to the priority of different levels of service. A simulation was conducted to study the performance of priority-based polling in the dynamic resource allocation algorithm in WDM/TDM PON. The results show that the performance of delay-sensitive services is greatly improved without degrading QoS guarantees for other services. Compared with traditional dynamic bandwidth allocation algorithms, this algorithm can meet the bandwidth needs of different priority traffic classes, achieve low loss rates, and ensure real-time delivery for high-priority traffic in terms of overall traffic on the network.
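To make the two-subcycle idea concrete, the following sketch (assumptions throughout, not the paper's algorithm) grants delay-sensitive traffic first, up to a guaranteed share, in a fixed-priority subcycle, then spreads the leftover capacity over remaining demand in a dynamic subcycle.

# Minimal sketch: a grant cycle with a fixed-priority subcycle that serves
# delay-sensitive queues up to a guaranteed share, then a dynamic subcycle that
# spreads the leftover bandwidth over remaining demand in proportion to each request.
def allocate_cycle(requests, cycle_capacity, guaranteed_high):
    """requests: onu -> {'high': bytes, 'low': bytes}. Returns onu -> granted bytes."""
    grants = {onu: {"high": 0, "low": 0} for onu in requests}
    remaining = cycle_capacity
    # fixed-priority subcycle: delay-sensitive traffic first, capped per ONU
    for onu, req in requests.items():
        g = min(req["high"], guaranteed_high, remaining)
        grants[onu]["high"] = g
        remaining -= g
    # dynamic subcycle: leftover capacity shared in proportion to residual demand
    residual = {onu: (req["high"] - grants[onu]["high"]) + req["low"]
                for onu, req in requests.items()}
    total = sum(residual.values())
    for onu, need in residual.items():
        if total > 0 and remaining > 0:
            grants[onu]["low"] = min(need, int(remaining * need / total))
    return grants

if __name__ == "__main__":
    reqs = {"onu1": {"high": 3000, "low": 8000}, "onu2": {"high": 500, "low": 2000}}
    print(allocate_cycle(reqs, cycle_capacity=8000, guaranteed_high=2000))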
The National Virtual Observatory
NASA Astrophysics Data System (ADS)
Hanisch, Robert J.
2001-06-01
The National Virtual Observatory is a distributed computational facility that will provide access to the "virtual sky": the federation of astronomical data archives, object catalogs, and associated information services. The NVO's "virtual telescope" is a common framework for requesting, retrieving, and manipulating information from diverse, distributed resources. The NVO will make it possible to seamlessly integrate data from the new all-sky surveys, enabling cross-correlations between multi-terabyte catalogs and providing transparent access to the underlying image or spectral data. Success requires high performance computational systems, high bandwidth network services, agreed-upon standards for the exchange of metadata, and collaboration among astronomers, astronomical data and information service providers, information technology specialists, funding agencies, and industry. International cooperation at the onset will help to assure that the NVO simultaneously becomes a global facility.
NASA Technical Reports Server (NTRS)
Panangadan, Anand; Monacos, Steve; Burleigh, Scott; Joswig, Joseph; James, Mark; Chow, Edward
2012-01-01
In this paper, we describe the architecture of both the PATS and SAP systems and how these two systems interoperate with each other forming a unified capability for deploying intelligence in hostile environments with the objective of providing actionable situational awareness of individuals. The SAP system works in concert with the UICDS information sharing middleware to provide data fusion from multiple sources. UICDS can then publish the sensor data using the OGC's Web Mapping Service, Web Feature Service, and Sensor Observation Service standards. The system described in the paper is able to integrate a spatially distributed sensor system, operating without the benefit of the Web infrastructure, with a remote monitoring and control system that is equipped to take advantage of SWE.
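For readers unfamiliar with the OGC interfaces mentioned above, the following is an illustrative key-value-pair GetObservation request to a hypothetical Sensor Observation Service endpoint. The endpoint URL, offering, and observed-property URIs are placeholder assumptions; only the parameter names follow the OGC SOS 2.0 KVP binding.

```python
# Illustrative KVP GetObservation request to a hypothetical Sensor Observation
# Service endpoint. The URL, offering, and observed-property URIs are made-up
# placeholders; the parameter names follow the OGC SOS 2.0 KVP binding.
import requests

params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "urn:example:offering:field-sensors",        # hypothetical
    "observedProperty": "urn:example:property:temperature",  # hypothetical
    "responseFormat": "http://www.opengis.net/om/2.0",
}

response = requests.get("https://sos.example.org/service", params=params, timeout=30)
print(response.status_code, len(response.content))
```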
2010-04-01
delivery models: Software as a Service (SaaS): the consumer uses an application, but does not control the operating system, hardware or network... Approved for public release; distribution unlimited. Presented at the 22nd Systems and Software
Meeker, Daniella; Jiang, Xiaoqian; Matheny, Michael E; Farcas, Claudiu; D'Arcy, Michel; Pearlman, Laura; Nookala, Lavanya; Day, Michele E; Kim, Katherine K; Kim, Hyeoneui; Boxwala, Aziz; El-Kareh, Robert; Kuo, Grace M; Resnic, Frederic S; Kesselman, Carl; Ohno-Machado, Lucila
2015-11-01
Centralized and federated models for sharing data in research networks currently exist. To build multivariate data analysis for centralized networks, transfer of patient-level data to a central computation resource is necessary. The authors implemented distributed multivariate models for federated networks in which patient-level data is kept at each site and data exchange policies are managed in a study-centric manner. The objective was to implement infrastructure that supports the functionality of some existing research networks (e.g., cohort discovery, workflow management, and estimation of multivariate analytic models on centralized data) while adding additional important new features, such as algorithms for distributed iterative multivariate models, a graphical interface for multivariate model specification, synchronous and asynchronous response to network queries, investigator-initiated studies, and study-based control of staff, protocols, and data sharing policies. Based on the requirements gathered from statisticians, administrators, and investigators from multiple institutions, the authors developed infrastructure and tools to support multisite comparative effectiveness studies using web services for multivariate statistical estimation in the SCANNER federated network. The authors implemented massively parallel (map-reduce) computation methods and a new policy management system to enable each study initiated by network participants to define the ways in which data may be processed, managed, queried, and shared. The authors illustrated the use of these systems among institutions with highly different policies and operating under different state laws. Federated research networks need not limit distributed query functionality to count queries, cohort discovery, or independently estimated analytic models. Multivariate analyses can be efficiently and securely conducted without patient-level data transport, allowing institutions with strict local data storage requirements to participate in sophisticated analyses based on federated research networks. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
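As a rough illustration of how multivariate models can be estimated iteratively without moving patient-level data, the sketch below aggregates only site-level gradients of a logistic-regression model at a coordinator. It is a toy example under our own assumptions and does not reflect SCANNER's actual web-service interfaces or its map-reduce implementation.

```python
# Toy sketch of iterative multivariate estimation without moving patient-level
# data: each site returns only its local gradient and the coordinator
# aggregates them. Illustrative only; not SCANNER's actual API.
import numpy as np

def local_gradient(X, y, beta):
    """Logistic-regression gradient computed inside one site."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return X.T @ (p - y)

def federated_fit(sites, dim, lr=0.1, iters=500):
    beta = np.zeros(dim)
    n_total = sum(len(y) for _, y in sites)
    for _ in range(iters):
        grad = sum(local_gradient(X, y, beta) for X, y in sites)  # aggregate only
        beta -= lr * grad / n_total
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_beta = np.array([1.0, -2.0])
    sites = []
    for _ in range(3):  # three "institutions"; their raw data are never pooled
        X = rng.normal(size=(200, 2))
        y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
        sites.append((X, y))
    print(federated_fit(sites, dim=2))   # estimate should approach true_beta
```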
Providing the full DDF link protection for bus-connected SIEPON based system architecture
NASA Astrophysics Data System (ADS)
Hwang, I.-Shyan; Pakpahan, Andrew Fernando; Liem, Andrew Tanny; Nikoukar, AliAkbar
2016-09-01
Currently a massive amount of traffic per second is delivered through EPON systems, one of the prominent access network technologies for delivering the next-generation network. It is therefore vital to keep the EPON optical distribution network (ODN) working by providing the necessary protection mechanisms in the deployed devices; otherwise, failures will cause great losses for both network operators and business customers. In this paper, we propose a bus-connected architecture to protect against and recover from distribution drop fiber (DDF) link faults or transceiver failures at the ONU(s) in a SIEPON system. The proposed architecture is cost-effective, delivers high fault tolerance in handling multiple DDF faults, and provides flexibility in choosing backup ONU assignments. Simulation results show that the proposed architecture provides reliability and maintains quality of service (QoS) performance in terms of mean packet delay, system throughput, packet loss, and EF jitter when DDF link failures occur.
Anti-social networking: crowdsourcing and the cyber defence of national critical infrastructures.
Johnson, Chris W
2014-01-01
We identify four roles that social networking plays in the 'attribution problem', which obscures whether or not cyber-attacks were state-sponsored. First, social networks motivate individuals to participate in Distributed Denial of Service attacks by providing malware and identifying potential targets. Second, attackers use an individual's social network to focus attacks, through spear phishing. Recipients are more likely to open infected attachments when they come from a trusted source. Third, social networking infrastructures create disposable architectures to coordinate attacks through command and control servers. The ubiquitous nature of these architectures makes it difficult to determine who owns and operates the servers. Finally, governments recruit anti-social criminal networks to launch attacks on third-party infrastructures using botnets. The closing sections identify a roadmap to increase resilience against the 'dark side' of social networking.
Building and measuring a high performance network architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kramer, William T.C.; Toole, Timothy; Fisher, Chuck
2001-04-20
Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest performance networks in the world. At SC2000, large-scale and complex local and wide area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project was designed to use the unique opportunity presented at SC2000 to create a testbed network environment and then use that network to demonstrate and evaluate high performance computational and communication applications. This testbed was designed to incorporate many interoperable systems and services and was designed for measurement from the very beginning. The end results were key insights into how to use novel, high performance networking technologies and to accumulate measurements that will give insights into the networks of the future.
Lost in the city: revisiting Milgram's experiment in the age of social networks.
Szüle, János; Kondor, Dániel; Dobos, László; Csabai, István; Vattay, Gábor
2014-01-01
As more and more users access social network services from smart devices with GPS receivers, the available amount of geo-tagged information makes repeating classical experiments possible on global scales and with unprecedented precision. Inspired by the original experiments of Milgram, we simulated message routing within a representative sub-graph of the network of Twitter users with about 6 million geo-located nodes and 122 million edges. We picked pairs of users from two distant metropolitan areas and tried to find a route between them using local geographic information only; our method was to forward messages to a friend living closest to the target. We found that the examined network is navigable on large scales, but navigability breaks down at the city scale and the network becomes unnavigable on intra-city distances. This means that messages usually arrived in the close proximity of the target in only 3-6 steps, but only in about 20% of the cases was it possible to find a route all the way to the recipient, in spite of the network being connected. This phenomenon is supported by the distribution of link lengths; on larger scales the distribution behaves approximately as P(d) ≈ 1/d, which was found earlier by Kleinberg to allow efficient navigation, while on smaller scales, a fractal structure becomes apparent. The intra-city correlation dimension of the network was found to be D2 = 1.25, less than the dimension D2 = 1.78 of the distribution of the population.
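The routing rule used in the simulation, forwarding each message to the neighbour geographically closest to the target, can be sketched as follows. The toy graph, coordinates, and step limit are illustrative assumptions, not the Twitter data set used in the study.

```python
# Toy sketch of the greedy geographic routing rule described above: at each
# step, forward the message to the neighbour closest to the target.
# The graph, coordinates, and step limit are illustrative assumptions.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_route(graph, coords, source, target, max_steps=50):
    """graph: {node: [neighbours]}, coords: {node: (x, y)}."""
    path, current = [source], source
    for _ in range(max_steps):
        if current == target:
            return path, True
        # Pick the neighbour geographically closest to the target.
        nxt = min(graph[current], default=None,
                  key=lambda n: dist(coords[n], coords[target]))
        if nxt is None or nxt in path:   # dead end or loop: routing fails
            return path, False
        path.append(nxt)
        current = nxt
    return path, False

if __name__ == "__main__":
    coords = {"a": (0, 0), "b": (2, 1), "c": (5, 0), "d": (9, 1)}
    graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
    print(greedy_route(graph, coords, "a", "d"))
```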
Optimal Regulation of Virtual Power Plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall Anese, Emiliano; Guggilam, Swaroop S.; Simonetto, Andrea
This paper develops a real-time algorithmic framework for aggregations of distributed energy resources (DERs) in distribution networks to provide regulation services in response to transmission-level requests. Leveraging online primal-dual-type methods for time-varying optimization problems and suitable linearizations of the nonlinear AC power-flow equations, we believe this work establishes the system-theoretic foundation to realize the vision of distribution-level virtual power plants. The optimization framework controls the output powers of dispatchable DERs such that, in aggregate, they respond to automatic-generation-control and/or regulation-services commands. This is achieved while concurrently regulating voltages within the feeder and maximizing customers' and utility's performance objectives. Convergence and tracking capabilities are analytically established under suitable modeling assumptions. Simulations are provided to validate the proposed approach.
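The flavour of the online primal-dual updates mentioned above can be conveyed with a generic projected primal-dual gradient iteration for min f(x) subject to g(x) <= 0. This is a toy sketch under standard convexity assumptions; it does not include the paper's linearized AC power-flow model or its time-varying setting.

```python
# Generic projected primal-dual gradient iteration for min f(x) s.t. g(x) <= 0,
# the kind of update underlying online primal-dual methods. This is a toy
# illustration, not the paper's power-flow-aware formulation.
import numpy as np

def primal_dual(grad_f, g, grad_g, x0, steps=2000, alpha=0.02):
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(len(g(x)))                               # one multiplier per constraint
    for _ in range(steps):
        x = x - alpha * (grad_f(x) + grad_g(x).T @ lam)     # primal descent step
        lam = np.maximum(0.0, lam + alpha * g(x))           # projected dual ascent step
    return x, lam

if __name__ == "__main__":
    # Example: minimize ||x - [2, 2]||^2 subject to x1 + x2 <= 1
    # (solution approximately x = [0.5, 0.5], lambda = 3).
    grad_f = lambda x: 2.0 * (x - np.array([2.0, 2.0]))
    g = lambda x: np.array([x[0] + x[1] - 1.0])
    grad_g = lambda x: np.array([[1.0, 1.0]])
    print(primal_dual(grad_f, g, grad_g, [0.0, 0.0]))
```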
An historical overview of the National Network of Libraries of Medicine, 1985-2015.
Speaker, Susan L
2018-04-01
The National Network of Libraries of Medicine (NNLM), established as the Regional Medical Library Program in 1965, has a rich and remarkable history. The network's first twenty years were documented in a detailed 1987 history by Alison Bunting, AHIP, FMLA. This article traces the major trends in the network's development since then: reconceiving the Regional Medical Library staff as a "field force" for developing, marketing, and distributing a growing number of National Library of Medicine (NLM) products and services; subsequent expansion of outreach to health professionals who are unaffiliated with academic medical centers, particularly those in public health; the advent of the Internet during the 1990s, which brought the migration of NLM and NNLM resources and services to the World Wide Web, and a mandate to encourage and facilitate Internet connectivity in the network; and the further expansion of the NLM and NNLM mission to include providing consumer health resources to satisfy growing public demand. The concluding section discusses the many challenges that NNLM staff faced as they transformed the network from a system that served mainly academic medical researchers to a larger, denser organization that offers health information resources to everyone.
Experimental service of 3DTV broadcasting relay in Korea
NASA Astrophysics Data System (ADS)
Hur, Namho; Ahn, Chung-Hyun; Ahn, Chieteuk
2002-11-01
This paper introduces the 3D HDTV relay broadcasting experiments conducted during the 2002 FIFA World Cup Korea/Japan using a terrestrial and satellite network. We have developed 3D HDTV cameras, a 3D HDTV video multiplexer/demultiplexer, a 3D HDTV receiver, and a 3D HDTV OB van for field productions. Using a terrestrial and satellite network, we distributed a compressed 3D HDTV signal to predetermined demonstration venues approved by the host broadcast services (HBS), KirchMedia, and FIFA. In this case, we transmitted a 40 Mbps MPEG-2 transport stream (DVB-ASI) over a DS-3 network specified in ITU-T Rec. G.703. The video and audio compression formats are MPEG-2 main profile at high level and Dolby Digital AC-3, respectively. At the venues, the left and right images recovered by the 3D HDTV receiver are displayed on a screen with polarized-beam projectors.
Quantum key distribution in multicore fibre for secure radio access networks
NASA Astrophysics Data System (ADS)
Llorente, Roberto; Provot, Antoine; Morant, Maria
2018-01-01
Broadband access in the optical domain usually focuses on providing pervasive, cost-effective, high-bitrate communication in a given area. Nowadays, it is also of utmost interest to be able to provide secure communication to the customers in that area. Wireless access networks rely on the optical domain for both the fronthaul and the backhaul of the radio access network (C-RAN). Multicore fiber (MCF) has been proposed as a promising candidate for the optical medium of choice in next-generation wireless. The capacity demand of next-generation 5G networks makes it attractive to use high-capacity optical solutions such as space-division multiplexing of different signals over MCF media. This work addresses secure MCF communication supporting C-RAN architectures. The paper proposes using one core of the MCF to securely transport an optically encoded quantum key together with the end-to-end wireless signals transmitted in the remaining cores in radio-over-fiber (RoF). The RoF wireless signals are suitable for radio access fronthaul and backhaul. The theoretical principle and a simulation analysis of quantum key distribution (QKD) are presented in this paper. The potential impact of optical RoF transmission crosstalk impairments is assessed experimentally considering different cellular signals on the remaining optical cores in the MCF. The experimental results report fronthaul performance over a four-core optical fiber with RoF transmission of full-standard CDMA signals providing 3.5G services in one core, HSPA+ signals providing 3.9G services in the second core, and 3GPP LTE-Advanced signals providing 4G services in the third core, with the QKD signal allocated in the fourth core.
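As background on the QKD portion, a BB84-style basis-sifting step can be simulated classically as below. This is an idealized, noiseless sketch for illustration only; it does not model the multicore-fibre channel, crosstalk, or the error-correction and privacy-amplification stages.

```python
# Idealized BB84-style sifting sketch: sender and receiver keep only the bits
# where their random basis choices coincide. The quantum channel is simulated
# classically; crosstalk, eavesdropping and error correction are not modelled.
import secrets

def bb84_sift(n_pulses=32):
    alice_bits  = [secrets.randbelow(2) for _ in range(n_pulses)]
    alice_bases = [secrets.randbelow(2) for _ in range(n_pulses)]  # 0: rectilinear, 1: diagonal
    bob_bases   = [secrets.randbelow(2) for _ in range(n_pulses)]

    # Bob's measurement: correct bit when bases match, random outcome otherwise.
    bob_bits = [b if ab == bb else secrets.randbelow(2)
                for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

    # Sifting: publicly compare bases and keep positions where they agree.
    key_alice = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    key_bob   = [b for b, ab, bb in zip(bob_bits,  alice_bases, bob_bases) if ab == bb]
    return key_alice, key_bob

if __name__ == "__main__":
    ka, kb = bb84_sift()
    print(ka == kb, ka)   # identical sifted keys in this noiseless model
```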
Ray, Nicolas; Ebener, Steeve
2008-01-01
Background Access to health care can be described along four dimensions: geographic accessibility, availability, financial accessibility and acceptability. Geographic accessibility measures how physically accessible resources are for the population, while availability reflects what resources are available and in what amount. Combining these two types of measure into a single index provides a measure of geographic (or spatial) coverage, which is an important measure for assessing the degree of accessibility of a health care network. Results This paper describes the latest version of AccessMod, an extension to the Geographical Information System ArcView 3.x, and provides an example of application of this tool. AccessMod 3 allows one to compute geographic coverage of health care using terrain information and population distribution. Four major types of analysis are available in AccessMod: (1) modeling the coverage of catchment areas linked to an existing health facility network based on travel time, to provide a measure of physical accessibility to health care; (2) modeling geographic coverage according to the availability of services; (3) projecting the coverage of a scaling-up of an existing network; (4) providing information for cost effectiveness analysis when little information about the existing network is available. In addition to integrating travelling time, population distribution and the population coverage capacity specific to each health facility in the network, AccessMod can incorporate the influence of landscape components (e.g. topography, river and road networks, vegetation) that impact travelling time to and from facilities. Topographical constraints can be taken into account through an anisotropic analysis that considers the direction of movement. We provide an example of the application of AccessMod in the southern part of Malawi that shows the influences of the landscape constraints and of the modes of transportation on geographic coverage. Conclusion By incorporating the demand (population) and the supply (capacities of health care centers), AccessMod provides a unifying tool to efficiently assess the geographic coverage of a network of health care facilities. This tool should be of particular interest to developing countries that have relatively good geographic information on population distribution, terrain, and health facility locations. PMID:19087277
The Heliophysics Integrated Observatory
NASA Astrophysics Data System (ADS)
Csillaghy, A.; Bentley, R. D.
2009-12-01
HELIO is a new Europe-wide, FP7-funded distributed network of services that will address the needs of a broad community of researchers in heliophysics. This new research field explores the “Sun-Solar System Connection” and requires the joint exploitation of solar, heliospheric, magnetospheric and ionospheric observations. HELIO will provide the most comprehensive integrated information system in this domain; it will coordinate access to the distributed resources needed by the community, and will provide access to services to mine and analyse the data. HELIO will be designed as a Service-oriented Architecture. The initial infrastructure will include services based on metadata and data servers deployed by the European Grid of Solar Observations (EGSO). We will extend these to address observations from all the disciplines of heliophysics; differences in the way the domains describe and handle the data will be resolved using semantic mapping techniques. Processing and storage services will allow the user to explore the data and create the products that meet stringent standards of interoperability. These capabilities will be orchestrated with the data and metadata services using the Taverna workflow tool. HELIO will address the challenges along the FP7 I3 activities model: (1) Networking: we will cooperate closely with the community to define new standards for heliophysics and the required capabilities of the HELIO system. (2) Services: we will integrate the services developed by the project and other groups to produce an infrastructure that can easily be extended to satisfy the growing and changing needs of the community. (3) Joint Research: we will develop search tools that span disciplinary boundaries and explore new types of user-friendly interfaces. HELIO will be a key component of a worldwide effort to integrate heliophysics data and will coordinate closely with international organizations to exploit synergies with complementary domains.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Wu-chi; Crawfis, Roger, Weide, Bruce
2002-02-01
In this project, the authors propose the research, development, and distribution of a stackable component-based multimedia streaming protocol middleware service. The goals of this stackable middleware interface include: (1) The middleware service will provide application writers and scientists easy-to-use interfaces that support their visualization needs. (2) The middleware service will support a variety of image compression modes. Currently, many of the network adaptation protocols for video have been developed with DCT-based compression algorithms like H.261, MPEG-1, or MPEG-2 in mind. It is expected that, for advanced scientific computing applications, lossy compression of the image data will be unacceptable in certain instances. The middleware service will support several in-line lossless compression modes for error-sensitive scientific visualization data. (3) The middleware service will support two different types of streaming video modes: one for interactive collaboration of scientists and a stored video streaming mode for viewing prerecorded animations. The use of two different streaming types will allow the quality of the video delivered to the user to be maximized. Most importantly, this service will happen transparently to the user (with some basic controls exported to the user for domain-specific tweaking). In the spirit of layered network protocols (like ISO and TCP/IP), application writers should not have to know a large amount about lower level network details. Currently, many example video streaming players have their congestion management techniques tightly integrated into the video player itself and are, for the most part, ''one-off'' applications. As more networked multimedia and video applications are written in the future, a larger percentage of these programmers and scientists will most likely know little about the underlying networking layer. By providing a simple, powerful, and semi-transparent middleware layer, the successful completion of this project will help serve as a catalyst to support future video-based applications, particularly those of advanced scientific computing applications.
On buffer overflow duration in a finite-capacity queueing system with multiple vacation policy
NASA Astrophysics Data System (ADS)
Kempa, Wojciech M.
2017-12-01
A finite-buffer queueing system with Poisson arrivals and generally distributed processing times, operating under a multiple vacation policy, is considered. Each time the system becomes empty, the service station takes successive independent and identically distributed vacation periods until, at the completion epoch of one of them, at least one job waiting for service is detected in the buffer. Applying an analytical approach based on the idea of an embedded Markov chain, integral equations and linear algebra, a compact-form representation for the cumulative distribution function (CDF for short) of the first buffer overflow duration is found. Hence, the formula for the CDF of subsequent such periods is obtained. Moreover, probability distributions of the number of job losses in successive buffer overflow periods are found. The considered queueing system can be efficiently applied in modelling energy saving mechanisms in wireless network communication.
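An empirical counterpart to the analytical result can be obtained by discrete-event simulation. The sketch below estimates the first buffer-overflow duration for a finite-capacity queue with Poisson arrivals and multiple vacations; the exponential service and vacation distributions, the parameter values, and the convention that the overflow period starts at the first lost arrival are illustrative assumptions rather than the paper's exact model.

```python
# Discrete-event sketch of a finite-capacity queue with multiple vacations,
# used to estimate the first buffer-overflow duration empirically. Exponential
# service/vacation times and all parameters are illustrative choices; one common
# convention is used: the overflow period starts at the first lost arrival and
# ends at the next service completion that frees a buffer slot.
import random

def first_overflow_duration(lam=1.2, mu=1.0, vac_mean=0.5, capacity=5, horizon=10_000.0):
    t, jobs = 0.0, 0                       # jobs counts everything in the system
    next_arrival = random.expovariate(lam)
    service_end = None                     # completion time of the job in service
    vacation_end = random.expovariate(1.0 / vac_mean)   # empty system starts on vacation
    overflow_start = None

    while t < horizon:
        events = [next_arrival]
        if service_end is not None:
            events.append(service_end)
        if vacation_end is not None:
            events.append(vacation_end)
        t = min(events)

        if t == next_arrival:                              # arrival
            if jobs < capacity:
                jobs += 1
            elif overflow_start is None:
                overflow_start = t                         # first loss: overflow begins
            next_arrival = t + random.expovariate(lam)
        elif service_end is not None and t == service_end: # service completion
            jobs -= 1
            if overflow_start is not None:
                return t - overflow_start                  # slot freed: overflow ends
            if jobs > 0:
                service_end = t + random.expovariate(mu)
            else:                                          # empty: take a vacation
                service_end = None
                vacation_end = t + random.expovariate(1.0 / vac_mean)
        else:                                              # vacation end
            if jobs > 0:
                service_end = t + random.expovariate(mu)
                vacation_end = None
            else:                                          # multiple vacations: another one
                vacation_end = t + random.expovariate(1.0 / vac_mean)
    return None

if __name__ == "__main__":
    random.seed(1)
    samples = [d for d in (first_overflow_duration() for _ in range(2000)) if d is not None]
    print(sum(samples) / len(samples))     # empirical mean of the first overflow duration
```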
A Comparative Study of 11 Local Health Department Organizational Networks
Merrill, Jacqueline; Keeling, Jonathan W.; Carley, Kathleen M.
2013-01-01
Context Although the nation’s local health departments (LHDs) share a common mission, variability in administrative structures is a barrier to identifying common, optimal management strategies. There is a gap in understanding what unifying features LHDs share as organizations that could be leveraged systematically for achieving high performance. Objective To explore sources of commonality and variability in a range of LHDs by comparing intraorganizational networks. Intervention We used organizational network analysis to document relationships between employees, tasks, knowledge, and resources within LHDs, which may exist regardless of formal administrative structure. Setting A national sample of 11 LHDs from seven states that differed in size, geographic location, and governance. Participants Relational network data were collected via an on-line survey of all employees in 11 LHDs. A total of 1 062 out of 1 239 employees responded (84% response rate). Outcome Measures Network measurements were compared using coefficient of variation. Measurements were correlated with scores from the National Public Health Performance Assessment and with LHD demographics. Rankings of tasks, knowledge, and resources were correlated across pairs of LHDs. Results We found that 11 LHDs exhibited compound organizational structures in which centralized hierarchies were coupled with distributed networks at the point of service. Local health departments were distinguished from random networks by a pattern of high centralization and clustering. Network measurements were positively associated with performance for 3 of 10 essential services (r > 0.65). Patterns in the measurements suggest how LHDs adapt to the population served. Conclusions Shared network patterns across LHDs suggest where common organizational management strategies are feasible. This evidence supports national efforts to promote uniform standards for service delivery to diverse populations. PMID:20445462
NASA Astrophysics Data System (ADS)
Schneider, J.; Klein, A.; Mannweiler, C.; Schotten, H. D.
2011-08-01
In the future, sensors will enable a large variety of new services in different domains. Important application areas are service adaptations in fixed and mobile environments, ambient assisted living, home automation, traffic management, as well as management of smart grids. All these applications share a common property: the usage of networked sensors and actuators. To ensure an efficient deployment of such sensor-actuator networks, concepts and frameworks for managing and distributing sensor data, as well as for triggering actuators, need to be developed. In this paper, we present an architecture for integrating sensors and actuators into the future Internet. In our concept, all sensors and actuators are connected via gateways to the Internet, which is used as the comprehensive transport medium. Additionally, an entity is needed for registering all sensors and actuators and managing sensor data requests. We decided to use a hierarchical structure, comparable to the Domain Name Service. This approach realizes a cost-efficient architecture providing "plug and play" capabilities and accounting for privacy issues.
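A hierarchical, DNS-like registry of the kind described can be sketched as a tree of delegating nodes, where each label in a sensor name is resolved by the next level down. The naming scheme and gateway endpoint below are illustrative assumptions, not part of the paper's architecture.

```python
# Toy hierarchical sensor registry, resolved by delegation like DNS.
# Node names, the separator, and the registry layout are illustrative.

class RegistryNode:
    def __init__(self, name):
        self.name = name
        self.children = {}     # sub-domain label -> RegistryNode
        self.sensors = {}      # sensor label -> gateway endpoint

    def register(self, labels, endpoint):
        """labels: e.g. ['de', 'kl', 'building7', 'temp03']"""
        if len(labels) == 1:
            self.sensors[labels[0]] = endpoint
        else:
            child = self.children.setdefault(labels[0], RegistryNode(labels[0]))
            child.register(labels[1:], endpoint)

    def resolve(self, labels):
        if len(labels) == 1:
            return self.sensors.get(labels[0])
        child = self.children.get(labels[0])
        return child.resolve(labels[1:]) if child else None

if __name__ == "__main__":
    root = RegistryNode(".")
    root.register("de.kl.building7.temp03".split("."), "gw7.example.net:5683")
    print(root.resolve("de.kl.building7.temp03".split(".")))
```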
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linvill, Carl; Brutkoski, Donna
Mexico's energy reform will have far-reaching effects on how people produce and consume electricity in the country. Market liberalization will open the door to an increasing number of options for Mexican residential, commercial, and industrial consumers, and distributed generation (DG), which for Mexico includes generators of less than 500 kilowatts (kW) of capacity connected to the distribution network. Distributed generation is an option for consumers who want to produce their own electricity and provide electricity services to others. This report seeks to provide guidance to Mexican officials on designing DG economic and regulatory policies.
The Aerospace Energy Systems Laboratory: A BITBUS networking application
NASA Technical Reports Server (NTRS)
Glover, Richard D.; Oneill-Rood, Nora
1989-01-01
The NASA Ames-Dryden Flight Research Facility developed a computerized aircraft battery servicing facility called the Aerospace Energy Systems Laboratory (AESL). This system employs distributed processing, with communications provided by a 2.4-megabit BITBUS local area network. Customized handlers provide real-time status, remote command, and file transfer protocols between a central system running the iRMX-II operating system and ten slave stations running the iRMX-I operating system. The hardware configuration and software components required to implement this BITBUS application are described.
[Research on tumor information grid framework].
Zhang, Haowei; Qin, Zhu; Liu, Ying; Tan, Jianghao; Cao, Haitao; Chen, Youping; Zhang, Ke; Ding, Yuqing
2013-10-01
In order to realize tumor disease information sharing and unified management, we utilized grid technology to effectively integrate the data and software resources distributed across various medical institutions, so that these heterogeneous resources become consistent and interoperable in both semantics and syntax. This article describes the tumor grid framework; the services are described in the Web Service Description Language (WSDL) and XML Schema Definition (XSD), and clients use the serialized documents to operate on the distributed resources. The service objects can be built with the Unified Modeling Language (UML) as middleware to create application programming interfaces. All grid resources are registered in the index and released in the form of Web Services based on the Web Services Resource Framework (WSRF). Using this system, we can build a multi-center, large-sample, networked tumor disease resource sharing framework to improve the level of development in medical scientific research institutions and the patients' quality of life.
Forecasting short-term data center network traffic load with convolutional neural networks.
Mozo, Alberto; Ordozgoiti, Bruno; Gómez-Canaval, Sandra
2018-01-01
Efficient resource management in data centers is of central importance to content service providers as 90 percent of the network traffic is expected to go through them in the coming years. In this context we propose the use of convolutional neural networks (CNNs) to forecast short-term changes in the amount of traffic crossing a data center network. This value is an indicator of virtual machine activity and can be utilized to shape the data center infrastructure accordingly. The behaviour of network traffic at the seconds scale is highly chaotic and therefore traditional time-series-analysis approaches such as ARIMA fail to obtain accurate forecasts. We show that our convolutional neural network approach can exploit the non-linear regularities of network traffic, providing significant improvements with respect to the mean absolute and standard deviation of the data, and outperforming ARIMA by an increasingly significant margin as the forecasting granularity is above the 16-second resolution. In order to increase the accuracy of the forecasting model, we exploit the architecture of the CNNs using multiresolution input distributed among separate channels of the first convolutional layer. We validate our approach with an extensive set of experiments using a data set collected at the core network of an Internet Service Provider over a period of 5 months, totalling 70 days of traffic at the one-second resolution.
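A minimal version of the multiresolution-channel idea can be sketched in PyTorch: the same traffic window is smoothed at several resolutions and fed as separate channels to the first 1-D convolutional layer. Layer sizes, the chosen resolutions, and the pooling are illustrative assumptions; this is not the architecture evaluated in the paper.

```python
# Sketch of a 1-D CNN whose first convolutional layer receives the same traffic
# window at several resolutions, one per input channel. Layer sizes, the chosen
# resolutions, and the pooling are illustrative, not the paper's architecture.
import torch
import torch.nn as nn

def multires_channels(series, factors=(1, 4, 16)):
    """series: 1-D tensor of per-second traffic volumes.
    Returns (len(factors), len(series)): each channel is the series smoothed at
    a coarser resolution and stretched back to the original length."""
    chans = []
    for f in factors:
        pooled = nn.functional.avg_pool1d(series.view(1, 1, -1), kernel_size=f, stride=f)
        up = nn.functional.interpolate(pooled, size=series.numel(), mode="nearest")
        chans.append(up.view(-1))
    return torch.stack(chans)

class TrafficCNN(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 1),          # forecast: traffic change over the next interval
        )

    def forward(self, x):              # x: (batch, channels, window_length)
        return self.net(x)

if __name__ == "__main__":
    window = torch.rand(256)                        # 256 s of synthetic traffic
    x = multires_channels(window).unsqueeze(0)      # shape (1, 3, 256)
    print(TrafficCNN()(x).shape)                    # torch.Size([1, 1])
```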
Disseminating educational innovations in health care practice: training versus social networks.
Jippes, Erik; Achterkamp, Marjolein C; Brand, Paul L P; Kiewiet, Derk Jan; Pols, Jan; van Engelen, Jo M L
2010-05-01
Improvements and innovation in health service organization and delivery have become more and more important due to the gap between knowledge and practice, rising costs, medical errors, and the organization of health care systems. Since training and education is widely used to convey and distribute innovative initiatives, we examined the effect that following an intensive Teach-the-Teacher training had on the dissemination of a new structured competency-based feedback technique of assessing clinical competencies among medical specialists in the Netherlands. We compared this with the effect of the structure of the social network of medical specialists, specifically the network tie strength (strong ties versus weak ties). We measured dissemination of the feedback technique by using a questionnaire filled in by Obstetrics & Gynecology and Pediatrics residents (n=63). Data on network tie strength was gathered with a structured questionnaire given to medical specialists (n=81). Social network analysis was used to compose the required network coefficients. We found a strong effect for network tie strength and no effect for the Teach-the-Teacher training course on the dissemination of the new structured feedback technique. This paper shows the potential that social networks have for disseminating innovations in health service delivery and organization. Further research is needed into the role and structure of social networks on the diffusion of innovations between departments and the various types of innovations involved. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Requirements for a network storage service
NASA Technical Reports Server (NTRS)
Kelly, Suzanne M.; Haynes, Rena A.
1991-01-01
Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), comprises multiple distributed local area networks (LAN's) residing in New Mexico and California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, compose most LAN's. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File Server (CFS). Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Service (NSS) and its requirements are described. An application or functional description of the NSS is given. The final section adds performance, capacity, and access constraints to the requirements.
Sustaining Teacher Control in a Blog-Based Personal Learning Environment
ERIC Educational Resources Information Center
Tomberg, Vladimir; Laanpere, Mart; Ley, Tobias; Normak, Peeter
2013-01-01
Various tools and services based on Web 2.0 (mainly blogs, wikis, social networking tools) are increasingly used in formal education to create personal learning environments, providing self-directed learners with more freedom, choice, and control over their learning. In such distributed and personalized learning environments, the traditional role…
Solar Energy Innovation Network | Solar Research | NREL
Related NREL work includes Coordinated Control Algorithms for Distributed Battery Energy Storage Systems to Provide Grid Support Services and Affordability of Renewable Energy through Options Analysis and Systems Design ("Options Analysis"), with analytical support provided to local governments, nonprofits, innovative companies, and system operators.
Mitigating Distributed Denial of Service Attacks with Dynamic Resource Pricing
2001-10-01
should be nearly comparable to a system that does not use the payment mechanisms. There is prior work on how pricing can be used to influence consumer ... behavior, and how to integrate pricing mechanisms with OS and network resource management mechanisms. In this paper, we instead focus on how pricing
NASA Technical Reports Server (NTRS)
Kratochvil, D.; Bowyer, J.; Bhushan, C.; Steinnagel, K.; Kaushal, D.; Al-Kinani, G.
1983-01-01
Development of a forecast of the total domestic telecommunications demand, identification of that portion of the telecommunications demand suitable for transmission by satellite systems, identification of that portion of the satellite market addressable by CPS systems, identification of that portion of the satellite market addressable by Ka-band CPS system, and postulation of a Ka-band CPS network on a nationwide and local level were achieved. The approach employed included the use of a variety of forecasting models, a parametric cost model, a market distribution model and a network optimization model. Forecasts were developed for: 1980, 1990, 2000; voice, data and video services; terrestrial and satellite delivery modes; and C, Ku and Ka-bands.
NASA Astrophysics Data System (ADS)
Shamugam, Veeramani; Murray, I.; Leong, J. A.; Sidhu, Amandeep S.
2016-03-01
Cloud computing provides services on demand instantly, such as access to network infrastructure consisting of computing hardware, operating systems, network storage, databases and applications. Network usage and demands are growing at a very fast rate and, to meet the current requirements, there is a need for automatic infrastructure scaling. Traditional networks are difficult to automate because the decision-making processes for switching and routing are distributed and collocated on the devices themselves. Managing complex environments using traditional networks is time-consuming and expensive, especially in the case of generating virtual machines, migration and network configuration. To mitigate these challenges, network operations require efficient, flexible, agile and scalable software defined networks (SDN). This paper discusses various issues in SDN and suggests how to mitigate network management related issues. A private cloud prototype test bed was set up to implement the SDN on the OpenStack platform to test and evaluate the various network performances provided by the various configurations.
NASA Technical Reports Server (NTRS)
Falke, Stefan; Husar, Rudolf
2011-01-01
The goal of this REASoN applications and technology project is to deliver and use Earth Science Enterprise (ESE) data and tools in support of air quality management. Its scope falls within the domain of air quality management and aims to develop a federated air quality information sharing network that includes data from NASA, EPA, US states and others. Project goals were achieved through access to satellite and ground observation data, web services information technology, interoperability standards, and air quality community collaboration. In contributing to a network of NASA ESE data in support of particulate air quality management, the project developed access to distributed data, built Web infrastructure, and created tools for data processing and analysis. The key technologies used in the project include emerging web services for developing self-describing and modular data access and processing tools, and a service-oriented architecture for chaining web services together to assemble customized air quality management applications. The technology and tools required for this project were developed within DataFed.net, a shared infrastructure that supports collaborative atmospheric data sharing and processing web services. Much of the collaboration was facilitated through community interactions in the Federation of Earth Science Information Partners (ESIP) Air Quality Workgroup. The main activities during the project that successfully advanced DataFed, enabled air quality applications and established community-oriented infrastructures were: developing access to distributed data (surface and satellite); building Web infrastructure to support data access, processing and analysis; creating tools for data processing and analysis; and fostering air quality community collaboration and interoperability.
Responding to the Event Deluge
NASA Technical Reports Server (NTRS)
Williams, Roy D.; Barthelmy, Scott D.; Denny, Robert B.; Graham, Matthew J.; Swinbank, John
2012-01-01
We present the VOEventNet infrastructure for large-scale rapid follow-up of astronomical events, including selection, annotation, machine intelligence, and coordination of observations. The VOEvent standard is central to this vision, with distributed and replicated services rather than centralized facilities. We also describe some of the event brokers, services, and software that are connected to the network. These technologies will become more important in the coming years, with new event streams from Gaia, LOFAR, LIGO, LSST, and many others.
AUTOMATED UTILITY SERVICE AREA ASSESSMENT UNDER EMERGENCY CONDITIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
G. TOOLE; S. LINGER
2001-01-01
All electric utilities serve power to their customers through a variety of functional levels, notably substations. The majority of these components are distribution substations operating at lower voltages, while a small fraction are transmission substations. There is an associated geographical area that encompasses the customers who are served, defined as the service area. Analysis of substation service areas is greatly complicated by several factors: distribution networks are often highly interconnected, which allows a multitude of possible switching operations; also, utilities dynamically alter the network topology in order to respond to emergency events. As a result, the service area for a substation can change radically. A utility will generally attempt to minimize the number of customers outaged by switching affected loads to alternate substations. In this manner, all or a portion of a disabled substation's load may be served by one or more adjacent substations. This paper describes a suite of analytical tools developed at Los Alamos National Laboratory (LANL), which address the problem of determining how a utility might respond to such emergency events. The estimated outage areas derived using the tools are overlaid onto other geographical and electrical layers in a geographic information system (GIS) software application. The effects of a power outage on a population, other infrastructures, or other physical features can be inferred by the proximity of these features to the estimated outage area.
Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service.
Bao, Shunxing; Plassard, Andrew J; Landman, Bennett A; Gokhale, Aniruddha
2017-04-01
Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based "medical image processing-as-a-service" offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop's distributed file system. Despite this promise, HBase's load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NiFTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach even for relatively small file sets. Moreover, file access latency is lower than network attached storage.
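The hierarchical row-key idea can be illustrated with a simple key-construction helper: keys sort by project, subject, session, scan, and slice, so related images stay contiguous in an ordered store such as HBase and one prefix scan retrieves an entire acquisition. The separator, field widths, and example values are our own assumptions, not the exact scheme from the paper.

```python
# Sketch of a hierarchical row key for imaging data, so that rows belonging to
# the same project/subject/session/scan sort contiguously in an ordered store
# such as HBase. The separator, field widths, and example values are
# illustrative assumptions, not the exact scheme used in the paper.

def image_row_key(project, subject, session, scan, slice_idx):
    # Zero-padding keeps lexicographic order consistent with numeric order.
    return "|".join([project, subject, session, scan, f"{slice_idx:05d}"]).encode()

def prefix_for_scan(project, subject, session, scan):
    """Prefix used to fetch all slices of one acquisition with a single range scan."""
    return "|".join([project, subject, session, scan, ""]).encode()

if __name__ == "__main__":
    print(image_row_key("proj01", "subj042", "sess1", "T1w", 7))
    print(prefix_for_scan("proj01", "subj042", "sess1", "T1w"))
```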
International GPS (Global Positioning System) Service for Geodynamics
NASA Technical Reports Server (NTRS)
Zumberge, J. F. (Editor); Liu, R. (Editor); Neilan, R. E. (Editor)
1995-01-01
The International GPS (Global Positioning System) Service for Geodynamics (IGS) began formal operation on January 1, 1994. This first annual report is divided into sections, which mirror different aspects of the service. Section (1) contains general information, including the history of the IGS, its organization, and the global network of GPS tracking sites; (2) contains information on the Central Bureau Information System; (3) describes the International Earth Rotation Service (IERS); (4) details the collection and distribution of IGS data in Data Center reports; (6) describes how the IGS Analysis Centers generate their products; and (7) contains miscellaneous contributions from other organizations that share common interests with the IGS.
Integrating distributed multimedia systems and interactive television networks
NASA Astrophysics Data System (ADS)
Shvartsman, Alex A.
1996-01-01
Recent advances in networks, storage and video delivery systems are about to make commercial deployment of interactive multimedia services over digital television networks a reality. The emerging components individually have the potential to satisfy the technical requirements in the near future. However, no single vendor is offering a complete end-to-end commercially-deployable and scalable interactive multimedia applications systems over digital/analog television systems. Integrating a large set of maturing sub-assemblies and interactive multimedia applications is a major task in deploying such systems. Here we deal with integration issues, requirements and trade-offs in building delivery platforms and applications for interactive television services. Such integration efforts must overcome lack of standards, and deal with unpredictable development cycles and quality problems of leading- edge technology. There are also the conflicting goals of optimizing systems for video delivery while enabling highly interactive distributed applications. It is becoming possible to deliver continuous video streams from specific sources, but it is difficult and expensive to provide the ability to rapidly switch among multiple sources of video and data. Finally, there is the ever- present challenge of integrating and deploying expensive systems whose scalability and extensibility is limited, while ensuring some resiliency in the face of inevitable changes. This proceedings version of the paper is an extended abstract.
NASA Technical Reports Server (NTRS)
Kratochvil, D.; Bowyer, J.; Bhushan, C.; Steinnagel, K.; Kaushal, D.; Al-Kinani, G.
1984-01-01
The overall purpose was to forecast the potential United States domestic telecommunications demand for satellite-provided customer premises voice, data and video services through the year 2000, so that this information on service demand would be available to aid in NASA program planning. To accomplish this overall purpose the following objectives were achieved: (1) development of a forecast of the total domestic telecommunications demand; (2) identification of that portion of the telecommunications demand suitable for transmission by satellite systems; (3) identification of that portion of the satellite market addressable by customer premises services (CPS) systems; (4) identification of that portion of the satellite market addressable by Ka-band CPS systems; and (5) postulation of a Ka-band CPS network on a nationwide and local level. The approach employed included the use of a variety of forecasting models, a parametric cost model, a market distribution model and a network optimization model. Forecasts were developed for: 1980, 1990, and 2000; voice, data and video services; terrestrial and satellite delivery modes; and C, Ku and Ka-bands.
Coordinated microgrid investment and planning process considering the system operator
Armendáriz, M.; Heleno, M.; Cardoso, G.; ...
2017-05-12
Nowadays, a significant number of distribution systems are facing problems in accommodating more photovoltaic (PV) capacity, namely due to overvoltages during daylight periods. This has an impact on private investments in distributed energy resources (DER), since it occurs exactly when PV prices are becoming attractive, and the opportunity for an energy transition based on solar technologies is being wasted. In particular, this limitation of the networks is a barrier for larger consumers, such as commercial and public buildings, aiming at investing in PV capacity and starting to operate as microgrids connected to the MV network. To address this challenge, this paper presents a coordinated approach to the microgrid investment and planning problem, where the system operator and the microgrid owner collaborate to improve the voltage control capabilities of the distribution network, increasing the PV potential. The results prove that this collaboration has the benefit of increasing the value of the microgrid investments while improving the quality of service of the system, and that it should be considered in the future regulatory framework.
Specialized care for people with AIDS in the state of Ceara, Brazil
Pedrosa, Nathália Lima; Santos, Vanessa da Frota; Paiva, Simone de Sousa; Galvão, Marli Teresinha Gimeniz; de Almeida, Rosa Lívia Freitas; Kerr, Ligia Regina Franco Sansigolo
2015-01-01
OBJECTIVE To analyze if the distribution of specialized care services for HIV/AIDS is associated with AIDS rates. METHODS Ecological study, for which the distribution of 10 specialized care services in the Ceara state, Northeastern Brazil, was obtained, and the mean rates of the disease were estimated per mesoregion. We evaluated 7,896 individuals who had been diagnosed with AIDS, were aged 13 years or older, lived in Ceara, and had been informed of their condition between 2001 and 2011. Maps were constructed to verify the relationship between the distribution of AIDS cases and institutionalized support networks in the 2001-2006 and 2007-2011 periods. BoxMap and LisaMap were used for data analysis. The Voronoi diagram was applied for the distribution of the studied services. RESULTS Specialized care services concentrated in AIDS clusters in the metropolitan area. The Noroeste Cearense and west of the Sertoes Cearenses had high AIDS rates, but a low number of specialized care services over time. Two of these services were implemented where clusters of the disease exist in the second period. The application of the Voronoi diagram showed that the specialized care services located outside the metropolitan area covered a large territory. We identified one polygon that had no services. CONCLUSIONS The scenario of AIDS cases spread away from major urban areas demands the creation of social support services in areas other than the capital and the metropolitan area of the state; this can reduce access barriers to these institutions. It is necessary to create specialized care services for HIV/AIDS in the Noroeste Cearense and north of Jaguaribe. PMID:26487292
Demonstrating NaradaBrokering as a Middleware Fabric for Grid-based Remote Visualization Services
NASA Astrophysics Data System (ADS)
Pallickara, S.; Erlebacher, G.; Yuen, D.; Fox, G.; Pierce, M.
2003-12-01
Remote Visualization Services (RVS) have tended to rely on approaches based on the client-server paradigm. Here we demonstrate our approach - based on a distributed brokering infrastructure, NaradaBrokering [1] - that relies on distributed, asynchronous and loosely coupled interactions to meet the requirements and constraints of RVS. In our approach to RVS, services advertise their capabilities to the broker network, which manages these service advertisements. Among the services considered within our system are those that perform graphic transformations, those that mediate access to specialized datasets, and finally those that manage the execution of specified tasks. There can be multiple instances of each of these services, and the system ensures that the load for a given service is distributed efficiently over these service instances. We will demonstrate the implementation of concepts that we outlined in the oral presentation. This involves two or more visualization servers interacting asynchronously with multiple clients through NaradaBrokering. The communicating entities may exchange SOAP [2] (Simple Object Access Protocol) messages. SOAP is a lightweight protocol for the exchange of information in a decentralized, distributed environment. It is an XML-based protocol that consists of three parts: an envelope that describes what is in a message and how to process it, rules for expressing instances of application-defined data types, and a convention for representing remote-invocation-related operations. Furthermore, we will also demonstrate how clients can retrieve their results after prolonged disconnects or after any failures that might have taken place. The entities, services and clients alike, are not limited by the geographical distances that separate them. We are planning to test this system in the context of trans-Atlantic links separating interacting entities. [1] The NaradaBrokering Project: http://www.naradabrokering.org [2] Newcomer, E., 2002, Understanding web services: XML, WSDL, SOAP, and UDDI, Addison Wesley Professional.
Documentary of MFENET, a national computer network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shuttleworth, B.O.
1977-06-01
The national Magnetic Fusion Energy Computer Network (MFENET) is a newly operational star network of geographically separated heterogeneous hosts and a communications subnetwork of PDP-11 processors. Host processors interfaced to the subnetwork currently include a CDC 7600 at the Central Computer Center (CCC) and several DECsystem-10's at User Service Centers (USC's). The network was funded by a U.S. government agency (ERDA) to provide in an economical manner the needed computational resources to magnetic confinement fusion researchers. Phase I operation of MFENET distributed the processing power of the CDC 7600 among the USC's through the provision of file transport between any two hosts and remote job entry to the 7600. Extending the capabilities of Phase I, MFENET Phase II provided interactive terminal access to the CDC 7600 from the USC's. A file management system is maintained at the CCC for all network users. The history and development of MFENET are discussed, with emphasis on the protocols used to link the host computers and the USC software. Comparisons are made of MFENET versus ARPANET (Advanced Research Projects Agency Computer Network) and DECNET (Digital Distributed Network Architecture). DECNET and MFENET host-to-host, host-to-CCP, and link protocols are discussed in detail. The USC--CCP interface is described briefly. 43 figures, 2 tables.
The GGOS Global Space Geodesy Network and its Evolution
NASA Astrophysics Data System (ADS)
Pearlman, M. R.; Pavlis, E. C.; Ma, C.; Noll, C. E.; Neilan, R. E.; Stowers, D. A.; Wetzel, S.
2013-12-01
The improvements in the reference frame and other space geodesy data products spelled out in the GGOS 2020 plan will evolve over time as new space geodesy sites enhance the global distribution of the network and new technologies are implemented at the sites, thus enabling improved data processing and analysis. The goal of 30 globally distributed core sites with VLBI, SLR, GNSS and DORIS (where available) will take time to materialize. Co-location sites with less than the full core complement will continue to play a very important role in filling out the network while it is evolving and even after full implementation. GGOS, through its Call for Participation, bi-lateral and multi-lateral discussions, and work through the IAG Services, has been encouraging current groups to upgrade and new groups to join the activity. Simulations examine the projected accuracy and stability of the network that would exist in five and ten years' time, were the proposed expansion to fully materialize by then. Over the last year additional sites have joined the GGOS network, and ground techniques have continued to make progress in new technology systems. This talk will give an update on the current expansion of the global network and the projection for the network configuration that we forecast over the next 10 years.
Hybrid evolutionary computing model for mobile agents of wireless Internet multimedia
NASA Astrophysics Data System (ADS)
Hortos, William S.
2001-03-01
The ecosystem is used as an evolutionary paradigm of natural laws for distributed information retrieval via mobile agents, allowing the computational load to be shifted to server nodes of wireless networks while reducing the traffic on communication links. Based on the Food Web model, a set of computational rules of natural balance forms the outer stage to control the evolution of mobile agents providing multimedia services with a wireless Internet protocol (WIP). The evolutionary model shows how mobile agents should behave with the WIP, in particular, how mobile agents can cooperate, compete and learn from each other, based on an underlying competition for radio network resources to establish the wireless connections to support the quality of service (QoS) of user requests. Mobile agents are also allowed to clone themselves, propagate and communicate with other agents. A two-layer model is proposed for agent evolution: the outer layer is based on the law of natural balancing, and the inner layer is based on a discrete version of a Kohonen self-organizing feature map (SOFM) to distribute network resources to meet QoS requirements. The former is embedded in the higher OSI layers of the WIP, while the latter is used in the resource management procedures of Layers 2 and 3 of the protocol. Algorithms for the distributed computation of mobile agent evolutionary behavior are developed by adding a learning state to the agent evolution state diagram. When an agent is in an indeterminate state, it can communicate with other agents, and computing models can be replicated from other agents. The agent then transitions to the mutating state to wait for a new information-retrieval goal. When a wireless terminal or station lacks a network resource, an agent in the suspending state can change its policy to submit to the environment before it transitions to the searching state. The agents learn from agent state information entered into an external database. In the cloning process, two agents on a host station sharing a common goal can be merged, or married, to compose a new agent. The two-layer set of algorithms for mobile agent evolution, performed in a distributed processing environment, is applied to the QoS management functions of the IP multimedia (IM) sub-network of the third-generation (3G) Wideband Code-Division Multiple Access (W-CDMA) wireless network.
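A minimal sketch of the agent state cycle described above, assuming illustrative state names and transition rules (the abstract names the indeterminate, mutating, suspending and searching states plus an added learning state, but not the precise conditions); this is not the paper's algorithm, only a way to visualize the outer-layer behavior in code.

```python
# Illustrative sketch only: a tiny state machine for the mobile-agent
# evolution states named in the abstract. The transition conditions below
# are assumptions for illustration, not the paper's actual rules.
from enum import Enum, auto

class AgentState(Enum):
    INDETERMINATE = auto()
    LEARNING = auto()
    MUTATING = auto()
    SUSPENDING = auto()
    SEARCHING = auto()

class MobileAgent:
    def __init__(self):
        self.state = AgentState.SEARCHING

    def step(self, goal_found: bool, resource_available: bool):
        """Advance one transition based on the (simulated) environment."""
        if self.state is AgentState.SEARCHING and goal_found:
            self.state = AgentState.INDETERMINATE      # may exchange models with peers
        elif self.state is AgentState.INDETERMINATE:
            self.state = AgentState.LEARNING           # replicate computing models from peers
        elif self.state is AgentState.LEARNING:
            self.state = AgentState.MUTATING           # wait for a new retrieval goal
        elif self.state is AgentState.MUTATING and not resource_available:
            self.state = AgentState.SUSPENDING         # adapt policy to the environment
        elif self.state is AgentState.SUSPENDING and resource_available:
            self.state = AgentState.SEARCHING
        return self.state

agent = MobileAgent()
print(agent.step(goal_found=True, resource_available=True))
```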
NASA Astrophysics Data System (ADS)
Papers are presented on ISDN, mobile radio systems and techniques for digital connectivity, centralized and distributed algorithms in computer networks, communications networks, quality assurance and impact on cost, adaptive filters in communications, the spread spectrum, signal processing, video communication techniques, and digital satellite services. Topics discussed include performance evaluation issues for integrated protocols, packet network operations, the computer network theory and multiple-access, microwave single sideband systems, switching architectures, fiber optic systems, wireless local communications, modulation, coding, and synchronization, remote switching, software quality, transmission, and expert systems in network operations. Consideration is given to wide area networks, image and speech processing, office communications application protocols, multimedia systems, customer-controlled network operations, digital radio systems, channel modeling and signal processing in digital communications, earth station/on-board modems, computer communications system performance evaluation, source encoding, compression, and quantization, and adaptive communications systems.
A Serviced-based Approach to Connect Seismological Infrastructures: Current Efforts at the IRIS DMC
NASA Astrophysics Data System (ADS)
Ahern, Tim; Trabant, Chad
2014-05-01
As part of the COOPEUS initiative to build infrastructure that connects European and US research infrastructures, IRIS has advocated for the development of federated services based upon internationally recognized standards using web services. By deploying International Federation of Digital Seismograph Networks (FDSN) endorsed web services at multiple data centers in the US and Europe, we have shown that integration within the seismological domain can be realized. Deploying identical methods to invoke the web services at multiple centers significantly simplifies the way a scientist can access seismic data (time series, metadata, and earthquake catalogs) from distributed federated centers. IRIS has developed a federator that helps a user identify where seismic data from global seismic networks can be accessed. The web-services-based federator can build the appropriate URLs and return them to client software running on the scientist's own computer. These URLs are then used to directly pull data from the distributed centers in a peer-based fashion. IRIS is also involved in deploying web services across horizontal domains. As part of the US National Science Foundation's (NSF) EarthCube effort, an IRIS-led EarthCube Building Blocks project is underway. When completed, this project will aid in the discovery, access, and usability of data across multiple geoscience domains. This presentation will summarize current IRIS efforts in building vertical integration infrastructure within seismology, working closely with 5 centers in Europe and 2 centers in the US, as well as first steps toward horizontal integration of data from 14 different domains in the US, in Europe, and around the world.
A Novel College Network Resource Management Method using Cloud Computing
NASA Astrophysics Data System (ADS)
Lin, Chen
At present, information construction in colleges mainly involves the construction of college networks and management information systems, and many problems arise during this process. Cloud computing is a development of distributed processing, parallel processing and grid computing that stores data in the cloud, places software and services in the cloud, and builds on top of various standards and protocols, so that resources can be accessed through all kinds of equipment. This article introduces cloud computing and its functions, then analyzes the existing problems of college network resource management; finally, cloud computing technology and methods are applied to the construction of a college information sharing platform.
An integrated multimedia medical information network system.
Yamamoto, K; Makino, J; Sasagawa, N; Nagira, M
1998-01-01
An integrated multimedia medical information network system at Shimane Medical University has been developed to organize medical information generated from each section and to provide information services useful for education, research and clinical practice. This report describes the outline of our system. It is designed to serve as a distributed database for electronic medical records and images. We are developing an MML engine that is to be linked to the World Wide Web (WWW) network system. To the users, this system will present an integrated multimedia representation of patient records, providing access to both the image and text-based data required for effective clinical decision making and medical education.
"Just Another Distribution Channel?"
NASA Astrophysics Data System (ADS)
Lemstra, Wolter; de Leeuw, Gerd-Jan; van de Kar, Els; Brand, Paul
The telecommunications-centric business model of mobile operators is under attack due to technological convergence in the communication and content industries. This has resulted in a plethora of academic contributions on the design of new business models and service platform architectures. However, a discussion of the challenges that operators are facing in adopting these models is lacking. We assess these challenges by considering the mobile network as part of the value system of the content industry. We will argue that from the perspective of a content provider the mobile network is 'just another' distribution channel. Strategic options available for the mobile communication operators are to deliver an excellent distribution channel for content delivery or to move upwards in the value chain by becoming a content aggregator. To become a mobile content aggregator, operators will have to develop or acquire complementary resources and capabilities. Whether this strategic option is sustainable remains open.
Chang, Hsin Hsin; Chang, Ching Sheng
2008-01-01
Background Enhancing service efficiency and quality has always been one of the most important factors to heighten competitiveness in the health care service industry. Thus, how to utilize information technology to reduce work load for staff and expeditiously improve work efficiency and healthcare service quality is presently the top priority for every healthcare institution. In this fast changing modern society, e-health care systems are currently the best possible way to achieve enhanced service efficiency and quality under the restraint of healthcare cost control. The electronic medical record system and the online appointment system are the core features in employing e-health care systems in the technology-based service encounters. Methods This study implemented the Service Encounters Evaluation Model, the European Customer Satisfaction Index, the Attribute Model and the Overall Affect Model for model inference. A total of 700 copies of questionnaires from two authoritative southern Taiwan medical centers providing the electronic medical record system and the online appointment system service were distributed, among which 590 valid copies were retrieved with a response rate of 84.3%. We then used SPSS 11.0 and the Linear Structural Relationship Model (LISREL 8.54) to analyze and evaluate the data. Results The findings are as follows: (1) Technology-based service encounters have a positive impact on service quality, but not patient satisfaction; (2) After experiencing technology-based service encounters, the cognition of the service quality has a positive effect on patient satisfaction; and (3) Network security contributes a positive moderating effect on service quality and patient satisfaction. Conclusion It revealed that the impact of electronic workflow (online appointment system service) on service quality was greater than electronic facilities (electronic medical record systems) in technology-based service encounters. Convenience and credibility are the most important factors of service quality in technology-based service encounters that patients demand. Due to the openness of networks, patients worry that transaction information could be intercepted; also, the credibility of the hospital involved is even a bigger concern, as patients have a strong sense of distrust. Therefore, in the operation of technology-based service encounters, along with providing network security, it is essential to build an atmosphere of psychological trust. PMID:18419820
Towards the distribution network of time and frequency
NASA Astrophysics Data System (ADS)
Lipiński, M.; Krehlik, P.; Śliwczyński, Ł.; Buczek, Ł.; Kołodziej, J.; Nawrocki, J.; Nogaś, P.; Dunst, P.; Lemański, D.; Czubla, A.; Pieczerak, J.; Adamowicz, W.; Pawszak, T.; Igalson, J.; Binczewski, A.; Bogacki, W.; Ostapowicz, P.; Stroiński, M.; Turza, K.
2014-05-01
In the paper the genesis, current stage and perspectives of the OPTIME project are described. The main goal of the project is to demonstrate that the fiber-optic technology for transferring atomic clock reference signals, newly developed at AGH, is ready to be used in building the domestic time and frequency distribution network. In the first part we summarize the two-year continuous operation of the 420 km-long link connecting the Laboratory of Time and Frequency at the Central Office of Measures (GUM) in Warsaw and the Time Service Laboratory at the Astrogeodynamic Observatory (AOS) in Borowiec near Poznan. For the first time, we report the two-year comparison of the UTC(PL) and UTC(AOS) atomic timescales over this link, and we refer it to the results of comparisons performed by GPS-based methods. We also address some practical aspects of maintaining time and frequency dissemination over a fiber optical network. In the second part of the paper the concept of the general architecture of the distribution network, with two Reference Time and Frequency Laboratories and local repositories, is proposed. Moreover, a brief project of the second branch, connecting repositories in Poznan (Polish Supercomputing and Networking Center) and Torun (Nicolaus Copernicus University) with the first end-users in Torun, such as the National Laboratory of Atomic, Molecular and Optical Physics and the Nicolaus Copernicus Astronomical Center, is described. In the final part, the perspectives of developing the network both in the domestic range and through extension with international connections are presented.
Enabling NVM for Data-Intensive Scientific Services
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carns, Philip; Jenkins, John; Seo, Sangmin
Specialized, transient data services are playing an increasingly prominent role in data-intensive scientific computing. These services offer flexible, on-demand pairing of applications with storage hardware using semantics that are optimized for the problem domain. Concurrent with this trend, upcoming scientific computing and big data systems will be deployed with emerging NVM technology to achieve the highest possible price/productivity ratio. Clearly, therefore, we must develop techniques to facilitate the confluence of specialized data services and NVM technology. In this work we explore how to enable the composition of NVM resources within transient distributed services while still retaining their essential performance characteristics. Our approach involves eschewing the conventional distributed file system model and instead projecting NVM devices as remote microservices that leverage user-level threads, RPC services, RMA-enabled network transports, and persistent memory libraries in order to maximize performance. We describe a prototype system that incorporates these concepts, evaluate its performance for key workloads on an exemplar system, and discuss how the system can be leveraged as a component of future data-intensive architectures.
Federal Emergency Management Information System (FEMIS) system administration guide, version 1.4.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arp, J.A.; Burnett, R.A.; Carter, R.J.
The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the US Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected with other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment. The UNIX server provides Oracle relational database management system (RDBMS) services, ARC/INFO GIS (optional) capabilities, and basic file management services. PNNL-developed utilities that reside on the server include the Notification Service, the Command Service that executes the evacuation model, and AutoRecovery. To operate FEMIS, the Application Software must have access to a site-specific FEMIS emergency management database. Data that pertains to an individual EOC's jurisdiction is stored on the EOC's local server. Information that needs to be accessible to all EOCs is automatically distributed by the FEMIS database to the other EOCs at the site.
Seasonal change of topology and resilience of ecological networks in wetlandscapes
NASA Astrophysics Data System (ADS)
Bin, Kim; Park, Jeryang
2017-04-01
Wetlands distributed in a landscape provide various ecosystem services including habitat for flora and fauna, hydrologic controls, and biogeochemical processes. The hydrologic regime of each wetland in a given landscape varies with hydro-climatic and geological conditions as well as the bathymetry, forming a certain pattern in the wetland area distribution and spatial organization. However, this large-scale pattern also changes over time, as the wetland complex is subject to stochastic hydro-climatic forcing on various temporal scales. Consequently, temporal variation in the spatial structure of wetlands inevitably affects the dispersal ability of species depending on those wetlands as habitat. Here, we numerically show (1) the spatiotemporal variation of wetlandscapes under seasonally changing stochastic rainfall forcing and (2) the corresponding ecological networks, in which dispersal ranges are formed either deterministically or stochastically. We selected four vernal pool regions with distinct climate conditions in California. The results indicate that the spatial structure of wetlands in a landscape, measured by the wetland area frequency distribution, changes with the seasonal hydro-climatic condition but eventually recovers to the initial state. However, the corresponding ecological networks, whose structure and function change with the distances between wetlands and are measured by degree distribution and network efficiency, may not recover to the initial state, especially in regions with a high seasonal dryness index. Moreover, we observed that the changes in both the spatial structure of wetlands in a landscape and the corresponding ecological networks exhibit hysteresis over seasons. Our analysis indicates that the hydrologic and ecological resilience of a wetlandscape may be low in a dry region with seasonal hydro-climatic forcing. Implications of these results for modelling ecological networks depending on hydrologic systems, especially for conservation purposes, are discussed.
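As a toy illustration of the two network measures named above (degree distribution and network efficiency), the following sketch builds a hypothetical wetland graph in which wetlands are connected when they lie within an assumed dispersal range; the coordinates and the 1.5 km threshold are invented values, not data from the study.

```python
# Hedged sketch: nodes are wetlands, an edge joins wetlands closer than the
# assumed dispersal range; degree distribution and global efficiency are then
# computed with networkx. All numbers here are made up for illustration.
import itertools
import math
import networkx as nx

wetlands = {"A": (0.0, 0.0), "B": (1.0, 0.2), "C": (2.5, 0.0), "D": (1.2, 1.3)}
dispersal_range_km = 1.5

G = nx.Graph()
G.add_nodes_from(wetlands)
for (u, pu), (v, pv) in itertools.combinations(wetlands.items(), 2):
    if math.dist(pu, pv) <= dispersal_range_km:
        G.add_edge(u, v)

print("degree histogram:", nx.degree_histogram(G))    # count of nodes per degree
print("global efficiency:", nx.global_efficiency(G))  # mean of 1/shortest-path length
```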
Cybercare 2.0: meeting the challenge of the global burden of disease in 2030.
Rosen, Joseph M; Kun, Luis; Mosher, Robyn E; Grigg, Elliott; Merrell, Ronald C; Macedonia, Christian; Klaudt-Moreau, Julien; Price-Smith, Andrew; Geiling, James
In this paper, we propose to advance and transform today's healthcare system using a model of networked health care called Cybercare. Cybercare means "health care in cyberspace" - for example, doctors consulting with patients via videoconferencing across a distributed network; or patients receiving care locally - in neighborhoods, "minute clinics," and homes - using information technologies such as telemedicine, smartphones, and wearable sensors to link to tertiary medical specialists. This model contrasts with traditional health care, in which patients travel (often a great distance) to receive care from providers in a central hospital. The Cybercare model shifts health care provision from hospital to home; from specialist to generalist; and from treatment to prevention. Cybercare employs advanced technology to deliver services efficiently across the distributed network - for example, using telemedicine, wearable sensors and cell phones to link patients to specialists and upload their medical data in near-real time; using information technology (IT) to rapidly detect, track, and contain the spread of a global pandemic; or using cell phones to manage medical care in a disaster situation. Cybercare uses seven "pillars" of technology to provide medical care: genomics; telemedicine; robotics; simulation, including virtual and augmented reality; artificial intelligence (AI), including intelligent agents; the electronic medical record (EMR); and smartphones. All these technologies are evolving and blending. The technologies are integrated functionally because they underlie the Cybercare network, and/or form part of the care for patients using that distributed network. Moving health care provision to a networked, distributed model will save money, improve outcomes, facilitate access, improve security, increase patient and provider satisfaction, and may mitigate the international global burden of disease. In this paper we discuss how Cybercare is being implemented now, and envision its growth by 2030.
Building AN International Polar Data Coordination Network
NASA Astrophysics Data System (ADS)
Pulsifer, P. L.; Yarmey, L.; Manley, W. F.; Gaylord, A. G.; Tweedie, C. E.
2013-12-01
In the spirit of the World Data Center system developed to manage data resulting from the International Geophysical Year of 1957-58, the International Polar Year 2007-2009 (IPY) resulted in significant progress towards establishing an international polar data management network. However, a sustained international network is still evolving. In this paper we argue that the fundamental building blocks for such a network exist and that the time is right to move forward. We focus on the Arctic component of such a network, with linkages to Antarctic network-building activities. A review of an important set of network building blocks is presented: i) the legacy of the IPY data and information service; ii) global data management services with a polar component (e.g. World Data System); iii) regional systems (e.g. Arctic Observing Viewer); iv) nationally focused programs (e.g. Arctic Observing Viewer, Advanced Cooperative Arctic Data and Information Service, Polar Data Catalogue, Inuit Knowledge Centre); v) programs focused on the local (e.g. Exchange for Local Observations and Knowledge of the Arctic, Geomatics and Cartographic Research Centre). We discuss current activities and results with respect to three priority areas needed to establish a strong and effective network. First, a summary of network-building activities reports on a series of productive meetings, including the Arctic Observing Summit and the Polar Data Forum, that have resulted in a core set of network nodes and participants and a refined vision for the network. Second, we recognize that interoperability for information sharing fundamentally relies on the creation and adoption of community-based data description standards and data delivery mechanisms. There is a broad range of interoperability frameworks and specifications available; however, these need to be adapted for polar community needs. Progress towards network interoperability is reviewed, and a prototype distributed data system is demonstrated. We discuss remaining challenges. Lastly, establishing a sustainable Arctic Data Coordination Network (ADCN) as part of a broader polar network will require adequate continued resources. We conclude by outlining proposed business models for the emerging Arctic Data Coordination Network and a broader polar network.
NEXUS - Resilient Intelligent Middleware
NASA Astrophysics Data System (ADS)
Kaveh, N.; Hercock, R. Ghanea
Service-oriented computing, a composition of distributed-object computing, component-based, and Web-based concepts, is becoming the widespread choice for developing dynamic heterogeneous software assets available as services across a network. One of the major strengths of service-oriented technologies is the high abstraction layer and large granularity level at which software assets are viewed compared to traditional object-oriented technologies. Collaboration through encapsulated and separately defined service interfaces creates a service-oriented environment, whereby multiple services can be linked together through their interfaces to compose a functional system. This approach enables better integration of legacy and non-legacy services, via wrapper interfaces, and allows for service composition at a more abstract level especially in cases such as vertical market stacks. The heterogeneous nature of service-oriented technologies and the granularity of their software components makes them a suitable computing model in the pervasive domain.
An architecture for integrating distributed and cooperating knowledge-based Air Force decision aids
NASA Technical Reports Server (NTRS)
Nugent, Richard O.; Tucker, Richard W.
1988-01-01
MITRE has been developing a Knowledge-Based Battle Management Testbed for evaluating the viability of integrating independently-developed knowledge-based decision aids in the Air Force tactical domain. The primary goal for the testbed architecture is to permit a new system to be added to a testbed with little change to the system's software. Each system that connects to the testbed network declares that it can provide a number of services to other systems. When a system wants to use another system's service, it does not address the server system by name, but instead transmits a request to the testbed network asking for a particular service to be performed. A key component of the testbed architecture is a common database which uses a relational database management system (RDBMS). The RDBMS provides a database update notification service to requesting systems. Normally, each system is expected to monitor data relations of interest to it. Alternatively, a system may broadcast an announcement message to inform other systems that an event of potential interest has occurred. Current research is aimed at dealing with issues resulting from integration efforts, such as dealing with potential mismatches of each system's assumptions about the common database, decentralizing network control, and coordinating multiple agents.
Incentive-Rewarding Mechanism for User-position Control in Mobile Services
NASA Astrophysics Data System (ADS)
Yoshino, Makoto; Sato, Kenichiro; Shinkuma, Ryoichi; Takahashi, Tatsuro
When the number of users in a service area increases in mobile multimedia services, no individual user can obtain satisfactory radio resources, such as bandwidth and signal power, because the resources are limited and shared. A solution to such a problem is user-position control. In user-position control, the operator informs users of better communication areas (or spots) and navigates them to these positions. However, because of the subjective costs incurred by users moving from their original position to a new one, they do not always attempt to move. To motivate users to contribute their resources in network services that require resource contributions from users, incentive-rewarding mechanisms have been proposed. However, there are no mechanisms that distribute rewards appropriately according to the various subjective factors involving users. Furthermore, since the conventional mechanisms limit how rewards are paid, they are applicable only to the network service they targeted. In this paper, we propose a novel incentive-rewarding mechanism to solve these problems, using an external evaluator and interactive learning agents. We also investigated ways of appropriately controlling rewards based on user contributions and system service quality. We applied the proposed mechanism and reward control to user-position control, and demonstrated its validity.
Workload Characterization and Performance Implications of Large-Scale Blog Servers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeon, Myeongjae; Kim, Youngjae; Hwang, Jeaho
With the ever-increasing popularity of social network services (SNSs), an understanding of the characteristics of these services and their effects on the behavior of their host servers is critical. However, there has been a lack of research on the workload characterization of servers running SNS applications such as blog services. To fill this void, we empirically characterized real-world web server logs collected from one of the largest South Korean blog hosting sites for 12 consecutive days. The logs consist of more than 96 million HTTP requests and 4.7 TB of network traffic. Our analysis reveals the following: (i) the transfer size of non-multimedia files and blog articles can be modeled using a truncated Pareto distribution and a log-normal distribution, respectively; (ii) user access for blog articles does not show temporal locality, but is strongly biased towards those posted with image or audio files. We additionally discuss the potential performance improvement through clustering of small files on a blog page into contiguous disk blocks, which benefits from the observed file access patterns. Trace-driven simulations show that, on average, the suggested approach achieves 60.6% better system throughput and reduces the processing time for file access by 30.8% compared to the best performance of the Ext4 file system.
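A hedged sketch of the two distribution models mentioned in finding (i): synthetic transfer sizes are drawn from a truncated Pareto via inverse-CDF sampling and from a log-normal, and a log-normal is fitted back with scipy. All parameter values are invented for illustration and are not the paper's estimates.

```python
# Sketch, not the paper's analysis: sample and fit the two size models named
# in the abstract. Parameters (alpha, bounds, mean, sigma) are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def truncated_pareto(alpha, lo, hi, size, rng):
    """Inverse-CDF sampling of a Pareto(alpha) truncated to [lo, hi]."""
    u = rng.uniform(size=size)
    c = 1.0 - (lo / hi) ** alpha                     # normalising constant
    return lo / (1.0 - u * c) ** (1.0 / alpha)

non_multimedia = truncated_pareto(alpha=1.2, lo=1e3, hi=1e7, size=10_000, rng=rng)
articles = rng.lognormal(mean=9.0, sigma=1.2, size=10_000)     # article sizes in bytes

shape, loc, scale = stats.lognorm.fit(articles, floc=0)
print("fitted log-normal sigma=%.2f, median=%.0f bytes" % (shape, scale))
print("truncated-Pareto sample mean=%.0f bytes" % non_multimedia.mean())
```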
Technology Developments Integrating a Space Network Communications Testbed
NASA Technical Reports Server (NTRS)
Kwong, Winston; Jennings, Esther; Clare, Loren; Leang, Dee
2006-01-01
As future manned and robotic space explorations missions involve more complex systems, it is essential to verify, validate, and optimize such systems through simulation and emulation in a low cost testbed environment. The goal of such a testbed is to perform detailed testing of advanced space and ground communications networks, technologies, and client applications that are essential for future space exploration missions. We describe the development of new technologies enhancing our Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE) that enable its integration in a distributed space communications testbed. MACHETE combines orbital modeling, link analysis, and protocol and service modeling to quantify system performance based on comprehensive considerations of different aspects of space missions. It can simulate entire networks and can interface with external (testbed) systems. The key technology developments enabling the integration of MACHETE into a distributed testbed are the Monitor and Control module and the QualNet IP Network Emulator module. Specifically, the Monitor and Control module establishes a standard interface mechanism to centralize the management of each testbed component. The QualNet IP Network Emulator module allows externally generated network traffic to be passed through MACHETE to experience simulated network behaviors such as propagation delay, data loss, orbital effects and other communications characteristics, including entire network behaviors. We report a successful integration of MACHETE with a space communication testbed modeling a lunar exploration scenario. This document is the viewgraph slides of the presentation.
ERIC Educational Resources Information Center
Ford, Tracy
2012-01-01
Many institutions implement a distributed antenna system (DAS) as part of a holistic approach to providing better wireless coverage and capacity on campus. A DAS provides wireless service within a particular area or structure via a network of separate antenna nodes that are connected to a common source through fiber or coaxial cable. Because DAS…
Drop-in Security for Distributed and Portable Computing Elements.
ERIC Educational Resources Information Center
Prevelakis, Vassilis; Keromytis, Angelos
2003-01-01
Proposes the use of a special purpose drop-in firewall/VPN gateway called Sieve, that can be inserted between the mobile workstation and the network to provide individualized security services for that particular station. Discusses features and advantages of the system and demonstrates how Sieve was used in various application areas such as at…
The Substance Abuse Counseling Workforce: Education, Preparation, and Certification
ERIC Educational Resources Information Center
Rieckmann, Traci; Farentinos, Christiane; Tillotson, Carrie J.; Kocarnik, Jonathan; McCarty, Dennis
2011-01-01
The National Drug Abuse Treatment Clinical Trials Network (CTN) is an alliance of drug abuse treatment programs and research centers testing new interventions and implementation factors for treating alcohol and drug use disorders. A workforce survey distributed to those providing direct services in 295 treatment units in the CTN obtained responses…
A study on some urban bus transport networks
NASA Astrophysics Data System (ADS)
Chen, Yong-Zhou; Li, Nan; He, Da-Ren
2007-03-01
In this paper, we present the results of an empirical investigation of the urban bus transport networks (BTNs) of four major cities in China. In a BTN, nodes are bus stops, and two nodes are connected by an edge when the stops are serviced by a common bus route. The empirical results show that the degree distributions of BTNs take exponential forms. Two other statistical properties of BTNs are also considered, namely the distributions of “the number of stops in a bus route” (represented by S) and “the number of bus routes a stop joins” (represented by R). The distributions of R also show exponential forms, while the distributions of S follow asymmetric, unimodal functions. To explain these empirical results and to simulate a possible evolution process of a BTN, we introduce a model whose analytic and numerical results agree well with the empirical facts. Finally, we also discuss some other possible evolution cases, where the degree distribution shows a power law or an interpolation between the power law and the exponential decay.
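A small sketch, with made-up routes, of the BTN construction described above (stops as nodes, an edge for every pair of stops sharing a route), together with the S and R statistics; it only illustrates the definitions, not the authors' evolution model.

```python
# Minimal illustration of the BTN definitions: the routes dictionary is
# hypothetical toy data, not data from the studied Chinese cities.
from collections import Counter
from itertools import combinations
import networkx as nx

routes = {                       # hypothetical bus routes: route -> stops served
    "R1": ["a", "b", "c", "d"],
    "R2": ["c", "d", "e"],
    "R3": ["a", "e", "f"],
}

G = nx.Graph()
for stops in routes.values():
    G.add_edges_from(combinations(stops, 2))   # connect all stop pairs on a common route

S = {r: len(stops) for r, stops in routes.items()}            # stops per route
R = Counter(s for stops in routes.values() for s in stops)    # routes a stop joins

print("node degrees:", sorted(dict(G.degree()).values()))
print("S (stops per route):", S)
print("R (routes a stop joins):", dict(R))
```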
Using the ACR/NEMA standard with TCP/IP and Ethernet
NASA Astrophysics Data System (ADS)
Chimiak, William J.; Williams, Rodney C.
1991-07-01
There is a need for a consolidated picture archival and communications system (PACS) in hospitals. At the Bowman Gray School of Medicine of Wake Forest University (BGSM), the authors are enhancing the ACR/NEMA Version 2 protocol using UNIX sockets and TCP/IP to greatly improve connectivity. Initially, nuclear medicine studies using gamma cameras are to be sent to the PACS. The ACR/NEMA Version 2 protocol provides the functionality of the upper three layers of the open system interconnection (OSI) model in this implementation. The images, imaging equipment information, and patient information are then sent in ACR/NEMA format to a software socket. From there, they are handed to the TCP/IP protocol, which provides the transport and network service. TCP/IP, in turn, uses the services of IEEE 802.3 (Ethernet) to complete the connectivity. The advantage of this implementation is threefold: (1) only one I/O port is consumed by numerous nuclear medicine cameras, instead of a physical port for each camera; (2) standard protocols are used, which maximizes interoperability with ACR/NEMA-compliant PACSs; (3) the use of sockets allows a migration path to the transport and networking services of OSI's TP4 and connectionless network service, as well as the high-performance protocol being considered by the American National Standards Institute (ANSI) and the International Standards Organization (ISO) -- the Xpress Transfer Protocol (XTP). The use of sockets also gives access to ANSI's Fiber Distributed Data Interface (FDDI) as well as other high-speed network standards.
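A hedged sketch of the transport step described above: a pre-formatted message is length-prefixed and handed to a TCP socket, which supplies the transport and network service over Ethernet. The payload bytes, host, port and framing are placeholders, not a real ACR/NEMA Version 2 encoding.

```python
# Illustrative only: hand an already-formatted message to a TCP socket.
# The length-prefix framing, host and port are assumptions for the sketch.
import socket
import struct

def send_study(host: str, port: int, message: bytes) -> None:
    """Send one length-prefixed message over a TCP connection."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(struct.pack("!I", len(message)))  # 4-byte length prefix
        sock.sendall(message)                          # pre-formatted payload

if __name__ == "__main__":
    payload = b"placeholder bytes standing in for an ACR/NEMA-formatted study"
    # send_study("pacs.example.org", 104, payload)  # commented out: no real endpoint here
```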
A Multi-Objective Partition Method for Marine Sensor Networks Based on Degree of Event Correlation.
Huang, Dongmei; Xu, Chenyixuan; Zhao, Danfeng; Song, Wei; He, Qi
2017-09-21
Existing marine sensor networks acquire data from sea areas that are geographically divided, and store the data independently in their affiliated sea area data centers. In the case of marine events across multiple sea areas, the current network structure needs to retrieve data from multiple data centers, and thus severely affects real-time decision making. In this study, in order to provide a fast data retrieval service for a marine sensor network, we use all the marine sensors as the vertices, establish the edge based on marine events, and abstract the marine sensor network as a graph. Then, we construct a multi-objective balanced partition method to partition the abstract graph into multiple regions and store them in the cloud computing platform. This method effectively increases the correlation of the sensors and decreases the retrieval cost. On this basis, an incremental optimization strategy is designed to dynamically optimize existing partitions when new sensors are added into the network. Experimental results show that the proposed method can achieve the optimal layout for distributed storage in the process of disaster data retrieval in the China Sea area, and effectively optimize the result of partitions when new buoys are deployed, which eventually will provide efficient data access service for marine events.
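A simplified, assumption-laden sketch of the general idea above: sensors as vertices, edge weights standing for the degree of event correlation, and a greedy size-balanced assignment of vertices to regions. This stands in for, and is much simpler than, the paper's multi-objective partition method.

```python
# Toy greedy partition: not the paper's algorithm, only an illustration of
# grouping strongly event-correlated sensors while keeping regions balanced.
import networkx as nx

def greedy_balanced_partition(G: nx.Graph, k: int):
    """Assign nodes to k regions, preferring the region holding the most
    correlated neighbours, subject to a simple size cap."""
    cap = -(-G.number_of_nodes() // k)          # ceil(n / k)
    regions = [set() for _ in range(k)]
    deg = G.degree(weight="weight")
    # visit heavier nodes first so strong correlations are grouped early
    for node in sorted(G, key=lambda n: deg[n], reverse=True):
        scores = []
        for i, region in enumerate(regions):
            if len(region) >= cap:
                continue
            gain = sum(G[node][nbr]["weight"] for nbr in G[node] if nbr in region)
            scores.append((gain, -len(region), i))
        _, _, best = max(scores)
        regions[best].add(node)
    return regions

G = nx.Graph()
G.add_weighted_edges_from([("s1", "s2", 0.9), ("s2", "s3", 0.8),
                           ("s3", "s4", 0.1), ("s4", "s5", 0.7)])
print(greedy_balanced_partition(G, k=2))
```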
Water level ingest, archive and processing system - an integral part of NOAA's tsunami database
NASA Astrophysics Data System (ADS)
McLean, S. J.; Mungov, G.; Dunbar, P. K.; Price, D. J.; Mccullough, H.
2013-12-01
The National Oceanic and Atmospheric Administration (NOAA), National Geophysical Data Center (NGDC) and the collocated World Data Service for Geophysics (WDS) provide long-term archive, data management, and access to national and global tsunami data. Archive responsibilities include the NOAA Global Historical Tsunami event and runup database, damage photos, as well as other related hazards data. Beginning in 2008, NGDC was given the responsibility of archiving, processing and distributing all tsunami and hazards-related water level data collected from NOAA observational networks in a coordinated and consistent manner. These data include the Deep-ocean Assessment and Reporting of Tsunami (DART) data provided by the National Data Buoy Center (NDBC), coastal tide-gauge data from the National Ocean Service (NOS) network, and tide-gauge data from the two regional networks of the National Weather Service (NWS) Tsunami Warning Centers (TWCs). Taken together, this integrated archive supports the tsunami forecast, warning, research, mitigation and education efforts of NOAA and the Nation. Due to the variety of the water level data, the automatic ingest system was redesigned, along with upgrades to the inventory, archive and delivery capabilities based on modern digital data archiving practices. The data processing system was also upgraded and redesigned, focusing on data quality assessment in an operational manner. This poster focuses on data availability, highlighting the automation of all steps of data ingest, archive, processing and distribution. Examples are given from recent events such as Hurricane Sandy in October 2012, the February 6, 2013 Solomon Islands tsunami, and the June 13, 2013 meteotsunami along the U.S. East Coast.
NASA Astrophysics Data System (ADS)
Abidi, Dhafer
TTEthernet is a deterministic network technology that makes enhancements to Layer 2 Quality of Service (QoS) for Ethernet. The components that implement its services enrich the Ethernet functionality with distributed fault-tolerant synchronization, robust temporal partitioning of bandwidth, and synchronous communication with fixed latency and low jitter. TTEthernet services can facilitate the design of scalable, robust, less complex distributed systems and architectures that are tolerant to faults. Simulation is nowadays an essential step in the design process of critical systems and represents a valuable support for validation and performance evaluation. CoRE4INET is a project bringing together all TTEthernet simulation models currently available; it is based on extending the models of the OMNeT++ INET framework. Our objective is to study and simulate the TTEthernet protocol on a flight management subsystem (FMS). The idea is to use CoRE4INET to design the simulation model of the target system. The problem is that CoRE4INET does not offer a task scheduling tool for a TTEthernet network. To overcome this problem, we propose an adaptation, for simulation purposes, of a task scheduling approach based on a formal specification of network constraints. The use of the Yices solver allowed the translation of the formal specification into an executable program to generate the desired transmission plan. A case study finally allowed us to assess the impact of the arrangement of Time-Triggered frame offsets on the performance of each type of traffic in the system.
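A brute-force illustration of the kind of constraint such a transmission plan must satisfy: candidate offsets for time-triggered frames on one link must never let two transmission windows overlap within the hyperperiod. The frame periods, lengths and offsets below are invented, and the exhaustive check merely stands in for the formal specification handed to the Yices solver.

```python
# Simplified stand-in, not the thesis' formal model: enumerate every
# transmission window in one hyperperiod and verify they never overlap.
from math import gcd
from functools import reduce

frames = [  # (period_us, transmission_time_us, candidate_offset_us) -- made-up values
    (1000, 100, 0),
    (2000, 150, 200),
    (4000, 100, 400),
]

def lcm(a, b):
    return a * b // gcd(a, b)

hyperperiod = reduce(lcm, (p for p, _, _ in frames))

def windows(period, length, offset):
    """All (start, end) transmission windows of a frame in one hyperperiod."""
    return [(offset + k * period, offset + k * period + length)
            for k in range(hyperperiod // period)]

slots = sorted(w for f in frames for w in windows(*f))
ok = all(prev_end <= start for (_, prev_end), (start, _) in zip(slots, slots[1:]))
print("offsets are conflict-free:", ok)
```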
SeaDataNet network services monitoring: Definition and Implementation of Service availability index
NASA Astrophysics Data System (ADS)
Lykiardopoulos, Angelos; Mpalopoulou, Stavroula; Vavilis, Panagiotis; Pantazi, Maria; Iona, Sissy
2014-05-01
SeaDataNet (SDN) is a standardized system for managing large and diverse data sets collected by the oceanographic fleets and the automatic observation systems. The SeaDataNet network is constituted of the national oceanographic data centres of 35 countries active in data collection. The SeaDataNetII project's objective is to upgrade the present SeaDataNet infrastructure into an operationally robust and state-of-the-art infrastructure; network monitoring is a step in this direction. The term network monitoring describes the use of a system that constantly monitors a computer network for slow or failing components and notifies the network administrator in case of outages. Network monitoring is crucial when implementing widely distributed systems over the Internet and in real-time systems, as it detects malfunctions that may occur and notifies the system administrator, who can immediately respond and correct the problem. In the framework of the SeaDataNet II project, a monitoring system was developed in order to monitor the SeaDataNet components. The core system is based on the Nagios software, and plug-ins were developed to support SeaDataNet modules. On top of the Nagios engine, a web portal was developed in order to give local administrators of SeaDataNet components access to detailed logs of their own service(s). Currently the system monitors 35 SeaDataNet Download Managers, 9 SeaDataNet Services, 25 GeoSeas Download Managers and 23 UBSS Download Managers. Taking advantage of the continuous monitoring of SeaDataNet system components, a total availability index will be implemented. The term availability can be defined as the ability of a functional unit to be in a state to perform a required function under given conditions at a given instant of time or over a given time interval, assuming that the required external resources are provided. Availability measures are very important because the availability trends that can be extracted from the stored measurements give an indication of the condition of the service modules, they help in planning upgrades and the maintenance of the network service, and they are a prerequisite in case of signing a Service Level Agreement. To construct the service availability index, a method for measuring the availability of the SeaDataNet network was developed and a database was implemented to store the measured values. Although measuring the availability of a single component in a network service is simple (it is the percentage of time in a year that the service is available to the users), implementing a method to measure the total availability of a composite system can be complicated, and there is no standardized method to deal with it. The method followed to calculate the total availability index for SeaDataNet can be described as follows: the whole system was divided into operational modules providing a single service, for which availability can be measured by the monitoring portal. Next, the dependencies between these modules were defined in order to formulate the influence of the availability of each module on the whole system. For each module, a weight coefficient depending on the module's involvement in total system productivity was defined. A mathematical formula was then developed to compute the index.
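A minimal sketch of what such a weighted index could look like: each monitored module contributes its measured availability weighted by a coefficient reflecting its involvement in total system productivity, and the index is the weighted mean. The module names, weights and aggregation rule are assumptions, not the SeaDataNet formula.

```python
# Hypothetical weighted availability index; values and weights are invented.
modules = {
    # name: (availability over the measurement window, weight coefficient)
    "download_manager_A": (0.995, 0.4),
    "download_manager_B": (0.970, 0.4),
    "catalogue_service":  (0.999, 0.2),
}

def total_availability(modules):
    """Weighted mean of module availabilities."""
    weighted = sum(a * w for a, w in modules.values())
    total_w = sum(w for _, w in modules.values())
    return weighted / total_w

print("service availability index: %.4f" % total_availability(modules))
```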
The Global Space Geodesy Network and the Essential Role of Latin America Sites
NASA Astrophysics Data System (ADS)
Pearlman, M. R.; Ma, C.; Neilan, R.; Noll, C. E.; Pavlis, E. C.; Wetzel, S.
2013-05-01
The improvements in the reference frame and other space geodesy data products spelled out in the GGOS 2020 plan will evolve over time as new space geodesy sites enhance the global distribution of the network, and new technologies are implemented at current and new sites, thus enabling improved data processing and analysis. The goal of 30 globally distributed core sites with VLBI, SLR, GNSS and DORIS (where available) will take time to materialize. Co-location sites with less than the full core complement will continue to play a very important role in filling out the network while it is evolving and even after full implementation. GGOS, through its Call for Participation, bi-lateral and multi-lateral discussions, and work through the scientific Services, has been encouraging current groups to upgrade and new groups to join the activity. This talk will give an update on the current expansion of the global network and the projection for the network configuration that we forecast over the next 10 years, based on discussions and planning that have already occurred. We will also discuss some of the historical contributions to the reference frame from sites in Latin America and the need for new sites in the future.
NASA Astrophysics Data System (ADS)
Jadhao, Ramrao D.; Gupta, Rajesh
2018-03-01
Calibration of the hydraulic model of a water distribution network is required to match the model results for flows and pressures with those obtained in the field. This is a challenging task considering the large number of parameters involved. Having more precise data helps in reducing time and results in better calibration, as shown herein with a case study of one hydraulic zone served from the Ramnagar Ground Service Reservoir in Nagpur City. Flow and pressure values for the entire day were obtained through data loggers. Network details regarding pipe lengths, diameters, installation year and material were obtained with the greatest possible accuracy. Locations of consumers on the network were noted and average nodal consumptions were obtained from the billing records. The non-revenue water losses were uniformly allocated to all junctions. Valve positions and their operating status were noted in the field and used. The pipe roughness coefficients were adjusted to match the model values with the field values of pressures at observation nodes by minimizing the sum of the squares of the differences between them. This paper describes the entire process, from collection of the required data to the calibration of the network.
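A hedged sketch of the adjustment step described above: roughness coefficients are tuned so that modelled pressures match the observed ones by minimizing the sum of squared differences. The hydraulic solver is replaced by a hypothetical simulate_pressures() stub, and the observed values and bounds are invented.

```python
# Illustrative least-squares calibration; simulate_pressures() is a made-up
# stand-in for a real hydraulic model run (e.g. an EPANET simulation).
import numpy as np
from scipy.optimize import least_squares

observed = np.array([20.6, 18.5, 19.4])        # field pressures at observation nodes (m)

def simulate_pressures(roughness):
    """Placeholder for a hydraulic model returning nodal pressures."""
    # a fake smooth dependence on roughness, purely for demonstration
    return np.array([20.0, 18.0, 19.0]) + 0.02 * (roughness - 100.0)

def residuals(roughness):
    return simulate_pressures(roughness) - observed

result = least_squares(residuals, x0=np.full(3, 110.0), bounds=(60.0, 150.0))
print("calibrated roughness coefficients:", result.x.round(1))
```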
SCM: A method to improve network service layout efficiency with network evolution.
Zhao, Qi; Zhang, Chuanhao; Zhao, Zheng
2017-01-01
Network services are an important component of the Internet, used to expand network functions for third-party developers. Network function virtualization (NFV) can improve the speed and flexibility of network service deployment. However, with the evolution of the network, the network service layout may become inefficient. To address this problem, this paper proposes a service chain migration (SCM) method within the "software defined network + network function virtualization" (SDN+NFV) framework, which migrates service chains to adapt to network evolution and improves the efficiency of the network service layout. SCM is modeled as an integer linear programming problem and resolved via particle swarm optimization. An SCM prototype system is designed based on an SDN controller. Experiments demonstrate that SCM can reduce network traffic cost and energy consumption efficiently.
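A toy, assumption-heavy sketch of the particle swarm mechanics mentioned above, minimizing a made-up placement cost over continuous variables; the actual SCM formulation is an integer linear program, so this only illustrates how PSO explores candidate layouts.

```python
# Minimal PSO loop over a hypothetical placement cost; coefficients,
# dimensions and the cost function itself are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

def placement_cost(x):
    """Hypothetical cost: traffic cost grows with spread, energy with load."""
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sum(np.abs(x))

n_particles, dim, iters = 20, 5, 100
pos = rng.uniform(0, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.apply_along_axis(placement_cost, 1, pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    vals = np.apply_along_axis(placement_cost, 1, pos)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best placement vector:", gbest.round(2), "cost:", placement_cost(gbest))
```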
Addressing tomorrow's DMO technical challenges today
NASA Astrophysics Data System (ADS)
Milligan, James R.
2009-05-01
Distributed Mission Operations (DMO) is essentially a type of networked training that pulls in participants from all the armed services and, increasingly, allies to permit them to "game" and rehearse highly complex campaigns, using a mix of local, distant, and virtual players. The United States Air Force Research Laboratory (AFRL) is pursuing Science and Technology (S&T) solutions to address technical challenges associated with distributed communications and information management as DMO continues to progressively scale up the number, diversity, and geographic dispersal of participants in training and rehearsal exercises.
NASA Astrophysics Data System (ADS)
Volcke, P.; Pequegnat, C.; Grunberg, M.; Lecointre, A.; Bzeznik, B.; Wolyniec, D.; Engels, F.; Maron, C.; Cheze, J.; Pardo, C.; Saurel, J. M.; André, F.
2015-12-01
RESIF is a nationwide French project aimed at building a high-quality observation system to observe and understand the Earth's interior. RESIF deals with permanent seismic network data as well as mobile network data, including dense/semi-dense arrays. The RESIF project is distributed among different nodes providing qualified data to the main data centre at Université Grenoble Alpes, France. Data control and qualification is performed by each individual node: the poster will provide some insights into quality control of the RESIF broadband seismic component data. We will then present data that has recently been made publicly available. Data is distributed through the worldwide FDSN and European EIDA standard protocols. A new web portal is now open to explore and download seismic data and metadata. The RESIF data centre is also now connected to the Grenoble University High Performance Computing (HPC) facility: a typical use case will be presented using iRODS technologies. The use of dense observation networks is increasing, bringing challenges in data growth and handling: we will present an example where the HDF5 data format was used as an alternative to the usual seismology data formats.
Multicast for savings in cache-based video distribution
NASA Astrophysics Data System (ADS)
Griwodz, Carsten; Zink, Michael; Liepert, Michael; On, Giwon; Steinmetz, Ralf
1999-12-01
Internet video-on-demand (VoD) today streams videos directly from server to clients, because re-distribution is not yet established. Intranet solutions exist but are typically managed centrally. Caching may remove these management needs; however, existing web caching strategies are not applicable because they were designed for different conditions. We propose movie distribution by means of caching and study its feasibility from the service provider's point of view. We introduce the combination of our reliable multicast protocol LCRTP for caching hierarchies with our enhancement to the patching technique for bandwidth-friendly true VoD that does not depend on network resource guarantees.
Tao, Yuan; Liu, Juan
2005-01-01
The Internet has shrunk our world of working and living into a very small scope, giving rise to the concept of the Earth Village, in which people can communicate and co-work even though they are thousands of miles apart. This paper describes a prototype, an Earth Lab for bioinformatics, based on a Web services framework. It builds a network architecture for bioinformatics research that lets biologists worldwide implement large, complex processes and effectively share and access computing resources and data, regardless of how heterogeneous the data formats are and how decentralized and distributed these resources are around the world. A small, simplified example scenario is then given to illustrate the prototype.
Mandellos, George J; Koutelakis, George V; Panagiotakopoulos, Theodor C; Koukias, Andreas M; Koukias, Mixalis N; Lymberopoulos, Dimitrios K
2008-01-01
Early and specialized pre-hospital patient treatment improves outcome in terms of mortality and morbidity in emergency cases. This paper focuses on the design and implementation of a telemedicine system that supports diverse types of endpoints, including moving transports (MT) (ambulances, ships, planes, etc.), handheld devices and fixed units, using diverse communication networks. The target of this telemedicine system is pre-hospital patient treatment. While vital-sign transmission takes priority over the other services provided by the telemedicine system (videoconference, remote management, voice calls, etc.), a predefined algorithm controls the provision and quality of those other services. A distributed database system controlled by a central server manages patient attributes, exams and incidents handled by different Telemedicine Coordination Centers (TCC).
Accelerator infrastructure in Europe: EuCARD 2011
NASA Astrophysics Data System (ADS)
Romaniuk, Ryszard S.
2011-10-01
The paper presents a digest of the research results in the domain of accelerator science and technology in Europe, shown during the annual meeting of EuCARD - European Coordination of Accelerator Research and Development. The conference concerns the building of research infrastructure, including advanced photonic and electronic systems for servicing large high-energy physics experiments. A few basic groups of such systems are debated, including: measurement-control networks of large geometrical extent, multichannel systems for acquisition of large amounts of metrological data, and precision photonic networks for distribution of reference time, frequency and phase.
Resource Sharing via Planned Relay for [InlineEquation not available: see fulltext.]
NASA Astrophysics Data System (ADS)
Shen, Chong; Rea, Susan; Pesch, Dirk
2008-12-01
We present an improved version of the adaptive distributed cross-layer routing algorithm (ADCR) for hybrid wireless networks with dedicated relay stations ([InlineEquation not available: see fulltext.]). A mobile terminal (MT) may borrow radio resources that are available thousands of miles away via secure multihop RNs, where RNs are placed at pre-engineered locations in the network. In rural places such as mountain areas, an MT may also communicate with the core network when intermediate MTs act as mobile relay nodes. To address cross-layer routing issues, the cascaded ADCR establishes routing paths across MTs, RNs, and cellular base stations (BSs) and provides appropriate quality of service (QoS). We verify the routing performance benefits of [InlineEquation not available: see fulltext.] over other networks by extensive simulation.
Bringing simulation to engineers in the field: a Web 2.0 approach.
Haines, Robert; Khan, Kashif; Brooke, John
2009-07-13
Field engineers working on water distribution systems have to implement day-to-day operational decisions. Since pipe networks are highly interconnected, the effects of such decisions are correlated with hydraulic and water quality conditions elsewhere in the network. This makes the provision of predictive decision support tools (DSTs) for field engineers critical to optimizing the engineering work on the network. We describe how we created DSTs to run on lightweight mobile devices by using the Web 2.0 technique known as Software as a Service. We designed our system following the architectural style of representational state transfer. The system not only displays static geographical information system data for pipe networks, but also dynamic information and prediction of network state, by invoking and displaying the results of simulations running on more powerful remote resources.
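A minimal sketch of the Software-as-a-Service pattern described here, using Flask for illustration; the endpoint, payload fields and simulation stub are hypothetical rather than the authors' interface:

```python
# Hedged sketch of a RESTful decision-support service: the mobile client
# POSTs a proposed operation and gets back the predicted network state
# computed on a remote resource. Names and payloads are invented.
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_remote_simulation(scenario):
    """Placeholder for dispatching a hydraulic/water-quality simulation
    to a more powerful back-end resource."""
    return {"min_pressure_m": 18.4, "affected_customers": 120, "scenario": scenario}

@app.route("/network/<zone_id>/prediction", methods=["POST"])
def predict(zone_id):
    scenario = request.get_json()                 # e.g. {"close_valve": "V-17"}
    result = run_remote_simulation({"zone": zone_id, **scenario})
    return jsonify(result)

if __name__ == "__main__":
    app.run(port=8080)
```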
A Bloom Filter-Powered Technique Supporting Scalable Semantic Discovery in Data Service Networks
NASA Astrophysics Data System (ADS)
Zhang, J.; Shi, R.; Bao, Q.; Lee, T. J.; Ramachandran, R.
2016-12-01
More and more Earth data analytics software products are published onto the Internet as a service, in the format of either a heavyweight WSDL service or a lightweight RESTful API. Such reusable data analytics services form a data service network, which allows Earth scientists to compose (mash up) services into value-added ones. Therefore, it is important to have a technique capable of helping Earth scientists quickly identify appropriate candidate datasets and services in the global data service network. Most existing service discovery techniques, however, mainly rely on syntax- or semantics-based matchmaking between service requests and available services. Since the scale of the data service network is increasing rapidly, the run-time computational cost will soon become a bottleneck. To address this issue, this project presents a way of applying a network routing mechanism to facilitate data service discovery in a service network, featuring scalability and performance. Earth data services are automatically annotated in Web Ontology Language for Services (OWL-S) based on their metadata, semantic information, and usage history. A Deterministic Annealing (DA) technique is applied to dynamically organize annotated data services into a hierarchical network, where virtual routers are created to represent semantic local networks featuring leading terms. Afterwards, Bloom filters are generated over the virtual routers. A data service search request is transformed into a network routing problem in order to quickly locate candidate services through the network hierarchy. A neural network-powered technique is applied to assure network address encoding and routing performance. A series of empirical studies has been conducted to evaluate the applicability and effectiveness of the proposed approach.
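For readers unfamiliar with the data structure, a self-contained Bloom filter sketch (illustrative only; the project's filter sizes, hash choices and term vocabulary are not specified here):

```python
# Minimal Bloom filter: each virtual router could hold such a filter over
# the leading terms of the services it aggregates, so a query term can be
# tested against a router without visiting its members.
import hashlib

class BloomFilter:
    def __init__(self, size_bits=1024, n_hashes=3):
        self.size = size_bits
        self.n_hashes = n_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        for i in range(self.n_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        # False means definitely absent; True means possibly present.
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

router_filter = BloomFilter()
for term in ["precipitation", "aerosol-optical-depth", "sea-surface-temperature"]:
    router_filter.add(term)

print(router_filter.might_contain("precipitation"))   # True
print(router_filter.might_contain("soil-moisture"))   # almost certainly False
```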
Requirements for a network storage service
NASA Technical Reports Server (NTRS)
Kelly, Suzanne M.; Haynes, Rena A.
1992-01-01
Sandia National Laboratories provides a high-performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), was designed in 1989 and comprises multiple distributed local area networks (LANs) residing in Albuquerque, New Mexico and Livermore, California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, compose most LANs. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File System (CFS) developed by Los Alamos National Laboratory. Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Services (NSS), and its requirements are described in this paper. The next section gives an application or functional description of the NSS. The final section adds performance, capacity, and access constraints to the requirements.
NASA Astrophysics Data System (ADS)
Yu, Bailang; Wu, Jianping
2006-10-01
Spatial Information Grid (SIG) is an infrastructure that provides services for spatial information according to users' needs by collecting, sharing, organizing and processing massive distributed spatial information resources. This paper presents the architecture, technologies and implementation of the Shanghai City Spatial Information Application and Service System, a SIG-based platform that serves the administration, planning, construction and development of the city. In the System, there are ten categories of spatial information resources, including city planning, land use, real estate, river system, transportation, municipal facility construction, environment protection, sanitation, urban afforestation and basic geographic information data. In addition, spatial information processing services are offered as GIS Web Services. The resources and services are all distributed across different web-based nodes. A single database is created to store the metadata of all the spatial information. A portal site is published as the main user interface of the System. There are three main functions in the portal site. First, users can search the metadata and consequently acquire the distributed data by using the search results. Second, some spatial processing web applications developed with GIS Web Services, such as file format conversion, spatial coordinate transformation, cartographic generalization and spatial analysis, are offered for use. Third, GIS Web Services currently available in the System can be searched and new ones can be registered. The System has been working efficiently in the Shanghai Government Network since 2005.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-05
... Business Networks Services, Inc., Specialist-Tech Customer Service, Philadelphia, PA; Verizon Business Networks Services, Inc., Specialist-Tech Customer Service, Tampa, Florida; Amended Certification Regarding... Business Networks Services, Inc., Order Management Division, Philadelphia, Pennsylvania and Verizon...
Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju
2014-03-21
A tracking service like asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is accomplished based on the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure requiring support for high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes a real-time architecture for multiple trackees (such as mobile assets) and trackers based on the proposed distributed platform. In order to verify the suggested platform, scalability performance as the number of concurrent lookups increases was evaluated in a real test bed. Tracking latency and traffic load ratio in the proposed tracking architecture were also evaluated.
NASA Astrophysics Data System (ADS)
Friedman, Roy; Kermarrec, Anne-Marie; Miranda, Hugo; Rodrigues, Luís
Gossip-based networking has emerged as a viable approach to disseminate information reliably and efficiently in large-scale systems. Initially introduced for database replication [222], the applicability of the approach extends much further now. For example, it has been applied for data aggregation [415], peer sampling [416] and publish/subscribe systems [845]. Gossip-based protocols rely on a periodic peer-wise exchange of information in wired systems. By changing the way each peer is selected for the gossip communication, and which data are exchanged and processed [451], gossip systems can be used to perform different distributed tasks, such as, among others: overlay maintenance, distributed computation, and information dissemination (a collection of papers on gossip can be found in [451]). In a wired setting, the peer sampling service, allowing for a random or specific peer selection, is often provided as an independent service, able to operate independently from other gossip-based services [416].
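A minimal sketch of the periodic peer-wise exchange described above, applied to data aggregation (averaging); peer selection here is uniform random, standing in for a peer sampling service:

```python
# Hedged sketch of the basic gossip pattern: each node periodically picks
# a random peer and both adopt the average of their values, so all nodes
# converge toward the network-wide mean (distributed aggregation).
import random

random.seed(1)
values = [random.uniform(0.0, 100.0) for _ in range(50)]   # one value per node

def gossip_round(values):
    for node in range(len(values)):
        peer = random.randrange(len(values))                # random peer selection
        if peer != node:
            avg = (values[node] + values[peer]) / 2.0       # push-pull exchange
            values[node] = values[peer] = avg

true_mean = sum(values) / len(values)
for _ in range(20):                                         # a few periodic rounds
    gossip_round(values)
print("true mean:", round(true_mean, 3), "node 0 estimate:", round(values[0], 3))
```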
NASA Astrophysics Data System (ADS)
Habib, Akka; Abdelhamid, Bouzidi; Said, Housni
2018-05-01
Because of the absence of regulations and specific national norms, the unilaterally applied indicators for performance evaluation of water distribution management services are insufficient. This does not pave the way for clear visibility of water resources. The indicators are also so heterogeneous that they are not in equilibrium with the applied management patterns. In fact: 1- The performance indicators (yield and linear loss index) of drinking water networks show discrepancies between operators and a lack of homogeneity in the parameters put into their equations; hence, these indicators lose efficiency and reliability. 2- The liquid sanitation service has to go beyond the quantitative evaluation target in order to consider the qualitative aspects of water. To reach this aim, a reasonable enlargement of the set of performance indicators is of paramount importance in order to better manage a water resource that is becoming scarce and insufficient.
Insight to the express transport network
NASA Astrophysics Data System (ADS)
Yang, Hua; Nie, Yuchao; Zhang, Hongbin; Di, Zengru; Fan, Ying
2009-09-01
The express delivery industry has been developing rapidly in recent years and has attracted attention in many fields. Express shipment service requires that parcels be delivered in a limited time at a low operating cost, which calls for a high-level and efficient express transport network (ETN). The ETN is constructed on top of the public transport networks, especially the airline network. It is similar to the airline network in some aspects, while having its own features. Using complex network theory, the topological properties of the ETN are analyzed in depth. We find that the ETN has the small-world property, with disassortative mixing behavior and a rich-club phenomenon. It also differs from the airline network in some features, such as edge density and average shortest path. Analysis of the corresponding distance-weighted network shows that the distance distribution displays a truncated power-law behavior. Finally, an evolving model that takes both geographical constraints and preferential attachment into account is proposed. The model shows properties similar to the empirical results.
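An assumed, toy form of such an evolving model, combining preferential attachment with a distance penalty; the authors' exact growth rules are not reproduced here:

```python
# Toy growth model: new nodes attach to existing nodes with probability
# proportional to degree, discounted by geographical distance.
import math
import random

random.seed(2)
coords = [(random.random(), random.random()) for _ in range(5)]
edges = [(i, j) for i in range(5) for j in range(i + 1, 5)]      # small seed clique
degree = [4] * 5

for _ in range(95):                                              # grow to 100 nodes
    new = (random.random(), random.random())
    weights = []
    for i, (x, y) in enumerate(coords):
        dist = math.hypot(new[0] - x, new[1] - y)
        weights.append(degree[i] / (1.0 + dist))                 # preference / distance penalty
    targets = random.choices(range(len(coords)), weights=weights, k=2)
    coords.append(new)
    degree.append(0)
    for t in set(targets):
        edges.append((t, len(coords) - 1))
        degree[t] += 1
        degree[-1] += 1

print("nodes:", len(coords), "edges:", len(edges), "max degree:", max(degree))
```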
NASA Astrophysics Data System (ADS)
Agustin, I. W.; Sumantri, Y.
2017-03-01
Malang's designation as a National Activity Centre (PKN) has led to increased economic growth and increased demand for goods, both primary and tertiary. Demand for goods that is increasing and diversifying will certainly have an impact on the process of goods transportation involving freight forwarders. Shipping of goods is part of the supply chain, which handles the flow of goods, distribution and delivery service, commonly called the courier. Fulfilling the demand for goods requires a Logistics Service Provider (LSP) to distribute goods from point of origin to destination. Delays in the distribution of goods will slow down economic growth in Malang; therefore, a focused study of the movement of goods, including the selection of delivery routes, is needed. The purpose of this study is to obtain delivery routes for the LSP by identifying its patterns of freight transport movement and to analyze the performance of the road network used by freight transportation. Data collection techniques in this research are interviews, questionnaires, moving-car observations and traffic counting to obtain traffic volumes. The study used road performance analysis to obtain the level of service (LOS) of the roads used by the LSP's freight transportation, and Dijkstra's algorithm to determine the delivery routes. The results showed that the level of service (LOS) of the roads is at level D to F, which indicates that the chosen roads experience unstable traffic flow and even reach a critical condition. Therefore, considering both the existing delivery routes and the analysis results, as well as the condition of the road network in Malang, the recommended alternative is to deliver goods on the chosen routes but not at peak hours.
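A compact sketch of Dijkstra's algorithm on a hypothetical road graph; in the study the edge weights would come from travel times on links graded by their level of service:

```python
# Minimal Dijkstra sketch: shortest travel times from a depot over a toy
# weighted road graph (weights in minutes, invented for illustration).
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbour, weight), ...]} -> shortest distances."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

roads = {                                   # hypothetical weighted road segments
    "depot": [("A", 7), ("B", 12)],
    "A": [("B", 3), ("customer", 15)],
    "B": [("customer", 6)],
}
print(dijkstra(roads, "depot"))             # shortest times from the depot to every node
```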
Production, Service and Trade Enterprise EKOREX Co. Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wlodkowski, A.
1995-12-31
In the first period of its activity the business employed skilled and experienced specialists from the ex-Military College for Army Chemical Engineers in Cracow; therefore, the enterprise dealt chiefly with the elimination of environmental contamination. Nowadays, the enterprise's operational range comprises: consulting and training services related to ecology; studies on environmental contamination; participation in the US program of low-emission elimination in Cracow; designing and consulting in the realization of "GEF" (Global Environmental Facility) projects; designing, construction, servicing and operation of sewerage and water treatment plants, boiler houses, incinerators, etc.; and designing of heat networks, exchanger junctions, central heating and household hot water installations. Since 1991 employees have individually participated in making the program and in testing boilers and fuels verified in the boiler houses covered by the Polish-US program for reduction of low-emission sources in Cracow. We have actively joined the program of elimination of heating-network boiler houses (industrial and local) by designing (for the Cracow cogeneration plant and MPEC) new connections between some structures and the municipal thermal distribution network and exchanger stations. In 1994, 47 such designs were made, and work continues on successive projects to be carried out in Cracow.
Water leakage management by district metered areas at water distribution networks.
Özdemir, Özgür
2018-03-01
The aim of this study is to design a district metered area (DMA) in a water distribution network (WDN) for the determination and reduction of water losses in the city of Malatya, Turkey. In the application area, a pilot DMA zone was built by analyzing the existing WDN, topographic map, pipe lengths, number of customers, service connections, and valves. In the DMA, the International Water Association standard water balance was calculated considering inflow rates and billing records. The ratio of water losses in the DMA was determined as 82%. Moreover, 3124 water meters of 2805 customers were examined, and 50% of the water meters were found to be faulty. This study revealed that DMA application is useful for determining the water loss rate in WDNs and for identifying a cost-effective leakage reduction program.
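The water-balance arithmetic behind the reported 82% loss ratio can be sketched as follows; the inflow and billing figures are invented for illustration, chosen so the ratio matches:

```python
# Back-of-the-envelope sketch of the DMA water balance: losses are the
# difference between metered inflow and billed consumption (toy numbers).
system_input = 1200.0          # m3/day entering the DMA (hypothetical)
billed_consumption = 216.0     # m3/day from billing records (hypothetical)

water_losses = system_input - billed_consumption
loss_ratio = water_losses / system_input
print(f"water losses: {water_losses:.0f} m3/day ({loss_ratio:.0%} of inflow)")
# With a loss ratio of 82%, as reported for the pilot DMA, a leakage
# reduction program has an obvious payback.
```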
Thurston, Sarah; Chakraborty, Nirali M; Hayes, Brendan; Mackay, Anna; Moon, Pierre
2015-06-17
In many low- and middle-income countries, a majority of people seek health care from the private sector. However, fragmentation, poor economies of scale, inadequate financing, political opposition, a bias toward curative services, and weak regulatory and quality control systems pose serious challenges for the private sector. Social franchising addresses a number of these challenges by organizing small, independent health care businesses into quality-assured networks. Global franchisors Marie Stopes International (MSI) and Population Services International (PSI) have rapidly scaled their family planning social franchising programs in recent years, jointly delivering over 10.8 million couple-years of protection (CYPs) in 2014-up 26% from 8.6 million CYPs just 1 year prior. Drawing on experience across MSI's 17 and PSI's 25 social franchise networks across Africa, Asia, and Latin America and the Caribbean, this article documents the organizations' operational approaches, challenges faced, and solutions implemented. The organizations provide intensive capacity building and support for private-sector providers, including clinical training, branding, monitoring quality of franchised services, and commodity support. In addition, franchising programs engage providers and clients through behavior change communication (BCC) and demand generation activities to raise awareness and to attract clients, and they implement initiatives to ensure services are affordable for the lowest-income clients. Social franchise programs offer the private sector a collective platform to better engage government in health policy advocacy and for integrating into new public health care financing and procurement mechanisms. The future of social franchising will require developing approaches to scale-up and sustain the model cost-effectively, selectively integrating other health services into the franchise package, and being responsive to evolving health care financing approaches with the potential to contribute to universal health coverage.
The GÉANT network: addressing current and future needs of the HEP community
NASA Astrophysics Data System (ADS)
Capone, Vincenzo; Usman, Mian
2015-12-01
The GÉANT infrastructure is the backbone that serves the scientific communities in Europe for their data movement needs and their access to international research and education networks. Using its extensive fibre footprint and infrastructure in Europe, the GÉANT network delivers a portfolio of services aimed to best fit the specific needs of users, including an Authentication and Authorization Infrastructure, end-to-end performance monitoring, and advanced network services (dynamic circuits, L2-L3VPN, MD-VPN). This talk will outline the factors that help the GÉANT network respond to the needs of the High Energy Physics community, both in Europe and worldwide. The pan-European network provides connectivity between 40 European national research and education networks. In addition, GÉANT also connects the European NRENs to the R&E networks in other world regions and reaches over 110 NRENs worldwide, making GÉANT the best connected research and education network, with multiple intercontinental links to different continents, e.g. North and South America, Africa and Asia-Pacific. The High Energy Physics computational needs have always had (and will keep having) a leading role among the scientific user groups of the GÉANT network: the LHCONE overlay network has been built, in collaboration with the other major world RENs, specifically to address the particular needs of LHC data movement. Recently, as a result of a series of coordinated efforts, the LHCONE network has been expanded to the Asia-Pacific area and is going to include some of the main regional R&E networks there. The LHC community is not the only one actively using a distributed computing model (hence the need for a high-performance network); new communities are arising, such as Belle II. GÉANT is deeply involved with the Belle II experiment, providing full support for its distributed computing model along with a perfSONAR-based network monitoring system. GÉANT has also coordinated the setup of the network infrastructure for the Belle II Trans-Atlantic Data Challenge, and has been active in helping the Belle II community sort out its end-to-end performance issues. In this talk we will provide information about the current GÉANT network architecture and international connectivity, along with the upcoming upgrades and the planned and foreseeable improvements. We will also describe the implementation of the solutions provided to support the LHC and Belle II experiments.
Coordinating complex decision support activities across distributed applications
NASA Technical Reports Server (NTRS)
Adler, Richard M.
1994-01-01
Knowledge-based technologies have been applied successfully to automate planning and scheduling in many problem domains. Automation of decision support can be increased further by integrating task-specific applications with supporting database systems, and by coordinating interactions between such tools to facilitate collaborative activities. Unfortunately, the technical obstacles that must be overcome to achieve this vision of transparent, cooperative problem-solving are daunting. Intelligent decision support tools are typically developed for standalone use, rely on incompatible, task-specific representational models and application programming interfaces (APIs), and run on heterogeneous computing platforms. Getting such applications to interact freely calls for platform-independent capabilities for distributed communication, as well as tools for mapping information across disparate representations. Symbiotics is developing a layered set of software tools (called NetWorks!) for integrating and coordinating heterogeneous distributed applications. The top layer of tools consists of an extensible set of generic, programmable coordination services. Developers access these services via high-level APIs to implement the desired interactions between distributed applications.
Towards an integrated EU data system within AtlantOS project
NASA Astrophysics Data System (ADS)
Pouliquen, Sylvie; Harscoat, Valerie; Waldmann, Christoph; Koop-Jakobsen, Ketill
2017-04-01
The H2020 AtlantOS project started in June 2015 and aims to optimise and enhance the Integrated Atlantic Ocean Observing Systems (IAOOS). One goal is to ensure that data from different and diverse in-situ observing networks are readily accessible and usable by the wider community, the international ocean science community and other stakeholders in this field. To achieve that, the strategy is to move towards an integrated data system within AtlantOS that harmonises workflows, data processing and distribution across the in-situ observing network systems, and integrates in-situ observations into existing European and international data infrastructures (Copernicus Marine Service, SeaDataNet NODCs, EMODnet, OBIS, GEOSS), the so-called Integrators. The targeted integrated system will deal with data management challenges for efficient and reliable data service to users: quality control commons for heterogeneous and near-real-time data; standardisation of mandatory metadata for efficient data exchange; and interoperability of network and Integrator data management systems. Presently, the data acquired by the different in-situ observing networks contributing to the AtlantOS project are processed and distributed using different methodologies and means. Depending on the network data management organization, the data are either processed following recommendations elaborated by the network teams and accessible through a unique portal (FTP or Web), or are processed by individual scientific researchers and made available through National Data Centres or directly at the institution level. Some datasets are available through Integrators, such as Copernicus or EMODnet, but connected through ad-hoc links. To facilitate access to the Atlantic observations and avoid mixing apples with pears, it has been necessary to agree on (1) the EOV list and definitions across the Networks, (2) a minimum set of common vocabularies for metadata and data description to be used by all the Networks, and (3) a minimum level of near-real-time quality control procedures for selected EOVs. A data exchange backbone has then been defined and is being set up to facilitate discovery, viewing and downloading by users. Some tools will be recommended to help Networks plug their data into this backbone and facilitate integration in the Integrators. Finally, existing user services for data discovery, viewing and downloading will be enhanced to ease access to existing observations. An initial working phase relying on existing international standards and protocols, involving data providers, both Networks and Integrators, and dealing with data harmonisation and integration objectives, has led to agreements and recommendations. The setup phase has started, on both the Network and Integrator sides, to adapt the existing systems in order to move toward this integrated EU data system within AtlantOS, as well as collaboration with international partners around the Atlantic Ocean.
2014-01-01
Background The UK continues to experience a rise in the number of anabolic steroid-using clients attending harm reduction services such as needle and syringe programmes. Methods The present study uses interviews conducted with harm reduction service providers as well as illicit users of anabolic steroids from different areas of England and Wales to explore harm reduction for this group of drug users, focussing on needle distribution policies and harm reduction interventions developed specifically for this population of drug users. Results The article addresses the complexity of harm reduction service delivery, highlighting different models of needle distribution, such as peer-led distribution networks, as well as interventions available in steroid clinics, including liver function testing of anabolic steroid users. Aside from providing insights into the function of interventions available to steroid users, along with principles adopted by service providers, the study found significant tensions and dilemmas in policy implementation due to differing perspectives between service providers and service users relating to practices, risks and effective interventions. Conclusion The overarching finding of the study was the tremendous variability across harm reduction delivery sites in terms of available measures and mode of operation. Further research into the effectiveness of different policies directed towards people who use anabolic steroids is critical to the development of harm reduction. PMID:24986546
McKean, Cristina; Law, James; Laing, Karen; Cockerill, Maria; Allon-Smith, Jan; McCartney, Elspeth; Forbes, Joan
2017-07-01
Effective co-practice is essential to deliver services for children with speech, language and communication needs (SLCN). The necessary skills, knowledge and resources are distributed amongst professionals and agencies. Co-practice is complex and a number of barriers, such as 'border disputes' and poor awareness of respective priorities, have been identified. However social-relational aspects of co-practice have not been explored in sufficient depth to make recommendations for improvements in policy and practice. Here we apply social capital theory to data from practitioners: an analytical framework with the potential to move beyond descriptions of socio-cultural phenomena to inform change. Co-practice in a local authority site was examined to understand: (1) the range of social capital relations extant in the site's co-practice; (2) how these relations affected the abilities of the network to collaborate; (3) whether previously identified barriers to co-practice remain; (4) the nature of any new complexities that may have emerged; and (5) how inter-professional social capital might be fostered. A qualitative case study of SLCN provision within one local authority in England and its linked NHS partner was completed through face-to-face semi-structured interviews with professionals working with children with SLCN across the authority. Interviews, exploring barriers and facilitators to interagency working and social capital themes, were transcribed, subjected to thematic analysis using iterative methods and a thematic framework derived. We identified a number of characteristics important for the effective development of trust, reciprocity and negotiated co-practice at different levels of social capital networks: macro-service governance and policy; meso-school sites; and micro-intra-practitioner knowledge and skills. Barriers to co-practice differed from those found in earlier studies. Some negative aspects of complexity were evident, but only where networked professionalism and trust was absent between professions. Where practitioners embraced and services and systems enabled more fluid forms of collaboration, then trust and reciprocity developed. Highly collaborative forms of co-practice, inherently more complex at the service governance, macro-level, bring benefits. At the meso-level of the school and support team network there was greater capacity to individualize co-practice to the needs of the child. Capacity was increased at the micro-level of knowledge and skills to harness the overall resource distributed amongst members of the inter-professional team. The development of social capital, networks of trust across SLCN support teams, should be a priority at all levels-for practitioners, services, commissioners and schools. © 2016 Royal College of Speech and Language Therapists.
Self-Coexistence among IEEE 802.22 Networks: Distributed Allocation of Power and Channel.
Sakin, Sayef Azad; Razzaque, Md Abdur; Hassan, Mohammad Mehedi; Alamri, Atif; Tran, Nguyen H; Fortino, Giancarlo
2017-12-07
Ensuring self-coexistence among IEEE 802.22 networks is a challenging problem owing to opportunistic access of incumbent-free radio resources by users in co-located networks. In this study, we propose a fully-distributed non-cooperative approach to ensure self-coexistence in downlink channels of IEEE 802.22 networks. We formulate the self-coexistence problem as a mixed-integer non-linear optimization problem for maximizing the network data rate, which is an NP-hard one. This work explores a sub-optimal solution by dividing the optimization problem into downlink channel allocation and power assignment sub-problems. Considering fairness, quality of service and minimum interference for customer-premises-equipment, we also develop a greedy algorithm for channel allocation and a non-cooperative game-theoretic framework for near-optimal power allocation. The base stations of networks are treated as players in a game, where they try to increase spectrum utilization by controlling power and reaching a Nash equilibrium point. We further develop a utility function for the game to increase the data rate by minimizing the transmission power and, subsequently, the interference from neighboring networks. A theoretical proof of the uniqueness and existence of the Nash equilibrium has been presented. Performance improvements in terms of data-rate with a degree of fairness compared to a cooperative branch-and-bound-based algorithm and a non-cooperative greedy approach have been shown through simulation studies.
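A much-simplified sketch of the two sub-problems (greedy channel allocation, then best-response power updates); the topology, path-loss model and utility coefficients below are invented for illustration and are not the paper's formulation:

```python
# Toy self-coexistence sketch: base stations greedily pick the channel
# whose current users are farthest away, then iterate best responses on
# transmit power with utility = rate - price * power.
import numpy as np

rng = np.random.default_rng(3)
n_bs, n_channels = 5, 2
positions = rng.uniform(0, 10, size=(n_bs, 2))       # base station locations (hypothetical)

# Greedy downlink channel allocation: spread co-channel base stations apart.
channels = [-1] * n_bs
for bs in range(n_bs):
    def nearest_cochannel(ch):
        dists = [np.linalg.norm(positions[bs] - positions[j])
                 for j in range(bs) if channels[j] == ch]
        return min(dists) if dists else np.inf
    channels[bs] = max(range(n_channels), key=nearest_cochannel)

# Best-response power iteration on each channel.
power = np.full(n_bs, 0.5)
noise, price = 0.1, 0.4
for _ in range(50):
    for bs in range(n_bs):
        interference = sum(
            power[j] / (np.linalg.norm(positions[bs] - positions[j]) ** 2 + 1.0)
            for j in range(n_bs) if j != bs and channels[j] == channels[bs])
        candidates = np.linspace(0.01, 1.0, 100)
        utility = np.log2(1.0 + candidates / (noise + interference)) - price * candidates
        power[bs] = candidates[np.argmax(utility)]    # this BS's best response

print("channels:", channels, "powers:", np.round(power, 2))
```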
Network design for telemedicine--e-health using satellite technology.
Graschew, Georgi; Roelofs, Theo A; Rakowsky, Stefan; Schlag, Peter M
2008-01-01
Over the last decade various international Information and Communications Technology networks have been created for global access to high-level medical care. OP 2000 has designed and validated the high-end interactive video communication system WinVicos especially for telemedical applications, training of the physician in a distributed environment, teleconsultation and second opinion. WinVicos is operated on a workstation (WoTeSa) using standard hardware components and offers superior image quality at a moderate transmission bandwidth of up to 2 Mbps. WoTeSa/WinVicos have been applied for IP-based communication in different satellite-based telemedical networks. In the DELTASS project a disaster scenario was analysed and an appropriate telecommunication system for effective rescue measures for the victims was set up and evaluated. In the MEDASHIP project an integrated system for telemedical services (teleconsultation, tele-electrocardiography, telesonography) on board cruise ships and ferries has been set up. EMISPHER offers equal access for most countries of the Euro-Mediterranean area to on-line services for health care in the required quality of service. E-learning applications, real-time telemedicine and shared management of medical assistance have been realized. The innovative developments in ICT aimed at realizing ubiquitous access to medical resources for everyone at any time and anywhere (u-Health) bear the risk of creating and amplifying a digital divide in the world. Therefore we have analyzed how the objective needs of the heterogeneous partners can be joined, with the result that there is a need for real integration of the various platforms and services. A virtual combination of applications serves as the basic idea for the Virtual Hospital. The development of virtual hospitals and digital medicine helps to bridge the digital divide between different regions of the world and enables equal access to high-level medical care. Pre-operative planning, intra-operative navigation and minimally invasive surgery require a digital and virtual environment supporting the perception of the physician. As data and computing resources in a virtual hospital are distributed over many sites, the concept of the Grid should be integrated with other communication networks and platforms.
Implementation of Distributed Services for a Deep Sea Moored Instrument Network
NASA Astrophysics Data System (ADS)
Oreilly, T. C.; Headley, K. L.; Risi, M.; Davis, D.; Edgington, D. R.; Salamy, K. A.; Chaffey, M.
2004-12-01
The Monterey Ocean Observing System (MOOS) is a moored observatory network consisting of interconnected instrument nodes on the sea surface, midwater, and deep sea floor. We describe Software Infrastructure and Applications for MOOS ("SIAM"), which implement the management, control, and data acquisition infrastructure for the moored observatory. Links in the MOOS network include fiber-optic and 10-BaseT copper connections between the at-sea nodes. A Globalstar satellite transceiver or 900 MHz Freewave terrestrial line-of-sight RF modem provides the link to shore. All of these links support Internet protocols, providing TCP/IP connectivity throughout a system that extends from shore to sensor nodes at the air-sea interface, through the oceanic water column to a benthic network of sensor nodes extending across the deep sea floor. Exploiting this TCP/IP infrastructure as well as capabilities provided by MBARI's MOOS mooring controller, we use powerful Internet software technologies to implement a distributed management, control and data acquisition system for the moored observatory. The system design meets the demanding functional requirements specified for MOOS. Nodes and their instruments are represented by Java RMI "services" having well defined software interfaces. Clients anywhere on the network can interact with any node or instrument through its corresponding service. A client may be on the same node as the service, may be on another node, or may reside on shore. Clients may be human, e.g. when a scientist on shore accesses a deployed instrument in real-time through a user interface. Clients may also be software components that interact autonomously with instruments and nodes, e.g. for purposes such as system resource management or autonomous detection and response to scientifically interesting events. All electrical power to the moored network is provided by solar and wind energy, and the RF shore-to-mooring links are intermittent and relatively low-bandwidth connections. Thus power and wireless bandwidth are limited resources that constrain our choice of service technologies and wireless access strategy. We describe and evaluate system performance in light of actual deployment of observatory elements in Monterey Bay, and discuss how the system can be developed further. We also consider management and control strategies for the cable-to-shore observatory known as MARS ("Monterey Accelerated Research System"). The MARS cable will provide high power and continuous high-bandwidth connectivity between seafloor instrument nodes and shore, thus removing key limitations of the moored observatory. Moreover MARS functional requirements may differ significantly from MOOS requirements. In light of these differences, we discuss how elements of our MOOS moored observatory architecture might be adapted to MARS.
Hughes, Jane; Scrimshire, Ashley; Steinberg, Laura; Yiannoullou, Petros; Newton, Katherine; Hall, Claire; Pearce, Lyndsay; Macdonald, Andrew
2017-05-01
The management of blunt splenic injuries (BSI) has evolved toward strategies that avoid splenectomy. There is growing adoption of interventional radiology (IR) techniques in non-operative management of BSI, with evidence suggesting a corresponding reduction in emergency laparotomy requirements and increased splenic preservation rates. Currently there are no UK national guidelines for the management of blunt splenic injury. This may lead to variations in management, despite the reorganisation of trauma services in England in 2012. A survey was distributed through the British Society of Interventional Radiologists to all UK members aiming to identify availability of IR services in England, radiologists' practice, and attitudes toward management of BSI. 116 responses from respondents working in 23 of the 26 Regional Trauma Networks in England were received. 79% provide a single dedicated IR service but over 50% cover more than one hospital within the network. All offer arterial embolisation for BSI. Only 25% follow guidelines. In haemodynamically stable patients, an increasing trend for embolisation was seen as grade of splenic injury increased from 1 to 4 (12.5%-82.14%, p<0.01). In unstable patients or those with radiological evidence of bleeding, significantly more respondents offer embolisation for grade 1-3 injuries (p<0.01), compared to stable patients. Significantly fewer respondents offer embolisation for grade 5 versus 4 injuries in unstable patients or with evidence of bleeding. Splenic embolisation is offered for a variety of injury grades, providing the patient remains stable. Variation in interventional radiology services remain despite the introduction of regional trauma networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
QoS Challenges and Opportunities in Wireless Sensor/Actuator Networks
Xia, Feng
2008-01-01
A wireless sensor/actuator network (WSAN) is a group of sensors and actuators that are geographically distributed and interconnected by wireless networks. Sensors gather information about the state of the physical world. Actuators react to this information by performing appropriate actions. WSANs thus enable cyber systems to monitor and manipulate the behavior of the physical world. WSANs are growing at a tremendous pace, much like the explosive evolution of the Internet. Supporting quality of service (QoS) will be of critical importance for pervasive WSANs that serve as the network infrastructure of diverse applications. To spark new research and development interest in this field, this paper examines and discusses the requirements, critical challenges, and open research issues of QoS management in WSANs. A brief overview of recent progress is given. PMID:27879755
Applications of CCSDS recommendations to Integrated Ground Data Systems (IGDS)
NASA Technical Reports Server (NTRS)
Mizuta, Hiroshi; Martin, Daniel; Kato, Hatsuhiko; Ihara, Hirokazu
1993-01-01
This paper describes an application of the CCSDS Principal Network (CPN) service model to communications network elements of a postulated Integrated Ground Data System (IGDS). Functions are drawn principally from COSMICS (Cosmic Information and Control System), an integrated space control infrastructure, and the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). From functional requirements, this paper derives a set of five communications network partitions which, taken together, support proposed space control infrastructures and data distribution systems. Our functional analysis indicates that the five network partitions derived in this paper should effectively interconnect the users, centers, processors, and other architectural elements of an IGDS. This paper illustrates a useful application of the CCSDS (Consultative Committee for Space Data Systems) Recommendations to ground data system development.
NASA Astrophysics Data System (ADS)
Gao, F.; Song, X. H.; Zhang, Y.; Li, J. F.; Zhao, S. S.; Ma, W. Q.; Jia, Z. Y.
2017-05-01
In order to reduce the adverse effects of uncertainty on optimal dispatch in an active distribution network, an optimal dispatch model based on chance-constrained programming is proposed in this paper. In this model, the active and reactive power of DG can be dispatched with the aim of reducing the operating cost. The effect of the operation strategy on cost is reflected in the objective, which contains the cost of network loss, DG curtailment, DG reactive power ancillary service, and power quality compensation, while the probabilistic constraints reflect the degree of operational risk. The optimal dispatch model is then simplified into a series of single-stage models, which avoids a large variable dimension and improves the convergence speed. The single-stage model is solved using a combination of particle swarm optimization (PSO) and the point estimate method (PEM). Finally, the proposed optimal dispatch model and method are verified on the IEEE 33-bus test system.
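To make the probabilistic constraint concrete, here is a hedged sketch that checks a chance constraint by Monte Carlo sampling (the paper itself combines PSO with the point estimate method; the voltage surrogate and coefficients below are invented):

```python
# A dispatch is accepted only if the voltage limit holds with at least the
# required probability under uncertain DG output. Toy surrogate model.
import numpy as np

rng = np.random.default_rng(7)
confidence = 0.95
v_min = 0.95                                 # per-unit lower voltage limit

def node_voltage(dg_output, dispatch):
    """Toy surrogate for a power-flow result (hypothetical coefficients)."""
    return 0.93 + 0.02 * dispatch + 0.03 * dg_output

def chance_constraint_satisfied(dispatch, n_samples=10000):
    dg_samples = rng.normal(loc=0.8, scale=0.2, size=n_samples)   # uncertain DG output (p.u.)
    voltages = node_voltage(dg_samples, dispatch)
    return np.mean(voltages >= v_min) >= confidence

for dispatch in (0.2, 0.6, 1.0):
    print(dispatch, chance_constraint_satisfied(dispatch))
```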
NASA Astrophysics Data System (ADS)
Nakamura, Hirotaka; Suzuki, Hiro; Kani, Jun-Ichi; Iwatsuki, Katsumi
2006-05-01
This paper proposes and demonstrates a reliable wide-area wavelength-division-multiplexing passive optical network (WDM-PON) with a wavelength-shifted protection scheme. This protection scheme utilizes the cyclic property of a 2 × N athermal arrayed-waveguide grating and two kinds of wavelength allocations, assigned for working and protection, respectively. Compared with conventional protection schemes, this scheme does not need a 3-dB optical coupler, thus ensuring the large loss budget suited for wide-area WDM-PONs. It also features a passive access node and does not require a protection function in the optical network unit (ONU). The feasibility of the proposed scheme is experimentally confirmed with a carrier-distributed WDM-PON with gigabit Ethernet interface (GbE-IF) and 10-GbE-IF, in which the ONU does not employ a light source, and all wavelengths for upstream signals are centralized and distributed from the central office.
Naval Research Laboratory Fact Book 2012
2012-11-01
Capabilities highlighted include: distributed network-based battle management; high-performance computing supporting uniform and nonuniform memory access with single and multithreaded ... hyperspectral systems; VNIR, MWIR, and LWIR high-resolution systems; wideband SAR systems; RF and laser data links; high-speed, high-power ... hyperspectral imaging system; long-wave infrared (LWIR) quantum well IR photodetector (QWIP) imaging system; Research and Development Services Division.
An Efficient Resource Management System for a Streaming Media Distribution Network
ERIC Educational Resources Information Center
Cahill, Adrian J.; Sreenan, Cormac J.
2006-01-01
This paper examines the design and evaluation of a TV on Demand (TVoD) system, consisting of a globally accessible storage architecture where all TV content broadcast over a period of time is made available for streaming. The proposed architecture consists of idle Internet Service Provider (ISP) servers that can be rented and released dynamically…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katipamula, Srinivas; Gowri, Krishnan; Hernandez, George
This paper describes one such reference process that can be deployed to provide continuous, automated, condition-based maintenance management for buildings that have BIM, a building automation system (BAS) and a computerized maintenance management software (CMMS) system. The process can be deployed using an open source transactional network platform, VOLTTRON™, designed for distributed sensing and controls and supporting both energy efficiency and grid services.
Physical Layer Ethernet Clock Synchronization
2010-11-01
Exel, Reinhard; Kerö, Nikolaus (Oregano Systems, Wien, Austria)
Clock synchronization is a service widely used in distributed networks to coordinate data acquisition and actions. As the requirement to achieve tighter synchronization
ERIC Educational Resources Information Center
Piele, Philip K., Ed.
This volume contains 22 presentations delivered at the 1999 Intelevent Conference held in Edinburgh, Scotland. The proceedings were compiled, printed and distributed by the ERIC Clearinghouse on Educational Management at the University of Oregon. Papers delivered at the conference include the following: the inevitable globalization of…
Understanding the implementation of evidence-based care: a structural network approach.
Parchman, Michael L; Scoglio, Caterina M; Schumm, Phillip
2011-02-24
Recent study of complex networks has yielded many new insights into phenomena such as social networks, the Internet, and sexually transmitted infections. The purpose of this analysis is to examine the properties of a network created by the 'co-care' of patients within one region of the Veterans Health Administration. Data were obtained for all outpatient visits from 1 October 2006 to 30 September 2008 within one large Veterans Integrated Service Network. Types of physician within each clinic were nodes connected by shared patients, with a weighted link representing the number of shared patients between each connected pair. Network metrics calculated included edge weights, node degree, node strength, node coreness, and node betweenness. Log-log plots were used to examine the distribution of these metrics. Sizes of k-core networks were also computed under multiple conditions of node removal. There were 4,310,465 encounters by 266,710 shared patients between 722 provider types (nodes) across 41 stations or clinics, resulting in 34,390 edges. The number of other nodes to which primary care provider nodes have a connection (172.7) is 42% greater than that of general surgeons and two and one-half times as high as that of cardiology. The log-log plot of the edge weight distribution appears to be linear in nature, revealing a 'scale-free' characteristic of the network, while the distributions of node degree and node strength are less so. The analysis of k-core network sizes under increasing removal of primary care nodes shows that the roughly 10 most connected primary care nodes play a critical role in keeping the k-core networks connected, because their removal disintegrates the highest k-core network. Delivery of healthcare in a large healthcare system such as that of the US Department of Veterans Affairs (VA) can be represented as a complex network. This network consists of highly connected provider nodes that serve as 'hubs' within the network, and it demonstrates some 'scale-free' properties. By using currently available tools to explore its topology, we can explore how the underlying connectivity of such a system affects the behavior of providers, and perhaps leverage that understanding to improve the quality and outcomes of care.
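A sketch of the network metrics named above, computed with networkx on a small hypothetical "co-care" graph whose edge weights stand in for counts of shared patients between provider types:

```python
# Degree, strength, coreness and betweenness on a toy co-care graph,
# plus the hub-removal check described in the study.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("primary care", "cardiology", 120),
    ("primary care", "general surgery", 80),
    ("primary care", "mental health", 60),
    ("cardiology", "general surgery", 25),
])

degree = dict(G.degree())                      # number of connected provider types
strength = dict(G.degree(weight="weight"))     # total shared patients per node
coreness = nx.core_number(G)                   # k-core membership
betweenness = nx.betweenness_centrality(G)     # unweighted betweenness

print(degree, strength, coreness, betweenness, sep="\n")

# Removing the primary care "hub" disintegrates the highest k-core,
# mirroring the node-removal analysis in the study.
H = G.copy()
H.remove_node("primary care")
print(nx.core_number(H))
```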
Principles for scaling of distributed direct potable water reuse systems: a modeling study.
Guo, Tianjiao; Englehardt, James D
2015-05-15
Scaling of direct potable water reuse (DPR) systems involves tradeoffs of treatment facility economy-of-scale, versus cost and energy of conveyance including energy for upgradient distribution of treated water, and retention of wastewater thermal energy. In this study, a generalized model of the cost of DPR as a function of treatment plant scale, assuming futuristic, optimized conveyance networks, was constructed for purposes of developing design principles. Fractal landscapes representing flat, hilly, and mountainous topographies were simulated, with urban, suburban, and rural housing distributions placed by modified preferential growth algorithm. Treatment plants were allocated by agglomerative hierarchical clustering, networked to buildings by minimum spanning tree. Simulations assume advanced oxidation-based DPR system design, with 20-year design life and capability to mineralize chemical oxygen demand below normal detection limits, allowing implementation in regions where disposal of concentrate containing hormones and antiscalants is not practical. Results indicate that total DPR capital and O&M costs in rural areas, where systems that return nutrients to the land may be more appropriate, are high. However, costs in urban/suburban areas are competitive with current water/wastewater service costs at scales of ca. one plant per 10,000 residences. This size is relatively small, and costs do not increase significantly until plant service areas fall below 100 to 1000 homes. Based on these results, distributed DPR systems are recommended for consideration for urban/suburban water and wastewater system capacity expansion projects. Copyright © 2015 Elsevier Ltd. All rights reserved.
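As a rough illustration of the two allocation steps named in the abstract, the sketch below clusters synthetic building coordinates into plant service areas with agglomerative hierarchical clustering and then connects each cluster with a minimum spanning tree as a stand-in for the conveyance network; all coordinates, the cluster count, and the library choices (scipy, networkx) are assumptions, not the authors' model.

```python
# Toy sketch (assumptions, not the authors' model): cluster building coordinates
# into treatment-plant service areas, then connect each service area's buildings
# by a minimum spanning tree as a crude proxy for conveyance piping.
import numpy as np
import networkx as nx
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
homes = rng.uniform(0, 10, size=(30, 2))           # 30 building locations (x, y)

# Agglomerative clustering: one cluster ~ one distributed treatment plant.
Z = linkage(homes, method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")    # e.g. 3 plants

dist = squareform(pdist(homes))
for plant in np.unique(labels):
    idx = np.where(labels == plant)[0]
    G = nx.Graph()
    for i in idx:
        for j in idx:
            if i < j:
                G.add_edge(int(i), int(j), weight=dist[i, j])
    mst = nx.minimum_spanning_tree(G)
    pipe_length = sum(d["weight"] for _, _, d in mst.edges(data=True))
    print(f"plant {plant}: {len(idx)} buildings, MST pipe length ~ {pipe_length:.1f}")
```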
Research on Holographic Evaluation of Service Quality in Power Data Network
NASA Astrophysics Data System (ADS)
Wei, Chen; Jing, Tao; Ji, Yutong
2018-01-01
With the rapid development of the power data network and the continuous growth of power data application service systems, more and more service systems are being put into operation. This raises higher requirements for network quality and service quality during actual network operation and maintenance. This paper describes the current status of the electric power network and its data network services. A holographic assessment model is presented to achieve a comprehensive, intelligent assessment of the power data network and its quality of service during operation and maintenance. This evaluation method avoids the problems caused by traditional approaches that assess network performance quality along a single dimension. It can improve the efficiency of network operation and maintenance while guaranteeing the quality of real-time services in the power data network.
NASA Astrophysics Data System (ADS)
Johnston, William; Ernst, M.; Dart, E.; Tierney, B.
2014-04-01
Today's large-scale science projects involve world-wide collaborations that depend on moving massive amounts of data from an instrument to potentially thousands of computing and storage systems at hundreds of collaborating institutions to accomplish their science. This is true for ATLAS and CMS at the LHC, and it is true for the climate sciences, Belle-II at the KEK collider, genome sciences, the SKA radio telescope, and ITER, the international fusion energy experiment. DOE's Office of Science has been collecting science discipline and instrument requirements for network-based data management and analysis for more than a decade. As a result, certain key issues are seen across essentially all science disciplines that rely on the network for significant data transfer, even if the data quantities are modest compared to projects like the LHC experiments. These issues are what this talk addresses, to wit: 1. Optical signal transport advances enabling 100 Gb/s circuits that span the globe on optical fiber, with each fiber carrying 100 such channels; 2. Network router and switch requirements to support high-speed international data transfer; 3. Data transport (TCP is still the norm) requirements to support high-speed international data transfer (e.g. error-free transmission); 4. Network monitoring and testing techniques and infrastructure to maintain the required error-free operation of the many R&E networks involved in international collaborations; 5. Operating system evolution to support very high-speed network I/O; 6. New network architectures and services in the LAN (campus) and WAN networks to support data-intensive science; 7. Data movement and management techniques and software that can maximize the throughput on the network connections between distributed data handling systems; and 8. New approaches to widely distributed workflow systems that can support the data movement and analysis required by the science. All of these areas must be addressed to enable large-scale, widely distributed data analysis systems, and the experience of the LHC can be applied to other scientific disciplines. In particular, specific analogies to the SKA will be cited in the talk.
NASA Astrophysics Data System (ADS)
Habibi, H.; Norouzi, A.; Habib, A.; Seo, D. J.
2016-12-01
To produce accurate predictions of flooding in urban areas, it is necessary to model both natural channel and storm drain networks. While there exist many urban hydraulic models of varying sophistication, most of them are not practical for real-time application for large urban areas. On the other hand, most distributed hydrologic models developed for real-time applications lack the ability to explicitly simulate storm drains. In this work, we develop a storm drain model that can be coupled with distributed hydrologic models such as the National Weather Service Hydrology Laboratory's Distributed Hydrologic Model, for real-time flash flood prediction in large urban areas to improve prediction and to advance the understanding of integrated response of natural channels and storm drains to rainfall events of varying magnitude and spatiotemporal extent in urban catchments of varying sizes. The initial study area is the Johnson Creek Catchment (40.1 km2) in the City of Arlington, TX. For observed rainfall, the high-resolution (500 m, 1 min) precipitation data from the Dallas-Fort Worth Demonstration Network of the Collaborative Adaptive Sensing of the Atmosphere radars is used.
RESTful M2M Gateway for Remote Wireless Monitoring for District Central Heating Networks
Cheng, Bo; Wei, Zesan
2014-01-01
In recent years, the increased interest in energy conservation and environmental protection, combined with the development of modern communication and computer technology, has resulted in the replacement of distributed heating by central heating in urban areas. This paper proposes a Representational State Transfer (REST) Machine-to-Machine (M2M) gateway for wireless remote monitoring of a district central heating network. In particular, we focus on the resource-oriented RESTful M2M gateway architecture, present a uniform device abstraction approach based on Open Service Gateway Initiative (OSGi) technology, implement the resource address mapping mechanism between RESTful resources and the physical sensor devices, present a buffer queue combined with a polling method to implement data scheduling and Quality of Service (QoS) guarantees, and give the RESTful M2M gateway's open service Application Programming Interface (API) set. The performance has been measured and analyzed. Finally, the conclusions and future work are presented. PMID:25436650
RESTful M2M gateway for remote wireless monitoring for district central heating networks.
Cheng, Bo; Wei, Zesan
2014-11-27
In recent years, the increased interest in energy conservation and environmental protection, combined with the development of modern communication and computer technology, has resulted in the replacement of distributed heating by central heating in urban areas. This paper proposes a Representational State Transfer (REST) Machine-to-Machine (M2M) gateway for wireless remote monitoring of a district central heating network. In particular, we focus on the resource-oriented RESTful M2M gateway architecture, present a uniform device abstraction approach based on Open Service Gateway Initiative (OSGi) technology, implement the resource address mapping mechanism between RESTful resources and the physical sensor devices, present a buffer queue combined with a polling method to implement data scheduling and Quality of Service (QoS) guarantees, and give the RESTful M2M gateway's open service Application Programming Interface (API) set. The performance has been measured and analyzed. Finally, the conclusions and future work are presented.
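The two gateway mechanisms described above, the mapping from RESTful resources to physical sensor addresses and the buffer queue drained by polling, can be sketched in a few lines of Python. The resource URIs, device identifiers, and the read_sensor() stub below are hypothetical placeholders, not the paper's actual API.

```python
# Minimal sketch of the two mechanisms the abstract describes, under assumed names:
# (1) a resource map from RESTful URIs to physical sensor addresses, and
# (2) a buffer queue drained by periodic polling as a crude QoS mechanism.
import queue
import time

# (1) RESTful resource -> physical device address mapping (illustrative entries)
resource_map = {
    "/heating/substation/1/supply_temp": ("zigbee", "0x12A4"),
    "/heating/substation/1/return_temp": ("zigbee", "0x12A5"),
    "/heating/substation/2/flow_rate":   ("mbus",   "addr-07"),
}

def read_sensor(bus, addr):
    """Stub for the gateway's device driver layer."""
    return {"bus": bus, "addr": addr, "value": 42.0}

# (2) buffer queue + polling: readings are buffered, then drained at a fixed rate
buffer_q = queue.Queue(maxsize=100)
for uri, (bus, addr) in resource_map.items():
    buffer_q.put((uri, read_sensor(bus, addr)))

def poll_once(max_items=10):
    """Drain up to max_items buffered readings per polling cycle."""
    drained = []
    for _ in range(max_items):
        try:
            drained.append(buffer_q.get_nowait())
        except queue.Empty:
            break
    return drained

while not buffer_q.empty():
    for uri, reading in poll_once():
        print("GET", uri, "->", reading)
    time.sleep(0.1)   # polling interval; tune for QoS vs. load
```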
Autonomous Distributed Congestion Control Scheme in WCDMA Network
NASA Astrophysics Data System (ADS)
Ahmad, Hafiz Farooq; Suguri, Hiroki; Choudhary, Muhammad Qaisar; Hassan, Ammar; Liaqat, Ali; Khan, Muhammad Umer
Wireless technology has become widely popular and an important means of communication. A key issue in delivering wireless services is the problem of congestion, which has an adverse impact on the Quality of Service (QoS), especially timeliness. Although a lot of work has been done in the context of RRM (Radio Resource Management), the delivery of quality service to the end user still remains a challenge. Therefore, there is a need for a system that provides real-time services to the users through high assurance. We propose an intelligent agent-based approach to guarantee a predefined Service Level Agreement (SLA) with heterogeneous user requirements for appropriate bandwidth allocation in QoS sensitive cellular networks. The proposed system architecture exploits the Case Based Reasoning (CBR) technique to handle the RRM process of congestion management. The system accomplishes the predefined SLA through the use of a Retrieval and Adaptation Algorithm based on the CBR case library. The proposed intelligent agent architecture gives autonomy to the Radio Network Controller (RNC) or Base Station (BS) in accepting, rejecting or buffering a connection request to manage system bandwidth. Instead of simply blocking the connection request as congestion hits the system, different buffering durations are allocated to diverse classes of users based on their SLA. This increases the opportunity of connection establishment and reduces the call blocking rate extensively in a changing environment. We carry out simulations of the proposed system that verify efficient performance for congestion handling. The results also show the built-in dynamism of our system in catering for a variety of SLA requirements.
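The buffering idea, holding a connection request for an SLA-dependent duration instead of blocking it outright, can be illustrated with a small sketch. The SLA class names, buffering durations and bandwidth figures below are assumptions for illustration, not values from the paper.

```python
# Illustrative sketch (not the paper's CBR system): instead of blocking a call
# when the cell is congested, buffer the request for an SLA-dependent duration
# and admit it if bandwidth frees up within that window.
SLA_BUFFER_SECONDS = {"gold": 8.0, "silver": 4.0, "bronze": 1.0}   # assumed values

def handle_request(sla_class, required_kbps, free_kbps_timeline):
    """free_kbps_timeline: free bandwidth sampled once per second (simulated)."""
    deadline = SLA_BUFFER_SECONDS[sla_class]
    for t, free_kbps in enumerate(free_kbps_timeline):
        if t > deadline:
            return "blocked"
        if free_kbps >= required_kbps:
            return f"admitted after {t}s in buffer"
    return "blocked"

# Congestion eases at t=3s; gold and silver users ride it out, bronze is blocked.
timeline = [0, 0, 0, 256, 256, 256, 256, 256, 256, 256]
for cls in ("gold", "silver", "bronze"):
    print(cls, "->", handle_request(cls, 128, timeline))
```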
S-Cube: Enabling the Next Generation of Software Services
NASA Astrophysics Data System (ADS)
Metzger, Andreas; Pohl, Klaus
The Service Oriented Architecture (SOA) paradigm is increasingly adopted by industry for building distributed software systems. However, when designing, developing and operating innovative software services and service-based systems, several challenges exist. Those challenges include how to manage the complexity of those systems, how to establish, monitor and enforce Quality of Service (QoS) and Service Level Agreements (SLAs), as well as how to build those systems such that they can proactively adapt to dynamically changing requirements and context conditions. Developing foundational solutions for those challenges requires joint efforts of different research communities such as Business Process Management, Grid Computing, Service Oriented Computing and Software Engineering. This paper provides an overview of S-Cube, the European Network of Excellence on Software Services and Systems. S-Cube brings together researchers from leading research institutions across Europe, who join their competences to develop foundations, theories as well as methods and tools for future service-based systems.
GUEST EDITORS' INTRODUCTION: Guest Editors' introduction
NASA Astrophysics Data System (ADS)
Guerraoui, Rachid; Vinoski, Steve
1997-09-01
The organization of a distributed system can have a tremendous impact on its capabilities, its performance, and its ability to evolve to meet changing requirements. For example, the client - server organization model has proven to be adequate for organizing a distributed system as a number of distributed servers that offer various functions to client processes across the network. However, it lacks peer-to-peer capabilities, and experience with the model has been predominantly in the context of local networks. To achieve peer-to-peer cooperation in a more global context, systems issues of scale, heterogeneity, configuration management, accounting and sharing are crucial, and the complexity of migrating from locally distributed to more global systems demands new tools and techniques. An emphasis on interfaces and modules leads to the modelling of a complex distributed system as a collection of interacting objects that communicate with each other only using requests sent to well defined interfaces. Although object granularity typically varies at different levels of a system architecture, the same object abstraction can be applied to various levels of a computing architecture. Since 1989, the Object Management Group (OMG), an international software consortium, has been defining an architecture for distributed object systems called the Object Management Architecture (OMA). At the core of the OMA is a `software bus' called an Object Request Broker (ORB), which is specified by the OMG Common Object Request Broker Architecture (CORBA) specification. The OMA distributed object model fits the structure of heterogeneous distributed applications, and is applied in all layers of the OMA. For example, each of the OMG Object Services, such as the OMG Naming Service, is structured as a set of distributed objects that communicate using the ORB. Similarly, higher-level OMA components such as Common Facilities and Domain Interfaces are also organized as distributed objects that can be layered over both Object Services and the ORB. The OMG creates specifications, not code, but the interfaces it standardizes are always derived from demonstrated technology submitted by member companies. The specified interfaces are written in a neutral Interface Definition Language (IDL) that defines contractual interfaces with potential clients. Interfaces written in IDL can be translated to a number of programming languages via OMG standard language mappings so that they can be used to develop components. The resulting components can transparently communicate with other components written in different languages and running on different operating systems and machine types. The ORB is responsible for providing the illusion of `virtual homogeneity' regardless of the programming languages, tools, operating systems and networks used to realize and support these components. With the adoption of the CORBA 2.0 specification in 1995, these components are able to interoperate across multi-vendor CORBA-based products. More than 700 member companies have joined the OMG, including Hewlett-Packard, Digital, Siemens, IONA Technologies, Netscape, Sun Microsystems, Microsoft and IBM, which makes it the largest standards body in existence. These companies continue to work together within the OMG to refine and enhance the OMA and its components. 
This special issue of Distributed Systems Engineering publishes five papers that were originally presented at the `Distributed Object-Based Platforms' track of the 30th Hawaii International Conference on System Sciences (HICSS), which was held in Wailea on Maui on 6 - 10 January 1997. The papers, which were selected based on their quality and the range of topics they cover, address different aspects of CORBA, including advanced aspects such as fault tolerance and transactions. These papers discuss the use of CORBA and evaluate CORBA-based development for different types of distributed object systems and architectures. The first paper, by S Rahkila and S Stenberg, discusses the application of CORBA to telecommunication management networks. In the second paper, P Narasimhan, L E Moser and P M Melliar-Smith present a fault-tolerant extension of an ORB. The third paper, by J Liang, S Sédillot and B Traverson, provides an overview of the CORBA Transaction Service and its integration with the ISO Distributed Transaction Processing protocol. In the fourth paper, D Sherer, T Murer and A Würtz discuss the evolution of a cooperative software engineering infrastructure to a CORBA-based framework. The fifth paper, by R Fatoohi, evaluates the communication performance of a commercially-available Object Request Broker (Orbix from IONA Technologies) on several networks, and compares the performance with that of more traditional communication primitives (e.g., BSD UNIX sockets and PVM). We wish to thank both the referees and the authors of these papers, as their cooperation was fundamental in ensuring timely publication.
GSFC network operations with Tracking and Data Relay Satellites
NASA Astrophysics Data System (ADS)
Spearing, R.; Perreten, D. E.
The Tracking and Data Relay Satellite System (TDRSS) Network (TN) has been developed to provide services to all NASA User spacecraft in near-earth orbits. Three inter-relating entities will provide these services. The TN has been transformed from a network continuously changing to meet User specific requirements to a network which is flexible to meet future needs without significant changes in operational concepts. Attention is given to the evolution of the TN network, the TN capabilities-space segment, forward link services, tracking services, return link services, the three basic capabilities, single access services, multiple access services, simulation services, the White Sands Ground Terminal, the NASA communications network, and the network control center.
GSFC network operations with Tracking and Data Relay Satellites
NASA Technical Reports Server (NTRS)
Spearing, R.; Perreten, D. E.
1984-01-01
The Tracking and Data Relay Satellite System (TDRSS) Network (TN) has been developed to provide services to all NASA User spacecraft in near-earth orbits. Three inter-relating entities will provide these services. The TN has been transformed from a network continuously changing to meet User specific requirements to a network which is flexible to meet future needs without significant changes in operational concepts. Attention is given to the evolution of the TN network, the TN capabilities-space segment, forward link services, tracking services, return link services, the three basic capabilities, single access services, multiple access services, simulation services, the White Sands Ground Terminal, the NASA communications network, and the network control center.
Topology control algorithm for wireless sensor networks based on Link forwarding
NASA Astrophysics Data System (ADS)
Pucuo, Cairen; Qi, Ai-qin
2018-03-01
Research on topology control can effectively save energy and increase the service life of wireless sensor networks. In this paper, an algorithm called LTHC (link transmit hybrid clustering), based on link forwarding, is proposed. It decreases energy expenditure by changing the way cluster nodes communicate. The idea is to establish a link between each cluster and the SINK node when the cluster is formed, where the link nodes must be non-cluster nodes. Through this link, the cluster sends information to the SINK node. To achieve a uniform distribution of energy across the network, prolong the network survival time, and improve communication, this approach greatly reduces the energy expenditure of clusters that are far away from the SINK node. Experiments show that LTHC is far superior to the traditional LEACH in terms of both traffic and network survival time.
Evaluation Study of a Wireless Multimedia Traffic-Oriented Network Model
NASA Astrophysics Data System (ADS)
Vasiliadis, D. C.; Rizos, G. E.; Vassilakis, C.
2008-11-01
In this paper, a wireless multimedia traffic-oriented network scheme over a fourth generation system (4-G) is presented and analyzed. We conducted an extensive evaluation study for various mobility configurations in order to incorporate the behavior of the IEEE 802.11b standard over a test-bed wireless multimedia network model. In this context, the Quality of Service (QoS) over this network is vital for providing a reliable high-bandwidth platform for data-intensive sources like video streaming. Therefore, the main metrics considered in terms of QoS were the bandwidth of both dropped and lost packets and their mean packet delay under various traffic conditions. Finally, we used a generic distance-vector routing protocol based on an implementation of the Distributed Bellman-Ford algorithm. The performance of the test-bed network model has been evaluated by using the simulation environment of NS-2.
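For readers unfamiliar with the routing protocol used in the evaluation, the sketch below shows the core relaxation loop of a distance-vector (Distributed Bellman-Ford) computation on a made-up topology; it is an illustration of the general algorithm, not the NS-2 implementation used in the study.

```python
# Minimal sketch of the distance-vector idea behind Distributed Bellman-Ford:
# every node repeatedly offers its distance vector to its neighbours, and each
# neighbour relaxes its own estimates. Topology and link costs are made up.
links = {  # undirected link costs between nodes A-D
    ("A", "B"): 1, ("B", "C"): 2, ("A", "C"): 5, ("C", "D"): 1, ("B", "D"): 4,
}
nodes = sorted({n for edge in links for n in edge})
cost = {}
for (u, v), w in links.items():
    cost[(u, v)] = cost[(v, u)] = w

INF = float("inf")
# dist[u][d] = current estimate at node u of the cost to destination d
dist = {u: {d: (0 if u == d else INF) for d in nodes} for u in nodes}

changed = True
while changed:                      # iterate until no distance vector changes
    changed = False
    for u in nodes:
        for v in nodes:
            if (u, v) not in cost:
                continue            # v is not a neighbour of u
            for d in nodes:         # relax u's estimate via neighbour v
                via_v = cost[(u, v)] + dist[v][d]
                if via_v < dist[u][d]:
                    dist[u][d] = via_v
                    changed = True

print(dist["A"])   # shortest known costs from A to every other node
```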
[Innovative teleradiology network: concept and experience report].
Kämmerer, M; Bethge, O T; Antoch, G
2014-04-01
DICOM E-MAIL provides a standardized way to exchange DICOM (Digital Imaging and Communications in Medicine) objects and further relevant patient data for the treatment context reliably and securely via encrypted e-mails. The current version of the DICOM E-MAIL standard recommendations of the "Deutsche Röntgengesellschaft" (DRG, German Röntgen Society) defines for the first time options for setting up a special directory service for the provision and distribution of communication data of all participants in a network. By using such "telephone books", networks of any size can be operated independently of the provider. Compared to a Cross-Enterprise Document Sharing (XDS) scenario, the required infrastructure is considerably less complex and quicker to realize. Critical success factors are, in addition to the technology and effective support, that the participants themselves contribute to the further development of the network, so that the network approach can be put into practice.
Cloud-based image sharing network for collaborative imaging diagnosis and consultation
NASA Astrophysics Data System (ADS)
Yang, Yuanyuan; Gu, Yiping; Wang, Mingqing; Sun, Jianyong; Li, Ming; Zhang, Weiqiang; Zhang, Jianguo
2018-03-01
In this presentation, we present a new approach to designing a cloud-based image sharing network for collaborative imaging diagnosis and consultation through the Internet, which enables radiologists, specialists and physicians located at different sites to collaboratively and interactively perform imaging diagnosis or consultation for difficult or emergency cases. The designed network combines a regional RIS, grid-based image distribution management, an integrated video conferencing system and multi-platform interactive image display devices, together with secured messaging and data communication. There are three kinds of components in the network: edge servers, a grid-based imaging document registry and repository, and multi-platform display devices. This network has been deployed on a public cloud platform of Alibaba through the Internet since March 2017 and used for small lung nodule and early-stage lung cancer diagnosis services between the radiology departments of Huadong Hospital in Shanghai and the First Hospital of Jiaxing in Zhejiang Province.
The Effects of Denial-of-Service Attacks on Secure Time-Critical Communications in the Smart Grid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fengli; Li, QInghua; Mantooth, Homer Alan
2016-04-02
According to IEC 61850, many smart grid communications require messages to be delivered in a very short time: trip messages and sample values at the transmission level within 3 ms, and interlocking messages at the distribution level within 10 ms. Time-critical communications are vulnerable to denial-of-service (DoS) attacks, such as flooding attacks in which the attacker floods the target network/machine with many messages. We conducted a systematic, experimental study of how DoS attacks affect message delivery delays.
Distributing flight dynamics products via the World Wide Web
NASA Technical Reports Server (NTRS)
Woodard, Mark; Matusow, David
1996-01-01
The NASA Flight Dynamics Products Center (FDPC), which makes available selected operations products via the World Wide Web, is reported on. The FDPC can be accessed from any host machine connected to the Internet. It is a multi-mission service which provides Internet users with unrestricted access to the following standard products: antenna contact predictions; ground tracks; orbit ephemerides; mean and osculating orbital elements; earth sensor sun and moon interference predictions; space flight tracking data network summaries; and Shuttle transport system predictions. Several scientific data bases are available through the service.
Life span in online communities
NASA Astrophysics Data System (ADS)
Grabowski, A.; Kosiński, R. A.
2010-12-01
Recently online communities have attracted great interest and have become an important medium of information exchange between users. The aim of this work is to introduce a simple model of the evolution of online communities. This model describes (a) the time evolution of users’ activity in a web service, e.g., the time evolution of the number of online friends or written posts, (b) the time evolution of the degree distribution of a social network, and (c) the time evolution of the number of active users of a web service. In the second part of the paper we investigate the influence of the users’ lifespan (i.e., the total time in which they are active in an online community) on the process of rumor propagation in evolving social networks. Viral marketing is an important application of such method of information propagation.
Life span in online communities.
Grabowski, A; Kosiński, R A
2010-12-01
Recently online communities have attracted great interest and have become an important medium of information exchange between users. The aim of this work is to introduce a simple model of the evolution of online communities. This model describes (a) the time evolution of users' activity in a web service, e.g., the time evolution of the number of online friends or written posts, (b) the time evolution of the degree distribution of a social network, and (c) the time evolution of the number of active users of a web service. In the second part of the paper we investigate the influence of the users' lifespan (i.e., the total time in which they are active in an online community) on the process of rumor propagation in evolving social networks. Viral marketing is an important application of such method of information propagation.
Failure probability analysis of optical grid
NASA Astrophysics Data System (ADS)
Zhong, Yaoquan; Guo, Wei; Sun, Weiqiang; Jin, Yaohui; Hu, Weisheng
2008-11-01
Optical grid, the integrated computing environment based on optical networks, is expected to be an efficient infrastructure to support advanced data-intensive grid applications. In an optical grid, faults of both computational and network resources are inevitable due to the large scale and high complexity of the system. As optical-network-based distributed computing systems are extensively applied to data processing, application failure probability has become an important indicator of application quality and an important aspect that operators consider. This paper presents a task-based method for analyzing application failure probability in an optical grid. The failure probability of the entire application can then be quantified, and the reduction in application failure probability achieved by different backup strategies can be compared, so that the different requirements of different clients can be satisfied. In an optical grid, when a DAG-based (directed acyclic graph) application is executed under different backup strategies, the application failure probability and the application completion time differ. This paper therefore proposes a new multi-objective differentiated services algorithm (MDSA). The new application scheduling algorithm can guarantee the required failure probability, improve network resource utilization, and realize a compromise between the network operator and application submitters. In this way, differentiated services can be achieved in the optical grid.
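Under the common assumption of independent task failures, the task-based view of application failure probability reduces to a simple product over the tasks of a DAG, and a backup replica per task lowers each factor. The sketch below illustrates that arithmetic with made-up per-task failure probabilities; it is not the paper's MDSA algorithm.

```python
# Hedged sketch of the task-based idea: if an application is a DAG of tasks and
# every task must succeed, the application failure probability is 1 minus the
# product of per-task success probabilities (assuming independent failures).
# With a backup replica, a task only fails if both primary and backup fail.
from math import prod

task_failure_prob = {"t1": 0.02, "t2": 0.05, "t3": 0.01, "t4": 0.03}  # illustrative

def app_failure(fail_probs, backups=0):
    """Independent-failure model; each task has `backups` identical replicas."""
    success = prod(1.0 - p ** (backups + 1) for p in fail_probs.values())
    return 1.0 - success

print("no backup:  %.4f" % app_failure(task_failure_prob))
print("one backup: %.4f" % app_failure(task_failure_prob, backups=1))
```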
NASA Astrophysics Data System (ADS)
Li, Yajie; Zhao, Yongli; Zhang, Jie; Yu, Xiaosong; Chen, Haoran; Zhu, Ruijie; Zhou, Quanwei; Yu, Chenbei; Cui, Rui
2017-01-01
A Virtual Network Operator (VNO) is a provider and reseller of network services from other telecommunications suppliers. These network providers are categorized as virtual because they do not own the underlying telecommunication infrastructure. In terms of business operation, a VNO can provide customers with personalized services by leasing network infrastructure from traditional network providers. The unique business modes of VNOs lead to the emergence of network on demand (NoD) services. Conventional network provisioning involves a series of manual operations and configurations, which is time-consuming. Considering the advantages of Software Defined Networking (SDN), this paper proposes a novel NoD service provisioning solution to satisfy the private network needs of VNOs. The solution is first verified in a real software-defined multi-domain optical network with multi-vendor OTN equipment. With the proposed solution, NoD services can be deployed via online web portals in near-real time. It reinvents the customer experience and redefines how network services are delivered to customers via an online self-service portal. Ultimately, this means a customer will be able to simply go online, click a few buttons and have new services almost instantaneously.
2013-01-01
Introduction This paper examines the cost of quality improvements in Population Services International (PSI) Myanmar's social franchise operations from 2007 to 2009. Methods The social franchise commodities studied were products for reproductive health, malaria, STIs, pneumonia, and diarrhea. This project applied ingredients-based costing for labor, supplies, transport, and overhead. Data were gathered during seven key informant interviews with staff in the central Yangon office, examination of 3 years of payroll data, examination of a time motion study conducted by PSI, and spreadsheets recording the costs of acquiring and transporting supplies. Results In 2009 PSI Myanmar's social franchise devoted $2.02 million towards a 94% reduction in commodity prices offered to its network of over 1700 primary care providers. These providers retained 1/3 of the subsidy as revenue and passed along the other 2/3 to their patients in the course of offering subsidized care for 1.5 million health episodes. In addition, PSI Myanmar devoted $2.09 million to support a team of franchise officers who conducted quality assurance for the private providers, overseeing service quality, and to distribute medical commodities. Conclusion In Myanmar, the social franchise operated by PSI spends roughly $1.00 in quality management and retailing for every $1.00 spent subsidizing medical commodities. Some services are free, but patients also pay fees for other lines of service. Overall patients contribute 1/6 as much as PSI does. Unlike other NGOs, health services in social franchises like PSI are not all free to the patients, nor are the discounts uniformly applied. Discounts and subsidies evolve in response to public health concerns, market demand, providers' cost structures as well as strategic objectives in maintaining the network and its portfolio of services. PMID:23826743
Bishai, David; LeFevre, Amnesty; Theuss, Marc; Boxshall, Matt; Hetherington, John D; Zaw, Min; Montagu, Dominic
2013-01-01
This paper examines the cost of quality improvements in Population Services International (PSI) Myanmar's social franchise operations from 2007 to 2009. The social franchise commodities studied were products for reproductive health, malaria, STIs, pneumonia, and diarrhea. This project applied ingredients-based costing for labor, supplies, transport, and overhead. Data were gathered during seven key informant interviews with staff in the central Yangon office, examination of 3 years of payroll data, examination of a time motion study conducted by PSI, and spreadsheets recording the costs of acquiring and transporting supplies. In 2009 PSI Myanmar's social franchise devoted $2.02 million towards a 94% reduction in commodity prices offered to its network of over 1700 primary care providers. These providers retained 1/3 of the subsidy as revenue and passed along the other 2/3 to their patients in the course of offering subsidized care for 1.5 million health episodes. In addition, PSI Myanmar devoted $2.09 million to support a team of franchise officers who conducted quality assurance for the private providers, overseeing service quality, and to distribute medical commodities. In Myanmar, the social franchise operated by PSI spends roughly $1.00 in quality management and retailing for every $1.00 spent subsidizing medical commodities. Some services are free, but patients also pay fees for other lines of service. Overall patients contribute 1/6 as much as PSI does. Unlike other NGOs, health services in social franchises like PSI are not all free to the patients, nor are the discounts uniformly applied. Discounts and subsidies evolve in response to public health concerns, market demand, providers' cost structures as well as strategic objectives in maintaining the network and its portfolio of services.
Corridor One: An Integrated Distance Visualization Environment for SSI+ASCI Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christopher R. Johnson, Charles D. Hansen
2001-10-29
The goal of Corridor One: An Integrated Distance Visualization Environment for ASCI and SSI Application was to combine the forces of six leading edge laboratories working in the areas of visualization and distributed computing and high performance networking (Argonne National Laboratory, Lawrence Berkeley National Laboratory, Los Alamos National Laboratory, University of Illinois, University of Utah and Princeton University) to develop and deploy the most advanced integrated distance visualization environment for large-scale scientific visualization and demonstrate it on applications relevant to the DOE SSI and ASCI programs. The Corridor One team brought world class expertise in parallel rendering, deep image based rendering, immersive environment technology, large-format multi-projector wall based displays, volume and surface visualization algorithms, collaboration tools and streaming media technology, network protocols for image transmission, high-performance networking, quality of service technology and distributed computing middleware. Our strategy was to build on the very successful teams that produced the I-WAY, ''Computational Grids'' and CAVE technology and to add these to the teams that have developed the fastest parallel visualizations systems and the most widely used networking infrastructure for multicast and distributed media. Unfortunately, just as we were getting going on the Corridor One project, DOE cut the program after the first year. As such, our final report consists of our progress during year one of the grant.
Two tandem queues with general renewal input. 2: Asymptotic expansions for the diffusion model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knessl, C.; Tier, C.
1999-10-01
In Part 1 the authors formulated and solved a diffusion model for two tandem queues with exponential servers and general renewal arrivals. They thus obtained the heavy traffic diffusion approximation to the steady state joint queue length distribution for this network. Here they study asymptotic and numerical properties of the diffusion approximation. In particular, analytical expressions are obtained for the tail probabilities. Both the joint distribution of the two queues and the marginal distribution of the second queue are considered. They also give numerical illustrations of how this marginal is affected by changes in the arrival and service processes.
Self-determined mechanisms in complex networks
NASA Astrophysics Data System (ADS)
Liu, Yang; Yuan, Jian; Shan, Xiuming; Ren, Yong; Ma, Zhengxin
2008-03-01
Self-organized networks are pervasive in communication systems such as the Internet, overlay networks, peer-to-peer networks, and cluster-based services. These networks evolve into complex topologies under specific driving forces, i.e. user demands, technological innovations, design objectives and so on. Our study focuses on the driving forces behind the individual evolution of network components, and on their stimulation of and dominance over the self-organized networks, which we define as self-determined mechanisms in this paper. Understanding the forces underlying the evolution of networks should enable informed design decisions and help to avoid unwanted surprises, such as congestion collapse. A case study on the macroscopic evolution of the Internet topology of autonomous systems under a specific driving force is then presented. Using computer simulations, it is found that the power-law degree distribution can originate from a connection preference for larger numbers of users, and that the small-world property can be caused by rapid growth in the number of users. Our results provide a new feasible perspective from which to understand the intrinsic fundamentals of the topological evolution of complex networks.
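The mechanism suggested by the simulations, preferential attachment to well-connected nodes producing a heavy-tailed degree distribution, is easy to reproduce with standard tools. The sketch below uses networkx's built-in Barabási-Albert generator as a generic stand-in; the parameters are illustrative and it is not the authors' simulation.

```python
# Illustrative simulation (not the authors' model): growing a network where new
# nodes attach preferentially to high-degree nodes reproduces the heavy-tailed
# degree distribution the abstract mentions.
import collections
import networkx as nx

G = nx.barabasi_albert_graph(n=2000, m=2, seed=1)   # 2000 nodes, 2 links per new node
degree_counts = collections.Counter(d for _, d in G.degree())

# A roughly straight line on log-log axes indicates a power-law-like tail.
for degree in sorted(degree_counts)[:10]:
    print(degree, degree_counts[degree])
```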
A Weibull distribution accrual failure detector for cloud computing.
Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Dong, Jian; Zhao, Yao; Wen, Dongxin
2017-01-01
Failure detectors are a fundamental component used to build high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing.
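An accrual failure detector outputs a continuous suspicion level rather than a binary verdict. A minimal sketch of the Weibull-based idea is shown below, with hand-picked shape and scale parameters standing in for the paper's fitting procedure; the threshold and all numbers are assumptions.

```python
# Sketch of an accrual failure detector with a Weibull inter-arrival model,
# assuming fixed (hand-picked) shape/scale parameters rather than a fitted model.
# The suspicion value follows the usual accrual convention:
# phi = -log10(P[a heartbeat still arrives after the observed silence]).
import math

def weibull_cdf(t, shape, scale):
    return 1.0 - math.exp(-((t / scale) ** shape))

def suspicion(time_since_last_heartbeat, shape=1.5, scale=1.0):
    """Higher value = more confidence that the monitored node has failed."""
    p_late = 1.0 - weibull_cdf(time_since_last_heartbeat, shape, scale)
    return -math.log10(max(p_late, 1e-12))

THRESHOLD = 2.0          # suspect failure once phi exceeds this (assumed value)
for silence in (0.5, 1.0, 2.0, 4.0):
    phi = suspicion(silence)
    print(f"silence={silence}s phi={phi:.2f} suspect={phi > THRESHOLD}")
```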
An historical overview of the National Network of Libraries of Medicine, 1985–2015
Speaker, Susan L.
2018-01-01
The National Network of Libraries of Medicine (NNLM), established as the Regional Medical Library Program in 1965, has a rich and remarkable history. The network’s first twenty years were documented in a detailed 1987 history by Alison Bunting, AHIP, FMLA. This article traces the major trends in the network’s development since then: reconceiving the Regional Medical Library staff as a “field force” for developing, marketing, and distributing a growing number of National Library of Medicine (NLM) products and services; subsequent expansion of outreach to health professionals who are unaffiliated with academic medical centers, particularly those in public health; the advent of the Internet during the 1990s, which brought the migration of NLM and NNLM resources and services to the World Wide Web, and a mandate to encourage and facilitate Internet connectivity in the network; and the further expansion of the NLM and NNLM mission to include providing consumer health resources to satisfy growing public demand. The concluding section discusses the many challenges that NNLM staff faced as they transformed the network from a system that served mainly academic medical researchers to a larger, denser organization that offers health information resources to everyone. PMID:29632439
NASA Astrophysics Data System (ADS)
Babik, M.; Chudoba, J.; Dewhurst, A.; Finnern, T.; Froy, T.; Grigoras, C.; Hafeez, K.; Hoeft, B.; Idiculla, T.; Kelsey, D. P.; López Muñoz, F.; Martelli, E.; Nandakumar, R.; Ohrenberg, K.; Prelz, F.; Rand, D.; Sciabà, A.; Tigerstedt, U.; Traynor, D.; Wartel, R.
2017-10-01
IPv4 network addresses are running out and the deployment of IPv6 networking in many places is now well underway. Following the work of the HEPiX IPv6 Working Group, a growing number of sites in the Worldwide Large Hadron Collider Computing Grid (WLCG) are deploying dual-stack IPv6/IPv4 services. The aim of this is to support the use of IPv6-only clients, i.e. worker nodes, virtual machines or containers. The IPv6 networking protocols, while they do contain features aimed at improving security, also bring new challenges for operational IT security. The lack of maturity of IPv6 implementations together with the increased complexity of some of the protocol standards raise many new issues for operational security teams. The HEPiX IPv6 Working Group is producing guidance on best practices in this area. This paper considers some of the security concerns for WLCG in an IPv6 world and presents the HEPiX IPv6 working group guidance for the system administrators who manage IT services on the WLCG distributed infrastructure, for their related site security and networking teams and for developers and software engineers working on WLCG applications.
Cost-aware request routing in multi-geography cloud data centres using software-defined networking
NASA Astrophysics Data System (ADS)
Yuan, Haitao; Bi, Jing; Li, Bo Hu; Tan, Wei
2017-03-01
Current geographically distributed cloud data centres (CDCs) incur gigantic energy and bandwidth costs to provide multiple cloud applications to users around the world. Previous studies only focus on energy cost minimisation in distributed CDCs. However, a CDC provider needs to deliver gigantic data between users and distributed CDCs through internet service providers (ISPs). Geographical diversity of bandwidth and energy costs brings a highly challenging problem of how to minimise the total cost of a CDC provider. With the recently emerging software-defined networking, we study the total cost minimisation problem for a CDC provider by exploiting geographical diversity of energy and bandwidth costs. We formulate the total cost minimisation problem as a mixed integer non-linear program (MINLP). Then, we develop heuristic algorithms to solve the problem and to provide cost-aware request routing for the joint optimisation of the selection of ISPs and the number of servers in distributed CDCs. In addition, to tackle the dynamic workload in distributed CDCs, this article proposes a regression-based workload prediction method to obtain future incoming workload. Finally, this work evaluates the cost-aware request routing by trace-driven simulation and compares it with existing approaches to demonstrate its effectiveness.
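The regression-based workload prediction step can be illustrated very simply: fit a least-squares trend to recent per-interval request counts and extrapolate one interval ahead. The sketch below uses numpy and made-up history values; the paper's actual predictor presumably uses richer features and a longer window.

```python
# Minimal sketch of regression-based workload prediction: fit a least-squares
# line to recent request counts and extrapolate the next interval. The history
# values are illustrative only.
import numpy as np

history = np.array([1200, 1350, 1280, 1500, 1620, 1580, 1700, 1850])  # req/s per interval
t = np.arange(len(history))

slope, intercept = np.polyfit(t, history, deg=1)   # ordinary least squares
next_workload = slope * len(history) + intercept
print(f"predicted next-interval workload ~ {next_workload:.0f} req/s")

# The predicted load would then feed the request-routing heuristic that picks
# ISPs and server counts per data centre for the coming interval.
```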
Dynamic Reconfiguration of a RGBD Sensor Based on QoS and QoC Requirements in Distributed Systems.
Munera, Eduardo; Poza-Lujan, Jose-Luis; Posadas-Yagüe, Juan-Luis; Simó-Ten, José-Enrique; Noguera, Juan Fco Blanes
2015-07-24
The inclusion of embedded sensors into a networked system provides useful information for many applications. A Distributed Control System (DCS) is one of the clearest examples where processing and communications are constrained by the client's requirements and the capacity of the system. An embedded sensor with advanced processing and communications capabilities supplies high level information, abstracting from the data acquisition process and objects recognition mechanisms. The implementation of an embedded sensor/actuator as a Smart Resource permits clients to access sensor information through distributed network services. Smart resources can offer sensor services as well as computing, communications and peripheral access by implementing a self-aware based adaptation mechanism which adapts the execution profile to the context. On the other hand, information integrity must be ensured when computing processes are dynamically adapted. Therefore, the processing must be adapted to perform tasks in a certain lapse of time but always ensuring a minimum process quality. In the same way, communications must try to reduce the data traffic without excluding relevant information. The main objective of the paper is to present a dynamic configuration mechanism to adapt the sensor processing and communication to the client's requirements in the DCS. This paper describes an implementation of a smart resource based on a Red, Green, Blue, and Depth (RGBD) sensor in order to test the dynamic configuration mechanism presented.
Advanced systems engineering and network planning support
NASA Technical Reports Server (NTRS)
Walters, David H.; Barrett, Larry K.; Boyd, Ronald; Bazaj, Suresh; Mitchell, Lionel; Brosi, Fred
1990-01-01
The objective of this task was to take a fresh look at the NASA Space Network Control (SNC) element for the Advanced Tracking and Data Relay Satellite System (ATDRSS) such that it can be made more efficient and responsive to the user by introducing new concepts and technologies appropriate for the 1997 timeframe. In particular, it was desired to investigate the technologies and concepts employed in similar systems that may be applicable to the SNC. The recommendations resulting from this study include resource partitioning, on-line access to subsets of the SN schedule, fluid scheduling, increased use of demand access on the MA service, automating Inter-System Control functions using monitor by exception, increase automation for distributed data management and distributed work management, viewing SN operational control in terms of the OSI Management framework, and the introduction of automated interface management.
Castro, Eida M.; Jiménez, Julio C.; Quinn, Gwendolyn; García, Myra; Colón, Yesenia; Ramos, Axel; Brandon, Thomas; Simmons, Vani; Gwede, Clement; Vadaparampil, Susan; Nazario, Cruz María
2015-01-01
Objective The objectives of this study were to identify cancer-related health care services and to explore the presence of inter-organizational interactions among clinical and support oncology services in southern Puerto Rico. Methods From January through July of 2010, a survey was completed by 54 health care organizations offering clinical, supportive, or both services to cancer patients/survivors (CPS) in southern PR. Survey data were compiled and descriptive analyses performed using the Statistical Package for the Social Sciences (SPSS) software, version 18.0. Results The distribution of the primary services provided by the participating organizations was the following: 26 had clinical services, 16 had support services, and 12 offered a combination of clinical and support services. Only 24% of the surveyed organizations offered their services exclusively to patients diagnosed with cancer. In terms of referral practices, 61% of the responses were for medical specialists, 43% were for mental health services, and 37% were referrals for primary care services. The most common reason for interacting (n = 27) was to provide a given patient both a referral and information. Conclusion Findings suggest gaps in both the availability of oncology services and the delivery of integrated health care. Lack of communication among clinical and support organizations (for cancer patients, specifically) could negatively impact the quality of the services that they offer. Further network analysis studies are needed to confirm these gaps. Until systemic, structural changes occur, more efforts are needed to facilitate communication and collaboration among these kinds of organizations. PMID:25249352
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-09
...--Support Services for Community Services Division Networks AGENCY: National Institute of Corrections, U.S... cooperative agreement will provide support services to NIC Community Services Division sponsored networks. The networks are designed for NIC to assist in meeting the needs of the field of community corrections by...
Etemadi, Manal; Gorji, Hasan Abolghasem; Kangarani, Hannaneh Mohammadi; Ashtarian, Kioomars
2017-12-01
The extent of universal health coverage in terms of financial protection is worrisome in Iran. There are challenges in health policies to guarantee financial accessibility to health services, especially for poor people. Various institutions offer support to ensure that the poor have financial access to health services. The aim of this study is to investigate the relationship network among the institutions active in this field. This study is a policy document analysis. It evaluates the country's legal documents in the field of financial support to the poor for healthcare after the Islamic Revolution in Iran. The researchers looked for the documents on the related websites and referred to the related organizations. The social network analysis approach was chosen for the analysis of the documents. Block-modelling and multi-dimensional scaling (MDS) were used to determine the network structures. The UCINET software was employed to analyse the data. Most of the main actors of this network are financed from the government budget. There is no legal communication and cooperation among some of the actors because of their improper position in the network. Seven blocks have been clustered by CONCOR in terms of the actors' degree of similarity. The social distance among the actors of the seven blocks is very short. Power distribution in the field of financial support to the poor has a fragmented structure; however, it is mainly run by a dominant block consisting of The Supreme Council of Welfare and Social Security, Health Insurance Organization, and the Ministry of Health and Medical Education. The network of financial support for the poor involves multiple actors. This variety has created a series of confusions in terms of the type, level, and scope of responsibilities among the actors. The weak presence of legislative and regulatory institutions, as well as of non-governmental institutions, is one of the main weak points of this network. Copyright © 2017 Elsevier Ltd. All rights reserved.
Symmetric reconfigurable capacity assignment in a bidirectional DWDM access network.
Ortega, Beatriz; Mora, José; Puerto, Gustavo; Capmany, José
2007-12-10
This paper presents a novel architecture for DWDM bidirectional access networks providing symmetric dynamic capacity allocation for both downlink and uplink signals. A foldback arrayed waveguide grating incorporating an optical switch enables the experimental demonstration of flexible assignment of multiservice capacity. Different analog and digital services, such as CATV, 10 GHz-tone, 155Mb/s PRBS and UMTS signals have been transmitted in order to successfully test the system performance under different scenarios of total capacity distribution from the Central Station to different Base Stations with two reconfigurable extra channels for each down and upstream direction.
Networking Ethics: A Survey of Bioethics Networks Across the U.S.
Fausett, Jennifer Kleiner; Gilmore-Szott, Eleanor; Hester, D Micah
2016-06-01
Ethics networks have emerged over the last few decades as a mechanism for individuals and institutions over various regions, cities and states to converge on healthcare-related ethical issues. However, little is known about the development and nature of such networks. In an effort to fill the gap in the knowledge about such networks, a survey was conducted that evaluated the organizational structure, missions and functions, as well as the outcomes/products of ethics networks across the country. Eighteen established bioethics networks were identified via consensus of three search processes and were approached for participation. The participants completed a survey developed for the purposes of this study and distributed via SurveyMonkey. Responses were obtained from 10 of the 18 identified and approached networks regarding topic areas of: Network Composition and Catchment Areas; Network Funding and Expenses; Personnel; Services; and Missions and Accomplishments. Bioethics networks are designed primarily to bring ethics education and support to professionals and hospitals. They do so over specifically defined areas-states, regions, or communities-and each is concerned about how to stay financially healthy. At the same time, the networks work off different organizational models, either as stand-alone organizations or as entities within existing organizational structures.
Fuller, Jeffrey; Oster, Candice; Muir Cochrane, Eimear; Dawson, Suzanne; Lawn, Sharon; Henderson, Julie; O'Kane, Deb; Gerace, Adam; McPhail, Ruth; Sparkes, Deb; Fuller, Michelle; Reed, Richard L
2015-11-11
To test a management model of facilitated reflection on network feedback as a means to engage services in problem solving the delivery of integrated primary mental healthcare to older people. Participatory mixed methods case study evaluating the impact of a network management model using organisational network feedback (through social network analysis, key informant interviews and policy review). A model of facilitated network reflection using network theory and methods. A rural community in South Australia. 32 staff from 24 services and 12 senior service managers from mental health, primary care and social care services. Health and social care organisations identified that they operated in clustered self-managed networks within sectors, with no overarching purposive older people's mental healthcare network. The model of facilitated reflection revealed service goal and role conflicts. These discussions helped local services to identify as a network, and begin the problem-solving communication and referral links. A Governance Group assisted this process. Barriers to integrated servicing through a network included service funding tied to performance of direct care tasks and the lack of a clear lead network administration organisation. A model of facilitated reflection helped organisations to identify as a network, but revealed sensitivity about organisational roles and goals, which demonstrated that conflict should be expected. Networked servicing needed a neutral network administration organisation with cross-sectoral credibility, a mandate and the resources to monitor the network, to deal with conflict, negotiate commitment among the service managers, and provide opportunities for different sectors to meet and problem solve. This requires consistency and sustained intersectoral policies that include strategies and funding to facilitate and maintain health and social care networks in rural communities. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
A routing protocol based on energy and link quality for Internet of Things applications.
Machado, Kássio; Rosário, Denis; Cerqueira, Eduardo; Loureiro, Antonio A F; Neto, Augusto; Souza, José Neuman de
2013-02-04
The Internet of Things (IoT) is attracting considerable attention from the universities, industries, citizens and governments for applications, such as healthcare, environmental monitoring and smart buildings. IoT enables network connectivity between smart devices at all times, everywhere, and about everything. In this context, Wireless Sensor Networks (WSNs) play an important role in increasing the ubiquity of networks with smart devices that are low-cost and easy to deploy. However, sensor nodes are restricted in terms of energy, processing and memory. Additionally, low-power radios are very sensitive to noise, interference and multipath distortions. In this context, this article proposes a routing protocol based on Routing by Energy and Link quality (REL) for IoT applications. To increase reliability and energy-efficiency, REL selects routes on the basis of a proposed end-to-end link quality estimator mechanism, residual energy and hop count. Furthermore, REL proposes an event-driven mechanism to provide load balancing and avoid the premature energy depletion of nodes/networks. Performance evaluations were carried out using simulation and testbed experiments to show the impact and benefits of REL in small and large-scale networks. The results show that REL increases the network lifetime and services availability, as well as the quality of service of IoT applications. It also provides an even distribution of scarce network resources and reduces the packet loss rate, compared with the performance of well-known protocols.
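As a rough illustration of REL-style route selection, the sketch below scores candidate routes by a weighted combination of end-to-end link quality, minimum residual energy along the path, and hop count, then picks the highest score. The weights, metric values and route names are illustrative assumptions, not the protocol's actual parameters.

```python
# Hedged sketch of the route-selection idea in REL: combine link quality,
# residual energy and hop count into one score. All constants are assumed.
def route_score(link_quality, min_residual_energy, hops,
                w_quality=0.5, w_energy=0.4, w_hops=0.1, max_hops=10):
    return (w_quality * link_quality
            + w_energy * min_residual_energy
            - w_hops * (hops / max_hops))

candidate_routes = {
    "via_node_3": {"link_quality": 0.92, "min_residual_energy": 0.40, "hops": 4},
    "via_node_7": {"link_quality": 0.85, "min_residual_energy": 0.75, "hops": 5},
    "via_node_9": {"link_quality": 0.60, "min_residual_energy": 0.90, "hops": 3},
}

best = max(candidate_routes, key=lambda r: route_score(**candidate_routes[r]))
print("selected route:", best)
```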
NASA Astrophysics Data System (ADS)
Domenico, B.
2001-12-01
Universities across the nation are transforming their teaching and research efforts through increased use of a rapidly expanding menu of environmental data. With funding from the Division of Atmospheric Sciences (ATM) of the National Science Foundation (NSF), the Unidata Program is playing a central role in this transformation by enabling universities to employ innovative computing and networking technologies to acquire such data sets in real-time and use them routinely in their classrooms and research labs. Working with the Unidata Program Center (UPC), the Unidata community, comprising over 140 departments, has built a national Internet Data Distribution (IDD) system. The IDD allows users to "subscribe" to certain sets of data products; IDD servers then deliver the requested data to their local servers as soon as they are available from the source. With the initial national implementation in 1994, the IDD may have been the original example of Internet "push" technology. It now provides the reliability, flexibility, and efficiency required by participating institutions. As the underlying Internet technology evolves, we anticipate dramatic increases in the volume of data delivered. We plan to augment the system to better serve disciplines outside the atmospheric sciences and to incorporate anticipated new networking technologies to minimize the impact of the IDD on the underlying network. Unidata has recently undertaken a new initiative called THREDDS (THematic Real-time Environmental Distributed Data Services) under a grant from the NSF Division of Undergraduate Education NSDL (National Science, Math, Engineering, Technology Education Digital Library). The goal of THREDDS is to make it possible for students, educators and researchers to publish, contribute, find, and interact with data relating to the Earth system in a convenient, effective, and integrated fashion. Just as the World Wide Web and digital-library technologies have simplified the process of publishing and accessing multimedia documents, THREDDS will provide needed infrastructure for publishing and accessing scientific data in a similarly convenient fashion.
Analysis of random drop for gateway congestion control. M.S. Thesis
NASA Technical Reports Server (NTRS)
Hashem, Emam Salaheddin
1989-01-01
Lately, the growing demand on the Internet has prompted the need for more effective congestion control policies. Currently, no gateway policy is used to relieve and signal congestion, which leads to unfair service to individual users and a degradation of overall network performance. Network simulation was used to illustrate the character of Internet congestion and its causes. A newly proposed gateway congestion control policy, called Random Drop, was considered as a promising solution to the pressing problem. Random Drop relieves resource congestion upon buffer overflow by choosing a random packet from the service queue to be dropped. The random choice should result in a drop distribution proportional to the bandwidth distribution among all contending TCP connections, thus applying the necessary fairness. Nonetheless, the simulation experiments demonstrate several shortcomings with this policy. Because Random Drop is a congestion control policy, which is not applied until congestion has already occurred, it usually results in a high drop rate that hurts too many connections, including well-behaved ones. Even though the number of packets dropped is different from one connection to another depending on the buffer utilization upon overflow, the TCP recovery overhead is high enough to neutralize these differences, causing unfair congestion penalties. Besides, the drop distribution itself is an inaccurate representation of the average bandwidth distribution, missing much important information about the bandwidth utilization between buffer overflow events. A modification of Random Drop to do congestion avoidance by applying the policy early was also proposed. Early Random Drop has the advantage of avoiding the high drop rate of buffer overflow. The early application of the policy removes the pressure of congestion relief and allows more accurate signaling of congestion. To be used effectively, algorithms for the dynamic adjustment of the parameters of Early Random Drop to suit the current network load must still be developed.
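The two policies discussed above are easy to express concretely. The sketch below contrasts Random Drop (evict a randomly chosen queued packet only when the buffer overflows) with Early Random Drop (probabilistically drop arrivals once the queue passes a threshold); the capacity, threshold and drop probability are illustrative values, not taken from the thesis.

```python
# Minimal sketch of the two gateway policies discussed above.
import random

class Gateway:
    def __init__(self, capacity=64, early_threshold=48, early_drop_prob=0.02):
        self.queue = []
        self.capacity = capacity
        self.early_threshold = early_threshold
        self.early_drop_prob = early_drop_prob

    def enqueue_random_drop(self, pkt):
        if len(self.queue) >= self.capacity:
            # congestion has already occurred: evict a randomly chosen queued packet
            victim = random.randrange(len(self.queue))
            self.queue.pop(victim)
        self.queue.append(pkt)

    def enqueue_early_random_drop(self, pkt):
        # congestion avoidance: start dropping before the buffer overflows
        if len(self.queue) >= self.early_threshold and random.random() < self.early_drop_prob:
            return  # drop the arriving packet as an early congestion signal
        if len(self.queue) >= self.capacity:
            return  # hard overflow
        self.queue.append(pkt)
```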
A proposal of optimal sampling design using a modularity strategy
NASA Astrophysics Data System (ADS)
Simone, A.; Giustolisi, O.; Laucelli, D. B.
2016-08-01
Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is named sampling design, and it has traditionally been addressed with reference to model calibration. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and leakage management purposes, has been addressed considering optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis to identify network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index, as a metric for WDN segmentation, this paper proposes a new way to perform the sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
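As a rough illustration of the modularity-based view of segmentation described above, the sketch below scores a candidate partition of a toy pipe network with the standard Newman weighted modularity from networkx. It is not the paper's WDN-oriented or sampling-oriented index, whose exact formulations are not given in the abstract; edge weights here stand in for the task-specific pipe weights.

```python
# Score a candidate segmentation (set of network modules) by weighted modularity.
import networkx as nx
from networkx.algorithms.community import modularity

G = nx.Graph()
# toy pipe network: (node, node, pipe weight reflecting the monitoring task)
G.add_weighted_edges_from([(1, 2, 3.0), (2, 3, 1.0), (3, 4, 3.0),
                           (4, 5, 1.0), (5, 6, 3.0), (6, 1, 0.5)])

candidate_segmentation = [{1, 2, 3}, {4, 5, 6}]   # candidate modules (e.g. DMAs)
print(modularity(G, candidate_segmentation, weight="weight"))
```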
Jamming Dust: A Low-Power Distributed Jammer Network
2010-12-01
Variable Selection Strategies for Small-area Estimation Using FIA Plots and Remotely Sensed Data
Andrew Lister; Rachel Riemann; James Westfall; Mike Hoppus
2005-01-01
The USDA Forest Service's Forest Inventory and Analysis (FIA) unit maintains a network of tens of thousands of georeferenced forest inventory plots distributed across the United States. Data collected on these plots include direct measurements of tree diameter and height and other variables. We present a technique by which FIA plot data and coregistered...
Distributed Decision Making in a Dynamic Network Environment
1990-01-01
protocols, particularly when traffic arrival statistics are varying or unknown, and loads are high. Both nonpreemptive and preemptive repeat disciplines are... The simulation model allows general value functions, continuous time operation, and preemptive or nonpreemptive service. For reasons of tractability... nonpreemptive LIFO, (4) nonpreemptive LIFO with discarding, (5) nonpreemptive HOL, (6) nonpreemptive HOL with discarding, (7) preemptive repeat HOL, (8
ERIC Educational Resources Information Center
Forbes, Joan; McCartney, Elspeth
2012-01-01
This paper is concerned with the operation of professional networks, norms and trust for leadership in interprofessional relationships and cultures and so the analytic of social capital is used. A mapping is outlined of the sub-types, forms and conceptual key terms in social capital theory that is then applied to explore and better understand…
ERIC Educational Resources Information Center
Lynton, Edith F.; Seldin, Joel R.
This report consists primarily of employer views on hiring practices and training needs in 17 occupations in five occupational areas (office work, distribution, service, communications, and crafts). Described first are the activities of the Labor Market Information Network that resulted in the compilation of data presented in the report. The…
KODAMA and VPC based Framework for Ubiquitous Systems and its Experiment
NASA Astrophysics Data System (ADS)
Takahashi, Kenichi; Amamiya, Satoshi; Iwao, Tadashige; Zhong, Guoqiang; Kainuma, Tatsuya; Amamiya, Makoto
Recently, agent technologies have attracted a lot of interest as an emerging programming paradigm. With such agent technologies, services are provided through collaboration among agents. At the same time, the spread of mobile technologies and communication infrastructures has made it possible to access the network anytime and from anywhere. Using agents and mobile technologies to realize ubiquitous computing systems, we propose a new framework based on KODAMA and VPC. KODAMA provides distributed management mechanisms by using the concept of community and communication infrastructure to deliver messages among agents without agents being aware of the physical network. VPC provides a method of defining peer-to-peer services based on agent communication with policy packages. By merging the characteristics of both KODAMA and VPC functions, we propose a new framework for ubiquitous computing environments. It provides distributed management functions according to the concept of agent communities, agent communications which are abstracted from the physical environment, and agent collaboration with policy packages. Using our new framework, we conducted a large-scale experiment in shopping malls in Nagoya, which sent advertisement e-mails to users' cellular phones according to user location and attributes. The empirical results showed that our new framework worked effectively for sales in shopping malls.
Ancillary-service costs for 12 US electric utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirby, B.; Hirst, E.
1996-03-01
Ancillary services are those functions performed by electrical generating, transmission, system-control, and distribution-system equipment and people to support the basic services of generating capacity, energy supply, and power delivery. The Federal Energy Regulatory Commission defined ancillary services as ``those services necessary to support the transmission of electric power from seller to purchaser given the obligations of control areas and transmitting utilities within those control areas to maintain reliable operations of the interconnected transmission system.`` FERC divided these services into three categories: ``actions taken to effect the transaction (such as scheduling and dispatching services), services that are necessary to maintain the integrity of the transmission system [and] services needed to correct for the effects associated with undertaking a transaction.`` In March 1995, FERC published a proposed rule to ensure open and comparable access to transmission networks throughout the country. The rule defined six ancillary services and developed pro forma tariffs for these services: scheduling and dispatch, load following, system protection, energy imbalance, loss compensation, and reactive power/voltage control.
BOREAS AFM-5 Level-1 Upper Air Network Data
NASA Technical Reports Server (NTRS)
Barr, Alan; Hrynkiw, Charmaine; Newcomer, Jeffrey A. (Editor); Hall, Forrest G. (Editor); Smith, David E. (Technical Monitor)
2000-01-01
The Boreal Ecosystem-Atmosphere Study (BOREAS) Airborne Fluxes and Meteorology (AFM)-5 team collected and processed data from the numerous radiosonde flights during the project. The goals of the AFM-05 team were to provide large-scale definition of the atmosphere by supplementing the existing Atmospheric Environment Service (AES) aerological network, both temporally and spatially. This data set includes basic upper-air parameters collected from the network of upper-air stations during the 1993, 1994, and 1996 field campaigns over the entire study region. The data are contained in tabular ASCII files. The level-1 upper-air network data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). The data files also are available on a CD-ROM (see document number 20010000884).
NASA Astrophysics Data System (ADS)
Fattoruso, Grazia; Longobardi, Antonia; Pizzuti, Alfredo; Molinara, Mario; Marocco, Claudio; De Vito, Saverio; Tortorella, Francesco; Di Francia, Girolamo
2017-06-01
Rainfall data gathered continuously by a distributed rain gauge network are instrumental to more effective hydro-geological risk forecasting and management services, though the estimated rainfall fields used as input suffer from prediction uncertainty. Optimal rain gauge networks can generate accurate estimated rainfall fields. In this research work, a methodology has been investigated for evaluating an optimal rain gauge network aimed at robust hydrogeological hazard investigations. The rain gauge network of the Sarno River basin (Southern Italy) has been evaluated by optimizing a two-objective function that maximizes the estimation accuracy and minimizes the total metering cost, through the variance reduction algorithm along with the climatological variogram (time-invariant). This problem has been solved by using an enumerative search algorithm, evaluating the exact Pareto front within an efficient computational time.
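To make the two-objective formulation concrete, the sketch below enumerates candidate gauge subsets and keeps the exact Pareto front of (estimation variance, metering cost), mirroring the enumerative search mentioned above. The variance and cost functions are placeholders; the study uses a variance-reduction algorithm with a climatological variogram.

```python
# Enumerate candidate rain-gauge subsets and keep the exact Pareto front.
from itertools import combinations

INFO = {"G1": 0.5, "G2": 0.3, "G3": 0.4, "G4": 0.2, "G5": 0.35}   # per-gauge "information" (assumption)

def estimation_variance(subset):            # placeholder objective (assumption)
    return 1.0 / (1.0 + sum(INFO[g] for g in subset))

def cost(subset, unit_cost=1.0):            # placeholder metering cost (assumption)
    return unit_cost * len(subset)

gauges = list(INFO)
candidates = [frozenset(c) for r in range(1, len(gauges) + 1)
              for c in combinations(gauges, r)]

def dominated(a, b):
    """True if b is at least as good as a in both objectives and strictly better in one."""
    va, ca, vb, cb = estimation_variance(a), cost(a), estimation_variance(b), cost(b)
    return vb <= va and cb <= ca and (vb < va or cb < ca)

pareto = [c for c in candidates if not any(dominated(c, o) for o in candidates if o != c)]
for subset in sorted(pareto, key=cost):
    print(sorted(subset), round(estimation_variance(subset), 3), cost(subset))
```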
NASA Astrophysics Data System (ADS)
Bunus, Peter
Online social networking is an important part of the everyday life of college students. Despite the increasing popularity of online social networking among students and faculty members, its educational benefits are largely untested. This paper presents our experience in using social networking applications and video content distribution websites as a complement to traditional classroom education. In particular, the solution has been based on effective adaptation, extension and integration of Facebook, Twitter, Blogger, YouTube and iTunes services for delivering educational material to students on mobile platforms like iPods and 3rd-generation mobile phones. The goals of the proposed educational platform, described in this paper, are to make the learning experience more engaging, to encourage collaborative work and knowledge sharing among students, and to provide an interactive platform for the educators to reach students and deliver lecture material in a totally new way.
Research on NGN network control technology
NASA Astrophysics Data System (ADS)
Li, WenYao; Zhou, Fang; Wu, JianXue; Li, ZhiGuang
2004-04-01
The Next Generation Network (NGN) is currently a focus of discussion and research in the IT sector, and network control technology is its core technology. The key goal of NGN is to realize network convergence and evolution. In the overlay network model centred on Softswitch technology, convergence of the circuit-switched network and the IP network is realized; in the optical transport network centred on ASTN/ASON, convergence of the service layer (i.e. the IP layer) and the optical transport layer is realized. Given the distributed nature of NGN network control and the combination of Softswitch and ASTN/ASON control technologies on the NGN platform, the question of whether IP should be the NGN core carrier platform attracts broad attention, and it is also an end-to-end QoS problem for NGN. Its resolution has significant practical implications for equipment development, network deployment, and network design and optimization, especially for the smooth evolution of present networks towards NGN. This motivates the research topic of NGN network control technology addressed here. This paper introduces the basics of NGN network control technology, proposes an NGN network control reference model, and describes a realizable NGN network structure. On this basis, NGN network control technology is discussed from the viewpoint of functional realization and its working mechanism is analysed.
Comprehensive security framework for the communication and storage of medical images
NASA Astrophysics Data System (ADS)
Slik, David; Montour, Mike; Altman, Tym
2003-05-01
Confidentiality, integrity verification and access control of medical imagery and associated metadata is critical for the successful deployment of integrated healthcare networks that extend beyond the department level. As medical imagery continues to become widely accessed across multiple administrative domains and geographically distributed locations, image data should be able to travel and be stored on untrusted infrastructure, including public networks and server equipment operated by external entities. Given these challenges associated with protecting large-scale distributed networks, measures must be taken to protect patient identifiable information while guarding against tampering, denial of service attacks, and providing robust audit mechanisms. The proposed framework outlines a series of security practices for the protection of medical images, incorporating Transport Layer Security (TLS), public and secret key cryptography, certificate management and a token based trusted computing base. It outlines measures that can be utilized to protect information stored within databases, online and nearline storage, and during transport over trusted and untrusted networks. In addition, it provides a framework for ensuring end-to-end integrity of image data from acquisition to viewing, and presents a potential solution to the challenges associated with access control across multiple administrative domains and institution user bases.
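One ingredient of such a framework, end-to-end integrity verification of image data plus metadata, can be sketched with a keyed hash. The example below is a minimal illustration only; key management, TLS transport, tokens and access control are out of scope, and the function names are hypothetical rather than the paper's API.

```python
# Minimal sketch: seal an image and its metadata with an HMAC at acquisition,
# verify it at viewing, so tampering anywhere in between is detectable.
import hmac, hashlib, json

def seal(image_bytes: bytes, metadata: dict, key: bytes) -> str:
    canonical = image_bytes + json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(key, canonical, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, metadata: dict, key: bytes, tag: str) -> bool:
    return hmac.compare_digest(seal(image_bytes, metadata, key), tag)

key = b"shared-secret-from-certificate-exchange"   # placeholder key material
img = b"\x00\x01\x02\x03"                          # placeholder pixel data
meta = {"patient_id": "anon-1234", "modality": "CT"}
tag = seal(img, meta, key)
assert verify(img, meta, key, tag)
```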
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-06
...,968B] Verizon Business Networks Services, Inc. Senior Analysts-Sales Implementation (SA-SI) Birmingham, Alabama; Verizon Business Networks Services, Inc. Senior Analysts-Sales Implementation (SA-SI) Service Program Delivery Division San Francisco, California; Verizon Business Networks Services, Inc. Senior...
Sensor Network Middleware for Cyber-Physical Systems: Opportunities and Challenges
NASA Astrophysics Data System (ADS)
Singh, G.
2015-12-01
Wireless Sensor Network middleware typically provides abstractions for common tasks such as atomicity, synchronization and communication with the intention of isolating the developers of distributed applications from lower-level details of the underlying platforms. Developing middleware to meet the performance constraints of applications is an important challenge. Although one would like to develop generic middleware services which can be used in a variety of different applications, efficiency considerations often force developers to design middleware and algorithms customized to specific operational contexts. This presentation will discuss techniques to design middleware that is customizable to suit the performance needs of specific applications. We also discuss the challenges posed in designing middleware for pervasive sensor networks and cyber-physical systems with specific focus on environmental monitoring.
Sliceable transponders for metro-access transmission links
NASA Astrophysics Data System (ADS)
Wagner, C.; Madsen, P.; Spolitis, S.; Vegas Olmos, J. J.; Tafur Monroy, I.
2015-01-01
This paper presents a solution for upgrading optical access networks by reusing existing electronics or optical equipment: sliceable transponders using signal spectrum slicing and stitching back method after direct detection. This technique allows transmission of wide bandwidth signals from the service provider (OLT - optical line terminal) to the end user (ONU - optical network unit) over an optical distribution network (ODN) via low bandwidth equipment. We show simulation and experimental results for duobinary signaling of 1 Gbit/s and 10 Gbit/s waveforms. The number of slices is adjusted to match the lowest analog bandwidth of used electrical devices and scale from 2 slices to 10 slices. Results of experimental transmission show error free signal recovery by using post forward error correction with 7% overhead.
Convergence of broadband optical and wireless access networks
NASA Astrophysics Data System (ADS)
Chang, Gee-Kung; Jia, Zhensheng; Chien, Hung-Chang; Chowdhury, Arshad; Hsueh, Yu-Ting; Yu, Jianjun
2009-01-01
This paper describes convergence of optical and wireless access networks for delivering high-bandwidth integrated services over optical fiber and air links. Several key system technologies are proposed and experimentally demonstrated. We report here, for the first time ever, a campus-wide field trial demonstration of a radio-over-fiber (RoF) system transmitting uncompressed standard-definition (SD) and high-definition (HD) real-time video content, carried by 2.4-GHz radio and 60-GHz millimeter-wave signals, respectively, over 2.5-km standard single mode fiber (SMF-28) through the campus fiber network at Georgia Institute of Technology (GT). In addition, subsystem technologies of Base Station and wireless transceivers operated at 60 GHz for real-time video distribution have been developed and tested.
2010-01-01
Service quality on computer and network systems has become increasingly important as many conventional service transactions are moved online. Service quality of computer and network services can be measured by the performance of the service process in throughput, delay, and so on. On a computer and network system, competing service requests of users and associated service activities change the state of limited system resources which in turn affects the achieved service ...relations of service activities, system state and service
Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju
2014-01-01
A tracking service like asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is accomplished based on the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure requiring support for high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes an architecture for multiple trackees (such as mobile assets) and trackers based on the proposed distributed platform in real time. In order to verify the suggested platform, scalability performance according to increases in the number of concurrent lookups was evaluated in a real test bed. Tracking latency and traffic load ratio in the proposed tracking architecture was also evaluated. PMID:24662407
The QAP weighted network analysis method and its application in international services trade
NASA Astrophysics Data System (ADS)
Xu, Helian; Cheng, Long
2016-04-01
Based on QAP (Quadratic Assignment Procedure) correlation and complex network theory, this paper puts forward a new method named the QAP Weighted Network Analysis Method. The core idea of the method is to analyze influences among relations in a social or economic group by building a QAP weighted network of networks of relations. In the QAP weighted network, a node depicts a relation and an undirected edge exists between any pair of nodes if there is significant correlation between the relations. As an application of the QAP weighted network, we study international services trade by using the QAP weighted network, in which nodes depict 10 kinds of services trade relations. After the analysis of international services trade by the QAP weighted network, and by using distance indicators, a hierarchy tree and a minimum spanning tree, the conclusions show that: Firstly, significant correlation exists in all services trade, and the development of any one service trade will stimulate the other nine. Secondly, as economic globalization deepens, correlations in all services trade have been strengthened continually, and clustering effects exist in those services trade. Thirdly, transportation services trade, computer and information services trade and communication services trade have the most influence and are at the core of all services trade.
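The QAP correlation step that underlies the method can be sketched as follows: correlate the off-diagonal entries of two relation matrices over the same set of countries and assess significance by jointly permuting rows and columns of one matrix. In the paper's QAP weighted network, an edge would connect two service-trade relations whose matrices correlate significantly; the data, permutation count and significance threshold below are illustrative.

```python
# QAP correlation with a row/column permutation test between two relation matrices.
import numpy as np

def offdiag(m):
    mask = ~np.eye(m.shape[0], dtype=bool)
    return m[mask]

def qap_correlation(a, b, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)
    observed = np.corrcoef(offdiag(a), offdiag(b))[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(a.shape[0])
        permuted = a[np.ix_(p, p)]          # same permutation applied to rows and columns
        if abs(np.corrcoef(offdiag(permuted), offdiag(b))[0, 1]) >= abs(observed):
            count += 1
    return observed, count / n_perm          # correlation and permutation p-value

rng = np.random.default_rng(1)
transport = rng.random((10, 10)); np.fill_diagonal(transport, 0)   # toy trade matrices
telecom = 0.8 * transport + 0.2 * rng.random((10, 10)); np.fill_diagonal(telecom, 0)
r, p = qap_correlation(transport, telecom)
print(f"QAP r = {r:.3f}, p = {p:.3f}")   # add an edge if p is below, say, 0.05
```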
Framework for experimenting with QoS for multimedia services
NASA Astrophysics Data System (ADS)
Chen, Deming; Colwell, Regis; Gelman, Herschel; Chrysanthis, Panos K.; Mosse, Daniel
1996-03-01
It has been recognized that an effective support for multimedia applications must provide Quality of Service (QoS) guarantees. Current methods propose to provide such QoS guarantees through coordinated network resource reservations. In our approach, we extend this idea by providing system-wide QoS guarantees that consider the data manipulation and transformations needed in the intermediate and end sites of the network. Given a user's QoS requirements, multisegment virtual channels are established with the necessary communication and computation resources reserved for the timely, synchronized, and reliable delivery of the different datatypes. Such data originate in several distributed data repositories, are transformed at intermediate service stations into suitable formats for transportation and presentation, and are delivered to a viewing unit. In this paper, we first review NETWORLD, an architecture that provides such QoS guarantees and an interface for the specification and negotiation of user-level QoS requirements. Our user interface supports both expert and non-expert modes. We then describe how to map user-level QoS requirements into low-level system parameters, leading to a contract between the application and the network. The mapping considers various characteristics of the architectures (such as the hardware and software available at each source, destination, or intermediate site) as well as cost constraints.
Communication network for decentralized remote tele-science during the Spacelab mission IML-2
NASA Technical Reports Server (NTRS)
Christ, Uwe; Schulz, Klaus-Juergen; Incollingo, Marco
1994-01-01
The ESA communication network for decentralized remote telescience during the Spacelab mission IML-2, called Interconnection Ground Subnetwork (IGS), provided data, voice conferencing, video distribution/conferencing and high rate data services to 5 remote user centers in Europe. The combination of services allowed the experimenters to interact with their experiments as they would normally do from the Payload Operations Control Center (POCC) at MSFC. In addition, to enhance their science results, they were able to make use of reference facilities and computing resources in their home laboratory, which typically are not available in the POCC. Characteristics of the IML-2 communications implementation were the adaptation to the different user needs based on modular service capabilities of IGS and the cost optimization for the connectivity. This was achieved by using a combination of traditional leased lines, satellite based VSAT connectivity and N-ISDN according to the simulation and mission schedule for each remote site. The central management system of IGS allows minimization of staffing and the involvement of communications personnel at the remote sites. The successful operation of IGS for IML-2 as a precursor network for the Columbus Orbital Facility (COF) has proven the concept for communications to support the operation of the COF decentralized scenario.
NASA Astrophysics Data System (ADS)
Wang, Y. D.; Jiang, B. T.; Ye, X. Y.
2016-06-01
Urbanization is one of the most important human social activities in the 21st century (Chaolin et al., 2012). With an increasing number of people moving into cities, the provision of adequate urban service facilities, including public and commercial service facilities, in locations where people live has become an important guarantee of the success of urbanization. Exploring the commercial service facilities in a specific area of a city can help us understand the progress and trends of urban renewal in the area, provide a quantitative basis for evaluating the rationality of planning implementation, and facilitate an analysis of the effects of different factors on the regional development of a city (Schor et al. 2003). In this paper, we propose a data processing and analysis method for studying the distribution and development pattern of urban commercial facilities based on customer reviews. In addition, based on road network constraints, we explore the patterns contained in customer review data, including patterns for the spatial distribution and spatial-temporal evolution of facilities as well as the number of facilities and degree of satisfaction.
Dependable Networks as a Paradigm for Network Innovation
NASA Astrophysics Data System (ADS)
Miki, Tetsuya
In the past, dependable networks meant minimizing network outages or the impact of those outages. Over the past decade, however, major network services have shifted from telephone and data transmission to the Internet and to mobile communication, where higher-layer services with a variety of contents are provided. Reviewing this background of network development, the importance of the dependability of higher-layer network services is pointed out. Then, the main aspects of realizing dependability are given for lower-, middle- and higher-layer network services. In addition, some particular issues for dependable networks are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Ravindra; Reilly, James T.; Wang, Jianhui
Deregulation of the electric utility industry, environmental concerns associated with traditional fossil fuel-based power plants, volatility of electric energy costs, Federal and State regulatory support of “green” energy, and rapid technological developments all support the growth of Distributed Energy Resources (DERs) in electric utility systems and ensure an important role for DERs in the smart grid and other aspects of modern utilities. DERs include distributed generation (DG) systems, such as renewables; controllable loads (also known as demand response); and energy storage systems. This report describes the role of aggregators of DERs in providing optimal services to distribution networks, through DER monitoring and control systems—collectively referred to as a Distributed Energy Resource Management System (DERMS)—and microgrids in various configurations.
NASA Astrophysics Data System (ADS)
Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.
2003-12-01
Modern laptop computers, and personal computers, can provide capabilities that are, in many ways, comparable to workstations or departmental servers. However, this doesn't mean we should run all computations on our local computers. We have identified several situations in which it is preferable to implement our seismological application programs in a distributed, server-based, computing model. In this model, application programs on the user's laptop, or local computer, invoke programs that run on an organizational server, and the results are returned to the invoking system. Situations in which a server-based architecture may be preferred include: (a) a program is written in a language, or written for an operating environment, that is unsupported on the local computer, (b) software libraries or utilities required to execute a program are not available on the user's computer, (c) a computational program is physically too large, or computationally too expensive, to run on a user's computer, (d) a user community wants to enforce a consistent method of performing a computation by standardizing on a single implementation of a program, and (e) the computational program may require current information that is not available to all client computers. Until recently, distributed, server-based, computational capabilities were implemented using client/server architectures. In these architectures, client programs were often written in the same language, and they executed in the same computing environment, as the servers. Recently, a new distributed computational model, called Web Services, has been developed. Web Services are based on Internet standards such as XML, SOAP, WSDL, and UDDI. Web Services offer the promise of platform- and language-independent distributed computing. To investigate this new computational model, and to provide useful services to the SCEC Community, we have implemented several computational and utility programs using a Web Service architecture. We have hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute Hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web Services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
Strategic planning of INA-CORS development for public service and tectonic deformation study
NASA Astrophysics Data System (ADS)
Syetiawan, Agung; Gaol, Yustisi Ardhitasari Lumban; Safi'i, Ayu Nur
2017-07-01
GPS technology can be applied for surveying, mapping and research purposes. The simplicity of GPS positioning makes it the first choice for surveys compared with other positioning methods. GPS can measure a position with various accuracy levels depending on the measurement method. In order to facilitate GPS positioning, many organizations have established permanent GPS stations. The National Geodetic Survey (NGS) calls these Continuously Operating Reference Stations (CORS). These devices continuously collect and record GPS data to be used by users. CORS have been built by several government agencies for particular purposes and are scattered throughout Indonesia. The Geospatial Information Agency (BIG), as a geospatial information provider, has begun to compile a grand design of Indonesian CORS (INA-CORS) that can be used for public services such as Real Time Kinematic (RTK) positioning, RINEX data requests, or post-processing services, and for tectonic deformation studies to determine deformation models of Indonesia and to evaluate the national geospatial reference system. This study aims to review the ideal locations for developing the CORS network distribution. The method used is to perform a spatial analysis of the distribution of BIG and BPN CORS, overlaid with the Seismotectonic Map of Indonesia and land cover. The ideal condition to be achieved is that a CORS will be available within every 50 km radius. The results show that the CORS distribution in Java and Nusa Tenggara is already dense, while in Sumatra, Celebes and the Moluccas it still needs to be densified. Meanwhile, the development of CORS in Papua will encounter obstacles related to road access and network connectivity. The results of this analysis can be used as a consideration for determining the priorities of CORS development in Indonesia.
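The 50 km design target mentioned above can be checked with a simple nearest-station distance test. The sketch below flags locations farther than 50 km from every CORS site using a plain haversine distance; the coordinates are made up for illustration, and a real analysis would use the BIG/BPN station lists and proper map data.

```python
# Flag candidate locations that lie outside the 50 km coverage of all CORS sites.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

cors_sites = [(-6.2, 106.8), (-7.8, 110.4), (0.5, 101.4)]       # hypothetical stations
candidate_points = [(-6.9, 107.6), (-2.5, 140.7)]               # hypothetical test points

for lat, lon in candidate_points:
    nearest = min(haversine_km(lat, lon, s_lat, s_lon) for s_lat, s_lon in cors_sites)
    status = "covered" if nearest <= 50.0 else f"gap: nearest CORS {nearest:.0f} km away"
    print((lat, lon), status)
```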
Web Services and Data Enhancements at the Northern California Earthquake Data Center
NASA Astrophysics Data System (ADS)
Neuhauser, D. S.; Zuzlewski, S.; Lombard, P. N.; Allen, R. M.
2013-12-01
The Northern California Earthquake Data Center (NCEDC) provides data archive and distribution services for seismological and geophysical data sets that encompass northern California. The NCEDC is enhancing its ability to deliver rapid information through Web Services. NCEDC Web Services use well-established web server and client protocols and REST software architecture to allow users to easily make queries using web browsers or simple program interfaces and to receive the requested data in real-time rather than through batch or email-based requests. Data are returned to the user in the appropriate format such as XML, RESP, simple text, or MiniSEED depending on the service and selected output format. The NCEDC offers the following web services that are compliant with the International Federation of Digital Seismograph Networks (FDSN) web services specifications: (1) fdsn-dataselect: time series data delivered in MiniSEED format, (2) fdsn-station: station and channel metadata and time series availability delivered in StationXML format, (3) fdsn-event: earthquake event information delivered in QuakeML format. In addition, the NCEDC offers the following IRIS-compatible web services: (1) sacpz: provide channel gains, poles, and zeros in SAC format, (2) resp: provide channel response information in RESP format, (3) dataless: provide station and channel metadata in Dataless SEED format. The NCEDC is also developing a web service to deliver time series from pre-assembled event waveform gathers. The NCEDC has waveform gathers for ~750,000 northern and central California events from 1984 to the present, many of which were created by the USGS NCSN prior to the establishment of the joint NCSS (Northern California Seismic System). We are currently adding waveforms to these older event gathers with time series from the UCB networks and other networks with waveforms archived at the NCEDC, and ensuring that each channel in the event gathers has the highest-quality waveform from the archive.
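As an illustration of how such FDSN-style services are consumed, the sketch below requests channel-level station metadata over HTTP and prints the text response. The endpoint URL and parameter names follow the FDSN web service convention; the exact NCEDC service address should be taken from its documentation and is assumed here.

```python
# Query an FDSN station web service for channel-level metadata as plain text.
import requests

params = {
    "network": "BK",        # Berkeley Digital Seismic Network
    "station": "CMB",
    "level": "channel",
    "format": "text",       # plain text instead of StationXML
}
# Base URL assumed to follow the FDSN convention; verify against the NCEDC docs.
resp = requests.get("https://service.ncedc.org/fdsnws/station/1/query", params=params, timeout=30)
resp.raise_for_status()
print(resp.text[:500])
```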
Malm, Annika; Axelsson, Gösta; Barregard, Lars; Ljungqvist, Jakob; Forsberg, Bertil; Bergstedt, Olof; Pettersson, Thomas J R
2013-09-01
There are relatively few studies on the association between disturbances in drinking water services and symptoms of gastrointestinal (GI) illness. Health Call Centre data concerning GI illness may be a useful source of information. This study investigates whether there is an increased frequency of contacts with the Health Call Centre (HCC) concerning gastrointestinal symptoms at times when there is a risk of impaired water quality due to disturbances at water works or in the distribution network. The study was conducted in Gothenburg, a Swedish city with 0.5 million inhabitants, with a surface water source of drinking water and two water works. All HCC contacts due to GI symptoms (diarrhoea, vomiting or abdominal pain) were recorded for a three-year period, including also sex, age, and geocoded location of residence. The number of contacts with the HCC in the affected geographical areas was recorded during eight periods of disturbances at the water works (e.g. short stops of chlorine dosing), six periods of large disturbances in the distribution network (e.g. pumping station failure or pipe breaks with major consequences), and 818 pipe break and leak repairs over a three-year period. For each period of disturbance the observed number of calls was compared with the number of calls during a control period without disturbances in the same geographical area. In total, about 55,000 calls to the HCC due to GI symptoms were recorded over the three-year period, 35 per 1000 inhabitants and year, but much higher (>200) for children <3 yrs of age. There was no statistically significant increase in calls due to GI illness during or after disturbances at the water works or in the distribution network. Our results indicate that GI symptoms due to disturbances in water works or the distribution network are rare. The number of serious failures was, however, limited, and further studies are needed to be able to assess the risk of GI illness in such cases. The technique of using geocoded HCC data together with geocoded records of disturbances in the drinking water network was feasible.
Differentiated protection method in passive optical networks based on OPEX
NASA Astrophysics Data System (ADS)
Zhang, Zhicheng; Guo, Wei; Jin, Yaohui; Sun, Weiqiang; Hu, Weisheng
2011-12-01
Reliable service delivery has become more significant owing to society's increased dependency on electronic services. As the capability of PONs increases, both residential and business customers may be included in a single PON. Meanwhile, OPEX has been shown to be a very important component of the total cost for a telecommunication operator. Thus, in this paper, we present a partial protection PON architecture and compare the operational expenditures (OPEX) of fully duplicated and partly duplicated protection for ONUs with different distributed fiber lengths, reliability requirements and penalty costs per hour. Finally, we propose a differentiated protection method to minimize OPEX.
Hybrid services efficient provisioning over the network coding-enabled elastic optical networks
NASA Astrophysics Data System (ADS)
Wang, Xin; Gu, Rentao; Ji, Yuefeng; Kavehrad, Mohsen
2017-03-01
As a variety of services have emerged, hybrid services have become more common in real optical networks. Although elastic spectrum resource optimization over elastic optical networks (EONs) has been widely investigated, little research has been carried out on routing and spectrum allocation (RSA) for hybrid services, especially over the network coding-enabled EON. We investigated the RSA for the unicast service and network coding-based multicast service over the network coding-enabled EON with the constraints of time delay and transmission distance. To address this issue, a mathematical model was built to minimize the total spectrum consumption for the hybrid services over the network coding-enabled EON under the constraints of time delay and transmission distance. The model guarantees different routing constraints for different types of services. The intermediate nodes over the network coding-enabled EON are assumed to be capable of encoding the flows for different kinds of information. We propose an efficient heuristic, the network coding-based adaptive routing and layered graph-based spectrum allocation algorithm (NCAR-LGSA). From the simulation results, NCAR-LGSA shows highly efficient performance in terms of spectrum resource utilization under different network scenarios compared with the benchmark algorithms.
Lightning location system supervising Swedish power transmission network
NASA Technical Reports Server (NTRS)
Melin, Stefan A.
1991-01-01
For electric utilities, the ability to prevent or minimize lightning damage to personnel and power systems is of great importance. Therefore, the Swedish State Power Board has been using data since 1983 from a nationwide lightning location system (LLS) for accurately locating lightning ground strikes. Lightning data are distributed and presented on color graphic displays at regional power network control centers as well as at the national power system control center for optimal data use. The main objectives for use of LLS data are: supervising the power system for optimal and safe use of the transmission and generating capacity during periods of thunderstorms; a warning service to maintenance and service crews at power lines and substations so that operations hazardous during lightning can be halted; rapid positioning of emergency crews to locate network damage in areas of detected lightning; and post analysis of power outages and transmission faults in relation to lightning, using archived lightning data to determine appropriate design and insulation levels of equipment. Staff have found LLS data useful and economically justified, since the availability of the power system has increased, as has the level of personnel safety.
NASA Astrophysics Data System (ADS)
Horvath, Tomas; Munster, Petr; Vojtech, Josef; Velc, Radek; Oujezsky, Vaclav
2018-01-01
Optical fiber is the most used medium for current telecommunication networks. Besides data transmission, special advanced applications such as accurate time or stable frequency transfer are becoming more common, especially in research and education networks. On the other hand, new applications like distributed sensing are of interest to ISPs because such sensing enables a new service: protection of the fiber infrastructure. Transmission of all applications in a single fiber can be very cost efficient, but it is necessary to evaluate possible interactions before real application and deployment of the service, especially if the standard 100 GHz grid is considered. We performed a laboratory measurement of the simultaneous transmission of 100 G data based on the DP-QPSK modulation format, accurate time, stable frequency and a sensing system based on phase-sensitive OTDR through two types of optical fibers, G.655 and G.653. These fibers are less common than G.652 fiber, but thanks to their slightly higher nonlinear character, they are suitable for simulating the worst case that can arise in a real network.
Forming an ad-hoc nearby storage, based on IKAROS and social networking services
NASA Astrophysics Data System (ADS)
Filippidis, Christos; Cotronis, Yiannis; Markou, Christos
2014-06-01
We present an ad-hoc "nearby" storage, based on IKAROS and social networking services, such as Facebook. By design, IKAROS is capable to increase or decrease the number of nodes of the I/O system instance on the fly, without bringing everything down or losing data. IKAROS is capable to decide the file partition distribution schema, by taking on account requests from the user or an application, as well as a domain or a Virtual Organization policy. In this way, it is possible to form multiple instances of smaller capacity higher bandwidth storage utilities capable to respond in an ad-hoc manner. This approach, focusing on flexibility, can scale both up and down and so can provide more cost effective infrastructures for both large scale and smaller size systems. A set of experiments is performed comparing IKAROS with PVFS2 by using multiple clients requests under HPC IOR benchmark and MPICH2.
NASA Astrophysics Data System (ADS)
Xiao, Jian; Zhang, Mingqiang; Tian, Haiping; Huang, Bo; Fu, Wenlong
2018-02-01
In this paper, a novel prognostics and health management system architecture for hydropower plant equipment is proposed based on fog computing and Docker containers. Fog nodes are employed to improve the real-time processing ability of the cloud-architecture-based prognostics and health management system and to overcome problems such as long delay times and network congestion. Storm-based stream processing on the fog node is then presented, which can calculate the health index at the edge of the network. Moreover, a distributed micro-service and Docker container architecture for hydropower plant equipment prognostics and health management is also proposed. Using the proposed micro-service architecture, a hydropower plant can achieve business intercommunication and seamless integration of equipment from different manufacturers. Finally, a real application case is given in this paper.
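To illustrate the edge-side computation the paper assigns to a Storm topology on the fog node, the sketch below consumes temperature and vibration samples and emits a rolling health index, so that only the index rather than raw samples needs to travel to the cloud. It is written as plain Python rather than Storm, and the index formula, window and limits are illustrative assumptions.

```python
# Rolling health index computed at the network edge from a stream of samples.
from collections import deque

class HealthIndexBolt:
    def __init__(self, window=50, temp_limit=75.0, vib_limit=4.0):
        self.temps = deque(maxlen=window)
        self.vibs = deque(maxlen=window)
        self.temp_limit = temp_limit
        self.vib_limit = vib_limit

    def process(self, sample):
        """sample = {'temp_c': float, 'vibration_mm_s': float}; returns health in 0..1."""
        self.temps.append(sample["temp_c"])
        self.vibs.append(sample["vibration_mm_s"])
        temp_margin = 1.0 - min(max(self.temps) / self.temp_limit, 1.0)
        vib_margin = 1.0 - min(max(self.vibs) / self.vib_limit, 1.0)
        return round(0.5 * temp_margin + 0.5 * vib_margin, 3)

bolt = HealthIndexBolt()
print(bolt.process({"temp_c": 52.0, "vibration_mm_s": 1.2}))
```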
The multidriver: A reliable multicast service using the Xpress Transfer Protocol
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Fenton, John C.; Weaver, Alfred C.
1990-01-01
A reliable multicast facility extends traditional point-to-point virtual circuit reliability to one-to-many communication. Such services can provide more efficient use of network resources, a powerful distributed name binding capability, and reduced latency in multidestination message delivery. These benefits will be especially valuable in real-time environments where reliable multicast can enable new applications and increase the availability and the reliability of data and services. We present a unique multicast service that exploits features in the next-generation, real-time transfer layer protocol, the Xpress Transfer Protocol (XTP). In its reliable mode, the service offers error, flow, and rate-controlled multidestination delivery of arbitrary-sized messages, with provision for the coordination of reliable reverse channels. Performance measurements on a single-segment Proteon ProNET-4 4 Mbps 802.5 token ring with heterogeneous nodes are discussed.
[Research of regional medical consumables reagent logistics system in the modern hospital].
Wu, Jingjiong; Zhang, Yanwen; Luo, Xiaochen; Zhang, Qing; Zhu, Jianxin
2013-09-01
To explore logistics system management of medical consumables and reagents in the modern hospital and its region. Building on the characteristics of regional logistics, medical institutions within the region cooperate to organize a wide range of specialized logistics activities and to rationalize the regional logistics of medical consumables and reagents. A regional management system, dynamic management system, supply chain information management system, after-sales service system and assessment system are established. Based on research into the existing medical market and medical resources, a regional directory of medical consumables and reagents and its initial data are established. The emphasis is on centralized dispatch of medical consumables and reagents, introducing a qualified logistics company for distribution, to improve hospital management efficiency and reduce costs. The regional medical center and regional community health service centers constitute a regional logistics network; introducing logistics services for medical consumables and reagents fully embodies the integrity, relevance, purposefulness and environmental adaptability of regional logistics distribution. Modern logistics distribution systems can increase the efficiency of regional medical consumables and reagent management and reduce costs.
Code of Federal Regulations, 2010 CFR
2010-10-01
... system of accounts shall be comprised of six major groups—Local Network Services Revenues, Network Access... Group. (j) Long Distance Network Service revenues. Long Distance Network Service revenues shall include... revenues derived from the following categories: Unbundled network element revenues, Resale revenues...
A Weibull distribution accrual failure detector for cloud computing
Wu, Zhibo; Wu, Jin; Zhao, Yao; Wen, Dongxin
2017-01-01
Failure detectors are used to build high availability distributed systems as the fundamental component. To meet the requirement of a complicated large-scale distributed system, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on Weibull Distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing. PMID:28278229
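The accrual idea the abstract relies on can be sketched directly: model heartbeat inter-arrival times with a Weibull distribution and report a suspicion level phi(t) = -log10(S(t)), where S is the Weibull survival function and t is the time since the last heartbeat. The shape, scale and threshold below are illustrative; in a real detector they would be estimated from the observed heartbeat history.

```python
# Accrual failure detector with a Weibull model of heartbeat inter-arrival times.
import math, time

class WeibullAccrualDetector:
    def __init__(self, shape=1.8, scale=1.2, threshold_phi=2.0):
        self.shape = shape            # Weibull shape k (fitted from heartbeat history)
        self.scale = scale            # Weibull scale lambda, in seconds
        self.threshold_phi = threshold_phi
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        self.last_heartbeat = time.monotonic()

    def phi(self, now=None):
        t = (now or time.monotonic()) - self.last_heartbeat
        survival = math.exp(-((t / self.scale) ** self.shape))   # P(next interval > t)
        return -math.log10(max(survival, 1e-300))

    def suspect(self):
        return self.phi() >= self.threshold_phi

detector = WeibullAccrualDetector()
detector.heartbeat()
print(detector.phi(), detector.suspect())
```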