NASA Technical Reports Server (NTRS)
Shyy, Dong-Jye; Redman, Wayne
1993-01-01
For the next-generation packet-switched communications satellite system with onboard processing and spot-beam operation, a reliable onboard fast packet switch is essential to route packets from different uplink beams to different downlink beams. The rapid emergence of point-to-multipoint services such as video distribution, and the large demand for video conferencing, distributed data processing, and network management, make the multicast function essential to a fast packet switch (FPS). The satellite's inherent broadcast capability gives the satellite network an advantage over terrestrial networks in providing multicast services. This report evaluates alternative multicast FPS architectures for onboard baseband switching applications and selects a candidate for subsequent breadboard development. Architecture evaluation and selection will be based on the study performed in phase 1, 'Onboard B-ISDN Fast Packet Switching Architectures', and on other switch architectures that have become commercially available as large-scale integration (LSI) devices.
Enabling end-user network monitoring via the multicast consolidated proxy monitor
NASA Astrophysics Data System (ADS)
Kanwar, Anshuman; Almeroth, Kevin C.; Bhattacharyya, Supratik; Davy, Matthew
2001-07-01
The debugging of problems in IP multicast networks relies heavily on an eclectic set of stand-alone tools. These tools traditionally neither provide a consistent interface nor generate readily interpretable results. We propose the "Multicast Consolidated Proxy Monitor" (MCPM), an integrated system for collecting, analyzing, and presenting multicast monitoring results to both the end user and the network operator at the user's Internet Service Provider (ISP). The MCPM accesses network state information not normally visible to end users and acts as a proxy for disseminating this information. Functionally, through this architecture, we aim to (a) provide a view of the multicast network at varying levels of granularity, (b) provide end users with a limited ability to query the multicast infrastructure in real time, and (c) protect the infrastructure from an overwhelming monitoring load through load control. Operationally, our scheme scales to the ISP's dimensions, adapts to new protocols (introduced as multicast evolves), supports threshold detection for crucial parameters, and offers an access-controlled, customizable interface design. Although the multicast scenario is used to illustrate the benefits of consolidated monitoring, the ultimate aim is to scale the scheme to unicast IP networks.
A decentralized software bus based on IP multicasting
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd
1995-01-01
We describe a decentralized, reconfigurable implementation of a conference management system based on the low-level Internet Protocol (IP) multicasting protocol. IP multicasting allows low-cost, world-wide, two-way transmission of data between large numbers of conferencing participants through the Multicasting Backbone (MBone). Each conference is structured as a software bus -- a messaging system that provides a run-time interconnection model that acts as a separate agent (i.e., the bus) for routing, queuing, and delivering messages between distributed programs. Unlike the client-server interconnection model, the software bus model provides a level of indirection that enhances the flexibility and reconfigurability of a distributed system. Current software bus implementations like POLYLITH, however, rely on a centralized bus process and point-to-point protocols (i.e., TCP/IP) to route, queue, and deliver messages. We implement a software bus called the MULTIBUS that relies on a separate process only for routing and uses a reliable IP multicasting protocol for delivery of messages. The use of multicasting means that interconnections are independent of IP machine addresses. This approach allows reconfiguration of bus participants during system execution without notifying other participants of new IP addresses. The use of IP multicasting also permits an economy of scale in the number of participants. We describe the MULTIBUS protocol elements and show how our implementation performs better than centralized bus implementations.
Efficient Group Coordination in Multicast Trees
2001-01-01
...describe a novel protocol to coordinate multipoint groupwork within the IP-multicast framework. The protocol supports Internet-wide coordination for large...and highly-interactive groupwork, relying on the dissemination of coordination directives among group members across a shared end-to-end multicast...
High-Performance, Reliable Multicasting: Foundations for Future Internet Groupware Applications
NASA Technical Reports Server (NTRS)
Callahan, John; Montgomery, Todd; Whetten, Brian
1997-01-01
Network protocols that provide efficient, reliable, and totally-ordered message delivery to large numbers of users will be needed to support many future Internet applications. The Reliable Multicast Protocol (RMP) is implemented on top of IP multicast to facilitate reliable transfer of data for replicated databases and groupware applications that will emerge on the Internet over the next decade. This paper explores some of the basic questions and applications of reliable multicasting in the context of the development and analysis of RMP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ennis, G.; Lala, T.K.
This document presents the results of a study undertaken by First Pacific Networks as part of EPRI Project RP-3567-01 regarding the support of broadcast services within the EPRI Utility Communications Architecture (UCA) protocols and the use of such services by UCA applications. This report has focused on the requirements and architectural implications of broadcast within UCA. A subsequent phase of this project is to develop specific recommendations for extending UCA so as to support broadcast. The conclusions of this report are presented in Section 5. The authors summarize the major conclusions as follows: broadcast and multicast support would be very useful within UCA, not only for utility-specific applications but also simply to support the network engineering of a large-scale communications system; in this regard, UCA is no different from other large network systems, which have found broadcast and multicast to be of substantial benefit for a variety of system management purposes; the primary architectural impact of broadcast and multicast falls on the UCA network level (which would need to be enhanced) and the UCA application level (which would be the user of broadcast); there is a useful subset of MMS services which could take advantage of broadcast; and the UCA network level would need to be enhanced in both addressing and routing so as to properly support broadcast. A subsequent analysis will be required to define the specific enhancements to UCA required to support broadcast and multicast.
Mobility based multicast routing in wireless mesh networks
NASA Astrophysics Data System (ADS)
Jain, Sanjeev; Tripathi, Vijay S.; Tiwari, Sudarshan
2013-01-01
There exist two fundamental approaches to multicast routing, namely minimum cost trees (MCTs) and shortest path trees (SPTs). A minimum cost tree connects the receivers and sources using a minimum number of transmissions (MNTs); the MNT approach is generally used for energy-constrained sensor and mobile ad hoc networks. In this paper we consider node mobility and present a simulation-based comparison of shortest path trees (SPTs), minimum Steiner trees (MSTs), and minimum-number-of-transmission trees in wireless mesh networks, using performance metrics such as end-to-end delay, average jitter, throughput, packet delivery ratio, and average unicast packet delivery ratio. We also evaluate multicast performance in small and large wireless mesh networks. For small networks we find that when the traffic load is moderate or high, the SPTs outperform the MSTs and MNTs in all cases; the SPTs have the lowest end-to-end delay and average jitter in almost all cases. For large networks we observe that the MSTs provide the minimum total edge cost and the minimum number of transmissions. We also find one drawback of SPTs: when the group size is large and the multicast sending rate is high, SPTs cause more packet losses to other flows than MCTs.
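As a minimal illustration of the SPT approach compared above, the sketch below builds a shortest-path multicast tree by running Dijkstra from the source and keeping only the predecessor edges on the paths to the receivers; the topology, costs, and node names are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: build a shortest-path multicast tree (SPT) by running
# Dijkstra from the source and keeping the predecessor edges of each receiver.
import heapq

def dijkstra(graph, source):
    """graph: {node: {neighbor: cost}}; returns (distance, predecessor) maps."""
    dist, pred = {source: 0}, {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], pred[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, pred

def shortest_path_tree(graph, source, receivers):
    """Return the set of edges forming the SPT spanning the source and receivers."""
    _, pred = dijkstra(graph, source)
    edges = set()
    for r in receivers:
        node = r
        while node != source:          # walk back along the shortest path
            edges.add((pred[node], node))
            node = pred[node]
    return edges

# Toy mesh topology (made up for illustration).
mesh = {"s": {"a": 1, "b": 2}, "a": {"s": 1, "c": 2}, "b": {"s": 2, "c": 1},
        "c": {"a": 2, "b": 1, "d": 1}, "d": {"c": 1}}
print(shortest_path_tree(mesh, "s", ["c", "d"]))
```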
NASA Astrophysics Data System (ADS)
Li, Ze; Zhang, Min; Wang, Danshi; Cui, Yue
2017-09-01
We propose a flexible and reconfigurable wavelength-division multiplexing (WDM) multicast scheme supporting downstream emergency multicast communication for the WDM optical access network (WDM-OAN) via a multicast module (MM) based on four-wave mixing (FWM) in a semiconductor optical amplifier. It serves as an emergency measure to handle bursty, large-bandwidth, real-time multicast services with fast service provisioning and high resource efficiency. It also serves as a physical backup in cases of big data migration or network failures caused by faulty lasers or modulators. It provides convenient and reliable multicast service and emergency protection for the WDM-OAN without modifying the WDM-OAN structure. The strategies of placing an MM at the optical line terminal and at the remote node are discussed so as to apply the scheme to passive optical networks and active optical networks, respectively. Utilizing the proposed scheme, we demonstrate a proof-of-concept experiment in which one-to-six/eight 10-Gbps non-return-to-zero differential phase-shift keying WDM multicasts under both strategies are successfully transmitted over 20.2 km of single-mode fiber. One-to-many reconfigurable WDM multicast at higher data rates and with other modulation formats is also possible with the proposed scheme. It can be applied to different WDM access technologies, e.g., time-wavelength-division multiplexing OAN and coherent WDM-OAN, and upgraded smoothly.
A Secure Multicast Framework in Large and High-Mobility Network Groups
NASA Astrophysics Data System (ADS)
Lee, Jung-San; Chang, Chin-Chen
With the widespread use of Internet applications such as teleconferencing, Pay-TV, collaborative tasks, and message services, how to construct and distribute the group session key to all group members securely is becoming more and more important. Instead of adopting point-to-point packet delivery, these emerging applications are based upon the mechanism of multicast communication, which allows group members to communicate with multiple parties efficiently. There are two main issues in the mechanism of multicast communication: key distribution and scalability. The first issue is how to distribute the group session key to all group members securely. The second is how to maintain high performance in large network groups. Group members in conventional multicast systems have to keep numerous secret keys in databases, which makes it very inconvenient for them. Furthermore, in case a member joins or leaves the communication group, many involved participants have to change their own secret keys to preserve the forward secrecy and the backward secrecy. We consequently propose a novel framework for providing secure multicast communication in large network groups. Our proposed framework not only preserves the forward secrecy and the backward secrecy but also achieves better performance than existing alternatives. Specifically, simulation results demonstrate that our scheme is suitable for high-mobility environments.
Multisites Coordination in Shared Multicast Trees
1999-01-01
...conferencing, distributed interactive simulations, and collaborative systems. We describe a novel protocol to coordinate multipoint groupwork in the IP...multicast framework. The protocol supports Internet-wide coordination for large and highly-interactive groupwork, relying on transmission of...
Space Flight Middleware: Remote AMS over DTN for Delay-Tolerant Messaging
NASA Technical Reports Server (NTRS)
Burleigh, Scott
2011-01-01
This paper describes a technique for implementing scalable, reliable, multi-source multipoint data distribution in space flight communications -- Delay-Tolerant Reliable Multicast (DTRM) -- that is fully supported by the "Remote AMS" (RAMS) protocol of the Asynchronous Message Service (AMS) proposed for standardization within the Consultative Committee for Space Data Systems (CCSDS). The DTRM architecture enables applications to easily "publish" messages that will be reliably and efficiently delivered to an arbitrary number of "subscribing" applications residing anywhere in the space network, whether in the same subnet or in a subnet on a remote planet or vehicle separated by many light minutes of interplanetary space. The architecture comprises multiple levels of protocol, each included for a specific purpose and allocated specific responsibilities: "application AMS" traffic performs end-system data introduction and delivery subject to access control; underlying "remote AMS" directs this application traffic to populations of recipients at remote locations in a multicast distribution tree, enabling the architecture to scale up to large networks; further underlying Delay-Tolerant Networking (DTN) Bundle Protocol (BP) advances RAMS protocol data units through the distribution tree using delay-tolerant store-and-forward methods; and further underlying reliable "convergence-layer" protocols ensure successful data transfer over each segment of the end-to-end route. The result is scalable, reliable, delay-tolerant multi-source multicast that is largely self-configuring.
Mobility based key management technique for multicast security in mobile ad hoc networks.
Madhusudhanan, B; Chitra, S; Rajan, C
2015-01-01
In MANET multicasting, forward and backward secrecy result in an increased packet drop rate owing to mobility. Frequent rekeying causes large message overhead, which increases energy consumption and end-to-end delay. In particular, the prevailing group key management techniques suffer from frequent mobility and disconnections. So there is a need to design a multicast key management technique to overcome these problems. In this paper, we propose a mobility-based key management technique for multicast security in MANETs. Initially, the nodes are categorized according to their stability index, which is estimated based on link availability and mobility. A multicast tree is constructed such that for every weak node there is a strong parent node. A session key-based encryption technique is utilized to transmit the multicast data. The rekeying process is performed periodically by the initiator node. The rekeying interval is fixed depending on the node category, so this technique greatly minimizes the rekeying overhead. By simulation results, we show that our proposed approach reduces the packet drop rate and improves data confidentiality.
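The abstract does not give the exact formulas, so the sketch below only illustrates the general idea under assumed definitions: a node's stability index is taken as link availability damped by node speed, nodes are split into strong and weak categories, and the rekeying interval is chosen per category. All names, formulas, and thresholds are hypothetical.

```python
# Hypothetical sketch of the paper's idea: classify nodes by a stability index
# and pick a per-category rekeying interval (formulas and thresholds assumed).

def stability_index(link_availability, speed_m_s):
    """Assumed definition: availability in [0, 1] damped by node speed."""
    return link_availability / (1.0 + speed_m_s)

def categorize(index, threshold=0.5):
    return "strong" if index >= threshold else "weak"

def rekey_interval(category, base_interval_s=60):
    """Stable (strong) nodes can rekey less often; weak nodes rekey more often."""
    return base_interval_s * (2 if category == "strong" else 1)

nodes = {"n1": (0.9, 0.5), "n2": (0.7, 4.0), "n3": (0.95, 0.1)}
for name, (avail, speed) in nodes.items():
    idx = stability_index(avail, speed)
    cat = categorize(idx)
    print(name, round(idx, 2), cat, rekey_interval(cat), "s")
```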
NASA Technical Reports Server (NTRS)
Birman, Kenneth; Schiper, Andre; Stephenson, Pat
1990-01-01
A new protocol is presented that efficiently implements a reliable, causally ordered multicast primitive and is easily extended into a totally ordered one. Intended for use in the ISIS toolkit, it offers a way to bypass the most costly aspects of ISIS while benefiting from virtual synchrony. The facility scales with bounded overhead. Measured speedups of more than an order of magnitude were obtained when the protocol was implemented within ISIS. One conclusion is that systems such as ISIS can achieve performance competitive with the best existing multicast facilities--a finding contradicting the widespread concern that fault-tolerance may be unacceptably costly.
A high performance totally ordered multicast protocol
NASA Technical Reports Server (NTRS)
Montgomery, Todd; Whetten, Brian; Kaplan, Simon
1995-01-01
This paper presents the Reliable Multicast Protocol (RMP). RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service such as IP Multicasting. RMP is fully and symmetrically distributed so that no site bears an undue portion of the communication load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority resilient, and totally resilient atomic delivery. These QoS guarantees are selectable on a per packet basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, an implicit naming service, mutually exclusive handlers for messages, and mutually exclusive locks. It has commonly been held that a large performance penalty must be paid in order to implement total ordering -- RMP discounts this. On SparcStation 10's on a 1250 KB/sec Ethernet, RMP provides totally ordered packet delivery to one destination at 842 KB/sec throughput and with 3.1 ms packet latency. The performance stays roughly constant independent of the number of destinations. For two or more destinations on a LAN, RMP provides higher throughput than any protocol that does not use multicast or broadcast.
NASA Astrophysics Data System (ADS)
Woradit, Kampol; Guyot, Matthieu; Vanichchanunt, Pisit; Saengudomlert, Poompat; Wuttisittikulkij, Lunchakorn
While the problem of multicast routing and wavelength assignment (MC-RWA) in optical wavelength division multiplexing (WDM) networks has been investigated, relatively few researchers have considered network survivability for multicasting. This paper provides an optimization framework to solve the MC-RWA problem in a multi-fiber WDM network that can recover from a single-link failure with shared protection. Using the light-tree (LT) concept to support multicast sessions, we consider two protection strategies that try to reduce service disruptions after a link failure. The first strategy, called light-tree reconfiguration (LTR) protection, computes a new multicast LT for each session affected by the failure. The second strategy, called optical branch reconfiguration (OBR) protection, tries to restore a logical connection between two adjacent multicast members disconnected by the failure. To solve the MC-RWA problem optimally, we propose an integer linear programming (ILP) formulation that minimizes the total number of fibers required for both working and backup traffic. The ILP formulation takes into account joint routing of working and backup traffic, the wavelength continuity constraint, and the limited splitting degree of multicast-capable optical cross-connects (MC-OXCs). After showing some numerical results for optimal solutions, we propose heuristic algorithms that reduce the computational complexity and make the problem solvable for large networks. Numerical results suggest that the proposed heuristic yields efficient solutions compared to optimal solutions obtained from exact optimization.
Lu, Guo-Wei; Qin, Jun; Wang, Hongxiang; Ji, Yuefeng; Sharif, Gazi Mohammad; Yamaguchi, Shigeru
2016-02-08
Optical logic gates, especially the exclusive-OR (XOR) gate, play an important role in photonic computing and in various network functionalities in future optical networks. On the other hand, optical multicast is another indispensable functionality for delivering information efficiently in optical networks. In this paper, for the first time, we propose and experimentally demonstrate a flexible optical three-input XOR gate scheme for multiple input phase-modulated signals with a 1-to-2 multicast functionality for each XOR operation, using the four-wave mixing (FWM) effect in a single piece of highly nonlinear fiber (HNLF). Through FWM in the HNLF, all of the possible XOR operations among the input signals can be realized simultaneously while sharing a single piece of HNLF. By selecting the obtained XOR components with a subsequent wavelength-selective component, the number of XOR gates and the lights participating in the XOR operations can be flexibly configured. The reconfigurability of the proposed XOR gate and the integration of the optical logic gate and multicast functions in a single device offer flexibility in network design and improve network efficiency. We experimentally demonstrate a flexible three-input XOR gate for four 10-Gbaud binary phase-shift keying signals with a multicast scale of two. Error-free operation for the obtained XOR results is achieved. Potential application of the integrated XOR and multicast function in network coding is also discussed.
Evaluation of multicast schemes in optical burst-switched networks: the case with dynamic sessions
NASA Astrophysics Data System (ADS)
Jeong, Myoungki; Qiao, Chunming; Xiong, Yijun; Vandenhoute, Marc
2000-10-01
In this paper, we evaluate the performance of several multicast schemes in optical burst-switched WDM networks, taking into account the overheads due to control packets and guard bands (GBs) of bursts on separate channels (wavelengths). A straightforward scheme is called Separate Multicasting (S-MCAST), where each source node constructs separate bursts for its multicast traffic (one per multicast session) and its unicast traffic. To reduce the overhead due to GBs (and control packets), one may piggyback the multicast traffic in bursts containing unicast traffic using a scheme called Multiple Unicasting (M-UCAST). The third scheme is called Tree-Shared Multicasting (TS-MCAST), whereby multicast traffic belonging to multiple multicast sessions can be mixed together in a burst, which is delivered via a shared multicast tree. In [1], we evaluated several multicast schemes with static sessions at the flow level. In this paper, we perform a simple analysis of the multicast schemes and evaluate the performance of the three schemes, focusing on the case with dynamic sessions, in terms of link utilization, bandwidth consumption, blocking (loss) probability, goodput, and processing load.
Lightweight causal and atomic group multicast
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Schiper, Andre; Stephenson, Pat
1991-01-01
The ISIS toolkit is a distributed programming environment based on support for virtually synchronous process groups and group communication. A suite of protocols is presented to support this model. The approach revolves around a multicast primitive, called CBCAST, which implements a fault-tolerant, causally ordered message delivery. This primitive can be used directly or extended into a totally ordered multicast primitive, called ABCAST. It normally delivers messages immediately upon reception, and imposes a space overhead proportional to the size of the groups to which the sender belongs, usually a small number. It is concluded that process groups and group communication can achieve performance and scaling comparable to that of a raw message transport layer. This finding contradicts the widespread concern that this style of distributed computing may be unacceptably costly.
Reliable WDM multicast in optical burst-switched networks
NASA Astrophysics Data System (ADS)
Jeong, Myoungki; Qiao, Chunming; Xiong, Yijun
2000-09-01
In this paper, we present a reliable WDM (Wavelength-Division Multiplexing) multicast protocol in optical burst-switched (OBS) networks. Since the burst dropping (loss) probability may be potentially high in a heavily loaded OBS backbone network, reliable multicast protocols that have been developed for IP networks at the transport (or application) layer may incur heavy overheads such as a large number of duplicate retransmissions. In addition, it may take a longer time for an end host to detect and then recover from burst dropping (loss) that occurred at the WDM layer. For efficiency reasons, we propose burst loss recovery within the OBS backbone (i.e., at the WDM link layer). The proposed protocol requires two additional functions to be performed by the WDM switch controller: subcasting and maintaining burst states, when the WDM switch has more than one downstream node on the WDM multicast tree. We show that these additional functions are simple to implement and the overhead associated with them is manageable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ishihara, T
Currently, the problem at hand is in distributing identical copies of OEP and filter software to a large number of farm nodes. One of the common methods used to transfer this software is unicast. The unicast protocol faces the problem of repetitiously sending the same data over the network. Since the sending rate is limited, this process becomes a bottleneck. Therefore, one possible solution to this problem lies in creating a reliable multicast protocol. A specific type of multicast protocol is the Bulk Multicast Protocol [4]. This system consists of one sender distributing data to many receivers. The sender delivers data at a given rate of data packets. In response, each receiver replies to the sender with a status packet which contains information about the packet loss in terms of Negative Acknowledgments. The probability of a status packet being sent back to the sender is 1/N, where N is the number of receivers. The protocol is designed to have approximately 1 status packet for each data packet sent. In this project, we were able to show that the time taken for the complete transfer of a file to multiple receivers was about 12 times faster with multicast than with unicast. The implementation of this experimental protocol shows remarkable improvement in mass data transfer to a large number of farm machines.
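The 1/N reply probability above (reconstructed from the surrounding text) gives an expected one status packet per data packet regardless of group size, since N receivers each reply with probability 1/N. A minimal sanity check with hypothetical group sizes:

```python
# Sanity check (assumed reading of the abstract): each of N receivers replies
# with probability 1/N, so the expected number of status packets per data
# packet is N * (1/N) = 1, independent of N.
import random

def expected_status_packets(n_receivers, trials=10_000):
    total = 0
    for _ in range(trials):
        total += sum(random.random() < 1.0 / n_receivers for _ in range(n_receivers))
    return total / trials

for n in (4, 32, 256):
    print(n, round(expected_status_packets(n), 3))   # each prints roughly 1.0
```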
NASA Astrophysics Data System (ADS)
Wei, Chengying; Xiong, Cuilian; Liu, Huanlin
2017-12-01
Maximal multicast stream algorithms based on network coding (NC) can improve throughput in wavelength-division multiplexing (WDM) networks, but the achieved throughput is still far below the network's theoretical maximum. Moreover, existing multicast stream algorithms do not provide the information distribution pattern and the routing at the same time. In this paper, an improved genetic algorithm is proposed to maximize the optical multicast throughput by NC and to determine the multicast stream distribution through hybrid chromosome construction, for multicast with a single source and multiple destinations. The proposed hybrid chromosomes are constructed from binary chromosomes and integer chromosomes, where the binary chromosomes represent the optical multicast routing and the integer chromosomes indicate the multicast stream distribution. A fitness function is designed to guarantee that each destination can receive the maximum number of decodable multicast streams. Simulation results show that the proposed method is far superior to the typical NC-based maximal multicast stream algorithms in terms of network throughput in WDM networks.
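The abstract describes hybrid chromosomes but not their exact encoding; the sketch below only illustrates one plausible layout under assumed conventions: a binary part selecting candidate links for the multicast routing and an integer part giving the number of coded streams intended for each destination. All names, sizes, and the toy fitness function are hypothetical.

```python
# Hypothetical encoding of a "hybrid chromosome": a binary gene per candidate
# link (is the link used in the multicast routing?) plus an integer gene per
# destination (how many coded streams it should receive).
import random

NUM_LINKS = 8          # candidate links in the routing subgraph (assumed)
NUM_DESTS = 3          # multicast destinations (assumed)
MAX_STREAMS = 4        # upper bound on decodable streams per destination (assumed)

def random_chromosome():
    routing = [random.randint(0, 1) for _ in range(NUM_LINKS)]            # binary part
    streams = [random.randint(1, MAX_STREAMS) for _ in range(NUM_DESTS)]  # integer part
    return routing, streams

def fitness(chromosome):
    """Toy fitness: reward total decodable streams, lightly penalize links used."""
    routing, streams = chromosome
    return sum(streams) - 0.1 * sum(routing)

population = [random_chromosome() for _ in range(10)]
best = max(population, key=fitness)
print("best chromosome:", best, "fitness:", round(fitness(best), 2))
```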
Multicasting in Wireless Communications (Ad-Hoc Networks): Comparison against a Tree-Based Approach
NASA Astrophysics Data System (ADS)
Rizos, G. E.; Vasiliadis, D. C.
2007-12-01
We examine on-demand multicasting in ad hoc networks. The Core Assisted Mesh Protocol (CAMP) is a well-known protocol for multicast routing in ad-hoc networks, generalizing the notion of core-based trees employed for internet multicasting into multicast meshes that have much richer connectivity than trees. On the other hand, wireless tree-based multicast routing protocols use much simpler structures for determining route paths, using only parent-child relationships. In this work, we compare the performance of the CAMP protocol against the performance of wireless tree-based multicast routing protocols, in terms of two important factors, namely packet delay and ratio of dropped packets.
VMCast: A VM-Assisted Stability Enhancing Solution for Tree-Based Overlay Multicast
Gu, Weidong; Zhang, Xinchang; Gong, Bin; Zhang, Wei; Wang, Lu
2015-01-01
Tree-based overlay multicast is an effective group communication method for media streaming applications. However, a group member's departure causes all of its descendants to be disconnected from the multicast tree for some time, which results in poor performance. This problem is difficult to address because the overlay multicast tree is inherently unstable. In this paper, we propose a novel stability enhancing solution, VMCast, for tree-based overlay multicast. This solution uses two types of on-demand cloud virtual machines (VMs), i.e., multicast VMs (MVMs) and compensation VMs (CVMs). MVMs are used to disseminate the multicast data, whereas CVMs are used to offer streaming compensation. The VMs in the same cloud datacenter constitute a VM cluster. Each VM cluster is responsible for a service domain (VMSD), and each group member belongs to a specific VMSD. The data source delivers the multicast data to MVMs through a reliable path, and MVMs further disseminate the data to group members along domain overlay multicast trees. This approach structurally improves the stability of the overlay multicast tree. We further utilize CVM-based streaming compensation to enhance the stability of the data distribution in the VMSDs. VMCast can be used as an extension to existing tree-based overlay multicast solutions, to provide better services for media streaming applications. We applied VMCast to two application instances (i.e., HMTP and HCcast). The results show that it can clearly enhance the stability of the data distribution. PMID:26562152
VMCast: A VM-Assisted Stability Enhancing Solution for Tree-Based Overlay Multicast.
Gu, Weidong; Zhang, Xinchang; Gong, Bin; Zhang, Wei; Wang, Lu
2015-01-01
Tree-based overlay multicast is an effective group communication method for media streaming applications. However, a group member's departure causes all of its descendants to be disconnected from the multicast tree for some time, which results in poor performance. This problem is difficult to address because the overlay multicast tree is inherently unstable. In this paper, we propose a novel stability enhancing solution, VMCast, for tree-based overlay multicast. This solution uses two types of on-demand cloud virtual machines (VMs), i.e., multicast VMs (MVMs) and compensation VMs (CVMs). MVMs are used to disseminate the multicast data, whereas CVMs are used to offer streaming compensation. The VMs in the same cloud datacenter constitute a VM cluster. Each VM cluster is responsible for a service domain (VMSD), and each group member belongs to a specific VMSD. The data source delivers the multicast data to MVMs through a reliable path, and MVMs further disseminate the data to group members along domain overlay multicast trees. This approach structurally improves the stability of the overlay multicast tree. We further utilize CVM-based streaming compensation to enhance the stability of the data distribution in the VMSDs. VMCast can be used as an extension to existing tree-based overlay multicast solutions, to provide better services for media streaming applications. We applied VMCast to two application instances (i.e., HMTP and HCcast). The results show that it can clearly enhance the stability of the data distribution.
Degree-constrained multicast routing for multimedia communications
NASA Astrophysics Data System (ADS)
Wang, Yanlin; Sun, Yugeng; Li, Guidan
2005-02-01
Multicast services have been increasingly used by many multimedia applications. As one of the key techniques supporting multimedia applications, rational and effective multicast routing algorithms are very important to network performance. When switch nodes in a network have different multicast capabilities, the multicast routing problem is modeled as the degree-constrained Steiner problem. We present two heuristic algorithms, named BMSTA and BSPTA, for the degree-constrained case in multimedia communications. Both algorithms generate degree-constrained multicast trees with bandwidth and end-to-end delay bounds. Simulations over random networks were carried out to compare the performance of the two proposed algorithms. Experimental results show that the proposed algorithms have advantages in traffic load balancing, which can avoid link blocking and enhance network performance efficiently. BMSTA is better than BSPTA at finding unsaturated links and/or unsaturated nodes when generating multicast trees. The performance of BMSTA is affected by the variation of the degree constraints.
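Neither BMSTA nor BSPTA is specified in the abstract, so the sketch below only shows the generic degree-constrained idea with a Prim-style greedy: grow the tree by the cheapest edge whose in-tree endpoint still has spare multicast degree. The graph, costs, and degree limits are hypothetical.

```python
# Hypothetical Prim-style sketch of degree-constrained multicast tree growth:
# always add the cheapest edge whose in-tree endpoint has spare degree.
def degree_constrained_tree(graph, source, receivers, max_degree):
    in_tree, degree, edges = {source}, {source: 0}, []
    targets = set(receivers)
    while targets - in_tree:
        best = None
        for u in in_tree:
            if degree[u] >= max_degree[u]:
                continue                       # no spare multicast degree left
            for v, cost in graph[u].items():
                if v not in in_tree and (best is None or cost < best[0]):
                    best = (cost, u, v)
        if best is None:
            raise ValueError("cannot extend tree under degree constraints")
        cost, u, v = best
        in_tree.add(v)
        degree[u] += 1
        degree[v] = degree.get(v, 0) + 1       # the child also consumes one degree
        edges.append((u, v, cost))
    return edges

# Toy example (topology, costs, and degree limits are made up).
g = {"s": {"a": 1, "b": 4}, "a": {"s": 1, "b": 1, "c": 3},
     "b": {"s": 4, "a": 1, "c": 1}, "c": {"a": 3, "b": 1}}
limits = {"s": 1, "a": 2, "b": 2, "c": 2}
print(degree_constrained_tree(g, "s", ["b", "c"], limits))
```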
A Stateful Multicast Access Control Mechanism for Future Metro-Area-Networks.
ERIC Educational Resources Information Center
Sun, Wei-qiang; Li, Jin-sheng; Hong, Pei-lin
2003-01-01
Multicasting is a necessity for a broadband metro-area-network; however security problems exist with current multicast protocols. A stateful multicast access control mechanism, based on MAPE, is proposed. The architecture of MAPE is discussed, as well as the states maintained and messages exchanged. The scheme is flexible and scalable. (Author/AEF)
A proposed group management scheme for XTP multicast
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Weaver, Alfred C.
1990-01-01
The purpose of a group management scheme is to enable its associated transfer layer protocol to be responsive to user determined reliability requirements for multicasting. Group management (GM) must assist the client process in coordinating multicast group membership, allow the user to express the subset of the multicast group that a particular multicast distribution must reach in order to be successful (reliable), and provide the transfer layer protocol with the group membership information necessary to guarantee delivery to this subset. GM provides services and mechanisms that respond to the need of the client process or process level management protocols to coordinate, modify, and determine attributes of the multicast group, especially membership. XTP GM provides a link between process groups and their multicast groups by maintaining a group membership database that identifies members in a name space understood by the underlying transfer layer protocol. Other attributes of the multicast group useful to both the client process and the data transfer protocol may be stored in the database. Examples include the relative dispersion, most recent update, and default delivery parameters of a group.
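As a rough illustration of such a group membership database (the field names and API below are assumptions for illustration, not the XTP GM specification), a minimal sketch:

```python
# Minimal, assumed sketch of a group-management membership database: it maps a
# process group to member records addressed in the transfer layer's name space
# and records which members must be reached for a delivery to count as reliable
# (the "required subset" the abstract mentions).
from dataclasses import dataclass, field

@dataclass
class Member:
    transfer_address: str          # address understood by the transfer-layer protocol
    required: bool = False         # must be reached for a reliable multicast
    last_update: float = 0.0       # most recent membership update (assumed field)

@dataclass
class MulticastGroup:
    name: str
    members: dict = field(default_factory=dict)

    def join(self, proc_id, transfer_address, required=False):
        self.members[proc_id] = Member(transfer_address, required)

    def leave(self, proc_id):
        self.members.pop(proc_id, None)

    def required_subset(self):
        """Addresses that a delivery must reach to be considered successful."""
        return [m.transfer_address for m in self.members.values() if m.required]

group = MulticastGroup("sensor-fusion")
group.join("p1", "mc::node-7", required=True)
group.join("p2", "mc::node-9")
print(group.required_subset())     # ['mc::node-7']
```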
Multipoint to multipoint routing and wavelength assignment in multi-domain optical networks
NASA Astrophysics Data System (ADS)
Qin, Panke; Wu, Jingru; Li, Xudong; Tang, Yongli
2018-01-01
In multi-point to multi-point (MP2MP) routing and wavelength assignment (RWA) problems, researchers usually assume the optical network to be a single domain. In practice, however, optical networks are developing toward multi-domain and larger-scale deployments. In this context, multi-core shared tree (MST)-based MP2MP RWA introduces additional problems, such as selecting the optimal multicast domain sequence and deciding which domains the core nodes belong to. In this letter, we focus on MST-based MP2MP RWA problems in multi-domain optical networks; mixed integer linear programming (MILP) formulations to optimally construct MP2MP multicast trees are presented. A heuristic algorithm based on network virtualization and a weighted clustering algorithm (NV-WCA) is proposed. Simulation results show that, under different traffic patterns, the proposed algorithm achieves significant improvements in network resource occupation and multicast tree setup latency compared with conventional algorithms designed for a single-domain network environment.
Programming with process groups: Group and multicast semantics
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Cooper, Robert; Gleeson, Barry
1991-01-01
Process groups are a natural tool for distributed programming and are increasingly important in distributed computing environments. Discussed here is a new architecture that arose from an effort to simplify Isis process group semantics. The findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. A system based on this architecture is now being implemented in collaboration with the Chorus and Mach projects.
Optical multicast system for data center networks.
Samadi, Payman; Gupta, Varun; Xu, Junjie; Wang, Howard; Zussman, Gil; Bergman, Keren
2015-08-24
We present the design and experimental evaluation of an Optical Multicast System for Data Center Networks, a hardware-software system architecture that uniquely integrates passive optical splitters in a hybrid network architecture for faster and simpler delivery of multicast traffic flows. An application-driven control plane manages the integrated optical and electronic switched traffic routing in the data plane layer. The control plane includes a resource allocation algorithm to optimally assign optical splitters to the flows. The hardware architecture is built on a hybrid network with both Electronic Packet Switching (EPS) and Optical Circuit Switching (OCS) networks to aggregate Top-of-Rack switches. The OCS is also the connectivity substrate that attaches the splitters to the optical network. The optical multicast system implementation requires only commodity optical components. We built a prototype and developed a simulation environment to evaluate the performance of the system for bulk multicasting. Experimental and numerical results show simultaneous delivery of multicast flows to all receivers with steady throughput. Compared to IP multicast, its electronic counterpart, optical multicast operates with less protocol complexity and reduced energy consumption. Compared to peer-to-peer multicast methods, it achieves at minimum an order of magnitude higher throughput for flows under 250 MB with significantly less connection overhead. Furthermore, for delivering 20 TB of data containing only 15% multicast flows, it reduces the total delivery energy consumption by 50% and improves latency by 55% compared to a data center with a sole non-blocking EPS network.
A novel WDM passive optical network architecture supporting two independent multicast data streams
NASA Astrophysics Data System (ADS)
Qiu, Yang; Chan, Chun-Kit
2012-01-01
We propose a novel scheme to perform optical multicast overlay of two independent multicast data streams on a wavelength-division-multiplexed (WDM) passive optical network. By controlling a sinusoidal clock signal and shifting the wavelength at the optical line terminal (OLT), the delivery of the two multicast data, being carried by the generated optical tones, can be independently and flexibly controlled. Simultaneous transmission of 10-Gb/s unicast downstream and upstream data as well as two independent 10-Gb/s multicast data was successfully demonstrated.
Apply network coding for H.264/SVC multicasting
NASA Astrophysics Data System (ADS)
Wang, Hui; Kuo, C.-C. Jay
2008-08-01
In a packet erasure network environment, video streaming benefits from error control in two ways to achieve graceful degradation. The first approach is application-level (or link-level) forward error correction (FEC) to provide erasure protection. The second is error concealment at the decoder to compensate for lost packets. A large amount of research has been done in both areas. More recently, network coding (NC) techniques have been proposed for efficient data multicast over networks. Our previous work showed that multicast video streaming benefits from NC through improved throughput. An algebraic model is given to analyze the performance in this work. By exploiting linear combinations of video packets at nodes in the network and the SVC video format, the system achieves path diversity automatically and enables efficient video delivery to heterogeneous receivers over packet erasure channels. The application of network coding can protect video packets against erasures. However, the rank deficiency problem of random linear network coding makes error concealment inefficient. Computer simulation shows that the proposed NC video multicast scheme enables heterogeneous receiving according to receivers' capacity constraints, but special design is needed to improve video transmission performance when applying network coding.
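To make the rank-deficiency issue concrete, here is a minimal random linear network coding sketch over GF(2) (the field choice, packet counts, and function names are assumptions for illustration): coded packets carry random binary coefficient vectors, and a receiver can decode only if the coefficient matrix it collects has full rank, which random coding does not always guarantee.

```python
# Minimal random linear network coding sketch over GF(2) (field choice assumed):
# coded packets are XOR combinations of source packets; decoding requires the
# collected coefficient vectors to have full rank ("rank deficiency" otherwise).
import random

def gf2_rank(rows):
    """Rank of a list of bit-vector rows (Python ints) over GF(2)."""
    rows = list(rows)
    rank = 0
    nbits = max((r.bit_length() for r in rows), default=0)
    for bit in reversed(range(nbits)):
        # find an unused row with this bit set to act as the pivot
        pivot = next((i for i in range(rank, len(rows)) if (rows[i] >> bit) & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and (rows[i] >> bit) & 1:
                rows[i] ^= rows[rank]          # eliminate the pivot bit elsewhere
        rank += 1
    return rank

def decodable(k, received):
    """Draw random GF(2) coefficient vectors and test whether decoding is possible."""
    coeffs = [random.getrandbits(k) for _ in range(received)]
    return gf2_rank(coeffs) == k

trials = 2000
ok = sum(decodable(8, 8) for _ in range(trials))
print(f"decodable with exactly k=8 coded packets: {ok / trials:.1%}")  # well below 100%
```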
Issues in designing transport layer multicast facilities
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Weaver, Alfred C.
1990-01-01
Multicasting denotes a facility in a communications system for providing efficient delivery from a message's source to some well-defined set of locations using a single logical address. While modern network hardware supports multidestination delivery, first generation Transport Layer protocols (e.g., the DoD Transmission Control Protocol (TCP) (15) and ISO TP-4 (41)) did not anticipate the changes over the past decade in underlying network hardware, transmission speeds, and communication patterns that have enabled and driven the interest in reliable multicast. Much recent research has focused on integrating the underlying hardware multicast capability with the reliable services of Transport Layer protocols. Here, we explore the communication issues surrounding the design of such a reliable multicast mechanism. Approaches and solutions from the literature are discussed, and four experimental Transport Layer protocols that incorporate reliable multicast are examined.
NASA Astrophysics Data System (ADS)
Allani, Mouna; Garbinato, Benoît; Pedone, Fernando
An increasing number of Peer-to-Peer (P2P) Internet applications rely today on data dissemination as their cornerstone, e.g., audio or video streaming, multi-party games. These applications typically depend on some support for multicast communication, where peers interested in a given data stream can join a corresponding multicast group. As a consequence, the efficiency, scalability, and reliability guarantees of these applications are tightly coupled with that of the underlying multicast mechanism.
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Weaver, Alfred C.
1990-01-01
Multicast services needed for current distributed applications on LAN's fall generally into one of three categories: datagram, semi-reliable, and reliable. Transport layer multicast datagrams represent unreliable service in which the transmitting context 'fires and forgets'. XTP executes these semantics when the MULTI and NOERR mode bits are both set. Distributing sensor data and other applications in which application-level error recovery strategies are appropriate benefit from the efficiency in multidestination delivery offered by datagram service. Semi-reliable service refers to multicasting in which the control algorithms of the transport layer--error, flow, and rate control--are used in transferring the multicast distribution to the set of receiving contexts, the multicast group. The multicast defined in XTP provides semi-reliable service. Since, under a semi-reliable service, joining a multicast group means listening on the group address and entails no coordination with other members, a semi-reliable facility can be used for communication between a client and a server group as well as true peer-to-peer group communication. Resource location in a LAN is an important application domain. The term 'semi-reliable' refers to the fact that group membership changes go undetected. No attempt is made to assess the current membership of the group at any time--before, during, or after--the data transfer.
Issues in providing a reliable multicast facility
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Strayer, W. Timothy; Weaver, Alfred C.
1990-01-01
Issues involved in point-to-multipoint communication are presented and the literature for proposed solutions and approaches surveyed. Particular attention is focused on the ideas and implementations that align with the requirements of the environment of interest. The attributes of multicast receiver groups that might lead to useful classifications, what the functionality of a management scheme should be, and how the group management module can be implemented are examined. The services that multicasting facilities can offer are presented, followed by mechanisms within the communications protocol that implements these services. The metrics of interest when evaluating a reliable multicast facility are identified and applied to four transport layer protocols that incorporate reliable multicast.
Lee, Jong-Ho; Sohn, Illsoo; Kim, Yong-Hwa
2017-05-16
In this paper, we investigate simultaneous wireless power transfer and secure multicasting via cooperative decode-and-forward (DF) relays in the presence of multiple energy receivers and eavesdroppers. Two scenarios are considered under a total power budget: maximizing the minimum harvested energy among the energy receivers under a multicast secrecy rate constraint; and maximizing the multicast secrecy rate under a minimum harvested energy constraint. For both scenarios, we solve the transmit power allocation and relay beamformer design problems by using semidefinite relaxation and bisection technique. We present numerical results to analyze the energy harvesting and secure multicasting performances in cooperative DF relay networks.
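A schematic form of the first problem above, written only to fix ideas (the notation is assumed and the paper's exact constraints may differ): p denotes the transmit power allocation, w the relay beamformer, E_k the energy harvested by receiver k, R_s the multicast secrecy rate, and P_T the total power budget.

```latex
% Schematic max-min energy-harvesting problem (notation assumed, not the
% paper's exact formulation).
\begin{align*}
\max_{\,p,\;w}\;\; & \min_{k}\; E_k(p, w) \\
\text{s.t.}\;\;    & R_s(p, w) \ge R_{\min}, \qquad \|w\|^2 + p \le P_T .
\end{align*}
```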
Lee, Jong-Ho; Sohn, Illsoo; Kim, Yong-Hwa
2017-01-01
In this paper, we investigate simultaneous wireless power transfer and secure multicasting via cooperative decode-and-forward (DF) relays in the presence of multiple energy receivers and eavesdroppers. Two scenarios are considered under a total power budget: maximizing the minimum harvested energy among the energy receivers under a multicast secrecy rate constraint; and maximizing the multicast secrecy rate under a minimum harvested energy constraint. For both scenarios, we solve the transmit power allocation and relay beamformer design problems by using semidefinite relaxation and bisection technique. We present numerical results to analyze the energy harvesting and secure multicasting performances in cooperative DF relay networks. PMID:28509841
Multicast routing for wavelength-routed WDM networks with dynamic membership
NASA Astrophysics Data System (ADS)
Huang, Nen-Fu; Liu, Te-Lung; Wang, Yao-Tzung; Li, Bo
2000-09-01
Future broadband networks must support integrated services and offer flexible bandwidth usage. In our previous work, we explored an optical link control layer on top of the optical layer that enables bandwidth-on-demand service directly over wavelength division multiplexed (WDM) networks. Today, more and more applications and services, such as video-conferencing software and Virtual LAN service, require multicast support from the underlying networks. Currently, it is difficult to provide wavelength multicast in optical switches without optical/electronic conversions, although such conversions incur extra cost. In this paper, based on the proposed wavelength router architecture (equipped with ATM switches to offer O/E and E/O conversions when necessary), a dynamic multicast routing algorithm is proposed to furnish multicast services over WDM networks. The goal is to join a new group member to the multicast tree so that the cost, including the link cost and the optical/electronic conversion cost, is kept as low as possible. The effectiveness of the proposed wavelength router architecture and the dynamic multicast algorithm is evaluated by simulation.
Bulk data transfer distributer: a high performance multicast model in ALMA ACS
NASA Astrophysics Data System (ADS)
Cirami, R.; Di Marcantonio, P.; Chiozzi, G.; Jeram, B.
2006-06-01
A high performance multicast model for the bulk data transfer mechanism in the ALMA (Atacama Large Millimeter Array) Common Software (ACS) is presented. The ALMA astronomical interferometer will consist of at least 50 12-m antennas operating at millimeter wavelength. The whole software infrastructure for ALMA is based on ACS, which is a set of application frameworks built on top of CORBA. To cope with the very strong requirements on the amount of data that needs to be transported by the software communication channels of the ALMA subsystems (a typical output data rate expected from the Correlator is of the order of 64 MB per second) and with the potential CORBA bottleneck due to parameter marshalling/de-marshalling, usage of the IIOP protocol, etc., a transfer mechanism based on the ACE/TAO CORBA Audio/Video (A/V) Streaming Service has been developed. The ACS Bulk Data Transfer architecture bypasses the CORBA protocol with an out-of-band connection for the data streams (transmitting data directly in TCP or UDP format), using at the same time CORBA for handshaking and leveraging the benefits of the ACS middleware. Such a mechanism has proven capable of high performance, of the order of 800 Mbit per second on a 1 Gbit Ethernet network. Besides a point-to-point communication model, the ACS Bulk Data Transfer provides a multicast model. Since the TCP protocol does not support multicasting and all the data must be correctly delivered to all ALMA subsystems, a distributer mechanism has been developed. This paper focuses on the ACS Bulk Data Distributer, which mimics multicast behaviour by managing data dispatching to all receivers willing to get data from the same sender.
Mobile Multicast in Hierarchical Proxy Mobile IPV6
NASA Astrophysics Data System (ADS)
Hafizah Mohd Aman, Azana; Hashim, Aisha Hassan A.; Mustafa, Amin; Abdullah, Khaizuran
2013-12-01
Mobile Internet Protocol Version 6 (MIPv6) environments have been developing very rapidly. Many challenges arise with the fast progress of MIPv6 technologies and its environment. Therefore the importance of improving the existing architecture and operations increases. One of the many challenges which need to be addressed is the need for performance improvement to support mobile multicast. Numerous approaches have been proposed to improve mobile multicast performance. This includes Context Transfer Protocol (CXTP), Hierarchical Mobile IPv6 (HMIPv6), Fast Mobile IPv6 (FMIPv6) and Proxy Mobile IPv6 (PMIPv6). This document describes multicast context transfer in hierarchical proxy mobile IPv6 (H-PMIPv6) to provide better multicasting performance in PMIPv6 domain.
WDM Network and Multicasting Protocol Strategies
Zaim, Abdul Halim
2014-01-01
Optical technology receives extensive attention and continuous improvement because of the huge amount of network traffic caused by the growing number of Internet users and their rising demands. With wavelength division multiplexing (WDM) it is easier to take advantage of optical networks, and WDM together with optical burst switching (OBS) is the best choice for constructing networks with low delay and better data transparency. Furthermore, multicasting in WDM is an urgent solution for bandwidth-intensive applications. In this paper, a new multicasting protocol with OBS is proposed. The protocol depends on a leaf-initiated structure. The network is composed of a source, ingress switches, intermediate switches, edge switches, and client nodes. The performance of the protocol is examined with the Just Enough Time (JET) and Just In Time (JIT) reservation protocols. The paper also surveys most of the recent advances in WDM multicasting in optical networks, presented under three common headings: broadcast-and-select networks, wavelength-routed networks, and OBS networks. In addition, multicast routing protocols are briefly summarized, and optical burst-switched WDM networks are investigated with the proposed multicast schemes. PMID:24744683
Point-to-Point Multicast Communications Protocol
NASA Technical Reports Server (NTRS)
Byrd, Gregory T.; Nakano, Russell; Delagi, Bruce A.
1987-01-01
This paper describes a protocol to support point-to-point interprocessor communications with multicast. Dynamic, cut-through routing with local flow control is used to provide a high-throughput, low-latency communications path between processors. In addition multicast transmissions are available, in which copies of a packet are sent to multiple destinations using common resources as much as possible. Special packet terminators and selective buffering are introduced to avoid a deadlock during multicasts. A simulated implementation of the protocol is also described.
Qin, Jun; Lu, Guo-Wei; Sakamoto, Takahide; Akahane, Kouichi; Yamamoto, Naokatsu; Wang, Danshi; Wang, Cheng; Wang, Hongxiang; Zhang, Min; Kawanishi, Tetsuya; Ji, Yuefeng
2014-12-01
In this paper, we experimentally demonstrate simultaneous multichannel wavelength multicasting (MWM) and exclusive-OR logic gate multicasting (XOR-LGM) for three 10-Gbps non-return-to-zero differential phase-shift-keying (NRZ-DPSK) signals in a quantum-dot semiconductor optical amplifier (QD-SOA) by exploiting the four-wave mixing (FWM) process. No additional pump is needed in the scheme. Through the interaction of the three input 10-Gbps DPSK signal lights in the QD-SOA, each channel is successfully multicast to three wavelengths (1-to-3 for each), for a total 3-to-9 MWM, and at the same time three-output XOR-LGM is obtained at three different wavelengths. All the newly generated channels exhibit a power penalty of less than 1.2 dB at a BER of 10^-9. Degenerate and non-degenerate FWM components are fully used in the experiment for data and logic multicasting.
Minimum Interference Channel Assignment Algorithm for Multicast in a Wireless Mesh Network.
Choi, Sangil; Park, Jong Hyuk
2016-12-02
Wireless mesh networks (WMNs) have been considered as one of the key technologies for the configuration of wireless machines since they emerged. In a WMN, wireless routers provide multi-hop wireless connectivity between hosts in the network and also allow them to access the Internet via gateway devices. Wireless routers are typically equipped with multiple radios operating on different channels to increase network throughput. Multicast is a form of communication that delivers data from a source to a set of destinations simultaneously. It is used in a number of applications, such as distributed games, distance education, and video conferencing. In this study, we address a channel assignment problem for multicast in multi-radio multi-channel WMNs. In a multi-radio multi-channel WMN, two nearby nodes will interfere with each other and cause a throughput decrease when they transmit on the same channel. Thus, an important goal for multicast channel assignment is to reduce the interference among networked devices. We have developed a minimum interference channel assignment (MICA) algorithm for multicast that accurately models the interference relationship between pairs of multicast tree nodes using the concept of the interference factor and assigns channels to tree nodes to minimize interference within the multicast tree. Simulation results show that MICA achieves higher throughput and lower end-to-end packet delay compared with an existing channel assignment algorithm named multi-channel multicast (MCM). In addition, MICA achieves much lower throughput variation among the destination nodes than MCM.
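The exact MICA procedure is not given in the abstract, so the sketch below only illustrates the stated idea with a greedy pass: visit the multicast tree nodes in order and give each the channel that adds the least interference with already-assigned nodes in its interference range. The interference-factor definition (one unit per same-channel node in range), the node names, and the channel list are assumptions.

```python
# Hypothetical greedy sketch of interference-aware channel assignment for a
# multicast tree: each node gets the channel that adds the least interference
# with already-assigned nodes within its interference range.
def assign_channels(tree_nodes, in_range, channels):
    """tree_nodes: iterable of node ids; in_range(u, v) -> bool; channels: list."""
    assignment = {}
    for u in tree_nodes:
        def added_interference(ch):
            # assumed interference factor: 1 per already-assigned in-range node
            # transmitting on the same channel
            return sum(1 for v, c in assignment.items() if c == ch and in_range(u, v))
        assignment[u] = min(channels, key=added_interference)
    return assignment

# Toy example: a chain of tree nodes where only adjacent nodes interfere.
nodes = ["gw", "r1", "r2", "r3"]
adjacent = {("gw", "r1"), ("r1", "r2"), ("r2", "r3")}
in_range = lambda u, v: (u, v) in adjacent or (v, u) in adjacent
print(assign_channels(nodes, in_range, channels=[1, 6, 11]))
```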
Minimum Interference Channel Assignment Algorithm for Multicast in a Wireless Mesh Network
Choi, Sangil; Park, Jong Hyuk
2016-01-01
Wireless mesh networks (WMNs) have been considered as one of the key technologies for the configuration of wireless machines since they emerged. In a WMN, wireless routers provide multi-hop wireless connectivity between hosts in the network and also allow them to access the Internet via gateway devices. Wireless routers are typically equipped with multiple radios operating on different channels to increase network throughput. Multicast is a form of communication that delivers data from a source to a set of destinations simultaneously. It is used in a number of applications, such as distributed games, distance education, and video conferencing. In this study, we address a channel assignment problem for multicast in multi-radio multi-channel WMNs. In a multi-radio multi-channel WMN, two nearby nodes will interfere with each other and cause a throughput decrease when they transmit on the same channel. Thus, an important goal for multicast channel assignment is to reduce the interference among networked devices. We have developed a minimum interference channel assignment (MICA) algorithm for multicast that accurately models the interference relationship between pairs of multicast tree nodes using the concept of the interference factor and assigns channels to tree nodes to minimize interference within the multicast tree. Simulation results show that MICA achieves higher throughput and lower end-to-end packet delay compared with an existing channel assignment algorithm named multi-channel multicast (MCM). In addition, MICA achieves much lower throughput variation among the destination nodes than MCM. PMID:27918438
MTP: An atomic multicast transport protocol
NASA Technical Reports Server (NTRS)
Freier, Alan O.; Marzullo, Keith
1990-01-01
The Multicast Transport Protocol (MTP), a reliable transport protocol that utilizes the multicast strategy of applicable lower-layer network architectures, is described. In addition to transporting data reliably and efficiently, MTP provides the client synchronization necessary for agreement on the receipt of data and on the joining of the group of communicants.
NASA Astrophysics Data System (ADS)
Li, Xin; Zhang, Lu; Tang, Ying; Huang, Shanguo
2018-03-01
The light-tree-based optical multicasting (LT-OM) scheme provides a spectrum- and energy-efficient method to accommodate emerging multicast services. Some studies focus on survivability technologies for LTs against a fixed number of link failures, such as a single-link failure. However, few studies involve failure-probability constraints when building LTs. It is worth noting that each link of an LT plays a role of different importance under failure scenarios. When calculating the failure probability of an LT, the importance of every one of its links should be considered. We design a link importance incorporated failure probability measuring solution (LIFPMS) for multicast LTs under the independent failure model and the shared-risk-link-group failure model. Based on the LIFPMS, we put forward the minimum failure probability (MFP) problem for the LT-OM scheme. Heuristic approaches are developed to address the MFP problem in elastic optical networks. Numerical results show that the LIFPMS provides an accurate metric for calculating the failure probability of multicast LTs and enhances the reliability of the LT-OM scheme while accommodating multicast services.
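To make the failure-probability idea concrete, here is a minimal sketch, assuming independent link failures, of the failure probability of a light tree plus a simple importance-weighted variant. The weights and the way they enter the second function are only illustrative assumptions, not the published LIFPMS metric.

    # Hedged sketch: failure probability of a light tree (LT).
    # Under an independent-failure model, the LT fails if any of its links fails:
    #     P_fail = 1 - prod_over_links(1 - p_link)
    # The importance-weighted variant (weights w_link) is only an illustration of how
    # link importance could enter the metric; it is not the LIFPMS formula.

    from math import prod

    def lt_failure_probability(link_fail_probs):
        """Independent-failure model: the tree fails if any link fails."""
        return 1.0 - prod(1.0 - p for p in link_fail_probs)

    def weighted_lt_failure_probability(link_fail_probs, importance):
        """Illustrative importance-weighted metric (assumption, not the paper's)."""
        return 1.0 - prod((1.0 - p) ** w for p, w in zip(link_fail_probs, importance))

    links = [0.01, 0.02, 0.005]          # per-link failure probabilities
    weights = [1.0, 2.0, 0.5]            # hypothetical importance of each link
    print(lt_failure_probability(links))            # ~0.0346
    print(weighted_lt_failure_probability(links, weights))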
Many-to-Many Multicast Routing Schemes under a Fixed Topology
Ding, Wei; Wang, Hongfa; Wei, Xuerui
2013-01-01
Many-to-many multicast routing can be extensively applied in computer or communication networks supporting various continuous multimedia applications. The paper focuses on the case where all users share a common communication channel while each user is both a sender and a receiver of messages in multicasting as well as an end user. In this case, the multicast tree appears as a terminal Steiner tree (TeST). The problem of finding a TeST with a quality-of-service (QoS) optimization is frequently NP-hard. However, finding a many-to-many multicast tree with QoS optimization becomes tractable under a fixed topology. In this paper, we are concerned with three kinds of QoS optimization objectives for the multicast tree, namely minimum cost, minimum diameter, and maximum reliability. Each of the three optimization problems is considered in a centralized and a decentralized version. This paper uses dynamic programming to devise an exact algorithm for the centralized and decentralized versions of each optimization problem. PMID:23589706
NASA Astrophysics Data System (ADS)
Liao, Luhua; Li, Lemin; Wang, Sheng
2006-12-01
We investigate the protection approach for dynamic multicast traffic under shared risk link group (SRLG) constraints in meshed wavelength-division-multiplexing optical networks. We present a shared protection algorithm called dynamic segment shared protection for multicast traffic (DSSPM), which can dynamically adjust the link cost according to the current network state and can establish a primary light-tree as well as corresponding SRLG-disjoint backup segments for a dependable multicast connection. A backup segment can efficiently share the wavelength capacity of its working tree and the common resources of other backup segments based on SRLG-disjoint constraints. The simulation results show that DSSPM not only protects the multicast sessions against a single-SRLG failure, but also makes better use of the wavelength resources and lowers the network blocking probability.
NASA Astrophysics Data System (ADS)
Wu, Fei; Shao, Shihai; Tang, Youxi
2016-10-01
We consider enabling simultaneous multicast downlink transmit and receive operations on the same frequency band, i.e., full-duplex links between an access point and mobile users. The problem of minimizing the total power of multicast transmit beamforming is considered from the viewpoint of ensuring a required amount of suppression of the near-field line-of-sight self-interference while guaranteeing a prescribed minimum signal-to-interference-plus-noise ratio (SINR) at each receiver of the multicast groups. Based on earlier results on multicast group beamforming, the joint problem is easily shown to be NP-hard. A semidefinite relaxation (SDR) technique combined with a linear-programming power adjustment method is proposed to solve the NP-hard problem. Simulations show that the proposed method remains feasible even when the local receive antenna in the near field and a mobile user in the far field lie in the same direction.
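The following sketch illustrates the basic single-group version of the SDR step referred to above: minimize transmit power subject to per-user SNR constraints, then extract a rank-one beamformer. It assumes cvxpy with a solver that supports complex semidefinite programs (e.g., SCS); the channel data, SNR target, and the simple principal-eigenvector extraction are assumptions for illustration and omit the paper's self-interference constraint and linear-programming power adjustment.

    # Hedged sketch: semidefinite relaxation (SDR) for single-group multicast beamforming.
    # minimize trace(X) s.t. h_k^H X h_k >= gamma * sigma2 for every user k, X PSD.
    # A rank-one beamformer is then approximated from the principal eigenvector of X.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n_tx, n_users = 4, 6
    H = (rng.standard_normal((n_users, n_tx)) + 1j * rng.standard_normal((n_users, n_tx))) / np.sqrt(2)
    gamma, sigma2 = 10.0, 1.0            # assumed SNR target (linear) and noise power

    X = cp.Variable((n_tx, n_tx), hermitian=True)
    constraints = [X >> 0]
    for k in range(n_users):
        Hk = np.outer(H[k], H[k].conj())          # h_k h_k^H
        constraints.append(cp.real(cp.trace(Hk @ X)) >= gamma * sigma2)

    prob = cp.Problem(cp.Minimize(cp.real(cp.trace(X))), constraints)
    prob.solve(solver=cp.SCS)

    # Rank-one approximation: principal eigenvector, rescaled to meet the worst user.
    vals, vecs = np.linalg.eigh(X.value)
    w = vecs[:, -1] * np.sqrt(vals[-1])
    worst = min(abs(H[k] @ w) ** 2 for k in range(n_users))
    w *= np.sqrt(gamma * sigma2 / worst)
    print("relaxation power:", prob.value, " rank-one power:", np.linalg.norm(w) ** 2)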
An efficient group multicast routing for multimedia communication
NASA Astrophysics Data System (ADS)
Wang, Yanlin; Sun, Yugen; Yan, Xinfang
2004-04-01
Group multicasting is a communication mechanism whereby each member of a group sends messages to all the other members of the same group. Group multicast routing algorithms capable of satisfying the quality-of-service (QoS) requirements of multimedia applications are essential for high-speed networks. We present a heuristic algorithm for group multicast routing with an end-to-end delay constraint. Our algorithm generates source-specific routing trees for each member that satisfy the members' bandwidth and end-to-end delay requirements. Simulations over random networks were carried out to compare the proposed algorithm's performance with that of Low and Song's algorithm. The experimental results show that our algorithm performs better in terms of network cost and in its ability to construct feasible multicast trees for group members. Moreover, our algorithm achieves good performance in balancing traffic, which avoids link blocking and improves network behavior.
NASA Technical Reports Server (NTRS)
Deardorff, Glenn; Djomehri, M. Jahed; Freeman, Ken; Gambrel, Dave; Green, Bryan; Henze, Chris; Hinke, Thomas; Hood, Robert; Kiris, Cetin; Moran, Patrick;
2001-01-01
A series of NASA presentations for the Supercomputing 2001 conference is summarized. The topics include: (1) Mars Surveyor Landing Sites "Collaboratory"; (2) Parallel and Distributed CFD for Unsteady Flows with Moving Overset Grids; (3) IP Multicast for Seamless Support of Remote Science; (4) Consolidated Supercomputing Management Office; (5) Growler: A Component-Based Framework for Distributed/Collaborative Scientific Visualization and Computational Steering; (6) Data Mining on the Information Power Grid (IPG); (7) Debugging on the IPG; (8) Debakey Heart Assist Device; (9) Unsteady Turbopump for Reusable Launch Vehicle; (10) Exploratory Computing Environments Component Framework; (11) OVERSET Computational Fluid Dynamics Tools; (12) Control and Observation in Distributed Environments; (13) Multi-Level Parallelism Scaling on NASA's Origin 1024 CPU System; (14) Computing, Information, & Communications Technology; (15) NAS Grid Benchmarks; (16) IPG: A Large-Scale Distributed Computing and Data Management System; and (17) ILab: Parameter Study Creation and Submission on the IPG.
NASA Technical Reports Server (NTRS)
Birman, Kenneth; Cooper, Robert; Marzullo, Keith
1990-01-01
The ISIS project has developed a new methodology, virtual synchrony, for writing robust distributed software. High performance multicast, large scale applications, and wide area networks are the focus of interest. Several interesting applications that exploit the strengths of ISIS, including an NFS-compatible replicated file system, are being developed. The META project addresses distributed control in a soft real-time environment incorporating feedback. This domain encompasses examples as diverse as monitoring inventory and consumption on a factory floor, and performing load-balancing on a distributed computing system. One of the first uses of META is for distributed application management: the tasks of configuring a distributed program, dynamically adapting to failures, and monitoring its performance. Recent progress and current plans are reported.
Group-multicast capable optical virtual private ring with contention avoidance
NASA Astrophysics Data System (ADS)
Peng, Yunfeng; Du, Shu; Long, Keping
2008-11-01
A ring based optical virtual private network (OVPN) employing contention sensing and avoidance is proposed to deliver multiple-to-multiple group-multicast traffic. The network architecture is presented and its operation principles as well as performance are investigated. The main contribution of this article is the presentation of an innovative group-multicast capable OVPN architecture with technologies available today.
Fingerprint multicast in secure video streaming.
Zhao, H Vicky; Liu, K J Ray
2006-01-01
Digital fingerprinting is an emerging technology to protect multimedia content from illegal redistribution, where each distributed copy is labeled with unique identification information. In video streaming, huge amounts of data have to be transmitted to a large number of users under stringent latency constraints, so the bandwidth-efficient distribution of uniquely fingerprinted copies is crucial. This paper investigates the secure multicast of anticollusion fingerprinted video in streaming applications and analyzes its performance. We first propose a general fingerprint multicast scheme that can be used with most spread spectrum embedding-based multimedia fingerprinting systems. To further improve the bandwidth efficiency, we explore the special structure of the fingerprint design and propose a joint fingerprint design and distribution scheme. From our simulations, the two proposed schemes can reduce the bandwidth requirement by 48% to 87%, depending on the number of users, the characteristics of video sequences, and the network and computation constraints. We also show that under the constraint that all colluders have the same probability of detection, the embedded fingerprints in the two schemes have approximately the same collusion resistance. Finally, we propose a fingerprint drift compensation scheme to improve the quality of the reconstructed sequences at the decoder's side without introducing extra communication overhead.
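As a toy illustration of the spread-spectrum embedding that such fingerprinting systems build on, the sketch below embeds a distinct pseudo-random fingerprint in each user's copy and identifies colluders by correlation. The embedding strength, Gaussian fingerprints, and averaging collusion are generic textbook assumptions, not the paper's anticollusion code design or its multicast distribution scheme.

    # Hedged sketch: spread-spectrum fingerprinting of per-user copies.
    # Each user j receives y_j = x + alpha * w_j; a suspicious copy is matched to the
    # users whose fingerprints correlate most strongly with (copy - x).
    import numpy as np

    rng = np.random.default_rng(1)
    n_samples, n_users, alpha = 4096, 8, 0.05
    x = rng.standard_normal(n_samples)                 # host signal (e.g., DCT coefficients)
    W = rng.standard_normal((n_users, n_samples))      # one pseudo-random fingerprint per user

    copies = x + alpha * W                             # uniquely fingerprinted copies

    # Two users collude by averaging their copies and adding mild noise.
    colluders = [2, 5]
    pirated = copies[colluders].mean(axis=0) + 0.01 * rng.standard_normal(n_samples)

    scores = W @ (pirated - x)                         # correlation detector
    print("top suspects:", np.argsort(scores)[::-1][:2])   # expected: users 2 and 5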
Research on Collaborative Technology in Distributed Virtual Reality System
NASA Astrophysics Data System (ADS)
Lei, ZhenJiang; Huang, JiJie; Li, Zhao; Wang, Lei; Cui, JiSheng; Tang, Zhi
2018-01-01
Distributed virtual reality technology applied to joint training simulation needs CSCW (Computer Supported Cooperative Work) terminal multicast technology for display and HLA (High Level Architecture) technology to ensure the temporal and spatial consistency of the simulation, in order to achieve collaborative display and collaborative computing. In this paper, CSCW terminal multicast technology is used to modify and extend the HLA implementation framework. During simulation initialization, the HLA declaration and object management service interfaces are used to establish and manage the CSCW network topology, and the HLA data filtering mechanism is used to build the corresponding mesh tree for each federate. During simulation execution, a new thread for CSCW real-time multicast interaction is added to the RTI, so that the RTI can also use the window message mechanism to notify the application to update the display. Applications to substation simulation training under the operation of a large power grid show that the proposed collaborative technology achieves a satisfactory training effect in distributed virtual reality simulation.
Dynamic multicast routing scheme in WDM optical network
NASA Astrophysics Data System (ADS)
Zhu, Yonghua; Dong, Zhiling; Yao, Hong; Yang, Jianyong; Liu, Yibin
2007-11-01
In the information era, the Internet and World Wide Web services have developed rapidly, so ever wider bandwidth is required at ever lower cost, and service demands have become diversified: data, images, video, and other special transmission demands present both a challenge and an opportunity to service providers. At the same time, electrical switching equipment is approaching its limits, so optical communication based on wavelength division multiplexing (WDM) and optical cross-connects (OXCs) shows great potential for building optical networks, thanks to its unique technical advantages and multi-wavelength characteristics. In this paper, we propose a multi-layered graph model with inter-layer paths to solve the multicast routing and wavelength assignment (RWA) problem jointly, employing an efficient graph-theoretic formulation. At the same time, an efficient dynamic multicast algorithm named Distributed Message Copying Multicast (DMCM) is also proposed. A multicast tree with minimum hops can be constructed dynamically according to the proposed scheme.
Performance investigation of optical multicast overlay system using orthogonal modulation format
NASA Astrophysics Data System (ADS)
Singh, Simranjit; Singh, Sukhbir; Kaur, Ramandeep; Kaler, R. S.
2015-03-01
We propose a bandwidth-efficient wavelength-division-multiplexed passive optical network (WDM-PON) that simultaneously transmits 60 Gb/s unicast and 10 Gb/s multicast services with a 10 Gb/s upstream. The differential phase-shift keying (DPSK) multicast signal is superimposed onto multiplexed non-return-to-zero/polarization-shift-keying (NRZ/PolSK) orthogonally modulated data signals. Upstream amplitude-shift-keying (ASK) signals are formed without any additional light source and superimposed onto the received unicast NRZ/PolSK signal before being transmitted back to the optical line terminal (OLT). We also investigate the proposed WDM-PON system for variable optical input power and single-mode-fiber transmission distance with multicast enabled and disabled. The measured quality factor for all unicast and multicast signals is within an acceptable range (>6). The original contribution of this paper is a bandwidth-efficient WDM-PON system that can be extended to high-speed scenarios at reduced channel spacing and is expected to be more technically viable owing to the use of optical orthogonal modulation formats.
Scalable Multicast Protocols for Overlapped Groups in Broker-Based Sensor Networks
NASA Astrophysics Data System (ADS)
Kim, Chayoung; Ahn, Jinho
In sensor networks there are many overlapping multicast groups, because many subscribers with potentially varying interests query every event from the sensors/publishers. Gossip-based communication protocols are promising as one of the potential solutions for providing scalability in the publish/subscribe (P/S) paradigm in sensor networks. However, despite the importance of both guaranteeing message delivery order and supporting overlapping multicast groups in sensor or P2P networks, little research has been done on gossip-based protocols that satisfy all of these requirements. In this paper, we present two versions of a causally ordered delivery protocol for overlapping multicast groups. One is based on sensor-brokers acting as delegates, and the other is based on local views and delegates representing subscriber subgroups. In the sensor-broker-based protocol, the sensor-brokers organize the overlapping multicast networks according to subscribers' interests; the message delivery order is guaranteed consistently, and all multicast messages are delivered to overlapping subscribers by the sensor-brokers using gossip-based dissemination. These features make the sensor-broker-based protocol significantly more scalable than protocols that rely on hierarchical membership lists of dedicated groups, such as traditional committee protocols. The subscriber-delegate-based protocol provides stronger guarantees than fully decentralized protocols that ensure causally ordered delivery from local views only, because the delivery order is guaranteed consistently by all corresponding group members, including the delegates. This makes the subscriber-delegate protocol a hybrid approach that improves the inherent scalability of multicast through gossip-based techniques in all communications.
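Causally ordered delivery of the kind both protocols aim for is commonly checked with vector clocks; the sketch below shows the standard delivery condition for a multicast message carrying the sender's vector timestamp. It is a generic textbook mechanism, not the sensor-broker or subscriber-delegate protocol itself.

    # Hedged sketch: causal delivery check with vector clocks (generic mechanism).
    # A message m from sender s with vector timestamp V_m is deliverable at process p
    # (local clock V_p) when V_m[s] == V_p[s] + 1 and V_m[k] <= V_p[k] for all k != s.

    def deliverable(v_msg, v_local, sender):
        if v_msg[sender] != v_local[sender] + 1:
            return False                      # not the next message expected from sender
        return all(v_msg[k] <= v_local[k] for k in range(len(v_msg)) if k != sender)

    def deliver(v_local, v_msg):
        # merge: after delivery the local clock dominates the message timestamp
        return [max(a, b) for a, b in zip(v_local, v_msg)]

    local = [2, 0, 1]                         # seen 2 msgs from process 0, 1 from process 2
    msg = {"sender": 0, "clock": [3, 0, 1]}   # next message from process 0
    if deliverable(msg["clock"], local, msg["sender"]):
        local = deliver(local, msg["clock"])
    print(local)                              # [3, 0, 1]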
Inertial Motion Tracking for Inserting Humans into a Networked Synthetic Environment
2007-08-31
tracking methods. One method requires markers on the tracked human body, and the other method does not use markers. OPTOTRAK from Northern Digital Inc. is a...of using multicasting protocols. Unfortunately, most routers on the Internet are not configured for multicasting. A technique called tunneling is...used to overcome this problem. Tunneling is a software solution that runs on the end-point routers/computers and allows multicast packets to traverse
Authenticated IGMP for Controlling Access to Multicast Distribution Tree
NASA Astrophysics Data System (ADS)
Park, Chang-Seop; Kang, Hyun-Sun
A receiver access control scheme is proposed to protect the multicast distribution tree from DoS attacks induced by unauthorized use of IGMP, by extending the security-related functionality of IGMP. Based on a specific network and business model adopted for commercial deployment of IP multicast applications, a key management scheme is also presented for bootstrapping the proposed access control as well as accounting and billing for the CP (Content Provider), NSP (Network Service Provider), and group members.
The Verification-based Analysis of Reliable Multicast Protocol
NASA Technical Reports Server (NTRS)
Wu, Yunqing
1996-01-01
Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP Multicasting. In this paper, we develop formal models for RMP using existing automatic verification systems, and perform verification-based analysis on the formal RMP specifications. We also use the formal models of the RMP specifications to generate a test suite for conformance testing of the RMP implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress between the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding of the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.
Remote software upload techniques in future vehicles and their performance analysis
NASA Astrophysics Data System (ADS)
Hossain, Irina
Updating software in vehicle Electronic Control Units (ECUs) will become a mandatory requirement for a variety of reasons, for example, to update or fix the functionality of an existing system, add new functionality, remove software bugs, and keep up with ITS infrastructure. Software modules of advanced vehicles can be updated using the Remote Software Upload (RSU) technique. RSU employs an infrastructure-based wireless communication technique in which the software supplier sends the software to the targeted vehicle via a roadside Base Station (BS). However, security is critically important in RSU to avoid any disasters due to malfunctions of the vehicle and to protect proprietary algorithms from hackers, competitors, or people with malicious intent. In this thesis, a mechanism for secure software upload in advanced vehicles is presented which employs mutual authentication of the software provider and the vehicle using a pre-shared authentication key before sending the software. The software packets are sent encrypted with a secret key along with a Message Digest (MD). To increase the security level, it is proposed that the vehicle receive more than one copy of the software, each copy carrying its MD; the vehicle installs the new software only when it receives more than one identical copy. To validate the proposition, analytical expressions for the average number of packet transmissions required for a successful software update are derived. Different cases are investigated depending on the vehicle's buffer size and verification method. The analytical and simulation results show that it is sufficient to send two copies of the software to the vehicle to thwart any security attack while uploading the software. The above-mentioned unicast method for RSU is suitable when software needs to be uploaded to a single vehicle. Since multicasting is the most efficient method of group communication, updating software in the ECUs of a large number of vehicles can benefit from it. However, as in unicast RSU, the security requirements of multicast communication, i.e., authenticity, confidentiality and integrity of the transmitted software and access control of the group members, are challenging. In this thesis, an infrastructure-based mobile multicasting scheme for RSU in vehicle ECUs is proposed in which an ECU receives the software from a remote software distribution center using the roadside BSs as gateways. The Vehicular Software Distribution Network (VSDN) is divided into small regions, each administered by a Regional Group Manager (RGM). Two multicast Group Key Management (GKM) techniques are proposed based on the degree of trust in the BSs, named the Fully-trusted (FT) and Semi-trusted (ST) systems. Analytical models are developed to find the multicast session establishment latency and handover latency for these two protocols. The average latency to perform mutual authentication of the software vendor and a vehicle and to send the multicast session key during multicast session initialization, as well as the handoff latency during a multicast session, is calculated. Analytical and simulation results show that the link establishment latency per vehicle of the proposed schemes is in the range of a few seconds, with the ST system requiring a few milliseconds more than the FT system. The handoff latency is also in the range of a few seconds, and in some cases the ST system requires less handoff time than the FT system.
Thus, it is possible to build an efficient GKM protocol without putting too much trust on the BSs.
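A minimal sketch of the multi-copy integrity check described above: the vehicle accepts the software image only when at least two received copies carry identical payloads with valid message digests. SHA-256 stands in for whatever MD the thesis assumes, and decryption of the copies is omitted for brevity.

    # Hedged sketch: accept an ECU software image only when two received copies match.
    # SHA-256 stands in for the thesis's message digest (MD); decryption is omitted.
    import hashlib

    def digest(blob):
        return hashlib.sha256(blob).hexdigest()

    def verify_copies(copies):
        """Install only if at least two copies carry identical payloads with valid MDs."""
        seen = {}
        for payload, md in copies:
            if digest(payload) != md:
                continue                       # corrupted or tampered copy, discard
            seen[md] = seen.get(md, 0) + 1
            if seen[md] >= 2:
                return payload                 # two identical, verified copies -> install
        return None

    image = b"ecu-firmware-v2"                  # hypothetical software image
    good = (image, digest(image))
    bad = (b"ecu-firmware-v2-tampered", digest(image))   # payload/MD mismatch
    print(verify_copies([good, bad, good]) is not None)  # True: two identical valid copies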
The reliable multicast protocol application programming interface
NASA Technical Reports Server (NTRS)
Montgomery , Todd; Whetten, Brian
1995-01-01
The Application Programming Interface for the Berkeley/WVU implementation of the Reliable Multicast Protocol is described. This transport layer protocol is implemented as a user library that applications and software buses link against.
Multicast backup reprovisioning problem for Hamiltonian cycle-based protection on WDM networks
NASA Astrophysics Data System (ADS)
Din, Der-Rong; Huang, Jen-Shen
2014-03-01
As networks grow in size and complexity, the chance and the impact of failures increase dramatically. Pre-allocated backup resources cannot provide a 100% protection guarantee when successive failures occur in a network. In this paper, the multicast backup re-provisioning problem (MBRP) for Hamiltonian cycle (HC)-based protection on WDM networks under link failures is studied. We focus on how to recover the protection capability of the Hamiltonian cycle against subsequent link failures on WDM networks for multicast transmissions, after recovering the multicast trees affected by the previous link failure. Since this problem is hard, an algorithm consisting of several heuristics and a genetic algorithm (GA) is proposed to solve it. The simulation results of the proposed method are also given. Experimental results indicate that the proposed algorithm can solve this problem efficiently.
The specification-based validation of reliable multicast protocol: Problem Report. M.S. Thesis
NASA Technical Reports Server (NTRS)
Wu, Yunqing
1995-01-01
Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP multicasting. In this report, we develop formal models for RMP using existing automated verification systems, and perform validation on the formal RMP specifications. The validation analysis helps identify some minor specification and design problems. We also use the formal models of RMP to generate a test suite for conformance testing of the implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress of the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding of the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.
The multidriver: A reliable multicast service using the Xpress Transfer Protocol
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Fenton, John C.; Weaver, Alfred C.
1990-01-01
A reliable multicast facility extends traditional point-to-point virtual circuit reliability to one-to-many communication. Such services can provide more efficient use of network resources, a powerful distributed name binding capability, and reduced latency in multidestination message delivery. These benefits will be especially valuable in real-time environments where reliable multicast can enable new applications and increase the availability and the reliability of data and services. We present a unique multicast service that exploits features in the next-generation, real-time transfer layer protocol, the Xpress Transfer Protocol (XTP). In its reliable mode, the service offers error, flow, and rate-controlled multidestination delivery of arbitrary-sized messages, with provision for the coordination of reliable reverse channels. Performance measurements on a single-segment Proteon ProNET-4 4 Mbps 802.5 token ring with heterogeneous nodes are discussed.
Demonstration of flexible multicasting and aggregation functionality for TWDM-PON
NASA Astrophysics Data System (ADS)
Chen, Yuanxiang; Li, Juhao; Zhu, Paikun; Zhu, Jinglong; Tian, Yu; Wu, Zhongying; Peng, Huangfa; Xu, Yongchi; Chen, Jingbiao; He, Yongqi; Chen, Zhangyuan
2017-06-01
The time- and wavelength-division multiplexed passive optical network (TWDM-PON) has been recognized as an attractive solution for providing broadband access in next-generation networks. In this paper, we propose flexible service multicasting and aggregation functionality for TWDM-PON utilizing multiple-pump four-wave mixing (FWM) and a cyclic arrayed waveguide grating (AWG). With the proposed scheme, multiple TWDM-PON links share a single optical line terminal (OLT), which can greatly reduce the network deployment expense and achieve efficient network resource utilization through load balancing among different optical distribution networks (ODNs). The proposed scheme is compatible with existing TDM-PON infrastructure with a fixed-wavelength OLT transmitter, so smooth service upgrade can be achieved. Utilizing the proposed scheme, we demonstrate a proof-of-concept experiment with 10-Gb/s OOK and 10-Gb/s QPSK orthogonal frequency division multiplexing (OFDM) signals multicast and aggregated to seven PON links. Compared with the back-to-back (BTB) channel, the newly generated multicast OOK and OFDM signals have power penalties of 1.6 dB and 2 dB at a BER of 10^-3, respectively. For the aggregation of multiple channels, no obvious power penalty is observed. Moreover, to verify the flexibility of the proposed scheme, we reconfigure the wavelength selective switch (WSS) and adjust the number of pumps to realize flexible multicasting functionality. One-to-three, one-to-seven, one-to-thirteen, and one-to-twenty-one multicasting are achieved without modifying the OLT structure.
Digital multi-channel stabilization of four-mode phase-sensitive parametric multicasting.
Liu, Lan; Tong, Zhi; Wiberg, Andreas O J; Kuo, Bill P P; Myslivets, Evgeny; Alic, Nikola; Radic, Stojan
2014-07-28
A stable four-mode phase-sensitive (4MPS) process was investigated as a means to enhance two-pump driven parametric multicasting conversion efficiency (CE) and signal-to-noise ratio (SNR). The instability of a multi-beam phase-sensitive (PS) device, which inherently behaves as an interferometer whose output is subject to ambient-induced fluctuations, was addressed theoretically and experimentally. A new stabilization technique that controls the phases of the three input waves of the 4MPS multicaster and maximizes the CE was developed and described. Stabilization relies on a digital phase-locked loop (DPLL) specifically developed to control the pump phases and guarantee stable 4MPS operation independent of environmental fluctuations. The technique also controls a single (signal) input phase to optimize the PS-induced improvement of the CE and SNR. The new, continuous-operation DPLL has allowed fully stabilized PS parametric broadband multicasting, demonstrating a CE improvement over 20 signal copies in excess of 10 dB.
Multicast Routing of Hierarchical Data
NASA Technical Reports Server (NTRS)
Shacham, Nachum
1992-01-01
The issue of multicast of broadband, real-time data in a heterogeneous environment, in which the data recipients differ in their reception abilities, is considered. Traditional multicast schemes, which are designed to deliver all the source data to all recipients, offer limited performance in such an environment, since they must either force the source to overcompress its signal or restrict the destination population to those who can receive the full signal. We present an approach for resolving this issue by combining hierarchical source coding techniques, which allow recipients to trade off reception bandwidth for signal quality, and sophisticated routing algorithms that deliver to each destination the maximum possible signal quality. The field of hierarchical coding is briefly surveyed and new multicast routing algorithms are presented. The algorithms are compared in terms of network utilization efficiency, lengths of paths, and the required mechanisms for forwarding packets on the resulting paths.
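A small sketch of the per-destination idea: with a hierarchically coded source, each receiver is given as many cumulative layers as its bottleneck path bandwidth can carry. The cumulative layer rates and bottleneck values below are hypothetical; the routing algorithms in the paper compute the paths themselves.

    # Hedged sketch: match each destination to the number of cumulative layers
    # its bottleneck bandwidth can support (hierarchically coded source).
    from bisect import bisect_right

    layer_rates = [64, 128, 256, 512]                  # kb/s needed for layers 1..4 (cumulative)
    bottleneck = {"d1": 100, "d2": 300, "d3": 1000}    # per-destination path bottleneck, kb/s

    def layers_for(bandwidth):
        # highest k such that the cumulative rate of k layers fits in the bandwidth
        return bisect_right(layer_rates, bandwidth)

    for dest, bw in bottleneck.items():
        print(dest, "receives", layers_for(bw), "layer(s)")
    # d1 -> 1 layer, d2 -> 3 layers, d3 -> 4 layers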
Wang, Danshi; Zhang, Min; Qin, Jun; Lu, Guo-Wei; Wang, Hongxiang; Huang, Shanguo
2014-09-08
We propose a multifunctional optical switching unit based on a bidirectional liquid crystal on silicon (LCoS) and semiconductor optical amplifier (SOA) architecture. Add/drop, wavelength conversion, format conversion, and WDM multicast are experimentally demonstrated. Owing to the bidirectional characteristic, the LCoS device can not only multiplex the input signals but also demultiplex the converted signals. Dual-channel wavelength conversion and format conversion from 2 × 25 Gbps differential quadrature phase-shift-keying (DQPSK) to 2 × 12.5 Gbps differential phase-shift-keying (DPSK) based on four-wave mixing (FWM) in the SOA is obtained with only one pump. One-to-six WDM multicast of 25 Gbps DQPSK signals with two pumps is also achieved. All of the multicast channels exhibit a power penalty of less than 1.1 dB at the FEC threshold of 3.8 × 10⁻³.
Optimization of multicast optical networks with genetic algorithm
NASA Astrophysics Data System (ADS)
Lv, Bo; Mao, Xiangqiao; Zhang, Feng; Qin, Xi; Lu, Dan; Chen, Ming; Chen, Yong; Cao, Jihong; Jian, Shuisheng
2007-11-01
In this letter, aiming to obtain the best multicast performance of an optical network in which video-conference information is carried on specified wavelengths, we extend the solution of matrix games with network coding theory and devise a new method to solve the complex problem of multicast network switching. In addition, an experimental optical network has been tested with the best switching strategies by employing a novel numerical solution based on a genetic algorithm. The results show that the optimal solutions obtained with the genetic algorithm are in accordance with those obtained with the traditional fictitious play method.
Comparison of neuronal spike exchange methods on a Blue Gene/P supercomputer.
Hines, Michael; Kumar, Sameer; Schürmann, Felix
2011-01-01
For neural network simulations on parallel machines, interprocessor spike communication can be a significant portion of the total simulation time. The performance of several spike exchange methods using a Blue Gene/P (BG/P) supercomputer has been tested with 8-128 K cores using randomly connected networks of up to 32 M cells with 1 k connections per cell and 4 M cells with 10 k connections per cell, i.e., on the order of 4·10^10 connections (K is 1024, M is 1024^2, and k is 1000). The spike exchange methods used are the standard Message Passing Interface (MPI) collective, MPI_Allgather, and several variants of the non-blocking Multisend method either implemented via non-blocking MPI_Isend or exploiting the possibility of very low overhead direct memory access (DMA) communication available on the BG/P. In all cases, the worst performing method was that using MPI_Isend, due to the high overhead of initiating a spike communication. The two best performing methods, the persistent Multisend method using the Record-Replay feature of the Deep Computing Messaging Framework (DCMF_Multicast) and a two-phase Multisend in which a DCMF_Multicast is used to first send to a subset of phase-one destination cores, which then pass it on to their subset of phase-two destination cores, had similar performance with very low overhead for the initiation of spike communication. Departure from ideal scaling for the Multisend methods is almost completely due to load imbalance caused by the large variation in the number of cells that fire on each processor in the interval between synchronizations. Spike exchange time itself is negligible since transmission overlaps with computation and is handled by a DMA controller. We conclude that ideal performance scaling will ultimately be limited by the imbalance in incoming processor spikes between synchronization intervals. Thus, counterintuitively, maximization of load balance requires that the distribution of cells on processors should not reflect the neural net architecture but be randomized, so that sets of cells which burst-fire together are placed on different processors with their targets on as large a set of processors as possible.
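For readers unfamiliar with the baseline method, the sketch below shows the MPI_Allgather style of spike exchange in mpi4py: each rank contributes the identifiers of its cells that fired during the interval and receives everyone else's. This is only the collective baseline, not the DCMF Multisend variants, and the spike data layout is an assumption.

    # Hedged sketch: Allgather-style spike exchange between synchronization intervals.
    # Run with:  mpiexec -n 4 python spike_exchange.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Cells that fired on this rank during the last interval (gids are illustrative).
    local_spikes = [(rank * 100 + i, 0.1 * i) for i in range(rank + 1)]   # (gid, time)

    # Every rank receives the spike lists of all ranks.
    all_spikes = comm.allgather(local_spikes)

    # Flatten and hand spikes whose targets live on this rank to the local queue.
    incoming = [s for per_rank in all_spikes for s in per_rank]
    if rank == 0:
        print("total spikes exchanged:", len(incoming))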
Multiuser Transmit Beamforming for Maximum Sum Capacity in Tactical Wireless Multicast Networks
2006-08-01
commonly used extended Kalman filter. See [2, 5, 6] for recent tutorial overviews. In particle filtering, continuous distributions are approximated by...signals (using and developing associated particle filtering tools). Our work on these topics has been reported in seven (IEEE, SIAM) journal papers and...multidimensional scaling, tracking, intercept, particle filters.
XML Tactical Chat (XTC): The Way Ahead for Navy Chat
2007-09-01
multicast transmissions via sophisticated pruning algorithms, while allowing multicast packets to "tunnel" through IP routers. [Macedonia, Brutzman 1994...conference was Jabber Inc., who added some great insight into the power of Jabber. Great features including BlackBerry handheld connectivity and
An FEC Adaptive Multicast MAC Protocol for Providing Reliability in WLANs
NASA Astrophysics Data System (ADS)
Basalamah, Anas; Sato, Takuro
For wireless multicast applications like multimedia conferencing, voice over IP, and video/audio streaming, reliable transmission of packets within a short delivery delay is needed. Moreover, reliability is crucial to the performance of error-intolerant applications like file transfer, distributed computing, chat, and whiteboard sharing. Forward Error Correction (FEC) is frequently used in wireless multicast to enhance Packet Error Rate (PER) performance, but it cannot ensure full reliability unless coupled with Automatic Repeat Request, forming what is known as Hybrid-ARQ. While reliable FEC can be deployed at different levels of the protocol stack, it cannot be deployed on the MAC layer of the unreliable IEEE 802.11 WLAN due to its inability to exchange ACKs with multiple recipients. In this paper, we propose a multicast MAC protocol that enhances WLAN reliability by using adaptive FEC and study its performance through mathematical analysis and simulation. Our results show that our protocol can deliver high reliability and throughput performance.
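To illustrate the adaptation such a protocol relies on, the sketch below picks the smallest number of parity packets so that the probability of an unrecoverable block stays under a target, assuming independent packet losses and an ideal (n, k) erasure code that recovers any k of n packets. The loss model and the target are assumptions, not the paper's MAC design.

    # Hedged sketch: choose FEC redundancy from the measured packet error rate (PER).
    # With k data packets, r parity packets and an ideal erasure code, a block is lost
    # only if more than r of the n = k + r packets are lost (independent losses assumed).
    from math import comb

    def block_loss_prob(per, k, r):
        n = k + r
        return sum(comb(n, i) * per**i * (1 - per)**(n - i) for i in range(r + 1, n + 1))

    def parity_needed(per, k, target=1e-3, r_max=32):
        for r in range(r_max + 1):
            if block_loss_prob(per, k, r) <= target:
                return r
        return r_max

    for per in (0.01, 0.05, 0.10):
        print(f"PER={per:.2f} -> {parity_needed(per, k=16)} parity packets per 16-packet block")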
Analysis on Multicast Routing Protocols for Mobile Ad Hoc Networks
NASA Astrophysics Data System (ADS)
Xiang, Ma
Mobile Ad Hoc Network technologies face a series of challenges, such as dynamic changes of the topological structure, the existence of unidirectional channels, limited wireless transmission bandwidth, and the capability limitations of mobile terminals; research on mobile Ad Hoc network routing therefore takes on a more important role than routing research for other networks. Multicast is a group-oriented mode of communication that sends data to a group of hosts using a single source address. In a typical mobile Ad Hoc network environment, multicast is particularly significant: on the one hand, users of a mobile Ad Hoc network usually need to form collaborative working groups; on the other hand, multicast is also an important means of fully exploiting the broadcast nature of wireless communication and effectively using the limited wireless channel resources. This paper summarizes and comparatively analyzes the routing mechanisms of various existing multicast routing protocols according to the characteristics of mobile Ad Hoc networks.
NASA Astrophysics Data System (ADS)
Bock, Carlos; Prat, Josep
2005-04-01
A hybrid WDM/TDM PON architecture implemented by means of two cascaded Arrayed Waveguide Gratings (AWGs) is presented. Using the Free Spectral Range (FSR) periodicity of AWGs, we transmit unicast and multicast traffic on different wavelengths to each Optical Network Unit (ONU). The OLT is equipped with two laser stacks, a tunable one for unicast transmission and a fixed one for multicast transmission. We propose a reflective ONU in order to avoid any light source at the Customer Premises Equipment (CPE). Optical transmission tests demonstrate correct transmission at 2.5 Gbps over up to 30 km.
75 FR 52267 - Waiver of Statement of Account Filing Deadline for the 2010/1 Period
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-25
... available in a print format, a PDF format, and a software "fill-in" format created by Gralin Associates... retransmission of multicast streams. The paper and PDF versions of the form have been available to cable... recognize that the paper and PDF versions of the SOA have been available since July, many large and small...
Performance Evaluation of Reliable Multicast Protocol for Checkout and Launch Control Systems
NASA Technical Reports Server (NTRS)
Shu, Wei Wennie; Porter, John
2000-01-01
The overall objective of this project is to study reliability and performance of Real Time Critical Network (RTCN) for checkout and launch control systems (CLCS). The major tasks include reliability and performance evaluation of Reliable Multicast (RM) package and fault tolerance analysis and design of dual redundant network architecture.
Design, Implementation, and Verification of the Reliable Multicast Protocol. Thesis
NASA Technical Reports Server (NTRS)
Montgomery, Todd L.
1995-01-01
This document describes the Reliable Multicast Protocol (RMP) design, first implementation, and formal verification. RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service. RMP is fully and symmetrically distributed so that no site bears an undue portion of the communications load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority-resilient, and totally resilient atomic delivery. These guarantees are selectable on a per-message basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, a client/server model of delivery, mutually exclusive handlers for messages, and mutually exclusive locks. It has been commonly believed that total ordering of messages can only be achieved at great performance expense. RMP discounts this belief. The first implementation of RMP has been shown to provide high throughput performance on Local Area Networks (LANs). For two or more destinations on a single LAN, RMP provides higher throughput than any other protocol that does not use multicast or broadcast technology. The design, implementation, and verification activities of RMP have occurred concurrently. This has allowed the verification to maintain high fidelity between the design model, implementation model, and verification model. The restrictions of implementation have influenced the design earlier than in normal sequential approaches. The protocol as a whole has matured more smoothly through the inclusion of several different perspectives in the product development.
A Low-Complexity Subgroup Formation with QoS-Aware for Enhancing Multicast Services in LTE Networks
NASA Astrophysics Data System (ADS)
Algharem, M.; Omar, M. H.; Rahmat, R. F.; Budiarto, R.
2018-03-01
The high demand for multimedia services in Long Term Evolution (LTE) and beyond networks forces network operators to find a solution that can handle the huge traffic. Subgroup formation techniques have been introduced to overcome the limitations of the Conventional Multicast Scheme (CMS) by splitting the multicast users into several subgroups based on the users' channel quality. However, finding the best subgroup configuration with low complexity requires further investigation. In this paper, efficient and simple subgroup formation mechanisms are proposed which take the transmitter MAC queue into account. The effectiveness of the proposed mechanisms is evaluated and compared with CMS in terms of throughput, fairness, delay, and Block Error Rate (BLER).
Proxy-assisted multicasting of video streams over mobile wireless networks
NASA Astrophysics Data System (ADS)
Nguyen, Maggie; Pezeshkmehr, Layla; Moh, Melody
2005-03-01
This work addresses the challenge of providing seamless multimedia services to mobile users by proposing a proxy-assisted multicast architecture for the delivery of video streams. We propose a hybrid system of streaming proxies, interconnected by an application-layer multicast tree, where each proxy acts as a cluster head to stream content to its stationary and mobile users. The architecture is based on our previously proposed Enhanced-NICE protocol, which uses an application-layer multicast tree to deliver layered video streams to multiple heterogeneous receivers. We focused the study on the placement of streaming proxies to enable efficient delivery of live and on-demand video, supporting both stationary and mobile users. The simulation results are evaluated and compared with two other baseline scenarios: one with a centralized proxy system serving the entire population and one with mini-proxies each serving its local users. The simulations are implemented using the J-SIM simulator. The results show that even though proxies in the hybrid scenario experienced a slightly longer delay, they had the lowest drop rate of video content. This finding illustrates the significance of task sharing among multiple proxies. The resulting load balancing among proxies provides better video quality delivered to a larger audience.
Multicasting based optical inverse multiplexing in elastic optical network.
Guo, Bingli; Xu, Yingying; Zhu, Paikun; Zhong, Yucheng; Chen, Yuanxiang; Li, Juhao; Chen, Zhangyuan; He, Yongqi
2014-06-16
Optical-multicasting-based inverse multiplexing (IM) is introduced into the spectrum allocation of elastic optical networks to resolve the spectrum fragmentation problem: superchannels can be split and fitted into several discrete spectrum blocks at an intermediate node. We experimentally demonstrate it with a 1-to-7 optical superchannel multicasting module and selecting/coupling components. Simulation results also show that, compared with several emerging spectrum defragmentation solutions (e.g., spectrum conversion, split spectrum), IM can reduce blocking significantly without adding as much system complexity as split spectrum. Furthermore, the service fairness of these schemes for traffic of different granularities is investigated for the first time, and the results show that IM performs better than spectrum conversion and almost as well as split spectrum, especially for smaller-size traffic under light traffic intensity.
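A toy sketch of the inverse-multiplexing allocation idea: a superchannel demand that does not fit in any single free spectrum block is split, via multicasting at the intermediate node, across several smaller free blocks in first-fit order. The block representation and the no-guard-band simplification are assumptions for illustration.

    # Hedged sketch: split a superchannel demand over discrete free spectrum blocks
    # (inverse multiplexing), first-fit; returns the fragments or None if infeasible.
    def split_demand(free_blocks, demand_slots):
        fragments, remaining = [], demand_slots
        for start, size in sorted(free_blocks):          # blocks as (start_slot, n_slots)
            if remaining == 0:
                break
            take = min(size, remaining)
            fragments.append((start, take))
            remaining -= take
        return fragments if remaining == 0 else None

    free = [(0, 3), (10, 4), (20, 2)]       # fragmented free spectrum, 9 slots total
    print(split_demand(free, 8))            # [(0, 3), (10, 4), (20, 1)]
    print(split_demand(free, 12))           # None: not enough total free spectrum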
Reliable multicast protocol specifications protocol operations
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd; Whetten, Brian
1995-01-01
This appendix contains the complete state tables for Reliable Multicast Protocol (RMP) Normal Operation, Multi-RPC Extensions, Membership Change Extensions, and Reformation Extensions. First the event types are presented. Afterwards, each RMP operation state, normal and extended, is presented individually and its events shown. Events in the RMP specification are one of several things: (1) arriving packets, (2) expired alarms, (3) user events, (4) exceptional conditions.
Tera-node Network Technology (Task 3) Scalable Personal Telecommunications
2000-03-14
Simulation results of this work may be found at http://north.east.isi.edu/spt/audio.html. 6. Internet Research Task Force Reliable Multicast...Adaptation, 4. Multimedia Proxy Caching, 5. Experiments with the Rate Adaptation Protocol (RAP), 6. Providing leadership and innovation to the Internet...Research Task Force (IRTF) Reliable Multicast Research Group (RMRG), 1. End-to-end Architecture for Quality-adaptive Streaming Applications over the
Lu, Guo-Wei; Bo, Tianwai; Sakamoto, Takahide; Yamamoto, Naokatsu; Chan, Calvin Chun-Kit
2016-10-03
Recently the ever-growing demand for dynamic and high-capacity services in optical networks has resulted in new challenges that require improved network agility and flexibility in order for network resources to become more "consumable" and dynamic, or elastic, in response to requests from higher network layers. Flexible and scalable wavelength conversion or multicast is one of the most important technologies needed for developing agility in the physical layer. This paper will investigate how, using a reconfigurable coherent multi-carrier as a pump, the multicast scalability and the flexibility in wavelength allocation of the converted signals can be effectively improved. Moreover, the coherence in the multiple carriers prevents the phase noise transformation from the local pump to the converted signals, which is imperative for the phase-noise-sensitive multi-level single- or multi-carrier modulated signal. To verify the feasibility of the proposed scheme, we experimentally demonstrate the wavelength multicast of coherent optical orthogonal frequency division multiplexing (CO-OFDM) signals using a reconfigurable coherent multi-carrier pump, showing flexibility in wavelength allocation, scalability in multicast, and tolerance against pump phase noise. Less than 0.5 dB and 1.8 dB power penalties at a bit-error rate (BER) of 10^-3 are obtained for the converted CO-OFDM-quadrature phase-shift keying (QPSK) and CO-OFDM-16-ary quadrature amplitude modulation (16QAM) signals, respectively, even when using a distributed feedback laser (DFB) as a pump source. In contrast, with a free-running pumping scheme, the phase noise from DFB pumps severely deteriorates the CO-OFDM signals, resulting in a visible error floor at a BER of 10^-2 in the converted CO-OFDM-16QAM signals.
Overview of AMS (CCSDS Asynchronous Message Service)
NASA Technical Reports Server (NTRS)
Burleigh, Scott
2006-01-01
This viewgraph presentation gives an overview of the Consultative Committee for Space Data Systems (CCSDS) Asynchronous Message Service (AMS). The topics include: 1) Key Features; 2) A single AMS continuum; 3) The AMS Protocol Suite; 4) A multi-continuum venture; 5) Constraining transmissions; 6) Security; 7) Fault Tolerance; 8) Performance of Reference Implementation; 9) AMS vs Multicast (1); 10) AMS vs Multicast (2); 11) RAMS testing exercise; and 12) Results.
Robust Group Sparse Beamforming for Multicast Green Cloud-RAN With Imperfect CSI
NASA Astrophysics Data System (ADS)
Shi, Yuanming; Zhang, Jun; Letaief, Khaled B.
2015-09-01
In this paper, we investigate the network power minimization problem for the multicast cloud radio access network (Cloud-RAN) with imperfect channel state information (CSI). The key observation is that network power minimization can be achieved by adaptively selecting active remote radio heads (RRHs) via controlling the group-sparsity structure of the beamforming vector. However, this yields a non-convex combinatorial optimization problem, for which we propose a three-stage robust group sparse beamforming algorithm. In the first stage, a quadratic variational formulation of the weighted mixed l1/l2-norm is proposed to induce the group-sparsity structure in the aggregated beamforming vector, which indicates those RRHs that can be switched off. A perturbed alternating optimization algorithm is then proposed to solve the resultant non-convex group-sparsity-inducing optimization problem by exploiting its convex substructures. In the second stage, we propose a PhaseLift-based algorithm to solve the feasibility problem with a given active RRH set, which helps determine the active RRHs. Finally, the semidefinite relaxation (SDR) technique is adopted to determine the robust multicast beamformers. Simulation results demonstrate the convergence of the perturbed alternating optimization algorithm, as well as the effectiveness of the proposed algorithm in minimizing the network power consumption of multicast Cloud-RAN.
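The group-sparsity idea in the first stage can be illustrated with the weighted mixed l1/l2 norm itself: RRHs whose per-RRH beamforming sub-vectors have near-zero l2 norms are candidates to switch off. The weights, threshold, and random beamformer below are illustrative assumptions, not the paper's quadratic variational formulation or its alternating optimization.

    # Hedged sketch: weighted mixed l1/l2 norm over per-RRH beamforming blocks and a
    # simple threshold rule for selecting RRHs to switch off.
    import numpy as np

    rng = np.random.default_rng(3)
    n_rrh, ants_per_rrh = 5, 4
    v = rng.standard_normal((n_rrh, ants_per_rrh)) + 1j * rng.standard_normal((n_rrh, ants_per_rrh))
    v[1] *= 1e-3                                   # pretend the optimizer drove RRH 1 toward zero
    w = np.ones(n_rrh)                             # assumed per-RRH weights (e.g., power costs)

    group_norms = np.linalg.norm(v, axis=1)        # ||v_l||_2 per RRH
    mixed_norm = float(np.sum(w * group_norms))    # sum_l w_l ||v_l||_2  (group-sparsity inducing)
    switch_off = np.where(group_norms < 1e-2 * group_norms.max())[0]

    print("mixed l1/l2 norm:", round(mixed_norm, 3))
    print("candidate RRHs to switch off:", switch_off)   # expected: [1]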
NASA Astrophysics Data System (ADS)
Liu, Yu; Lin, Xiaocheng; Fan, Nianfei; Zhang, Lin
2016-01-01
Wireless video multicast has become one of the key technologies in wireless applications, but the main challenge of conventional wireless video multicast, the cliff effect, remains unsolved. To overcome the cliff effect, a hybrid digital-analog (HDA) video transmission framework based on SoftCast, which transmits the digital bitstream together with the quantization residuals, is proposed. With an effective power allocation algorithm and appropriate parameter settings, the residual gains can be maximized; meanwhile, the digital bitstream ensures transmission of a basic video to the multicast receiver group. In the multiple-input multiple-output (MIMO) system, since nonuniform noise interference on different antennas can be regarded as a cliff effect problem, ParCast, a variation of SoftCast, is also applied to video transmission to address it. The HDA scheme with corresponding power allocation algorithms is also applied to improve video performance. Simulations show that the proposed HDA scheme can overcome the cliff effect completely through the transmission of residuals. Moreover, it outperforms the compared WSVC scheme by more than 2 dB when transmitting under the same bandwidth, and it can further improve performance by nearly 8 dB in MIMO compared with the ParCast scheme.
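For the analog part of such HDA schemes, a commonly cited SoftCast-style power allocation scales each transmitted chunk in inverse proportion to the fourth root of its variance under a total power budget. The sketch below implements that rule for the residual chunks, with the variances and budget as assumed inputs; the paper's exact split between the digital and analog parts is not reproduced here.

    # Hedged sketch: SoftCast-style analog power allocation g_i proportional to
    # lambda_i^(-1/4), normalized so the total transmit power equals the budget P.
    import numpy as np

    def softcast_gains(chunk_vars, total_power):
        lam = np.asarray(chunk_vars, dtype=float)
        g = lam ** -0.25
        g *= np.sqrt(total_power / np.sum(g**2 * lam))   # enforce sum(g_i^2 * lambda_i) = P
        return g

    lam = np.array([9.0, 4.0, 1.0, 0.25])    # assumed residual-chunk variances
    g = softcast_gains(lam, total_power=10.0)
    print(np.round(g, 3), "check power:", round(float(np.sum(g**2 * lam)), 3))   # -> 10.0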
Design alternatives for process group membership and multicast
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Cooper, Robert; Gleeson, Barry
1991-01-01
Process groups are a natural tool for distributed programming and are increasingly important in distributed computing environments. However, there is little agreement on the most appropriate semantics for process group membership and group communication. These issues are of special importance in the Isis system, a toolkit for distributed programming. Isis supports several styles of process group and a collection of group communication protocols spanning a range of atomicity and ordering properties. This flexibility makes Isis adaptable to a variety of applications, but it is also a source of complexity that limits performance. This paper reports on a new architecture that arose from an effort to simplify Isis process group semantics. Our findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. As an illustration, we apply the architecture to the problem of converting processes into fault-tolerant process groups in a manner that is 'transparent' to other processes in the system.
Multicast Parametric Synchronous Sampling
2011-09-01
enhancement in a parametric mixer device. Fig. 4 shows the principle of generating uniform, high-quality replicas extending over previously unattainable...critical part of the MPASS architecture and is responsible for the direct and continuous acquisition of data across all of the multicast signal copies...(ii) ability to copy THz signals with impunity to tens of replicas; (iii) all-optical delays > 1.9 us; (iv) 10s of THz-fast all-optical sampling of
Fault recovery in the reliable multicast protocol
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.; Whetten, Brian
1995-01-01
The Reliable Multicast Protocol (RMP) provides a unique, group-based model for distributed programs that need to handle reconfiguration events at the application layer. This model, called membership views, provides an abstraction in which events such as site failures, network partitions, and normal join-leave events are viewed as group reformations. RMP provides access to this model through an application programming interface (API) that notifies an application when a group is reformed as the result of some event. RMP provides applications with reliable delivery of messages using an underlying IP Multicast (12, 5) media to other group members in a distributed environment even in the case of reformations. A distributed application can use various Quality of Service (QoS) levels provided by RMP to tolerate group reformations. This paper explores the implementation details of the mechanisms in RMP that provide distributed applications with membership view information and fault recovery capabilities.
Specification and Design of a Fault Recovery Model for the Reliable Multicast Protocol
NASA Technical Reports Server (NTRS)
Montgomery, Todd; Callahan, John R.; Whetten, Brian
1996-01-01
The Reliable Multicast Protocol (RMP) provides a unique, group-based model for distributed programs that need to handle reconfiguration events at the application layer. This model, called membership views, provides an abstraction in which events such as site failures, network partitions, and normal join-leave events are viewed as group reformations. RMP provides access to this model through an application programming interface (API) that notifies an application when a group is reformed as the result of some event. RMP provides applications with reliable delivery of messages using an underlying IP Multicast media to other group members in a distributed environment even in the case of reformations. A distributed application can use various Quality of Service (QoS) levels provided by RMP to tolerate group reformations. This paper explores the implementation details of the mechanisms in RMP that provide distributed applications with membership view information and fault recovery capabilities.
Reliable multicast protocol specifications flow control and NACK policy
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.; Whetten, Brian
1995-01-01
This appendix presents the flow and congestion control schemes recommended for RMP and a NACK policy based on the whiteboard tool. Because RMP uses a primarily NACK-based error detection scheme, there is no direct feedback path through which receivers can signal losses due to low buffer space or congestion. Reliable multicast protocols also suffer from the fact that the throughput for a multicast group must be divided among the members of the group. This division is usually very dynamic in nature and therefore does not lend itself well to a priori determination. These facts have led the flow and congestion control schemes of RMP to be made completely orthogonal to the protocol specification. This allows several differing schemes to be used in different environments to produce the best results. As a default, a modified sliding window scheme based on previous algorithms is suggested and described below.
Fixed-rate layered multicast congestion control
NASA Astrophysics Data System (ADS)
Bing, Zhang; Bing, Yuan; Zengji, Liu
2006-10-01
A new fixed-rate layered multicast congestion control algorithm called FLMCC is proposed. The sender of a multicast session transmits data packets at a fixed rate on each layer, while receivers each obtain different throughput by cumulatively subscribing to a different number of layers based on their expected rates. In order to provide TCP-friendliness and estimate the expected rate accurately, a window-based mechanism implemented at receivers is presented. To achieve this, each receiver maintains a congestion window, adjusts it based on the GAIMD algorithm, and calculates an expected rate from the congestion window. To measure RTT, a new method is presented which combines an accurate measurement with a rough estimation. A feedback suppression scheme based on a random timer mechanism is used to avoid feedback implosion during the accurate measurement. The protocol is simple in its implementation. Simulations indicate that FLMCC shows good TCP-friendliness, responsiveness, and intra-protocol fairness, and provides high link utilization.
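A minimal sketch of that receiver-side calculation is given below, assuming made-up GAIMD parameters, packet size, and layer rates rather than FLMCC's actual values: the congestion window is updated per RTT, an expected rate is derived from window and RTT, and layers are joined cumulatively up to that rate.

```python
# Illustrative receiver-side sketch of the idea described above; the GAIMD
# parameters (alpha, beta), packet size, and layer rates are assumed values.

def gaimd_update(cwnd, loss, alpha=0.31, beta=0.875):
    """Return the new congestion window (in packets) after one RTT."""
    return cwnd * beta if loss else cwnd + alpha

def expected_rate(cwnd, rtt, packet_bits=1500 * 8):
    """Expected throughput in bit/s from the window (packets) and RTT (s)."""
    return cwnd * packet_bits / rtt

def layers_to_join(rate, layer_rates):
    """Cumulatively subscribe to fixed-rate layers while the rate allows it."""
    joined, total = 0, 0.0
    for r in layer_rates:
        if total + r > rate:
            break
        total += r
        joined += 1
    return joined

cwnd, rtt = 4.0, 0.08                       # 4 packets in flight, 80 ms RTT
cwnd = gaimd_update(cwnd, loss=False)       # one loss-free RTT
print(layers_to_join(expected_rate(cwnd, rtt), [256e3, 512e3, 1e6, 2e6]))
```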
Ahlawat, Meenu; Bostani, Ameneh; Tehranchi, Amirhossein; Kashyap, Raman
2013-08-01
We experimentally demonstrate the possibility of agile multicasting for wavelength division multiplexing (WDM) networks, of a single-channel to two and seven channels over the C band, also extendable to S and L bands. This is based on cascaded χ(2) nonlinear mixing processes, namely, second-harmonic generation (SHG)-sum-frequency generation (SFG) and difference-frequency generation (DFG) in a 20-mm-long step-chirped periodically poled lithium niobate crystal, specially designed and fabricated for a 28-nm-wide SH-SF bandwidth centered at around 1.55 μm. The multiple idlers are simultaneously tuned by detuning the pump wavelengths within the broad SH-SF bandwidth. By selectively tuning the pump wavelengths over less than 10 and 6 nm, respectively, multicasting into two and seven idlers is successfully achieved across ~70 WDM channels within the 50 GHz International Telecommunication Union grid spacing.
AF-TRUST, Air Force Team for Research in Ubiquitous Secure Technology
2010-07-26
Charles Sutton, J. D. Tygar, and Kai Xia. Book chapter in Jeffrey J. P. Tsai and Philip S. Yu (eds.) Machine Learning in Cyber Trust: Security, Privacy...enterprise, tactical, embedded systems and command and control levels. From these studies, commissioned by Dr. Sekar Chandersekaran of the Secretary of the...Data centers avoid IP Multicast because of a series of problems with the technology. • Dr. Multicast (the MCMD), a system that maps traditional IPMC
Secure Hierarchical Multicast Routing and Multicast Internet Anonymity
1998-06-01
Multimedia, Summer 94, pages 76-79, 94. [15] David Chaum. Blind signatures for untraceable payments. In Proc. Crypto, pages 199-203, 1982. [16] David L...use of digital signatures, which consist of a cryptographic hash of the message encrypted with the private key of the signer. Digitally-signed messages... signature on the request and on the certificate it contains. Notice that the location service need not retrieve the initiator's public key as it is contained
Performance Evaluation of Peer-to-Peer Progressive Download in Broadband Access Networks
NASA Astrophysics Data System (ADS)
Shibuya, Megumi; Ogishi, Tomohiko; Yamamoto, Shu
P2P (Peer-to-Peer) file sharing architectures have scalable and cost-effective features. Hence, the application of P2P architectures to media streaming is attractive and is expected to be an alternative to current video streaming using IP multicast or content delivery systems, because the current systems require expensive network infrastructures and large-scale centralized cache storage systems. In this paper, we investigate P2P progressive download enabling Internet video streaming services. We demonstrated the capability of P2P progressive download in both a laboratory test network and the Internet. Through the experiments, we clarified the contribution of the FTTH links to P2P progressive download in heterogeneous access networks consisting of FTTH and ADSL links. We analyzed the causes of the download performance degradation that occurred in the experiments and discussed effective methods to provide video streaming services using P2P progressive download in current heterogeneous networks.
Finding idle machines in a workstation-based distributed system
NASA Technical Reports Server (NTRS)
Theimer, Marvin M.; Lantz, Keith A.
1989-01-01
The authors describe the design and performance of scheduling facilities for finding idle hosts in a workstation-based distributed system. They focus on the tradeoffs between centralized and decentralized architectures with respect to scalability, fault tolerance, and simplicity of design, as well as several implementation issues of interest when multicast communication is used. They conclude that the principal tradeoff between the two approaches is that a centralized architecture can be scaled to a significantly greater degree and can more easily monitor global system statistics, whereas a decentralized architecture is simpler to implement.
An Economic Case for End System Multicast
NASA Astrophysics Data System (ADS)
Analoui, Morteza; Rezvani, Mohammad Hossein
This paper presents a non-strategic model for end-system multicast networks based on the concept of a replica exchange economy. We believe that microeconomics is a good candidate for investigating the selfishness of the end-users (peers) in order to maximize the aggregate throughput. In this solution concept, the decisions that a peer might make do not affect the actions of the other peers at all. The proposed mechanism tunes the price of the service in such a way that general equilibrium holds.
Multicasting for all-optical multifiber networks
NASA Astrophysics Data System (ADS)
Köksal, Fatih; Ersoy, Cem
2007-02-01
All-optical wavelength-routed WDM WANs can support the high bandwidth and the long session duration requirements of application scenarios such as interactive distance learning or on-line diagnosis of patients simultaneously in different hospitals. However, multifiber and limited sparse light splitting and wavelength conversion capabilities of switches result in a difficult optimization problem. We attack this problem using a layered graph model. The problem is defined as a k-edge-disjoint degree-constrained Steiner tree problem for routing and fiber and wavelength assignment of k multicasts. A mixed integer linear programming formulation for the problem is given, and a solution using CPLEX is provided. However, the complexity of the problem grows quickly with respect to the number of edges in the layered graph, which depends on the number of nodes, fibers, wavelengths, and multicast sessions. Hence, we propose two heuristics, the layered all-optical multicast algorithm (LAMA) and conservative fiber and wavelength assignment (C-FWA), to compare with CPLEX, existing work, and unicasting. Extensive computational experiments show that LAMA's performance is very close to CPLEX, and it is significantly better than existing work and C-FWA for nearly all metrics, since LAMA jointly optimizes the routing and fiber-wavelength assignment phases, whereas the other candidates attack the problem by decomposing the two phases. Experiments also show that important metrics (e.g., session and group blocking probability, transmitter wavelength, and fiber conversion resources) are adversely affected by the separation of the two phases. Finally, the fiber-wavelength assignment strategy of C-FWA (Ex-Fit) uses wavelength and fiber conversion resources more effectively than First Fit.
Multicast Delayed Authentication For Streaming Synchrophasor Data in the Smart Grid
Câmara, Sérgio; Anand, Dhananjay; Pillitteri, Victoria; Carmo, Luiz
2017-01-01
Multicast authentication of synchrophasor data is challenging due to the design requirements of Smart Grid monitoring systems, such as low security overhead, tolerance of lossy networks, time-criticality, and high data rates. In this work, we propose inf-TESLA, Infinite Timed Efficient Stream Loss-tolerant Authentication, a multicast delayed authentication protocol for communication links used to stream synchrophasor data for wide area control of electric power networks. Our approach is based on the authentication protocol TESLA but is augmented to accommodate high frequency transmissions of unbounded length. The inf-TESLA protocol utilizes the Dual Offset Key Chains mechanism to reduce the authentication delay and the computational cost associated with key chain commitment. We provide a description of the mechanism using two different modes for disclosing keys and demonstrate its security against a man-in-the-middle attack attempt. We compare our approach against the TESLA protocol in a 2-day simulation scenario, showing a reduction of 15.82% and 47.29% in computational cost for the sender and receiver, respectively, and a cumulative reduction in the communication overhead. PMID:28736582
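As a rough sketch of the TESLA-style building block that inf-TESLA extends, the code below (standard-library hashlib/hmac only) generates a one-way key chain, verifies a later-disclosed key against the chain commitment, and MACs a packet with it. The chain length, interval handling, and the dual-offset-chain switching of inf-TESLA itself are not reproduced here.

```python
# One-way key chain in the TESLA spirit: keys are produced by repeated
# hashing, the final element serves as the public commitment, and earlier
# elements are disclosed later and verified by hashing forward.
import hashlib, hmac, os

def make_key_chain(length, seed=None):
    seed = seed or os.urandom(32)
    chain = [seed]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain                        # chain[-1] is the public commitment

def verify_key(disclosed_key, commitment, max_steps):
    k = disclosed_key
    for _ in range(max_steps):
        if k == commitment:
            return True
        k = hashlib.sha256(k).digest()
    return k == commitment

def mac_packet(key, payload):
    return hmac.new(key, payload, hashlib.sha256).digest()

chain = make_key_chain(1000)
key_i = chain[-3]                       # key for some interval, disclosed later
assert verify_key(key_i, chain[-1], max_steps=5)
tag = mac_packet(key_i, b"synchrophasor frame")
```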
Multicast Delayed Authentication For Streaming Synchrophasor Data in the Smart Grid.
Câmara, Sérgio; Anand, Dhananjay; Pillitteri, Victoria; Carmo, Luiz
2016-01-01
Multicast authentication of synchrophasor data is challenging due to the design requirements of Smart Grid monitoring systems, such as low security overhead, tolerance of lossy networks, time-criticality, and high data rates. In this work, we propose inf-TESLA, Infinite Timed Efficient Stream Loss-tolerant Authentication, a multicast delayed authentication protocol for communication links used to stream synchrophasor data for wide area control of electric power networks. Our approach is based on the authentication protocol TESLA but is augmented to accommodate high frequency transmissions of unbounded length. The inf-TESLA protocol utilizes the Dual Offset Key Chains mechanism to reduce the authentication delay and the computational cost associated with key chain commitment. We provide a description of the mechanism using two different modes for disclosing keys and demonstrate its security against a man-in-the-middle attack attempt. We compare our approach against the TESLA protocol in a 2-day simulation scenario, showing a reduction of 15.82% and 47.29% in computational cost for the sender and receiver, respectively, and a cumulative reduction in the communication overhead.
NASA Astrophysics Data System (ADS)
Singh, Sukhbir; Singh, Surinder
2017-11-01
This paper investigated the effect of FWM and its suppression using optical phase conjugation (OPC) modules in a dispersion-managed hybrid WDM-OTDM multicast overlay system. Interaction between propagating wavelength signals at higher power levels causes new FWM component generation that can significantly limit the system performance. The OPC module consists of a pump signal and 0.6 km of HNLF placed midway along the optical link to generate destructive-phase FWM components. The investigation revealed that use of the OPC module in the optical link reduces the FWM power and mitigates the interaction between wavelength signals over a range of signal input powers, dispersion parameters (β2), and transmission distances. System performance is also compared among the cases without the DM-OPC module, with DM only, and with the DM-OPC module, in terms of FWM tolerance. The BER performance of the hybrid WDM-OTDM multicast system using the OPC module is improved by a factor of 2 compared to the dispersion-managed case, and the coverage distance is increased by a factor of 2 relative to Singh and Singh (2016).
Experimental Evaluation of Unicast and Multicast CoAP Group Communication
Ishaq, Isam; Hoebeke, Jeroen; Moerman, Ingrid; Demeester, Piet
2016-01-01
The Internet of Things (IoT) is expanding rapidly to new domains in which embedded devices play a key role and gradually outnumber traditionally-connected devices. These devices are often constrained in their resources and are thus unable to run standard Internet protocols. The Constrained Application Protocol (CoAP) is a new alternative standard protocol that implements the same principles as the Hypertext Transfer Protocol (HTTP), but is tailored towards constrained devices. In many IoT application domains, devices need to be addressed in groups in addition to being addressable individually. Two main approaches are currently being proposed in the IoT community for CoAP-based group communication. The main difference between the two approaches lies in the underlying communication type: multicast versus unicast. In this article, we experimentally evaluate those two approaches using two wireless sensor testbeds and under different test conditions. We highlight the pros and cons of each of them and propose combining these approaches in a hybrid solution to better suit certain use case requirements. Additionally, we provide a solution for multicast-based group membership management using CoAP. PMID:27455262
Internetworking satellite and local exchange networks for personal communications applications
NASA Technical Reports Server (NTRS)
Wolff, Richard S.; Pinck, Deborah
1993-01-01
The demand for personal communications services has shown unprecedented growth, and the next decade and beyond promise an era in which the needs for ubiquitous, transparent and personalized access to information will continue to expand in both scale and scope. The exchange of personalized information is growing from two-way voice to include data communications, electronic messaging and information services, image transfer, video, and interactive multimedia. The emergence of new land-based and satellite-based wireless networks illustrates the expanding scale and trend toward globalization and the need to establish new local exchange and exchange access services to meet the communications needs of people on the move. An important issue is to identify the roles that satellite networking can play in meeting these new communications needs. The unique capabilities of satellites, in providing coverage to large geographic areas, reaching widely dispersed users, for position location determination, and in offering broadcast and multicast services, can complement and extend the capabilities of terrestrial networks. As an initial step in exploring the opportunities afforded by the merger of satellite-based and land-based networks, several experiments utilizing the NASA ACTS satellite and the public switched local exchange network were undertaken to demonstrate the use of satellites in the delivery of personal communications services.
Multicast for savings in cache-based video distribution
NASA Astrophysics Data System (ADS)
Griwodz, Carsten; Zink, Michael; Liepert, Michael; On, Giwon; Steinmetz, Ralf
1999-12-01
Internet video-on-demand (VoD) today streams videos directly from server to clients, because re-distribution is not yet established. Intranet solutions exist but are typically managed centrally. Caching may remove these management needs; however, existing web caching strategies are not applicable because they operate under different conditions. We propose movie distribution by means of caching and study its feasibility from the service providers' point of view. We introduce the combination of our reliable multicast protocol LCRTP for caching hierarchies with our enhancement to the patching technique for bandwidth-friendly true VoD, without depending on network resource guarantees.
Hybrid ARQ Scheme with Autonomous Retransmission for Multicasting in Wireless Sensor Networks.
Jung, Young-Ho; Choi, Jihoon
2017-02-25
A new hybrid automatic repeat request (HARQ) scheme for multicast services in wireless sensor networks is proposed in this study. In the proposed algorithm, the HARQ operation is combined with an autonomous retransmission method that ensures a data packet is transmitted irrespective of whether or not the packet is successfully decoded at the receivers. The optimal number of autonomous retransmissions is determined to ensure maximum spectral efficiency, and a practical method that adjusts the number of autonomous retransmissions for realistic conditions is developed. Simulation results show that the proposed method achieves higher spectral efficiency than existing HARQ techniques.
A Loss Tolerant Rate Controller for Reliable Multicast
NASA Technical Reports Server (NTRS)
Montgomery, Todd
1997-01-01
This paper describes the design, specification, and performance of a Loss Tolerant Rate Controller (LTRC) for use in controlling reliable multicast senders. The purpose of this rate controller is not to adapt to congestion (or loss) on a per loss report basis (such as per received negative acknowledgment), but instead to use loss report information and perceived state to decide more prudent courses of action for both the short and long term. The goal of this controller is to be responsive to congestion, but not overly reactive to spurious independent loss. Performance of the controller is verified through simulation results.
Contact Graph Routing Enhancements Developed in ION for DTN
NASA Technical Reports Server (NTRS)
Segui, John S.; Burleigh, Scott
2013-01-01
The Interplanetary Overlay Network (ION) software suite is an open-source, flight-ready implementation of networking protocols including the Delay/Disruption Tolerant Networking (DTN) Bundle Protocol (BP), the CCSDS (Consultative Committee for Space Data Systems) File Delivery Protocol (CFDP), and many others including the Contact Graph Routing (CGR) DTN routing system. While DTN offers the capability to tolerate disruption and long signal propagation delays in transmission, without an appropriate routing protocol, no data can be delivered. CGR was built for space exploration networks with scheduled communication opportunities (typically based on trajectories and orbits), represented as a contact graph. Since CGR uses knowledge of future connectivity, the contact graph can grow rather large, and so efficient processing is desired. These enhancements allow CGR to scale to predicted NASA space network complexities and beyond. This software improves upon CGR by adopting an earliest-arrival-time cost metric and using the Dijkstra path selection algorithm. Moving to Dijkstra path selection also enables construction of an earliest- arrival-time tree for multicast routing. The enhancements have been rolled into ION 3.0 available on sourceforge.net.
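The toy sketch below illustrates the earliest-arrival-time idea behind this Dijkstra-based routing, not the ION implementation: contacts are (from, to, start, end, one-way light time) tuples, and transmission volume, queueing, and residual capacity are deliberately ignored.

```python
# Earliest-arrival-time search over a toy contact plan (volumes ignored).
import heapq

def earliest_arrival(contacts, source, dest, t0=0.0):
    best = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, node = heapq.heappop(heap)
        if node == dest:
            return t
        if t > best.get(node, float("inf")):
            continue                           # stale queue entry
        for u, v, start, end, owlt in contacts:
            if u != node or t > end:
                continue                       # contact unusable from here
            arrival = max(t, start) + owlt     # wait for the window, then fly
            if arrival < best.get(v, float("inf")):
                best[v] = arrival
                heapq.heappush(heap, (arrival, v))
    return None                                # no route within the plan

plan = [("A", "B", 10, 50, 4), ("B", "C", 30, 90, 6), ("A", "C", 100, 120, 4)]
print(earliest_arrival(plan, "A", "C"))        # 36 via B, not 104 direct
```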
Miao, Wang; Luo, Jun; Di Lucente, Stefano; Dorren, Harm; Calabretta, Nicola
2014-02-10
We propose and demonstrate an optical flat datacenter network based on a scalable optical switch system with optical flow control. A modular structure with distributed control results in port-count-independent optical switch reconfiguration time. An RF-tone in-band labeling technique allowing parallel processing of the label bits ensures low-latency operation regardless of the switch port count. Hardware flow control is conducted at the optical level by re-using the label wavelength without occupying extra bandwidth, space, or network resources, which further improves the latency performance within a simple structure. Dynamic switching including multicasting operation is validated for a 4 x 4 system. Error-free operation of 40 Gb/s data packets has been achieved with only 1 dB penalty. The system can handle an input load up to 0.5, providing a packet loss rate lower than 10^-5 and an average latency of less than 500 ns when a buffer size of 16 packets is employed. Investigation of scalability also indicates that the proposed system could potentially scale up to large port counts with limited power penalty.
Hybrid monitoring scheme for end-to-end performance enhancement of multicast-based real-time media
NASA Astrophysics Data System (ADS)
Park, Ju-Won; Kim, JongWon
2004-10-01
As real-time media applications based on IP multicast networks spread widely, end-to-end QoS (quality of service) provisioning for these applications has become very important. To guarantee the end-to-end QoS of multi-party media applications, it is essential to monitor the time-varying status of both network metrics (i.e., delay, jitter, and loss) and system metrics (i.e., CPU and memory utilization). In this paper, targeting the multicast-enabled AG (Access Grid), a next-generation group collaboration tool based on multi-party media services, the applicability of a hybrid monitoring scheme that combines active and passive monitoring is investigated. The active monitoring measures network-layer metrics (i.e., network condition) with probe packets, while the passive monitoring checks both application-layer metrics (i.e., user traffic condition, by analyzing RTCP packets) and system metrics. By comparing these hybrid results, we attempt to pinpoint the causes of performance degradation and explore corresponding reactions to improve the end-to-end performance. The experimental results show that the proposed hybrid monitoring can provide useful information to coordinate the performance improvement of multi-party real-time media applications.
A heuristic for efficient data distribution management in distributed simulation
NASA Astrophysics Data System (ADS)
Gupta, Pankaj; Guha, Ratan K.
2005-05-01
In this paper, we propose an algorithm for reducing the complexity of region matching and for efficient multicasting in the data distribution management component of the High Level Architecture (HLA) Run Time Infrastructure (RTI). The current data distribution management (DDM) techniques rely on computing the intersection between the subscription and update regions. When a subscription region and an update region of different federates overlap, RTI establishes communication between the publisher and the subscriber. It subsequently routes the updates from the publisher to the subscriber. The proposed algorithm computes the update/subscription region matching for dynamic allocation of multicast groups. It provides new multicast routines that exploit the connectivity of the federation by communicating updates regarding interactions and routing information only to those federates that require them. The region-matching problem in DDM reduces to the clique-covering problem using the connection graph abstraction, where the federates represent the vertices and the update/subscribe relations represent the edges. We develop an abstract model based on connection graphs for data distribution management. Using this abstract model, we propose a heuristic for solving the region-matching problem of DDM. We also provide a complexity analysis of the proposed heuristic.
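For orientation, the snippet below shows the basic region-matching test that such a heuristic aims to speed up: an update region and a subscription region, expressed as per-dimension extents, match exactly when their intervals overlap in every dimension. The federate names and extents are invented for the example.

```python
# Brute-force region matching over per-dimension extents (illustrative only).

def regions_overlap(update, subscribe):
    return all(u_lo <= s_hi and s_lo <= u_hi
               for (u_lo, u_hi), (s_lo, s_hi) in zip(update, subscribe))

def match(updates, subscriptions):
    """Return (publisher, subscriber) pairs whose regions overlap."""
    return [(pub, sub)
            for pub, u in updates.items()
            for sub, s in subscriptions.items()
            if regions_overlap(u, s)]

updates = {"tank1": [(0, 10), (0, 10)]}
subscriptions = {"radar1": [(5, 20), (2, 8)], "radar2": [(30, 40), (0, 10)]}
print(match(updates, subscriptions))    # [('tank1', 'radar1')]
```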
Efficient Network Coding-Based Loss Recovery for Reliable Multicast in Wireless Networks
NASA Astrophysics Data System (ADS)
Chi, Kaikai; Jiang, Xiaohong; Ye, Baoliu; Horiguchi, Susumu
Recently, network coding has been applied to the loss recovery of reliable multicast in wireless networks [19], where multiple lost packets are XOR-ed together as one packet and forwarded via single retransmission, resulting in a significant reduction of bandwidth consumption. In this paper, we first prove that maximizing the number of lost packets for XOR-ing, which is the key part of the available network coding-based reliable multicast schemes, is actually a complex NP-complete problem. To address this limitation, we then propose an efficient heuristic algorithm for finding an approximately optimal solution of this optimization problem. Furthermore, we show that the packet coding principle of maximizing the number of lost packets for XOR-ing sometimes cannot fully exploit the potential coding opportunities, and we then further propose new heuristic-based schemes with a new coding principle. Simulation results demonstrate that the heuristic-based schemes have very low computational complexity and can achieve almost the same transmission efficiency as the current coding-based high-complexity schemes. Furthermore, the heuristic-based schemes with the new coding principle not only have very low complexity, but also slightly outperform the current high-complexity ones.
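A minimal sketch of the XOR-grouping idea is shown below: lost packets are greedily grouped so that no receiver is missing more than one packet in any group, and each group is XOR-ed into a single retransmission. The greedy rule is an illustrative stand-in, not the specific heuristic proposed in the paper.

```python
# Greedy grouping of lost packets for XOR retransmission (illustrative).
from functools import reduce

def greedy_groups(loss_map):
    """loss_map: packet_id -> set of receivers that lost it."""
    groups = []
    for pkt, losers in loss_map.items():
        for group in groups:
            # Safe to add if these receivers lost nothing else in the group.
            if all(losers.isdisjoint(loss_map[p]) for p in group):
                group.append(pkt)
                break
        else:
            groups.append([pkt])
    return groups

def xor_encode(payloads):
    """XOR equal-length payloads into a single coded packet."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*payloads))

losses = {1: {"r1"}, 2: {"r2"}, 3: {"r1", "r3"}}
print(greedy_groups(losses))            # [[1, 2], [3]]
print(xor_encode([b"abc", b"xyz"]).hex())
```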
IPTV multicast with peer-assisted lossy error control
NASA Astrophysics Data System (ADS)
Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd
2010-07-01
Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over the error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate the impulse noises in DSL links. In existing systems, the retransmission function is provided by the Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution where the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how the packet repairs can be delivered in a timely, reliable and decentralized manner using the combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves the resistance to the impulse noise.
Verification and validation of a reliable multicast protocol
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1995-01-01
This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.
Algorithm for protecting light-trees in survivable mesh wavelength-division-multiplexing networks
NASA Astrophysics Data System (ADS)
Luo, Hongbin; Li, Lemin; Yu, Hongfang
2006-12-01
Wavelength-division-multiplexing (WDM) technology is expected to facilitate bandwidth-intensive multicast applications such as high-definition television. A single fiber cut in a WDM mesh network, however, can disrupt the dissemination of information to several destinations on a light-tree based multicast session. Thus it is imperative to protect multicast sessions by reserving redundant resources. We propose a novel and efficient algorithm for protecting light-trees in survivable WDM mesh networks. The algorithm is called segment-based protection with sister node first (SSNF), whose basic idea is to protect a light-tree using a set of backup segments with a higher priority to protect the segments from a branch point to its children (sister nodes). The SSNF algorithm differs from the segment protection scheme proposed in the literature in how the segments are identified and protected. Our objective is to minimize the network resources used for protecting each primary light-tree such that the blocking probability can be minimized. To verify the effectiveness of the SSNF algorithm, we conduct extensive simulation experiments. The simulation results demonstrate that the SSNF algorithm outperforms existing algorithms for the same problem.
Optical network scaling: roles of spectral and spatial aggregation.
Arık, Sercan Ö; Ho, Keang-Po; Kahn, Joseph M
2014-12-01
As the bit rates of routed data streams exceed the throughput of single wavelength-division multiplexing channels, spectral and spatial traffic aggregation become essential for optical network scaling. These aggregation techniques reduce network routing complexity by increasing spectral efficiency to decrease the number of fibers, and by increasing switching granularity to decrease the number of switching components. Spectral aggregation yields a modest decrease in the number of fibers but a substantial decrease in the number of switching components. Spatial aggregation yields a substantial decrease in both the number of fibers and the number of switching components. To quantify routing complexity reduction, we analyze the number of multicast and wavelength-selective switches required in a colorless, directionless and contentionless reconfigurable optical add-drop multiplexer architecture. Traffic aggregation has two potential drawbacks: reduced routing power and increased switching component size.
A memetic optimization algorithm for multi-constrained multicast routing in ad hoc networks
Hammad, Karim; El Bakly, Ahmed M.
2018-01-01
A mobile ad hoc network is a conventional self-configuring network where the routing optimization problem—subject to various Quality-of-Service (QoS) constraints—represents a major challenge. Unlike previously proposed solutions, in this paper, we propose a memetic algorithm (MA) employing an adaptive mutation parameter, to solve the multicast routing problem with higher search ability and computational efficiency. The proposed algorithm utilizes an updated scheme, based on statistical analysis, to estimate the best values for all MA parameters and enhance MA performance. The numerical results show that the proposed MA improved the delay and jitter of the network, while reducing computational complexity as compared to existing algorithms. PMID:29509760
A memetic optimization algorithm for multi-constrained multicast routing in ad hoc networks.
Ramadan, Rahab M; Gasser, Safa M; El-Mahallawy, Mohamed S; Hammad, Karim; El Bakly, Ahmed M
2018-01-01
A mobile ad hoc network is a conventional self-configuring network where the routing optimization problem, subject to various Quality-of-Service (QoS) constraints, represents a major challenge. Unlike previously proposed solutions, in this paper, we propose a memetic algorithm (MA) employing an adaptive mutation parameter, to solve the multicast routing problem with higher search ability and computational efficiency. The proposed algorithm utilizes an updated scheme, based on statistical analysis, to estimate the best values for all MA parameters and enhance MA performance. The numerical results show that the proposed MA improved the delay and jitter of the network, while reducing computational complexity as compared to existing algorithms.
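The skeleton below shows the general shape of a memetic algorithm with an adaptive mutation parameter: selection and crossover, a diversity-driven mutation update, and a local-search refinement of each offspring. The bit-string objective, the adaptation rule, and all parameter values are placeholders; the paper's multicast-tree encoding and QoS-constrained fitness are not reproduced.

```python
# Memetic algorithm skeleton with an adaptive mutation rate (toy objective).
import random

def fitness(x):
    return sum(x)                                # toy objective: maximise ones

def local_search(x):
    """One-pass bit-flip hill climb: the 'memetic' refinement step."""
    best = x[:]
    for i in range(len(x)):
        y = best[:]
        y[i] ^= 1
        if fitness(y) > fitness(best):
            best = y
    return best

def memetic(n=24, pop_size=20, generations=40):
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        diversity = len({tuple(p) for p in pop}) / pop_size
        mutation = min(0.5, 0.05 / max(diversity, 0.05))   # adapt to diversity
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + b[cut:]                       # one-point crossover
            child = [bit ^ (random.random() < mutation) for bit in child]
            children.append(local_search(child))
        pop = parents + children
    return max(pop, key=fitness)

print(fitness(memetic()))
```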
NASA Astrophysics Data System (ADS)
Rezvani, Mohammad Hossein; Analoui, Morteza
2010-11-01
We have designed a competitive economic mechanism for application-level multicast in which a number of independent services are provided to the end-users by a number of origin servers. Each offered service can be thought of as a commodity, and the origin servers and the users who relay the service to their downstream nodes can thus be thought of as producers of the economy. Also, the end-users can be viewed as consumers of the economy. The proposed mechanism regulates the price of each service in such a way that general equilibrium holds, so all allocations will be Pareto optimal in the sense that the social welfare of the users is maximized.
Wang, Ke; Nirmalathas, Ampalavanapillai; Lim, Christina; Skafidas, Efstratios; Alameh, Kamal
2013-07-01
In this paper, we propose and experimentally demonstrate a free-space based high-speed reconfigurable card-to-card optical interconnect architecture with broadcast capability, which is required for control functionalities and efficient parallel computing applications. Experimental results show that 10 Gb/s data can be broadcast to all receiving channels for up to 30 cm with a worst-case receiver sensitivity better than -12.20 dBm. In addition, arbitrary multicasting with the same architecture is also investigated. 10 Gb/s reconfigurable point-to-point link and multicast channels are simultaneously demonstrated with a measured receiver sensitivity power penalty of ~1.3 dB due to crosstalk.
Bahşi, Hayretdin; Levi, Albert
2010-01-01
Wireless sensor networks (WSNs) generally have a many-to-one structure, so that event information flows from sensors to a unique sink. In recent WSN applications, many-to-many structures have evolved due to the need for conveying collected event information to multiple sinks. Privacy-preserving data collection models in the literature do not solve the problems of WSN applications in which the network has multiple untrusted sinks with different levels of privacy requirements. This study proposes a data collection framework based on k-anonymity for preventing record disclosure of collected event information in WSNs. The proposed method takes the anonymity requirements of multiple sinks into consideration by providing a different level of privacy for each destination sink. Attributes that may identify an event owner are generalized or encrypted in order to meet the different anonymity requirements of the sinks in the same anonymized output. If the same output is formed, it can be multicast to all sinks. The other, trivial solution is to produce a different anonymized output for each sink and send it to the related sink. Multicasting is an energy-efficient data sending alternative for some sensor nodes. Since minimization of energy consumption is an important design criterion for WSNs, multicasting the same event information to multiple sinks reduces the energy consumption of the overall network.
Corridor One: An Integrated Distance Visualization Environment for SSI+ASCI Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christopher R. Johnson, Charles D. Hansen
2001-10-29
The goal of Corridor One: An Integrated Distance Visualization Environment for ASCI and SSI Applications was to combine the forces of six leading edge laboratories working in the areas of visualization and distributed computing and high performance networking (Argonne National Laboratory, Lawrence Berkeley National Laboratory, Los Alamos National Laboratory, University of Illinois, University of Utah and Princeton University) to develop and deploy the most advanced integrated distance visualization environment for large-scale scientific visualization and demonstrate it on applications relevant to the DOE SSI and ASCI programs. The Corridor One team brought world-class expertise in parallel rendering, deep image based rendering, immersive environment technology, large-format multi-projector wall based displays, volume and surface visualization algorithms, collaboration tools and streaming media technology, network protocols for image transmission, high-performance networking, quality of service technology and distributed computing middleware. Our strategy was to build on the very successful teams that produced the I-WAY, "Computational Grids" and CAVE technology and to add these to the teams that have developed the fastest parallel visualization systems and the most widely used networking infrastructure for multicast and distributed media. Unfortunately, just as we were getting going on the Corridor One project, DOE cut the program after the first year. As such, our final report consists of our progress during year one of the grant.
Admission and Preventive Load Control for Delivery of Multicast and Broadcast Services via S-UMTS
NASA Astrophysics Data System (ADS)
Angelou, E.; Koutsokeras, N.; Andrikopoulos, I.; Mertzanis, I.; Karaliopoulos, M.; Henrio, P.
2003-07-01
An Admission Control strategy is proposed for unidirectional satellite systems delivering multicast and broadcast services to mobile users. In such systems, both the radio interface and the targeted services impose particular requirements on the RRM task. We briefly discuss the RRM requirements that stem from the services' point of view and from the features of the SATIN access scheme that differentiate it from the conventional T-UMTS radio interface. The main functional entities of RRM and the alternative modes of operation are outlined, and the proposed Admission Control algorithm is described in detail. Finally, results from the simulation study that demonstrate its performance for a number of different scenarios are presented and conclusions are derived.
NASA Astrophysics Data System (ADS)
Cheng, Yuh-Jiuh; Yeh, Tzuoh-Chyau; Cheng, Shyr-Yuan
2011-09-01
In this paper, a non-blocking multicast optical packet switch based on fiber Bragg grating technology with optical output buffers is proposed. Only the header of each optical packet is converted to electronic signals to control the fiber Bragg grating array of the input ports, while the packet payloads are transparently routed to their output ports, so that the proposed switch can reduce the electronic interfaces as well as their required bit rate. The modulation and the format of packet payloads may be non-standard, and packet payloads could also include different wavelengths to increase the volume of traffic. The advantage is obvious: the proposed switch can transport various types of traffic. An easily implemented architecture which can provide multicast services is also presented. An optical output buffer is designed to queue packets if more than one incoming packet is destined to the same output port, or if waiting packets in the optical output buffer are already scheduled to be sent to that port in the same time slot. To preserve service-packet sequencing and fairness of routing sequence, a priority scheme and a round-robin algorithm are adopted at the optical output buffer. The fiber Bragg grating arrays for both input ports and output ports are designed to route incoming packets using optical code division multiple access technology.
Dynamic Network Selection for Multicast Services in Wireless Cooperative Networks
NASA Astrophysics Data System (ADS)
Chen, Liang; Jin, Le; He, Feng; Cheng, Hanwen; Wu, Lenan
In next-generation mobile multimedia communications, different wireless access networks are expected to cooperate. However, it is a challenging task to choose an optimal transmission path in this scenario. This paper focuses on the problem of selecting the optimal access network for multicast services in cooperative mobile and broadcasting networks. An algorithm is proposed which considers multiple decision factors and multiple optimization objectives. An analytic hierarchy process (AHP) method is applied to schedule the service queue, and an artificial neural network (ANN) is used to improve the flexibility of the algorithm. Simulation results show that, by applying the AHP method, a group of weight ratios can be obtained that improves the performance across multiple objectives, and that the ANN method is effective in adaptively adjusting the weight ratios when users' new waiting thresholds are generated.
Polarization-insensitive PAM-4-carrying free-space orbital angular momentum (OAM) communications.
Liu, Jun; Wang, Jian
2016-02-22
We present a simple configuration incorporating single polarization-sensitive phase-only liquid crystal spatial light modulator (SLM) to facilitate polarization-insensitive free-space optical communications employing orbital angular momentum (OAM) modes. We experimentally demonstrate several polarization-insensitive optical communication subsystems by propagating a single OAM mode, multicasting 4 and 10 OAM modes, and multiplexing 8 OAM modes, respectively. Free-space polarization-insensitive optical communication links using OAM modes that carry four-level pulse-amplitude modulation (PAM-4) signal are demonstrated in the experiment. The observed optical signal-to-noise ratio (OSNR) penalties are less than 1 dB in both polarization-insensitive N-fold OAM modes multicasting and multiple OAM modes multiplexing at a bit-error rate (BER) of 2e-3 (enhanced forward-error correction (EFEC) threshold).
Combined Wavelet Video Coding and Error Control for Internet Streaming and Multicast
NASA Astrophysics Data System (ADS)
Chu, Tianli; Xiong, Zixiang
2003-12-01
This paper proposes an integrated approach to Internet video streaming and multicast (e.g., receiver-driven layered multicast (RLM) by McCanne) based on combined wavelet video coding and error control. We design a packetized wavelet video (PWV) coder to facilitate its integration with error control. The PWV coder produces packetized layered bitstreams that are independent among layers while being embedded within each layer. Thus, a lost packet only renders the following packets in the same layer useless. Based on the PWV coder, we search for a multilayered error-control strategy that optimally trades off source and channel coding for each layer under a given transmission rate to mitigate the effects of packet loss. While both the PWV coder and the error-control strategy are new—the former incorporates embedded wavelet video coding and packetization and the latter extends the single-layered approach for RLM by Chou et al.—the main distinction of this paper lies in the seamless integration of the two parts. Theoretical analysis shows a gain of up to 1 dB on a channel with 20% packet loss using our combined approach over separate designs of the source coder and the error-control mechanism. This is also substantiated by our simulations with a gain of up to 0.6 dB. In addition, our simulations show a gain of up to 2.2 dB over previous results reported by Chou et al.
Exact and heuristic algorithms for Space Information Flow.
Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing; Li, Zongpeng
2018-01-01
Space Information Flow (SIF) is a new promising research area that studies network coding in geometric space, such as Euclidean space. The design of algorithms that compute the optimal SIF solutions remains one of the key open problems in SIF. This work proposes the first exact SIF algorithm and a heuristic SIF algorithm that compute min-cost multicast network coding for N (N ≥ 3) given terminal nodes in 2-D Euclidean space. Furthermore, we find that the Butterfly network in Euclidean space is the second example besides the Pentagram network where SIF is strictly better than Euclidean Steiner minimal tree. The exact algorithm design is based on two key techniques: Delaunay triangulation and linear programming. Delaunay triangulation technique helps to find practically good candidate relay nodes, after which a min-cost multicast linear programming model is solved over the terminal nodes and the candidate relay nodes, to compute the optimal multicast network topology, including the optimal relay nodes selected by linear programming from all the candidate relay nodes and the flow rates on the connection links. The heuristic algorithm design is also based on Delaunay triangulation and linear programming techniques. The exact algorithm can achieve the optimal SIF solution with an exponential computational complexity, while the heuristic algorithm can achieve the sub-optimal SIF solution with a polynomial computational complexity. We prove the correctness of the exact SIF algorithm. The simulation results show the effectiveness of the heuristic SIF algorithm.
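As a small illustration of the candidate-generation stage of these algorithms, the snippet below (assuming NumPy and SciPy are available) triangulates a set of terminal nodes and takes triangle centroids as candidate relay locations; the min-cost multicast linear program that would then be solved over terminals plus candidates is omitted.

```python
# Delaunay-based candidate relay nodes for a toy 2-D terminal set.
import numpy as np
from scipy.spatial import Delaunay

terminals = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0], [2.0, 1.0]])
tri = Delaunay(terminals)

# One candidate relay per Delaunay triangle: the centroid of its vertices.
candidates = terminals[tri.simplices].mean(axis=1)
print(candidates)
```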
DDS as middleware of the Southern African Large Telescope control system
NASA Astrophysics Data System (ADS)
Maartens, Deneys S.; Brink, Janus D.
2016-07-01
The Southern African Large Telescope (SALT) software control system [1] is realised as a distributed control system, implemented predominantly in National Instruments' LabVIEW. The telescope control subsystems communicate using cyclic, state-based messages. Currently, transmitting a message is accomplished by performing an HTTP PUT request to a WebDAV directory on a centralised Apache web server, while receiving is based on polling the web server for new messages. While the method works, it presents a number of drawbacks; a scalable distributed communication solution with minimal overhead is a better fit for control systems. This paper describes our exploration of the Data Distribution Service (DDS). DDS is a formal standard specification, defined by the Object Management Group (OMG), that presents a data-centric publish-subscribe model for distributed application communication and integration. It provides an infrastructure for platform-independent many-to-many communication. A number of vendors provide implementations of the DDS standard; RTI, in particular, provides a DDS toolkit for LabVIEW. This toolkit has been evaluated against the needs of SALT, and a few deficiencies have been identified. We have developed our own implementation that interfaces LabVIEW to DDS in order to address our specific needs. Our LabVIEW DDS interface implementation is built against the RTI DDS Core component, provided by RTI under their Open Community Source licence. Our needs dictate that the interface implementation be platform independent. Since we have access to the RTI DDS Core source code, we are able to build the RTI DDS libraries for any of the platforms on which we require support. The communications functionality is based on UDP multicasting. Multicasting is an efficient communications mechanism with low overheads which avoids duplicated point-to-point transmission of data on a network where there are multiple recipients of the data. In the paper we present a performance evaluation of DDS against the current HTTP-based implementation as well as the historical DataSocket implementation. We conclude with a summary and describe future work.
An Approach to Verification and Validation of a Reliable Multicasting Protocol
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1994-01-01
This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test was different between the model and implementation, then the differences helped identify inconsistencies between the model and implementation. The dialogue between both teams drove the co-evolution of the model and implementation. Testing served as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP.
An approach to verification and validation of a reliable multicasting protocol
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1995-01-01
This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test was different between the model and implementation, then the differences helped identify inconsistencies between the model and implementation. The dialogue between both teams drove the co-evolution of the model and implementation. Testing served as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP.
Digital Multicasting of Multiple Audio Streams
NASA Technical Reports Server (NTRS)
Macha, Mitchell; Bullock, John
2007-01-01
The Mission Control Center Voice Over Internet Protocol (MCC VOIP) system (see figure) comprises hardware and software that effect simultaneous, nearly real-time transmission of as many as 14 different audio streams to authorized listeners via the MCC intranet and/or the Internet. The original version of the MCC VOIP system was conceived to enable flight-support personnel located in offices outside a spacecraft mission control center to monitor audio loops within the mission control center. Different versions of the MCC VOIP system could be used for a variety of public and commercial purposes - for example, to enable members of the general public to monitor one or more NASA audio streams through their home computers, to enable air-traffic supervisors to monitor communication between airline pilots and air-traffic controllers in training, and to monitor conferences among brokers in a stock exchange. At the transmitting end, the audio-distribution process begins with feeding the audio signals to analog-to-digital converters. The resulting digital streams are sent through the MCC intranet, using a user datagram protocol (UDP), to a server that converts them to encrypted data packets. The encrypted data packets are then routed to the personal computers of authorized users by use of multicasting techniques. The total data-processing load on the portion of the system upstream of and including the encryption server is the total load imposed by all of the audio streams being encoded, regardless of the number of the listeners or the number of streams being monitored concurrently by the listeners. The personal computer of a user authorized to listen is equipped with special-purpose MCC audio-player software. When the user launches the program, the user is prompted to provide identification and a password. In one of two access-control provisions, the program is hard-coded to validate the user's identity and password against a list maintained on a domain-controller computer at the MCC. In the other access-control provision, the program verifies that the user is authorized to have access to the audio streams. Once both access-control checks are completed, the audio software presents a graphical display that includes audio-stream-selection buttons and volume-control sliders. The user can select all or any subset of the available audio streams and can adjust the volume of each stream independently of that of the other streams. The audio-player program spawns a "read" process for the selected stream(s). The spawned process sends, to the router(s), a "multicast-join" request for the selected streams. The router(s) responds to the request by sending the encrypted multicast packets to the spawned process. The spawned process receives the encrypted multicast packets and sends a decryption packet to audio-driver software. As the volume or muting features are changed by the user, interrupts are sent to the spawned process to change the corresponding attributes sent to the audio-driver software. The total latency of this system - that is, the total time from the origination of the audio signals to generation of sound at a listener's computer - lies between four and six seconds.
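The fragment below is a generic sketch of the receiver-side multicast join described above, using standard socket calls: the process joins an IP multicast group and then reads UDP datagrams for one stream. The group address and port are placeholders, and the MCC-specific decryption and audio-driver steps are not shown.

```python
# Join an IP multicast group and read datagrams (group/port are placeholders).
import socket
import struct

GROUP, PORT = "239.1.2.3", 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IGMP join: tell the kernel (and upstream routers) we want this group.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

for _ in range(100):                       # read a few datagrams, then stop
    packet, sender = sock.recvfrom(2048)   # one (encrypted) audio datagram
    # ...decryption and hand-off to the audio driver would happen here...
```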
NASA Astrophysics Data System (ADS)
Huang, Feng; Sun, Lifeng; Zhong, Yuzhuo
2006-01-01
Robust transmission of live video over ad hoc wireless networks presents new challenges: high bandwidth requirements are coupled with delay constraints; even a single packet loss causes error propagation until a complete video frame is coded in the intra-mode; ad hoc wireless networks suffer from bursty packet losses that drastically degrade the viewing experience. Accordingly, we propose a novel UMD coder capable of quickly recovering from losses and ensuring continuous playout. It uses 'peg' frames to prevent error propagation in the High-Resolution (HR) description and improve the robustness of key frames. The Low-Resolution (LR) coder works independent of the HR one, but they can also help each other recover from losses. Like many UMD coders, our UMD coder is drift-free, disruption-tolerant and able to make good use of the asymmetric available bandwidths of multiple paths. The simulation results under different conditions show that the proposed UMD coder has the highest decoded quality and lowest probability of pause when compared with concurrent UMDC techniques. The coder also has a comparable decoded quality, lower startup delay and lower probability of pause than a state-of-the-art FEC-based scheme. To provide robustness for video multicast applications, we propose non-end-to-end UMDC-based video distribution over a multi-tree multicast network. The multiplicity of parents decorrelates losses and the non-end-to-end feature increases the throughput of UMDC video data. We deploy an application-level service of LR description reconstruction in some intermediate nodes of the LR multicast tree. The principle behind this is to reconstruct the disrupted LR frames by the correctly received HR frames. As a result, the viewing experience at the downstream nodes benefits from the protection reconstruction at the upstream nodes.
Scalable Active Optical Access Network Using Variable High-Speed PLZT Optical Switch/Splitter
NASA Astrophysics Data System (ADS)
Ashizawa, Kunitaka; Sato, Takehiro; Tokuhashi, Kazumasa; Ishii, Daisuke; Okamoto, Satoru; Yamanaka, Naoaki; Oki, Eiji
This paper proposes a scalable active optical access network using a high-speed Plumbum Lanthanum Zirconate Titanate (PLZT) optical switch/splitter. The Active Optical Network, called ActiON, which uses PLZT switching technology, has been presented to increase the number of subscribers and the maximum transmission distance compared to the Passive Optical Network (PON). ActiON supports multicast slot allocation realized by running the PLZT switch elements in the splitter mode, which forces the switch to behave as an optical splitter. However, the previous ActiON creates a tradeoff between the network scalability and the power loss experienced by the optical signal to each user. It does not use the optical power efficiently because the optical power is simply split 0.5/0.5 without considering the transmission distance from the OLT to each ONU. The proposed network adopts PLZT switch elements in the variable splitter mode, which controls the split ratio of the optical power considering the transmission distance from the OLT to each ONU, in addition to PLZT switch elements in the two existing modes, the switching mode and the splitter mode. The proposed network introduces flexible multicast slot allocation according to the transmission distance from the OLT to each user and the number of required users, using the three modes, while keeping the advantages of ActiON, which are to support scalable and secure access services. Numerical results show that the proposed network dramatically reduces the required number of slots, supports high-bandwidth-efficiency services, and extends the coverage of the access network compared to the previous ActiON, and that the required computation time for selecting multicast users is less than 30 ms, which is acceptable for on-demand broadcast services.
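To make the distance-aware split concrete, the toy calculation below (with an assumed fiber-loss figure, not a value from the paper) weights each branch's share of optical power by the loss it must overcome, instead of splitting 0.5/0.5.

```python
# Distance-weighted split ratios for a variable optical splitter (toy model).

def split_ratios(distances_km, fiber_loss_db_per_km=0.3):
    # Linear-scale loss factor each branch must overcome to reach its ONU.
    need = [10 ** (fiber_loss_db_per_km * d / 10) for d in distances_km]
    total = sum(need)
    return [n / total for n in need]

print(split_ratios([5, 20]))    # the 20 km ONU receives the larger share
```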
MDP: Reliable File Transfer for Space Missions
NASA Technical Reports Server (NTRS)
Rash, James; Criscuolo, Ed; Hogie, Keith; Parise, Ron; Hennessy, Joseph F. (Technical Monitor)
2002-01-01
This paper presents work being done at NASA/GSFC by the Operating Missions as Nodes on the Internet (OMNI) project to demonstrate the application of the Multicast Dissemination Protocol (MDP) to space missions to reliably transfer files. This work builds on previous work by the OMNI project to apply Internet communication technologies to space communication. The goal of this effort is to provide an inexpensive, reliable, standard, and interoperable mechanism for transferring files in the space communication environment. Limited bandwidth, noise, delay, intermittent connectivity, link asymmetry, and one-way links are all possible issues for space missions. Although these are link-layer issues, they can have a profound effect on the performance of transport and application level protocols. MDP, a UDP-based reliable file transfer protocol, was designed for multicast environments which have to address these same issues, and it has done so successfully. Developed by the Naval Research Lab in the mid 1990's, MDP is now in daily use by both the US Post Office and the DoD. This paper describes the use of MDP to provide automated end-to-end data flow for space missions. It examines the results of a parametric study of MDP in a simulated space link environment and discusses the results in terms of their implications for space missions. Lessons learned are addressed, which suggest minor enhancements to the MDP user interface to add specific features for space mission requirements, such as dynamic control of data rate, and a checkpoint/resume capability. These are features that are provided for in the protocol, but are not implemented in the sample MDP application that was provided. A brief look is also taken at the status of standardization. A version of MDP known as NORM (NACK-Oriented Reliable Multicast) is in the process of becoming an IETF standard.
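The NACK-based repair mechanism that underlies MDP and NORM can be illustrated with a toy simulation. This is not the MDP code or API; the block count, receiver count and loss probability are invented, and real implementations add flow control, FEC and timers.

```python
# Toy model of NACK-driven repair in reliable multicast: the sender multicasts
# blocks, receivers report what is still missing, and only the union of the
# missing blocks is retransmitted. All parameters are illustrative.
import random

def simulate_nack_repair(num_blocks=100, receivers=3, loss=0.1, seed=1):
    random.seed(seed)
    have = [set() for _ in range(receivers)]        # blocks each receiver holds
    pending = set(range(num_blocks))                # blocks to (re)multicast
    rounds = 0
    while any(len(h) < num_blocks for h in have):
        rounds += 1
        for b in pending:                           # one multicast send per block
            for h in have:
                if random.random() > loss:          # independent loss per receiver
                    h.add(b)
        # Receivers NACK their missing blocks; the sender repairs the union.
        pending = set().union(*(set(range(num_blocks)) - h for h in have))
    return rounds

print("transfer completed after", simulate_nack_repair(), "rounds")
```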
47 CFR 73.1201 - Station identification.
Code of Federal Regulations, 2010 CFR
2010-10-01
... offerings. Television and Class A television broadcast stations may make these announcements visually or... multicast audio programming streams, in a manner that appropriately alerts its audience to the fact that it is listening to a digital audio broadcast. No other insertion between the station's call letters and...
Back pressure based multicast scheduling for fair bandwidth allocation.
Sarkar, Saswati; Tassiulas, Leandros
2005-09-01
We study the fair allocation of bandwidth in multicast networks with multirate capabilities. In multirate transmission, each source encodes its signal in layers. The lowest layer contains the most important information and all receivers of a session should receive it. If a receiver's data path has additional bandwidth, it receives higher layers which leads to a better quality of reception. The bandwidth allocation objective is to distribute the layers fairly. We present a computationally simple, decentralized scheduling policy that attains the maxmin fair rates without using any knowledge of traffic statistics and layer bandwidths. This policy learns the congestion level from the queue lengths at the nodes, and adapts the packet transmissions accordingly. When the network is congested, packets are dropped from the higher layers; therefore, the more important lower layers suffer negligible packet loss. We present analytical and simulation results that guarantee the maxmin fairness of the resulting rate allocation, and upper bound the packet loss rates for different layers.
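The congestion-aware, layer-protecting behavior described above can be sketched as follows. This is a toy illustration, not the paper's scheduling policy; the weights, buffer size and drop rule are assumptions.

```python
# Toy sketch of queue-length-driven scheduling for layered multicast: lower
# layers carry more weight, and when the buffer overflows packets are dropped
# from the highest backlogged layer first, so the base layer suffers least.
from collections import deque

LAYERS = 3
WEIGHT = [4, 2, 1]                 # importance of layers 0 (base) .. 2 (enhancement)
BUFFER = 50                        # total packets this node may hold (illustrative)
queues = [deque() for _ in range(LAYERS)]

def enqueue(layer, pkt):
    """Admit a packet; if the node is full, drop from the highest backlogged layer."""
    if sum(len(q) for q in queues) >= BUFFER:
        for l in reversed(range(LAYERS)):          # higher layers dropped first
            if queues[l]:
                queues[l].popleft()
                break
    queues[layer].append(pkt)

def schedule():
    """Transmit from the layer with the largest weighted backlog."""
    l = max(range(LAYERS), key=lambda i: WEIGHT[i] * len(queues[i]))
    return queues[l].popleft() if queues[l] else None
```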
Secure Multicast Tree Structure Generation Method for Directed Diffusion Using A* Algorithms
NASA Astrophysics Data System (ADS)
Kim, Jin Myoung; Lee, Hae Young; Cho, Tae Ho
The application of wireless sensor networks to areas such as combat field surveillance, terrorist tracking, and highway traffic monitoring requires secure communication among the sensor nodes within the networks. Logical key hierarchy (LKH) is a tree-based key management model which provides secure group communication. When a sensor node is added to or evicted from the communication group, LKH updates the group key in order to ensure the security of the communications. In order to efficiently update the group key in directed diffusion, we propose a method for secure multicast tree structure generation, an extension to LKH that reduces the number of re-keying messages by considering the addition and eviction ratios of the history data. For the generation of the proposed key tree structure, the A* algorithm is applied, in which the branching factor at each level can take on different values. The experimental results demonstrate the efficiency of the proposed key tree structure against existing key tree structures with fixed branching factors.
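A rough sense of why branching factors matter for re-keying cost is given by the sketch below. It is a back-of-the-envelope estimate in the spirit of LKH, not the paper's A* procedure, and the counting convention used is an assumption.

```python
# Rough estimate of re-keying messages in an LKH-style key tree when one
# member leaves: every key on the leaf-to-root path is replaced, and each
# replacement is multicast once per remaining child subtree of that node.

def lkh_leave_rekey_messages(branching):
    """branching: per-level branching factors, root level first."""
    # Each path node re-encrypts its new key for all children; the node whose
    # child was evicted has one fewer recipient.
    return sum(branching) - 1

def group_size(branching):
    n = 1
    for b in branching:
        n *= b
    return n

# Deeper trees and flatter trees trade depth against per-level fan-out:
print(lkh_leave_rekey_messages([2, 2, 2, 2]), "messages for", group_size([2, 2, 2, 2]), "members")
print(lkh_leave_rekey_messages([16]), "messages for", group_size([16]), "members")
```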
Context-based user grouping for multi-casting in heterogeneous radio networks
NASA Astrophysics Data System (ADS)
Mannweiler, C.; Klein, A.; Schneider, J.; Schotten, H. D.
2011-08-01
Along with the rise of sophisticated smartphones and smart spaces, the availability of both static and dynamic context information has steadily been increasing in recent years. Due to the popularity of social networks, these data are complemented by profile information about individual users. Making use of this information by classifying users in wireless networks enables targeted content and advertisement delivery as well as optimizing network resources, in particular bandwidth utilization, by facilitating group-based multi-casting. In this paper, we present the design and implementation of a web service for advanced user classification based on user, network, and environmental context information. The service employs simple and advanced clustering algorithms for forming classes of users. Available service functionalities include group formation, context-aware adaptation, and deletion as well as the exposure of group characteristics. Moreover, the results of a performance evaluation, where the service has been integrated in a simulator modeling user behavior in heterogeneous wireless systems, are presented.
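The grouping step can be pictured with a small clustering sketch. The features (position and requested bit-rate) and the choice of k-means are illustrative assumptions; the described service also uses network and environmental context and more advanced algorithms.

```python
# Minimal sketch of context-based user grouping: cluster users by simple
# context vectors so each cluster can be served by one multicast group.
import numpy as np

def kmeans(points: np.ndarray, k: int, iters: int = 20, seed: int = 0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each user to the nearest cluster center
        labels = np.argmin(((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):                          # move centers to cluster means
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

users = np.array([[0.1, 0.2, 2.0], [0.2, 0.1, 2.2],   # [x, y, Mbit/s requested]
                  [5.0, 5.1, 0.5], [5.2, 4.9, 0.6]])
labels, _ = kmeans(users, k=2)
print(labels)   # users 0,1 and users 2,3 fall into separate multicast groups
```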
The Development of Interactive Distance Learning in Taiwan: Challenges and Prospects.
ERIC Educational Resources Information Center
Chu, Clarence T.
1999-01-01
Describes three types of interactive distance-education systems under development in Taiwan: real-time multicast systems; virtual-classroom systems; and curriculum-on-demand systems. Discusses the use of telecommunications and computer technology in higher education, problems and challenges, and future prospects. (Author/LRW)
Australian DefenceScience. Volume 16, Number 1, Autumn
2008-01-01
are carried via VOIP technology, and multicast IP traffic for audio-visual communications is also supported. The SSATIN system overall is seen to...Artificial Intelligence and Soft Computing Palma de Mallorca, Spain http://iasted.com/conferences/home-628.html 1 - 3 Sep 2008 Visualisation, Imaging and
Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications
NASA Astrophysics Data System (ADS)
Zu, Yue
Convex optimization problems can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems that exchange information among connected neighbors, which leads to a great improvement in system fault tolerance. Thus, a task within a multi-agent system can be completed in the presence of partial agent failures. By problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that can be solved in sequence or in parallel. Hence, the computational complexity is greatly reduced by a distributed algorithm in a multi-agent system. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, which overcomes the drawbacks of using multicast due to bandwidth limitations. Distributed algorithms have been applied to solving a variety of real-world problems. Our research focuses on the framework and local optimizer design in practical engineering applications. In the first application, we propose a multi-sensor and multi-agent scheme for spatial motion estimation of a rigid body. Estimation performance is improved in terms of accuracy and convergence speed. Second, we develop a cyber-physical system and implement distributed computation devices to optimize the in-building evacuation path when a hazard occurs. The proposed Bellman-Ford Dual-Subgradient path planning method relieves congestion in corridors and exit areas. In the third project, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time. The optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation. Moreover, a hybrid control scheme is presented for highway network travel time minimization. Compared with the uncontrolled case or a conventional highway traffic control strategy, the proposed hybrid control strategy greatly reduces total travel time on the test highway network.
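The neighbor-exchange principle behind such distributed algorithms can be shown with a consensus-averaging sketch. This is a generic illustration, not one of the dissertation's optimizers; the graph, step size and values are invented.

```python
# Minimal sketch of distributed computation by neighbor exchange: each agent
# repeatedly averages with its neighbors and all agents converge to the
# network-wide mean without a central coordinator.
import numpy as np

def consensus_average(values, neighbors, steps=100, alpha=0.3):
    x = np.array(values, dtype=float)
    for _ in range(steps):
        new_x = x.copy()
        for i, nbrs in neighbors.items():           # agents talk only to neighbors
            new_x[i] = x[i] + alpha * sum(x[j] - x[i] for j in nbrs)
        x = new_x
    return x

neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a line of four agents
print(consensus_average([1.0, 5.0, 3.0, 7.0], neighbors))   # all approach 4.0
```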
QoS Adaptation in Multimedia Multicast Conference Applications for E-Learning Services
ERIC Educational Resources Information Center
Deusdado, Sérgio; Carvalho, Paulo
2006-01-01
The evolution of the World Wide Web service has incorporated new distributed multimedia conference applications, powering a new generation of e-learning development and allowing improved interactivity and prohuman relations. Groupware applications are increasingly representative in the Internet home applications market, however, the Quality of…
Internet technologies and requirements for telemedicine
NASA Technical Reports Server (NTRS)
Lamaster, H.; Meylor, J.; Meylor, F.
1997-01-01
Internet technologies are briefly introduced and those applicable for telemedicine are reviewed. Multicast internet technologies are described. The National Aeronautics and Space Administration (NASA) 'Telemedicine Space-bridge to Russia' project is described and used to derive requirements for internet telemedicine. Telemedicine privacy and Quality of Service (QoS) requirements are described.
47 CFR 76.66 - Satellite broadcast signal carriage.
Code of Federal Regulations, 2010 CFR
2010-10-01
... free over-the-air signal, including multicast and high definition digital signals. (c) Election cycle... first retransmission consent-mandatory carriage election cycle shall be for a four-year period... carriage election cycle, and all cycles thereafter, shall be for a period of three years (e.g. the second...
47 CFR 73.1201 - Station identification.
Code of Federal Regulations, 2014 CFR
2014-10-01
...; Provided, That the name of the licensee, the station's frequency, the station's channel number, as stated... number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station with major channel number 26 may use 26.1 to identify...
47 CFR 73.1201 - Station identification.
Code of Federal Regulations, 2013 CFR
2013-10-01
...; Provided, That the name of the licensee, the station's frequency, the station's channel number, as stated... number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station with major channel number 26 may use 26.1 to identify...
47 CFR 73.1201 - Station identification.
Code of Federal Regulations, 2011 CFR
2011-10-01
...; Provided, That the name of the licensee, the station's frequency, the station's channel number, as stated... number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station with major channel number 26 may use 26.1 to identify...
47 CFR 73.1201 - Station identification.
Code of Federal Regulations, 2012 CFR
2012-10-01
...; Provided, That the name of the licensee, the station's frequency, the station's channel number, as stated... number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station with major channel number 26 may use 26.1 to identify...
An approach to verification and validation of a reliable multicasting protocol: Extended Abstract
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1995-01-01
This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. This initial version did not handle off-nominal cases such as network partitions or site failures. Meanwhile, the V&V team concurrently developed a formal model of the requirements using a variant of SCR-based state tables. Based on these requirements tables, the V&V team developed test cases to exercise the implementation. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test in the model and implementation agreed, then the test either found a potential problem or verified a required behavior. However, if the execution of a test was different in the model and implementation, then the differences helped identify inconsistencies between the model and implementation. In either case, the dialogue between both teams drove the co-evolution of the model and implementation. We have found that this interactive, iterative approach to development allows software designers to focus on delivery of nominal functionality while the V&V team can focus on analysis of off nominal cases. Testing serves as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP. Although RMP has provided our research effort with a rich set of test cases, it also has practical applications within NASA. For example, RMP is being considered for use in the NASA EOSDIS project due to its significant performance benefits in applications that need to replicate large amounts of data to many network sites.
Next Generation Integrated Environment for Collaborative Work Across Internets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harvey B. Newman
2009-02-24
We are now well-advanced in our development, prototyping and deployment of a high performance next generation Integrated Environment for Collaborative Work. The system, aimed at using the capability of ESnet and Internet2 for rapid data exchange, is based on the Virtual Room Videoconferencing System (VRVS) developed by Caltech. The VRVS system has been chosen by the Internet2 Digital Video (I2-DV) Initiative as a preferred foundation for the development of advanced video, audio and multimedia collaborative applications by the Internet2 community. Today, the system supports high-end, broadcast-quality interactivity, while enabling a wide variety of clients (Mbone, H.323) to participate in the same conference by running different standard protocols in different contexts with different bandwidth connection limitations, has a fully Web-integrated user interface, developers and administrative APIs, a widely scalable video network topology based on both multicast domains and unicast tunnels, and demonstrated multiplatform support. This has led to its rapidly expanding production use for national and international scientific collaborations in more than 60 countries. We are also in the process of creating a 'testbed video network' and developing the necessary middleware to support a set of new and essential requirements for rapid data exchange, and a high level of interactivity in large-scale scientific collaborations. These include a set of tunable, scalable differentiated network services adapted to each of the data streams associated with a large number of collaborative sessions, policy-based and network state-based resource scheduling, authentication, and optional encryption to maintain confidentiality of inter-personal communications. High performance testbed video networks will be established in ESnet and Internet2 to test and tune the implementation, using a few target application-sets.
The Development of CyberLearning in Dual-Mode: Higher Education Institutions in Taiwan.
ERIC Educational Resources Information Center
Chen, Yau Jane
2002-01-01
Open and distance education in Taiwan has evolved into cyberlearning. Over half (56 percent) of the conventional universities and colleges have been upgraded to dual-mode institutions offering real-time multicast instructional systems using videoconferencing, cable television, virtual classrooms, and curriculum-on-demand systems. The Ministry of…
Digital Video and the Internet: A Powerful Combination.
ERIC Educational Resources Information Center
Barron, Ann E.; Orwig, Gary W.
1995-01-01
Provides an overview of digital video and outlines hardware and software necessary for interactive training on the World Wide Web and for videoconferences via the Internet. Lists sites providing additional information on digital video, on CU-SeeMe software, and on MBONE (Multicast BackBONE), a technology that permits real-time transmission of…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-30
...'s channel number, as stated on the station's license, and/or the station's network affiliation may... Stations, choosing to include the station's channel number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station...
Multimedia C for Remote Language Teaching over SuperJANET.
ERIC Educational Resources Information Center
Matthews, E.; And Others
1996-01-01
Describes work carried out as part of a remote language teaching research investigation, which is looking into the use of multicast, multimedia conferencing over SuperJANET. The fundamental idea is to investigate the feasibility of sharing language teaching resources among universities within the United Kingdom by using the broadband SuperJANET…
37 CFR 386.2 - Royalty fee for secondary transmission by satellite carriers.
Code of Federal Regulations, 2011 CFR
2011-07-01
... BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES ADJUSTMENT OF ROYALTY FEES FOR... a given month. (2) In the case of a station engaged in digital multicasting, the rates set forth in paragraph (b) of this section shall apply to each digital stream that a satellite carrier or distributor...
37 CFR 386.2 - Royalty fee for secondary transmission by satellite carriers.
Code of Federal Regulations, 2013 CFR
2013-07-01
... BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES ADJUSTMENT OF ROYALTY FEES FOR... a given month. (2) In the case of a station engaged in digital multicasting, the rates set forth in paragraph (b) of this section shall apply to each digital stream that a satellite carrier or distributor...
75 FR 53198 - Rate Adjustment for the Satellite Carrier Compulsory License
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-31
... LIBRARY OF CONGRESS Copyright Royalty Board 37 CFR Part 386 [Docket No. 2010-4 CRB Satellite Rate] Rate Adjustment for the Satellite Carrier Compulsory License AGENCY: Copyright Royalty Board, Library... last day of a given month. (2) In the case of a station engaged in digital multicasting, the rates set...
37 CFR 386.2 - Royalty fee for secondary transmission by satellite carriers.
Code of Federal Regulations, 2014 CFR
2014-07-01
... BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES ADJUSTMENT OF ROYALTY FEES FOR... a given month. (2) In the case of a station engaged in digital multicasting, the rates set forth in paragraph (b) of this section shall apply to each digital stream that a satellite carrier or distributor...
37 CFR 386.2 - Royalty fee for secondary transmission by satellite carriers.
Code of Federal Regulations, 2012 CFR
2012-07-01
... BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES ADJUSTMENT OF ROYALTY FEES FOR... a given month. (2) In the case of a station engaged in digital multicasting, the rates set forth in paragraph (b) of this section shall apply to each digital stream that a satellite carrier or distributor...
Multipoint Multimedia Conferencing System with Group Awareness Support and Remote Management
ERIC Educational Resources Information Center
Osawa, Noritaka; Asai, Kikuo
2008-01-01
A multipoint, multimedia conferencing system called FocusShare is described that uses IPv6/IPv4 multicasting for real-time collaboration, enabling video, audio, and group awareness information to be shared. Multiple telepointers provide group awareness information and make it easy to share attention and intention. In addition to pointing with the…
Using Interactive Broadband Multicasting in a Museum Lifelong Learning Program.
ERIC Educational Resources Information Center
Steinbach, Leonard
The Cleveland Museum of Art has embarked on an innovative approach for delivering high quality video-on-demand and live interactive cultural programming, along with Web-based complementary material, to seniors in assisted living residence facilities, community-based centers, and disabled persons in their homes. The project is made possible in part…
Cooperation and information replication in wireless networks.
Poularakis, Konstantinos; Tassiulas, Leandros
2016-03-06
A significant portion of today's network traffic is due to recurring downloads of a few popular contents. It has been observed that replicating the latter in caches installed at network edges-close to users-can drastically reduce network bandwidth usage and improve content access delay. Such caching architectures are gaining increasing interest in recent years as a way of dealing with the explosive traffic growth, fuelled further by the downward slope in storage space price. In this work, we provide an overview of caching with a particular emphasis on emerging network architectures that enable caching at the radio access network. In this context, novel challenges arise due to the broadcast nature of the wireless medium, which allows simultaneously serving multiple users tuned into a multicast stream, and the mobility of the users who may be frequently handed off from one cell tower to another. Existing results indicate that caching at the wireless edge has a great potential in removing bottlenecks on the wired backbone networks. Taking into consideration the schedule of multicast service and mobility profiles is crucial to extract maximum benefit in network performance. © 2016 The Author(s).
Protocol Architecture Model Report
NASA Technical Reports Server (NTRS)
Dhas, Chris
2000-01-01
NASA's Glenn Research Center (GRC) defines and develops advanced technology for high priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to examine protocols and architectures for an In-Space Internet Node. CNS has developed a methodology for network reference models to support NASA's four mission areas: Earth Science, Space Science, Human Exploration and Development of Space (HEDS), and Aerospace Technology. This report applies the methodology to three space Internet-based communications scenarios for future missions. CNS has conceptualized, designed, and developed space Internet-based communications protocols and architectures for each of the independent scenarios. The scenarios are: Scenario 1: Unicast communications between a Low-Earth-Orbit (LEO) spacecraft in-space Internet node and a ground terminal Internet node via a Tracking and Data Relay Satellite (TDRS) transfer; Scenario 2: Unicast communications between a Low-Earth-Orbit (LEO) International Space Station and a ground terminal Internet node via a TDRS transfer; Scenario 3: Multicast Communications (or "Multicasting"), 1 Spacecraft to N Ground Receivers, N Ground Transmitters to 1 Ground Receiver via a Spacecraft.
Lee, Chaewoo
2014-01-01
The advancement in wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862
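The flavor of the layer-to-MCS assignment problem can be conveyed with a brute-force toy version. This is not the paper's ILP or the 802.16m parameterization; the rates, layer sizes, user channel capabilities and slot budget are all made up.

```python
# Toy layer-to-MCS assignment: pick an MCS per SVC layer so the slot budget is
# met and the total number of layers the users can decode is maximized.
from itertools import product

MCS_RATE = [1.0, 2.0, 4.0]          # bits/slot for MCS 0 (robust) .. 2 (fragile)
USER_MAX_MCS = [0, 0, 1, 2, 2]      # best MCS each user's channel can decode
LAYER_BITS = [300, 200, 200]        # per-frame size of base + two enhancement layers
SLOT_BUDGET = 500

def utility(assignment):
    slots = sum(bits / MCS_RATE[m] for bits, m in zip(LAYER_BITS, assignment))
    if slots > SLOT_BUDGET:
        return -1                                   # infeasible assignment
    score = 0
    for cap in USER_MAX_MCS:
        for m in assignment:                        # layers are useful only in order
            if m > cap:
                break
            score += 1
    return score

best = max(product(range(len(MCS_RATE)), repeat=len(LAYER_BITS)), key=utility)
print("MCS per layer:", best, "layers delivered in total:", utility(best))
```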
CLON: Overlay Networks and Gossip Protocols for Cloud Environments
NASA Astrophysics Data System (ADS)
Matos, Miguel; Sousa, António; Pereira, José; Oliveira, Rui; Deliot, Eric; Murray, Paul
Although epidemic or gossip-based multicast is a robust and scalable approach to reliable data dissemination, its inherent redundancy results in high resource consumption on both links and nodes. This problem is aggravated in settings that have costlier or resource constrained links as happens in Cloud Computing infrastructures composed by several interconnected data centers across the globe.
A Security Architecture for Fault-Tolerant Systems
1993-06-03
aspect of our effort to achieve better performance is integrating the system into microkernel-based operating systems. 4 Summary and discussion In...135-171, June 1983. [vRBC+92] R. van Renesse, K. Birman, R. Cooper, B. Glade, and P. Stephenson. Reliable multicast between microkernels. In...Proceedings of the USENIX Microkernels and Other Kernel Architectures Workshop, April 1992.
Lightweight and Statistical Techniques for Petascale Debugging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Barton
2014-06-30
This project investigated novel techniques for debugging scientific applications on petascale architectures. In particular, we developed lightweight tools that narrow the problem space when bugs are encountered. We also developed techniques that either limit the number of tasks and the code regions to which a developer must apply a traditional debugger or that apply statistical techniques to provide direct suggestions of the location and type of error. We extend previous work on the Stack Trace Analysis Tool (STAT), which has already demonstrated scalability to over one hundred thousand MPI tasks. We also extended statistical techniques developed to isolate programming errors in widely used sequential or threaded applications in the Cooperative Bug Isolation (CBI) project to large-scale parallel applications. Overall, our research substantially improved productivity on petascale platforms through a tool set for debugging that complements existing commercial tools. Previously, Office of Science application developers relied either on primitive manual debugging techniques based on printf or on tools, such as TotalView, that do not scale beyond a few thousand processors. However, bugs often arise at scale, and substantial effort and computation cycles are wasted in either reproducing the problem in a smaller run that can be analyzed with the traditional tools or in repeated runs at scale that use the primitive techniques. New techniques that work at scale and automate the process of identifying the root cause of errors were needed. These techniques significantly reduced the time spent debugging petascale applications, thus leading to a greater overall amount of time for application scientists to pursue the scientific objectives for which the systems are purchased. We developed a new paradigm for debugging at scale: techniques that reduced the debugging scenario to a scale suitable for traditional debuggers, e.g., by narrowing the search for the root-cause analysis to a small set of nodes or by identifying equivalence classes of nodes and sampling our debug targets from them. We implemented these techniques as lightweight tools that efficiently work at the full scale of the target machine. We explored four lightweight debugging refinements: generic classification parameters, such as stack traces; application-specific classification parameters, such as global variables; statistical data acquisition techniques; and machine learning based approaches to perform root cause analysis. Work done under this project can be divided into two categories: new algorithms and techniques for scalable debugging, and foundation infrastructure work on our MRNet multicast-reduction framework for scalability and the Dyninst binary analysis and instrumentation toolkits.
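The equivalence-class idea mentioned above is easy to picture. The sketch below groups tasks by a stack-trace signature and picks one representative per class to hand to a conventional debugger; the traces are fabricated strings, whereas STAT operates on real process stacks.

```python
# Minimal sketch of debugging by equivalence classes: group tasks by their
# call-stack signature and debug only one representative per class.

def equivalence_classes(task_traces):
    """Map each distinct stack trace to the list of task ranks showing it."""
    classes = {}
    for rank, trace in task_traces.items():
        classes.setdefault(trace, []).append(rank)
    return classes

traces = {0: "main>solve>mpi_wait", 1: "main>solve>mpi_wait",
          2: "main>io_write", 3: "main>solve>mpi_wait"}
for trace, ranks in equivalence_classes(traces).items():
    print(f"{len(ranks)} task(s) at {trace}; debug representative: rank {ranks[0]}")
```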
Load balancing for massively-parallel soft-real-time systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hailperin, M.
1988-09-01
Global load balancing, if practical, would allow the effective use of massively-parallel ensemble architectures for large soft-real-time problems. The challenge is to replace quick global communications, which are impractical in a massively-parallel system, with statistical techniques. In this vein, the author proposes a novel approach to decentralized load balancing based on statistical time-series analysis. Each site estimates the system-wide average load using information about past loads of individual sites and attempts to equal that average. This estimation process is practical because the soft-real-time systems of interest naturally exhibit loads that are periodic, in a statistical sense akin to seasonality in econometrics. It is shown how this load-characterization technique can be the foundation for a load-balancing system in an architecture employing cut-through routing and an efficient multicast protocol.
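The seasonal estimation idea can be sketched as follows. This is not the author's estimator; the period length and load samples are invented, and a real system would combine the estimate with a migration policy.

```python
# Minimal sketch of periodic (seasonal) load estimation: predict the next
# system-wide average load from past samples taken at the same phase of the
# load cycle, then shift work toward or away from that estimate.

def seasonal_estimate(history, period):
    """Predict the next sample from past samples at the same phase of the cycle."""
    phase = len(history) % period
    same_phase = [history[i] for i in range(phase, len(history), period)]
    return sum(same_phase) / len(same_phase) if same_phase else sum(history) / len(history)

# Two full cycles of a period-4 load pattern; the next sample has phase 0.
loads = [10, 30, 50, 30, 12, 28, 52, 31]
print(seasonal_estimate(loads, period=4))   # ~11, close to the earlier phase-0 samples
```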
OAM-labeled free-space optical flow routing.
Gao, Shecheng; Lei, Ting; Li, Yangjin; Yuan, Yangsheng; Xie, Zhenwei; Li, Zhaohui; Yuan, Xiaocong
2016-09-19
Space-division multiplexing allows unprecedented scaling of bandwidth density for optical communication. Routing spatial channels among transmission ports is critical for future scalable optical networks; however, there is still no characteristic parameter to label the overlapped optical carriers. Here we propose a free-space optical flow routing (OFR) scheme that uses optical orbital angular momentum (OAM) states to label optical flows and simultaneously steer each flow according to its OAM state. With an OAM multiplexer and a reconfigurable OAM demultiplexer, massive individual optical flows can be routed to the demanded optical ports. In the routing process, the OAM beams act as data carriers while their topological charges act as each carrier's label. Using this scheme, we experimentally demonstrate switching, multicasting and filtering network functions by simultaneously steering 10 input optical flows on demand to 10 output ports. The demonstration of data-carrying OFR with non-return-to-zero signals shows that this process enables synchronous processing of massive spatial channels and a flexible optical network.
High Performance Computing Multicast
2012-02-01
responsiveness, first-tier applications often implement replicated in-memory key-value stores, using them to store state or to cache data from services...alternative that replicates data, combines agreement on update ordering with amnesia freedom, and supports both good scalability and fast response.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-29
... to offer remote multi-cast ITCH Wave Ports for clients co-located at other third party data centers... delivery of third party market data to market center clients via a wireless network using millimeter wave... Multi- cast ITCH Wave Ports for clients co-located at other third-party data centers, through which...
Design and Implementation of Replicated Object Layer
NASA Technical Reports Server (NTRS)
Koka, Sudhir
1996-01-01
One of the widely used techniques for constructing fault-tolerant applications is the replication of resources, so that if one copy fails, sufficient copies may still remain operational to allow the application to continue to function. This thesis involves the design and implementation of an object-oriented framework for replicating data on multiple sites and across different platforms. Our approach, called the Replicated Object Layer (ROL), provides a mechanism for consistent replication of data over dynamic networks. ROL uses the Reliable Multicast Protocol (RMP) as a communication protocol that provides reliable delivery, serialization and fault tolerance. Besides providing type registration, this layer facilitates distributed atomic transactions on replicated data. A novel algorithm called the RMP Commit Protocol, which commits transactions efficiently in a reliable multicast environment, is presented. ROL provides recovery procedures to ensure that site and communication failures do not corrupt persistent data, and makes the system fault tolerant to network partitions. ROL will facilitate building distributed fault-tolerant applications by performing the burdensome details of replica consistency operations and making them completely transparent to the application. Replicated databases are a major class of applications which could be built on top of ROL.
Dhamodharan, Udaya Suriya Raj Kumar; Vayanaperumal, Rajamani
2015-01-01
Wireless sensor networks urgently require protection, and highly critical attacks of various kinds against them have been documented by many researchers. The Sybil attack is a massively destructive attack against the sensor network in which numerous forged identities alongside genuine ones are used to gain illegal entry into the network. Detecting Sybil, sinkhole, and wormhole attacks during multicasting is a demanding task in a wireless sensor network. Basically, a Sybil attack means a node which fakes its identity to other nodes. Communication with an illegal node results in data loss and becomes dangerous in the network. The existing Random Password Comparison method offers only a scheme which verifies node identities by analyzing the neighbors. A survey of the Sybil attack was carried out with the objective of resolving this problem. Based on this survey, we propose a combined CAM-PVM (compare and match-position verification method) with MAP (message authentication and passing) for detecting, eliminating, and eventually preventing the entry of Sybil nodes in the network. We propose a scheme of assuring security for wireless sensor networks, to deal with attacks of these kinds in unicasting and multicasting. PMID:26236773
Toward fidelity between specification and implementation
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.; Morrison, Jeff; Wu, Yunqing
1994-01-01
This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.
A Novel Group Coordination Protocol for Collaborative Multimedia Systems
1998-01-01
technology have advanced considerably, efficient group coordination support for applications characterized by synchronous and wide-area groupwork is...As a component within a general coordination architecture for many-to-many groupwork, floor control coexists with protocols for reliable ordered...multicast and media synchronization at a sub-application level. Orchestration of multiparty groupwork with fine-grained and fair floor control is an
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-28
... that is in test mode in excess of one. (c)-(f) No change. (g) Other Port Fees Remote Multi-cast ITCH... environment to test upcoming NASDAQ releases and product enhancements, as well as test software prior to... public in accordance with the provisions of 5 U.S.C. 552, will be available for Web site viewing and...
Multi-casting approach for vascular networks in cellularized hydrogels.
Justin, Alexander W; Brooks, Roger A; Markaki, Athina E
2016-12-01
Vascularization is essential for living tissue and remains a major challenge in the field of tissue engineering. A lack of a perfusable channel network within a large and densely populated tissue engineered construct leads to necrotic core formation, preventing fabrication of functional tissues and organs. We report a new method for producing a hierarchical, three-dimensional (3D) and perfusable vasculature in a large, cellularized fibrin hydrogel. Bifurcating channels, varying in size from 1 mm to 200-250 µm, are formed using a novel process in which we convert a 3D printed thermoplastic material into a gelatin network template, by way of an intermediate alginate hydrogel. This enables a CAD-based model design, which is highly customizable, reproducible, and which can yield highly complex architectures, to be made into a removable material, which can be used in cellular environments. Our approach yields constructs with a uniform and high density of cells in the bulk, made from bioactive collagen and fibrin hydrogels. Using standard cell staining and immuno-histochemistry techniques, we showed good cell seeding and the presence of tight junctions between channel endothelial cells, and high cell viability and cell spreading in the bulk hydrogel. © 2016 The Authors.
Enhanced Performance & Functionality of Tunable Delay Lines
2012-08-01
[Excerpted figure captions: experimental setup with a transmitter generating 80-Gb/s RZ-DQPSK, 40-Gb/s RZ-DPSK and 40-Gb/s RZ-OOK modulation formats; power penalty with respect to B2B of each channel for 2-, 4-, and 8-fold multicasting; pulsewidth as a function of DGD along with eye diagrams; concept of a distributed optical network with NOLMs.]
Scalable Technology for a New Generation of Collaborative Applications
2007-04-01
of the International Symposium on Distributed Computing (DISC), Cracow, Poland, September 2005. Classic Paxos vs. Fast Paxos: Caveat Emptor, Flavio...a reliable and fast multicast primitive to layer under high-level abstractions such as...latency across dimensions as varied as group size [10, 17]...servers, networked via fast, dedicated interconnects. The system...to subscribe to a fraction of the equities on...the software stack running on a single
Saguaro: A Distributed Operating System Based on Pools of Servers.
1988-03-25
asynchronous message passing, multicast, and semaphores are supported. We have found this flexibility to be very useful for distributed programming. The...variety of communication primitives provided by SR has facilitated the research of Stella Atkins, who was a visiting professor at Arizona during Spring...data bits in a raw communication channel to help keep the source and destination synchronized, Psync explicitly embeds timing information drawn from the
Extensible Interest Management for Scalable Persistent Distributed Virtual Environments
1999-12-01
(Calvin, Cebula et al. 1995; Morse, Bic et al. 2000) uses a two grid, with each grid cell having two multicast addresses. An entity expresses interest...[figure: entity distribution for experimental runs]...Multiple Users and Shared Applications with VRML. VRML 97, Monterey, CA. pp. 33-40. Calvin, J. O., D. P. Cebula, et al. (1995). Data Subscription in
Improved Lower Bounds on the Price of Stability of Undirected Network Design Games
NASA Astrophysics Data System (ADS)
Bilò, Vittorio; Caragiannis, Ioannis; Fanelli, Angelo; Monaco, Gianpiero
Bounding the price of stability of undirected network design games with fair cost allocation is a challenging open problem in the Algorithmic Game Theory research agenda. Even though the generalization of such games to directed networks is well understood in terms of the price of stability (it is exactly H_n, the n-th harmonic number, for games with n players), far less is known for network design games in undirected networks. The upper bound carries over to this case as well, while the best known lower bound is 42/23 ≈ 1.826. For more restricted but interesting variants of such games, such as broadcast and multicast games, sublogarithmic upper bounds are known, while the best known lower bound is 12/7 ≈ 1.714. In the current paper, we improve the lower bounds as follows. We break the psychological barrier of 2 by showing that the price of stability of undirected network design games is at least 348/155 ≈ 2.245. Our proof uses a recursive construction of a network design game with a simple gadget as the main building block. For broadcast and multicast games, we present new lower bounds of 20/11 ≈ 1.818 and 1.862, respectively.
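For reference, the quantities cited above can be written out explicitly; this is only a restatement of the bounds already mentioned, not a new result.

```latex
% H_n is the n-th harmonic number; PoS denotes the price of stability of
% undirected network design games with fair cost allocation.
H_n = \sum_{i=1}^{n} \frac{1}{i}, \qquad
\frac{348}{155} \approx 2.245 \;\le\; \mathrm{PoS} \;\le\; H_n, \qquad
\text{broadcast: } \mathrm{PoS} \ge \frac{20}{11} \approx 1.818, \quad
\text{multicast: } \mathrm{PoS} \ge 1.862 .
```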
A framework using cluster-based hybrid network architecture for collaborative virtual surgery.
Qin, Jing; Choi, Kup-Sze; Poon, Wai-Sang; Heng, Pheng-Ann
2009-12-01
Research on collaborative virtual environments (CVEs) opens the opportunity for simulating the cooperative work in surgical operations. It is however a challenging task to implement a high performance collaborative surgical simulation system because of the difficulty in maintaining state consistency with minimum network latencies, especially when sophisticated deformable models and haptics are involved. In this paper, an integrated framework using cluster-based hybrid network architecture is proposed to support collaborative virtual surgery. Multicast transmission is employed to transmit updated information among participants in order to reduce network latencies, while system consistency is maintained by an administrative server. Reliable multicast is implemented using distributed message acknowledgment based on cluster cooperation and sliding window technique. The robustness of the framework is guaranteed by the failure detection chain which enables smooth transition when participants join and leave the collaboration, including normal and involuntary leaving. Communication overhead is further reduced by implementing a number of management approaches such as computational policies and collaborative mechanisms. The feasibility of the proposed framework is demonstrated by successfully extending an existing standalone orthopedic surgery trainer into a collaborative simulation system. A series of experiments have been conducted to evaluate the system performance. The results demonstrate that the proposed framework is capable of supporting collaborative surgical simulation.
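The sliding-window acknowledgment idea can be illustrated with a minimal sender-side sketch. This is not the framework's protocol; the window size and member names are invented, and the real system distributes acknowledgment handling across clusters.

```python
# Minimal sliding-window sender for reliable multicast: an update leaves the
# window only after every member has acknowledged it, bounding divergence.

class SlidingWindowSender:
    def __init__(self, members, window=4):
        self.members = set(members)
        self.window = window
        self.base = 0                       # oldest unacknowledged sequence number
        self.next_seq = 0
        self.acks = {}                      # seq -> set of members that acked it

    def can_send(self):
        return self.next_seq - self.base < self.window

    def send_update(self):
        assert self.can_send(), "window full: wait for acknowledgments"
        seq = self.next_seq
        self.acks[seq] = set()
        self.next_seq += 1
        return seq                          # a real system would multicast (seq, payload)

    def on_ack(self, member, seq):
        self.acks[seq].add(member)
        while self.base in self.acks and self.acks[self.base] == self.members:
            del self.acks[self.base]        # fully acknowledged: slide the window
            self.base += 1

s = SlidingWindowSender(["A", "B", "C"])
seq = s.send_update()
for m in ["A", "B", "C"]:
    s.on_ack(m, seq)
print(s.base)   # 1: the window advanced past the fully acknowledged update
```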
2003-09-01
This restriction limits the deployment to small and medium-sized enterprises. The Internet cannot universally use DVMRP for this reason. In addition...[Master's thesis, September 2003; submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science.]
Traffic Generator (TrafficGen) Version 1.4.2: Users Guide
2016-06-01
events, the user has to enter them manually. We will research and implement a way to better define and organize the multicast addresses so they can be...the network with Transmission Control Protocol and User Datagram Protocol Internet Protocol traffic. Each node generating network traffic in an...[Contents excerpt: TrafficGen Graphical User Interface (GUI); Anatomy of the User Interface; Scenario Configuration and MGEN Files; Working with...]
GUMP: Adapting Client/Server Messaging Protocols into Peer-to-Peer Serverless Environments
2010-06-11
and other related metadata, such as message receiver ID (for supporting multiple connections) and so forth. The Proxy consumes the message and uses...the underlying discovery subsystem and multicast to process the message and translate the request into behaviour suitable for the underlying...communication, i.e. a chat. Jingle (XEP-0166) [26] is a related specification that defines an extension to the XMPP protocol for initiating and
Tactical Mobile Communications (Communications tactiques mobiles)
1999-11-01
13]. randomly at the network nodes. Each multicast group consists of the source node plus at least... Our studies do, in fact, support this conjecture. ...investigate the MMR concept in some more detail. The study was contracted to a group which...Multi-role denotes the capability to support a...through the HW- and SW-resources of the frontends can be incorporated in a task-dedicated GPU. Functions can be grouped into four categories: MMR
Multimedia Data Capture with Multicast Dissemination for Online Distance Learning
2001-12-01
Juan Gril and Dr. Don Brutzman to wrap the multiple videos in a user-friendly environment. The web pages also contain the original PowerPoint...this CD, Juan Gril, a volunteer for the Siggraph 2001 Online Committee, created web pages that match the style and functionality desired by the...leader. The Committee for 2001 consisted of Don Brutzman, Stephen Matsuba, Mike Collins, Allen Dutton, Juan Gril, Mike Hunsberger, Jerry Isdale
2002-09-01
Secure Multicast...i. Message Digests and Message Authentication Codes (MACs)...that is, the needs of the VE will determine what the design will look like (e.g., reliable vs. unreliable data communications). In general, there...Molva00] and [Abdalla00]. i. Message Digests and Message Authentication Codes (MACs) Message digests and MACs are used for data integrity verification
Robust Airborne Networking Extensions (RANGE)
2008-02-01
IMUNES [13] project, which provides an entire network stack virtualization and topology control inside a single FreeBSD machine. The emulated topology..."Multicast versus broadcast in a manet," in ADHOC-NOW, 2004, pp. 14–27. [9] J. Mukherjee, R. Atwood, "Rendezvous point relocation in protocol independent...computer with an Ethernet connection, or a Linux virtual machine on some other (e.g., Windows) operating system, should work. 2.1 Patching the source code
Optimum Guidance Law and Information Management for a Large Number of Formation Flying Spacecrafts
NASA Astrophysics Data System (ADS)
Tsuda, Yuichi; Nakasuka, Shinichi
In recent years, formation flying technique is recognized as one of the most important technologies for deep space and orbital missions that involve multiple spacecraft operations. Formation flying mission improves simultaneous observability over a wide area, redundancy and reconfigurability of the system with relatively small and low cost spacecrafts compared with the conventional single spacecraft mission. From the viewpoint of guidance and control, realizing formation flying mission usually requires tight maintenance and control of the relative distances, speeds and orientations between the member satellites. This paper studies a practical architecture for formation flight missions focusing mainly on guidance and control, and describes a new guidance algorithm for changing and keeping the relative positions and speeds of the satellites in formation. The resulting algorithm is suitable for onboard processing and gives the optimum impulsive trajectory for satellites flying closely around a certain reference orbit, that can be elliptic, parabolic or hyperbolic. Based on this guidance algorithm, this study introduces an information management methodology between the member spacecrafts which is suitable for a large formation flight architecture. Routing and multicast communication based on the wireless local area network technology are introduced. Some mathematical analyses and computer simulations will be shown in the presentation to reveal the feasibility of the proposed formation flight architecture, especially when a very large number of satellites join the formation.
Collaboration Services: Enabling Chat in Disadvantaged Grids
2014-06-01
grids in the tactical domain" [2]. The main focus of this group is to identify what we call tactical SOA foundation services. By this we mean which...Here, only IPv4 is supported, as differences relating to IPv4 and IPv6 addressing meant that this functionality was not easily extended to use IPv6 ...multicast groups. Our IPv4 implementation is fully compliant with the specification, whereas the IPv6 implementation uses our own interpretation of
Design and Implementation of the MARG Human Body Motion Tracking System
2004-10-01
OPTOTRAK from Northern Digital Inc. is a typical example of a marker-based system [10]. Another is the...technique called tunneling is used to overcome this problem. Tunneling is a software solution that runs on the end point routers/computers and allows...multicast packets to traverse the network by putting them into unicast packets. MUTUP overcomes the tunneling problem using shared memory in the
A Secure Group Communication Architecture for a Swarm of Autonomous Unmanned Aerial Vehicles
2008-03-01
members to use the same decryption key. This shared decryption key is called the Session Encryption Key (SEK) or Traffic Encryption Key (TEK...Since everyone shares the SEK, members need to hold additional Key Encryption Keys (KEK) that are used to securely distribute the SEK to each valid...managing this process. To preserve the secrecy of the multicast data, the SEK needs to be updated upon certain events such as a member joining and
NASA Technical Reports Server (NTRS)
Stoenescu, Tudor M.; Woo, Simon S.
2009-01-01
In this work, we consider information dissemination and sharing in a distributed peer-to-peer (P2P), highly dynamic communication network. In particular, we explore a network coding technique for transmission and a rank-based peer selection method for network formation. The combined approach has been shown to improve information sharing and delivery to all users when considering the challenges imposed by the space network environments.
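The network coding idea can be made concrete with a toy random linear code over GF(2). The paper's actual scheme and field size may differ; the payloads and the XOR-based combining below are purely illustrative.

```python
# Toy random linear network coding over GF(2): peers forward random XOR
# combinations of the source packets, and any k linearly independent
# combinations reconstruct the k originals by Gaussian elimination.
import random

def encode(sources, rng):
    """Return (coefficient_vector, coded_payload) for one coded packet."""
    coeffs = [rng.randint(0, 1) for _ in sources]
    if not any(coeffs):
        coeffs[rng.randrange(len(sources))] = 1      # avoid the useless all-zero combo
    payload = bytes(len(sources[0]))
    for c, s in zip(coeffs, sources):
        if c:
            payload = bytes(a ^ b for a, b in zip(payload, s))
    return coeffs, payload

def decode(packets, k):
    """Gauss-Jordan elimination over GF(2); packets is a list of (coeffs, payload)."""
    rows = [(list(c), bytearray(p)) for c, p in packets]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            raise ValueError("not enough independent packets yet")
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           bytearray(a ^ b for a, b in zip(rows[r][1], rows[col][1])))
    return [bytes(rows[i][1]) for i in range(k)]

rng = random.Random()
sources = [b"ALPHA", b"BRAVO", b"DELTA"]
received = []
while True:
    received.append(encode(sources, rng))            # a peer hears one more combination
    try:
        print(decode(received, k=3))                 # prints the three source packets
        break
    except ValueError:
        pass                                         # rank still below 3: keep listening
```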
Network connectivity enhancement by exploiting all optical multicast in semiconductor ring laser
NASA Astrophysics Data System (ADS)
Siraj, M.; Memon, M. I.; Shoaib, M.; Alshebeili, S.
2015-03-01
The use of smartphone and tablet applications will provide troops with the means to execute, control, and analyze sophisticated operations, with commanders providing crucial documents directly to troops wherever and whenever needed. Wireless mesh networks (WMNs) are a cutting-edge networking technology capable of supporting the Joint Tactical Radio System (JTRS). WMNs can provide the bandwidth needed for applications such as handheld radios and communication for airborne and ground vehicles. Routing management tasks can be handled efficiently in WMNs through a central command and control center. As the spectrum space is congested, cognitive radios are a welcome technology that will provide much-needed bandwidth. They can configure themselves, adapt to user requirements, provide dynamic spectrum access to minimize interference, and deliver optimal power output. In indoor environments there are sometimes poor-signal issues and reduced coverage. In this paper, a solution utilizing cognitive radio WMNs (CR-WMNs) over an optical network is presented by creating picocells (PCs) inside the indoor environment. The phenomenon of four-wave mixing (FWM) is exploited to generate all-optical multicast using a semiconductor ring laser (SRL). As a result, the same signal is transmitted at different wavelengths. Every PC is assigned a unique wavelength. Using CR technology in conjunction with PCs will not only solve the network coverage issue but also provide good bandwidth to the secondary users.
NASA Astrophysics Data System (ADS)
Emmerson, S. R.; Veeraraghavan, M.; Chen, S.; Ji, X.
2015-12-01
Results of a pilot deployment of a major new version of the Unidata Local Data Manager (LDM-7) are presented. The Unidata LDM was developed by the University Corporation for Atmospheric Research (UCAR) and comprises a suite of software for the distribution and local processing of data in near real-time. It is widely used in the geoscience community to distribute observational data and model output, most notably as the foundation of the Unidata Internet Data Distribution (IDD) system run by UCAR, but also in private networks operated by NOAA, NASA, USGS, etc. The current version, LDM-6, uses at least one unicast TCP connection per receiving host. With over 900 connections, the bit-rate of total outgoing IDD traffic from UCAR averages approximately 3.0 Gb/s, with peak data rates exceeding 6.6 Gb/s. Expected increases in data volume suggest that a more efficient distribution mechanism will be required in the near future. LDM-7 greatly reduces the outgoing bandwidth requirement by incorporating a recently-developed "semi-reliable" IP multicast protocol while retaining the unicast TCP mechanism for reliability. During the summer of 2015, UCAR and the University of Virginia conducted a pilot deployment of the Unidata LDM-7 among U.S. university participants with access to the Internet2 network. Results of this pilot program, along with comparisons to the existing Unidata LDM-6 system, are presented.
NASA Astrophysics Data System (ADS)
Duan, Haoran
1997-12-01
This dissertation presents the concepts, principles, performance, and implementation of input queuing and cell-scheduling modules for the Illinois Pulsar-based Optical INTerconnect (iPOINT) input-buffered Asynchronous Transfer Mode (ATM) testbed. Input queuing (IQ) ATM switches are well suited to meet the requirements of current and future ultra-broadband ATM networks. The IQ structure imposes minimum memory bandwidth requirements for cell buffering, tolerates bursty traffic, and utilizes memory efficiently for multicast traffic. The lack of efficient cell queuing and scheduling solutions has been a major barrier to building high-performance, scalable IQ-based ATM switches. This dissertation proposes a new Three-Dimensional Queue (3DQ) and a novel Matrix Unit Cell Scheduler (MUCS) to remove this barrier. 3DQ uses a linked-list architecture based on Synchronous Random Access Memory (SRAM) to combine the individual advantages of per-virtual-circuit (per-VC) queuing, priority queuing, and N-destination queuing. It avoids Head of Line (HOL) blocking and provides per-VC Quality of Service (QoS) enforcement mechanisms. Computer simulation results verify the QoS capabilities of 3DQ. For multicast traffic, 3DQ provides efficient usage of cell buffering memory by storing multicast cells only once. Further, the multicast mechanism of 3DQ prevents a congested destination port from blocking other less-loaded ports. The 3DQ principle has been prototyped in the Illinois Input Queue (iiQueue) module. Built from Field Programmable Gate Array (FPGA) devices and SRAM modules integrated on a Printed Circuit Board (PCB), iiQueue can process incoming traffic at 800 Mb/s. Using faster circuit technology, the same design is expected to operate at the OC-48 rate (2.5 Gb/s). MUCS resolves output contention by evaluating the weight index of each candidate and selecting the heaviest. It achieves near-optimal scheduling and has a very short response time. The algorithm originates from a heuristic strategy that leads to 'socially optimal' solutions, yielding a maximum number of contention-free cells being scheduled. A novel mixed digital-analog circuit has been designed to implement the MUCS core functionality. The MUCS circuit maps the cell scheduling computation to capacitor charging and discharging procedures that are conducted fully in parallel. The design has a uniform circuit structure, low interconnect counts, and low chip I/O counts. Using 2 μm CMOS technology, the design operates on a 100 MHz clock and finds a near-optimal solution within a linear processing time. The circuit has been verified at the transistor level by HSPICE simulation. During this research, a five-port IQ-based optoelectronic iPOINT ATM switch has been developed and demonstrated. It has been fully functional with an aggregate throughput of 800 Mb/s. The second-generation IQ-based switch is currently under development. Equipped with iiQueue modules and a MUCS module, the new switch system will deliver a multi-gigabit aggregate throughput, eliminate HOL blocking, provide per-VC QoS, and achieve near-100% link bandwidth utilization. Complete documentation of the input modules and trunk module for the existing testbed, and complete documentation of 3DQ, iiQueue, and MUCS for the second-generation testbed, are given in this dissertation.
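MUCS itself is a mixed digital-analog circuit, but the "pick the heaviest contention-free cell" heuristic it implements can be sketched in software as a greedy matching over an input-output weight matrix. The weight values in the example are arbitrary; this is an illustration of the heuristic, not the iPOINT implementation.

```python
def mucs_schedule(weights):
    """Greedy heaviest-first matching between input ports and output ports.

    weights[i][j] > 0 means input i has traffic queued for output j, with the
    value acting as the scheduling weight (e.g. queue occupancy or priority).
    Returns a list of contention-free (input, output) grants for one cell slot.
    """
    candidates = [(w, i, j) for i, row in enumerate(weights)
                  for j, w in enumerate(row) if w > 0]
    candidates.sort(reverse=True)          # heaviest candidates first
    used_inputs, used_outputs, grants = set(), set(), []
    for w, i, j in candidates:
        if i not in used_inputs and j not in used_outputs:
            grants.append((i, j))          # grant the heaviest remaining cell
            used_inputs.add(i)
            used_outputs.add(j)
    return grants

# Example: a 3x3 switch with per-port weights.
print(mucs_schedule([[5, 0, 2],
                     [0, 7, 1],
                     [3, 4, 0]]))          # -> [(1, 1), (0, 0)]
```

The analog circuit evaluates all candidates in parallel; the sequential loop above only conveys the selection rule.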
Development of a Web-Based Distributed Interactive Simulation (DIS) Environment Using JavaScript
2014-09-01
scripting that lets users change or interact with web content depending on user input, in contrast with server-side scripts such as PHP, Java and ... transfer, DIS usually broadcasts or multicasts its PDUs over UDP sockets. 3. JavaScript. JavaScript is the scripting language of the web, and all ... IDE) for developing desktop, mobile and web applications with Java, C++, HTML5, JavaScript and more. b. Framework. The DIS implementation of
The Use of End-to-End Multicast Measurements for Characterizing Internal Network Behavior
2002-08-01
dropping on the basis of Random Early Detection (RED) [17] is another mechanism by which packet loss may become decorrelated. It remains to be seen whether ... this mechanism will be widely deployed in communications networks. On the other hand, the use of RED to merely mark packets will not break correlations ... Tail and Random Early Detection (RED) buffer discard methods [17]. We compared the inferred loss and delay with actual probe loss and delay. We found
Polling-Based High-Bit-Rate Packet Transfer in a Microcellular Network to Allow Fast Terminals
NASA Astrophysics Data System (ADS)
Hoa, Phan Thanh; Lambertsen, Gaute; Yamada, Takahiko
A microcellular network will be a good candidate for the future broadband mobile network. It is expected to support high-bit-rate connections for many fast mobile users if handover is processed fast enough to lessen its impact on QoS requirements. One promising technique for the wireless interface in such a microcellular network is WLAN (Wireless LAN), due to its very high wireless channel rate. However, its limited mobility support must be improved before it can be extended to the microcellular environment. The limited mobility support stems from large handover latency, caused by contention-based handover to the new BS (base station) and by the delay of re-forwarding data from the old BS to the new one. This paper proposes multi-polling and a dynamic LMC (Logical Macro Cell) to reduce the above delays. A polling frame for an MT (Mobile Terminal) is sent from every BS belonging to the same LMC, a single virtual macro cell formed as a multicast group of several adjacent micro-cells in which the MT is communicating. Instead of contending for the medium of a new BS during handover, the MT responds to the polling sent from that new BS to enable the transition. Because only one BS of the LMC receives the polling ACK (acknowledgement) directly from the MT, this ACK frame has to be multicast to all BSs of the same LMC through the terrestrial network so that each BS can continue with the next polling cycle. Moreover, when an MT hands over to a new cell, its current LMC is switched over to a new corresponding LMC to avoid future contention for a new LMC. In this way, an MT can hand over between micro-cells of an LMC smoothly, because redundant resources are reserved for it at neighboring cells and it does not need to contend with others. Simulation results using the OMNeT++ simulator illustrate the performance of the multi-polling and dynamic LMC scheme in eliminating handover latency and packet loss and in keeping mobile users' throughput stable under high traffic load, although it imposes some overhead on the neighboring cells.
A Real-Time Executive for Multiple-Computer Clusters.
1984-12-01
in a real-time environment is tantamount to speed and efficiency. By effectively co-locating real-time sensors and related processing modules, real ... of which there are two kinds: multicast group address - virtually any number of node groups can be assigned a group address so they are all able ...
Guest Editor's introduction: Special issue on distributed virtual environments
NASA Astrophysics Data System (ADS)
Lea, Rodger
1998-09-01
Distributed virtual environments (DVEs) combine technology from 3D graphics, virtual reality and distributed systems to provide an interactive 3D scene that supports multiple participants. Each participant has a representation in the scene, often known as an avatar, and is free to navigate through the scene and interact with both the scene and other viewers of the scene. Changes to the scene, for example, position changes of one avatar as the associated viewer navigates through the scene, or changes to objects in the scene via manipulation, are propagated in real time to all viewers. This ensures that all viewers of a shared scene `see' the same representation of it, allowing sensible reasoning about the scene. Early work on such environments was restricted to their use in simulation, in particular in military simulation. However, over recent years a number of interesting and potentially far-reaching attempts have been made to exploit the technology for a range of other uses, including: Social spaces. Such spaces can be seen as logical extensions of the familiar text chat space. In 3D social spaces avatars, representing participants, can meet in shared 3D scenes and in addition to text chat can use visual cues and even in some cases spatial audio. Collaborative working. A number of recent projects have attempted to explore the use of DVEs to facilitate computer-supported collaborative working (CSCW), where the 3D space provides a context and work space for collaboration. Gaming. The shared 3D space is already familiar, albeit in a constrained manner, to the gaming community. DVEs are a logical superset of existing 3D games and can provide a rich framework for advanced gaming applications. e-commerce. The ability to navigate through a virtual shopping mall and to look at, and even interact with, 3D representations of articles has appealed to the e-commerce community as it searches for the best method of presenting merchandise to electronic consumers. The technology needed to support these systems crosses a number of disciplines in computer science. These include, but are certainly not limited to, real-time graphics for the accurate and realistic representation of scenes, group communications for the efficient update of shared consistent scene data, user interface modelling to exploit the use of the 3D representation and multimedia systems technology for the delivery of streamed graphics and audio-visual data into the shared scene. It is this intersection of technologies and the overriding need to provide visual realism that places such high demands on the underlying distributed systems infrastructure and makes DVEs such fertile ground for distributed systems research. Two examples serve to show how DVE developers have exploited the unique aspects of their domain. Communications. The usual tension between latency and throughput is particularly noticeable within DVEs. To ensure the timely update of multiple viewers of a particular scene requires that such updates be propagated quickly. However, the sheer volume of changes to any one scene calls for techniques that minimize the number of distinct updates that are sent to the network. Several techniques have been used to address this tension; these include the use of multicast communications, and in particular multicast in wide-area networks to reduce actual message traffic. Multicast has been combined with general group communications to partition updates to related objects or users of a scene. 
A less traditional approach has been the use of dead reckoning whereby a client application that visualizes the scene calculates position updates by extrapolating movement based on previous information. This allows the system to reduce the number of communications needed to update objects that move in a stable manner within the scene. Scaling. DVEs, especially those used for social spaces, are required to support large numbers of simultaneous users in potentially large shared scenes. The desire for scalability has driven different architectural designs, for example, the use of fully distributed architectures which scale well but often suffer performance costs versus centralized and hierarchical architectures in which the inverse is true. However, DVEs have also exploited the spatial nature of their domain to address scalability and have pioneered techniques that exploit the semantics of the shared space to reduce data updates and so allow greater scalability. Several of the systems reported in this special issue apply a notion of area of interest to partition the scene and so reduce the participants in any data updates. The specification of area of interest differs between systems. One approach has been to exploit a geographical notion, i.e. a regular portion of a scene, or a semantic unit, such as a room or building. Another approach has been to define the area of interest as a spatial area associated with an avatar in the scene. The five papers in this special issue have been chosen to highlight the distributed systems aspects of the DVE domain. The first paper, on the DIVE system, described by Emmanuel Frécon and Mårten Stenius explores the use of multicast and group communication in a fully peer-to-peer architecture. The developers of DIVE have focused on its use as the basis for collaborative work environments and have explored the issues associated with maintaining and updating large complicated scenes. The second paper, by Hiroaki Harada et al, describes the AGORA system, a DVE concentrating on social spaces and employing a novel communication technique that incorporates position update and vector information to support dead reckoning. The paper by Simon Powers et al explores the application of DVEs to the gaming domain. They propose a novel architecture that separates out higher-level game semantics - the conceptual model - from the lower-level scene attributes - the dynamic model, both running on servers, from the actual visual representation - the visual model - running on the client. They claim a number of benefits from this approach, including better predictability and consistency. Wolfgang Broll discusses the SmallView system which is an attempt to provide a toolkit for DVEs. One of the key features of SmallView is a sophisticated application level protocol, DWTP, that provides support for a variety of communication models. The final paper, by Chris Greenhalgh, discusses the MASSIVE system which has been used to explore the notion of awareness in the 3D space via the concept of `auras'. These auras define an area of interest for users and support a mapping between what a user is aware of, and what data update rate the communications infrastructure can support. We hope that this selection of papers will serve to provide a clear introduction to the distributed system issues faced by the DVE community and the approaches they have taken in solving them. 
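The dead reckoning technique described above amounts to a constant-velocity extrapolation on each client, with a fresh update sent only when the extrapolation error exceeds a threshold. A minimal sketch follows; the state fields and the error threshold are illustrative assumptions rather than any particular DVE's protocol.

```python
from dataclasses import dataclass

@dataclass
class AvatarState:
    x: float      # last reported position
    y: float
    z: float
    vx: float     # last reported velocity
    vy: float
    vz: float
    t: float      # timestamp of the last update

def dead_reckon(state: AvatarState, now: float):
    """Extrapolate an avatar's position from its last update (constant-velocity model)."""
    dt = now - state.t
    return (state.x + state.vx * dt,
            state.y + state.vy * dt,
            state.z + state.vz * dt)

def needs_update(true_pos, state: AvatarState, now: float, threshold: float = 0.5):
    """The owning client sends a new update only when the extrapolation error grows too large."""
    ex, ey, ez = dead_reckon(state, now)
    err = ((true_pos[0] - ex) ** 2 + (true_pos[1] - ey) ** 2 + (true_pos[2] - ez) ** 2) ** 0.5
    return err > threshold
```

Every viewer runs dead_reckon locally between updates, which is what lets the sender suppress most position messages for smoothly moving objects.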
Finally, we wish to thank Hubert Le Van Gong for his tireless efforts in pulling together all these papers and both the referees and the authors of the papers for the time and effort in ensuring that their contributions teased out the interesting distributed systems issues for this special issue. † E-mail address: rodger@arch.sel.sony.com
Improving the performance of interferometric imaging through the use of disturbance feedforward.
Böhm, Michael; Glück, Martin; Keck, Alexander; Pott, Jörg-Uwe; Sawodny, Oliver
2017-05-01
In this paper, we present a disturbance compensation technique to improve the performance of interferometric imaging for extremely large ground-based telescopes, e.g., the Large Binocular Telescope (LBT), which serves as the application example in this contribution. The most significant disturbance sources at ground-based telescopes are wind-induced mechanical vibrations in the range of 8-60 Hz. Traditionally, their optical effect is eliminated by feedback systems, such as the adaptive optics control loop combined with a fringe tracking system within the interferometric instrument. In this paper, accelerometers are used to measure the vibrations. These measurements are used to estimate the motion of the mirrors, i.e., tip, tilt and piston, with a dynamic estimator. Additional delay compensation methods are presented to cancel sensor network delays and actuator input delays, improving the estimation result even more, particularly at higher frequencies. Because various instruments benefit from the implementation of telescope vibration mitigation, the estimator is implemented as a separate, independent software on the telescope, publishing the estimated values via multicast on the telescope's ethernet. Every client capable of using and correcting the estimated disturbances can subscribe and use these values in a feedforward for its compensation device, e.g., the deformable mirror, the piston mirror of LINC-NIRVANA, or the fast path length corrector of the Large Binocular Telescope Interferometer. This easy-to-use approach eventually leveraged the presented technology for interferometric use at the LBT and now significantly improves the sky coverage, performance, and operational robustness of interferometric imaging on a regular basis.
BabelFish-Tools for IEEE C37.118.2-compliant real-time synchrophasor data mediation
NASA Astrophysics Data System (ADS)
Almas, M. S.; Vanfretti, L.; Baudette, M.
BabelFish (BF) is a real-time data mediator for development and fast prototyping of synchrophasor applications. BF is compliant with the synchrophasor data transmission standard IEEE Std C37.118.2-2011. BF establishes a TCP/IP connection with any Phasor Measurement Unit (PMU) or Phasor Data Concentrator (PDC) stream and parses the IEEE Std C37.118.2-2011 frames in real-time to provide access to raw numerical data in the LabVIEW environment. Furthermore, BF allows the user to select "data-of-interest" and transmit it to either a local or remote application using the User Datagram Protocol (UDP) in order to support both unicast and multicast communication. In the power systems Wide Area Monitoring Protection and Control (WAMPAC) domain, BF provides the first Free/Libre and Open Source Software (FLOSS) for the purpose of giving users tools for fast prototyping of new applications processing PMU measurements in their chosen environment, thus liberating them from time-consuming synchrophasor data handling and allowing them to develop applications in a modular fashion, without the need for a large and monolithic synchrophasor software environment.
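BabelFish itself is a LabVIEW application; the forwarding step it performs, sending only the user-selected quantities from each parsed data frame to a unicast or multicast UDP destination, can be sketched independently. The JSON-over-UDP payload and the field names below are assumptions for illustration, not the BabelFish wire format, and the C37.118.2 frame parsing is assumed to have already happened.

```python
import json
import socket

def forward_selected(measurements: dict, selected_keys, dest_ip: str, dest_port: int) -> None:
    """Send only the user-selected synchrophasor quantities to a local or remote application.

    `measurements` is assumed to be the already-parsed content of one data frame,
    e.g. {"VA_mag": 132.4e3, "VA_ang": -12.7, "freq": 50.01}.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    if dest_ip.startswith("239."):  # administratively scoped multicast group
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
    payload = json.dumps({k: measurements[k] for k in selected_keys if k in measurements})
    sock.sendto(payload.encode(), (dest_ip, dest_port))
    sock.close()

# Unicast the frequency estimate to a local prototype application:
forward_selected({"VA_mag": 132400.0, "VA_ang": -12.7, "freq": 50.01},
                 ["freq"], "127.0.0.1", 4800)
```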
Adaptive Peer Sampling with Newscast
NASA Astrophysics Data System (ADS)
Tölgyesi, Norbert; Jelasity, Márk
The peer sampling service is a middleware service that provides random samples from a large decentralized network to support gossip-based applications such as multicast, data aggregation and overlay topology management. Lightweight gossip-based implementations of the peer sampling service have been shown to provide good quality random sampling while also being extremely robust to many failure scenarios, including node churn and catastrophic failure. We identify two problems with these approaches. The first problem is related to message drop failures: if a node experiences a higher-than-average message drop rate then the probability of sampling this node in the network will decrease. The second problem is that the application layer at different nodes might request random samples at very different rates which can result in very poor random sampling especially at nodes with high request rates. We propose solutions for both problems. We focus on Newscast, a robust implementation of the peer sampling service. Our solution is based on simple extensions of the protocol and an adaptive self-control mechanism for its parameters, namely—without involving failure detectors—nodes passively monitor local protocol events using them as feedback for a local control loop for self-tuning the protocol parameters. The proposed solution is evaluated by simulation experiments.
NASA Astrophysics Data System (ADS)
Rivera, Juan J.; Trachtman, Eyal; Richharia, Madhavendra
2005-11-01
Mobile satellite telecommunications systems have undergone an enormous evolution in recent decades, with the interest in having advanced telecommunications services available on demand, anywhere and at any time, leading to incredible advances. The demand for broadband data is therefore rapidly gathering pace, but current solutions are finding it increasingly difficult to combine large bandwidth with ubiquitous coverage, reliability and portability. The BGAN (Broadband Global Area Network) system, designed to operate with the Inmarsat-4 satellites, provides breakthrough services that meet all of these requirements. It will enable broadband connection on the move, delivering all the key tools of the modern office. Recognising the great impact that Inmarsat's BGAN system will have on the European satellite communications industry, and the benefits that it will bring to a wide range of European industries, in 2003 ESA initiated the "BGAN Extension" project. Its primary goals are to provide the full range of BGAN services to truly mobile platforms, operating in aeronautical, vehicular and maritime environments, and to introduce a multicast service capability. The project is supported by the ARTES Programme which establishes a collaboration agreement between ESA, Inmarsat and a group of key industrial and academic institutions which includes EMS, Logica, Nera and the University of Surrey (UK).
Automation Hooks Architecture for Flexible Test Orchestration - Concept Development and Validation
NASA Technical Reports Server (NTRS)
Lansdowne, C. A.; Maclean, John R.; Winton, Chris; McCartney, Pat
2011-01-01
The Automation Hooks Architecture Trade Study for Flexible Test Orchestration sought a standardized data-driven alternative to conventional automated test programming interfaces. The study recommended composing the interface using multicast DNS (mDNS/SD) service discovery, Representational State Transfer (RESTful) Web Services, and Automatic Test Markup Language (ATML). We describe additional efforts to rapidly mature the Automation Hooks Architecture candidate interface definition by validating it in a broad spectrum of applications. These activities have allowed us to further refine our concepts and provide observations directed toward objectives of economy, scalability, versatility, performance, severability, maintainability, scriptability and others.
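The RESTful half of the recommended interface can be sketched with nothing more than the standard library: each test resource exposes a small set of HTTP endpoints that an orchestrator, having found the resource via mDNS/DNS-SD, can query. The path, port, and status fields below are illustrative assumptions, not the interface the trade study actually defined.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical status of one test resource; field names are illustrative only.
STATUS = {"resource": "signal-generator-1", "state": "idle", "last-self-test": "pass"}

class HooksHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            body = json.dumps(STATUS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # A discovery layer (e.g. mDNS/DNS-SD) would advertise this endpoint to the orchestrator.
    HTTPServer(("0.0.0.0", 8080), HooksHandler).serve_forever()
```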
Integrating security in a group oriented distributed system
NASA Technical Reports Server (NTRS)
Reiter, Michael; Birman, Kenneth; Gong, LI
1992-01-01
A distributed security architecture is proposed for incorporation into group oriented distributed systems, and in particular, into the Isis distributed programming toolkit. The primary goal of the architecture is to make common group oriented abstractions robust in hostile settings, in order to facilitate the construction of high performance distributed applications that can tolerate both component failures and malicious attacks. These abstractions include process groups and causal group multicast. Moreover, a delegation and access control scheme is proposed for use in group oriented systems. The focus is the security architecture; particular cryptosystems and key exchange protocols are not emphasized.
Unequal error control scheme for dimmable visible light communication systems
NASA Astrophysics Data System (ADS)
Deng, Keyan; Yuan, Lei; Wan, Yi; Li, Huaan
2017-01-01
Visible light communication (VLC), which has the advantages of a very large bandwidth, high security, and freedom from license-related restrictions and electromagnetic interference, has attracted much interest. Because a VLC system simultaneously performs illumination and communication functions, dimming control, efficiency, and reliable transmission are significant and challenging issues for such systems. In this paper, we propose a novel unequal error control (UEC) scheme in which expanding window fountain (EWF) codes in an on-off keying (OOK)-based VLC system are used to support different dimming target values. To evaluate the performance of the scheme for various dimming target values, we apply it to H.264 scalable video coding bitstreams in a VLC system. The results of simulations performed using additive white Gaussian noise (AWGN) at different signal-to-noise ratios (SNRs) are used to compare the performance of the proposed scheme for various dimming target values. It is found that the proposed UEC scheme enables earlier base layer recovery compared to the use of the equal error control (EEC) scheme for different dimming target values and therefore affords robust transmission for scalable video multicast over optical wireless channels. This is because of the unequal error protection (UEP) and unequal recovery time (URT) of the EWF code in the proposed scheme.
Multimedia And Internetworking Architecture Infrastructure On Interactive E-Learning System
NASA Astrophysics Data System (ADS)
Indah, K. A. T.; Sukarata, G.
2018-01-01
Interactive e-learning is a distance learning method that uses information technology, electronic systems or computers as the means for a teaching and learning process carried out without direct face-to-face contact between teacher and student. A strong dependence on emerging technologies greatly influences the way in which the architecture is designed to produce a powerful interactive e-learning network. This paper analyzes an architecture model in which learning can be done interactively, involving many participants (N-way synchronized distance learning) using video conferencing technology. A broadband Internet network is used together with multicast techniques so that bandwidth usage can be kept efficient.
Web server for priority ordered multimedia services
NASA Astrophysics Data System (ADS)
Celenk, Mehmet; Godavari, Rakesh K.; Vetnes, Vermund
2001-10-01
In this work, our aim is to provide finer priority levels in the design of a general-purpose Web multimedia server with provisions for CM services. The types of services provided include reading/writing a web page, downloading/uploading an audio/video stream, navigating the Web through browsing, and interactive video teleconferencing. The selected priority encoding levels for such operations follow the order of admin read/write, hot page CM and Web multicasting, CM read, Web read, CM write and Web write. Hot pages are the most requested CM streams (e.g., the newest movies, video clips, and HDTV channels) and Web pages (e.g., portal pages of the commercial Internet search engines). Maintaining a list of these hot Web pages and CM streams in a content addressable buffer enables a server to multicast hot streams with lower latency and higher system throughput. Cold Web pages and CM streams are treated as regular Web and CM requests. Interactive CM operations such as pause (P), resume (R), fast-forward (FF), and rewind (RW) have to be executed without allocation of extra resources. The proposed multimedia server model is a part of a distributed network with load-balancing schedulers. The SM is connected to an integrated disk scheduler (IDS), which supervises an allocated disk manager. The IDS follows the same priority handling as the SM, and implements a SCAN disk-scheduling method for improved disk access and higher throughput. Different disks are used for the Web and CM services in order to meet the QoS requirements of CM services. The IDS output is forwarded to an Integrated Transmission Scheduler (ITS). The ITS creates a priority-ordered buffering of the retrieved Web pages and CM data streams that are fed into auto-regressive moving average (ARMA) based traffic-shaping circuitry before being transmitted through the network.
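The priority ordering listed above can be captured with an ordinary priority queue; a minimal sketch follows, using the service classes from the abstract with assumed numeric levels and FIFO ordering within a class.

```python
import heapq
import itertools

# Priority levels follow the order given above (lower number = served first).
PRIORITY = {
    "admin_read_write": 0,
    "hot_cm_and_web_multicast": 1,
    "cm_read": 2,
    "web_read": 3,
    "cm_write": 4,
    "web_write": 5,
}

class PriorityScheduler:
    """Orders incoming service requests by class; FIFO within the same class."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # tie-breaker preserves arrival order

    def submit(self, service_class: str, request) -> None:
        heapq.heappush(self._heap, (PRIORITY[service_class], next(self._seq), request))

    def next_request(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

sched = PriorityScheduler()
sched.submit("web_read", "GET /index.html")
sched.submit("hot_cm_and_web_multicast", "stream movie-42")
print(sched.next_request())   # -> 'stream movie-42'
```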
Representation and Integration of Scientific Information
NASA Technical Reports Server (NTRS)
1998-01-01
The objective of this Joint Research Interchange with NASA-Ames was to investigate how the Tsimmis technology could be used to represent and integrate scientific information. The main goal of the Tsimmis project is to allow a decision maker to find information of interest from such sources, fuse it, and process it (e.g., summarize it, visualize it, discover trends). Another important goal is the easy incorporation of new sources, as well as the ability to deal with sources whose structure or services evolve. During the Interchange we had research meetings approximately every month or two. The funds provided by NASA supported work that led to the following two papers: Fusion Queries over Internet Databases; Efficient Query Subscription Processing in a Multicast Environment.
NASA Technical Reports Server (NTRS)
Oliger, Joseph
1997-01-01
Topics considered include: high-performance computing; cognitive and perceptual prostheses (computational aids designed to leverage human abilities); autonomous systems. Also included: development of a 3D unstructured grid code based on a finite volume formulation and applied to the Navier-Stokes equations; Cartesian grid methods for complex geometry; multigrid methods for solving elliptic problems on unstructured grids; algebraic non-overlapping domain decomposition methods for compressible fluid flow problems on unstructured meshes; numerical methods for the compressible Navier-Stokes equations with application to aerodynamic flows; research in aerodynamic shape optimization; S-HARP: a parallel dynamic spectral partitioner; numerical schemes for the Hamilton-Jacobi and level set equations on triangulated domains; application of high-order shock capturing schemes to direct simulation of turbulence; multicast technology; network testbeds; supercomputer consolidation project.
Security Enhancement Using Cache Based Reauthentication in WiMAX Based E-Learning System
Rajagopal, Chithra; Bhuvaneshwaran, Kalaavathi
2015-01-01
WiMAX networks are the most suitable for E-Learning in rural areas through their Broadcast and Multicast Services. Authentication of users in WiMAX is carried out by an AAA server. In E-Learning systems, users must be forced to reauthenticate to overcome the session hijacking problem. Reauthentication introduces frequent delays in data access, which is critical for delay-sensitive applications such as E-Learning. To perform fast reauthentication, a caching mechanism known as the Key Caching Based Authentication scheme is introduced in this paper. Even though the cache mechanism requires extra storage to keep the user credentials, it reduces the delay occurring during reauthentication by 50%. PMID: 26351658
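The abstract does not detail the Key Caching Based Authentication scheme, but the general idea, caching already-validated credentials so that repeat reauthentications skip the AAA round trip, can be sketched as follows. The cache lifetime, the digest choice, and the function names are assumptions for illustration, not the paper's protocol.

```python
import hashlib
import time

CACHE_TTL = 300.0          # seconds a cached credential stays valid (assumed value)
_cache = {}                # user id -> (credential digest, expiry time)

def full_authentication(user: str, credential: str) -> bool:
    """Placeholder for the normal AAA-server exchange (slow path)."""
    return True            # assume the AAA server accepts the credential

def reauthenticate(user: str, credential: str) -> bool:
    """Fast path: serve repeat reauthentications from the local key cache."""
    digest = hashlib.sha256(credential.encode()).hexdigest()
    entry = _cache.get(user)
    if entry and entry[0] == digest and entry[1] > time.time():
        return True                                    # cache hit, no AAA round trip
    if full_authentication(user, credential):          # cache miss or expired entry
        _cache[user] = (digest, time.time() + CACHE_TTL)
        return True
    return False
```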
Robertson, Brian; Zhang, Zichen; Yang, Haining; Redmond, Maura M; Collings, Neil; Liu, Jinsong; Lin, Ruisheng; Jeziorska-Chapman, Anna M; Moore, John R; Crossland, William A; Chu, D P
2012-04-20
It is shown that reflective liquid crystal on silicon (LCOS) spatial light modulator (SLM) based interconnects or fiber switches that use defocus to reduce crosstalk can be evaluated and optimized using a fractional Fourier transform if certain optical symmetry conditions are met. Theoretically the maximum allowable linear hologram phase error compared to a Fourier switch is increased by a factor of six before the target crosstalk for telecom applications of -40 dB is exceeded. A Gerchberg-Saxton algorithm incorporating a fractional Fourier transform modified for use with a reflective LCOS SLM is used to optimize multi-casting holograms in a prototype telecom switch. Experiments are in close agreement to predicted performance.
Modelling and temporal performances evaluation of networked control systems using (max, +) algebra
NASA Astrophysics Data System (ADS)
Ammour, R.; Amari, S.
2015-01-01
In this paper, we address the problem of temporal performance evaluation of producer/consumer networked control systems. The aim is to develop a formal method for evaluating the response time of this type of control system. Our approach consists of modelling, using Petri net classes, the behaviour of the whole architecture, including the switches that support the multicast communications used by this protocol. The (max, +) algebra formalism is then exploited to obtain analytical formulas for the response time and its maximal and minimal bounds. The main novelty is that our approach takes into account all delays experienced at the different stages of networked automation systems. Finally, we show how to apply the obtained results through an example of a networked control system.
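In the (max, +) semiring, "addition" is the maximum and "multiplication" is ordinary addition, with epsilon = -infinity as the neutral element; event dates are propagated through the model by max-plus matrix products. A minimal numeric sketch is given below; the delay values are illustrative only and do not come from the paper.

```python
EPS = float("-inf")   # the neutral element "epsilon" of the (max, +) semiring

def maxplus_matmul(A, B):
    """(A (x) B)[i][j] = max_k (A[i][k] + B[k][j]) in the (max, +) algebra."""
    n, m, p = len(A), len(B), len(B[0])
    return [[max(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def maxplus_matvec(A, x):
    """Propagate event dates x through one stage of the timed model."""
    return [max(A[i][k] + x[k] for k in range(len(x))) for i in range(len(A))]

# Toy example: one stage with processing/communication delays (values are illustrative).
A = [[2.0, EPS],
     [1.5, 3.0]]
print(maxplus_matvec(A, [0.0, 0.0]))   # -> [2.0, 3.0]
```

Iterating the matrix-vector product gives the completion dates stage by stage, from which response-time bounds can be read off.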
Distributed Ship Navigation Control System Based on Dual Network
NASA Astrophysics Data System (ADS)
Yao, Ying; Lv, Wu
2017-10-01
The navigation system is essential for a ship's normal operation, and it contains many devices and sensors that guarantee the ship's regular work. In the past, these devices and sensors were usually connected via a CAN bus for high performance and reliability. However, as the related devices and sensors develop, the navigation system also needs high information throughput and remote data sharing. To meet these new requirements, we propose a communication method based on a dual network that combines a CAN bus with industrial Ethernet. We also introduce multiple distributed control terminals with a cooperative strategy that synchronizes status by multicasting UDP messages containing operation timestamps, making the system more efficient and reliable.
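The status-synchronization idea, each terminal multicasting its operation status stamped with an operation timestamp, and peers keeping whichever report is newest, can be sketched as follows. The multicast group, port, and message layout are assumptions for illustration.

```python
import json
import socket
import time

GROUP, PORT = "239.10.10.10", 5007      # assumed multicast group for status messages

def publish_status(terminal_id: str, status: dict) -> None:
    """Multicast this terminal's operation status, stamped with the operation time."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    message = {"terminal": terminal_id, "timestamp": time.time(), "status": status}
    sock.sendto(json.dumps(message).encode(), (GROUP, PORT))
    sock.close()

def merge_status(current: dict, incoming: dict) -> dict:
    """Peers keep whichever report carries the newer operation timestamp."""
    return incoming if incoming["timestamp"] > current.get("timestamp", 0.0) else current
```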
VPLS: an effective technology for building scalable transparent LAN services
NASA Astrophysics Data System (ADS)
Dong, Ximing; Yu, Shaohua
2005-02-01
Virtual Private LAN Service (VPLS) is generating considerable interest with enterprises and service providers as it offers multipoint transparent LAN service (TLS) over MPLS networks. This paper describes an effective technology, VPLS, which links virtual switch instances (VSIs) through MPLS to form an emulated Ethernet switch and build scalable transparent LAN services. It first focuses on the architecture of VPLS, with Ethernet bridging at the edge and MPLS at the core; it then elucidates the data forwarding mechanism within a VPLS domain, including learning and aging MAC addresses on a per-LSP basis, flooding of unknown frames, and replication for unknown, multicast, and broadcast frames. The loop-avoidance mechanism, known as split horizon forwarding, is also analyzed. Another important aspect of the VPLS service, its basic operation including autodiscovery and signaling, is also discussed. From the perspective of efficiency and scalability the paper compares two important signaling mechanisms, BGP and LDP, which are used to set up a PW between the PEs and bind the PWs to a particular VSI. As VPLS deployments grow and the full mesh of PWs between PE devices expands (n(n-1)/2 PWs in all, i.e. quadratic growth), a VPLS instance could have a large number of remote PE associations, resulting in inefficient use of network bandwidth and system resources, since the ingress PE has to replicate each frame and append MPLS labels for each remote PE. The latter part of this paper therefore focuses on the scalability issue: Hierarchical VPLS (HVPLS). Within the HVPLS architecture, this paper addresses two ways to cope with a possibly large number of MAC addresses, which make VPLS operate more efficiently.
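The forwarding behaviour described above, learning source MACs per port/pseudowire, flooding unknown, broadcast, and multicast frames, and never forwarding a frame from one PW onto another (split horizon), can be sketched as a toy VSI model. Class and field names are illustrative, not drawn from any VPLS implementation.

```python
class VirtualSwitchInstance:
    """Toy model of one VSI: MAC learning plus split-horizon flooding."""

    def __init__(self, attachment_circuits, pseudowires):
        self.acs = set(attachment_circuits)   # local customer-facing ports
        self.pws = set(pseudowires)           # full mesh of PWs to remote PEs
        self.fib = {}                         # learned MAC -> port it was seen on

    def receive(self, frame, in_port):
        src, dst = frame["src"], frame["dst"]
        self.fib[src] = in_port                       # learn on a per-port (per-PW) basis
        if dst in self.fib:
            return [self.fib[dst]]                    # known unicast: single egress port
        # Unknown, broadcast or multicast: flood, but apply split horizon --
        # a frame received from a PW is never forwarded onto another PW.
        if in_port in self.pws:
            out = self.acs
        else:
            out = (self.acs | self.pws) - {in_port}
        return sorted(out)

vsi = VirtualSwitchInstance(["ac1"], ["pw-to-pe2", "pw-to-pe3"])
print(vsi.receive({"src": "aa:aa", "dst": "ff:ff"}, "pw-to-pe2"))   # -> ['ac1']
```

Split horizon is what makes a loop-free full mesh possible without running spanning tree across the provider core.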
A Protocol for Scalable Loop-Free Multicast Routing
1997-01-01
NASA Astrophysics Data System (ADS)
Deng, Ning
In recent years, optical phase modulation has attracted much research attention in the field of fiber optic communications. Compared with the traditional optical intensity-modulated signal, one of the main merits of the optical phase-modulated signal is its better transmission performance. For optical phase modulation, in spite of the comprehensive study of its transmission performance, only a little research has been carried out in terms of its functions, applications and signal processing for future optical networks. These issues are systematically investigated in this thesis. The research findings suggest that optical phase modulation and its signal processing can greatly facilitate flexible network functions and high bandwidth which can be enjoyed by end users. In the thesis, the most important physical-layer technologies, signal processing and multiplexing, are investigated with optical phase-modulated signals. Novel and advantageous signal processing and multiplexing approaches are proposed and studied. Experimental investigations are also reported and discussed in the thesis. Optical time-division multiplexing and demultiplexing. With the ever-increasing demand on communication bandwidth, optical time division multiplexing (OTDM) is an effective approach to upgrade the capacity of each wavelength channel in current optical systems. OTDM multiplexing can be simply realized; demultiplexing, however, requires relatively complicated signal processing and stringent timing control, which hinders its practicability. To tackle this problem, a new OTDM scheme with hybrid DPSK and OOK signals is proposed in this thesis. Experimental investigation shows that this scheme can greatly increase the tolerance to demultiplexing timing misalignment and improve the demultiplexing performance, and thus make OTDM more practical and cost effective. All-optical signal processing. In current and future optical communication systems and networks, the data rate per wavelength has been approaching the speed limitation of electronics. Thus, all-optical signal processing techniques are highly desirable to support the necessary optical switching functionalities in future ultrahigh-speed optical packet-switching networks. To cope with the wide use of optical phase-modulated signals, an all-optical logic gate for DPSK or PSK input signals is developed in the thesis, for the first time. Based on four-wave mixing in a semiconductor optical amplifier, the structure of the logic gate is simple, compact, and capable of supporting ultrafast operation. In addition to the general logic processing, a simple label recognition scheme, as a specific signal processing function, is proposed for phase-modulated label signals. The proposed scheme can recognize any incoming label pattern according to the local pattern, and is potentially capable of handling variable-length label patterns. Optical access network with multicast overlay and centralized light sources. In the arena of optical access networks, wavelength division multiplexing passive optical network (WDM-PON) is a promising technology to deliver high-speed data traffic. However, most of the proposed WDM-PONs only support conventional point-to-point service and cannot meet the increasing demand for broadcast and multicast services. In this thesis, a simple network upgrade is proposed based on the traditional PON architecture to support both point-to-point and multicast service. In addition, the two service signals are modulated on the same lightwave carrier.
The upstream signal is also remodulated on the same carrier at the optical network unit, which can significantly relax the requirement on wavelength management at the network unit.
NASA Astrophysics Data System (ADS)
Cannon, Brice M.
This thesis investigates the all-optical combination of amplitude and phase modulated signals into one unified multi-level phase modulated signal, utilizing the Kerr nonlinearity of cross-phase modulation (XPM). Predominantly, the first experimental demonstration of simultaneous polarization-insensitive phase-transmultiplexing and multicasting (PI-PTMM) will be discussed. The PI-PTMM operation combines the data of a single 10-Gbaud carrier-suppressed return-to-zero (CSRZ) on-off keyed (OOK) pump signal and 4x10-Gbaud return-to-zero (RZ) binary phase-shift keyed (BPSK) probe signals to generate 4x10-GBd RZ-quadrature phase-shift keyed (QPSK) signals utilizing a highly nonlinear, birefringent photonic crystal fiber (PCF). Since XPM is a highly polarization dependent nonlinearity, a polarization sensitivity reduction technique was used to alleviate the fluctuations due to the remotely generated signals' unpredictable states of polarization (SOP). The measured amplified spontaneous emission (ASE) limited receiver sensitivity optical signal-to-noise ratio (OSNR) penalty of the PI-PTMM signal relative to the field-programmable gate array (FPGA) pre-coded RZ-DQPSK baseline at a forward-error correction (FEC) limit of 10^-3 BER was ≈ 0.3 dB. In addition, the OSNR of the remotely generated CSRZ-OOK signal could be degraded to ≈ 29 dB/0.1 nm, before the bit error rate (BER) performance of the PI-PTMM operation began to exponentially degrade. A 138-km dispersion-managed recirculating loop system with a 100-GHz, 13-channel mixed-format dense-wavelength-division multiplexed (DWDM) transmitter was constructed to investigate the effect of metro/long-haul transmission impairments. The PI-PTMM DQPSK and the FPGA pre-coded RZ-DQPSK baseline signals were transmitted 1,900 km and 2,400 km in the nonlinearity-limited transmission regime before reaching the 10^-3 BER FEC limit. The relative reduction in transmission distance for the PI-PTMM signal was due to the additional transmitter impairments in the PCF that interact negatively with the transmission fiber.
Global Interoperability of High Definition Video Streams Via ACTS and Intelsat
NASA Technical Reports Server (NTRS)
Hsu, Eddie; Wang, Charles; Bergman, Larry; Pearman, James; Bhasin, Kul; Clark, Gilbert; Shopbell, Patrick; Gill, Mike; Tatsumi, Haruyuki; Kadowaki, Naoto
2000-01-01
In 1993, a proposal at the Japan-U.S. Cooperation in Space Program Workshop led to a subsequent series of satellite communications experiments and demonstrations under the title of Trans-Pacific High Data Rate Satellite Communications Experiments. The first of these was a joint collaboration between government and industry teams in the United States and Japan that successfully demonstrated distributed high definition video (HDV) post-production on a global scale using a combination of high data rate satellites and terrestrial fiber optic asynchronous transfer mode (ATM) networks. The HDV experiment is the first GIBN experiment to establish a dual-hop broadband satellite link for the transmission of digital HDV over ATM. This paper describes the team's effort in using the NASA Advanced Communications Technology Satellite (ACTS) at rates up to OC-3 (155 Mbps) between Los Angeles and Honolulu, and using Intelsat at rates up to DS-3 (45 Mbps) between Kapolei and Tokyo, with which HDV source material was transmitted between Sony Pictures High Definition Center (SPHDC) in Los Angeles and Sony Visual Communication Center (VCC) in Shinagawa, Tokyo. The global-scale connection also used terrestrial networks in Japan and the States of Hawaii and California. The 1.2 Gbps digital HDV stream was compressed down to 22.5 Mbps using a proprietary Mitsubishi MPEG-2 codec that was ATM AAL-5 compatible. The codec employed four-way parallel processing, and improved versions of the codec are now commercially available. The successful post-production activity performed in Tokyo with an HDV clip transmitted from Los Angeles was predicated on the seamless interoperation of all the equipment between the sites, and was an exciting example of deploying a global-scale information infrastructure involving a combination of broadband satellites and terrestrial fiber optic networks. Correlations of atmospheric effects with cell loss, codec drop-out, and picture quality were made. Current efforts in the Trans-Pacific series plan to examine the use of Internet Protocol (IP)-related technologies over such an infrastructure. The use of IP allows the general public to be an integral part of these activities, helps to examine issues in constructing the solar-system internet, and affords an opportunity to tap the research results from the (reliable) multicast and distributed systems communities. The current Trans-Pacific projects, including remote astronomy and a digital library (visible human), are briefly described.
Virtually-synchronous communication based on a weak failure suspector
NASA Technical Reports Server (NTRS)
Schiper, Andre; Ricciardi, Aleta
1993-01-01
Failure detectors (or, more accurately Failure Suspectors (FS)) appear to be a fundamental service upon which to build fault-tolerant, distributed applications. This paper shows that a FS with very weak semantics (i.e., that delivers failure and recovery information in no specific order) suffices to implement virtually-synchronous communication (VSC) in an asynchronous system subject to process crash failures and network partitions. The VSC paradigm is particularly useful in asynchronous systems and greatly simplifies building fault-tolerant applications that mask failures by replicating processes. We suggest a three-component architecture to implement virtually-synchronous communication: (1) at the lowest level, the FS component; (2) on top of it, a component (2a) that defines new views; and (3) a component (2b) that reliably multicasts messages within a view. The issues covered in this paper also lead to a better understanding of the various membership service semantics proposed in recent literature.
Remote Observing and Automatic FTP on Kitt Peak
NASA Astrophysics Data System (ADS)
Seaman, Rob; Bohannan, Bruce
As part of KPNO's Internet-based observing services we experimented with the publicly available audio, video and whiteboard MBONE clients (vat, nv, wb and others) in both point-to-point and multicast modes. While bandwidth is always a constraint on the Internet, it is less of a constraint to operations than many might think. These experiments were part of two new Internet-based observing services offered to KPNO observers beginning with the Fall 1995 semester: a remote observing station and an automatic FTP data queue. The remote observing station seeks to duplicate the KPNO IRAF/ICE observing environment on a workstation at the observer's home institution. The automatic FTP queue is intended to support those observing programs that require quick transport of data back to the home institution, for instance, for near-real-time reductions to aid in observing tactics. We also discuss the early operational results of these services.
Efficient Assignment of Multiple E-MBMS Sessions towards LTE
NASA Astrophysics Data System (ADS)
Alexiou, Antonios; Bouras, Christos; Kokkinos, Vasileios
One of the major prerequisites for Long Term Evolution (LTE) networks is the mass provision of multimedia services to mobile users. To this end, Evolved Multimedia Broadcast/Multicast Service (E-MBMS) is envisaged to play an instrumental role during the LTE standardization process and ensure LTE's proliferation in the mobile market. E-MBMS targets the economical delivery, in terms of power and spectral efficiency, of multimedia data from a single source entity to multiple destinations. This paper proposes a novel mechanism for efficient radio bearer selection during E-MBMS transmissions in LTE networks. The proposed mechanism is based on the concept of transport channel combination in any cell of the network. Most significantly, the mechanism manages to efficiently deliver multiple E-MBMS sessions. The performance of the proposed mechanism is evaluated and compared with several radio bearer selection mechanisms in order to highlight the enhancements that it provides.
Enabling Optical Network Test Bed for 5G Tests
NASA Astrophysics Data System (ADS)
Giuntini, Marco; Grazioso, Paolo; Matera, Francesco; Valenti, Alessandro; Attanasio, Vincenzo; Di Bartolo, Silvia; Nastri, Emanuele
2017-03-01
In this work, we show some experimental approaches concerning optical network design dedicated to 5G infrastructures. In particular, we show some implementations of network slicing based on Carrier Ethernet forwarding, which will be very suitable in the context of 5G heterogeneous networks, especially looking at services for vertical enterprises. We also show how to adopt a central unit (orchestrator) to automatically manage such logical paths according to quality-of-service requirements, which can be monitored at the user location. We also illustrate how novel all-optical processes, such as the ones based on all-optical wavelength conversion, can be used for multicasting, enabling development of TV broadcasting based on 4G-5G terminals. These managing and forwarding techniques, operating on optical links, are tested in a wireless environment on Wi-Fi cells and emulating LTE and WiMAX systems by means of the NS-3 code.
Bhanot, Gyan [Princeton, NJ; Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY
2009-09-08
Class network routing is implemented in a network such as a computer network comprising a plurality of parallel compute processors at nodes thereof. Class network routing allows a compute processor to broadcast a message to a range (one or more) of other compute processors in the computer network, such as processors in a column or a row. Normally this type of operation requires a separate message to be sent to each processor. With class network routing pursuant to the invention, a single message is sufficient, which generally reduces the total number of messages in the network as well as the latency to do a broadcast. Class network routing is also applied to dense matrix inversion algorithms on distributed memory parallel supercomputers with hardware class function (multicast) capability. This is achieved by exploiting the fact that the communication patterns of dense matrix inversion can be served by hardware class functions, which results in faster execution times.
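The patent describes hardware class routes that deliver a single injected message to a whole row or column of compute nodes; the addressing below is a software illustration of what such a class route reaches on a 2D mesh, not the hardware forwarding mechanism itself.

```python
def class_route_row(sender, mesh_dims):
    """Return the node coordinates reached by a 'broadcast to my row' class route.

    sender is (x, y) on an X-by-Y mesh; the hardware would forward a single
    injected packet along the +x/-x links, depositing a copy at every hop.
    """
    x_dim, _ = mesh_dims
    sx, sy = sender
    return [(x, sy) for x in range(x_dim) if x != sx]

# One message from node (2, 1) on a 4x4 mesh reaches its three row neighbours:
print(class_route_row((2, 1), (4, 4)))   # -> [(0, 1), (1, 1), (3, 1)]
```

Dense matrix inversion benefits because its row/column broadcasts map directly onto such class functions, replacing many point-to-point messages with one.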
NASA Technical Reports Server (NTRS)
Stehle, Roy H.; Ogier, Richard G.
1993-01-01
Alternatives for realizing a packet-based network switch for use on a frequency division multiple access/time division multiplexed (FDMA/TDM) geostationary communication satellite were investigated. Each of the eight downlink beams supports eight directed dwells. The design needed to accommodate multicast packets with very low probability of loss due to contention. Three switch architectures were designed and analyzed. An output-queued, shared bus system yielded a functionally simple system, utilizing a first-in, first-out (FIFO) memory per downlink dwell, but at the expense of a large total memory requirement. A shared memory architecture offered the most efficiency in memory requirements, requiring about half the memory of the shared bus design. The processing requirement for the shared-memory system adds system complexity that may offset the benefits of the smaller memory. An alternative design using a shared memory buffer per downlink beam decreases circuit complexity through a distributed design, and requires at most 1000 packets of memory more than the completely shared memory design. Modifications to the basic packet switch designs were proposed to accommodate circuit-switched traffic, which must be served on a periodic basis with minimal delay. Methods for dynamically controlling the downlink dwell lengths were developed and analyzed. These methods adapt quickly to changing traffic demands, and do not add significant complexity or cost to the satellite and ground station designs. Methods for reducing the memory requirement by not requiring the satellite to store full packets were also proposed and analyzed. In addition, optimal packet and dwell lengths were computed as functions of memory size for the three switch architectures.
MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.
Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño
2013-01-01
In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet. This is largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as the header has been the most important challenge of RNC. Moreover, due to employing the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method so that there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations. In this regard, peers incur very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC with the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay.
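MATIN's own coefficient-matrix construction is not reproduced in the abstract, but the baseline random network coding it improves on can be sketched over GF(2), where an encoded block is simply an XOR of the source blocks selected by a random binary coefficient vector. This is an illustrative baseline only; practical RNC systems usually work over larger fields such as GF(2^8).

```python
import os
import random

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def rnc_encode(blocks):
    """Produce one encoded block: a random GF(2) combination of the source blocks.

    Returns (coefficients, payload); in plain RNC the whole coefficient vector
    travels in the packet header, which is the overhead MATIN sets out to reduce.
    """
    coeffs = [random.randint(0, 1) for _ in blocks]
    if not any(coeffs):
        coeffs[random.randrange(len(blocks))] = 1     # avoid the all-zero combination
    payload = bytes(len(blocks[0]))
    for c, blk in zip(coeffs, blocks):
        if c:
            payload = xor_blocks(payload, blk)
    return coeffs, payload

source = [os.urandom(1024) for _ in range(4)]         # one generation of 4 blocks
coeffs, packet = rnc_encode(source)
print(coeffs, len(packet))                            # e.g. [1, 0, 1, 1] 1024
```

A receiver that collects enough linearly independent coefficient vectors can recover the generation by Gaussian elimination, which is exactly the decoding cost MATIN aims to avoid.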
Internet Voice Distribution System (IVoDS) Utilization in Remote Payload Operations
NASA Technical Reports Server (NTRS)
Best, Susan; Bradford, Bob; Chamberlain, Jim; Nichols, Kelvin; Bailey, Darrell (Technical Monitor)
2002-01-01
Due to limited crew availability to support science and the large number of experiments to be operated simultaneously, telescience is key to a successful International Space Station (ISS) science program. Crew, operations personnel at NASA centers, and researchers at universities and companies around the world must work closely together to perform scientific experiments on-board ISS. NASA has initiated use of Voice over Internet Protocol (VoIP) to supplement the existing HVoDS mission voice communications system used by researchers. The Internet Voice Distribution System (IVoDS) connects researchers to mission support "loops" or conferences via Internet Protocol networks such as the high-speed Internet2. Researchers use IVoDS software on personal computers to talk with operations personnel at NASA centers. IVoDS also has the capability, if authorized, to allow researchers to communicate with the ISS crew during experiment operations. IVoDS was developed by Marshall Space Flight Center with contractors A2 Technology, Inc., FVC, Lockheed Martin, and VoIP Group. IVoDS is currently undergoing field-testing with full deployment for up to 50 simultaneous users expected in 2002. Research is currently being performed to take full advantage of the digital world - the Personal Computer and Internet Protocol networks - to qualitatively enhance communications among ISS operations personnel. In addition to the current voice capability, video and data-sharing capabilities are being investigated. Major obstacles being addressed include network bandwidth capacity and strict security requirements. Techniques being investigated to reduce and overcome these obstacles include emerging audio-video protocols and network technology, including multicast and quality-of-service.
NASA Technical Reports Server (NTRS)
2007-01-01
Topics include: Advanced Systems for Monitoring Underwater Sounds; Wireless Data-Acquisition System for Testing Rocket Engines; Processing Raw HST Data With Up-to-Date Calibration Data; Mobile Collection and Automated Interpretation of EEG Data; System for Secure Integration of Aviation Data; Servomotor and Controller Having Large Dynamic Range; Digital Multicasting of Multiple Audio Streams; Translator for Optimizing Fluid-Handling Components; AIRSAR Web-Based Data Processing; Pattern Matcher for Trees Constructed From Lists; Reducing a Knowledge-Base Search Space When Data Are Missing; Ground-Based Correction of Remote-Sensing Spectral Imagery; State-Chart Autocoder; Pointing History Engine for the Spitzer Space Telescope; Low-Friction, High-Stiffness Joint for Uniaxial Load Cell; Magnet-Based System for Docking of Miniature Spacecraft; Electromechanically Actuated Valve for Controlling Flow Rate; Plumbing Fixture for a Microfluidic Cartridge; Camera Mount for a Head-Up Display; Core-Cutoff Tool; Recirculation of Laser Power in an Atomic Fountain; Simplified Generation of High-Angular-Momentum Light Beams; Imaging Spectrometer on a Chip; Interferometric Quantum-Nondemolition Single-Photon Detectors; Ring-Down Spectroscopy for Characterizing a CW Raman Laser; Complex Type-II Interband Cascade MQW Photodetectors; Single-Point Access to Data Distributed on Many Processors; Estimating Dust and Water Ice Content of the Martian Atmosphere From THEMIS Data; Computing a Stability Spectrum by Use of the HHT; Theoretical Studies of Routes to Synthesis of Tetrahedral N4; Estimation Filter for Alignment of the Spitzer Space Telescope; Antenna for Measuring Electric Fields Within the Inner Heliosphere; Improved High-Voltage Gas Isolator for Ion Thruster; and Hybrid Mobile Communication Networks for Planetary Exploration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drotar, Alexander P.; Quinn, Erin E.; Sutherland, Landon D.
2012-07-30
The project goals are: (1) build a high-performance computer; and (2) create a tool to monitor node applications in the Component Based Tool Framework (CBTF) using code from the Lightweight Data Metric Service (LDMS). The project matters because: (1) a scalable, parallel tool is needed to monitor nodes on clusters; and (2) new LDMS plugins need to be easy to add to the tool. CBTF is scalable and adjusts to different topologies automatically. It uses the MRNet (Multicast/Reduction Network) mechanism for information transport. CBTF is flexible and general enough to be used for any tool that needs to perform a task on many nodes, and its components are reusable and easily added to a new tool. There are three levels of CBTF: (1) the frontend node, which interacts with users; (2) filter nodes, which filter or concatenate information from backend nodes; and (3) backend nodes, where the actual work of the tool is done. LDMS stands for Lightweight Data Metric Service, a tool used for monitoring nodes. Ltool is the name of the tool we derived from LDMS. It is dynamically linked and includes components such as Vmstat, Meminfo, and Procinterrupts. It works as follows: the Ltool command is run on the frontend node; Ltool collects information from the backend nodes; backend nodes send information to the filter nodes; and filter nodes concatenate the information and send it to a database on the frontend node. Ltool is useful for monitoring nodes on a cluster because the overhead of running the tool is low and it automatically scales to any cluster size.
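The three-level flow described above (backend nodes collect, filter nodes concatenate, the frontend stores) can be illustrated with a minimal sketch. The node groups, metric names, and functions below are hypothetical placeholders for illustration only, not actual CBTF or LDMS interfaces.

```python
# Minimal sketch of a three-level monitoring tree in the spirit of the Ltool flow
# described above: backend nodes collect metrics, filter nodes concatenate them,
# and the frontend stores the result. All names and metrics here are illustrative,
# not actual CBTF/LDMS interfaces.

def backend_collect(node_id):
    """Pretend to sample /proc-style metrics on one compute node."""
    return {"node": node_id, "meminfo_free_kb": 1024 * node_id, "vmstat_ctxt": 10 * node_id}

def filter_concatenate(samples):
    """A filter node simply concatenates the records it receives from its backends."""
    return list(samples)

def frontend_store(database, records):
    """The frontend appends concatenated records to an in-memory 'database'."""
    database.extend(records)

if __name__ == "__main__":
    database = []
    # Two filter nodes, each responsible for a slice of backend nodes (hypothetical).
    groups = [range(0, 4), range(4, 8)]
    for group in groups:
        samples = [backend_collect(n) for n in group]   # backend level
        records = filter_concatenate(samples)           # filter level
        frontend_store(database, records)                # frontend level
    print(f"frontend received {len(database)} node records")
```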
Ultra-High Capacity Silicon Photonic Interconnects through Spatial Multiplexing
NASA Astrophysics Data System (ADS)
Chen, Christine P.
The market for higher data rate communication is driving the semiconductor industry to develop new techniques of writing at smaller scales, while continuing to scale bandwidth at low power consumption. Silicon photonic (SiPh) devices offer a potential solution to the electronic interconnect bandwidth bottleneck. SiPh combines the mature technology of decades of fabrication development with the unique functionality of next-generation optical interconnects. Finer fabrication techniques have allowed for manufacturing physical characteristics of waveguide structures that can support multiple modes in a single waveguide. By refining modal characteristics in photonic waveguide structures, through mode multiplexing with the asymmetric y-junction and microring resonator, higher aggregate data bandwidth is demonstrated via various combinations of spatial multiplexing, broadening applications supported by the integrated platform. The main contributions of this dissertation are summarized as follows. Experimental demonstrations of new forms of spatial multiplexing combined together exhibit feasibility of data transmission through mode-division multiplexing (MDM), mode-division and wavelength-division multiplexing (MDM-WDM), and mode-division and polarization-division multiplexing (MDM-PDM) through a C-band, Si photonic platform. Error-free operation through mode multiplexers and demultiplexers shows how data can be viably scaled on multiple modes and with existing spatial domains simultaneously. Furthermore, we explore expanding device channel support from two to three arms. Finding that a slight mismatch in the third arm can increase crosstalk contributions considerably, especially when increasing data rate, we explore a methodical way to design the asymmetric y-junction device by considering its angles and multiplexer/demultiplexer arm width. By taking into consideration device fabrication variations, we turn towards optimizing device performance post-fabrication. Through ModePROP simulations, dynamic post-fabrication optimization of device performance is analyzed, by either electro-optical or thermo-optical means. By biasing the arm introducing the slight spectral offset, we can quantifiably improve device performance. Scaling bandwidth is experimentally demonstrated through the device at 3 modes, 2 wavelengths, and 40 Gb/s data rate for 240 Gb/s aggregate bandwidth, with the potential to reduce power penalty per the device optimization process we described. A main motivation for this on-chip spatial multiplexing is the need to reduce costs. As the laser source serves as the greatest power consumer in an optical system, mode-division multiplexing and other forms of spatial multiplexing can be implemented to push its potentially prohibitive cost metrics down. In order to demonstrate an intelligent platform capable of dynamically multicasting data and reallocating power as needed by the system, we must first initialize the switch fabric for control with an electronic interface. A dithering mechanism, whereby exact cross, bar, and sub-percentage states are enforced through the device, is described here. Such a method could be employed for actuating the device table of bias values to states automatically. We then employ a dynamic power reallocation algorithm through a data acquisition unit, showing real-time channel recovery for channels experiencing power loss by diverting power from paths that could tolerate it.
The data that is being multicast through the system is experimentally shown to be error-free at a 40 Gb/s data rate, when transmitting from one to three clients and going from automatic bar/cross states to equalized power distribution. For the last portion of this topic, the switch fabric was inserted into a high-performance computing system. In order to run benchmarks with 10 Gb/s data on top of the switch fabric, a newer model of the control plane was implemented to toggle states according to the command issued by the server. Such a programmable mechanism will prove necessary in future implementations of optical subsystems embedded inside larger systems, like data centers. Beyond the specific control plane demonstrated, the idea of an intelligent photonic layer can be applied to alleviate many kinds of optical channel abnormalities or accommodate switching based on different patterns in data transmission. Finally, the experimental demonstration of a coherent perfect absorption Si modulator is exhibited, showing a viable extinction ratio of 24.5 dB. Using this coherent perfect absorption mechanism to demodulate signals, there is the added benefit of differential reception. Currently, an automated process for data collection is employed at a faster time scale than instabilities present in fibers in the setup, with future implementations eliminating the off-chip phase modulator for greater signal stability. The field of SiPh has developed to a stage where specific application domains can take off and compete according to industrial-level standards. The work in this dissertation contributes to experimental demonstration of a newly developing area of mode-division multiplexing for substantially increasing bandwidth on-chip. While implementing the discussed photonic devices in dynamic systems, various attributes of integrated photonics are leveraged with existing electronic technologies. Future generations of computing systems should then be designed by implementing both system- and device-level considerations. (Abstract shortened by ProQuest.).
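The dynamic power reallocation idea described above, diverting power from channels that can tolerate a loss toward channels that have dropped below a target level, can be sketched as a simple greedy rebalance. The channel names, target level, and transfer rule below are assumptions for illustration only, not the dissertation's actual control-plane algorithm.

```python
# Illustrative greedy sketch of the power-reallocation idea: move power surplus
# from channels above a target level to channels below it. Channel names, target,
# and transfer rule are assumptions, not the actual control-plane algorithm.

def reallocate(power_linear, target):
    """Move surplus (power above target) toward deficient channels, greedily."""
    surplus = {ch: p - target for ch, p in power_linear.items() if p > target}
    deficit = {ch: target - p for ch, p in power_linear.items() if p < target}
    out = dict(power_linear)
    for d_ch in sorted(deficit, key=deficit.get, reverse=True):
        for s_ch in sorted(surplus, key=surplus.get, reverse=True):
            move = min(surplus[s_ch], target - out[d_ch])
            if move <= 0:
                continue
            out[s_ch] -= move          # donor channel gives up headroom
            out[d_ch] += move          # deficient channel recovers toward target
            surplus[s_ch] -= move
    return out

channels = {"ch1": 1.2, "ch2": 0.6, "ch3": 1.1}   # linear power, arbitrary units
print(reallocate(channels, target=1.0))            # total power is conserved
```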
Methods and apparatus of analyzing electrical power grid data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafen, Ryan P.; Critchlow, Terence J.; Gibson, Tara D.
Apparatus and methods of processing large-scale data regarding an electrical power grid are described. According to one aspect, a method of processing large-scale data regarding an electrical power grid includes accessing a large-scale data set comprising information regarding an electrical power grid; processing data of the large-scale data set to identify a filter which is configured to remove erroneous data from the large-scale data set; using the filter, removing erroneous data from the large-scale data set; and after the removing, processing data of the large-scale data set to identify an event detector which is configured to identify events of interest in the large-scale data set.
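A rough sketch of the filter-then-detect pattern claimed above is given below: obviously erroneous samples are removed first, and an event detector then runs on the cleaned series. The thresholds, the frequency interpretation of the samples, and the notion of an "event" are invented for illustration and are not taken from the patent.

```python
# Sketch of the two-stage pattern described above: first filter obviously erroneous
# samples, then run an event detector on the cleaned series. The thresholds and the
# definition of an "event" below are assumptions for illustration only.

def remove_erroneous(samples, lo=59.0, hi=61.0):
    """Drop grid-frequency samples outside a plausible range (bad sensor readings)."""
    return [s for s in samples if lo <= s <= hi]

def detect_events(samples, jump=0.05):
    """Flag indices where consecutive samples change by more than `jump` Hz."""
    return [i for i in range(1, len(samples)) if abs(samples[i] - samples[i - 1]) > jump]

raw = [60.00, 60.01, 0.0, 59.98, 60.10, 60.02, 999.9, 60.01]
clean = remove_erroneous(raw)            # 0.0 and 999.9 are removed by the filter
print("events at indices:", detect_events(clean))
```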
Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo
2012-12-01
A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X(1)) and blending times (X(2)) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability of the compression rate of the powder mixture (Y(1)), tablet hardness (Y(2)), and dissolution rate (Y(3)) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. The constant Froude number was applied as a scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on a large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in the pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
Double inflation - A possible resolution of the large-scale structure problem
NASA Technical Reports Server (NTRS)
Turner, Michael S.; Villumsen, Jens V.; Vittorio, Nicola; Silk, Joseph; Juszkiewicz, Roman
1987-01-01
A model is presented for the large-scale structure of the universe in which two successive inflationary phases resulted in large small-scale and small large-scale density fluctuations. This bimodal density fluctuation spectrum in an Omega = 1 universe dominated by hot dark matter leads to large-scale structure of the galaxy distribution that is consistent with recent observational results. In particular, large, nearly empty voids and significant large-scale peculiar velocity fields are produced over scales of about 100 Mpc, while the small-scale structure over less than about 10 Mpc resembles that in a low-density universe, as observed. Detailed analytical calculations and numerical simulations are given of the spatial and velocity correlations.
Hammersvik, Eirik; Sandberg, Sveinung; Pedersen, Willy
2012-11-01
Over the past 15-20 years, domestic cultivation of cannabis has been established in a number of European countries. New techniques have made such cultivation easier; however, the bulk of growers remain small-scale. In this study, we explore the factors that prevent small-scale growers from increasing their production. The study is based on 1 year of ethnographic fieldwork and qualitative interviews conducted with 45 Norwegian cannabis growers, 10 of whom were growing on a large scale and 35 on a small scale. The study identifies five mechanisms that prevent small-scale indoor growers from going large-scale. First, large-scale operations involve a number of people, large sums of money, a high workload and a high risk of detection, and thus demand a higher level of organizational skills than for small growing operations. Second, financial assets are needed to start a large 'grow-site'. Housing rent, electricity, equipment and nutrients are expensive. Third, to be able to sell large quantities of cannabis, growers need access to an illegal distribution network and knowledge of how to act according to black market norms and structures. Fourth, large-scale operations require advanced horticultural skills to maximize yield and quality, which demands greater skills and knowledge than does small-scale cultivation. Fifth, small-scale growers are often embedded in the 'cannabis culture', which emphasizes anti-commercialism, anti-violence and ecological and community values. Hence, starting up large-scale production will imply having to renegotiate or abandon these values. Going from small- to large-scale cannabis production is a demanding task, ideologically, technically, economically and personally. The many obstacles that small-scale growers face and the lack of interest and motivation for going large-scale suggest that the risk of a 'slippery slope' from small-scale to large-scale growing is limited. Possible political implications of the findings are discussed. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kleeorin, N.
2018-06-01
We discuss a mean-field theory of the generation of large-scale vorticity in a rotating density stratified developed turbulence with inhomogeneous kinetic helicity. We show that the large-scale non-uniform flow is produced due to either a combined action of a density stratified rotating turbulence and uniform kinetic helicity or a combined effect of a rotating incompressible turbulence and inhomogeneous kinetic helicity. These effects result in the formation of a large-scale shear, and in turn its interaction with the small-scale turbulence causes an excitation of the large-scale instability (known as a vorticity dynamo) due to a combined effect of the large-scale shear and Reynolds stress-induced generation of the mean vorticity. The latter is due to the effect of large-scale shear on the Reynolds stress. A fast rotation suppresses this large-scale instability.
On large-scale dynamo action at high magnetic Reynolds number
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cattaneo, F.; Tobias, S. M., E-mail: smt@maths.leeds.ac.uk
2014-07-01
We consider the generation of magnetic activity—dynamo waves—in the astrophysical limit of very large magnetic Reynolds number. We consider kinematic dynamo action for a system consisting of helical flow and large-scale shear. We demonstrate that large-scale dynamo waves persist at high Rm if the helical flow is characterized by a narrow band of spatial scales and the shear is large enough. However, for a wide band of scales the dynamo becomes small scale with a further increase of Rm, with dynamo waves re-emerging only if the shear is then increased. We show that at high Rm, the key effect of the shear is to suppress small-scale dynamo action, allowing large-scale dynamo action to be observed. We conjecture that this supports a general 'suppression principle'—large-scale dynamo action can only be observed if there is a mechanism that suppresses the small-scale fluctuations.
Inverse Interscale Transport of the Reynolds Shear Stress in Plane Couette Turbulence
NASA Astrophysics Data System (ADS)
Kawata, Takuya; Alfredsson, P. Henrik
2018-06-01
Interscale interaction between small-scale structures near the wall and large-scale structures away from the wall plays an increasingly important role with increasing Reynolds number in wall-bounded turbulence. While the top-down influence from the large- to small-scale structures is well known, it has been unclear whether the small scales near the wall also affect the large scales away from the wall. In this Letter we show that the small-scale near-wall structures indeed play a role to maintain the large-scale structures away from the wall, by showing that the Reynolds shear stress is transferred from small to large scales throughout the channel. This is in contrast to the turbulent kinetic energy transport which is from large to small scales. Such an "inverse" interscale transport of the Reynolds shear stress eventually supports the turbulent energy production at large scales.
IS THE SMALL-SCALE MAGNETIC FIELD CORRELATED WITH THE DYNAMO CYCLE?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karak, Bidya Binay; Brandenburg, Axel, E-mail: bbkarak@nordita.org
2016-01-01
The small-scale magnetic field is ubiquitous at the solar surface—even at high latitudes. From observations we know that this field is uncorrelated (or perhaps even weakly anticorrelated) with the global sunspot cycle. Our aim is to explore the origin, and particularly the cycle dependence, of such a phenomenon using three-dimensional dynamo simulations. We adopt a simple model of a turbulent dynamo in a shearing box driven by helically forced turbulence. Depending on the dynamo parameters, large-scale (global) and small-scale (local) dynamos can be excited independently in this model. Based on simulations in different parameter regimes, we find that, when only the large-scale dynamo is operating in the system, the small-scale magnetic field generated through shredding and tangling of the large-scale magnetic field is positively correlated with the global magnetic cycle. However, when both dynamos are operating, the small-scale field is produced from both the small-scale dynamo and the tangling of the large-scale field. In this situation, when the large-scale field is weaker than the equipartition value of the turbulence, the small-scale field is almost uncorrelated with the large-scale magnetic cycle. On the other hand, when the large-scale field is stronger than the equipartition value, we observe an anticorrelation between the small-scale field and the large-scale magnetic cycle. This anticorrelation can be interpreted as a suppression of the small-scale dynamo. Based on our studies we conclude that the observed small-scale magnetic field in the Sun is generated by the combined mechanisms of a small-scale dynamo and tangling of the large-scale field.
NASA Astrophysics Data System (ADS)
Guiquan, Xi; Lin, Cong; Xuehui, Jin
2018-05-01
As an important platform for scientific and technological development, large-scale scientific facilities are the cornerstone of technological innovation and a guarantee for economic and social development. Research on the management of large-scale scientific facilities can play a key role in scientific research, sociology and key national strategy. This paper reviews the characteristics of large-scale scientific facilities and summarizes the development status of China's large-scale scientific facilities. Finally, the construction, management, operation and evaluation of large-scale scientific facilities are analyzed from the perspective of sustainable development.
NASA Astrophysics Data System (ADS)
Brasseur, James G.; Juneja, Anurag
1996-11-01
Previous DNS studies indicate that small-scale structure can be directly altered through "distant" dynamical interactions by energetic forcing of the large scales. To remove the possibility of stimulating energy transfer between the large- and small-scale motions in these long-range interactions, we here perturb the large-scale structure without altering its energy content by suddenly altering only the phases of large-scale Fourier modes. Scale-dependent changes in turbulence structure appear as a nonzero difference field between two simulations from identical initial conditions of isotropic decaying turbulence, one perturbed and one unperturbed. We find that the large-scale phase perturbations leave the evolution of the energy spectrum virtually unchanged relative to the unperturbed turbulence. The difference field, on the other hand, is strongly affected by the perturbation. Most importantly, the time scale τ characterizing the change in turbulence structure at spatial scale r shortly after initiating a change in large-scale structure decreases with decreasing turbulence scale r. Thus, structural information is transferred directly from the large- to the smallest-scale motions in the absence of direct energy transfer - a long-range effect which cannot be explained by a linear mechanism such as rapid distortion theory. * Supported by ARO grant DAAL03-92-G-0117
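The perturbation described above, changing only the phases of large-scale Fourier modes while leaving their amplitudes (and hence the energy spectrum) untouched, can be sketched in one dimension as follows. The cutoff wavenumber and the stand-in field are placeholders; the actual study perturbed three-dimensional DNS fields.

```python
# 1-D sketch of a phase-only perturbation: randomize the phases of the
# low-wavenumber (large-scale) Fourier modes while keeping every modal amplitude,
# so the energy content is unchanged. The cutoff and field are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 256
u = rng.standard_normal(n)                  # stand-in for one velocity component

U = np.fft.rfft(u)
k = np.arange(U.size)
k_cut = 8                                   # "large scales": modes with k <= k_cut

phase = np.exp(1j * rng.uniform(0, 2 * np.pi, U.size))
U_pert = np.where(k <= k_cut, np.abs(U) * phase, U)   # new phases, same amplitudes
U_pert[0] = U[0]                            # keep the mean (DC) mode untouched

u_pert = np.fft.irfft(U_pert, n)

# Energy is preserved to round-off because only phases were changed.
print(np.allclose(np.sum(np.abs(U)**2), np.sum(np.abs(U_pert)**2)))
```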
NASA Astrophysics Data System (ADS)
Solanki, K.; Hauksson, E.; Kanamori, H.; Wu, Y.; Heaton, T.; Boese, M.
2007-12-01
We have implemented an on-site early warning algorithm using the infrastructure of the Caltech/USGS Southern California Seismic Network (SCSN). We are evaluating the real-time performance of the software system and the algorithm for rapid assessment of earthquakes. In addition, we are interested in understanding what parts of the SCSN need to be improved to make early warning practical. Our EEW processing system is composed of many independent programs that process waveforms in real-time. The codes were generated by using a software framework. The Pd (maximum displacement amplitude of the P wave during the first 3 sec) and Tau-c (a period parameter during the first 3 sec) values determined during the EEW processing are being forwarded to the California Integrated Seismic Network (CISN) web page for independent evaluation of the results. The on-site algorithm measures the amplitude of the P-wave (Pd) and the frequency content of the P-wave during the first three seconds (Tau-c). The Pd and the Tau-c values make it possible to discriminate between a variety of events such as large distant events, nearby small events, and potentially damaging nearby events. The Pd can be used to infer the expected maximum ground shaking. The method relies on data from a single station although it will become more reliable if readings from several stations are associated. To eliminate false triggers from stations with high background noise levels, we have created a per-station Pd threshold configuration for the Pd/Tau-c algorithm. To determine appropriate values for the Pd threshold, we calculate Pd thresholds for stations based on the information from the EEW logs. We have operated our EEW test system for about a year and recorded numerous earthquakes in the magnitude range from M3 to M5. Two recent examples are a M4.5 earthquake near Chatsworth and a M4.7 earthquake near Elsinore. In both cases, the Pd and Tau-c parameters were determined successfully within 10 to 20 sec of the arrival of the P-wave at the station. The Tau-c values predicted the magnitude within 0.1 and the predicted average peak-ground-motion was 0.7 cm/s and 0.6 cm/s. The delays in the system are caused mostly by the packetizing delay because our software system is based on processing miniseed packets. Most recently we have begun reducing the data latency using new qmaserv2 software for the Q330 Quanterra datalogger. We implemented qmaserv2-based multicast receiver software to receive the native 1 sec packets from the dataloggers. The receiver reads multicast packets from the network and writes them into a shared memory area. This new software will fully take advantage of the capabilities of the Q330 datalogger and significantly reduce data latency for the EEW system. We have also implemented a new EEW sub-system that complements the currently running EEW system by associating Pd and Tau-c values from multiple stations. So far, we have implemented a new trigger generation algorithm for real-time processing for the sub-system, and are able to routinely locate events and determine magnitudes using the Pd and Tau-c values.
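A minimal sketch of the two single-station parameters named above is given below, assuming a displacement record that starts at the P-wave arrival. The Tau-c expression follows the commonly cited form tau_c = 2*pi/sqrt(r), with r the ratio of integrated squared velocity to integrated squared displacement over the first 3 s; the sampling rate and synthetic waveform are made up for illustration.

```python
# Sketch of the single-station Pd / Tau-c measurements described above, assuming a
# displacement record u(t) starting at the P-wave arrival. tau_c uses the commonly
# cited form tau_c = 2*pi / sqrt( sum(du/dt ^2) / sum(u^2) ) over the first 3 s.
# The sampling rate and synthetic waveform are assumptions, not SCSN data.
import numpy as np

fs = 100.0                                   # samples per second (assumed)
t = np.arange(0, 3.0, 1.0 / fs)              # first 3 s after the P arrival
u = 0.5e-2 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-t)   # displacement, cm (synthetic)

du = np.gradient(u, 1.0 / fs)                # velocity by finite differences

Pd = np.max(np.abs(u))                                   # peak displacement amplitude
r = np.sum(du**2) / np.sum(u**2)                         # dt cancels for uniform sampling
tau_c = 2.0 * np.pi / np.sqrt(r)                         # characteristic period, s

print(f"Pd = {Pd:.4f} cm, tau_c = {tau_c:.2f} s")
```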
Energy transfers in large-scale and small-scale dynamos
NASA Astrophysics Data System (ADS)
Samtaney, Ravi; Kumar, Rohit; Verma, Mahendra
2015-11-01
We present the energy transfers, mainly energy fluxes and shell-to-shell energy transfers, in the small-scale dynamo (SSD) and large-scale dynamo (LSD) using numerical simulations of MHD turbulence for Pm = 20 (SSD) and for Pm = 0.2 (LSD) on a 1024^3 grid. For the SSD, we demonstrate that the magnetic energy growth is caused by nonlocal energy transfers from the large-scale or forcing-scale velocity field to the small-scale magnetic field. The peak of these energy transfers moves towards lower wavenumbers as the dynamo evolves, which is the reason for the growth of the magnetic fields at the large scales. The energy transfers U2U (velocity to velocity) and B2B (magnetic to magnetic) are forward and local. For the LSD, we show that the magnetic energy growth takes place via energy transfers from the large-scale velocity field to the large-scale magnetic field. We observe forward U2U and B2B energy flux, similar to the SSD.
ERIC Educational Resources Information Center
Oliveri, Maria Elena; von Davier, Matthias
2014-01-01
In this article, we investigate the creation of comparable score scales across countries in international assessments. We examine potential improvements to current score scale calibration procedures used in international large-scale assessments. Our approach seeks to improve fairness in scoring international large-scale assessments, which often…
MATIN: A Random Network Coding Based Framework for High Quality Peer-to-Peer Live Video Streaming
Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño
2013-01-01
In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as a header has been the most important challenge of RNC. Moreover, due to employing the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method so that there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations, so peers incur very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC with Gauss-Jordan elimination by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay. PMID:23940530
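The coefficient-vector overhead and Gauss-Jordan decoding discussed above can be illustrated with a toy random linear network coding example. Real systems typically operate over GF(2^8); the small prime field, block sizes, and helper functions below are simplifications for illustration and do not reproduce MATIN's coefficient-matrix construction.

```python
# Toy random linear network coding sketch: each coded packet carries a coefficients
# vector (the overhead discussed above) plus a linear combination of the original
# blocks, and the receiver inverts the coefficient matrix by Gauss-Jordan elimination.
# GF(257), the block values, and the packet count are assumptions for illustration.
import random

P = 257                                       # small prime field modulus (assumption)

def encode(blocks, rng):
    """Return (coefficients, coded_block): one random linear combination of blocks."""
    coeffs = [rng.randrange(P) for _ in blocks]
    coded = [sum(c * b[i] for c, b in zip(coeffs, blocks)) % P
             for i in range(len(blocks[0]))]
    return coeffs, coded

def decode(packets, n):
    """Gauss-Jordan elimination over GF(P) on augmented rows [coeffs | coded]."""
    rows = [list(c) + list(d) for c, d in packets]
    for col in range(n):
        pivot = next(r for r in range(col, len(rows)) if rows[r][col] % P != 0)
        rows[col], rows[pivot] = rows[pivot], rows[col]
        inv = pow(rows[col][col], -1, P)                  # modular inverse of pivot
        rows[col] = [(x * inv) % P for x in rows[col]]
        for r in range(len(rows)):
            if r != col and rows[r][col] % P != 0:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [row[n:] for row in rows[:n]]

rng = random.Random(1)
blocks = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]       # n = 3 original blocks
packets = [encode(blocks, rng) for _ in range(4)]          # 4 coded packets received
print(decode(packets, n=3) == blocks)                      # original blocks recovered
```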
A CBLT and MCST capable VME slave interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wuerthwein, F.; Strohman, C.; Honscheid, K.
1996-12-31
We report on the development of a VME slave interface for the CLEO III detector implemented in an ALTERA EPM7256 CPLD. This includes the first implementation of the chained block transfer protocol (CBLT) and multi-cast cycles (MCST) as defined by the VME-P task group of VIPA. Within VME64 there is no operation that guarantees efficient readout of large blocks of data that are sparsely distributed among a series of slave modules in a VME crate. This has led the VME-P task group of VIPA to specify protocols that enable a master to address many slaves at a single address. Which slave is to drive the data bus is determined by a token passing mechanism that uses the *IACKOUT, *IACKIN daisy chain. This protocol requires no special features from the master besides conformance to VME64. Non-standard features are restricted to the VME slave interface. The CLEO III detector comprises approximately 400,000 electronic channels that have to be digitized, sparsified, and stored within 20 μs in order to incur less than 2% dead time at an anticipated trigger rate of 1000 Hz. 95% of these channels are accounted for by only two detector subsystems, the silicon microstrip detector (125,000 channels) and the ring imaging Cerenkov detector (RICH) (230,400 channels). After sparsification either of these two detector subsystems is expected to provide event fragments on the order of 10 KBytes, spread over 4 and 8 VME crates, respectively. We developed a chip set that sparsifies, tags, and stores the incoming digital data on the data boards, and includes a VME slave interface that implements the MCST and CBLT protocols. In this poster, we briefly describe this chip set and then discuss the VME slave interface in detail.
A survey of system architecture requirements for health care-based wireless sensor networks.
Egbogah, Emeka E; Fapojuwo, Abraham O
2011-01-01
Wireless Sensor Networks (WSNs) have emerged as a viable technology for a vast number of applications, including health care applications. To best support these health care applications, WSN technology can be adopted for the design of practical Health Care WSNs (HCWSNs) that support the key system architecture requirements of reliable communication, node mobility support, multicast technology, energy efficiency, and the timely delivery of data. Work in the literature mostly focuses on the physical design of the HCWSNs (e.g., wearable sensors, in vivo embedded sensors, et cetera). However, work towards enhancing the communication layers (i.e., routing, medium access control, et cetera) to improve HCWSN performance is largely lacking. In this paper, the information gleaned from an extensive literature survey is shared in an effort to fortify the knowledge base for the communication aspect of HCWSNs. We highlight the major currently existing prototype HCWSNs and also provide the details of their routing protocol characteristics. We also explore the current state of the art in medium access control (MAC) protocols for WSNs, for the purpose of seeking an energy efficient solution that is robust to mobility and delivers data in a timely fashion. Furthermore, we review a number of reliable transport layer protocols, including a network coding based protocol from the literature, that are potentially suitable for delivering end-to-end reliability of data transmitted in HCWSNs. We identify the advantages and disadvantages of the reviewed MAC, routing, and transport layer protocols as they pertain to the design and implementation of an HCWSN. The findings from this literature survey will serve as a useful foundation for designing a reliable HCWSN and also contribute to the development and evaluation of protocols for improving the performance of future HCWSNs. Open issues that require further investigation are highlighted.
Phase-relationships between scales in the perturbed turbulent boundary layer
NASA Astrophysics Data System (ADS)
Jacobi, I.; McKeon, B. J.
2017-12-01
The phase-relationship between large-scale motions and small-scale fluctuations in a non-equilibrium turbulent boundary layer was investigated. A zero-pressure-gradient flat plate turbulent boundary layer was perturbed by a short array of two-dimensional roughness elements, both statically and under dynamic actuation. Within the compound dynamic perturbation, the forcing generated a synthetic very-large-scale motion (VLSM) within the flow. The flow was decomposed by phase-locking the flow measurements to the roughness forcing, and the phase-relationship between the synthetic VLSM and remaining fluctuating scales was explored by correlation techniques. The general relationship between large- and small-scale motions in the perturbed flow, without phase-locking, was also examined. The synthetic large scale cohered with smaller scales in the flow via a phase-relationship that is similar to that of natural large scales in an unperturbed flow, but with a much stronger organizing effect. Cospectral techniques were employed to describe the physical implications of the perturbation on the relative orientation of large- and small-scale structures in the flow. The correlation and cospectral techniques provide tools for designing more efficient control strategies that can indirectly control small-scale motions via the large scales.
Skin Friction Reduction Through Large-Scale Forcing
NASA Astrophysics Data System (ADS)
Bhatt, Shibani; Artham, Sravan; Gnanamanickam, Ebenezer
2017-11-01
Flow structures in a turbulent boundary layer larger than an integral length scale (δ), referred to as large-scales, interact with the finer scales in a non-linear manner. By targeting these large-scales and exploiting this non-linear interaction, wall shear stress (WSS) reduction of over 10% has been achieved. The plane wall jet (PWJ), a boundary layer which has highly energetic large-scales that become turbulent independent of the near-wall finer scales, is the chosen model flow field. Its unique configuration allows for the independent control of the large-scales through acoustic forcing. Perturbation wavelengths from about 1δ to 14δ were considered, with a reduction in WSS for all wavelengths considered. This reduction, over a large subset of the wavelengths, scales with both inner and outer variables, indicating a mixed scaling of the underlying physics, while also showing dependence on the PWJ global properties. A triple decomposition of the velocity fields shows an increase in coherence due to forcing, with a clear organization of the small-scale turbulence with respect to the introduced large-scale. The maximum reduction in WSS occurs when the introduced large-scale acts in a manner so as to reduce the turbulent activity in the very near wall region. This material is based upon work supported by the Air Force Office of Scientific Research under Award Number FA9550-16-1-0194 monitored by Dr. Douglas Smith.
Large-scale dynamo growth rates from numerical simulations and implications for mean-field theories
NASA Astrophysics Data System (ADS)
Park, Kiwan; Blackman, Eric G.; Subramanian, Kandaswamy
2013-05-01
Understanding large-scale magnetic field growth in turbulent plasmas in the magnetohydrodynamic limit is a goal of magnetic dynamo theory. In particular, assessing how well large-scale helical field growth and saturation in simulations match those predicted by existing theories is important for progress. Using numerical simulations of isotropically forced turbulence without large-scale shear with its implications, we focus on several additional aspects of this comparison: (1) Leading mean-field dynamo theories which break the field into large and small scales predict that large-scale helical field growth rates are determined by the difference between kinetic helicity and current helicity with no dependence on the nonhelical energy in small-scale magnetic fields. Our simulations show that the growth rate of the large-scale field from fully helical forcing is indeed unaffected by the presence or absence of small-scale magnetic fields amplified in a precursor nonhelical dynamo. However, because the precursor nonhelical dynamo in our simulations produced fields that were strongly subequipartition with respect to the kinetic energy, we cannot yet rule out the potential influence of stronger nonhelical small-scale fields. (2) We have identified two features in our simulations which cannot be explained by the most minimalist versions of two-scale mean-field theory: (i) fully helical small-scale forcing produces significant nonhelical large-scale magnetic energy and (ii) the saturation of the large-scale field growth is time delayed with respect to what minimalist theory predicts. We comment on desirable generalizations to the theory in this context and future desired work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebrahimi, Fatima
Magnetic fields are observed to exist on all scales in many astrophysical sources such as stars, galaxies, and accretion discs. Understanding the origin of large-scale magnetic fields, whereby the field emerges on spatial scales large compared to the fluctuations, has been a particularly long-standing challenge. Our physics objectives are: (1) what are the minimum ingredients for large-scale dynamo growth? and (2) can a large-scale magnetic field grow out of turbulence and be sustained despite the presence of dissipation? These questions are fundamental for understanding the large-scale dynamo in both laboratory and astrophysical plasmas. Here, we report major new findings in the area of the large-scale dynamo (magnetic field generation).
NASA Astrophysics Data System (ADS)
Wang, Honghuan; Xing, Fangyuan; Yin, Hongxi; Zhao, Nan; Lian, Bizhan
2016-02-01
With the explosive growth of network services, reasonable traffic scheduling and efficient configuration of network resources are of great importance for increasing network efficiency. In this paper, an adaptive traffic scheduling policy based on priority and a time window is proposed, and the performance of this algorithm is evaluated in terms of scheduling ratio. Routing and spectrum allocation are achieved by using the Floyd shortest-path algorithm and by establishing a node spectrum resource allocation model based on a greedy algorithm that we propose. A fairness index is introduced to improve the capability of spectrum configuration. The results show that the designed traffic scheduling strategy can be applied to networks with multicast and broadcast functionalities and gives them a real-time, efficient response. The node spectrum configuration scheme improves frequency resource utilization and brings out the efficiency of the network.
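The routing step mentioned above uses the Floyd shortest-path algorithm; a compact Floyd-Warshall sketch on an invented topology is shown below. The node names and link weights are assumptions, and the paper's greedy node-spectrum allocation model is not reproduced.

```python
# Minimal Floyd-Warshall all-pairs shortest-path sketch, as referenced above for
# route computation. The topology and link weights are invented; the greedy
# node-spectrum allocation model from the paper is not reproduced here.
INF = float("inf")

def floyd_warshall(weights):
    """weights: dict-of-dicts adjacency with link costs; returns all-pairs distances."""
    nodes = list(weights)
    dist = {u: {v: (0 if u == v else weights[u].get(v, INF)) for v in nodes} for u in nodes}
    for k in nodes:                      # allow node k as an intermediate hop
        for i in nodes:
            for j in nodes:
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(floyd_warshall(topology)["A"]["D"])    # expected 4, via A-B-C-D
```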
Research on performance of three-layer MG-OXC system based on MLAG and OCDM
NASA Astrophysics Data System (ADS)
Wang, Yubao; Ren, Yanfei; Meng, Ying; Bai, Jian
2017-10-01
At present, as the traffic volume conveyed by optical transport networks and the variety of traffic grooming methods increase rapidly, optical switching techniques face a series of issues, such as growing demand for wavelengths and complicated structure management and implementation. This work introduces optical code switching based on wavelength switching, constructs a three-layer multi-granularity optical cross-connect (MG-OXC) system on the basis of optical code division multiplexing (OCDM), and presents a new traffic grooming algorithm. The proposed architecture can improve the flexibility of traffic grooming, reduce the number of used wavelengths, and save consumed ports; hence, it can simplify the routing devices and enhance system performance significantly. By analyzing the network model of the switching structure on a multicast layered auxiliary graph (MLAG) and the establishment of traffic grooming links, and by simulating blocking probability and throughput, this paper shows the excellent performance of the proposed architecture.
The Xpress Transfer Protocol (XTP): A tutorial (expanded version)
NASA Technical Reports Server (NTRS)
Sanders, Robert M.; Weaver, Alfred C.
1990-01-01
The Xpress Transfer Protocol (XTP) is a reliable, real-time, lightweight transfer layer protocol. Current transport layer protocols such as DoD's Transmission Control Protocol (TCP) and ISO's Transport Protocol (TP) were not designed for the next generation of high speed, interconnected reliable networks such as fiber distributed data interface (FDDI) and the gigabit/second wide area networks. Unlike all previous transport layer protocols, XTP is being designed to be implemented in hardware as a VLSI chip set. By streamlining the protocol, combining the transport and network layers, and utilizing the increased speed and parallelization possible with a VLSI implementation, XTP will be able to provide the end-to-end data transmission rates demanded in high speed networks without compromising reliability and functionality. This paper describes the operation of the XTP protocol and, in particular, its error, flow and rate control; inter-networking addressing mechanisms; and multicast support features, as defined in the XTP Protocol Definition Revision 3.4.
A group communication approach for mobile computing mobile channel: An ISIS tool for mobile services
NASA Astrophysics Data System (ADS)
Cho, Kenjiro; Birman, Kenneth P.
1994-05-01
This paper examines group communication as an infrastructure to support mobility of users, and presents a simple scheme to support user mobility by means of switching a control point between replicated servers. We describe the design and implementation of a set of tools, called Mobile Channel, for use with the ISIS system. Mobile Channel is based on a combination of the two replication schemes: the primary-backup approach and the state machine approach. Mobile Channel implements a reliable one-to-many FIFO channel, in which a mobile client sees a single reliable server; servers, acting as a state machine, see multicast messages from clients. Migrations of mobile clients are handled as an intentional primary switch, and hand-offs or server failures are completely masked to mobile clients. To achieve high performance, servers are replicated at a sliding-window level. Our scheme provides a simple abstraction of migration, eliminates complicated hand-off protocols, provides fault-tolerance and is implemented within the existing group communication mechanism.
Reliable communication in the presence of failures
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Joseph, Thomas A.
1987-01-01
The design and correctness of a communication facility for a distributed computer system are reported on. The facility provides support for fault-tolerant process groups in the form of a family of reliable multicast protocols that can be used in both local- and wide-area networks. These protocols attain high levels of concurrency, while respecting application-specific delivery ordering constraints, and have varying cost and performance that depend on the degree of ordering desired. In particular, a protocol that enforces causal delivery orderings is introduced and shown to be a valuable alternative to conventional asynchronous communication protocols. The facility also ensures that the processes belonging to a fault-tolerant process group will observe consistent orderings of events affecting the group as a whole, including process failures, recoveries, migration, and dynamic changes to group properties like member rankings. A review of several uses for the protocols in the ISIS system, which supports fault-tolerant resilient objects and bulletin boards, illustrates the significant simplification of higher-level algorithms made possible by our approach.
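Causal delivery ordering of the kind described above is commonly realized with vector timestamps; the sketch below shows only the standard delivery test, under made-up process names, and is not the ISIS protocol implementation.

```python
# Sketch of the vector-timestamp delivery rule commonly used to enforce causal
# delivery ordering, in the spirit of the causal multicast described above.
# This illustrates the general technique only; process names are made up and
# this is not the ISIS implementation.

def can_deliver(msg_vt, sender, local_vt):
    """Deliver a message stamped with vector time msg_vt from `sender` once it is
    the next message from that sender and all causally prior messages are in."""
    if msg_vt.get(sender, 0) != local_vt.get(sender, 0) + 1:
        return False                        # not the next-in-sequence from sender
    return all(msg_vt.get(p, 0) <= local_vt.get(p, 0)
               for p in msg_vt if p != sender)   # causal predecessors delivered

def deliver(msg_vt, sender, local_vt):
    """Advance the receiver's knowledge of the sender's message count."""
    local_vt[sender] = msg_vt.get(sender, 0)

# Receiver R has delivered one message from P and none from Q.
local_vt = {"P": 1, "Q": 0}
m_from_q = {"P": 2, "Q": 1}     # Q multicast this after seeing P's second message
print(can_deliver(m_from_q, "Q", local_vt))   # False: P's second message is missing
m_from_p = {"P": 2, "Q": 0}
deliver(m_from_p, "P", local_vt)              # P's second message now arrives
print(can_deliver(m_from_q, "Q", local_vt))   # True: causal predecessors delivered
```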
TeCo3D: a 3D telecooperation application based on VRML and Java
NASA Astrophysics Data System (ADS)
Mauve, Martin
1998-12-01
In this paper we present a method for sharing collaboration-unaware VRML content, e.g. 3D models which were not specifically developed for use in a distributed environment. This functionality is an essential requirement for the inclusion of arbitrary VRML content, as generated by standard CAD or animation software, into teleconferencing sessions. We have developed a 3D TeleCooperation (TeCo3D) prototype to demonstrate the feasibility of our approach. The basic services provided by the prototype are the distribution of cooperation-unaware VRML content, the sharing of user interactions, and the joint viewing of the content. In order to achieve maximum portability, the prototype was developed completely in Java. This paper presents general aspects of sharing VRML content as well as the concepts, the architecture and the services of the TeCo3D prototype. Our approach relies on existing VRML browsers as the VRML presentation and execution engines, while reliable multicast is used as the means of communication to provide for scalability.
NASA Technical Reports Server (NTRS)
Iannicca, Dennis; Hylton, Alan; Ishac, Joseph
2012-01-01
Delay-Tolerant Networking (DTN) is an active area of research in the space communications community. DTN uses a standard layered approach with the Bundle Protocol operating on top of transport layer protocols known as convergence layers that actually transmit the data between nodes. Several different common transport layer protocols have been implemented as convergence layers in DTN implementations including User Datagram Protocol (UDP), Transmission Control Protocol (TCP), and Licklider Transmission Protocol (LTP). The purpose of this paper is to evaluate several stand-alone implementations of negative-acknowledgment based transport layer protocols to determine how they perform in a variety of different link conditions. The transport protocols chosen for this evaluation include Consultative Committee for Space Data Systems (CCSDS) File Delivery Protocol (CFDP), Licklider Transmission Protocol (LTP), NACK-Oriented Reliable Multicast (NORM), and Saratoga. The test parameters that the protocols were subjected to are characteristic of common communications links ranging from terrestrial to cis-lunar and apply different levels of delay, line rate, and error.
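The protocols evaluated above share a negative-acknowledgment principle: the receiver stays silent for data that arrives and reports only the gaps, which the sender then retransmits. The toy simulation below illustrates that idea only; the block count, loss rate, and round structure are assumptions and do not represent any of the listed protocols' wire formats or timers.

```python
# Toy simulation of the negative-acknowledgment principle shared by the protocols
# evaluated above (CFDP, LTP, NORM, Saratoga): the receiver reports only gaps and
# the sender retransmits them. Block count, loss rate, and rounds are assumptions.
import random

def transmit(blocks, loss_rate, rng):
    """Deliver each block independently with probability (1 - loss_rate)."""
    return {i: b for i, b in blocks.items() if rng.random() > loss_rate}

def run_transfer(n_blocks=20, loss_rate=0.3, seed=42):
    rng = random.Random(seed)
    source = {i: f"block-{i}" for i in range(n_blocks)}
    received = {}
    pending = dict(source)                         # first pass sends everything
    rounds = 0
    while pending:
        received.update(transmit(pending, loss_rate, rng))
        nack = sorted(set(source) - set(received))  # receiver reports gaps only
        pending = {i: source[i] for i in nack}      # sender retransmits the gaps
        rounds += 1
    return rounds

print("transfer completed in", run_transfer(), "rounds of (re)transmission")
```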
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.
2016-07-06
Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth, amplify the large-scale fields exponentially before turbulence and high wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field and we find that a feedback from horizontal low wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.
Male group size, female distribution and changes in sexual segregation by Roosevelt elk
Peterson, Leah M.
2017-01-01
Sexual segregation, or the differential use of space by males and females, is hypothesized to be a function of body size dimorphism. Sexual segregation can also manifest at small (social segregation) and large (habitat segregation) spatial scales for a variety of reasons. Furthermore, the connection between small- and large-scale sexual segregation has rarely been addressed. We studied a population of Roosevelt elk (Cervus elaphus roosevelti) across 21 years in north coastal California, USA, to assess small- and large-scale sexual segregation in winter. We hypothesized that male group size would associate with small-scale segregation and that a change in female distribution would associate with large-scale segregation. Variation in forage biomass might also be coupled to small and large-scale sexual segregation. Our findings were consistent with male group size associating with small-scale segregation and a change in female distribution associating with large-scale segregation. Females appeared to avoid large groups comprised of socially dominant males. Males appeared to occupy a habitat vacated by females because of a wider forage niche, greater tolerance to lethal risks, and, perhaps, to reduce encounters with other elk. Sexual segregation at both spatial scales was a poor predictor of forage biomass. Size dimorphism was coupled to change in sexual segregation at small and large spatial scales. Small scale segregation can seemingly manifest when all forage habitat is occupied by females and large scale segregation might happen when some forage habitat is not occupied by females. PMID:29121076
ERIC Educational Resources Information Center
Alexander, George
1984-01-01
Discusses small-scale integrated (SSI), medium-scale integrated (MSI), large-scale integrated (LSI), very large-scale integrated (VLSI), and ultra large-scale integrated (ULSI) chips. The development and properties of these chips, uses of gallium arsenide, Josephson devices (two superconducting strips sandwiching a thin insulator), and future…
Impact of large-scale tides on cosmological distortions via redshift-space power spectrum
NASA Astrophysics Data System (ADS)
Akitsu, Kazuyuki; Takada, Masahiro
2018-03-01
Although large-scale perturbations beyond a finite-volume survey region are not direct observables, these affect measurements of clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, compared with the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies in a way depending on an alignment between the tide, wave vector of small-scale modes and line-of-sight direction. Using the perturbation theory of structure formation, we derive a response function of the redshift-space power spectrum to large-scale tide. We then investigate the impact of large-scale tide on estimation of cosmological distances and the redshift-space distortion parameter via the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than an additional source of the statistical errors, and show that a degradation in the parameter is restored if we can employ the prior on the rms amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained at an accuracy better than the CDM prediction, if the effects up to a larger wave number in the nonlinear regime can be included.
NASA Astrophysics Data System (ADS)
Blackman, Eric G.; Subramanian, Kandaswamy
2013-02-01
The extent to which large-scale magnetic fields are susceptible to turbulent diffusion is important for interpreting the need for in situ large-scale dynamos in astrophysics and for observationally inferring field strengths compared to kinetic energy. By solving coupled evolution equations for magnetic energy and magnetic helicity in a system initialized with isotropic turbulence and an arbitrarily helical large-scale field, we quantify the decay rate of the latter for a bounded or periodic system. The magnetic energy associated with the non-helical large-scale field decays at least as fast as the kinematically estimated turbulent diffusion rate, but the decay rate of the helical part depends on whether the ratio of its magnetic energy to the turbulent kinetic energy exceeds a critical value given by M1,c = (k1/k2)^2, where k1 and k2 are the wavenumbers of the large and forcing scales. Turbulently diffusing helical fields to small scales while conserving magnetic helicity requires a rapid increase in total magnetic energy. As such, only when the helical field is subcritical can it so diffuse. When supercritical, it decays slowly, at a rate determined by microphysical dissipation even in the presence of macroscopic turbulence. In effect, turbulent diffusion of such a large-scale helical field produces small-scale helicity whose amplification abates further turbulent diffusion. Two curious implications are that (1) standard arguments supporting the need for in situ large-scale dynamos based on the otherwise rapid turbulent diffusion of large-scale fields require re-thinking since only the large-scale non-helical field is so diffused in a closed system. Boundary terms could however provide potential pathways for rapid change of the large-scale helical field. (2) Since M1,c ≪ 1 for k1 ≪ k2, the presence of long-lived ordered large-scale helical fields as in extragalactic jets do not guarantee that the magnetic field dominates the kinetic energy.
Generation of Large-Scale Magnetic Fields by Small-Scale Dynamo in Shear Flows.
Squire, J; Bhattacharjee, A
2015-10-23
We propose a new mechanism for a turbulent mean-field dynamo in which the magnetic fluctuations resulting from a small-scale dynamo drive the generation of large-scale magnetic fields. This is in stark contrast to the common idea that small-scale magnetic fields should be harmful to large-scale dynamo action. These dynamos occur in the presence of a large-scale velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the "shear-current" effect. Given the inevitable existence of nonhelical small-scale magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help explain the generation of large-scale magnetic fields across a wide range of astrophysical objects.
The Large-scale Structure of the Universe: Probes of Cosmology and Structure Formation
NASA Astrophysics Data System (ADS)
Noh, Yookyung
The usefulness of large-scale structure as a probe of cosmology and structure formation is increasing as large deep surveys in multi-wavelength bands become possible. The observational analysis of large-scale structure, guided by large-volume numerical simulations, is beginning to offer complementary information and cross-checks of cosmological parameters estimated from the anisotropies in the Cosmic Microwave Background (CMB) radiation. Understanding structure formation and evolution, and even galaxy formation history, is also being aided by observations of different redshift snapshots of the Universe using various tracers of large-scale structure. This dissertation covers aspects of large-scale structure from the baryon acoustic oscillation scale to that of large-scale filaments and galaxy clusters. First, I discuss the use of large-scale structure for high-precision cosmology. I investigate the reconstruction of the Baryon Acoustic Oscillation (BAO) peak within the context of Lagrangian perturbation theory, testing its validity in a large suite of cosmological-volume N-body simulations. Then I consider galaxy clusters and the large-scale filaments surrounding them in a high-resolution N-body simulation. I investigate the geometrical properties of galaxy cluster neighborhoods, focusing on the filaments connected to clusters. Using mock observations of galaxy clusters, I explore the correlations of scatter in galaxy cluster mass estimates from multi-wavelength observations and different measurement techniques. I also examine the sources of the correlated scatter by considering the intrinsic and environmental properties of clusters.
Synchronization of coupled large-scale Boolean networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Fangfei, E-mail: li-fangfei@163.com
2014-03-15
This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm for large-scale Boolean networks is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
Reynolds number trend of hierarchies and scale interactions in turbulent boundary layers.
Baars, W J; Hutchins, N; Marusic, I
2017-03-13
Small-scale velocity fluctuations in turbulent boundary layers are often coupled with the larger-scale motions. Studying the nature and extent of this scale interaction allows for a statistically representative description of the small scales over a time scale of the larger, coherent scales. In this study, we consider temporal data from hot-wire anemometry at Reynolds numbers ranging from Re_τ ≈ 2800 to 22 800, in order to reveal how the scale interaction varies with Reynolds number. Large-scale conditional views of the representative amplitude and frequency of the small-scale turbulence, relative to the large-scale features, complement the existing consensus on large-scale modulation of the small-scale dynamics in the near-wall region. Modulation is a type of scale interaction, where the amplitude of the small-scale fluctuations is continuously proportional to the near-wall footprint of the large-scale velocity fluctuations. Aside from this amplitude modulation phenomenon, we reveal the influence of the large-scale motions on the characteristic frequency of the small scales, known as frequency modulation. From the wall-normal trends in the conditional averages of the small-scale properties, it is revealed how the near-wall modulation transitions to an intermittent-type scale arrangement in the log-region. On average, the amplitude of the small-scale velocity fluctuations only deviates from its mean value in a confined temporal domain, the duration of which is fixed in terms of the local Taylor time scale. These concentrated temporal regions are centred on the internal shear layers of the large-scale uniform momentum zones, which exhibit regions of positive and negative streamwise velocity fluctuations. With an increasing scale separation at high Reynolds numbers, this interaction pattern encompasses the features found in studies on internal shear layers and concentrated vorticity fluctuations in high-Reynolds-number wall turbulence. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).
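One common way to quantify the amplitude modulation discussed above correlates the large-scale velocity with the low-pass-filtered envelope of the small-scale residual; the sketch below follows that generic recipe (the cutoff frequency, filter order, and synthetic signal are assumptions, not the paper's actual processing).

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def amplitude_modulation_coefficient(u, fs, f_cut):
        # Correlate the large-scale velocity with the low-pass-filtered envelope of
        # the small-scale residual (one common estimator, not the paper's exact one).
        b, a = butter(4, f_cut / (fs / 2), btype="low")
        u = u - np.mean(u)
        u_large = filtfilt(b, a, u)                        # large-scale component
        u_small = u - u_large                              # small-scale component
        envelope = np.abs(hilbert(u_small))                # small-scale envelope
        env_large = filtfilt(b, a, envelope - np.mean(envelope))
        return np.corrcoef(u_large, env_large)[0, 1]

    fs = 10000.0                                           # assumed sampling rate [Hz]
    t = np.arange(0, 5, 1 / fs)
    large = np.sin(2 * np.pi * 2 * t)                      # synthetic large-scale signal
    small = (1 + 0.5 * large) * 0.1 * np.random.randn(t.size)  # modulated small scales
    print(amplitude_modulation_coefficient(large + small, fs, f_cut=10.0))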
He, Hui; Fan, Guotao; Ye, Jianwei; Zhang, Weizhe
2013-01-01
It is of great significance to research early warning systems for large-scale network security incidents. Such a system can improve the network's emergency response capabilities, alleviate the damage of cyber attacks, and strengthen the system's counterattack ability. A comprehensive early warning system is presented in this paper, which combines active measurement and anomaly detection. The key visualization algorithms and technology of the system are mainly discussed. Plane visualization of the large-scale network system is realized based on a divide-and-conquer approach. First, the topology of the large-scale network is divided into several small-scale networks by the MLkP/CR algorithm. Second, a subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks' topologies are combined into a single topology by an automatic placement algorithm based on force analysis. As the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale network topology visualization and placement problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topologies.
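A minimal sketch of the divide-and-conquer layout pipeline, using networkx with greedy modularity communities as a stand-in for the paper's MLkP/CR partitioner and a coarse grid in place of the force-analysis placement (all parameter choices are illustrative).

    import networkx as nx
    from networkx.algorithms import community

    def hierarchical_layout(G):
        # Divide and conquer: partition the graph, lay out each part separately,
        # then offset the parts on a coarse grid. Greedy modularity communities are
        # a stand-in for the MLkP/CR partitioner; the grid replaces force analysis.
        parts = list(community.greedy_modularity_communities(G))
        pos = {}
        for i, nodes in enumerate(parts):
            sub_pos = nx.spring_layout(G.subgraph(nodes), seed=1)
            dx, dy = 3.0 * (i % 4), 3.0 * (i // 4)
            for node, (x, y) in sub_pos.items():
                pos[node] = (x + dx, y + dy)
        return pos

    G = nx.random_geometric_graph(200, 0.12, seed=2)       # placeholder "network topology"
    print(len(hierarchical_layout(G)), "nodes placed")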
A Life-Cycle Model of Human Social Groups Produces a U-Shaped Distribution in Group Size.
Salali, Gul Deniz; Whitehouse, Harvey; Hochberg, Michael E
2015-01-01
One of the central puzzles in the study of sociocultural evolution is how and why transitions from small-scale human groups to large-scale, hierarchically more complex ones occurred. Here we develop a spatially explicit agent-based model as a first step towards understanding the ecological dynamics of small and large-scale human groups. By analogy with the interactions between single-celled and multicellular organisms, we build a theory of group lifecycles as an emergent property of single cell demographic and expansion behaviours. We find that once the transition from small-scale to large-scale groups occurs, a few large-scale groups continue expanding while small-scale groups gradually become scarcer, and large-scale groups become larger in size and fewer in number over time. Demographic and expansion behaviours of groups are largely influenced by the distribution and availability of resources. Our results conform to a pattern of human political change in which religions and nation states come to be represented by a few large units and many smaller ones. Future enhancements of the model should include decision-making rules and probabilities of fragmentation for large-scale societies. We suggest that the synthesis of population ecology and social evolution will generate increasingly plausible models of human group dynamics.
A Life-Cycle Model of Human Social Groups Produces a U-Shaped Distribution in Group Size
Salali, Gul Deniz; Whitehouse, Harvey; Hochberg, Michael E.
2015-01-01
One of the central puzzles in the study of sociocultural evolution is how and why transitions from small-scale human groups to large-scale, hierarchically more complex ones occurred. Here we develop a spatially explicit agent-based model as a first step towards understanding the ecological dynamics of small and large-scale human groups. By analogy with the interactions between single-celled and multicellular organisms, we build a theory of group lifecycles as an emergent property of single cell demographic and expansion behaviours. We find that once the transition from small-scale to large-scale groups occurs, a few large-scale groups continue expanding while small-scale groups gradually become scarcer, and large-scale groups become larger in size and fewer in number over time. Demographic and expansion behaviours of groups are largely influenced by the distribution and availability of resources. Our results conform to a pattern of human political change in which religions and nation states come to be represented by a few large units and many smaller ones. Future enhancements of the model should include decision-making rules and probabilities of fragmentation for large-scale societies. We suggest that the synthesis of population ecology and social evolution will generate increasingly plausible models of human group dynamics. PMID:26381745
Large-Scale 1:1 Computing Initiatives: An Open Access Database
ERIC Educational Resources Information Center
Richardson, Jayson W.; McLeod, Scott; Flora, Kevin; Sauers, Nick J.; Kannan, Sathiamoorthy; Sincar, Mehmet
2013-01-01
This article details the spread and scope of large-scale 1:1 computing initiatives around the world. What follows is a review of the existing literature around 1:1 programs followed by a description of the large-scale 1:1 database. Main findings include: 1) the XO and the Classmate PC dominate large-scale 1:1 initiatives; 2) if professional…
Contractual Duration and Investment Incentives: Evidence from Large Scale Production Units in China
NASA Astrophysics Data System (ADS)
Li, Fang; Feng, Shuyi; D'Haese, Marijke; Lu, Hualiang; Qu, Futian
2017-04-01
Large Scale Production Units have become important forces in the supply of agricultural commodities and agricultural modernization in China. Contractual duration in farmland transfer to Large Scale Production Units can be considered to reflect land tenure security. Theoretically, long-term tenancy contracts can encourage Large Scale Production Units to increase long-term investments by ensuring land rights stability or favoring access to credit. Using a unique Large Scale Production Unit- and plot-level field survey dataset from Jiangsu and Jiangxi Provinces, this study examines the effect of contractual duration on Large Scale Production Units' soil conservation behaviours. An instrumental variable (IV) method is applied to take into account the endogeneity of contractual duration and unobserved household heterogeneity. Results indicate that farmland transfer contract duration significantly and positively affects land-improving investments. Policies aimed at improving transaction platforms and intermediary organizations in farmland transfer, so as to facilitate Large Scale Production Units' access to farmland under long-term tenancy contracts, may therefore play an important role in improving soil quality and land productivity.
Large-scale Labeled Datasets to Fuel Earth Science Deep Learning Applications
NASA Astrophysics Data System (ADS)
Maskey, M.; Ramachandran, R.; Miller, J.
2017-12-01
Deep learning has revolutionized computer vision and natural language processing with various algorithms scaled using high-performance computing. However, generic large-scale labeled datasets such as the ImageNet are the fuel that drives the impressive accuracy of deep learning results. Large-scale labeled datasets already exist in domains such as medical science, but creating them in the Earth science domain is a challenge. While there are ways to apply deep learning using limited labeled datasets, there is a need in the Earth sciences for creating large-scale labeled datasets for benchmarking and scaling deep learning applications. At the NASA Marshall Space Flight Center, we are using deep learning for a variety of Earth science applications where we have encountered the need for large-scale labeled datasets. We will discuss our approaches for creating such datasets and why these datasets are just as valuable as deep learning algorithms. We will also describe successful usage of these large-scale labeled datasets with our deep learning based applications.
Large-scale anisotropy of the cosmic microwave background radiation
NASA Technical Reports Server (NTRS)
Silk, J.; Wilson, M. L.
1981-01-01
Inhomogeneities in the large-scale distribution of matter inevitably lead to the generation of large-scale anisotropy in the cosmic background radiation. The dipole, quadrupole, and higher order fluctuations expected in an Einstein-de Sitter cosmological model have been computed. The dipole and quadrupole anisotropies are comparable to the measured values, and impose important constraints on the allowable spectrum of large-scale matter density fluctuations. A significant dipole anisotropy is generated by the matter distribution on scales greater than approximately 100 Mpc. The large-scale anisotropy is insensitive to the ionization history of the universe since decoupling, and cannot easily be reconciled with a galaxy formation theory that is based on primordial adiabatic density fluctuations.
Effects of biasing on the galaxy power spectrum at large scales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beltran Jimenez, Jose; Departamento de Fisica Teorica, Universidad Complutense de Madrid, 28040, Madrid; Durrer, Ruth
2011-05-15
In this paper we study the effect of biasing on the power spectrum at large scales. We show that even though nonlinear biasing does introduce a white noise contribution on large scales, the P(k) ∝ k^n behavior of the matter power spectrum on large scales may still be visible and above the white noise for about one decade. We show that the Kaiser biasing scheme, which leads to linear bias of the correlation function on large scales, also generates a linear bias of the power spectrum on rather small scales. This is a consequence of the divergence on small scales of the pure Harrison-Zeldovich spectrum. However, biasing becomes k dependent if we damp the underlying power spectrum on small scales. We also discuss the effect of biasing on the baryon acoustic oscillations.
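Schematically (symbols assumed here), the statement about white noise from nonlinear biasing can be summarized as

    P_g(k) \simeq b^2\, P_m(k) + N_{\mathrm{wn}}, \qquad P_m(k) \propto k^{\,n} \quad (k \to 0),

so the k^n behavior of the matter power spectrum remains visible on large scales only while b^2 P_m(k) \gtrsim N_{\mathrm{wn}}, which the paper argues holds for roughly one decade in k.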
Large- and small-scale constraints on power spectra in Omega = 1 universes
NASA Technical Reports Server (NTRS)
Gelb, James M.; Gradwohl, Ben-Ami; Frieman, Joshua A.
1993-01-01
The CDM model of structure formation, normalized on large scales, leads to excessive pairwise velocity dispersions on small scales. In an attempt to circumvent this problem, we study three scenarios (all with Omega = 1) with more large-scale and less small-scale power than the standard CDM model: (1) cold dark matter with significantly reduced small-scale power (inspired by models with an admixture of cold and hot dark matter); (2) cold dark matter with a non-scale-invariant power spectrum; and (3) cold dark matter with coupling of dark matter to a long-range vector field. When normalized to COBE on large scales, such models do lead to reduced velocities on small scales and they produce fewer halos compared with CDM. However, models with sufficiently low small-scale velocities apparently fail to produce an adequate number of halos.
Large-scale dynamos in rapidly rotating plane layer convection
NASA Astrophysics Data System (ADS)
Bushby, P. J.; Käpylä, P. J.; Masada, Y.; Brandenburg, A.; Favier, B.; Guervilly, C.; Käpylä, M. J.
2018-05-01
Context. Convectively driven flows play a crucial role in the dynamo processes that are responsible for producing magnetic activity in stars and planets. It is still not fully understood why many astrophysical magnetic fields have a significant large-scale component. Aims: Our aim is to investigate the dynamo properties of compressible convection in a rapidly rotating Cartesian domain, focusing upon a parameter regime in which the underlying hydrodynamic flow is known to be unstable to a large-scale vortex instability. Methods: The governing equations of three-dimensional non-linear magnetohydrodynamics (MHD) are solved numerically. Different numerical schemes are compared and we propose a possible benchmark case for other similar codes. Results: In keeping with previous related studies, we find that convection in this parameter regime can drive a large-scale dynamo. The components of the mean horizontal magnetic field oscillate, leading to a continuous overall rotation of the mean field. Whilst the large-scale vortex instability dominates the early evolution of the system, the large-scale vortex is suppressed by the magnetic field and makes a negligible contribution to the mean electromotive force that is responsible for driving the large-scale dynamo. The cycle period of the dynamo is comparable to the ohmic decay time, with longer cycles for dynamos in convective systems that are closer to onset. In these particular simulations, large-scale dynamo action is found only when vertical magnetic field boundary conditions are adopted at the upper and lower boundaries. Strongly modulated large-scale dynamos are found at higher Rayleigh numbers, with periods of reduced activity (grand minima-like events) occurring during transient phases in which the large-scale vortex temporarily re-establishes itself, before being suppressed again by the magnetic field.
Large-Scale 3D Printing: The Way Forward
NASA Astrophysics Data System (ADS)
Jassmi, Hamad Al; Najjar, Fady Al; Ismail Mourad, Abdel-Hamid
2018-03-01
Research on small-scale 3D printing has rapidly evolved, where numerous industrial products have been tested and successfully applied. Nonetheless, research on large-scale 3D printing, directed to large-scale applications such as construction and automotive manufacturing, still demands a great deal of effort. Large-scale 3D printing is considered an interdisciplinary topic and requires establishing a blended knowledge base from numerous research fields including structural engineering, materials science, mechatronics, software engineering, artificial intelligence and architectural engineering. This review article summarizes key topics of relevance to new research trends on large-scale 3D printing, particularly pertaining to (1) technological solutions of additive construction (i.e. the 3D printers themselves), (2) materials science challenges, and (3) new design opportunities.
Gary M. Tabor; Anne Carlson; Travis Belote
2014-01-01
The Yellowstone to Yukon Conservation Initiative (Y2Y) was established over 20 years ago as an experiment in large landscape conservation. Initially, Y2Y emerged as a response to large scale habitat fragmentation by advancing ecological connectivity. It also laid the foundation for large scale multi-stakeholder conservation collaboration with almost 200 non-...
Ensemble Kalman filters for dynamical systems with unresolved turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grooms, Ian, E-mail: grooms@cims.nyu.edu; Lee, Yoonsang; Majda, Andrew J.
Ensemble Kalman filters are developed for turbulent dynamical systems where the forecast model does not resolve all the active scales of motion. Coarse-resolution models are intended to predict the large-scale part of the true dynamics, but observations invariably include contributions from both the resolved large scales and the unresolved small scales. The error due to the contribution of unresolved scales to the observations, called 'representation' or 'representativeness' error, is often included as part of the observation error, in addition to the raw measurement error, when estimating the large-scale part of the system. It is here shown how stochastic superparameterization (a multiscale method for subgridscale parameterization) can be used to provide estimates of the statistics of the unresolved scales. In addition, a new framework is developed wherein small-scale statistics can be used to estimate both the resolved and unresolved components of the solution. The one-dimensional test problem from dispersive wave turbulence used here is computationally tractable yet is particularly difficult for filtering because of the non-Gaussian extreme event statistics and substantial small-scale turbulence: a shallow energy spectrum proportional to k^{-5/6} (where k is the wavenumber) results in two-thirds of the climatological variance being carried by the unresolved small scales. Because the unresolved scales contain so much energy, filters that ignore the representation error fail utterly to provide meaningful estimates of the system state. Inclusion of a time-independent climatological estimate of the representation error in a standard framework leads to inaccurate estimates of the large-scale part of the signal; accurate estimates of the large scales are only achieved by using stochastic superparameterization to provide evolving, large-scale dependent predictions of the small-scale statistics. Again, because the unresolved scales contain so much energy, even an accurate estimate of the large-scale part of the system does not provide an accurate estimate of the true state. By providing simultaneous estimates of both the large- and small-scale parts of the solution, the new framework is able to provide accurate estimates of the true system state.
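A minimal stochastic ensemble Kalman filter update illustrating where a representation-error covariance enters alongside the instrument error; this is a generic sketch with placeholder matrices, whereas the paper's framework predicts the representation error on-line with stochastic superparameterization rather than using a fixed matrix.

    import numpy as np

    def enkf_update(X, y, H, R_instr, R_repr):
        # Stochastic EnKF analysis step; the observation-error covariance is the sum
        # of instrument error and an estimate of representation error.
        n, N = X.shape
        R = R_instr + R_repr
        A = X - X.mean(axis=1, keepdims=True)
        HX = H @ X
        HA = HX - HX.mean(axis=1, keepdims=True)
        P_yy = HA @ HA.T / (N - 1) + R
        K = (A @ HA.T / (N - 1)) @ np.linalg.inv(P_yy)
        Y = y[:, None] + np.random.multivariate_normal(np.zeros(len(y)), R, size=N).T
        return X + K @ (Y - HX)

    rng = np.random.default_rng(0)
    X = rng.standard_normal((10, 20))                      # 10-dim state, 20 members
    H = np.zeros((3, 10)); H[0, 0] = H[1, 4] = H[2, 9] = 1.0
    Xa = enkf_update(X, np.array([0.5, -0.2, 1.0]), H,
                     R_instr=0.01 * np.eye(3), R_repr=0.1 * np.eye(3))
    print(Xa.shape)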
Large-Scale Coronal Heating from the Solar Magnetic Network
NASA Technical Reports Server (NTRS)
Falconer, David A.; Moore, Ronald L.; Porter, Jason G.; Hathaway, David H.
1999-01-01
In Fe XII images from SOHO/EIT, the quiet solar corona shows structure on scales ranging from sub-supergranular (i.e., bright points and coronal network) to multi-supergranular. In Falconer et al. 1998 (Ap.J., 501, 386) we suppressed the large-scale background and found that the network-scale features are predominantly rooted in the magnetic network lanes at the boundaries of the supergranules. The emission of the coronal network and bright points contributes only about 5% of the entire quiet solar coronal Fe XII emission. Here we investigate the large-scale corona, the supergranular and larger-scale structure that we had previously treated as a background, and that emits 95% of the total Fe XII emission. We compare the dim and bright halves of the large-scale corona and find that the bright half is 1.5 times brighter than the dim half, has an order of magnitude greater area of bright point coverage, has three times brighter coronal network, and has about 1.5 times more magnetic flux than the dim half. These results suggest that the brightness of the large-scale corona is more closely related to the large-scale total magnetic flux than to bright point activity. We conclude that in the quiet sun: (1) Magnetic flux is modulated (concentrated/diluted) on size scales larger than supergranules. (2) The large-scale enhanced magnetic flux gives an enhanced, more active, magnetic network and an increased incidence of network bright point formation. (3) The heating of the large-scale corona is dominated by more widespread, but weaker, network activity than that which heats the bright points. This work was funded by the Solar Physics Branch of NASA's Office of Space Science through the SR&T Program and the SEC Guest Investigator Program.
A Functional Model for Management of Large Scale Assessments.
ERIC Educational Resources Information Center
Banta, Trudy W.; And Others
This functional model for managing large-scale program evaluations was developed and validated in connection with the assessment of Tennessee's Nutrition Education and Training Program. Management of such a large-scale assessment requires the development of a structure for the organization; distribution and recovery of large quantities of…
Connecting the large- and the small-scale magnetic fields of solar-like stars
NASA Astrophysics Data System (ADS)
Lehmann, L. T.; Jardine, M. M.; Mackay, D. H.; Vidotto, A. A.
2018-05-01
A key question in understanding the observed magnetic field topologies of cool stars is the link between the small- and the large-scale magnetic field and the influence of the stellar parameters on the magnetic field topology. We examine various simulated stars to connect the small-scale with the observable large-scale field. The highly resolved 3D simulations we used couple a flux transport model with a non-potential coronal model using a magnetofrictional technique. The surface magnetic field of these simulations is decomposed into spherical harmonics which enables us to analyse the magnetic field topologies on a wide range of length scales and to filter the large-scale magnetic field for a direct comparison with the observations. We show that the large-scale field of the self-consistent simulations fits the observed solar-like stars and is mainly set up by the global dipolar field and the large-scale properties of the flux pattern, e.g. the averaged latitudinal position of the emerging small-scale field and its global polarity pattern. The stellar parameters flux emergence rate, differential rotation and meridional flow affect the large-scale magnetic field topology. An increased flux emergence rate increases the magnetic flux in all field components and an increased differential rotation increases the toroidal field fraction by decreasing the poloidal field. The meridional flow affects the distribution of the magnetic energy across the spherical harmonic modes.
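A toy illustration of the filtering step described above, keeping only low spherical-harmonic degrees of a mock coefficient set; the coefficient layout and spectrum are invented for illustration, and real work would use the simulation's actual decomposition.

    import numpy as np

    def energy_fraction(B_lm, l_max_keep):
        # Fraction of magnetic energy in spherical-harmonic modes with l <= l_max_keep,
        # i.e. the part of the surface field that survives the large-scale filter.
        total = sum(abs(c) ** 2 for c in B_lm.values())
        kept = sum(abs(c) ** 2 for (l, m), c in B_lm.items() if l <= l_max_keep)
        return kept / total

    rng = np.random.default_rng(1)
    B_lm = {(l, m): rng.standard_normal() / (l + 1)        # mock spectrum, illustrative only
            for l in range(1, 30) for m in range(-l, l + 1)}
    large_scale = {lm: (c if lm[0] <= 5 else 0.0) for lm, c in B_lm.items()}
    print(energy_fraction(B_lm, l_max_keep=5))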
Large-Angular-Scale Clustering as a Clue to the Source of UHECRs
NASA Astrophysics Data System (ADS)
Berlind, Andreas A.; Farrar, Glennys R.
We explore what can be learned about the sources of UHECRs from their large-angular-scale clustering (referred to as their "bias" by the cosmology community). Exploiting the clustering on large scales has the advantage over small-scale correlations of being insensitive to uncertainties in source direction from magnetic smearing or measurement error. In a Cold Dark Matter cosmology, the amplitude of large-scale clustering depends on the mass of the system, with more massive systems such as galaxy clusters clustering more strongly than less massive systems such as ordinary galaxies or AGN. Therefore, studying the large-scale clustering of UHECRs can help determine a mass scale for their sources, given the assumption that their redshift depth is as expected from the GZK cutoff. We investigate the constraining power of a given UHECR sample as a function of its cutoff energy and number of events. We show that current and future samples should be able to distinguish between the cases of their sources being galaxy clusters, ordinary galaxies, or sources that are uncorrelated with the large-scale structure of the universe.
Generation of large-scale magnetic fields by small-scale dynamo in shear flows
NASA Astrophysics Data System (ADS)
Squire, Jonathan; Bhattacharjee, Amitava
2015-11-01
A new mechanism for turbulent mean-field dynamo is proposed, in which the magnetic fluctuations resulting from a small-scale dynamo drive the generation of large-scale magnetic fields. This is in stark contrast to the common idea that small-scale magnetic fields should be harmful to large-scale dynamo action. These dynamos occur in the presence of large-scale velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the ``shear-current'' effect. The dynamo is studied using a variety of computational and analytic techniques, both when the magnetic fluctuations arise self-consistently through the small-scale dynamo and in lower Reynolds number regimes. Given the inevitable existence of non-helical small-scale magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help to explain generation of large-scale magnetic fields across a wide range of astrophysical objects. This work was supported by a Procter Fellowship at Princeton University, and the US Department of Energy Grant DE-AC02-09-CH11466.
NASA Astrophysics Data System (ADS)
Tang, Zhanqi; Jiang, Nan
2018-05-01
This study reports modifications of scale interaction and arrangement in a turbulent boundary layer perturbed by a wall-mounted circular cylinder. Hot-wire measurements were performed at multiple streamwise and wall-normal locations downstream of the cylindrical element. The streamwise fluctuating signals were decomposed into large-, small-, and dissipative-scale signatures by corresponding cutoff filters. The scale interaction under the cylindrical perturbation was elaborated by comparing the small- and dissipative-scale amplitude/frequency modulation effects downstream of the cylinder element with the results observed in the unperturbed case. It was found that the large-scale fluctuations exert a stronger amplitude modulation on both the small and dissipative scales in the near-wall region. At wall-normal positions around the cylinder height, the small-scale amplitude modulation coefficients are redistributed by the cylinder wake. A similar observation was made for small-scale frequency modulation; however, the dissipative-scale frequency modulation seems to be independent of the cylindrical perturbation. The phase-relationship observations indicated that the cylindrical perturbation shortens the time shifts between both the small- and dissipative-scale variations (amplitude and frequency) and the large-scale fluctuations. The dependence of this phase relationship on the integral time scale was also discussed. Furthermore, the discrepancy between the small- and dissipative-scale time shifts relative to the large-scale motions was examined, which indicates that the small-scale amplitude/frequency variations lead those of the dissipative scales.
Enhancing Image Processing Performance for PCID in a Heterogeneous Network of Multi-code Processors
NASA Astrophysics Data System (ADS)
Linderman, R.; Spetka, S.; Fitzgerald, D.; Emeny, S.
The Physically-Constrained Iterative Deconvolution (PCID) image deblurring code is being ported to heterogeneous networks of multi-core systems, including Intel Xeons and IBM Cell Broadband Engines. This paper reports results from experiments using the JAWS supercomputer at MHPCC (60 TFLOPS of dual-dual Xeon nodes linked with Infiniband) and the Cell Cluster at AFRL in Rome, NY. The Cell Cluster has 52 TFLOPS of Playstation 3 (PS3) nodes with IBM Cell Broadband Engine multi-cores and 15 dual-quad Xeon head nodes. The interconnect fabric includes Infiniband, 10 Gigabit Ethernet and 1 Gigabit Ethernet to each of the 336 PS3s. The results compare approaches to parallelizing FFT executions across the Xeons and the Cell's Synergistic Processing Elements (SPEs) for frame-level image processing. The experiments included Intel's Performance Primitives and Math Kernel Library, FFTW3.2, and Carnegie Mellon's SPIRAL. Optimization of FFTs in the PCID code led to a decrease in relative processing time for FFTs. Profiling PCID version 6.2, about one year ago, showed that the 13 functions accounting for the highest percentage of processing were all FFT processing functions; they accounted for over 88% of processing time in one run on Xeons. FFT optimizations led to improvement in the current PCID version 8.0. A recent profile showed that only two of the 19 functions with the highest processing time were FFT processing functions. Timing measurements showed that FFT processing for PCID version 8.0 has been reduced to less than 19% of overall processing time. We are working toward a goal of scaling to 200-400 cores per job (1-2 imagery frames/core). Running a pair of cores on each set of frames reduces latency by implementing parallel FFT processing. Our current results show scaling well out to 100 pairs of cores. These results support the next higher level of parallelism in PCID, where groups of several hundred frames, each producing one resolved image, are sent to cliques of several hundred cores in a round-robin fashion. Current efforts toward further performance enhancement for PCID are shifting toward using the Playstations in conjunction with the Xeons to take advantage of outstanding price/performance as well as the Flops/Watt cost advantage. We are fine-tuning the PCID parallelization strategy to balance processing over Xeons and Cell BEs to find an optimal partitioning of PCID over the heterogeneous processors. A high performance information management system that exploits native Infiniband multicast is used to improve latency among the head nodes. Using a publication/subscription oriented information management system to implement a unified communications platform makes runs on large HPCs with thousands of intercommunicating cores more flexible and more fault tolerant. It features a loose coupling of publishers to subscribers through intervening brokers. We are also working on enhancing performance for both Xeons and Cell BEs by moving selected operations to single precision. Techniques for adapting the code to single precision and performance results are reported.
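The frame-level parallelism described above can be sketched generically in Python with one worker per frame; this is not the PCID code, and the frequency-domain step is a placeholder.

    import numpy as np
    from multiprocessing import Pool

    def process_frame(frame):
        # Stand-in for per-frame work: forward FFT, a placeholder frequency-domain
        # operation, inverse FFT (PCID's actual deconvolution filter is not shown).
        F = np.fft.fft2(frame)
        F *= 1.0
        return np.fft.ifft2(F).real

    if __name__ == "__main__":
        frames = [np.random.rand(512, 512) for _ in range(16)]
        with Pool(processes=4) as pool:                    # one worker per frame (or pair)
            results = pool.map(process_frame, frames)
        print(len(results), results[0].shape)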
An informal paper on large-scale dynamic systems
NASA Technical Reports Server (NTRS)
Ho, Y. C.
1975-01-01
Large scale systems are defined as systems requiring more than one decision maker to control the system. Decentralized control and decomposition are discussed for large scale dynamic systems. Information and many-person decision problems are analyzed.
Large Scale Metal Additive Techniques Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nycz, Andrzej; Adediran, Adeola I; Noakes, Mark W
2016-01-01
In recent years additive manufacturing has made long strides toward becoming a mainstream production technology. Particularly strong progress has been made in large-scale polymer deposition. However, large-scale metal additive manufacturing has not yet reached parity with large-scale polymer. This paper is a review study of metal additive techniques in the context of building large structures. Current commercial devices are capable of printing metal parts on the order of several cubic feet, compared to hundreds of cubic feet for the polymer side. In order to follow the polymer progress path, several factors are considered: potential to scale, economy, environmental friendliness, material properties, feedstock availability, robustness of the process, quality and accuracy, potential for defects, and post processing, as well as potential applications. This paper focuses on the current state of the art of large-scale metal additive technology with a focus on expanding the geometric limits.
Large-scale weakly supervised object localization via latent category learning.
Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve
2015-04-01
Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image condition, objects usually have large ambiguity with backgrounds. Besides, there is also a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) for large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with a semantic object representation to learn the latent categories, which represent objects, object parts or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy that evaluates each category's discrimination. Finally, we propose online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Class (VOC) 2007 and the ImageNet Large Scale Visual Recognition Challenge 2013 detection data sets shows that the method can improve the annotation precision by 10% over previous methods. More importantly, we achieve detection precision which outperforms previous results by a large margin and is competitive with the supervised deformable part model 5.0 baseline on both data sets.
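A bare-bones sketch of the latent semantic analysis step on bag-of-visual-words histograms; the data are random placeholders, and the discrimination-based category selection and online variant from the paper are omitted.

    import numpy as np
    from sklearn.decomposition import TruncatedSVD

    rng = np.random.default_rng(0)
    # Rows: candidate image regions described by bag-of-visual-words histograms
    # (random placeholders standing in for real region descriptors).
    region_histograms = rng.poisson(2.0, size=(500, 1000)).astype(float)

    lsa = TruncatedSVD(n_components=10, random_state=0)    # 10 assumed latent categories
    latent = lsa.fit_transform(region_histograms)          # regions x latent categories

    # Assign each region to its strongest latent category; scoring each category's
    # discrimination against image-level labels (the selection step) is omitted.
    assignment = np.argmax(np.abs(latent), axis=1)
    print(np.bincount(assignment, minlength=10))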
Nonlinear modulation of the HI power spectrum on ultra-large scales. I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Umeh, Obinna; Maartens, Roy; Santos, Mario, E-mail: umeobinna@gmail.com, E-mail: roy.maartens@gmail.com, E-mail: mgrsantos@uwc.ac.za
2016-03-01
Intensity mapping of the neutral hydrogen brightness temperature promises to provide a three-dimensional view of the universe on very large scales. Nonlinear effects are typically thought to alter only the small-scale power, but we show how they may bias the extraction of cosmological information contained in the power spectrum on ultra-large scales. For linear perturbations to remain valid on large scales, we need to renormalize perturbations at higher order. In the case of intensity mapping, the second-order contribution to clustering from weak lensing dominates the nonlinear contribution at high redshift. Renormalization modifies the mean brightness temperature and therefore the evolution bias. It also introduces a term that mimics white noise. These effects may influence forecasting analysis on ultra-large scales.
Large-scale magnetic fields at high Reynolds numbers in magnetohydrodynamic simulations.
Hotta, H; Rempel, M; Yokoyama, T
2016-03-25
The 11-year solar magnetic cycle shows a high degree of coherence in spite of the turbulent nature of the solar convection zone. It has been found in recent high-resolution magnetohydrodynamics simulations that the maintenance of a large-scale coherent magnetic field is difficult with small viscosity and magnetic diffusivity (≲10^{12} cm^2 s^{-1}). We reproduced previous findings that indicate a reduction of the energy in the large-scale magnetic field for lower diffusivities and demonstrate the recovery of the global-scale magnetic field using unprecedentedly high resolution. We found an efficient small-scale dynamo that suppresses small-scale flows, which mimics the properties of large diffusivity. As a result, the global-scale magnetic field is maintained even in the regime of small diffusivities, that is, large Reynolds numbers. Copyright © 2016, American Association for the Advancement of Science.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terrana, Alexandra; Johnson, Matthew C.; Harris, Mary-Jean, E-mail: aterrana@perimeterinstitute.ca, E-mail: mharris8@perimeterinstitute.ca, E-mail: mjohnson@perimeterinstitute.ca
Due to cosmic variance we cannot learn any more about large-scale inhomogeneities from the primary cosmic microwave background (CMB) alone. More information on large scales is essential for resolving large angular scale anomalies in the CMB. Here we consider cross correlating the large-scale kinetic Sunyaev Zel'dovich (kSZ) effect and probes of large-scale structure, a technique known as kSZ tomography. The statistically anisotropic component of the cross correlation encodes the CMB dipole as seen by free electrons throughout the observable Universe, providing information about long wavelength inhomogeneities. We compute the large angular scale power asymmetry, constructing the appropriate transfer functions, and estimate the cosmic variance limited signal to noise for a variety of redshift bin configurations. The signal to noise is significant over a large range of power multipoles and numbers of bins. We present a simple mode counting argument indicating that kSZ tomography can be used to estimate more modes than the primary CMB on comparable scales. A basic forecast indicates that a first detection could be made with next-generation CMB experiments and galaxy surveys. This paper motivates a more systematic investigation of how close to the cosmic variance limit it will be possible to get with future observations.
Grid-Enabled Quantitative Analysis of Breast Cancer
2010-10-01
large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer... research, we designed a pilot study utilizing large-scale parallel Grid computing harnessing nationwide infrastructure for medical image analysis. Also...
Detection of large-scale concentric gravity waves from a Chinese airglow imager network
NASA Astrophysics Data System (ADS)
Lai, Chang; Yue, Jia; Xu, Jiyao; Yuan, Wei; Li, Qinzeng; Liu, Xiao
2018-06-01
Concentric gravity waves (CGWs) contain a broad spectrum of horizontal wavelengths and periods due to their instantaneous localized sources (e.g., deep convection, volcanic eruptions, or earthquakes). However, it is difficult to observe large-scale gravity waves of >100 km wavelength from the ground because of the limited field of view of a single camera and local bad weather. Previously, complete large-scale CGW imagery could only be captured by satellite observations. In the present study, we developed a novel method that assembles separate images and applies low-pass filtering to obtain temporal and spatial information about complete large-scale CGWs from a network of all-sky airglow imagers. Coordinated observations from five all-sky airglow imagers in Northern China were assembled and processed to study large-scale CGWs over a wide area (1800 km × 1400 km), focusing on the same two CGW events as Xu et al. (2015). Our algorithms yielded images of large-scale CGWs by filtering out the small-scale CGWs. The wavelengths, wave speeds, and periods of the CGWs were measured from a sequence of consecutive assembled images. Overall, the assembling and low-pass filtering algorithms can expand the airglow imager network to its full capacity regarding the detection of large-scale gravity waves.
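A simple stand-in for the assemble-then-low-pass-filter step, using a Gaussian filter as the low-pass stage; the cutoff-to-sigma conversion and the placeholder mosaic are assumptions, not the paper's actual algorithm.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def large_scale_component(mosaic, pixel_km, cutoff_km):
        # Gaussian low-pass filter as a simple stand-in for the paper's filter;
        # NaNs would mark pixels outside every imager's field of view.
        filled = np.nan_to_num(mosaic, nan=np.nanmean(mosaic))
        sigma_px = cutoff_km / pixel_km / (2 * np.pi)      # rough cutoff-to-sigma conversion
        return gaussian_filter(filled, sigma=sigma_px)

    mosaic = np.random.rand(700, 900)                      # placeholder assembled airglow mosaic
    print(large_scale_component(mosaic, pixel_km=2.0, cutoff_km=100.0).shape)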
NASA Astrophysics Data System (ADS)
Thorslund, Josefin; Jarsjö, Jerker; Destouni, Georgia
2017-04-01
Wetlands are often considered nature-based solutions that can provide a multitude of services of great social, economic and environmental value to humankind. The services may include recreation, greenhouse gas sequestration, contaminant retention, coastal protection, groundwater level and soil moisture regulation, flood regulation and biodiversity support. Changes in land-use, water use and climate can all impact wetland functions and occur at scales extending well beyond the local scale of an individual wetland. However, in practical applications, management decisions usually regard and focus on individual wetland sites and local conditions. To understand the potential usefulness and services of wetlands as larger-scale nature-based solutions, e.g. for mitigating negative impacts from large-scale change pressures, one needs to understand the combined function of multiple wetlands at the relevant large scales. We here systematically investigate if and to what extent research so far has addressed the large-scale dynamics of landscape systems with multiple wetlands, which are likely to be relevant for understanding impacts of regional to global change. Our investigation regards key changes and impacts of relevance for nature-based solutions, such as large-scale nutrient and pollution retention, flow regulation and coastal protection. Although such large-scale knowledge is still limited, evidence suggests that the aggregated functions and effects of multiple wetlands in the landscape can differ considerably from those observed at individual wetlands. Such scale differences may have important implications for wetland function-effect predictability and management under large-scale change pressures and impacts, such as those of climate change.
Information Tailoring Enhancements for Large-Scale Social Data
2016-06-15
Progress Report No. 3 from Intelligent Automation Incorporated for the project "Information Tailoring Enhancements for Large-Scale Social Data". The report's table of contents lists work performed within this reporting period, including enhanced Named Entity Recognition (NER).
Current Scientific Issues in Large Scale Atmospheric Dynamics
NASA Technical Reports Server (NTRS)
Miller, T. L. (Compiler)
1986-01-01
Topics in large scale atmospheric dynamics are discussed. Aspects of atmospheric blocking, the influence of transient baroclinic eddies on planetary-scale waves, cyclogenesis, the effects of orography on planetary scale flow, small scale frontal structure, and simulations of gravity waves in frontal zones are discussed.
Flexible Unicast-Based Group Communication for CoAP-Enabled Devices †
Ishaq, Isam; Hoebeke, Jeroen; Van den Abeele, Floris; Rossey, Jen; Moerman, Ingrid; Demeester, Piet
2014-01-01
Smart embedded objects will become an important part of what is called the Internet of Things. Applications often require concurrent interactions with several of these objects and their resources. Existing solutions have several limitations in terms of reliability, flexibility and manageability of such groups of objects. To overcome these limitations we propose an intermediate level of intelligence to easily manipulate a group of resources across multiple smart objects, building upon the Constrained Application Protocol (CoAP). We describe the design of our solution to create and manipulate a group of CoAP resources using a single client request. Furthermore, we introduce the concept of profiles for the created groups. The use of profiles allows the client to specify in more detail how the group should behave. We have implemented our solution and demonstrate that it covers the complete group life-cycle, i.e., creation, validation, flexible usage and deletion. Finally, we quantitatively analyze the performance of our solution and compare it against multicast-based CoAP group communication. The results show that our solution improves reliability and flexibility with a trade-off in increased communication overhead. PMID:24901978
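A sketch of the underlying idea of unicast-based group communication, fanning one logical request out to individual CoAP members and aggregating the replies, written with the aiocoap client API; the member URIs are hypothetical, and this omits the paper's group entity management, validation, and profiles.

    import asyncio
    from aiocoap import Context, Message, GET

    # Hypothetical member resources of one group entity; the proxy fans a single
    # client request out over plain unicast CoAP and aggregates the replies.
    MEMBERS = [
        "coap://[2001:db8::1]/sensors/temperature",
        "coap://[2001:db8::2]/sensors/temperature",
    ]

    async def query_group(uris):
        ctx = await Context.create_client_context()
        async def one(uri):
            response = await ctx.request(Message(code=GET, uri=uri)).response
            return uri, response.payload.decode()
        return await asyncio.gather(*(one(u) for u in uris))

    if __name__ == "__main__":
        for uri, value in asyncio.run(query_group(MEMBERS)):
            print(uri, value)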
The DAQ system for the AEḡIS experiment
NASA Astrophysics Data System (ADS)
Prelz, F.; Aghion, S.; Amsler, C.; Ariga, T.; Bonomi, G.; Brusa, R. S.; Caccia, M.; Caravita, R.; Castelli, F.; Cerchiari, G.; Comparat, D.; Consolati, G.; Demetrio, A.; Di Noto, L.; Doser, M.; Ereditato, A.; Evans, C.; Ferragut, R.; Fesel, J.; Fontana, A.; Gerber, S.; Giammarchi, M.; Gligorova, A.; Guatieri, F.; Haider, S.; Hinterberger, A.; Holmestad, H.; Kellerbauer, A.; Krasnický, D.; Lagomarsino, V.; Lansonneur, P.; Lebrun, P.; Malbrunot, C.; Mariazzi, S.; Matveev, V.; Mazzotta, Z.; Müller, S. R.; Nebbia, G.; Nedelec, P.; Oberthaler, M.; Pacifico, N.; Pagano, D.; Penasa, L.; Petracek, V.; Prevedelli, M.; Ravelli, L.; Rienaecker, B.; Robert, J.; Røhne, O. M.; Rotondi, A.; Sacerdoti, M.; Sandaker, H.; Santoro, R.; Scampoli, P.; Simon, M.; Smestad, L.; Sorrentino, F.; Testera, G.; Tietje, I. C.; Widmann, E.; Yzombard, P.; Zimmer, C.; Zmeskal, J.; Zurlo, N.
2017-10-01
In the sociology of small- to mid-sized (O(100) collaborators) experiments, the issue of data collection and storage is sometimes felt to be a residual problem for which well-established solutions are known. Still, the DAQ system can be one of the few forces that drive towards the integration of otherwise loosely coupled detector systems. As such, it may be hard to complete with off-the-shelf components only. LabVIEW and ROOT are the (only) two software systems that were assumed to be familiar enough to all collaborators of the AEḡIS (AD6) experiment at CERN: working from the GXML representation of LabVIEW data types, a semantically equivalent representation as ROOT TTrees was developed for permanent storage and analysis. All data in the experiment is cast into this common format and can be produced and consumed on both systems and transferred over TCP and/or multicast over UDP for immediate sharing over the experiment LAN. We describe the setup that has been able to cater to all run data logging and long term monitoring needs of the AEḡIS experiment so far.
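The LAN data-sharing path mentioned above can be illustrated with the standard Python recipe for UDP multicast; the group address and port are examples, not the experiment's actual configuration.

    import socket
    import struct

    GROUP, PORT = "239.0.0.1", 5007        # example group/port, not the experiment's own

    def send(payload: bytes):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)   # keep it on the LAN
        s.sendto(payload, (GROUP, PORT))

    def receive():
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", PORT))
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        return s.recvfrom(65535)           # (data, sender address)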
Innovative Networking Concepts Tested on the Advanced Communications Technology Satellite
NASA Technical Reports Server (NTRS)
Friedman, Daniel; Gupta, Sonjai; Zhang, Chuanguo; Ephremides, Anthony
1996-01-01
This paper describes a program of experiments conducted over the Advanced Communications Technology Satellite (ACTS) and the associated T1-VSAT (very small aperture terminal). The experiments were motivated by the commercial potential of low-cost receive-only satellite terminals that can operate in a hybrid network environment, and by the desire to demonstrate frame relay technology over satellite networks. The first experiment tested highly adaptive methods of satellite bandwidth allocation in an integrated voice-data service environment. The second involved a comparison of forward error correction (FEC) and automatic repeat request (ARQ) methods of error control for satellite communication, with emphasis on the advantage that a hybrid architecture provides, especially in the case of multicasts. Finally, the third experiment demonstrated hybrid access to databases and compared the performance of internetworking protocols for interconnecting local area networks (LANs) via satellite. A custom unit termed the frame relay access switch (FRACS) was developed by COMSAT Laboratories for these experiments; the preparation and conduct of these experiments involved a total of 20 people from the University of Maryland, the University of Colorado and COMSAT Laboratories, from late 1992 until 1995.
NADIR: A Flexible Archiving System Current Development
NASA Astrophysics Data System (ADS)
Knapic, C.; De Marco, M.; Smareglia, R.; Molinaro, M.
2014-05-01
The New Archiving Distributed InfrastructuRe (NADIR) is under development at the Italian center for Astronomical Archives (IA2) to increase the performance of the current archival software tools at the data center. Traditional software packages usually offer simple and robust solutions for data archiving and distribution but are awkward to adapt and reuse in projects that have different purposes. Data evolution in terms of data model, format, publication policy, version, and metadata content are the main threats to reuse. NADIR, using stable and mature framework features, answers these very challenging issues. Its main characteristics are a configuration database, a multithreading and multi-language environment (C++, Java, Python), special features to guarantee high scalability, modularity, robustness, error tracking, and tools to monitor with confidence the status of each project at each archiving site. In this contribution, the development of the core components is presented, commenting also on some performance and innovative features (multicast and publisher-subscriber paradigms). NADIR is planned to be developed as simply as possible, with default configurations for every project, first of all for LBT and other IA2 projects.
Next-Generation WDM Network Design and Routing
NASA Astrophysics Data System (ADS)
Tsang, Danny H. K.; Bensaou, Brahim
2003-08-01
Call for Papers The Editors of JON are soliciting papers on WDM Network Design and Routing. The aim in this focus issue is to publish original research on topics including - but not limited to - the following: - WDM network architectures and protocols - GMPLS network architectures - Wavelength converter placement in WDM networks - Routing and wavelength assignment (RWA) in WDM networks - Protection and restoration strategies and algorithms in WDM networks - Traffic grooming in WDM networks - Dynamic routing strategies and algorithms - Optical Burst Switching - Support of Multicast - Protection and restoration in WDM networks - Performance analysis and optimization in WDM networks Manuscript Submission To submit to this special issue, follow the normal procedure for submission to JON, indicating "WDM Network Design" in the "Comments" field of the online submission form. For all other questions relating to this focus issue, please send an e-mail to jon@osa.org, subject line "WDM Network Design." Additional information can be found on the JON website: http://www.osa-jon.org/submission/. Schedule Paper Submission Deadline: November 1, 2003 Notification to Authors: January 15, 2004 Final Manuscripts to Publisher: February 15, 2004 Publication of Focus Issue: February/March 2004
Parallel and Distributed Methods for Constrained Nonconvex Optimization—Part I: Theory
NASA Astrophysics Data System (ADS)
Scutari, Gesualdo; Facchinei, Francisco; Lampariello, Lorenzo
2017-04-01
In Part I of this paper, we proposed and analyzed a novel algorithmic framework for the minimization of a nonconvex (smooth) objective function, subject to nonconvex constraints, based on inner convex approximations. This Part II is devoted to the application of the framework to some resource allocation problems in communication networks. In particular, we consider two non-trivial case-study applications, namely: (generalizations of) i) the rate profile maximization in MIMO interference broadcast networks; and ii) the max-min fair multicast multigroup beamforming problem in a multi-cell environment. We develop a new class of algorithms enjoying the following distinctive features: i) they are distributed across the base stations (with limited signaling) and lead to subproblems whose solutions are computable in closed form; and ii) differently from current relaxation-based schemes (e.g., semidefinite relaxation), they are proved to always converge to d-stationary solutions of the aforementioned class of nonconvex problems. Numerical results show that the proposed (distributed) schemes achieve larger worst-case rates (resp. signal-to-noise interference ratios) than state-of-the-art centralized ones while having comparable computational complexity.
Automation Hooks Architecture Trade Study for Flexible Test Orchestration
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; Maclean, John R.; Graffagnino, Frank J.; McCartney, Patrick A.
2010-01-01
We describe the conclusions of a technology and communities survey, supported by concurrent and follow-on proof-of-concept prototyping, to evaluate the feasibility of defining a durable, versatile, reliable, visible software interface to support strategic modularization of test software development. The objective is that test sets and support software with diverse origins, ages, and abilities can be reliably integrated into test configurations that assemble, tear down, and reassemble with scalable complexity in order to conduct both parametric tests and monitored trial runs. The resulting approach is based on the integration of three recognized technologies that are currently gaining acceptance within the test industry and, when combined, provide a simple, open, and scalable test orchestration architecture that addresses the objectives of the Automation Hooks task. The technologies are automated discovery using multicast DNS Zero Configuration Networking (zeroconf), commanding and data retrieval using resource-oriented RESTful web services, and XML data transfer formats based on Automatic Test Markup Language (ATML). This open-source, standards-based approach provides direct integration with existing commercial off-the-shelf (COTS) analysis software tools.
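A minimal sketch, in Python, of the discover-then-query pattern described above, using the python-zeroconf and requests libraries. The service type and the "/measurements" endpoint are hypothetical placeholders, not part of the Automation Hooks specification, and hostname resolution of ".local" names is assumed to be handled by the operating system.

    # Sketch: discover test nodes via multicast DNS (zeroconf), then retrieve
    # data from each one over a resource-oriented REST interface.
    import requests
    from zeroconf import Zeroconf, ServiceBrowser

    SERVICE_TYPE = "_http._tcp.local."   # assumed service type for test nodes

    class TestNodeListener:
        def add_service(self, zc, type_, name):
            info = zc.get_service_info(type_, name)
            if info is None:
                return
            host = info.server.rstrip(".") if info.server else name
            # Hypothetical resource exposing the latest measurement set (e.g. ATML/XML).
            url = f"http://{host}:{info.port}/measurements"
            print(f"discovered {name}; querying {url}")
            response = requests.get(url, timeout=5)
            print(response.status_code, response.headers.get("Content-Type"))

        def remove_service(self, zc, type_, name):
            print(f"lost {name}")

        def update_service(self, zc, type_, name):
            pass

    zc = Zeroconf()
    browser = ServiceBrowser(zc, SERVICE_TYPE, TestNodeListener())
    input("Browsing for test nodes; press Enter to stop.\n")
    zc.close()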
Secure distribution for high resolution remote sensing images
NASA Astrophysics Data System (ADS)
Liu, Jin; Sun, Jing; Xu, Zheng Q.
2010-09-01
The use of remote sensing images collected by space platforms is becoming more and more widespread. The increasing value of space data and its use in critical scenarios call for the adoption of proper security measures to protect these data against unauthorized access and fraudulent use. In this paper, based on the characteristics of remote sensing image data and the application requirements for secure distribution, a secure distribution method is proposed, comprising user and region classification, hierarchical control and key generation, and multi-level region-based encryption. Combining the three parts, the same multi-level-encrypted remote sensing image can be distributed to users with different permissions through multicast, while each user recovers only the degree of information allowed by his or her own decryption keys. This meets user access control and security needs in the distribution of high resolution remote sensing images. The experimental results prove the effectiveness of the proposed method, which is suitable for practical use in the secure transmission of remote sensing images containing confidential information over the Internet.
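A minimal sketch of one way hierarchical key generation of the kind described above can work; this hash-chain construction is an illustrative stand-in, not the paper's actual key-generation algorithm.

    # Sketch: hierarchical key generation via a hash chain, so that a user
    # holding the key for permission level n can derive the keys for all
    # lower levels (n+1, n+2, ...) but not for any higher level.
    import hashlib

    def derive_level_keys(master_key: bytes, levels: int) -> list[bytes]:
        """Return keys ordered from the highest permission level to the lowest."""
        keys = [hashlib.sha256(master_key + b"level").digest()]
        for _ in range(levels - 1):
            # Each lower-level key is a one-way function of the level above it.
            keys.append(hashlib.sha256(keys[-1]).digest())
        return keys

    keys = derive_level_keys(b"distribution-master-secret", levels=3)
    # keys[0] -> most sensitive image regions, keys[2] -> least sensitive;
    # a level-0 user can recompute keys[1] and keys[2] locally by hashing.
    for i, k in enumerate(keys):
        print(f"level {i}: {k.hex()[:16]}...")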
Takizawa, Masaomi; Miyashita, Toyohisa; Murase, Sumio; Kanda, Hirohito; Karaki, Yoshiaki; Yagi, Kazuo; Ohue, Toru
2003-01-01
A real-time telescreening system was developed to detect early-stage diseases among rural residents using two types of mobile vans with a portable satellite station. The system consists of a 1.5 Mbps satellite communication link over the JCSAT-1B satellite, a spiral CT van, an ultrasound imaging van with two video conference systems, a DICOM server, and a multicast communication unit. The video images and examination image data are transmitted from the van to hospitals and the university simultaneously. A physician in the hospital observes and interprets the examination images from the van and watches video of the position of the ultrasound transducer on the screenee in the van. After reviewing the images, the physician explains the results of the examination over the video conference system. Seventy lung CT screenings and 203 ultrasound screenings were performed from March to June 2002. This trial of real-time screening suggested that rural residents can receive better healthcare without visiting the hospital, and that the approach may help reduce medical costs and the medical divide between urban and rural areas.
A Novel Cross-Layer Routing Protocol Based on Network Coding for Underwater Sensor Networks.
Wang, Hao; Wang, Shilian; Bu, Renfei; Zhang, Eryang
2017-08-08
Underwater wireless sensor networks (UWSNs) have attracted increasing attention in recent years because of their numerous applications in ocean monitoring, resource discovery and tactical surveillance. However, the design of reliable and efficient transmission and routing protocols is a challenge due to the low acoustic propagation speed and complex channel environment in UWSNs. In this paper, we propose a novel cross-layer routing protocol based on network coding (NCRP) for UWSNs, which utilizes network coding and cross-layer design to greedily forward data packets to sink nodes efficiently. The proposed NCRP takes full advantage of multicast transmission and decodes packets jointly with encoded packets received from multiple potential nodes in the entire network. The transmission power is optimized in our design to extend the life cycle of the network. Moreover, we design a real-time routing maintenance protocol to update the route when inefficient relay nodes are detected. Substantial simulations in an underwater environment with Network Simulator 3 (NS-3) show that NCRP significantly improves network performance in terms of energy consumption, end-to-end delay and packet delivery ratio compared with other routing protocols for UWSNs.
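A minimal sketch of the basic idea behind packet-level network coding, reduced to a single XOR combination; NCRP's actual coding scheme is not reproduced here, so this is only an illustration of why coded multicast transmissions can carry information for several receivers at once.

    # Sketch: a relay that has overheard packets A and B can multicast A XOR B;
    # any neighbor that already holds one of the two originals can recover the
    # other from the single coded packet.
    def xor_packets(a: bytes, b: bytes) -> bytes:
        length = max(len(a), len(b))
        a = a.ljust(length, b"\x00")
        b = b.ljust(length, b"\x00")
        return bytes(x ^ y for x, y in zip(a, b))

    pkt_a = b"sensor-1: temp=4.2C"
    pkt_b = b"sensor-2: salinity=35"
    coded = xor_packets(pkt_a, pkt_b)        # relay multicasts this once

    recovered_b = xor_packets(coded, pkt_a)  # a node that knows A recovers B
    assert recovered_b.rstrip(b"\x00") == pkt_b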
Sound production due to large-scale coherent structures
NASA Technical Reports Server (NTRS)
Gatski, T. B.
1979-01-01
The acoustic pressure fluctuations due to large-scale finite amplitude disturbances in a free turbulent shear flow are calculated. The flow is decomposed into three component scales; the mean motion, the large-scale wave-like disturbance, and the small-scale random turbulence. The effect of the large-scale structure on the flow is isolated by applying both a spatial and phase average on the governing differential equations and by initially taking the small-scale turbulence to be in energetic equilibrium with the mean flow. The subsequent temporal evolution of the flow is computed from global energetic rate equations for the different component scales. Lighthill's theory is then applied to the region with the flowfield as the source and an observer located outside the flowfield in a region of uniform velocity. Since the time history of all flow variables is known, a minimum of simplifying assumptions for the Lighthill stress tensor is required, including no far-field approximations. A phase average is used to isolate the pressure fluctuations due to the large-scale structure, and also to isolate the dynamic process responsible. Variation of mean square pressure with distance from the source is computed to determine the acoustic far-field location and decay rate, and, in addition, spectra at various acoustic field locations are computed and analyzed. Also included are the effects of varying the growth and decay of the large-scale disturbance on the sound produced.
Moon-based Earth Observation for Large Scale Geoscience Phenomena
NASA Astrophysics Data System (ADS)
Guo, Huadong; Liu, Guang; Ding, Yixing
2016-07-01
The capability of Earth observation for large, global-scale natural phenomena needs to be improved, and new observing platforms are needed. We have studied the concept of the Moon as an Earth observation platform in recent years. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, and it offers the following advantages: a large observation range, variable view angles, long-term continuous observation and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring large scale geoscience phenomena, including large scale atmospheric change, large scale ocean change, large scale land surface dynamic change and solid Earth dynamic change. For the purpose of establishing a Moon-based Earth observation platform, we plan to study the following five aspects: mechanisms and models of Moon-based observation of macroscopic Earth science phenomena; optimization of sensor parameters and methods for Moon-based Earth observation; site selection and environment of Moon-based Earth observation; the Moon-based Earth observation platform itself; and a fundamental scientific framework for Moon-based Earth observation.
Wedge measures parallax separations...on large-scale 70-mm
Steven L. Wert; Richard J. Myhre
1967-01-01
A new parallax wedge (range: 1.5 to 2 inches) has been designed for use with large-scale 70-mm aerial photographs. The narrow separation of the wedge allows the user to measure the small parallax separations that are characteristic of large-scale photographs.
Response of deep and shallow tropical maritime cumuli to large-scale processes
NASA Technical Reports Server (NTRS)
Yanai, M.; Chu, J.-H.; Stark, T. E.; Nitta, T.
1976-01-01
The bulk diagnostic method of Yanai et al. (1973) and a simplified version of the spectral diagnostic method of Nitta (1975) are used for a more quantitative evaluation of the response of various types of cumuliform clouds to large-scale processes, using the same data set in the Marshall Islands area for a 100-day period in 1956. The dependence of the cloud mass flux distribution on radiative cooling, large-scale vertical motion, and evaporation from the sea is examined. It is shown that typical radiative cooling rates in the tropics tend to produce a bimodal mass spectrum exhibiting deep and shallow clouds. The bimodal distribution is further enhanced when the large-scale vertical motion is upward, and a nearly unimodal distribution of shallow clouds prevails when the radiative cooling is compensated by the heating due to large-scale subsidence. Both deep and shallow clouds are modulated by large-scale disturbances. The primary role of surface evaporation is to maintain the moisture flux at the cloud base.
Penders, Bart; Vos, Rein; Horstman, Klasien
2009-11-01
Solving complex problems in large-scale research programmes requires cooperation and division of labour. Simultaneously, large-scale problem solving also gives rise to unintended side effects. Based upon 5 years of researching two large-scale nutrigenomic research programmes, we argue that problems are fragmented in order to be solved. These sub-problems are given priority for practical reasons and in the process of solving them, various changes are introduced in each sub-problem. Combined with additional diversity as a result of interdisciplinarity, this makes reassembling the original and overall goal of the research programme less likely. In the case of nutrigenomics and health, this produces a diversification of health. As a result, the public health goal of contemporary nutrition science is not reached in the large-scale research programmes we studied. Large-scale research programmes are very successful in producing scientific publications and new knowledge; however, in reaching their political goals they often are less successful.
Cytology of DNA Replication Reveals Dynamic Plasticity of Large-Scale Chromatin Fibers.
Deng, Xiang; Zhironkina, Oxana A; Cherepanynets, Varvara D; Strelkova, Olga S; Kireev, Igor I; Belmont, Andrew S
2016-09-26
In higher eukaryotic interphase nuclei, the 100- to >1,000-fold linear compaction of chromatin is difficult to reconcile with its function as a template for transcription, replication, and repair. It is challenging to imagine how DNA and RNA polymerases with their associated molecular machinery would move along the DNA template without transient decondensation of observed large-scale chromatin "chromonema" fibers [1]. Transcription or "replication factory" models [2], in which polymerases remain fixed while DNA is reeled through, are similarly difficult to conceptualize without transient decondensation of these chromonema fibers. Here, we show how a dynamic plasticity of chromatin folding within large-scale chromatin fibers allows DNA replication to take place without significant changes in the global large-scale chromatin compaction or shape of these large-scale chromatin fibers. Time-lapse imaging of lac-operator-tagged chromosome regions shows no major change in the overall compaction of these chromosome regions during their DNA replication. Improved pulse-chase labeling of endogenous interphase chromosomes yields a model in which the global compaction and shape of large-Mbp chromatin domains remains largely invariant during DNA replication, with DNA within these domains undergoing significant movements and redistribution as they move into and then out of adjacent replication foci. In contrast to hierarchical folding models, this dynamic plasticity of large-scale chromatin organization explains how localized changes in DNA topology allow DNA replication to take place without an accompanying global unfolding of large-scale chromatin fibers while suggesting a possible mechanism for maintaining epigenetic programming of large-scale chromatin domains throughout DNA replication. Copyright © 2016 Elsevier Ltd. All rights reserved.
State of the Art in Large-Scale Soil Moisture Monitoring
NASA Technical Reports Server (NTRS)
Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.;
2013-01-01
Soil moisture is an essential climate variable influencing land atmosphere interactions, an essential hydrologic variable impacting rainfall runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.
The Emergence of Dominant Design(s) in Large Scale Cyber-Infrastructure Systems
ERIC Educational Resources Information Center
Diamanti, Eirini Ilana
2012-01-01
Cyber-infrastructure systems are integrated large-scale IT systems designed with the goal of transforming scientific practice by enabling multi-disciplinary, cross-institutional collaboration. Their large scale and socio-technical complexity make design decisions for their underlying architecture practically irreversible. Drawing on three…
A unified large/small-scale dynamo in helical turbulence
NASA Astrophysics Data System (ADS)
Bhat, Pallavi; Subramanian, Kandaswamy; Brandenburg, Axel
2016-09-01
We use high resolution direct numerical simulations (DNS) to show that helical turbulence can generate significant large-scale fields even in the presence of strong small-scale dynamo action. During the kinematic stage, the unified large/small-scale dynamo grows fields with a shape-invariant eigenfunction, with most power peaked at small scales or large k, as in Subramanian & Brandenburg. Nevertheless, the large-scale field can be clearly detected as an excess power at small k in the negatively polarized component of the energy spectrum for a forcing with positively polarized waves. Its strength B-bar, relative to the total rms field B_rms, decreases with increasing magnetic Reynolds number, Re_M. However, as the Lorentz force becomes important, the field generated by the unified dynamo orders itself by saturating on successively larger scales. The magnetic integral scale for the positively polarized waves, characterizing the small-scale field, increases significantly from the kinematic stage to saturation. This implies that the small-scale field becomes as coherent as possible for a given forcing scale, which averts the Re_M-dependent quenching of B-bar/B_rms. These results are obtained for 1024^3 DNS with magnetic Prandtl numbers of Pr_M = 0.1 and 10. For Pr_M = 0.1, B-bar/B_rms grows from about 0.04 to about 0.4 at saturation, aided in the final stages by helicity dissipation. For Pr_M = 10, B-bar/B_rms grows from much less than 0.01 to values of the order of 0.2. Our results confirm that there is a unified large/small-scale dynamo in helical turbulence.
Soft X-ray Emission from Large-Scale Galactic Outflows in Seyfert Galaxies
NASA Astrophysics Data System (ADS)
Colbert, E. J. M.; Baum, S.; O'Dea, C.; Veilleux, S.
1998-01-01
Kiloparsec-scale soft X-ray nebulae extend along the galaxy minor axes in several Seyfert galaxies, including NGC 2992, NGC 4388 and NGC 5506. In these three galaxies, the extended X-ray emission observed in ROSAT HRI images has 0.2-2.4 keV X-ray luminosities of 0.4-3.5 x 10^40 erg s^-1. The X-ray nebulae are roughly co-spatial with the large-scale radio emission, suggesting that both are produced by large-scale galactic outflows. Assuming pressure balance between the radio and X-ray plasmas, the X-ray filling factor is >~ 10^4 times as large as the radio plasma filling factor, suggesting that large-scale outflows in Seyfert galaxies are predominantly winds of thermal X-ray emitting gas. We favor an interpretation in which large-scale outflows originate as AGN-driven jets that entrain and heat gas on kpc scales as they make their way out of the galaxy. AGN- and starburst-driven winds are also possible explanations if the winds are oriented along the rotation axis of the galaxy disk. Since large-scale outflows are present in at least 50 percent of Seyfert galaxies, the soft X-ray emission from the outflowing gas may, in many cases, explain the "soft excess" X-ray feature observed below 2 keV in X-ray spectra of many Seyfert 2 galaxies.
Impact of spectral nudging on the downscaling of tropical cyclones in regional climate simulations
NASA Astrophysics Data System (ADS)
Choi, Suk-Jin; Lee, Dong-Kyou
2016-06-01
This study investigated simulations of three months of seasonal tropical cyclone (TC) activity over the western North Pacific using the Advanced Research WRF Model. In the control experiment (CTL), the TC frequency was considerably overestimated. Additionally, the tracks of some TCs tended to have larger radii of curvature and were shifted eastward. The large-scale environments of westerly monsoon flows and subtropical Pacific highs were unrealistically simulated. The overestimated frequency of TC formation was attributed to a strengthened westerly wind field in the southern quadrants of the TC center. In comparison with the experiment using the spectral nudging method, the strengthened wind speed was mainly modulated by large-scale flow on scales greater than approximately 1000 km in the model domain. The spurious formation and undesirable tracks of TCs in the CTL were considerably improved by reproducing realistic large-scale atmospheric monsoon circulation, with substantial adjustment between the large-scale flow in the model domain and the large-scale boundary forcing introduced by the spectral nudging method. The realistic monsoon circulation played a vital role in simulating realistic TCs. This indicates that, when downscaling large-scale fields for regional climate simulations, the scale interaction between model-generated regional features and the forcing large-scale fields should be considered, and spectral nudging is a desirable downscaling method.
Generation of large-scale density fluctuations by buoyancy
NASA Technical Reports Server (NTRS)
Chasnov, J. R.; Rogallo, R. S.
1990-01-01
The generation of fluid motion from a state of rest by buoyancy forces acting on a homogeneous isotropic small-scale density field is considered. Nonlinear interactions between the generated fluid motion and the initial isotropic small-scale density field are found to create an anisotropic large-scale density field with spectrum proportional to kappa^4. This large-scale density field is observed to result in an increasing Reynolds number of the fluid turbulence in its final period of decay.
Nonlinear Generation of shear flows and large scale magnetic fields by small scale turbulence in the ionosphere
NASA Astrophysics Data System (ADS)
Aburjania, G.
2009-04-01
EGU2009-233. Contact: George Aburjania, g.aburjania@gmail.com, aburj@mymail.ge
On the scaling of small-scale jet noise to large scale
NASA Technical Reports Server (NTRS)
Soderman, Paul T.; Allen, Christopher S.
1992-01-01
An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or perceived noise level (PNL) noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10^6 based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.
Large scale anomalies in the microwave background: causation and correlation.
Aslanyan, Grigor; Easther, Richard
2013-12-27
Most treatments of large scale anomalies in the microwave sky are a posteriori, with unquantified look-elsewhere effects. We contrast these with physical models of specific inhomogeneities in the early Universe which can generate these apparent anomalies. Physical models predict correlations between candidate anomalies and the corresponding signals in polarization and large scale structure, reducing the impact of cosmic variance. We compute the apparent spatial curvature associated with large-scale inhomogeneities and show that it is typically small, allowing for a self-consistent analysis. As an illustrative example we show that a single large plane wave inhomogeneity can contribute to low-l mode alignment and odd-even asymmetry in the power spectra and the best-fit model accounts for a significant part of the claimed odd-even asymmetry. We argue that this approach can be generalized to provide a more quantitative assessment of potential large scale anomalies in the Universe.
Designing for Scale: Reflections on Rolling Out Reading Improvement in Kenya and Liberia.
Gove, Amber; Korda Poole, Medina; Piper, Benjamin
2017-03-01
Since 2008, the Ministries of Education in Liberia and Kenya have undertaken transitions from small-scale pilot programs to improve reading outcomes among primary learners to the large-scale implementation of reading interventions. The effects of the pilots on learning outcomes were significant, but questions remained regarding whether such large gains could be sustained at scale. In this article, the authors dissect the Liberian and Kenyan experiences with implementing large-scale reading programs, documenting the critical components and conditions of the program designs that affected the likelihood of successfully transitioning from pilot to scale. They also review the design, deployment, and effectiveness of each pilot program and the scale, design, duration, enabling conditions, and initial effectiveness results of the scaled programs in each country. The implications of these results for the design of both pilot and large-scale reading programs are discussed in light of the experiences of both the Liberian and Kenyan programs. © 2017 Wiley Periodicals, Inc.
The Relevancy of Large-Scale, Quantitative Methodologies in Middle Grades Education Research
ERIC Educational Resources Information Center
Mertens, Steven B.
2006-01-01
This article examines the relevancy of large-scale, quantitative methodologies in middle grades education research. Based on recommendations from national advocacy organizations, the need for more large-scale, quantitative research, combined with the application of more rigorous methodologies, is presented. Subsequent sections describe and discuss…
Forum: The Rise of International Large-Scale Assessments and Rationales for Participation
ERIC Educational Resources Information Center
Addey, Camilla; Sellar, Sam; Steiner-Khamsi, Gita; Lingard, Bob; Verger, Antoni
2017-01-01
This Forum discusses the significant growth of international large-scale assessments (ILSAs) since the mid-1990s. Addey and Sellar's contribution ("A Framework for Analysing the Multiple Rationales for Participating in International Large-Scale Assessments") outlines a framework of rationales for participating in ILSAs and examines the…
The Challenge of Large-Scale Literacy Improvement
ERIC Educational Resources Information Center
Levin, Ben
2010-01-01
This paper discusses the challenge of making large-scale improvements in literacy in schools across an entire education system. Despite growing interest and rhetoric, there are very few examples of sustained, large-scale change efforts around school-age literacy. The paper reviews 2 instances of such efforts, in England and Ontario. After…
Critical Issues in Large-Scale Assessment: A Resource Guide.
ERIC Educational Resources Information Center
Redfield, Doris
The purpose of this document is to provide practical guidance and support for the design, development, and implementation of large-scale assessment systems that are grounded in research and best practice. Information is included about existing large-scale testing efforts, including national testing programs, state testing programs, and…
Toward Instructional Leadership: Principals' Perceptions of Large-Scale Assessment in Schools
ERIC Educational Resources Information Center
Prytula, Michelle; Noonan, Brian; Hellsten, Laurie
2013-01-01
This paper describes a study of the perceptions that Saskatchewan school principals have regarding large-scale assessment reform and their perceptions of how assessment reform has affected their roles as principals. The findings revealed that large-scale assessments, especially provincial assessments, have affected the principal in Saskatchewan…
1984-06-01
Aquatic Plant Control Research Program, Report A-78-2: Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants; Report 5: Synthesis Report, by Andrew... Army Engineer Waterways Experiment Station, Vicksburg, MS; Corps of Engineers, Washington, DC 20314.
A Novel Architecture of Large-scale Communication in IOT
NASA Astrophysics Data System (ADS)
Ma, Wubin; Deng, Su; Huang, Hongbin
2018-03-01
In recent years, many scholars have done a great deal of research on the development of the Internet of Things and networked physical systems. However, few have given a detailed picture of the large-scale communication architecture in the IOT. In fact, the non-uniform technology between IPv6 and access points has led to a lack of broad principles for large-scale communication architectures. Therefore, this paper presents the Uni-IPv6 Access and Information Exchange Method (UAIEM), a new architecture and algorithm that addresses large-scale communication in the IOT.
Preventing Large-Scale Controlled Substance Diversion From Within the Pharmacy
Martin, Emory S.; Dzierba, Steven H.; Jones, David M.
2013-01-01
Large-scale diversion of controlled substances (CS) from within a hospital or health system pharmacy is a rare but growing problem. It is the responsibility of pharmacy leadership to scrutinize control processes to expose weaknesses. This article reviews examples of large-scale diversion incidents and diversion techniques and provides practical strategies to stimulate enhanced CS security within the pharmacy staff. Large-scale diversion from within a pharmacy department can be averted by a pharmacist-in-charge who is informed and proactive in taking effective countermeasures. PMID:24421497
Azmy, Muna Maryam; Hashim, Mazlan; Numata, Shinya; Hosaka, Tetsuro; Noor, Nur Supardi Md.; Fletcher, Christine
2016-01-01
General flowering (GF) is a unique phenomenon wherein, at irregular intervals, taxonomically diverse trees in Southeast Asian dipterocarp forests synchronize their reproduction at the community level. Triggers of GF, including drought and low minimum temperatures a few months beforehand, have only rarely been observed across large regional scales due to the lack of meteorological stations. Here, we aim to identify the climatic conditions that trigger large-scale GF in Peninsular Malaysia using satellite sensors, the Tropical Rainfall Measuring Mission (TRMM) and the Moderate Resolution Imaging Spectroradiometer (MODIS), to evaluate the climatic conditions of focal forests. We observed antecedent drought, low temperature and high photosynthetic radiation conditions before large-scale GF events, suggesting that large-scale GF events could be triggered by these factors. In contrast, we found higher-magnitude GF in forests where lower precipitation preceded large-scale GF events. GF magnitude was also negatively influenced by land surface temperature (LST) for a large-scale GF event. Therefore, we suggest that the spatial extent of drought may be related to that of GF forests, and that the spatial pattern of LST may be related to that of GF occurrence. With significant new findings, and other results consistent with previous research, we clarify complicated environmental correlates of the GF phenomenon. PMID:27561887
The three-point function as a probe of models for large-scale structure
NASA Astrophysics Data System (ADS)
Frieman, Joshua A.; Gaztanaga, Enrique
1994-04-01
We analyze the consequences of models of structure formation for higher order (n-point) galaxy correlation functions in the mildly nonlinear regime. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ~ 20/h Mpc, e.g., low matter-density (nonzero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. We show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r >~ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
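For reference, a restatement of the conventional definitions of the hierarchical amplitudes used above (standard definitions, not quoted from the paper; xi and zeta are the two- and three-point correlation functions, and barred quantities are averages over a cell of radius r):

    Q_3(r_{12}, r_{23}, r_{31}) = \frac{\zeta(r_{12}, r_{23}, r_{31})}
        {\xi(r_{12})\,\xi(r_{23}) + \xi(r_{23})\,\xi(r_{31}) + \xi(r_{31})\,\xi(r_{12})},
    \qquad
    S_3 = \frac{\bar{\xi}_3}{\bar{\xi}_2^{\,2}}.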
Large-scale environments of narrow-line Seyfert 1 galaxies
NASA Astrophysics Data System (ADS)
Järvelä, E.; Lähteenmäki, A.; Lietzen, H.; Poudel, A.; Heinämäki, P.; Einasto, M.
2017-09-01
Studying large-scale environments of narrow-line Seyfert 1 (NLS1) galaxies gives a new perspective on their properties, particularly their radio loudness. The large-scale environment is believed to have an impact on the evolution and intrinsic properties of galaxies; however, NLS1 sources have not been studied in this context before. We have a large and diverse sample of 1341 NLS1 galaxies and three separate environment data sets constructed using the Sloan Digital Sky Survey. We use various statistical methods to investigate how the properties of NLS1 galaxies are connected to the large-scale environment, and compare the large-scale environments of NLS1 galaxies with other active galactic nuclei (AGN) classes, for example, other jetted AGN and broad-line Seyfert 1 (BLS1) galaxies, to study how they are related. NLS1 galaxies reside in less dense environments than any of the comparison samples, thus confirming their young age. The average large-scale environment density and environmental distribution of NLS1 sources are clearly different from those of BLS1 galaxies, thus it is improbable that they could be the parent population of NLS1 galaxies and unified by orientation. Within the NLS1 class there is a trend of increasing radio loudness with increasing large-scale environment density, indicating that the large-scale environment affects their intrinsic properties. Our results suggest that the NLS1 class of sources is not homogeneous, and furthermore, that a considerable fraction of them are misclassified. We further support a published proposal to replace the traditional classification into radio-loud and radio-quiet or radio-silent sources with a division into jetted and non-jetted sources.
ERIC Educational Resources Information Center
Najm, Majdi R. Abou; Mohtar, Rabi H.; Cherkauer, Keith A.; French, Brian F.
2010-01-01
Proper understanding of scaling and large-scale hydrologic processes is often not explicitly incorporated in the teaching curriculum. This makes it difficult for students to connect the effect of small scale processes and properties (like soil texture and structure, aggregation, shrinkage, and cracking) on large scale hydrologic responses (like…
Large Scale Cross Drive Correlation Of Digital Media
2016-03-01
Naval Postgraduate School, Monterey, California. Thesis: Large Scale Cross-Drive Correlation of Digital Media, by Joseph Van Bruaene, March 2016. ...the ability to make large scale cross-drive correlations among a large corpus of digital media becomes increasingly important. We propose a...
Large-scale modeling of rain fields from a rain cell deterministic model
NASA Astrophysics Data System (ADS)
Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia
2006-04-01
A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km2, the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (~20 × 20 km2), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (~150 × 150 km2), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km2) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
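A minimal sketch of the last step described above: turning a spatially correlated Gaussian field into a binary rain/no-rain map with a prescribed rain occupation rate. The grid size, correlation length and occupation rate are illustrative values, and an isotropic Gaussian filter stands in for the anisotropic covariance used in the paper.

    # Sketch: correlated Gaussian field, thresholded so that a prescribed
    # fraction of the area (the rain occupation rate) is marked as raining.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    n = 512                      # grid points per side (illustrative)
    occupation_rate = 0.15       # fraction of the area where it rains (illustrative)

    white = rng.standard_normal((n, n))
    gauss = gaussian_filter(white, sigma=20.0)   # isotropic stand-in for the anisotropic covariance

    # Threshold at the quantile that leaves exactly `occupation_rate` of cells above it.
    threshold = np.quantile(gauss, 1.0 - occupation_rate)
    rain_mask = gauss > threshold

    print("raining fraction:", rain_mask.mean())   # ~0.15 by construction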
Spatiotemporal property and predictability of large-scale human mobility
NASA Astrophysics Data System (ADS)
Zhang, Hai-Tao; Zhu, Tao; Fu, Dongfei; Xu, Bowen; Han, Xiao-Pu; Chen, Duxin
2018-04-01
Spatiotemporal characteristics of human mobility emerging from complexity on the individual scale have been extensively studied due to their application potential in human behavior prediction and recommendation, and in the control of epidemic spreading. We collect and investigate a comprehensive data set of human activities on large geographical scales, including both website browsing and mobile tower visits. Numerical results show that the degree of activity decays as a power law, indicating that human behaviors are reminiscent of the scale-free random walks known as Lévy flights. More significantly, this study suggests that human activities on large geographical scales have specific non-Markovian characteristics, such as a two-segment power-law distribution of dwelling time and a high potential for prediction. Furthermore, a scale-free mobility model with two essential ingredients, preferential return and exploration, and a Gaussian distribution assumption on the exploration tendency parameter, is proposed; it outperforms existing human mobility models in scenarios on large geographical scales.
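A minimal sketch of the exploration-and-preferential-return mechanism mentioned above; the parameter values (rho, gamma, and the Gaussian spread of the exploration tendency) are illustrative and not taken from the paper.

    # Sketch: at each step an individual either explores a new location (with a
    # probability that decays as the number of visited locations S grows) or
    # returns to a previously visited location chosen proportionally to its
    # visit frequency.
    import numpy as np

    rng = np.random.default_rng(1)
    rho = max(0.1, rng.normal(0.6, 0.1))   # exploration tendency, Gaussian-distributed (illustrative)
    gamma = 0.2                            # decay exponent of the exploration probability (illustrative)

    visits = {0: 1}                        # location id -> visit count; start at location 0
    next_location_id = 1
    for _ in range(10000):
        S = len(visits)
        if rng.random() < rho * S ** (-gamma):
            loc = next_location_id         # explore a brand-new location
            next_location_id += 1
            visits[loc] = 1
        else:                              # preferential return
            locs = list(visits)
            freqs = np.array([visits[l] for l in locs], dtype=float)
            loc = rng.choice(locs, p=freqs / freqs.sum())
            visits[loc] += 1

    print("distinct locations visited:", len(visits))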
NASA Technical Reports Server (NTRS)
Avissar, Roni; Chen, Fei
1993-01-01
Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), whose grid-scale resolution is too coarse to resolve them. Under the assumption that atmospheric variables can be separated into large-scale, mesoscale, and turbulent-scale components, a set of prognostic equations applicable in large-scale atmospheric models is developed for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes. Prognostic equations are also developed for these mesoscale fluxes, which exhibit a closure problem and, therefore, require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit of mass is used, defined as E-tilde = 0.5<u'_i u'_i> (with summation over i implied), where u'_i represents the three Cartesian components of a mesoscale circulation, the angle brackets denote the grid-scale, horizontal averaging operator in the large-scale model, and a tilde indicates a corresponding large-scale mean value. A prognostic equation is developed for E-tilde, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of E-tilde. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes. This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes generated by such subgrid-scale landscape discontinuities in large-scale atmospheric models.
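Restating the MKE definition from the abstract in standard notation (this is only the definition already given above, with the summation over the three Cartesian components written out):

    \tilde{E} = \frac{1}{2} \sum_{i=1}^{3} \langle u_i'^{\,2} \rangle,

where the angle brackets denote the grid-scale horizontal average and the prime denotes the mesoscale perturbation.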
Case Study: Commercialization of sweet sorghum juice clarification for large-scale syrup manufacture
USDA-ARS?s Scientific Manuscript database
The precipitation and burning of insoluble granules of starch from sweet sorghum juice on heating coils prevented the large scale manufacture of syrup at a new industrial plant in Missouri, USA. To remove insoluble starch granules, a series of small and large-scale experiments were conducted at the...
ERIC Educational Resources Information Center
Uetake, Tetsuya
2015-01-01
Purpose: Large-scale collective action is necessary when managing agricultural natural resources such as biodiversity and water quality. This paper determines the key factors to the success of such action. Design/Methodology/Approach: This paper analyses four large-scale collective actions used to manage agri-environmental resources in Canada and…
Large scale geologic sequestration (GS) of carbon dioxide poses a novel set of challenges for regulators. This paper focuses on the unique needs of large scale GS projects in light of the existing regulatory regimes in the United States and Canada and identifies several differen...
Comprehensive School Teachers' Professional Agency in Large-Scale Educational Change
ERIC Educational Resources Information Center
Pyhältö, Kirsi; Pietarinen, Janne; Soini, Tiina
2014-01-01
This article explores how comprehensive school teachers' sense of professional agency changes in the context of large-scale national educational change in Finland. We analysed the premises on which teachers (n = 100) view themselves and their work in terms of developing their own school, catalysed by the large-scale national change. The study…
Improving International Assessment through Evaluation
ERIC Educational Resources Information Center
Rutkowski, David
2018-01-01
In this article I advocate for a new discussion in the field of international large-scale assessments; one that calls for a reexamination of international large-scale assessments (ILSAs) and their use. Expanding on the high-quality work in this special issue I focus on three inherent limitations to international large-scale assessments noted by…
ERIC Educational Resources Information Center
Burstein, Leigh
Two specific methods of analysis in large-scale evaluations are considered: structural equation modeling and selection modeling/analysis of non-equivalent control group designs. Their utility in large-scale educational program evaluation is discussed. The examination of these methodological developments indicates how people (evaluators,…
Multilevel Item Response Modeling: Applications to Large-Scale Assessment of Academic Achievement
ERIC Educational Resources Information Center
Zheng, Xiaohui
2009-01-01
The call for standards-based reform and educational accountability has led to increased attention to large-scale assessments. Over the past two decades, large-scale assessments have been providing policymakers and educators with timely information about student learning and achievement to facilitate their decisions regarding schools, teachers and…
Managing Risk and Uncertainty in Large-Scale University Research Projects
ERIC Educational Resources Information Center
Moore, Sharlissa; Shangraw, R. F., Jr.
2011-01-01
Both publicly and privately funded research projects managed by universities are growing in size and scope. Complex, large-scale projects (over $50 million) pose new management challenges and risks for universities. This paper explores the relationship between project success and a variety of factors in large-scale university projects. First, we…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masada, Youhei; Sano, Takayoshi, E-mail: ymasada@auecc.aichi-edu.ac.jp, E-mail: sano@ile.osaka-u.ac.jp
We report the first successful simulation of spontaneous formation of surface magnetic structures from a large-scale dynamo by strongly stratified thermal convection in Cartesian geometry. The large-scale dynamo observed in our strongly stratified model has physical properties similar to those in earlier weakly stratified convective dynamo simulations, indicating that the α^2-type mechanism is responsible for the dynamo. In addition to the large-scale dynamo, we find that large-scale structures of the vertical magnetic field are spontaneously formed at the convection zone (CZ) surface only in cases with a strongly stratified atmosphere. The organization of the vertical magnetic field proceeds in the upper CZ within tens of convective turnover times, and band-like bipolar structures recurrently appear in the dynamo-saturated stage. We consider several candidates that could be the origin of the surface magnetic structure formation, and then suggest the existence of an as-yet-unknown mechanism for the self-organization of the large-scale magnetic structure, which should be inherent in the strongly stratified convective atmosphere.
Large-scale retrieval for medical image analytics: A comprehensive review.
Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting
2018-01-01
Over the past decades, medical image analytics has been greatly facilitated by the explosion of digital imaging techniques, with huge amounts of medical images produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of tackling the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to the major processes in the pipeline, including feature representation, feature indexing and searching. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, with a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
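A minimal sketch of the search stage of such a retrieval pipeline, reduced to brute-force cosine-similarity nearest neighbors over precomputed feature vectors; the feature dimensions and gallery size are placeholders, and a large-scale system would replace the brute-force scan with an approximate nearest-neighbor index.

    # Sketch: rank indexed images by cosine similarity to a query feature vector.
    import numpy as np

    rng = np.random.default_rng(0)
    gallery = rng.standard_normal((10000, 256))   # feature vectors of indexed images (stand-in data)
    query = rng.standard_normal(256)              # feature vector of the query image

    # L2-normalize so that a dot product equals cosine similarity.
    gallery_n = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    query_n = query / np.linalg.norm(query)

    scores = gallery_n @ query_n
    top_k = np.argsort(scores)[::-1][:5]          # indices of the 5 most similar images
    print(list(zip(top_k.tolist(), scores[top_k].round(3).tolist())))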
A bibliographical survey of large-scale systems
NASA Technical Reports Server (NTRS)
Corliss, W. R.
1970-01-01
A limited, partly annotated bibliography was prepared on the subject of large-scale system control. Approximately 400 references are divided into thirteen application areas, such as large societal systems and large communication systems. A first-author index is provided.
The Large-scale Distribution of Galaxies
NASA Astrophysics Data System (ADS)
Flin, Piotr
A review of the large-scale structure of the Universe is given. A connection is made with the titanic work of Johannes Kepler in many areas of astronomy and cosmology. Special attention is paid to the spatial distribution of galaxies, voids and walls (the cellular structure of the Universe). Finally, the author concludes that the large scale structure of the Universe can be observed on a much greater scale than was thought twenty years ago.
Huang, Yi-Shao; Liu, Wel-Ping; Wu, Min; Wang, Zheng-Wu
2014-09-01
This paper presents a novel observer-based decentralized hybrid adaptive fuzzy control scheme for a class of large-scale continuous-time multiple-input multiple-output (MIMO) uncertain nonlinear systems whose state variables are unmeasurable. The scheme integrates fuzzy logic systems, state observers, and strictly positive real conditions to deal with three issues in the control of a large-scale MIMO uncertain nonlinear system: algorithm design, controller singularity, and transient response. Then, the design of the hybrid adaptive fuzzy controller is extended to address a general large-scale uncertain nonlinear system. It is shown that the resulting closed-loop large-scale system remains asymptotically stable and the tracking error converges to zero. The advantages of our scheme are demonstrated by simulations. Copyright © 2014. Published by Elsevier Ltd.
Investigating a link between large and small-scale chaos features on Europa
NASA Astrophysics Data System (ADS)
Tognetti, L.; Rhoden, A.; Nelson, D. M.
2017-12-01
Chaos is one of the most recognizable, and studied, features on Europa's surface. Most models of chaos formation invoke liquid water at shallow depths within the ice shell; the liquid destabilizes the overlying ice layer, breaking it into mobile rafts and destroying pre-existing terrain. This class of model has been applied to both large-scale chaos like Conamara and small-scale features (i.e. microchaos), which are typically <10 km in diameter. Currently unknown, however, is whether both large-scale and small-scale features are produced together, e.g. through a network of smaller sills linked to a larger liquid water pocket. If microchaos features do form as satellites of large-scale chaos features, we would expect a drop off in the number density of microchaos with increasing distance from the large chaos feature; the trend should not be observed in regions without large-scale chaos features. Here, we test the hypothesis that large chaos features create "satellite" systems of smaller chaos features. Either outcome will help us better understand the relationship between large-scale chaos and microchaos. We focus first on regions surrounding the large chaos features Conamara and Murias (e.g. the Mitten). We map all chaos features within 90,000 sq km of the main chaos feature and assign each one a ranking (High Confidence, Probable, or Low Confidence) based on the observed characteristics of each feature. In particular, we look for a distinct boundary, loss of preexisting terrain, the existence of rafts or blocks, and the overall smoothness of the feature. We also note features that are chaos-like but lack sufficient characteristics to be classified as chaos. We then apply the same criteria to map microchaos features in regions of similar area ( 90,000 sq km) that lack large chaos features. By plotting the distribution of microchaos with distance from the center point of the large chaos feature or the mapping region (for the cases without a large feature), we determine whether there is a distinct signature linking large-scale chaos features with nearby microchaos. We discuss the implications of these results on the process of chaos formation and the extent of liquid water within Europa's ice shell.
Large-scale motions in the universe: Using clusters of galaxies as tracers
NASA Technical Reports Server (NTRS)
Gramann, Mirt; Bahcall, Neta A.; Cen, Renyue; Gott, J. Richard
1995-01-01
Can clusters of galaxies be used to trace the large-scale peculiar velocity field of the universe? We answer this question by using large-scale cosmological simulations to compare the motions of rich clusters of galaxies with the motion of the underlying matter distribution. Three models are investigated: Omega = 1 and Omega = 0.3 cold dark matter (CDM), and Omega = 0.3 primeval baryonic isocurvature (PBI) models, all normalized to the Cosmic Background Explorer (COBE) background fluctuations. We compare the cluster and mass distributions of peculiar velocities, bulk motions, velocity dispersions, and Mach numbers as a function of scale for R greater than or = 50/h Mpc. We also present the large-scale velocity and potential maps of clusters and of the matter. We find that clusters of galaxies trace the large-scale velocity field well and can serve as an efficient tool to constrain cosmological models. The recently reported bulk motion of clusters of 689 +/- 178 km/s on an approximately 150/h Mpc scale (Lauer & Postman 1994) is larger than expected in any of the models studied (less than or = 190 +/- 78 km/s).
The Prominent Role of the Upstream Conditions on the Large-scale Motions of a Turbulent Channel Flow
NASA Astrophysics Data System (ADS)
Castillo, Luciano; Dharmarathne, Suranga; Tutkun, Murat; Hutchins, Nicholas
2017-11-01
In this study we investigate how upstream perturbations in a turbulent channel flow impact the downstream flow evolution, especially the large-scale motions. Direct numerical simulations were carried out at a friction Reynolds number Reτ = 394. Spanwise-varying inlet blowing perturbations were imposed at 1 πh from the inlet. The flow field is decomposed into its constituent scales using proper orthogonal decomposition. The large-scale motions and the small-scale motions of the flow field are separated at a cut-off mode number, Mc, defined as the mode number at which the fraction of energy recovered is 55%. It is found that the Reynolds stresses are increased by the blowing perturbations and that large-scale motions are responsible for more than 70% of the increase in the streamwise component of the Reynolds normal stress. Surprisingly, 90% of the Reynolds shear stress is due to the energy augmentation of the large-scale motions. It is shown that inlet perturbations impact the downstream flow by means of the LSMs.
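A minimal sketch of the scale-separation step described above: snapshot proper orthogonal decomposition via the singular value decomposition, with the cut-off mode number chosen where the cumulative energy fraction reaches 55%. The snapshot matrix here is random data standing in for the DNS velocity fields.

    # Sketch: snapshot POD via the SVD, splitting a field into "large-scale"
    # (leading modes up to a cumulative-energy cutoff) and "small-scale"
    # (remaining modes) parts.
    import numpy as np

    rng = np.random.default_rng(0)
    snapshots = rng.standard_normal((2000, 300))   # rows: spatial points, columns: time snapshots (stand-in data)

    # Economy-size SVD; squared singular values give the modal energies.
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    energy_fraction = np.cumsum(s**2) / np.sum(s**2)
    Mc = int(np.searchsorted(energy_fraction, 0.55)) + 1   # cut-off mode number (55% energy)

    large_scale = U[:, :Mc] @ np.diag(s[:Mc]) @ Vt[:Mc, :]
    small_scale = snapshots - large_scale
    print("cut-off mode number Mc =", Mc)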
On the large eddy simulation of turbulent flows in complex geometry
NASA Technical Reports Server (NTRS)
Ghosal, Sandip
1993-01-01
Application of the method of Large Eddy Simulation (LES) to a turbulent flow consists of three separate steps. First, a filtering operation is performed on the Navier-Stokes equations to remove the small spatial scales. The resulting equations that describe the space time evolution of the 'large eddies' contain the subgrid-scale (sgs) stress tensor that describes the effect of the unresolved small scales on the resolved scales. The second step is the replacement of the sgs stress tensor by some expression involving the large scales - this is the problem of 'subgrid-scale modeling'. The final step is the numerical simulation of the resulting 'closed' equations for the large scale fields on a grid small enough to resolve the smallest of the large eddies, but still much larger than the fine scale structures at the Kolmogorov length. In dividing a turbulent flow field into 'large' and 'small' eddies, one presumes that a cut-off length delta can be sensibly chosen such that all fluctuations on a scale larger than delta are 'large eddies' and the remainder constitute the 'small scale' fluctuations. Typically, delta would be a length scale characterizing the smallest structures of interest in the flow. In an inhomogeneous flow, the 'sensible choice' for delta may vary significantly over the flow domain. For example, in a wall bounded turbulent flow, most statistical averages of interest vary much more rapidly with position near the wall than far away from it. Further, there are dynamically important organized structures near the wall on a scale much smaller than the boundary layer thickness. Therefore, the minimum size of eddies that need to be resolved is smaller near the wall. In general, for the LES of inhomogeneous flows, the width of the filtering kernel delta must be considered to be a function of position. If a filtering operation with a nonuniform filter width is performed on the Navier-Stokes equations, one does not in general get the standard large eddy equations. The complication is caused by the fact that a filtering operation with a nonuniform filter width in general does not commute with the operation of differentiation. This is one of the issues that we have looked at in detail as it is basic to any attempt at applying LES to complex geometry flows. Our principal findings are summarized.
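Schematically, the commutation problem described above can be written as follows (a sketch, not the report's exact derivation). For a filter kernel G with position-dependent width Delta(x),

\[
\bar f(x) = \int G\big(x - x';\,\Delta(x)\big)\, f(x')\, dx',
\qquad
\frac{\partial \bar f}{\partial x}
= \overline{\frac{\partial f}{\partial x}}
+ \frac{d\Delta}{dx}\int \frac{\partial G}{\partial \Delta}\big(x - x';\,\Delta(x)\big)\, f(x')\, dx',
\]

so filtering and differentiation commute only when d\Delta/dx = 0, i.e., for a uniform filter width; the extra term is the commutation error that must be accounted for in complex-geometry LES.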
Linking crop yield anomalies to large-scale atmospheric circulation in Europe.
Ceglar, Andrej; Turco, Marco; Toreti, Andrea; Doblas-Reyes, Francisco J
2017-06-15
Understanding the effects of climate variability and extremes on crop growth and development represents a necessary step to assess the resilience of agricultural systems to changing climate conditions. This study investigates the links between the large-scale atmospheric circulation and crop yields in Europe, providing the basis to develop seasonal crop yield forecasting and thus enabling a more effective and dynamic adaptation to climate variability and change. Four dominant modes of large-scale atmospheric variability have been used: North Atlantic Oscillation, Eastern Atlantic, Scandinavian and Eastern Atlantic-Western Russia patterns. Large-scale atmospheric circulation explains on average 43% of inter-annual winter wheat yield variability, ranging between 20% and 70% across countries. As for grain maize, the average explained variability is 38%, ranging between 20% and 58%. Spatially, the skill of the developed statistical models strongly depends on the large-scale atmospheric variability impact on weather at the regional level, especially during the most sensitive growth stages of flowering and grain filling. Our results also suggest that preceding atmospheric conditions might provide an important source of predictability especially for maize yields in south-eastern Europe. Since the seasonal predictability of large-scale atmospheric patterns is generally higher than the one of surface weather variables (e.g. precipitation) in Europe, seasonal crop yield prediction could benefit from the integration of derived statistical models exploiting the dynamical seasonal forecast of large-scale atmospheric circulation.
NASA Astrophysics Data System (ADS)
Michioka, Takenobu; Sato, Ayumu; Sada, Koichi
2011-10-01
Large-scale turbulent motions enhancing horizontal gas spread in an atmospheric boundary layer are simulated in a wind-tunnel experiment. The large-scale turbulent motions can be generated using an active grid installed at the front of the test section in the wind tunnel, when appropriate parameters for the angular deflection and the rotation speed are chosen. The power spectra of vertical velocity fluctuations are unchanged with and without the active grid because they are strongly affected by the surface. The power spectra of both streamwise and lateral velocity fluctuations with the active grid increase in the low frequency region, and are closer to the empirical relations inferred from field observations. The large-scale turbulent motions do not affect the Reynolds shear stress, but change the balance of the processes involved. The relative contributions of ejections to sweeps are suppressed by large-scale turbulent motions, indicating that the motions behave as sweep events. The lateral gas spread is enhanced by the lateral large-scale turbulent motions generated by the active grid. The large-scale motions, however, do not affect the vertical velocity fluctuations near the surface, resulting in their having a minimal effect on the vertical gas spread. The peak concentration normalized using the root-mean-squared value of concentration fluctuation is remarkably constant over most regions of the plume irrespective of the operation of the active grid.
How much a galaxy knows about its large-scale environment?: An information theoretic perspective
NASA Astrophysics Data System (ADS)
Pandey, Biswajit; Sarkar, Suman
2017-05-01
The small-scale environment characterized by the local density is known to play a crucial role in deciding galaxy properties, but the role of the large-scale environment in galaxy formation and evolution remains less clear. We propose an information theoretic framework to investigate the influence of the large-scale environment on galaxy properties and apply it to data from the Galaxy Zoo project, which provides visual morphological classifications of ~1 million galaxies from the Sloan Digital Sky Survey. We find a non-zero mutual information between morphology and environment that decreases with increasing length-scale but persists throughout the entire range of length-scales probed. We estimate the conditional mutual information and the interaction information between morphology and environment by conditioning the environment on different length-scales and find a synergic interaction between them that operates up to length-scales of at least ~30 h^-1 Mpc. Our analysis indicates that these interactions largely arise due to the mutual information shared between the environments on different length-scales.
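For reference, the standard definitions behind the quantities used above are, in one common sign convention (the discrete binning of the morphology M and of the environments E1, E2 on two length-scales is an assumption of this sketch):

\[
I(M;E) = \sum_{m,e} p(m,e)\,\log\frac{p(m,e)}{p(m)\,p(e)},
\qquad
I(M;E_1\mid E_2) = \sum_{m,e_1,e_2} p(m,e_1,e_2)\,\log\frac{p(e_2)\,p(m,e_1,e_2)}{p(m,e_2)\,p(e_1,e_2)},
\]

and the interaction information can be written as I(M;E_1;E_2) = I(M;E_1) - I(M;E_1|E_2); a synergic interaction corresponds to the conditional mutual information exceeding the unconditional one.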
The three-point function as a probe of models for large-scale structure
NASA Technical Reports Server (NTRS)
Frieman, Joshua A.; Gaztanaga, Enrique
1993-01-01
The consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime are analyzed. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations were recently introduced to obtain more power on large scales, R(sub p) is approximately 20 h(sup -1) Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. It is shown that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q(sub J) at large scales, r is approximately greater than R(sub p). Current observational constraints on the three-point amplitudes Q(sub 3) and S(sub 3) can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
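For concreteness, the hierarchical three-point amplitudes referred to above are conventionally defined as (standard notation, not reproduced from the paper itself):

\[
Q_3(r_{12},r_{23},r_{31}) = \frac{\zeta(r_{12},r_{23},r_{31})}
{\xi(r_{12})\,\xi(r_{23}) + \xi(r_{23})\,\xi(r_{31}) + \xi(r_{31})\,\xi(r_{12})},
\qquad
S_3 = \frac{\bar\xi_3}{\bar\xi_2^{\,2}},
\]

where \xi and \zeta are the two- and three-point correlation functions and \bar\xi_J denote their volume averages over cells of a given size.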
NASA Technical Reports Server (NTRS)
Over, Thomas M.; Gupta, Vijay K.
1994-01-01
Under the theory of independent and identically distributed random cascades, the probability distribution of the cascade generator determines the spatial and the ensemble properties of spatial rainfall. Three sets of radar-derived rainfall data in space and time are analyzed to estimate the probability distribution of the generator. A detailed comparison between instantaneous scans of spatial rainfall and simulated cascades using the scaling properties of the marginal moments is carried out. This comparison highlights important similarities and differences between the data and the random cascade theory. Differences are quantified and measured for the three datasets. Evidence is presented to show that the scaling properties of the rainfall can be captured to the first order by a random cascade with a single parameter. The dependence of this parameter on forcing by the large-scale meteorological conditions, as measured by the large-scale spatial average rain rate, is investigated for these three datasets. The data show that this dependence can be captured by a one-to-one function. Since the large-scale average rain rate can be diagnosed from the large-scale dynamics, this relationship demonstrates an important linkage between the large-scale atmospheric dynamics and the statistical cascade theory of mesoscale rainfall. Potential application of this research to parameterization of runoff from the land surface and regional flood frequency analysis is briefly discussed, and open problems for further research are presented.
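A minimal example of a single-parameter cascade generator of the kind described is the beta-model form (an illustrative assumption here, not a parameterization quoted from the abstract). For branching number b, the generator W takes the values

\[
P(W = 0) = 1 - b^{-\beta}, \qquad P\big(W = b^{\beta}\big) = b^{-\beta},
\quad\Rightarrow\quad \mathrm{E}[W] = 1, \qquad \mathrm{E}[W^q] = b^{\beta(q-1)},
\]

so a single parameter \beta, which an analysis of this type can relate to the large-scale average rain rate, fixes the scaling of all marginal moments of the cascade measure.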
Large-Scale Assessment, Rationality, and Scientific Management: The Case of No Child Left Behind
ERIC Educational Resources Information Center
Roach, Andrew T.; Frank, Jennifer
2007-01-01
This article examines the ways in which NCLB and the movement towards large-scale assessment systems are based on Weber's concept of formal rationality and tradition of scientific management. Building on these ideas, the authors use Ritzer's McDonaldization thesis to examine some of the core features of large-scale assessment and accountability…
The Status of Large-Scale Assessment in the Pacific Region. REL Technical Brief. REL 2008-No. 003
ERIC Educational Resources Information Center
Ryan, Jennifer; Keir, Scott
2008-01-01
This technical brief describes the large-scale assessment measures and practices used in the jurisdictions served by the Pacific Regional Educational Laboratory. The need for effective large-scale assessment was identified as a major priority for improving student achievement in the Pacific Region jurisdictions: American Samoa, Guam, Hawaii, the…
ERIC Educational Resources Information Center
Flanagan, Gina E.
2014-01-01
There is limited research that outlines how a superintendent's instructional vision can help to gain acceptance of a large-scale technology initiative. This study explored how superintendents gain acceptance for a large-scale technology initiative (specifically a 1:1 device program) through various leadership actions. The role of the instructional…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-19
... Establishment To Support Large-Scale Marine Air Ground Task Force Live-Fire and Maneuver Training at the Marine...), announces its decision to establish a large-scale Marine Air Ground Task Force (MAGTF) training facility at... through the Federal Aviation Administration the establishment and modification of military Special Use...
Large-Scale Multiobjective Static Test Generation for Web-Based Testing with Integer Programming
ERIC Educational Resources Information Center
Nguyen, M. L.; Hui, Siu Cheung; Fong, A. C. M.
2013-01-01
Web-based testing has become a ubiquitous self-assessment method for online learning. One useful feature that is missing from today's web-based testing systems is the reliable capability to fulfill different assessment requirements of students based on a large-scale question data set. A promising approach for supporting large-scale web-based…
ERIC Educational Resources Information Center
Arnold, Erik P.
2014-01-01
A multiple-case qualitative study of five school districts that had implemented various large-scale technology initiatives was conducted to describe what superintendents do to gain acceptance of those initiatives. The large-scale technology initiatives in the five participating districts included 1:1 District-Provided Device laptop and tablet…
Potential for geophysical experiments in large scale tests.
Dieterich, J.H.
1981-01-01
Potential research applications for large-specimen geophysical experiments include measurements of scale dependence of physical parameters and examination of interactions with heterogeneities, especially flaws such as cracks. In addition, increased specimen size provides opportunities for improved recording resolution and greater control of experimental variables. Large-scale experiments using a special purpose low stress (100 MPa).
Survey on large scale system control methods
NASA Technical Reports Server (NTRS)
Mercadal, Mathieu
1987-01-01
The problems inherent to large-scale systems such as power networks, communication networks, and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large scale systems, and tools specific to the class of large systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification made of the different existing approaches to dealing with large scale systems. A very similar classification is used, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems like large space structures. Some recent developments are added to this survey.
A relativistic signature in large-scale structure
NASA Astrophysics Data System (ADS)
Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David
2016-09-01
In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales-even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.
Regular Topologies for Gigabit Wide-Area Networks. Volume 1
NASA Technical Reports Server (NTRS)
Shacham, Nachum; Denny, Barbara A.; Lee, Diane S.; Khan, Irfan H.; Lee, Danny Y. C.; McKenney, Paul
1994-01-01
In general terms, this project aimed at the analysis and design of techniques for very high-speed networking. The formal objectives of the project were to: (1) Identify switch and network technologies for wide-area networks that interconnect a large number of users and can provide individual data paths at gigabit/s rates; (2) Quantitatively evaluate and compare existing and proposed architectures and protocols, identify their strength and growth potentials, and ascertain the compatibility of competing technologies; and (3) Propose new approaches to existing architectures and protocols, and identify opportunities for research to overcome deficiencies and enhance performance. The project was organized into two parts: 1. The design, analysis, and specification of techniques and protocols for very-high-speed network environments. In this part, SRI has focused on several key high-speed networking areas, including Forward Error Control (FEC) for high-speed networks in which data distortion is the result of packet loss, and the distribution of broadband, real-time traffic in multiple user sessions. 2. Congestion Avoidance Testbed Experiment (CATE). This part of the project was done within the framework of the DARTnet experimental T1 national network. The aim of the work was to advance the state of the art in benchmarking DARTnet's performance and traffic control by developing support tools for network experimentation, by designing benchmarks that allow various algorithms to be meaningfully compared, and by investigating new queueing techniques that better satisfy the needs of best-effort and reserved-resource traffic. This document is the final technical report describing the results obtained by SRI under this project. The report consists of three volumes: Volume 1 contains a technical description of the network techniques developed by SRI in the areas of FEC and multicast of real-time traffic. Volume 2 describes the work performed under CATE. Volume 3 contains the source code of all software developed under CATE.
NASA Astrophysics Data System (ADS)
Von Storch, H.; Klehmet, K.; Geyer, B.; Li, D.; Schubert-Frisius, M.; Tim, N.; Zorita, E.
2015-12-01
Global re-analyses suffer from inhomogeneities, as they process data from networks under development. However, the large-scale component of such re-analyses is mostly homogeneous; additional observational data add in most cases to a better description of regional details and less so to the large-scale states. Therefore, the concept of downscaling may be applied to homogeneously complement the large-scale state of the re-analyses with regional detail - wherever the condition of homogeneity of the large scales is fulfilled. Technically this can be done by using a regional climate model, or a global climate model, which is constrained on the large scale by spectral nudging. This approach has been developed and tested for the region of Europe, and a skillful representation of regional risks - in particular marine risks - was identified. While the data density in Europe is considerably better than in most other regions of the world, even here insufficient spatial and temporal coverage is limiting risk assessments. Therefore, downscaled data-sets are frequently used by off-shore industries. We have run this system also in regions with reduced or absent data coverage, such as the Lena catchment in Siberia, the Yellow Sea/Bo Hai region in East Asia, and Namibia and the adjacent Atlantic Ocean. A global (large-scale constrained) simulation has also been run. It turns out that spatially detailed reconstruction of the state and change of climate in the last three to six decades is doable for any region of the world. The different data sets are archived and may freely be used for scientific purposes. Of course, before application, a careful analysis of the quality for the intended application is needed, as sometimes unexpected changes in the quality of the description of large-scale driving states prevail.
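A minimal sketch of the spectral-nudging idea mentioned above: relax only the large-scale (low-wavenumber) part of a model field toward the driving re-analysis while leaving the small scales free. The doubly periodic geometry, cut-off wavenumber, and relaxation coefficient below are illustrative assumptions, not the configuration used in the studies described.

import numpy as np

def spectral_nudge(model_field, driving_field, k_cut=4, alpha=0.1):
    # model_field, driving_field: 2-D arrays on the same (periodic) grid
    fm = np.fft.fft2(model_field)
    fd = np.fft.fft2(driving_field)
    kx = np.fft.fftfreq(model_field.shape[0]) * model_field.shape[0]
    ky = np.fft.fftfreq(model_field.shape[1]) * model_field.shape[1]
    large_scale = (np.abs(kx)[:, None] <= k_cut) & (np.abs(ky)[None, :] <= k_cut)
    # Nudge only the low-wavenumber coefficients toward the driving field
    fm[large_scale] += alpha * (fd[large_scale] - fm[large_scale])
    return np.real(np.fft.ifft2(fm))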
Bakken, Tor Haakon; Aase, Anne Guri; Hagen, Dagmar; Sundt, Håkon; Barton, David N; Lujala, Päivi
2014-07-01
Climate change and the needed reductions in the use of fossil fuels call for the development of renewable energy sources. However, renewable energy production, such as hydropower (both small- and large-scale) and wind power have adverse impacts on the local environment by causing reductions in biodiversity and loss of habitats and species. This paper compares the environmental impacts of many small-scale hydropower plants with a few large-scale hydropower projects and one wind power farm, based on the same set of environmental parameters; land occupation, reduction in wilderness areas (INON), visibility and impacts on red-listed species. Our basis for comparison was similar energy volumes produced, without considering the quality of the energy services provided. The results show that small-scale hydropower performs less favourably in all parameters except land occupation. The land occupation of large hydropower and wind power is in the range of 45-50 m(2)/MWh, which is more than two times larger than the small-scale hydropower, where the large land occupation for large hydropower is explained by the extent of the reservoirs. On all the three other parameters small-scale hydropower performs more than two times worse than both large hydropower and wind power. Wind power compares similarly to large-scale hydropower regarding land occupation, much better on the reduction in INON areas, and in the same range regarding red-listed species. Our results demonstrate that the selected four parameters provide a basis for further development of a fair and consistent comparison of impacts between the analysed renewable technologies. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
Downscaling ocean conditions: Experiments with a quasi-geostrophic model
NASA Astrophysics Data System (ADS)
Katavouta, A.; Thompson, K. R.
2013-12-01
The predictability of small-scale ocean variability, given the time history of the associated large-scales, is investigated using a quasi-geostrophic model of two wind-driven gyres separated by an unstable, mid-ocean jet. Motivated by the recent theoretical study of Henshaw et al. (2003), we propose a straightforward method for assimilating information on the large-scale in order to recover the small-scale details of the quasi-geostrophic circulation. The similarity of this method to the spectral nudging of limited area atmospheric models is discussed. Results from the spectral nudging of the quasi-geostrophic model, and an independent multivariate regression-based approach, show that important features of the ocean circulation, including the position of the meandering mid-ocean jet and the associated pinch-off eddies, can be recovered from the time history of a small number of large-scale modes. We next propose a hybrid approach for assimilating both the large-scales and additional observed time series from a limited number of locations that alone are too sparse to recover the small scales using traditional assimilation techniques. The hybrid approach improved significantly the recovery of the small-scales. The results highlight the importance of the coupling between length scales in downscaling applications, and the value of assimilating limited point observations after the large-scales have been set correctly. The application of the hybrid and spectral nudging to practical ocean forecasting, and projecting changes in ocean conditions on climate time scales, is discussed briefly.
Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts
NASA Astrophysics Data System (ADS)
Gingrich, Mark
Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that those mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties in producing error growth in the mesoscales remain largely unknown. Here, 100 member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalence, total liquid water, and 850 hPa temperatures representing mesoscale features; and the sea level pressure field representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may have more importance in limiting predictability than errors in the unresolved, small-scale initial conditions.
How Large Scale Flows in the Solar Convection Zone may Influence Solar Activity
NASA Technical Reports Server (NTRS)
Hathaway, D. H.
2004-01-01
Large scale flows within the solar convection zone are the primary drivers of the Sun's magnetic activity cycle. Differential rotation can amplify the magnetic field and convert poloidal fields into toroidal fields. Poleward meridional flow near the surface can carry magnetic flux that reverses the magnetic poles and can convert toroidal fields into poloidal fields. The deeper, equatorward meridional flow can carry magnetic flux toward the equator where it can reconnect with oppositely directed fields in the other hemisphere. These axisymmetric flows are themselves driven by large scale convective motions. The effects of the Sun's rotation on convection produce velocity correlations that can maintain the differential rotation and meridional circulation. These convective motions can influence solar activity themselves by shaping the large-scale magnetic field pattern. While considerable theoretical advances have been made toward understanding these large scale flows, outstanding problems in matching theory to observations still remain.
Measuring large-scale vertical motion in the atmosphere with dropsondes
NASA Astrophysics Data System (ADS)
Bony, Sandrine; Stevens, Bjorn
2017-04-01
Large-scale vertical velocity modulates important processes in the atmosphere, including the formation of clouds, and constitutes a key component of the large-scale forcing of Single-Column Model simulations and Large-Eddy Simulations. Its measurement has also been a long-standing challenge for observationalists. We will show that it is possible to measure the vertical profile of large-scale wind divergence and vertical velocity from aircraft by using dropsondes. This methodology was tested in August 2016 during the NARVAL2 campaign in the lower Atlantic trades. Results will be shown for several research flights, the robustness and the uncertainty of the measurements will be assessed, and observational estimates will be compared with data from high-resolution numerical forecasts.
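Schematically, the measurement principle is the standard one (a sketch; the campaign's exact processing may differ): by Gauss's theorem, the area-mean horizontal divergence follows from the winds measured around the perimeter of the sounding circle, and mass continuity in pressure coordinates then gives the large-scale vertical velocity,

\[
D(p) = \frac{1}{A}\oint_{\partial A} \mathbf{u}\cdot\hat{\mathbf{n}}\;dl,
\qquad
\omega(p) = \int_{p}^{p_s} D(p')\,dp',
\]

assuming the pressure vertical velocity \omega vanishes at the surface pressure p_s.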
Large-Scale Structure and Hyperuniformity of Amorphous Ices
NASA Astrophysics Data System (ADS)
Martelli, Fausto; Torquato, Salvatore; Giovambattista, Nicolas; Car, Roberto
2017-09-01
We investigate the large-scale structure of amorphous ices and transitions between their different forms by quantifying their large-scale density fluctuations. Specifically, we simulate the isothermal compression of low-density amorphous ice (LDA) and hexagonal ice to produce high-density amorphous ice (HDA). Both HDA and LDA are nearly hyperuniform; i.e., they are characterized by an anomalous suppression of large-scale density fluctuations. By contrast, in correspondence with the nonequilibrium phase transitions to HDA, the presence of structural heterogeneities strongly suppresses the hyperuniformity and the system becomes hyposurficial (devoid of "surface-area fluctuations"). Our investigation challenges the largely accepted "frozen-liquid" picture, which views glasses as structurally arrested liquids. Beyond implications for water, our findings enrich our understanding of pressure-induced structural transformations in glasses.
NASA Astrophysics Data System (ADS)
Zheng, Maoteng; Zhang, Yongjun; Zhou, Shunping; Zhu, Junfeng; Xiong, Xiaodong
2016-07-01
In recent years, new platforms and sensors in the photogrammetry, remote sensing and computer vision areas have become available, such as Unmanned Aerial Vehicles (UAVs), oblique camera systems, common digital cameras and even mobile phone cameras. Images collected by all these kinds of sensors could be used as remote sensing data sources. These sensors can obtain large-scale remote sensing data which consist of a great number of images. Bundle block adjustment of large-scale data with the conventional algorithm is very time and space (memory) consuming due to the super large normal matrix arising from large-scale data. In this paper, an efficient Block-based Sparse Matrix Compression (BSMC) method combined with the Preconditioned Conjugate Gradient (PCG) algorithm is chosen to develop a stable and efficient bundle block adjustment system in order to deal with large-scale remote sensing data. The main contribution of this work is the BSMC-based PCG algorithm, which is more efficient in time and memory than the traditional algorithm without compromising accuracy. In total, 8 datasets of real data are used to test the proposed method. Preliminary results show that the BSMC method can efficiently decrease the time and memory requirements for large-scale data.
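A minimal sketch of the numerical pattern described above, i.e., sparse storage of the normal matrix combined with preconditioned conjugate gradients. The diagonal (Jacobi) preconditioner and the SciPy-based implementation are illustrative assumptions and do not reproduce the paper's BSMC layout.

import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def solve_normal_equations(N, rhs, maxiter=1000):
    # N: reduced normal matrix in any SciPy sparse format (e.g. BSR blocks)
    # rhs: right-hand side vector of the normal equations
    diag = N.diagonal()
    precond = LinearOperator(N.shape, matvec=lambda v: v / np.where(diag != 0.0, diag, 1.0))
    x, info = cg(N, rhs, M=precond, maxiter=maxiter)
    if info != 0:
        raise RuntimeError("PCG did not converge")
    return x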
Large Scale Processes and Extreme Floods in Brazil
NASA Astrophysics Data System (ADS)
Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.
2016-12-01
Persistent large scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional, stationarity-based approach to flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies, and the role of large scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space as obtained by machine learning techniques, particularly supervised kernel principal component analysis. In such a reduced dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convection activity. We investigate for individual sites the exceedance probability at which large scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs large scale).
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
2017-08-05
Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. A gridded large-scale forcing data set during the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. As a result, other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.
Nonlocal and collective relaxation in stellar systems
NASA Technical Reports Server (NTRS)
Weinberg, Martin D.
1993-01-01
The modal response of stellar systems to fluctuations at large scales is presently investigated by means of analytic theory and n-body simulation; the stochastic excitation of these modes is shown to increase the relaxation rate even for a system which is moderately far from instability. The n-body simulations, when designed to suppress relaxation at small scales, clearly show the effects of large-scale fluctuations. It is predicted that large-scale fluctuations will be largest for such marginally bound systems as forming star clusters and associations.
Inventory of File SN.2012091412.gribn3.f18.grib2
GRIB2 inventory excerpt (surface fields, 18-hour forecast): Convective Precipitation [kg/m^2] and Large-Scale Precipitation (non-convective) [kg/m^2] (records 285-288, NCPCP).
ERIC Educational Resources Information Center
Copp, Derek T.
2017-01-01
Large-scale assessment (LSA) is a tool used by education authorities for several purposes, including the promotion of teacher-based instructional change. In Canada, all 10 provinces engage in large-scale testing across several grade levels and subjects, and also have the common expectation that the results data will be used to improve instruction…
Grid-Enabled Quantitative Analysis of Breast Cancer
2009-10-01
large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer...pilot study to utilize large-scale parallel Grid computing to harness the nationwide cluster infrastructure for optimization of medical image...analysis parameters. Additionally, we investigated the use of cutting-edge data analysis/mining techniques as applied to Ultrasound, FFDM, and DCE-MRI Breast
ERIC Educational Resources Information Center
Sharp, Jeff; Tucker, Mark
2005-01-01
The development of large-scale livestock facilities has become a controversial issue in many regions of the U.S. in recent years. In this research, rural-urban differences in familiarity and concern about large-scale livestock facilities among Ohioans is examined as well as the relationship of social distance from agriculture and trust in risk…
An Analysis of Large-Scale Writing Assessments in Canada (Grades 5-8)
ERIC Educational Resources Information Center
Peterson, Shelley Stagg; McClay, Jill; Main, Kristin
2011-01-01
This paper reports on an analysis of large-scale assessments of Grades 5-8 students' writing across 10 provinces and 2 territories in Canada. Theory, classroom practice, and the contributions and constraints of large-scale writing assessment are brought together with a focus on Grades 5-8 writing in order to provide both a broad view of…
Christopher P. Bloch; Michael R. Willi
2006-01-01
Large-scale natural disturbances, such as hurricanes, can have profound effects on animal populations. Nonetheless, generalizations about the effects of disturbance are elusive, and few studies consider long-term responses of a single population or community to multiple large-scale disturbance events. In the last 20 y, two major hurricanes (Hugo and Georges) have struck...
ERIC Educational Resources Information Center
Töytäri, Aija; Piirainen, Arja; Tynjälä, Päivi; Vanhanen-Nuutinen, Liisa; Mäki, Kimmo; Ilves, Vesa
2016-01-01
In this large-scale study, higher education teachers' descriptions of their own learning were examined with qualitative analysis involving application of principles of phenomenographic research. This study is unique: it is unusual to use large-scale data in qualitative studies. The data were collected through an e-mail survey sent to 5960 teachers…
1981-06-01
University of South Florida, Tampa, Dept. of Biology, June 1981: Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants (Report 1 of a series); U.S. Army Engineer Waterways Experiment Station, P.O. Box 631, Vicksburg, Miss.
1982-02-01
Florida Game and Fresh Water Fish Commission, Orlando: ...of a series of reports documenting a large-scale operations management test of use of the white amur for control of problem aquatic plants in Lake... M. 1982. "Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants; Report 2, First Year Poststocking"
1982-02-01
Orange County Pollution Control Dept., Orlando, FL, Feb. 1982: ...Large-Scale Operations Management Test of use of the white amur for control of problem aquatic plants in Lake Conway, Fla. Report 1 of the series presents...as follows: Miller, D. 1982. "Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants; Report 2, First
1983-01-01
Miller and Miller Inc., Orlando, FL; H. D. Miller et al.: Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants, Report 1: Baseline Studies, Volume I... Boyd, J. 1983. "Large-Scale Operations Management Test of Use of the White Amur for Control of Problem Aquatic Plants; Report 4, Third Year Poststocking"
NASA Astrophysics Data System (ADS)
Zhang, Yang; Liu, Wei; Li, Xiaodong; Yang, Fan; Gao, Peng; Jia, Zhenyuan
2015-10-01
Large-scale triangulation scanning measurement systems are widely used to measure the three-dimensional profile of large-scale components and parts. The accuracy and speed of the laser stripe center extraction are essential for guaranteeing the accuracy and efficiency of the measuring system. However, in the process of large-scale measurement, multiple factors can cause deviation of the laser stripe center, including the spatial light intensity distribution, material reflectivity characteristics, and spatial transmission characteristics. A center extraction method is proposed for improving the accuracy of the laser stripe center extraction based on image evaluation of Gaussian fitting structural similarity and analysis of the multiple source factors. First, according to the features of the gray distribution of the laser stripe, evaluation of the Gaussian fitting structural similarity is estimated to provide a threshold value for center compensation. Then using the relationships between the gray distribution of the laser stripe and the multiple source factors, a compensation method of center extraction is presented. Finally, measurement experiments for a large-scale aviation composite component are carried out. The experimental results for this specific implementation verify the feasibility of the proposed center extraction method and the improved accuracy for large-scale triangulation scanning measurements.
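A minimal sketch of the Gaussian-fitting step that underlies the center extraction described above. The column-wise processing and the initial-guess choices are illustrative; the paper's structural-similarity evaluation and multi-factor compensation are not reproduced here.

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, sigma, offset):
    return amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2) + offset

def stripe_center(gray_column):
    # gray_column: 1-D gray-level profile across the laser stripe
    x = np.arange(gray_column.size, dtype=float)
    g = gray_column.astype(float)
    p0 = [g.max() - g.min(), float(np.argmax(g)), 2.0, float(g.min())]
    popt, _ = curve_fit(gaussian, x, g, p0=p0)
    return popt[1]  # sub-pixel center of the fitted Gaussian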
Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows
NASA Astrophysics Data System (ADS)
Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel
2017-11-01
We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
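Schematically, the model class described above combines an eddy-viscosity term with a nondissipative nonlinear term; the particular tensor form below is one common choice and is given only as a sketch, with placeholder coefficients:

\[
\tau_{ij}^{\mathrm{mod}} - \tfrac{1}{3}\tau_{kk}\,\delta_{ij}
= -2\,\nu_e\,\bar S_{ij}
+ c_N\,\Delta^2\big(\bar S_{ik}\bar\Omega_{kj} - \bar\Omega_{ik}\bar S_{kj}\big),
\]

where \bar S_{ij} and \bar\Omega_{ij} are the resolved rate-of-strain and rate-of-rotation tensors, \nu_e is the eddy viscosity, and the second term is traceless and does not contribute to the dissipation, so it can only redistribute energy, e.g. as in rotation-dominated transport.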
Large-scale influences in near-wall turbulence.
Hutchins, Nicholas; Marusic, Ivan
2007-03-15
Hot-wire data acquired in a high Reynolds number facility are used to illustrate the need for adequate scale separation when considering the coherent structure in wall-bounded turbulence. It is found that a large-scale motion in the log region becomes increasingly comparable in energy to the near-wall cycle as the Reynolds number increases. Through decomposition of fluctuating velocity signals, it is shown that this large-scale motion has a distinct modulating influence on the small-scale energy (akin to amplitude modulation). Reassessment of DNS data, in light of these results, shows similar trends, with the rate and intensity of production due to the near-wall cycle subject to a modulating influence from the largest-scale motions.
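A minimal sketch of the kind of amplitude-modulation diagnostic described above: split a hot-wire velocity signal into large- and small-scale parts, take the envelope of the small scales, and correlate its large-scale part with the large-scale velocity. The Butterworth filter, its order, and the cut-off frequency are illustrative assumptions rather than the processing used in the paper.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def amplitude_modulation_coefficient(u, fs, f_cut):
    # u: streamwise velocity time series, fs: sampling rate, f_cut: scale-separation frequency
    u = np.asarray(u, dtype=float) - np.mean(u)
    b, a = butter(4, f_cut / (fs / 2.0), btype="low")
    u_large = filtfilt(b, a, u)
    u_small = u - u_large
    envelope = np.abs(hilbert(u_small))        # small-scale amplitude envelope
    envelope_large = filtfilt(b, a, envelope)  # keep only its large-scale part
    return np.corrcoef(u_large, envelope_large)[0, 1]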
Bai, Hua; Li, Xinshi; Hu, Chao; Zhang, Xuan; Li, Junfang; Yan, Yan; Xi, Guangcheng
2013-01-01
Mesoporous nanostructures represent a unique class of photocatalysts with many applications, including splitting of water, degradation of organic contaminants, and reduction of carbon dioxide. In this work, we report a general Lewis acid catalytic template route for the high-yield production of single- and multi-component large-scale three-dimensional (3D) mesoporous metal oxide networks. The large-scale 3D mesoporous metal oxide networks possess a large macroscopic scale (millimeter-sized) and a mesoporous nanostructure with huge pore volume and large surface exposure area. This method can also be used for the synthesis of large-scale 3D macro/mesoporous hierarchical porous materials and noble-metal-nanoparticle-loaded 3D mesoporous networks. Photocatalytic degradation of azo dyes demonstrated that the large-scale 3D mesoporous metal oxide networks enable high photocatalytic activity. The present synthetic method can serve as a new design concept for functional 3D mesoporous nanomaterials. PMID:23857595
Large-Scale Coherent Vortex Formation in Two-Dimensional Turbulence
NASA Astrophysics Data System (ADS)
Orlov, A. V.; Brazhnikov, M. Yu.; Levchenko, A. A.
2018-04-01
The evolution of a vortex flow excited by an electromagnetic technique in a thin layer of a conducting liquid was studied experimentally. Small-scale vortices, excited at the pumping scale, merge with time due to the nonlinear interaction and produce large-scale structures: the inverse energy cascade is formed. The energy spectrum in the developed inverse cascade is well described by the Kraichnan law k^(-5/3). At large scales, the inverse cascade is limited by the cell size, and a large-scale coherent vortex flow is formed, which occupies almost the entire area of the experimental cell. The radial profile of the azimuthal velocity of the coherent vortex immediately after the pumping was switched off has been established for the first time. Inside the vortex core, the azimuthal velocity grows linearly with radius and reaches a constant value outside the core, which agrees well with the theoretical prediction.
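For reference, the Kraichnan law referred to above is the inverse-cascade form of the two-dimensional energy spectrum,

\[
E(k) = C\,\varepsilon^{2/3}\,k^{-5/3},
\]

where \varepsilon is the energy flux carried upscale by the inverse cascade and C is a dimensionless constant.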
Yurk, Brian P
2018-07-01
Animal movement behaviors vary spatially in response to environmental heterogeneity. An important problem in spatial ecology is to determine how large-scale population growth and dispersal patterns emerge within highly variable landscapes. We apply the method of homogenization to study the large-scale behavior of a reaction-diffusion-advection model of population growth and dispersal. Our model includes small-scale variation in the directed and random components of movement and growth rates, as well as large-scale drift. Using the homogenized model we derive simple approximate formulas for persistence conditions and asymptotic invasion speeds, which are interpreted in terms of residence index. The homogenization results show good agreement with numerical solutions for environments with a high degree of fragmentation, both with and without periodicity at the fast scale. The simplicity of the formulas, and their connection to residence index make them appealing for studying the large-scale effects of a variety of small-scale movement behaviors.
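As an indication of the kind of simple approximate formulas mentioned above, here is a sketch under strong simplifying assumptions (one spatial dimension, periodic fast-scale variation, no large-scale drift; these are not the paper's exact expressions): homogenization typically replaces the growth rate by an arithmetic average and the diffusivity by a harmonic-type average, giving an asymptotic invasion speed of Fisher-KPP form,

\[
\bar r = \frac{1}{L}\int_0^L r(x)\,dx,\qquad
\bar D = \left(\frac{1}{L}\int_0^L \frac{dx}{D(x)}\right)^{-1},\qquad
c^* \approx 2\sqrt{\bar r\,\bar D}.
\]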
Field-aligned currents' scale analysis performed with the Swarm constellation
NASA Astrophysics Data System (ADS)
Lühr, Hermann; Park, Jaeheung; Gjerloev, Jesper W.; Rauberg, Jan; Michaelis, Ingo; Merayo, Jose M. G.; Brauer, Peter
2015-01-01
We present a statistical study of the temporal- and spatial-scale characteristics of different field-aligned current (FAC) types derived with the Swarm satellite formation. We divide FACs into two classes: small-scale, up to some 10 km, which are carried predominantly by kinetic Alfvén waves, and large-scale FACs with sizes of more than 150 km. For determining temporal variability we consider measurements at the same point, the orbital crossovers near the poles, but at different times. From correlation analysis we obtain a persistent period of small-scale FACs of order 10 s, while large-scale FACs can be regarded stationary for more than 60 s. For the first time we investigate the longitudinal scales. Large-scale FACs are different on dayside and nightside. On the nightside the longitudinal extension is on average 4 times the latitudinal width, while on the dayside, particularly in the cusp region, latitudinal and longitudinal scales are comparable.
Why do large and small scales couple in a turbulent boundary layer?
NASA Astrophysics Data System (ADS)
Bandyopadhyay, Promode R.
2011-11-01
Correlation measurement, which is not definitive, suggests that large and small scales in a turbulent boundary layer (TBL) couple. A TBL is modeled as a jungle of interacting nonlinear oscillators to explore the origin of the coupling. These oscillators have the inherent properties of self-sustainability, disturbance rejection, and self-referential phase reset, whereby several oscillators can phase align (or have a constant phase difference between them) when an "external" impulse is applied. Consequently, these properties of a TBL are accounted for: self-sustainability, return of the wake component after a disturbance is removed, and the formation of the 18° large structures, which are composed of a sequential train of hairpin vortices. The nonlinear ordinary differential equations of the oscillators are solved using an analog circuit for rapid solution. The post-bifurcation limit cycles are determined. A small scale and a large scale are akin to two different oscillators. The state variables from the two disparate interacting oscillators are shown to couple, and the small scales appear at certain regions of the phase of the large scale. The coupling is a consequence of the nonlinear oscillatory behavior. Although state planes exist where the disparate scales appear de-superposed, all scales in a TBL are in fact coupled and they cannot be monochromatically isolated.
CMB hemispherical asymmetry from non-linear isocurvature perturbations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Assadullahi, Hooshyar; Wands, David; Firouzjahi, Hassan
2015-04-01
We investigate whether non-adiabatic perturbations from inflation could produce an asymmetric distribution of temperature anisotropies on large angular scales in the cosmic microwave background (CMB). We use a generalised non-linear δN formalism to calculate the non-Gaussianity of the primordial density and isocurvature perturbations due to the presence of non-adiabatic, but approximately scale-invariant field fluctuations during multi-field inflation. This local-type non-Gaussianity leads to a correlation between very long wavelength inhomogeneities, larger than our observable horizon, and smaller scale fluctuations in the radiation and matter density. Matter isocurvature perturbations contribute primarily to low CMB multipoles and hence can lead to a hemispherical asymmetry on large angular scales, with negligible asymmetry on smaller scales. In curvaton models, where the matter isocurvature perturbation is partly correlated with the primordial density perturbation, we are unable to obtain a significant asymmetry on large angular scales while respecting current observational constraints on the observed quadrupole. However in the axion model, where the matter isocurvature and primordial density perturbations are uncorrelated, we find it may be possible to obtain a significant asymmetry due to isocurvature modes on large angular scales. Such an isocurvature origin for the hemispherical asymmetry would naturally give rise to a distinctive asymmetry in the CMB polarisation on large scales.
Large-Scale Outflows in Seyfert Galaxies
NASA Astrophysics Data System (ADS)
Colbert, E. J. M.; Baum, S. A.
1995-12-01
Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (>~1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in >~1/4 of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or a circumnuclear starburst.
Large Eddy Simulation in the Computation of Jet Noise
NASA Technical Reports Server (NTRS)
Mankbadi, R. R.; Goldstein, M. E.; Povinelli, L. A.; Hayder, M. E.; Turkel, E.
1999-01-01
Noise can, in principle, be predicted by solving the full (time-dependent) compressible Navier-Stokes equations (FCNSE) with the computational domain extended to the far field. The fluctuating near field of the jet produces propagating pressure waves that generate far-field sound, so the fluctuating flow field as a function of time is needed in order to calculate sound from first principles. Extending the computational domain to the far field is not feasible, however: at the high Reynolds numbers of technological interest, turbulence has a large range of scales, and direct numerical simulation (DNS) cannot capture the small scales of turbulence. The large scales are more efficient than the small scales in radiating sound. The emphasis is thus on calculating the sound radiated by the large scales.
NASA Astrophysics Data System (ADS)
Choi, N.; Lee, M. I.; Lim, Y. K.; Kim, K. M.
2017-12-01
A heatwave is an extreme hot weather event that can cause fatal damage to human health. Heatwaves have a strong relationship with large-scale atmospheric teleconnection patterns. In this study, we examine the spatial pattern of heatwaves in East Asia by using EOF analysis and the relationship between heatwave frequency and large-scale atmospheric teleconnection patterns. We also separate the heatwave frequency into a time scale longer than a decade and the interannual time scale. The long-term variation of heatwave frequency in East Asia shows a linkage with the sea surface temperature (SST) variability over the North Atlantic on a decadal time scale (a.k.a. the Atlantic Multidecadal Oscillation; AMO). On the other hand, the interannual variation of heatwave frequency is linked with two dominant spatial patterns associated with the large-scale teleconnection patterns mimicking the Scandinavian teleconnection (SCAND-like) pattern and the circumglobal teleconnection (CGT-like) pattern, respectively. It is highlighted that the interannual variation of heatwave frequency in East Asia shows a remarkable change after the mid-1990s. While the heatwave frequency was mainly associated with the CGT-like pattern before the mid-1990s, the SCAND-like pattern becomes the most dominant one after the mid-1990s, making the CGT-like pattern the second. This study implies that large-scale atmospheric teleconnection patterns play a key role in developing heatwave events in East Asia. This study further discusses possible mechanisms for the decadal change in the linkage between heatwave frequency and the large-scale teleconnection patterns in East Asia, such as early melting of snow cover and/or weakening of the East Asian jet stream due to global warming.
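A minimal sketch of the EOF analysis step mentioned above, computed via an SVD of a (time x space) anomaly matrix; the array layout and the number of retained modes are illustrative assumptions.

import numpy as np

def eof_analysis(anomaly, n_modes=2):
    # anomaly: (n_time, n_gridpoints) array, e.g. heatwave-frequency anomalies on a flattened grid
    field = anomaly - anomaly.mean(axis=0, keepdims=True)
    u, s, vt = np.linalg.svd(field, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    eofs = vt[:n_modes]                 # leading spatial patterns
    pcs = u[:, :n_modes] * s[:n_modes]  # associated principal-component time series
    return eofs, pcs, explained[:n_modes]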
Transfer of movement sequences: bigger is better.
Dean, Noah J; Kovacs, Attila J; Shea, Charles H
2008-02-01
Experiment 1 was conducted to determine if proportional transfer from "small to large" scale movements is as effective as transferring from "large to small." We hypothesize that the learning of larger scale movement will require the participant to learn to manage the generation, storage, and dissipation of forces better than when practicing smaller scale movements. Thus, we predict an advantage for transfer of larger scale movements to smaller scale movements relative to transfer from smaller to larger scale movements. Experiment 2 was conducted to determine if adding a load to a smaller scale movement would enhance later transfer to a larger scale movement sequence. It was hypothesized that the added load would require the participants to consider the dynamics of the movement to a greater extent than without the load. The results replicated earlier findings of effective transfer from large to small movements, but consistent with our hypothesis, transfer was less effective from small to large (Experiment 1). However, when a load was added during acquisition transfer from small to large was enhanced even though the load was removed during the transfer test. These results are consistent with the notion that the transfer asymmetry noted in Experiment 1 was due to factors related to movement dynamics that were enhanced during practice of the larger scale movement sequence, but not during the practice of the smaller scale movement sequence. The findings that the movement structure is unaffected by transfer direction but the movement dynamics are influenced by transfer direction is consistent with hierarchal models of sequence production.
Large-Scale Hybrid Motor Testing. Chapter 10
NASA Technical Reports Server (NTRS)
Story, George
2006-01-01
Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. There have been many suitcase-sized portable test stands assembled for demonstration of hybrids. They show the safety of hybrid rockets to the audiences. These small show motors and small laboratory-scale motors can give comparative burn rate data for development of different fuel/oxidizer combinations; however, the questions that are always asked when hybrids are mentioned for large-scale applications are: how do they scale, and has it been shown in a large motor? To answer those questions, large-scale motor testing is required to verify the hybrid motor at its true size. The necessity to conduct large-scale hybrid rocket motor tests to validate the burn rate from the small motors to application size has been documented in several places. Comparison of small-scale hybrid data to that of larger scale data indicates that the fuel burn rate goes down with increasing port size, even with the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB-based fuels. While the reason this is occurring would make a great paper or study or thesis, it is not thoroughly understood at this time. Potential causes include the fact that since hybrid combustion is boundary layer driven, the larger port sizes reduce the interaction (radiation, mixing and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype-sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability and scaling concepts that went into the development of those large motors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poidevin, Frédérick; Ade, Peter A. R.; Hargrave, Peter C.
2014-08-10
Turbulence and magnetic fields are expected to be important for regulating molecular cloud formation and evolution. However, their effects on sub-parsec to 100 parsec scales, leading to the formation of starless cores, are not well understood. We investigate the prestellar core structure morphologies obtained from analysis of the Herschel-SPIRE 350 μm maps of the Lupus I cloud. This distribution is first compared on a statistical basis to the large-scale shape of the main filament. We find the distribution of the elongation position angle of the cores to be consistent with a random distribution, which means no specific orientation of the morphology of the cores is observed with respect to the mean orientation of the large-scale filament in Lupus I, nor relative to a large-scale bent filament model. This distribution is also compared to the mean orientation of the large-scale magnetic fields probed at 350 μm with the Balloon-borne Large Aperture Telescope for Polarimetry during its 2010 campaign. Here again we do not find any correlation between the core morphology distribution and the average orientation of the magnetic fields on parsec scales. Our main conclusion is that the local filament dynamics—including secondary filaments that often run orthogonally to the primary filament—and possibly small-scale variations in the local magnetic field direction, could be the dominant factors for explaining the final orientation of each core.
EFFECTS OF LARGE-SCALE POULTRY FARMS ON AQUATIC MICROBIAL COMMUNITIES: A MOLECULAR INVESTIGATION.
The effects of large-scale poultry production operations on water quality and human health are largely unknown. Poultry litter is frequently applied as fertilizer to agricultural lands adjacent to large poultry farms. Run-off from the land introduces a variety of stressors into t...
Wang, Lu-Yong; Fasulo, D
2006-01-01
Genome-wide association studies for complex diseases will generate massive amounts of single nucleotide polymorphism (SNP) data. Univariate statistical tests (e.g., Fisher's exact test) are used to single out non-associated SNPs. However, disease-susceptible SNPs may have little marginal effect in the population and are unlikely to be retained after the univariate tests. Also, model-based methods are impractical for large-scale datasets. Moreover, genetic heterogeneity makes it harder for traditional methods to identify the genetic causes of diseases. A more recent random forest method provides a more robust approach for screening SNPs at the scale of thousands. However, for larger-scale data, e.g., Affymetrix Human Mapping 100K GeneChip data, a faster screening method is required to screen SNPs in whole-genome, large-scale association analysis with genetic heterogeneity. We propose a boosting-based method for rapid screening in large-scale analysis of complex traits in the presence of genetic heterogeneity. It provides a relatively fast and fairly good tool for screening and limiting the candidate SNPs for further, more complex computational modeling tasks.
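As a rough illustration of the screening idea only (not the authors' implementation), the sketch below ranks SNPs by their contribution to a boosted tree ensemble and keeps the top candidates for downstream modeling; the synthetic genotype matrix, the heterogeneous trait model, and the use of scikit-learn's gradient boosting are assumptions introduced for the example.

```python
# Minimal sketch of boosting-based SNP screening on synthetic genotype data.
# Not the paper's method; it only illustrates ranking SNPs by their contribution
# to a boosted ensemble and retaining a small candidate set.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n_samples, n_snps = 500, 2000                  # hypothetical sizes
X = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)  # genotypes coded 0/1/2

# Hypothetical heterogeneous trait: two subgroups driven by different SNPs.
group = rng.integers(0, 2, size=n_samples)
y = np.where(group == 0, X[:, 10] > 1, X[:, 20] > 1).astype(int)

booster = GradientBoostingClassifier(n_estimators=200, max_depth=2, random_state=0)
booster.fit(X, y)

# Rank SNPs by importance and keep a small candidate set for further modeling.
top = np.argsort(booster.feature_importances_)[::-1][:50]
print("top candidate SNP indices:", top[:10])
```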
Enhancements for a Dynamic Data Warehousing and Mining System for Large-scale HSCB Data
2016-08-29
Intelligent Automation Incorporated. Enhancements for a Dynamic Data Warehousing and Mining System for Large-Scale HSCB Data, Monthly Report No. 5. Reporting Period: July 20, 2016 – Aug 19, 2016. Contract No. N00014-16-P-3014.
Some aspects of control of a large-scale dynamic system
NASA Technical Reports Server (NTRS)
Aoki, M.
1975-01-01
Techniques of predicting and/or controlling the dynamic behavior of large scale systems are discussed in terms of decentralized decision making. Topics discussed include: (1) control of large scale systems by dynamic team with delayed information sharing; (2) dynamic resource allocation problems by a team (hierarchical structure with a coordinator); and (3) some problems related to the construction of a model of reduced dimension.
R. S. Seymour; J. Guldin; D. Marshall; B. Palik
2006-01-01
This paper provides a synopsis of large-scale, long-term silviculture experiments in the United States. Large-scale in a silvicultural context means that experimental treatment units encompass entire stands (5 to 30 ha); long-term means that results are intended to be monitored over many cutting cycles or an entire rotation, typically for many decades. Such studies...
ERIC Educational Resources Information Center
van Barneveld, Christina; Brinson, Karieann
2017-01-01
The purpose of this research was to identify conflicts in the rights and responsibilities of Grade 9 test takers when some parts of a large-scale test are marked by teachers and used in the calculation of students' class marks. Data from teachers' questionnaires and students' questionnaires from a 2009-10 administration of a large-scale test of…
Bayesian hierarchical model for large-scale covariance matrix estimation.
Zhu, Dongxiao; Hero, Alfred O
2007-12-01
Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
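To make the variance/bias trade-off concrete, here is a minimal sketch that shrinks a noisy sample covariance toward a diagonal target. This is a deliberately simplified stand-in for the paper's Bayesian hierarchical estimator, not a reproduction of it; the shrinkage weight and data sizes are assumptions.

```python
# Minimal sketch of regularized covariance estimation: shrink the sample
# covariance toward its diagonal. A simplified stand-in for the hierarchical
# Bayesian approach; alpha and the data shape are assumptions.
import numpy as np

def shrunk_covariance(X, alpha=0.5):
    """Convex combination of the sample covariance and its diagonal (0 <= alpha <= 1)."""
    S = np.cov(X, rowvar=False)          # p x p sample covariance
    target = np.diag(np.diag(S))         # keep variances, drop off-diagonal noise
    return (1.0 - alpha) * S + alpha * target

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 200))       # n << p, typical of OMICS data
S_hat = shrunk_covariance(X)
print(S_hat.shape, np.linalg.eigvalsh(S_hat).min() > 0)   # shrinkage yields a positive definite estimate
```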
ERIC Educational Resources Information Center
Fleisch, Brahm; Taylor, Stephen; Schöer, Volker; Mabogoane, Thabo
2017-01-01
This article illustrates the value of large-scale impact evaluations with counterfactual components. It begins by exploring the limitations of small-scale impact studies, which do not allow reliable inference to a wider population or which do not use valid comparison groups. The paper then describes the design features of a recent large-scale…
ERIC Educational Resources Information Center
Burgin, Rick A.
2012-01-01
Large-scale crises continue to surprise, overwhelm, and shatter college and university campuses. While the devastation to physical plants and persons is often evident and is addressed with crisis management plans, the number of emotional casualties left in the wake of these large-scale crises may not be apparent and are often not addressed with…
Transport induced by large scale convective structures in a dipole-confined plasma.
Grierson, B A; Mauel, M E; Worstell, M W; Klassen, M
2010-11-12
Convective structures characterized by E×B motion are observed in a dipole-confined plasma. Particle transport rates are calculated from density dynamics obtained from multipoint measurements and the reconstructed electrostatic potential. The calculated transport rates determined from the large-scale dynamics and local probe measurements agree in magnitude, show intermittency, and indicate that the particle transport is dominated by large-scale convective structures.
Large-Scale Coronal Heating from "Cool" Activity in the Solar Magnetic Network
NASA Technical Reports Server (NTRS)
Falconer, D. A.; Moore, R. L.; Porter, J. G.; Hathaway, D. H.
1999-01-01
In Fe XII images from SOHO/EIT, the quiet solar corona shows structure on scales ranging from sub-supergranular (i.e., bright points and coronal network) to multi-supergranular (large-scale corona). In Falconer et al 1998 (Ap.J., 501, 386) we suppressed the large-scale background and found that the network-scale features are predominantly rooted in the magnetic network lanes at the boundaries of the supergranules. Taken together, the coronal network emission and bright point emission are only about 5% of the entire quiet solar coronal Fe XII emission. Here we investigate the relationship between the large-scale corona and the network as seen in three different EIT filters (He II, Fe IX-X, and Fe XII). Using the median-brightness contour, we divide the large-scale Fe XII corona into dim and bright halves, and find that the bright-half/dim half brightness ratio is about 1.5. We also find that the bright half relative to the dim half has 10 times greater total bright point Fe XII emission, 3 times greater Fe XII network emission, 2 times greater Fe IX-X network emission, 1.3 times greater He II network emission, and has 1.5 times more magnetic flux. Also, the cooler network (He II) radiates an order of magnitude more energy than the hotter coronal network (Fe IX-X, and Fe XII). From these results we infer that: 1) The heating of the network and the heating of the large-scale corona each increase roughly linearly with the underlying magnetic flux. 2) The production of network coronal bright points and heating of the coronal network each increase nonlinearly with the magnetic flux. 3) The heating of the large-scale corona is driven by widespread cooler network activity rather than by the exceptional network activity that produces the network coronal bright points and the coronal network. 4) The large-scale corona is heated by a nonthermal process since the driver of its heating is cooler than it is. This work was funded by the Solar Physics Branch of NASA's office of Space Science through the SR&T Program and the SEC Guest Investigator Program.
NASA Astrophysics Data System (ADS)
Chatterjee, Tanmoy; Peet, Yulia T.
2018-03-01
Length scales of eddies involved in the power generation of infinite wind farms are studied by analyzing the spectra of the turbulent flux of mean kinetic energy (MKE) from large eddy simulations (LES). Large-scale structures an order of magnitude bigger than the turbine rotor diameter (D) are shown to have a substantial contribution to wind power. Varying dynamics in the intermediate scales (D-10D) are also observed from a parametric study involving interturbine distances and hub height of the turbines. Further insight about the eddies responsible for the power generation has been provided from the scaling analysis of two-dimensional premultiplied spectra of MKE flux. The LES code is developed in a high Reynolds number near-wall modeling framework, using the open-source spectral element code Nek5000, and the wind turbines have been modelled using a state-of-the-art actuator line model. The LES of infinite wind farms have been validated against statistical results from the previous literature. The study is expected to improve our understanding of the complex multiscale dynamics in the domain of large wind farms and identify the length scales that contribute to the power. This information can be useful for design of wind farm layout and turbine placement that take advantage of the large-scale structures contributing to wind turbine power.
Hastrup, Sidsel; Damgaard, Dorte; Johnsen, Søren Paaske; Andersen, Grethe
2016-07-01
We designed and validated a simple prehospital stroke scale to identify emergent large vessel occlusion (ELVO) in patients with acute ischemic stroke and compared the scale to other published scales for prediction of ELVO. A national historical test cohort of 3127 patients with information on intracranial vessel status (angiography) before reperfusion therapy was identified. National Institutes of Health Stroke Scale (NIHSS) items with the highest predictive value for occlusion of a large intracranial artery were identified, and the most optimal combination meeting predefined criteria to ensure usefulness in the prehospital phase was determined. The predictive performance of the Prehospital Acute Stroke Severity (PASS) scale was compared with other published scales for ELVO. The PASS scale was composed of 3 NIHSS scores: level of consciousness (month/age), gaze palsy/deviation, and arm weakness. In the derivation of PASS, two-thirds of the test cohort were used, showing an accuracy (area under the curve) of 0.76 for detecting large arterial occlusion. The optimal cut point of ≥2 abnormal scores showed: sensitivity=0.66 (95% CI, 0.62-0.69), specificity=0.83 (0.81-0.85), and area under the curve=0.74 (0.72-0.76). Validation on the remaining one-third of the test cohort showed similar performance. Patients with a large artery occlusion on angiography and PASS ≥2 had a median NIHSS score of 17 (interquartile range=6) as opposed to PASS <2 with a median NIHSS score of 6 (interquartile range=5). The PASS scale showed equal performance, although it is simpler, when compared with other scales predicting ELVO. The PASS scale is simple and has promising accuracy for prediction of ELVO in the field. © 2016 American Heart Association, Inc.
R Patrick Bixler; Shawn Johnson; Kirk Emerson; Tina Nabatchi; Melly Reuling; Charles Curtin; Michele Romolini; Morgan Grove
2016-01-01
The objective of large landscape conservation is to mitigate complex ecological problems through interventions at multiple and overlapping scales. Implementation requires coordination among a diverse network of individuals and organizations to integrate local-scale conservation activities with broad-scale goals. This requires an understanding of the governance options...
2003-12-01
operations run the full gamut from large-scale, theater-wide combat, as witnessed in Operation Iraqi Freedom, to small-scale operations against terrorists, to operations…
Formation of large-scale structure from cosmic-string loops and cold dark matter
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Scherrer, Robert J.
1987-01-01
Some results from a numerical simulation of the formation of large-scale structure from cosmic-string loops are presented. It is found that even though Gμ is required to be lower than 2 × 10^-6 (where μ is the mass per unit length of the string) to give a low enough autocorrelation amplitude, there is excessive power on smaller scales, so that galaxies would be more dense than observed. The large-scale structure does not show a filamentary or connected appearance and shares with more conventional models based on Gaussian perturbations the lack of cluster-cluster correlation at the mean cluster separation scale as well as excessively small bulk velocities at these scales.
1982-08-01
Florida Univ., Gainesville, Dept. of Environmental Engineering: Large-Scale Operations Management Test of Use of the White Amur. ...Conway ecosystem and is part of the Large-Scale Operations Management Test (LSOMT) of the Aquatic Plant Control Research Program (APCRP) at the WES. ...should be cited as follows: Blancher, E. C., II, and Fellows, C. R. 1982. "Large-Scale Operations Management Test of Use of the White Amur for Control...
1983-07-01
Aquatic Plant Control Research Program, Technical Report A-78-2: Large-Scale Operations Management Test of Use of the White Amur. Waterways Experiment Station, P.O. Box 631, Vicksburg, Miss. 39180.
PKI security in large-scale healthcare networks.
Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos
2012-06-01
During the past few years, many PKI (Public Key Infrastructure) architectures have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these healthcare PKIs face a plethora of challenges, especially when deployed over large-scale healthcare networks. In this paper, we propose a PKI infrastructure to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI infrastructure addresses the trust issues that arise in a large-scale healthcare network comprising multiple PKI domains.
Copy of Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adalsteinsson, Helgi; Armstrong, Robert C.; Chiang, Ken
2008-10-01
We report on the work done in the late-start LDRD "Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet." We describe the creation of a research platform that emulates many thousands of machines to be used for the study of large-scale internet behavior. We describe a proof-of-concept simple attack we performed in this environment. We describe the successful capture of a Storm bot and, from the study of the bot and further literature search, establish large-scale aspects we seek to understand via emulation of Storm on our research platform in possible follow-on work. Finally, we discuss possible future work.
Asymptotic stability and instability of large-scale systems. [using vector Liapunov functions
NASA Technical Reports Server (NTRS)
Grujic, L. T.; Siljak, D. D.
1973-01-01
The purpose of this paper is to develop new methods for constructing vector Lyapunov functions and broaden the application of Lyapunov's theory to stability analysis of large-scale dynamic systems. The application, so far limited by the assumption that the large-scale systems are composed of exponentially stable subsystems, is extended via the general concept of comparison functions to systems which can be decomposed into asymptotically stable subsystems. Asymptotic stability of the composite system is tested by a simple algebraic criterion. By redefining interconnection functions among the subsystems according to interconnection matrices, the same mathematical machinery can be used to determine connective asymptotic stability of large-scale systems under arbitrary structural perturbations.
pycola: N-body COLA method code
NASA Astrophysics Data System (ADS)
Tassev, Svetlin; Eisenstein, Daniel J.; Wandelt, Benjamin D.; Zaldarriaga, Matias
2015-09-01
pycola is a multithreaded Python/Cython N-body code, implementing the Comoving Lagrangian Acceleration (COLA) method in the temporal and spatial domains, which trades accuracy at small scales to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing. The COLA method achieves its speed by calculating the large-scale dynamics exactly using LPT while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos.
Computing the universe: how large-scale simulations illuminate galaxies and dark energy
NASA Astrophysics Data System (ADS)
O'Shea, Brian
2015-04-01
High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these are structures that operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and whose complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.
Gas-Centered Swirl Coaxial Liquid Injector Evaluations
NASA Technical Reports Server (NTRS)
Cohn, A. K.; Strakey, P. A.; Talley, D. G.
2005-01-01
Development of liquid rocket engines is expensive, and extensive testing at large scales is usually required. In order to verify engine lifetime, a large number of tests is required, while only limited resources are available for development. Sub-scale cold-flow and hot-fire testing is extremely cost effective; it could be a necessary (but not sufficient) condition for long engine lifetime, and it reduces the overall costs and risk of large-scale testing. Goal: determine the knowledge that can be gained from sub-scale cold-flow and hot-fire evaluations of LRE injectors, and determine relationships between cold-flow and hot-fire data.
Numerical study of dynamo action at low magnetic Prandtl numbers.
Ponty, Y; Mininni, P D; Montgomery, D C; Pinton, J-F; Politano, H; Pouquet, A
2005-04-29
We present a three-pronged numerical approach to the dynamo problem at low magnetic Prandtl numbers P_M. The difficulty of resolving a large range of scales is circumvented by combining direct numerical simulations, a Lagrangian-averaged model and large-eddy simulations. The flow is generated by the Taylor-Green forcing; it combines a well-defined structure at large scales and turbulent fluctuations at small scales. Our main findings are (i) dynamos are observed from P_M=1 down to P_M=10^-2, (ii) the critical magnetic Reynolds number increases sharply with P_M^-1 as turbulence sets in and then it saturates, and (iii) in the linear growth phase, unstable magnetic modes move to smaller scales as P_M is decreased. Then the dynamo grows at large scales and modifies the turbulent velocity fluctuations.
Trajectory Segmentation Map-Matching Approach for Large-Scale, High-Resolution GPS Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Lei; Holden, Jacob R.; Gonder, Jeffrey D.
With the development of smartphones and portable GPS devices, large-scale, high-resolution GPS data can be collected. Map matching is a critical step in studying vehicle driving activity and recognizing network traffic conditions from the data. A new trajectory segmentation map-matching algorithm is proposed to deal accurately and efficiently with large-scale, high-resolution GPS trajectory data. The new algorithm separated the GPS trajectory into segments. It found the shortest path for each segment in a scientific manner and ultimately generated a best-matched path for the entire trajectory. The similarity of a trajectory segment and its matched path is described by a similarity score system based on the longest common subsequence. The numerical experiment indicated that the proposed map-matching algorithm was very promising in relation to accuracy and computational efficiency. Large-scale data set applications verified that the proposed method is robust and capable of dealing with real-world, large-scale GPS data in a computationally efficient and accurate manner.
Trajectory Segmentation Map-Matching Approach for Large-Scale, High-Resolution GPS Data
Zhu, Lei; Holden, Jacob R.; Gonder, Jeffrey D.
2017-01-01
With the development of smartphones and portable GPS devices, large-scale, high-resolution GPS data can be collected. Map matching is a critical step in studying vehicle driving activity and recognizing network traffic conditions from the data. A new trajectory segmentation map-matching algorithm is proposed to deal accurately and efficiently with large-scale, high-resolution GPS trajectory data. The new algorithm separated the GPS trajectory into segments. It found the shortest path for each segment in a scientific manner and ultimately generated a best-matched path for the entire trajectory. The similarity of a trajectory segment and its matched path is described by a similarity score system based on the longest common subsequence. The numerical experiment indicated that the proposed map-matching algorithm was very promising in relation to accuracy and computational efficiency. Large-scale data set applications verified that the proposed method is robust and capable of dealing with real-world, large-scale GPS data in a computationally efficient and accurate manner.
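The abstract names a longest-common-subsequence (LCS) similarity score between a trajectory segment and its matched path. As a minimal sketch of that ingredient only, the code below computes an LCS-based score over sequences of road-link IDs; the matching criterion, the normalization by the longer sequence, and the link IDs are assumptions, not the paper's exact scoring system.

```python
# Minimal sketch of an LCS-based similarity score between a GPS trajectory
# segment and a candidate matched path, both discretized to road-link IDs.
def lcs_length(a, b):
    """Classic O(len(a)*len(b)) dynamic program for the LCS length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def similarity(trajectory_links, path_links):
    """Normalize the LCS length by the longer sequence so the score lies in [0, 1]."""
    denom = max(len(trajectory_links), len(path_links))
    return lcs_length(trajectory_links, path_links) / denom if denom else 0.0

# Hypothetical link-ID sequences for one trajectory segment and one candidate path.
segment = ["L12", "L13", "L40", "L41", "L77"]
candidate = ["L12", "L13", "L39", "L41", "L77", "L80"]
print(round(similarity(segment, candidate), 3))   # 0.667
```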
Hao, Shijie; Cui, Lishan; Wang, Hua; ...
2016-02-10
Crystals held at ultrahigh elastic strains and stresses may exhibit exceptional physical and chemical properties. Individual metallic nanowires can sustain ultra-large elastic strains of 4-7%. However, retaining elastic strains of such magnitude in kilogram-scale nanowires is challenging. Here, we find that under active load, ~5.6% elastic strain can be achieved in Nb nanowires in a composite material. Moreover, large tensile (2.8%) and compressive (-2.4%) elastic strains can be retained in kilogram-scale Nb nanowires when the composite is unloaded to a free-standing condition. It is then demonstrated that the retained tensile elastic strains of Nb nanowires significantly increase their superconducting transition temperature and critical magnetic fields, corroborating ab initio calculations based on BCS theory. This free-standing nanocomposite design paradigm opens new avenues for retaining ultra-large elastic strains in great quantities of nanowires and elastic-strain-engineering at industrial scale.
A process for creating multimetric indices for large-scale aquatic surveys
Differences in sampling and laboratory protocols, differences in techniques used to evaluate metrics, and differing scales of calibration and application prohibit the use of many existing multimetric indices (MMIs) in large-scale bioassessments. We describe an approach to develop...
Microfilament-Eruption Mechanism for Solar Spicules
NASA Technical Reports Server (NTRS)
Sterling, Alphonse C.; Moore, Ronald L.
2017-01-01
Recent studies indicate that solar coronal jets result from eruption of small-scale filaments, or "minifilaments" (Sterling et al. 2015, Nature, 523, 437; Panesar et al. ApJL, 832L, 7). In many aspects, these coronal jets appear to be small-scale versions of long-recognized large-scale solar eruptions that are often accompanied by eruption of a large-scale filament and that produce solar flares and coronal mass ejections (CMEs). In coronal jets, a jet-base bright point (JBP) that is often observed to accompany the jet and that sits on the magnetic neutral line from which the minifilament erupts, corresponds to the solar flare of larger-scale eruptions that occurs at the neutral line from which the large-scale filament erupts. Large-scale eruptions are relatively uncommon (approximately 1 per day) and occur with relatively large-scale erupting filaments (approximately 10^5 kilometers long). Coronal jets are more common (approximately 100s per day), but occur from erupting minifilaments of smaller size (approximately 10^4 kilometers long). It is known that solar spicules are much more frequent (many millions per day) than coronal jets. Just as coronal jets are small-scale versions of large-scale eruptions, here we suggest that solar spicules might in turn be small-scale versions of coronal jets; we postulate that the spicules are produced by eruptions of "microfilaments" of length comparable to the width of observed spicules (approximately 300 kilometers). A plot of the estimated number of the three respective phenomena (flares/CMEs, coronal jets, and spicules) occurring on the Sun at a given time, against the average sizes of erupting filaments, minifilaments, and the putative microfilaments, results in a size distribution that can be fitted with a power-law within the estimated uncertainties. The counterparts of the flares of large-scale eruptions and the JBPs of jets might be weak, pervasive, transient brightenings observed in Hinode/CaII images, and the production of spicules by microfilament eruptions might explain why spicules spin, as do coronal jets. The expected small-scale neutral lines from which the microfilaments would be expected to erupt would be difficult to detect reliably with current instrumentation, but might be apparent with instrumentation of the near future. A full report on this work appears in Sterling and Moore 2016, ApJL, 829, L9.
The Modified HZ Conjugate Gradient Algorithm for Large-Scale Nonsmooth Optimization.
Yuan, Gonglin; Sheng, Zhou; Liu, Wenjie
2016-01-01
In this paper, the Hager and Zhang (HZ) conjugate gradient (CG) method and the modified HZ (MHZ) CG method are presented for large-scale nonsmooth convex minimization. Under some mild conditions, convergence results for the proposed methods are established. Numerical results show that the presented methods achieve better efficiency for large-scale nonsmooth problems; several problems are tested, with maximum dimensions up to 100,000 variables.
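For orientation, here is a hedged sketch of the standard (smooth) Hager-Zhang CG direction update with a crude backtracking line search; the paper's modified HZ method for nonsmooth problems adds further machinery not reproduced here, and the quadratic test problem and line-search constants are assumptions.

```python
# Minimal sketch of the Hager-Zhang (HZ) nonlinear CG update on a smooth test
# function; not the paper's modified method for nonsmooth minimization.
import numpy as np

def hz_cg(f, grad, x0, iters=200, tol=1e-8):
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(iters):
        # Backtracking Armijo line search (a simplification of HZ's Wolfe search).
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * g.dot(d):
            t *= 0.5
            if t < 1e-12:
                break
        x_new = x + t * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        y = g_new - g
        dy = d.dot(y)
        # HZ beta: (y - 2*d*||y||^2 / (d'y))' g_new / (d'y)
        beta = (y - 2.0 * d * y.dot(y) / dy).dot(g_new) / dy
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Quadratic test problem: minimize 0.5*x'Ax - b'x.
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, -1.0])
sol = hz_cg(lambda x: 0.5 * x @ A @ x - b @ x, lambda x: A @ x - b, np.zeros(2))
print(sol, np.linalg.solve(A, b))   # the two should agree closely
```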
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O. (Editor); Housner, Jerrold M. (Editor)
1993-01-01
Computing speed is leaping forward by several orders of magnitude each decade. Engineers and scientists gathered at a NASA Langley symposium to discuss these exciting trends as they apply to parallel computational methods for large-scale structural analysis and design. Among the topics discussed were: large-scale static analysis; dynamic, transient, and thermal analysis; domain decomposition (substructuring); and nonlinear and numerical methods.
Chen, Wei; Deng, Da
2014-11-11
We report a new, low-cost and simple top-down approach, "sodium-cutting", to cut and open nanostructures deposited on a nonplanar surface on a large scale. The feasibility of sodium-cutting was demonstrated by successfully cutting open ~100% of carbon nanospheres into nanobowls on a large scale from Sn@C nanospheres for the first time.
Large Scale Density Estimation of Blue and Fin Whales (LSD)
2014-09-30
...estimating blue and fin whale density that is effective over large spatial scales and is designed to cope with spatial variation in animal density utilizing...
2011-12-01
Aqueous film forming foam (AFFF) firefighting agents and equipment are capable of... AFRL-RX-TY-TR-2012-0012: Performance of Aqueous Film Forming Foam (AFFF) on Large-Scale Hydroprocessed Renewable Jet (HRJ) Fuel Fires. Contract FA4819-09-C-0030.
Large-scale Activities Associated with the 2005 Sep. 7th Event
NASA Astrophysics Data System (ADS)
Zong, Weiguo
We present a multi-wavelength study on large-scale activities associated with a significant solar event. On 2005 September 7, a flare classified as bigger than X17 was observed. Combining Hα 6562.8 Å, He I 10830 Å, and soft X-ray observations, three large-scale activities were found to propagate over a long distance on the solar surface. 1) The first large-scale activity emanated from the flare site, propagated westward around the solar equator, and appeared as sequential brightenings. With the MDI longitudinal magnetic field map, the activity was found to propagate along the magnetic network. 2) The second large-scale activity could be well identified in both He I 10830 Å images and soft X-ray images and appeared as a diffuse emission enhancement propagating away. The activity started later than the first one and was not centered on the flare site. Moreover, a rotation was found along with the bright front propagating away. 3) The third activity was ahead of the second one and was identified as a "winking" filament. The three activities have different origins, which were seldom observed in one event. Therefore this study is useful for understanding the mechanism of large-scale activities on the solar surface.
Dissecting the large-scale galactic conformity
NASA Astrophysics Data System (ADS)
Seo, Seongu
2018-01-01
Galactic conformity is an observed phenomenon that galaxies located in the same region have similar properties such as star formation rate, color, gas fraction, and so on. The conformity was first observed among galaxies within the same halos (“one-halo conformity”). The one-halo conformity can be readily explained by mutual interactions among galaxies within a halo. Recent observations however further witnessed a puzzling connection among galaxies with no direct interaction. In particular, galaxies located within a sphere of ~5 Mpc radius tend to show similarities, even though the galaxies do not share common halos with each other ("two-halo conformity" or “large-scale conformity”). Using a cosmological hydrodynamic simulation, Illustris, we investigate the physical origin of the two-halo conformity and put forward two scenarios. First, back-splash galaxies are likely responsible for the large-scale conformity. They have evolved into red galaxies due to ram-pressure stripping in a given galaxy cluster and happen to reside now within a ~5 Mpc sphere. Second, galaxies in the strong tidal field induced by large-scale structure also seem to give rise to the large-scale conformity. The strong tides suppress star formation in the galaxies. We discuss the importance of the large-scale conformity in the context of galaxy evolution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Srivastava, A. K.; Goossens, M.
2013-11-01
We present rare observational evidence of vertical kink oscillations in a laminar and diffused large-scale plasma curtain as observed by the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory. The X6.9-class flare in active region 11263 on 2011 August 9 induces a global large-scale disturbance that propagates in a narrow lane above the plasma curtain and creates a low density region that appears as a dimming in the observational image data. This large-scale propagating disturbance acts as a non-periodic driver that interacts asymmetrically and obliquely with the top of the plasma curtain and triggers the observed oscillations. In the deeper layers of the curtain, we find evidence of vertical kink oscillations with two periods (795 s and 530 s). On the magnetic surface of the curtain where the density is inhomogeneous due to coronal dimming, non-decaying vertical oscillations are also observed (period ≈ 763-896 s). We infer that the global large-scale disturbance triggers vertical kink oscillations in the deeper layers as well as on the surface of the large-scale plasma curtain. The properties of the excited waves strongly depend on the local plasma and magnetic field conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorski, K.M.
1991-03-01
The relation between cosmic microwave background (CMB) anisotropies and large-scale galaxy streaming motions is examined within the framework of inflationary cosmology. The minimal Sachs and Wolfe (1967) CMB anisotropies at large angular scales in the models with initial Harrison-Zel'dovich spectrum of inhomogeneity normalized to the local large-scale bulk flow, which are independent of the Hubble constant and specific nature of dark matter, are found to be within the anticipated ultimate sensitivity limits of COBE's Differential Microwave Radiometer experiment. For example, the most likely value of the quadrupole coefficient is predicted to be a2 not less than 7 × 10^-6, where equality applies to the limiting minimal model. If (1) COBE's DMR instruments perform well throughout the two-year period; (2) the anisotropy data are not marred by the systematic errors; (3) the large-scale motions retain their present observational status; (4) there is no statistical conspiracy in a sense of the measured bulk flow being of untypically high and the large-scale anisotropy of untypically low amplitudes; and (5) the low-order multipoles in the all-sky primordial fireball temperature map are not detected, the inflationary paradigm will have to be questioned. 19 refs.
Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.
Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai
2008-03-15
A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and time consumption are greatly increased with the increase of the network scale. A localization algorithm based on a spring model (LASM) method is proposed to reduce the computational complexity while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected with neighbor nodes by virtual springs. The virtual springs will force the particles to move to the original positions, the node positions correspondingly, from the randomly set positions. Therefore, a blind node position can be determined from the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of the neighbor nodes does not increase proportionally with the network scale size. Three patches are proposed to avoid local optimization, kick out bad nodes and deal with node variation. Simulation results show that the computational and communication complexity are almost constant despite the increase of the network scale size. The time consumption has also been proven to remain almost constant since the calculation steps are almost unrelated with the network scale size.
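A minimal sketch of the spring-model idea follows: each blind node is iteratively moved along the net "spring force" arising from the mismatch between measured and current inter-node distances, while anchor nodes stay fixed. The toy topology, noise level, and step constant are assumptions, and the paper's three patches (local-optimum escape, bad-node rejection, node variation) are not reproduced.

```python
# Minimal sketch of spring-model localization with fixed anchors.
import numpy as np

rng = np.random.default_rng(2)
true_pos = rng.uniform(0, 100, size=(12, 2))     # 12 nodes in a 100 x 100 field
anchors = [0, 1, 2, 3]                           # nodes with known positions

# Measured distances to neighbors (here: all pairs, with small noise).
meas = np.linalg.norm(true_pos[:, None] - true_pos[None, :], axis=2)
meas += rng.normal(0, 0.5, meas.shape)
np.fill_diagonal(meas, 0.0)

est = rng.uniform(0, 100, size=true_pos.shape)   # random initial guesses
est[anchors] = true_pos[anchors]

step = 0.1                                       # spring "stiffness" times time step
for _ in range(500):
    for i in range(len(est)):
        if i in anchors:
            continue
        force = np.zeros(2)
        for j in range(len(est)):
            if j == i:
                continue
            vec = est[j] - est[i]
            dist = np.linalg.norm(vec) + 1e-9
            force += (dist - meas[i, j]) * vec / dist   # stretched spring pulls, compressed pushes
        est[i] += step * force

err = np.linalg.norm(est - true_pos, axis=1)
print("mean localization error of blind nodes:", err[4:].mean())
```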
How to simulate global cosmic strings with large string tension
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klaer, Vincent B.; Moore, Guy D., E-mail: vklaer@theorie.ikp.physik.tu-darmstadt.de, E-mail: guy.moore@physik.tu-darmstadt.de
Global string networks may be relevant in axion production in the early Universe, as well as other cosmological scenarios. Such networks contain a large hierarchy of scales between the string core scale and the Hubble scale, ln(f_a/H) ∼ 70, which influences the network dynamics by giving the strings large tensions T ≅ π f_a^2 ln(f_a/H). We present a new numerical approach to simulate such global string networks, capturing the tension without an exponentially large lattice.
Liu, Yuqiong; Du, Qingyun; Wang, Qi; Yu, Huanyun; Liu, Jianfeng; Tian, Yu; Chang, Chunying; Lei, Jing
2017-07-01
The causation between the bioavailability of heavy metals and environmental factors is generally obtained from field experiments at local scales at present, and lacks sufficient evidence from large scales. However, inferring causation between the bioavailability of heavy metals and environmental factors across large-scale regions is challenging, because the conventional correlation-based approaches used for causation assessments across large-scale regions can, at the expense of actual causation, result in spurious insights. In this study, a general approach framework, Intervention calculus when the directed acyclic graph (DAG) is absent (IDA) combined with the backdoor criterion (BC), was introduced to identify causation between the bioavailability of heavy metals and the potential environmental factors across large-scale regions. We take the Pearl River Delta (PRD) in China as a case study. The causal structures and effects were identified based on the concentrations of heavy metals (Zn, As, Cu, Hg, Pb, Cr, Ni and Cd) in soil (0-20 cm depth) and vegetable (lettuce) and 40 environmental factors (soil properties, extractable heavy metals and weathering indices) in 94 samples across the PRD. Results show that the bioavailability of heavy metals (Cd, Zn, Cr, Ni and As) was causally influenced by soil properties and soil weathering factors, whereas no causal factor impacted the bioavailability of Cu, Hg and Pb. No latent factor was found between the bioavailability of heavy metals and environmental factors. The causation between the bioavailability of heavy metals and environmental factors in field experiments is consistent with that at a large scale. The IDA combined with the BC provides a powerful tool to identify causation between the bioavailability of heavy metals and environmental factors across large-scale regions. Causal inference in a large system with dynamic changes has great implications for system-based risk management. Copyright © 2017 Elsevier Ltd. All rights reserved.
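To illustrate only the back-door adjustment step (the graph-learning part of IDA is not reproduced), the sketch below estimates an exposure effect by regressing an outcome on the exposure while controlling for an adjustment set. The data are synthetic and the variable names (a soil property confounder, an exposure, a bioavailable-metal outcome) are hypothetical stand-ins.

```python
# Minimal sketch of back-door adjustment on synthetic data: once a valid
# adjustment set Z is known from the causal graph, the effect of exposure X on
# outcome Y is estimated by regression of Y on X controlling for Z.
import numpy as np

rng = np.random.default_rng(3)
n = 500
soil_om = rng.normal(0, 1, n)                                   # confounder (e.g., organic matter)
soil_ph = 0.8 * soil_om + rng.normal(0, 1, n)                   # exposure, driven by the confounder
bio_cd = 1.5 * soil_ph + 2.0 * soil_om + rng.normal(0, 1, n)    # outcome (true effect of exposure = 1.5)

def linear_effect(y, x, controls):
    """OLS coefficient of x in a regression of y on [1, x, controls]."""
    design = np.column_stack([np.ones_like(x), x] + list(controls))
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef[1]

print("naive (confounded) effect:", round(linear_effect(bio_cd, soil_ph, []), 2))
print("back-door adjusted effect:", round(linear_effect(bio_cd, soil_ph, [soil_om]), 2))
# the adjusted estimate should be close to the true coefficient 1.5
```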
Hybrid services efficient provisioning over the network coding-enabled elastic optical networks
NASA Astrophysics Data System (ADS)
Wang, Xin; Gu, Rentao; Ji, Yuefeng; Kavehrad, Mohsen
2017-03-01
As a variety of services have emerged, hybrid services have become more common in real optical networks. Although elastic spectrum resource optimization over elastic optical networks (EONs) has been widely investigated, little research has been carried out on routing and spectrum allocation (RSA) for hybrid services, especially over the network coding-enabled EON. We investigate RSA for unicast services and network coding-based multicast services over the network coding-enabled EON under the constraints of time delay and transmission distance. To address this issue, a mathematical model was built to minimize the total spectrum consumption for the hybrid services over the network coding-enabled EON under the constraints of time delay and transmission distance. The model guarantees different routing constraints for different types of services. The intermediate nodes over the network coding-enabled EON are assumed to be capable of encoding the flows for different kinds of information. We propose an efficient heuristic algorithm, the network coding-based adaptive routing and layered graph-based spectrum allocation algorithm (NCAR-LGSA). Simulation results show that NCAR-LGSA achieves highly efficient spectrum resource utilization under different network scenarios compared with the benchmark algorithms.
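As a minimal sketch of one building block of RSA (not the paper's layered-graph NCAR-LGSA heuristic), the code below performs first-fit assignment of a contiguous block of spectrum slots along a fixed route, honoring the spectrum-continuity constraint; the link names, slot count, and pre-existing occupancy are hypothetical.

```python
# Minimal sketch of first-fit spectrum allocation along a fixed route in an
# elastic optical network: the same contiguous slot block must be free on
# every link of the route.
NUM_SLOTS = 16

# Occupancy bitmap per directed link: True means the slot is already in use.
links = {
    ("A", "B"): [False] * NUM_SLOTS,
    ("B", "C"): [False] * NUM_SLOTS,
}
links[("B", "C")][2] = links[("B", "C")][3] = True   # pre-existing traffic

def first_fit(route, demand_slots):
    """Return the first start index whose demand_slots-wide block is free on all links."""
    for start in range(NUM_SLOTS - demand_slots + 1):
        block = range(start, start + demand_slots)
        if all(not links[link][s] for link in route for s in block):
            for link in route:                       # commit the allocation
                for s in block:
                    links[link][s] = True
            return start
    return None                                      # blocked: no contiguous block fits

route = [("A", "B"), ("B", "C")]
print(first_fit(route, 3))   # 4 -> slots 4..6, skipping the occupied slots 2-3
```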
A Novel Cross-Layer Routing Protocol Based on Network Coding for Underwater Sensor Networks
Wang, Hao; Wang, Shilian; Bu, Renfei; Zhang, Eryang
2017-01-01
Underwater wireless sensor networks (UWSNs) have attracted increasing attention in recent years because of their numerous applications in ocean monitoring, resource discovery and tactical surveillance. However, the design of reliable and efficient transmission and routing protocols is a challenge due to the low acoustic propagation speed and complex channel environment in UWSNs. In this paper, we propose a novel cross-layer routing protocol based on network coding (NCRP) for UWSNs, which utilizes network coding and cross-layer design to greedily forward data packets to sink nodes efficiently. The proposed NCRP takes full advantage of multicast transmission and decodes packets jointly with encoded packets received from multiple potential nodes in the entire network. The transmission power is optimized in our design to extend the life cycle of the network. Moreover, we design a real-time routing maintenance protocol to update the route when detecting inefficient relay nodes. Substantial simulations in an underwater environment with Network Simulator 3 (NS-3) show that NCRP significantly improves the network performance in terms of energy consumption, end-to-end delay and packet delivery ratio compared with other routing protocols for UWSNs. PMID:28786915
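The network-coding ingredient can be sketched in isolation: relays forward GF(2) (XOR) linear combinations of source packets, and a sink recovers the originals by Gaussian elimination once it holds enough linearly independent combinations. The coefficient vectors below are fixed (and chosen to be invertible) purely for a deterministic example; the routing, power control, and maintenance parts of NCRP are not modeled.

```python
# Minimal sketch of GF(2) linear network coding: encode by XOR, decode by
# Gaussian elimination over GF(2). Packet contents are arbitrary test bytes.
import random

K, PLEN = 4, 8
random.seed(7)
sources = [bytes(random.randrange(256) for _ in range(PLEN)) for _ in range(K)]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(coeffs):
    """GF(2) linear combination (XOR) of the source packets selected by coeffs."""
    out = bytes(PLEN)
    for c, pkt in zip(coeffs, sources):
        if c:
            out = xor(out, pkt)
    return out

# Coefficient vectors a relay might have drawn; chosen here to be linearly
# independent over GF(2) so exactly K coded packets suffice for decoding.
coeff_matrix = [[1, 0, 0, 0], [1, 1, 0, 0], [0, 1, 1, 0], [1, 0, 1, 1]]
coded = [(c[:], bytearray(encode(c))) for c in coeff_matrix]

# Sink side: Gaussian elimination over GF(2) recovers the original packets.
for col in range(K):
    pivot = next(r for r in range(col, K) if coded[r][0][col])
    coded[col], coded[pivot] = coded[pivot], coded[col]
    for r in range(K):
        if r != col and coded[r][0][col]:
            coded[r] = ([a ^ b for a, b in zip(coded[r][0], coded[col][0])],
                        bytearray(xor(coded[r][1], coded[col][1])))

print([bytes(p) for _, p in coded] == sources)   # True: all K packets recovered
```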
Network and User-Perceived Performance of Web Page Retrievals
NASA Technical Reports Server (NTRS)
Kruse, Hans; Allman, Mark; Mallasch, Paul
1998-01-01
The development of the HTTP protocol has been driven by the need to improve the network performance of the protocol by allowing the efficient retrieval of multiple parts of a web page without the need for multiple simultaneous TCP connections between a client and a server. We suggest that the retrieval of multiple page elements sequentially over a single TCP connection may result in a degradation of the perceived performance experienced by the user. We attempt to quantify this perceived degradation through the use of a model which combines a web retrieval simulation and an analytical model of TCP operation. Starting with the current HTTP/1.1 specification, we first suggest a client-side heuristic to improve the perceived transfer performance. We show that the perceived speed of the page retrieval can be increased without sacrificing data transfer efficiency. We then propose a new client/server extension to the HTTP/1.1 protocol to allow for the interleaving of page element retrievals. We finally address the issue of the display of advertisements on web pages, and in particular suggest a number of mechanisms which can make efficient use of IP multicast to send advertisements to a number of clients within the same network.
Performance Evaluation of Telemedicine System based on multicasting over Heterogeneous Network.
Yun, H Y; Yoo, S K; Kim, D K; Kim, Sung Rim
2005-01-01
For appropriate diagnosis, medical data such as high-quality images of the patient's affected part, vital signs, patient information, and teleconferencing data for communication between specialists must be transmitted. After the patient and specialist sides connect to the center, the sender acquires the patient data and transmits it to the center over the TCP/IP protocol. Data transmitted to the center is copied from the transmission buffer according to the number of listeners and retransmitted to each specialist side that has established a connection. When medical information is transmitted over a network, transmission delay and loss occur with changes in buffer size, packet size, number of users, and type of network. As the largest delay possibility lies in ADSL, the buffer size should first be set to 1 Mbyte to minimize transmission delay, and each packet's size must be set according to the MTU size in order to maximize network efficiency. Also, the number of listeners should be limited to fewer than 6. In the experimental results, data transmission proceeded smoothly over all of the commonly used networks (ADSL, VDSL, WLAN, and LAN), but the possibility of delay was greatest in ADSL, which has the most constrained bandwidth. To minimize the possibility of delay, adjustments such as buffer size, number of receivers, and packet size are needed.
Applications of satellite technology to broadband ISDN networks
NASA Technical Reports Server (NTRS)
Price, Kent M.; Kwan, Robert K.; Chitre, D. M.; Henderson, T. R.; White, L. W.; Morgan, W. L.
1992-01-01
Two satellite architectures for delivering broadband integrated services digital network (B-ISDN) service are evaluated. The first is assumed integral to an existing terrestrial network, and provides complementary services such as interconnects to remote nodes as well as high-rate multicast and broadcast service. The interconnects are at a 155 Mb/s rate and are shown as being met with a nonregenerative multibeam satellite having ten 1.5-degree spots. The second satellite architecture focuses on providing private B-ISDN networks as well as acting as a gateway to the public network. This is conceived as being provided by a regenerative multibeam satellite with an on-board ATM (asynchronous transfer mode) processing payload. With up to 800 Mb/s offered, higher satellite EIRP is required. This is accomplished with twelve 0.4-degree hopping beams, covering a total of 110 dwell positions. It is estimated the space segment capital cost for architecture one would be about $190M whereas the second architecture would be about $250M. The net user cost is given for a variety of scenarios, but the cost for 155 Mb/s services is shown to be about $15-22/minute for 25 percent system utilization.
A Lightweight Protocol for Secure Video Streaming
Morkevicius, Nerijus; Bagdonas, Kazimieras
2018-01-01
The Internet of Things (IoT) introduces many new challenges which cannot be solved using traditional cloud and host computing models. A new architecture known as fog computing is emerging to address these technological and security gaps. Traditional security paradigms focused on providing perimeter-based protections and client/server point to point protocols (e.g., Transport Layer Security (TLS)) are no longer the best choices for addressing new security challenges in fog computing end devices, where energy and computational resources are limited. In this paper, we present a lightweight secure streaming protocol for the fog computing “Fog Node-End Device” layer. This protocol is lightweight, connectionless, supports broadcast and multicast operations, and is able to provide data source authentication, data integrity, and confidentiality. The protocol is based on simple and energy efficient cryptographic methods, such as Hash Message Authentication Codes (HMAC) and symmetrical ciphers, and uses modified User Datagram Protocol (UDP) packets to embed authentication data into streaming data. Data redundancy could be added to improve reliability in lossy networks. The experimental results summarized in this paper confirm that the proposed method efficiently uses energy and computational resources and at the same time provides security properties on par with the Datagram TLS (DTLS) standard. PMID:29757988
A Lightweight Protocol for Secure Video Streaming.
Venčkauskas, Algimantas; Morkevicius, Nerijus; Bagdonas, Kazimieras; Damaševičius, Robertas; Maskeliūnas, Rytis
2018-05-14
The Internet of Things (IoT) introduces many new challenges which cannot be solved using traditional cloud and host computing models. A new architecture known as fog computing is emerging to address these technological and security gaps. Traditional security paradigms focused on providing perimeter-based protections and client/server point to point protocols (e.g., Transport Layer Security (TLS)) are no longer the best choices for addressing new security challenges in fog computing end devices, where energy and computational resources are limited. In this paper, we present a lightweight secure streaming protocol for the fog computing "Fog Node-End Device" layer. This protocol is lightweight, connectionless, supports broadcast and multicast operations, and is able to provide data source authentication, data integrity, and confidentiality. The protocol is based on simple and energy efficient cryptographic methods, such as Hash Message Authentication Codes (HMAC) and symmetrical ciphers, and uses modified User Datagram Protocol (UDP) packets to embed authentication data into streaming data. Data redundancy could be added to improve reliability in lossy networks. The experimental results summarized in this paper confirm that the proposed method efficiently uses energy and computational resources and at the same time provides security properties on par with the Datagram TLS (DTLS) standard.
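The core idea of embedding authentication data into streaming datagrams with a pre-shared symmetric key can be sketched as follows; the field layout (4-byte sequence number + payload + truncated HMAC-SHA256 tag), the key, and the tag length are assumptions for illustration, not the protocol's exact packet format.

```python
# Minimal sketch of HMAC-authenticated streaming datagrams with a pre-shared key.
import hmac, hashlib, struct

PSK = b"pre-shared-key-established-out-of-band"   # hypothetical pre-shared key
TAG_LEN = 16                                      # truncated tag to save bandwidth

def make_packet(seq: int, payload: bytes) -> bytes:
    header = struct.pack("!I", seq)               # network byte order sequence number
    tag = hmac.new(PSK, header + payload, hashlib.sha256).digest()[:TAG_LEN]
    return header + payload + tag

def verify_packet(packet: bytes):
    header, payload, tag = packet[:4], packet[4:-TAG_LEN], packet[-TAG_LEN:]
    expected = hmac.new(PSK, header + payload, hashlib.sha256).digest()[:TAG_LEN]
    if not hmac.compare_digest(tag, expected):
        return None                               # drop forged or corrupted datagrams
    return struct.unpack("!I", header)[0], payload

pkt = make_packet(42, b"\x00\x01video-frame-chunk")
print(verify_packet(pkt))                          # (42, b'\x00\x01video-frame-chunk')
print(verify_packet(pkt[:-1] + bytes([pkt[-1] ^ 1])))   # None: tampered tag fails the check
```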
Chip-set for quality of service support in passive optical networks
NASA Astrophysics Data System (ADS)
Ringoot, Edwin; Hoebeke, Rudy; Slabbinck, B. Hans; Verhaert, Michel
1998-10-01
In this paper the design of a chip-set for QoS provisioning in ATM-based Passive Optical Networks is discussed. The implementation of a general-purpose switch chip on the Optical Network Unit is presented, with focus on the design of the cell scheduling and buffer management logic. The cell scheduling logic supports `colored' grants, priority jumping and weighted round-robin scheduling. The switch chip offers powerful buffer management capabilities enabling the efficient support of GFR and UBR services. Multicast forwarding is also supported. In addition, the architecture of a MAC controller chip developed for a SuperPON access network is introduced. In particular, the permit scheduling logic and its implementation on the Optical Line Termination will be discussed. The chip-set enables the efficient support of services with different service requirements on the SuperPON. The permit scheduling logic built into the MAC controller chip in combination with the cell scheduling and buffer management capabilities of the switch chip can be used by network operators to offer guaranteed service performance to delay sensitive services, and to efficiently and fairly distribute any spare capacity to delay insensitive services.
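A minimal sketch of the weighted round-robin element of such cell scheduling is given below; the service classes and weights are hypothetical, and the chip's colored grants and priority jumping are not modeled.

```python
# Minimal sketch of weighted round-robin cell scheduling over per-class queues.
from collections import deque

queues = {
    "guaranteed": deque(f"G{i}" for i in range(5)),
    "gfr":        deque(f"F{i}" for i in range(5)),
    "ubr":        deque(f"U{i}" for i in range(5)),
}
weights = {"guaranteed": 3, "gfr": 2, "ubr": 1}    # cells served per scheduling round

def wrr_round():
    """Serve up to `weight` cells from each non-empty queue, in a fixed class order."""
    served = []
    for cls, w in weights.items():
        for _ in range(w):
            if queues[cls]:
                served.append(queues[cls].popleft())
    return served

print(wrr_round())   # ['G0', 'G1', 'G2', 'F0', 'F1', 'U0']
print(wrr_round())   # ['G3', 'G4', 'F2', 'F3', 'U1']
```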
The formation of cosmic structure in a texture-seeded cold dark matter cosmogony
NASA Technical Reports Server (NTRS)
Gooding, Andrew K.; Park, Changbom; Spergel, David N.; Turok, Neil; Gott, Richard, III
1992-01-01
The growth of density fluctuations induced by global texture in an Omega = 1 cold dark matter (CDM) cosmogony is calculated. The resulting power spectra are in good agreement with each other, with more power on large scales than in the standard inflation plus CDM model. Calculation of related statistics (two-point correlation functions, mass variances, cosmic Mach number) indicates that the texture plus CDM model compares more favorably than standard CDM with observations of large-scale structure. Texture produces coherent velocity fields on large scales, as observed. Excessive small-scale velocity dispersions, and voids less empty than those observed may be remedied by including baryonic physics. The topology of the cosmic structure agrees well with observation. The non-Gaussian texture induced density fluctuations lead to earlier nonlinear object formation than in Gaussian models and may also be more compatible with recent evidence that the galaxy density field is non-Gaussian on large scales. On smaller scales the density field is strongly non-Gaussian, but this appears to be primarily due to nonlinear gravitational clustering. The velocity field on smaller scales is surprisingly Gaussian.
NASA Astrophysics Data System (ADS)
Thorslund, J.; Jarsjo, J.; Destouni, G.
2017-12-01
The quality of freshwater resources is increasingly impacted by human activities. Humans also extensively change the structure of landscapes, which may alter natural hydrological processes. To manage and maintain freshwater of good water quality, it is critical to understand how pollutants are released into, transported and transformed within the hydrological system. Some key scientific questions include: What are the net downstream impacts of pollutants across different hydroclimatic and human disturbance conditions, and on different scales? What are the functions within and between components of the landscape, such as wetlands, in mitigating pollutant load delivery to downstream recipients? We explore these questions by synthesizing results from several relevant case study examples of intensely human-impacted hydrological systems. These case study sites have been specifically evaluated in terms of the net impact of human activities on pollutant input to the aquatic system, as well as flow-path distributions through wetlands as a potential ecosystem service of pollutant mitigation. Results show that although individual wetlands have high retention capacity, efficient net retention effects were not always achieved at a larger landscape scale. Evidence suggests that the function of wetlands as mitigation solutions to pollutant loads is largely controlled by large-scale parallel and circular flow-paths, through which multiple wetlands are interconnected in the landscape. To achieve net mitigation effects at large scale, a large fraction of the polluted large-scale flows must be transported through multiple connected wetlands. Although such large-scale flow interactions are critical for assessing water pollution spreading and fate through the landscape, our synthesis shows a frequent lack of knowledge at such scales. We suggest ways forward for addressing the mismatch between the large scales at which key pollutant pressures and water quality changes take place and the relatively small scale at which most studies and implementations are currently made. These suggestions can help bridge critical knowledge gaps, as needed for improving water quality predictions and mitigation solutions under human and environmental changes.
Effects of large-scale wind driven turbulence on sound propagation
NASA Technical Reports Server (NTRS)
Noble, John M.; Bass, Henry E.; Raspet, Richard
1990-01-01
Acoustic measurements made in the atmosphere have shown significant fluctuations in amplitude and phase resulting from the interaction with time-varying meteorological conditions. The observed variations appear to have short-term and long-term (1 to 5 minute) components, at least in the phase of the acoustic signal. One possible way to account for this long-term variation is the use of a large-scale wind-driven turbulence model. From a Fourier analysis of the phase variations, the outer scales for the large-scale turbulence are 200 meters and greater, which corresponds to turbulence in the energy-containing subrange. The large-scale turbulence is assumed to consist of elongated longitudinal vortex pairs roughly aligned with the mean wind. Due to the size of the vortex pair compared to the scale of the present experiment, the effect of the vortex pair on the acoustic field can be modeled as the sound speed of the atmosphere varying with time. The model provides results with the same trends and variations in phase observed experimentally.
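As an illustration of the kind of analysis the abstract describes, the sketch below estimates an outer length scale from the low-frequency content of a measured phase time series, converting a corner frequency to a length with Taylor's frozen-turbulence hypothesis (L ~ U / f). The signal, sampling rate, and mean wind speed are placeholders, not values from the experiment.

```python
import numpy as np

def outer_scale_from_phase(phase, fs, mean_wind):
    """Estimate a turbulence outer scale from acoustic phase fluctuations.

    phase     : 1-D array of unwrapped phase samples [rad]
    fs        : sampling rate [Hz]
    mean_wind : mean wind speed [m/s], used with Taylor's hypothesis
    """
    phase = phase - phase.mean()
    freqs = np.fft.rfftfreq(phase.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(phase)) ** 2

    # Crude corner-frequency estimate: lowest frequency below which
    # half of the phase-fluctuation energy is contained.
    cumulative = np.cumsum(psd[1:])           # skip the DC bin
    f_corner = freqs[1:][np.searchsorted(cumulative, 0.5 * cumulative[-1])]

    # Taylor's frozen-turbulence hypothesis: eddies advect with the mean wind,
    # so a temporal corner frequency maps to a spatial scale L ~ U / f.
    return mean_wind / f_corner

# Synthetic example: slow (minutes-scale) phase wander plus fast jitter.
fs = 10.0                                   # sampling rate [Hz] (placeholder)
t = np.arange(0, 1800.0, 1.0 / fs)          # 30 minutes of data
phase = 2.0 * np.sin(2 * np.pi * t / 180.0) + 0.1 * np.random.randn(t.size)
print("outer scale ~ %.0f m" % outer_scale_from_phase(phase, fs, mean_wind=5.0))
```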
Latest COBE results, large-scale data, and predictions of inflation
NASA Technical Reports Server (NTRS)
Kashlinsky, A.
1992-01-01
One of the predictions of the inflationary scenario of cosmology is that the initial spectrum of primordial density fluctuations (PDFs) must have the Harrison-Zeldovich (HZ) form. Here, in order to test the inflationary scenario, predictions of the microwave background radiation (MBR) anisotropies measured by COBE are computed based on large-scale data for the universe, assuming Omega = 1 and the HZ spectrum on large scales. The minimal scale at which the spectrum can first enter the HZ regime is found, constraining the power spectrum of the mass distribution to within the bias factor b. This factor is determined and used to predict parameters of the MBR anisotropy field. For a spectrum of PDFs that reaches the HZ regime immediately beyond the scales accessible to the APM catalog, the predicted MBR anisotropies are consistent with the COBE detections, and thus standard inflation can indeed be considered a viable theory for the origin of the large-scale structure in the universe.
Liu, Ke; Zhang, Jian; Bao, Jie
2015-11-01
A two-stage hydrolysis of corn stover was designed to resolve the conflict between sufficient mixing at high solids content and the high power input this requires in large-scale bioreactors. The process starts with a quick liquefaction that converts solid cellulose into a liquid slurry under strong mixing in small reactors, followed by a comprehensive hydrolysis that completes saccharification into fermentable sugars in large reactors without agitation apparatus. Removing the mixing apparatus from the large-scale vessels saved 60% of the mixing energy consumption. The scale-up ratio was small for the first-step hydrolysis reactors because of the reduced reactor volume. For the large saccharification reactors in the second step, scale-up was easy because no mixing mechanism was involved. This two-stage hydrolysis is applicable to either simple hydrolysis or combined fermentation processes. The method provides a practical process option for industrial-scale biorefinery processing of lignocellulosic biomass. Copyright © 2015 Elsevier Ltd. All rights reserved.
Homogenization of Large-Scale Movement Models in Ecology
Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.
2011-01-01
A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
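The distinction the abstract draws between ecological diffusion (movement driven by local motility, not gradients) and Fickian diffusion is easy to see in one dimension: the ecological form reads du/dt = d^2[mu(x) u]/dx^2, while the Fickian form is du/dt = d/dx[mu(x) du/dx]. The sketch below contrasts the two operators with a minimal explicit finite-difference step on a piecewise-constant habitat; it illustrates only the operators, not the paper's homogenization procedure, and the grid and motility values are placeholders.

```python
import numpy as np

# 1-D habitat: fast movement (high motility) on the left, slow on the right.
nx, dx, dt = 200, 1.0, 0.1
mu = np.where(np.arange(nx) < nx // 2, 2.0, 0.2)    # motility mu(x)

def step_ecological(u):
    # Ecological diffusion: u_t = (mu * u)_xx  -- Laplacian of the *product*.
    w = mu * u
    u_new = u.copy()
    u_new[1:-1] += dt / dx**2 * (w[2:] - 2 * w[1:-1] + w[:-2])
    return u_new

def step_fickian(u):
    # Fickian diffusion: u_t = (mu * u_x)_x  -- flux proportional to the gradient.
    flux = mu[:-1] * (u[1:] - u[:-1]) / dx           # flux at interfaces (approximate)
    u_new = u.copy()
    u_new[1:-1] += dt / dx * (flux[1:] - flux[:-1])
    return u_new

# Both populations start uniform (u = 1 everywhere); boundary cells are held fixed.
u_eco, u_fick = np.ones(nx), np.ones(nx)
for _ in range(2000):
    u_eco, u_fick = step_ecological(u_eco), step_fickian(u_fick)

# Ecological diffusion moves mass from high-motility into low-motility habitat
# (u tends toward 1/mu), so density drops just left of the interface and rises
# just right of it; the Fickian field stays uniform because no gradient ever forms.
print(u_eco[nx // 2 - 5], u_eco[nx // 2 + 5])
print(u_fick[nx // 2 - 5], u_fick[nx // 2 + 5])
```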
NASA Astrophysics Data System (ADS)
Wu, Qiujie; Tan, Liu; Xu, Sen; Liu, Dabin; Min, Li
2018-04-01
Numerous accidents involving emulsion explosive (EE) are attributed to uncontrolled thermal decomposition of ammonium nitrate emulsion (ANE, the intermediate of EE) and EE at large scale. In order to study the thermal decomposition characteristics of ANE and EE at different scales, a large-scale test, the modified vented pipe test (MVPT), and two laboratory-scale tests, differential scanning calorimetry (DSC) and accelerating rate calorimetry (ARC), were applied in the present study. Both the scale effect and the water effect play an important role in the thermal stability of ANE and EE. The measured decomposition temperatures of ANE and EE in the MVPT are 146°C and 144°C, respectively, much lower than those in DSC and ARC. As the size of the same sample in DSC, ARC, and MVPT successively increases, the onset temperatures decrease. In the same test, the measured onset temperature of ANE is higher than that of EE. The water content of the sample stabilizes the sample. The large-scale MVPT can provide information relevant to real-life operations. Large-scale operations carry more risk, and continuous overheating should be avoided.
Large scale modulation of high frequency acoustic waves in periodic porous media.
Boutin, Claude; Rallu, Antoine; Hans, Stephane
2012-12-01
This paper deals with the description of the modulation at large scale of high frequency acoustic waves in gas saturated periodic porous media. High frequencies mean local dynamics at the pore scale and therefore absence of scale separation in the usual sense of homogenization. However, although the pressure is spatially varying in the pores (according to periodic eigenmodes), the mode amplitude can present a large scale modulation, thereby introducing another type of scale separation to which the asymptotic multi-scale procedure applies. The approach is first presented on a periodic network of inter-connected Helmholtz resonators. The equations governing the modulations carried by periodic eigenmodes, at frequencies close to their eigenfrequency, are derived. The number of cells on which the carrying periodic mode is defined is therefore a parameter of the modeling. In a second part, the asymptotic approach is developed for periodic porous media saturated by a perfect gas. Using the "multicells" periodic condition, one obtains the family of equations governing the amplitude modulation at large scale of high frequency waves. The significant differences between modulations of simple and multiple modes are evidenced and discussed. The features of the modulation (anisotropy, width of frequency band) are also analyzed.
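For orientation, the elementary building block of the abstract's periodic network is the Helmholtz resonator, whose eigenfrequency follows from the neck acting as an oscillating mass and the cavity as a spring: f0 = (c / 2 pi) * sqrt(S / (V * L_eff)). The sketch below evaluates this for placeholder geometry; it does not reproduce the paper's multi-cell modulation equations, and the end-correction factor is a common textbook choice rather than a value from the paper.

```python
import numpy as np

def helmholtz_frequency(c, neck_area, cavity_volume, neck_length, neck_radius):
    """Eigenfrequency of a single Helmholtz resonator.

    The air plug in the neck is the oscillating mass, the cavity the spring:
        f0 = (c / 2*pi) * sqrt(S / (V * L_eff)),
    with an end-corrected neck length (L_eff ~ L + 1.7*a is a common choice).
    """
    l_eff = neck_length + 1.7 * neck_radius        # approximate end correction
    return c / (2.0 * np.pi) * np.sqrt(neck_area / (cavity_volume * l_eff))

# Placeholder geometry (not from the paper): 1 L cavity, 5 cm long neck of 1 cm radius.
c = 343.0                                          # sound speed in air [m/s]
a = 0.01                                           # neck radius [m]
f0 = helmholtz_frequency(c, np.pi * a**2, 1e-3, 0.05, a)
print("resonator eigenfrequency ~ %.0f Hz" % f0)
```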
Anisotropies of the cosmic microwave background in nonstandard cold dark matter models
NASA Technical Reports Server (NTRS)
Vittorio, Nicola; Silk, Joseph
1992-01-01
Small angular scale cosmic microwave anisotropies in flat, vacuum-dominated, cold dark matter cosmological models which fit large-scale structure observations and are consistent with a high value for the Hubble constant are reexamined. New predictions for CDM models in which the large-scale power is boosted via a high baryon content and low H(0) are presented. Both classes of models are consistent with current limits: an improvement in sensitivity by a factor of about 3 for experiments which probe angular scales between 7 arcmin and 1 deg is required, in the absence of very early reionization, to test boosted CDM models for large-scale structure formation.
Performance of lap splices in large-scale column specimens affected by ASR and/or DEF.
DOT National Transportation Integrated Search
2012-06-01
This research program conducted a large experimental program, which consisted of the design, construction, curing, deterioration, and structural load testing of 16 large-scale column specimens with a critical lap splice region, and then compared ...
Large Scale Survey Data in Career Development Research
ERIC Educational Resources Information Center
Diemer, Matthew A.
2008-01-01
Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…
On the limitations of General Circulation Climate Models
NASA Technical Reports Server (NTRS)
Stone, Peter H.; Risbey, James S.
1990-01-01
General Circulation Models (GCMs) by definition calculate large-scale dynamical and thermodynamical processes and their associated feedbacks from first principles. This aspect of GCMs is widely believed to give them an advantage in simulating global-scale climate changes as compared to simpler models which do not calculate the large-scale processes from first principles. However, it is pointed out that the meridional transports of heat simulated by GCMs used in climate change experiments differ from observational analyses and from other GCMs by as much as a factor of two. It is also demonstrated that GCM simulations of the large-scale transports of heat are sensitive to the (uncertain) subgrid-scale parameterizations. This leads to the question of whether current GCMs are in fact superior to simpler models for simulating temperature changes associated with global-scale climate change.
NASA Astrophysics Data System (ADS)
Tsai, Kuang-Jung; Chiang, Jie-Lun; Lee, Ming-Hsi; Chen, Yie-Ruey
2017-04-01
Analysis on the Critical Rainfall Value for Predicting Large Scale Landslides Caused by Heavy Rainfall in Taiwan. Kuang-Jung Tsai 1, Jie-Lun Chiang 2, Ming-Hsi Lee 2, Yie-Ruey Chen 1. 1 Department of Land Management and Development, Chang Jung Christian University, Tainan, Taiwan. 2 Department of Soil and Water Conservation, National Pingtung University of Science and Technology, Pingtung, Taiwan. ABSTRACT: An accumulated rainfall amount of more than 2,900 mm was brought by Typhoon Morakot within 3 continuous days in August 2009. Very serious landslides and sediment-related disasters were induced by this heavy rainfall event. The satellite image analysis project conducted by the Soil and Water Conservation Bureau after the Morakot event identified more than 10,904 landslide sites with a total sliding area of 18,113 ha. At the same time, all severe sediment-related disaster areas were characterized by disaster type, scale, topography, major bedrock formations, and geologic structures during the period of extremely heavy rainfall events in southern Taiwan. Characteristics and mechanisms of large-scale landslides were compiled on the basis of field investigation technology integrated with GPS/GIS/RS techniques. In order to decrease the risk of large-scale landslides on slope land, a slope-land conservation strategy and a critical rainfall database should be established and implemented as soon as possible. Meanwhile, establishing a critical rainfall value for predicting large-scale landslides induced by heavy rainfall has become an important issue of serious concern to the government and the people of Taiwan. The mechanisms of large-scale landslides, rainfall frequency analysis, sediment budget estimation, and river hydraulic analysis under extreme climate change during the past 10 years are addressed by this research. The results developed from this research can hopefully be used in a warning system for predicting large-scale landslides in southern Taiwan. Keywords: heavy rainfall, large-scale landslides, critical rainfall value
Beaglehole, Ben; Frampton, Chris M; Boden, Joseph M; Mulder, Roger T; Bell, Caroline J
2017-11-01
Following the onset of the Canterbury, New Zealand earthquakes, there were widespread concerns that mental health services were under severe strain as a result of adverse consequences on mental health. We therefore examined Health of the Nation Outcome Scales data to see whether this could inform our understanding of the impact of the Canterbury earthquakes on patients attending local specialist mental health services. Health of the Nation Outcome Scales admission data were analysed for Canterbury mental health services prior to and following the Canterbury earthquakes. These findings were compared to Health of the Nation Outcome Scales admission data from seven other large District Health Boards to delineate local from national trends. Percentage changes in admission numbers were also calculated before and after the earthquakes for Canterbury and the seven other large district health boards. Admission Health of the Nation Outcome Scales scores in Canterbury increased after the earthquakes for adult inpatient and community services, old age inpatient and community services, and Child and Adolescent inpatient services compared to the seven other large district health boards. Admission Health of the Nation Outcome Scales scores for Child and Adolescent community services did not change significantly, while admission Health of the Nation Outcome Scales scores for Alcohol and Drug services in Canterbury fell compared to other large district health boards. Subscale analysis showed that the majority of Health of the Nation Outcome Scales subscales contributed to the overall increases found. Percentage changes in admission numbers for the Canterbury District Health Board and the seven other large district health boards before and after the earthquakes were largely comparable with the exception of admissions to inpatient services for the group aged 4-17 years which showed a large increase. The Canterbury earthquakes were followed by an increase in Health of the Nation Outcome Scales scores for attendees of local mental health services compared to other large district health boards. This suggests that patients presented with greater degrees of psychiatric distress, social disruption, behavioural change and impairment as a result of the earthquakes.
2016-08-10
AFRL-AFOSR-JP-TR-2016-0073: Large-scale Linear Optimization through Machine Learning: From Theory to Practical System Design and Implementation (2016). Abstract fragment: ...performances on various machine learning tasks and it naturally lends itself to fast parallel implementations. Despite this, very little work has been ...
Mapping the integrated Sachs-Wolfe effect
NASA Astrophysics Data System (ADS)
Manzotti, A.; Dodelson, S.
2014-12-01
On large scales, the anisotropies in the cosmic microwave background (CMB) reflect not only the primordial density field but also the energy gain when photons traverse decaying gravitational potentials of large scale structure, the so-called integrated Sachs-Wolfe (ISW) effect. Decomposing the anisotropy signal into a primordial piece and an ISW component, the main secondary effect on large scales, is more urgent than ever as cosmologists strive to understand the Universe on those scales. We present a likelihood technique for extracting the ISW signal combining measurements of the CMB, the distribution of galaxies, and maps of gravitational lensing. We test this technique with simulated data, showing that we can successfully reconstruct the ISW map using all the data sets together. Then we present the ISW map obtained from a combination of real data: the NRAO VLA Sky Survey (NVSS) galaxy survey, and temperature anisotropies and lensing maps made by the Planck satellite. This map shows that, with the data sets used and assuming linear physics, there is no evidence from the reconstructed ISW signal in the Cold Spot region for an entirely ISW origin of this large-scale anomaly in the CMB. However, a large-scale structure origin from low-redshift voids outside the NVSS redshift range is still possible. Finally, we show that future surveys, thanks to better large-scale lensing reconstruction, will be able to improve the reconstruction signal-to-noise, which now comes mainly from galaxy surveys.
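A minimal sketch of the kind of linear estimator underlying such a reconstruction: given a tracer map correlated with the ISW signal (for example galaxy counts), the minimum-variance estimate of each harmonic mode is the tracer mode scaled by the cross-spectrum over the tracer auto-spectrum. The spectra and mode counts below are synthetic placeholders, not the NVSS or Planck data, and combining several tracers plus lensing, as in the paper, generalizes this single-tracer step to a joint likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)
n_modes = 10000

# Synthetic per-mode spectra (placeholders, not fits to real data):
C_ss = 1.0      # ISW auto-spectrum
C_gg = 4.0      # galaxy auto-spectrum
C_sg = 1.2      # ISW-galaxy cross-spectrum (must satisfy C_sg^2 <= C_ss * C_gg)

# Draw correlated Gaussian modes: s = ISW, g = galaxy tracer.
s = rng.normal(0.0, np.sqrt(C_ss), n_modes)
g = (C_sg / C_ss) * s + rng.normal(0.0, np.sqrt(C_gg - C_sg**2 / C_ss), n_modes)

# Wiener-like minimum-variance estimate of the ISW modes from the tracer alone.
s_hat = (C_sg / C_gg) * g

corr = np.corrcoef(s, s_hat)[0, 1]
print("reconstruction correlation ~ %.2f (ideal: %.2f)"
      % (corr, C_sg / np.sqrt(C_ss * C_gg)))
```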
NASA Astrophysics Data System (ADS)
Best, J.
2004-05-01
The origin and scaling of large-scale coherent flow structures has been of central interest in furthering understanding of the nature of turbulent boundary layers, and recent work has shown the presence of large-scale turbulent flow structures that may extend through the whole flow depth. Such structures may dominate the entrainment of bedload sediment and the advection of fine sediment in suspension. However, we still know remarkably little about the interactions between the dynamics of coherent flow structures and sediment transport, and their implications for ecosystem dynamics. This paper will discuss the first results of two-phase particle imaging velocimetry (PIV) that has been used to visualize large-scale turbulent flow structures moving over a flat bed in a water channel, and the motion of sand particles within these flows. The talk will outline the methodology, involving the fluorescent tagging of sediment and its discrimination from the fluid phase, and show results that illustrate the key role of these large-scale structures in the transport of sediment. Additionally, the presence of these structures will be discussed in relation to the origin of vorticity within flat-bed boundary layers and recent models that envisage these large-scale motions as being linked to whole-flow-field structures. Discussion will focus on whether these recent models simply reflect the organization of turbulent boundary layer structure and vortex packets, some of which are amply visualised at the laminar-turbulent transition.
NASA Technical Reports Server (NTRS)
Corke, T. C.; Guezennec, Y.; Nagib, H. M.
1981-01-01
The effects of placing a parallel-plate turbulence manipulator in a boundary layer are documented through flow visualization and hot wire measurements. The boundary layer manipulator was designed to manage the large scale structures of turbulence leading to a reduction in surface drag. The differences in the turbulent structure of the boundary layer are summarized to demonstrate differences in various flow properties. The manipulator inhibited the intermittent large scale structure of the turbulent boundary layer for at least 70 boundary layer thicknesses downstream. With the removal of the large scale, the streamwise turbulence intensity levels near the wall were reduced. The downstream distribution of the skin friction was also altered by the introduction of the manipulator.
Using the High-Level Based Program Interface to Facilitate the Large Scale Scientific Computing
Shang, Yizi; Shang, Ling; Gao, Chuanchang; Lu, Guiming; Ye, Yuntao; Jia, Dongdong
2014-01-01
This paper extends research on facilitating large-scale scientific computing on grid and desktop grid platforms. The related issues include the programming method, the overhead of middleware based on a high-level program interface, and data anticipation migration. The block-based Gauss-Jordan algorithm, a real example of large-scale scientific computing, is used to evaluate the issues presented above. The results show that the high-level program interface makes complex scientific applications on a large-scale scientific platform easier to develop, though a little overhead is unavoidable. Also, the data anticipation migration mechanism can improve the efficiency of the platform when processing big-data-based scientific applications. PMID:24574931
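For context on the benchmark kernel, a serial sketch of blockwise matrix inversion via the Schur complement, the 2x2-block building step that block Gauss-Jordan schemes repeat over a block grid, is given below. It only illustrates the arithmetic; the paper's grid middleware, high-level programming interface, and data migration are not modeled, and the test matrix is arbitrary.

```python
import numpy as np

def block_invert(A, k):
    """Invert A by partitioning it into a 2x2 block form with a leading k x k block.

    [A11 A12]^-1   [A11^-1 + A11^-1 A12 S^-1 A21 A11^-1   -A11^-1 A12 S^-1]
    [A21 A22]    = [            -S^-1 A21 A11^-1                   S^-1   ]
    with the Schur complement S = A22 - A21 A11^-1 A12.
    """
    A11, A12 = A[:k, :k], A[:k, k:]
    A21, A22 = A[k:, :k], A[k:, k:]

    A11_inv = np.linalg.inv(A11)                    # base step (or recurse on blocks)
    S = A22 - A21 @ A11_inv @ A12                   # Schur complement
    S_inv = np.linalg.inv(S)

    top_left = A11_inv + A11_inv @ A12 @ S_inv @ A21 @ A11_inv
    top_right = -A11_inv @ A12 @ S_inv
    bottom_left = -S_inv @ A21 @ A11_inv
    return np.block([[top_left, top_right], [bottom_left, S_inv]])

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8)) + 8 * np.eye(8)     # well-conditioned test matrix
print(np.allclose(block_invert(A, k=4), np.linalg.inv(A)))   # expect True
```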
The role of large scale motions on passive scalar transport
NASA Astrophysics Data System (ADS)
Dharmarathne, Suranga; Araya, Guillermo; Tutkun, Murat; Leonardi, Stefano; Castillo, Luciano
2014-11-01
We study direct numerical simulation (DNS) of turbulent channel flow at Reτ = 394 to investigate the effect of large-scale motions on the fluctuating temperature field, which forms a passive scalar field. A statistical description of the large-scale features of the turbulent channel flow is obtained using two-point correlations of velocity components. Two-point correlations of the fluctuating temperature field are also examined in order to identify possible similarities between the velocity and temperature fields. The two-point cross-correlations between the velocity and temperature fluctuations are further analyzed to establish connections between these two fields. In addition, we use proper orthogonal decomposition (POD) to extract the most dominant modes of the fields and discuss the coupling of large-scale features of turbulence and the temperature field.
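As an illustration of the decomposition step, the snapshot POD used in analyses like this reduces to an SVD of the mean-subtracted snapshot matrix. The sketch below runs on synthetic data and is not tied to the Reτ = 394 DNS fields; the grid, time axis, and field construction are placeholders.

```python
import numpy as np

def snapshot_pod(snapshots):
    """Snapshot POD via SVD.

    snapshots : array of shape (n_points, n_snapshots), one flow field per column.
    Returns spatial modes (columns), singular values, and temporal coefficients.
    """
    mean_field = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean_field                      # fluctuating part
    modes, sing_vals, vt = np.linalg.svd(X, full_matrices=False)
    coeffs = np.diag(sing_vals) @ vt                # temporal coefficients
    return modes, sing_vals, coeffs

# Synthetic data: two coherent "large-scale" patterns plus noise.
rng = np.random.default_rng(2)
x = np.linspace(0, 2 * np.pi, 256)[:, None]
t = np.linspace(0, 50, 400)[None, :]
field = (np.sin(x) * np.cos(0.5 * t)
         + 0.3 * np.sin(3 * x) * np.cos(2.0 * t)
         + 0.05 * rng.standard_normal((256, 400)))

modes, sing_vals, coeffs = snapshot_pod(field)
energy = sing_vals**2 / np.sum(sing_vals**2)
print("energy captured by first two modes: %.2f" % energy[:2].sum())
```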
A simple model of intraseasonal oscillations
NASA Astrophysics Data System (ADS)
Fuchs, Željka; Raymond, David J.
2017-06-01
The intraseasonal oscillations and in particular the MJO have been and still remain a "holy grail" of today's atmospheric science research. Why does the MJO propagate eastward? What makes it unstable? What is the scaling for the MJO, i.e., why does it prefer long wavelengths or planetary wave numbers 1-3? What is the westward moving component of the intraseasonal oscillation? Though linear WISHE has long been discounted as a plausible model for intraseasonal oscillations and the MJO, the version we have developed explains many of the observed features of those phenomena, in particular, the preference for large zonal scale. In this model version, the moisture budget and the increase of precipitation with tropospheric humidity lead to a "moisture mode." The destabilization of the large-scale moisture mode occurs via WISHE only and there is no need to postulate large-scale radiatively induced instability or negative effective gross moist stability. Our WISHE-moisture theory leads to a large-scale unstable eastward propagating mode in n = -1 case and a large-scale unstable westward propagating mode in n = 1 case. We suggest that the n = -1 case might be connected to the MJO and the observed westward moving disturbance to the observed equatorial Rossby mode.
Cross-indexing of binary SIFT codes for large-scale image search.
Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi
2014-05-01
In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost for storage. Besides, it benefits computational efficiency since similarity can be efficiently measured by Hamming distance. In this paper, we propose a novel flexible scale invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised, and the generated binary codes are demonstrated to be distance-preserving. Besides, we propose a new searching strategy to find target features based on cross-indexing in the binary SIFT space and the original SIFT space. We evaluate our approach on two publicly released data sets. The experiments on a large-scale partial-duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.
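The storage and speed argument rests on comparing packed binary codes with Hamming distance. A minimal sketch of that matching step is shown below; it covers only the distance computation, not the FSB binarization or the cross-indexing structure themselves, and the code length and database size are placeholders.

```python
import numpy as np

def pack_codes(bits):
    """Pack an (n, n_bits) array of 0/1 values into bytes for compact storage."""
    return np.packbits(bits.astype(np.uint8), axis=1)

def hamming_search(query_packed, db_packed, k=5):
    """Return indices of the k database codes closest to the query in Hamming distance."""
    xor = np.bitwise_xor(db_packed, query_packed)            # differing bits, bytewise
    # Popcount per byte via an 8-bit lookup table, summed across bytes.
    popcount8 = np.array([bin(v).count("1") for v in range(256)], dtype=np.uint16)
    dists = popcount8[xor].sum(axis=1)
    return np.argsort(dists)[:k], np.sort(dists)[:k]

rng = np.random.default_rng(3)
db_bits = rng.integers(0, 2, size=(100000, 256), dtype=np.uint8)   # 100k 256-bit codes
db = pack_codes(db_bits)

# Query: a near-duplicate of database entry 42 with five bits flipped.
query_bits = db_bits[42].copy()
query_bits[:5] ^= 1
idx, d = hamming_search(pack_codes(query_bits[None, :]), db)
print(idx[0], d[0])                                          # expect 42 and distance 5
```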
The influence of large-scale wind power on global climate.
Keith, David W; Decarolis, Joseph F; Denkenberger, David C; Lenschow, Donald H; Malyshev, Sergey L; Pacala, Stephen; Rasch, Philip J
2004-11-16
Large-scale use of wind power can alter local and global climate by extracting kinetic energy and altering turbulent transport in the atmospheric boundary layer. We report climate-model simulations that address the possible climatic impacts of wind power at regional to global scales by using two general circulation models and several parameterizations of the interaction of wind turbines with the boundary layer. We find that very large amounts of wind power can produce nonnegligible climatic change at continental scales. Although large-scale effects are observed, wind power has a negligible effect on global-mean surface temperature, and it would deliver enormous global benefits by reducing emissions of CO2 and air pollutants. Our results may enable a comparison between the climate impacts due to wind power and the reduction in climatic impacts achieved by the substitution of wind for fossil fuels.