NASA Astrophysics Data System (ADS)
Wei, Chengying; Xiong, Cuilian; Liu, Huanlin
2017-12-01
Maximal multicast stream algorithms based on network coding (NC) can improve the throughput of wavelength-division multiplexing (WDM) networks, but the throughput they achieve remains far below the network's theoretical maximum. Moreover, existing multicast stream algorithms do not provide the stream distribution pattern and the routing at the same time. In this paper, an improved genetic algorithm is proposed to maximize the optical multicast throughput via NC and to determine the multicast stream distribution through hybrid chromosome construction for multicast with a single source and multiple destinations. The proposed hybrid chromosomes combine binary chromosomes and integer chromosomes: the binary chromosomes represent the optical multicast routing, while the integer chromosomes indicate the multicast stream distribution. A fitness function is designed to guarantee that each destination receives the maximum number of decodable multicast streams. Simulation results show that the proposed method is far superior to typical NC-based maximal multicast stream algorithms in terms of network throughput in WDM networks.
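As a rough illustration of the hybrid-chromosome idea described above, the sketch below (Python, with a toy link set and a placeholder fitness; not the paper's algorithm or topology) pairs a binary routing vector with an integer stream-assignment vector and evolves them together.

```python
import random

# Toy problem instance (hypothetical): candidate links of a small WDM topology
# and the maximum number of network-coded streams a link can carry.
NUM_LINKS = 12
MAX_STREAMS = 3
POP_SIZE = 30
GENERATIONS = 50

def random_individual():
    # Hybrid chromosome: binary genes select links for the multicast routing,
    # integer genes assign a stream count (0..MAX_STREAMS) to each link.
    routing = [random.randint(0, 1) for _ in range(NUM_LINKS)]
    streams = [random.randint(0, MAX_STREAMS) for _ in range(NUM_LINKS)]
    return routing, streams

def fitness(individual):
    # Placeholder fitness: reward streams carried on selected links and
    # penalize links that are selected but carry no stream.  A real fitness
    # would count decodable streams at each destination, as in the paper.
    routing, streams = individual
    carried = sum(s for r, s in zip(routing, streams) if r)
    wasted = sum(1 for r, s in zip(routing, streams) if r and s == 0)
    return carried - 2 * wasted

def crossover(a, b):
    # Single-point crossover applied to both chromosome parts at the same cut.
    cut = random.randint(1, NUM_LINKS - 1)
    return (a[0][:cut] + b[0][cut:], a[1][:cut] + b[1][cut:])

def mutate(ind, rate=0.1):
    routing, streams = ind
    routing = [1 - g if random.random() < rate else g for g in routing]
    streams = [random.randint(0, MAX_STREAMS) if random.random() < rate else g
               for g in streams]
    return routing, streams

population = [random_individual() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]          # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best routing:", best[0], "stream assignment:", best[1])
```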
Lightweight causal and atomic group multicast
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Schiper, Andre; Stephenson, Pat
1991-01-01
The ISIS toolkit is a distributed programming environment based on support for virtually synchronous process groups and group communication. A suite of protocols is presented to support this model. The approach revolves around a multicast primitive, called CBCAST, which implements a fault-tolerant, causally ordered message delivery. This primitive can be used directly or extended into a totally ordered multicast primitive, called ABCAST. It normally delivers messages immediately upon reception, and imposes a space overhead proportional to the size of the groups to which the sender belongs, usually a small number. It is concluded that process groups and group communication can achieve performance and scaling comparable to that of a raw message transport layer. This finding contradicts the widespread concern that this style of distributed computing may be unacceptably costly.
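Causal ordering of the kind CBCAST provides is commonly explained with vector timestamps, whose per-member counters also account for the space overhead proportional to group size. The sketch below is a generic vector-clock delivery check, illustrative only and not the ISIS implementation:

```python
# Generic vector-clock causal delivery (a sketch of the idea behind CBCAST,
# not the ISIS implementation).  Each process keeps one counter per group
# member, which is the source of the per-group space overhead.

class CausalReceiver:
    def __init__(self, my_id, group_size):
        self.my_id = my_id
        self.vc = [0] * group_size           # messages delivered per sender
        self.pending = []                    # messages buffered for causality

    def _deliverable(self, sender, ts):
        # Deliver m from `sender` only if it is the next message from that
        # sender and everything m causally depends on is already delivered.
        if ts[sender] != self.vc[sender] + 1:
            return False
        return all(ts[k] <= self.vc[k] for k in range(len(ts)) if k != sender)

    def receive(self, sender, ts, payload):
        self.pending.append((sender, ts, payload))
        delivered = []
        progress = True
        while progress:                      # flush newly deliverable messages
            progress = False
            for msg in list(self.pending):
                s, t, p = msg
                if self._deliverable(s, t):
                    self.vc[s] += 1
                    delivered.append(p)
                    self.pending.remove(msg)
                    progress = True
        return delivered

r = CausalReceiver(my_id=2, group_size=3)
print(r.receive(0, [1, 0, 0], "m1"))   # deliverable immediately -> ['m1']
print(r.receive(1, [1, 2, 0], "m3"))   # depends on m2 from sender 1 -> []
print(r.receive(1, [1, 1, 0], "m2"))   # unblocks m3 -> ['m2', 'm3']
```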
A decentralized software bus based on IP multicasting
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd
1995-01-01
We describe a decentralized, reconfigurable implementation of a conference management system based on the low-level Internet Protocol (IP) multicasting protocol. IP multicasting allows low-cost, world-wide, two-way transmission of data between large numbers of conferencing participants through the Multicasting Backbone (MBone). Each conference is structured as a software bus -- a messaging system that provides a run-time interconnection model that acts as a separate agent (i.e., the bus) for routing, queuing, and delivering messages between distributed programs. Unlike the client-server interconnection model, the software bus model provides a level of indirection that enhances the flexibility and reconfigurability of a distributed system. Current software bus implementations like POLYLITH, however, rely on a centralized bus process and point-to-point protocols (i.e., TCP/IP) to route, queue, and deliver messages. We implement a software bus called the MULTIBUS that relies on a separate process only for routing and uses a reliable IP multicasting protocol for delivery of messages. The use of multicasting means that interconnections are independent of IP machine addresses. This approach allows reconfiguration of bus participants during system execution without notifying other participants of new IP addresses. The use of IP multicasting also permits an economy of scale in the number of participants. We describe the MULTIBUS protocol elements and show how our implementation performs better than centralized bus implementations.
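For reference, the low-level facility such a bus builds on can be exercised directly with Python's standard socket API; the group address and port below are arbitrary illustrations, and the reliability layer that MULTIBUS adds is not modeled:

```python
import socket
import struct

GROUP = "239.1.2.3"   # arbitrary administratively-scoped multicast address
PORT = 5007

def make_receiver():
    # Join the multicast group; every bus participant listening on GROUP:PORT
    # receives each datagram, independent of the senders' IP addresses.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def send(message: bytes):
    # Any participant can publish to the bus without knowing who is listening.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # LAN scope
    sock.sendto(message, (GROUP, PORT))

if __name__ == "__main__":
    rx = make_receiver()
    send(b"hello bus")
    data, addr = rx.recvfrom(1500)
    print("received", data, "from", addr)
```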
NASA Technical Reports Server (NTRS)
Shyy, Dong-Jye; Redman, Wayne
1993-01-01
For the next-generation packet switched communications satellite system with onboard processing and spot-beam operation, a reliable onboard fast packet switch is essential to route packets from different uplink beams to different downlink beams. The rapid emergence of point-to-multipoint services such as video distribution, and the large demand for video conferencing, distributed data processing, and network management, make the multicast function essential to a fast packet switch (FPS). The satellite's inherent broadcast features give the satellite network an advantage over the terrestrial network in providing multicast services. This report evaluates alternate multicast FPS architectures for onboard baseband switching applications and selects a candidate for subsequent breadboard development. Architecture evaluation and selection will be based on the study performed in phase 1, 'Onboard B-ISDN Fast Packet Switching Architectures', and other switch architectures which have become commercially available as large scale integration (LSI) devices.
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Weaver, Alfred C.
1990-01-01
Multicast services needed for current distributed applications on LAN's fall generally into one of three categories: datagram, semi-reliable, and reliable. Transport layer multicast datagrams represent unreliable service in which the transmitting context 'fires and forgets'. XTP executes these semantics when the MULTI and NOERR mode bits are both set. Distributing sensor data and other applications in which application-level error recovery strategies are appropriate benefit from the efficiency in multidestination delivery offered by datagram service. Semi-reliable service refers to multicasting in which the control algorithms of the transport layer--error, flow, and rate control--are used in transferring the multicast distribution to the set of receiving contexts, the multicast group. The multicast defined in XTP provides semi-reliable service. Since, under a semi-reliable service, joining a multicast group means listening on the group address and entails no coordination with other members, a semi-reliable facility can be used for communication between a client and a server group as well as true peer-to-peer group communication. Resource location in a LAN is an important application domain. The term 'semi-reliable' refers to the fact that group membership changes go undetected. No attempt is made to assess the current membership of the group at any time--before, during, or after--the data transfer.
VMCast: A VM-Assisted Stability Enhancing Solution for Tree-Based Overlay Multicast
Gu, Weidong; Zhang, Xinchang; Gong, Bin; Zhang, Wei; Wang, Lu
2015-01-01
Tree-based overlay multicast is an effective group communication method for media streaming applications. However, a group member’s departure causes all of its descendants to be disconnected from the multicast tree for some time, which results in poor performance. This problem is difficult to address because the overlay multicast tree is intrinsically unstable. In this paper, we propose a novel stability enhancing solution, VMCast, for tree-based overlay multicast. This solution uses two types of on-demand cloud virtual machines (VMs), i.e., multicast VMs (MVMs) and compensation VMs (CVMs). MVMs are used to disseminate the multicast data, whereas CVMs are used to offer streaming compensation. The VMs in the same cloud datacenter constitute a VM cluster. Each VM cluster is responsible for a service domain (VMSD), and each group member belongs to a specific VMSD. The data source delivers the multicast data to MVMs through a reliable path, and MVMs further disseminate the data to group members along domain overlay multicast trees. This approach structurally improves the stability of the overlay multicast tree. We further utilize CVM-based streaming compensation to enhance the stability of the data distribution in the VMSDs. VMCast can be used as an extension to existing tree-based overlay multicast solutions, to provide better services for media streaming applications. We applied VMCast to two application instances (i.e., HMTP and HCcast). The results show that it noticeably enhances the stability of the data distribution. PMID:26562152
Enabling end-user network monitoring via the multicast consolidated proxy monitor
NASA Astrophysics Data System (ADS)
Kanwar, Anshuman; Almeroth, Kevin C.; Bhattacharyya, Supratik; Davy, Matthew
2001-07-01
The debugging of problems in IP multicast networks relies heavily on an eclectic set of stand-alone tools. These tools traditionally neither provide a consistent interface nor generate readily interpretable results. We propose the "Multicast Consolidated Proxy Monitor" (MCPM), an integrated system for collecting, analyzing and presenting multicast monitoring results to both the end user and the network operator at the user's Internet Service Provider (ISP). The MCPM accesses network state information not normally visible to end users and acts as a proxy for disseminating this information. Functionally, through this architecture, we aim to a) provide a view of the multicast network at varying levels of granularity, b) provide end users with a limited ability to query the multicast infrastructure in real time, and c) protect the infrastructure from an overwhelming monitoring load through load control. Operationally, our scheme allows scaling to the ISP's dimensions, adaptability to new protocols (introduced as multicast evolves), threshold detection for crucial parameters, and an access-controlled, customizable interface design. Although the multicast scenario is used to illustrate the benefits of consolidated monitoring, the ultimate aim is to scale the scheme to unicast IP networks.
Authenticated IGMP for Controlling Access to Multicast Distribution Tree
NASA Astrophysics Data System (ADS)
Park, Chang-Seop; Kang, Hyun-Sun
A receiver access control scheme is proposed to protect the multicast distribution tree from DoS attack induced by unauthorized use of IGMP, by extending the security-related functionality of IGMP. Based on a specific network and business model adopted for commercial deployment of IP multicast applications, a key management scheme is also presented for bootstrapping the proposed access control as well as accounting and billing for CP (Content Provider), NSP (Network Service Provider), and group members.
A proposed group management scheme for XTP multicast
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Weaver, Alfred C.
1990-01-01
The purpose of a group management scheme is to enable its associated transfer layer protocol to be responsive to user determined reliability requirements for multicasting. Group management (GM) must assist the client process in coordinating multicast group membership, allow the user to express the subset of the multicast group that a particular multicast distribution must reach in order to be successful (reliable), and provide the transfer layer protocol with the group membership information necessary to guarantee delivery to this subset. GM provides services and mechanisms that respond to the need of the client process or process level management protocols to coordinate, modify, and determine attributes of the multicast group, especially membership. XTP GM provides a link between process groups and their multicast groups by maintaining a group membership database that identifies members in a name space understood by the underlying transfer layer protocol. Other attributes of the multicast group useful to both the client process and the data transfer protocol may be stored in the database. Examples include the relative dispersion, most recent update, and default delivery parameters of a group.
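A minimal sketch of a membership database of the kind described (Python, with illustrative names and fields; not the XTP GM specification): it maps group names to transfer-layer addresses, records per-group attributes, and lets a client mark the subset a reliable multicast must reach.

```python
from dataclasses import dataclass, field

@dataclass
class GroupRecord:
    members: dict = field(default_factory=dict)       # member name -> transfer-layer address
    required: set = field(default_factory=set)        # subset that must receive for success
    attributes: dict = field(default_factory=dict)    # e.g. dispersion, default delivery params

class GroupManager:
    def __init__(self):
        self.groups = {}

    def join(self, group, member, address):
        self.groups.setdefault(group, GroupRecord()).members[member] = address

    def leave(self, group, member):
        rec = self.groups[group]
        rec.members.pop(member, None)
        rec.required.discard(member)

    def require(self, group, *members):
        # Client expresses the reliability subset for this multicast group.
        self.groups[group].required.update(members)

    def delivery_targets(self, group):
        # Information the transfer layer needs to guarantee delivery to the subset.
        rec = self.groups[group]
        return {m: rec.members[m] for m in rec.required if m in rec.members}

gm = GroupManager()
gm.join("sensors", "node-a", ("10.0.0.1", 4000))
gm.join("sensors", "node-b", ("10.0.0.2", 4000))
gm.require("sensors", "node-a")
print(gm.delivery_targets("sensors"))   # {'node-a': ('10.0.0.1', 4000)}
```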
Space Flight Middleware: Remote AMS over DTN for Delay-Tolerant Messaging
NASA Technical Reports Server (NTRS)
Burleigh, Scott
2011-01-01
This paper describes a technique for implementing scalable, reliable, multi-source multipoint data distribution in space flight communications -- Delay-Tolerant Reliable Multicast (DTRM) -- that is fully supported by the "Remote AMS" (RAMS) protocol of the Asynchronous Message Service (AMS) proposed for standardization within the Consultative Committee for Space Data Systems (CCSDS). The DTRM architecture enables applications to easily "publish" messages that will be reliably and efficiently delivered to an arbitrary number of "subscribing" applications residing anywhere in the space network, whether in the same subnet or in a subnet on a remote planet or vehicle separated by many light minutes of interplanetary space. The architecture comprises multiple levels of protocol, each included for a specific purpose and allocated specific responsibilities: "application AMS" traffic performs end-system data introduction and delivery subject to access control; underlying "remote AMS" directs this application traffic to populations of recipients at remote locations in a multicast distribution tree, enabling the architecture to scale up to large networks; further underlying Delay-Tolerant Networking (DTN) Bundle Protocol (BP) advances RAMS protocol data units through the distribution tree using delay-tolerant store-and-forward methods; and further underlying reliable "convergence-layer" protocols ensure successful data transfer over each segment of the end-to-end route. The result is scalable, reliable, delay-tolerant multi-source multicast that is largely self-configuring.
A Secure Multicast Framework in Large and High-Mobility Network Groups
NASA Astrophysics Data System (ADS)
Lee, Jung-San; Chang, Chin-Chen
With the widespread use of Internet applications such as teleconferencing, Pay-TV, collaborative tasks, and message services, how to construct and distribute the group session key to all group members securely is becoming more and more important. Instead of adopting point-to-point packet delivery, these emerging applications are based upon the mechanism of multicast communication, which allows group members to communicate with multiple parties efficiently. There are two main issues in the mechanism of multicast communication: Key Distribution and Scalability. The first issue is how to distribute the group session key to all group members securely. The second one is how to maintain high performance in large network groups. Group members in conventional multicast systems have to keep numerous secret keys in databases, which makes it very inconvenient for them. Furthermore, in case a member joins or leaves the communication group, many involved participants have to change their own secret keys to preserve the forward secrecy and the backward secrecy. We consequently propose a novel framework for providing secure multicast communication in large network groups. Our proposed framework not only preserves the forward secrecy and the backward secrecy but also possesses better performance than existing alternatives. Specifically, simulation results demonstrate that our scheme is suitable for high-mobility environments.
Multisites Coordination in Shared Multicast Trees
1999-01-01
conferencing, distributed interactive simulations, and collaborative systems. We describe a novel protocol to coordinate multipoint groupwork in the IP multicast framework. The protocol supports Internet-wide coordination for large and highly-interactive groupwork, relying on transmission of
Fault recovery in the reliable multicast protocol
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.; Whetten, Brian
1995-01-01
The Reliable Multicast Protocol (RMP) provides a unique, group-based model for distributed programs that need to handle reconfiguration events at the application layer. This model, called membership views, provides an abstraction in which events such as site failures, network partitions, and normal join-leave events are viewed as group reformations. RMP provides access to this model through an application programming interface (API) that notifies an application when a group is reformed as the result of some event. RMP provides applications with reliable delivery of messages using an underlying IP Multicast (12, 5) media to other group members in a distributed environment even in the case of reformations. A distributed application can use various Quality of Service (QoS) levels provided by RMP to tolerate group reformations. This paper explores the implementation details of the mechanisms in RMP that provide distributed applications with membership view information and fault recovery capabilities.
Specification and Design of a Fault Recovery Model for the Reliable Multicast Protocol
NASA Technical Reports Server (NTRS)
Montgomery, Todd; Callahan, John R.; Whetten, Brian
1996-01-01
The Reliable Multicast Protocol (RMP) provides a unique, group-based model for distributed programs that need to handle reconfiguration events at the application layer. This model, called membership views, provides an abstraction in which events such as site failures, network partitions, and normal join-leave events are viewed as group reformations. RMP provides access to this model through an application programming interface (API) that notifies an application when a group is reformed as the result of some event. RMP provides applications with reliable delivery of messages using an underlying IP Multicast media to other group members in a distributed environment even in the case of reformations. A distributed application can use various Quality of Service (QoS) levels provided by RMP to tolerate group reformations. This paper explores the implementation details of the mechanisms in RMP that provide distributed applications with membership view information and fault recovery capabilities.
Many-to-Many Multicast Routing Schemes under a Fixed Topology
Ding, Wei; Wang, Hongfa; Wei, Xuerui
2013-01-01
Many-to-many multicast routing can be extensively applied in computer or communication networks supporting various continuous multimedia applications. The paper focuses on the case where all users share a common communication channel while each user is both a sender and a receiver of messages in multicasting as well as an end user. In this case, the multicast tree appears as a terminal Steiner tree (TeST). The problem of finding a TeST with a quality-of-service (QoS) optimization is frequently NP-hard. However, we discover that it is a good idea to find a many-to-many multicast tree with QoS optimization under a fixed topology. In this paper, we are concerned with three QoS optimization objectives for the multicast tree, namely minimum cost, minimum diameter, and maximum reliability. Each of the three optimization problems is considered in two versions, centralized and decentralized. This paper uses dynamic programming to devise an exact algorithm for the centralized and decentralized versions of each optimization problem. PMID:23589706
Multiuser Transmit Beamforming for Maximum Sum Capacity in Tactical Wireless Multicast Networks
2006-08-01
… the commonly used extended Kalman filter. See [2, 5, 6] for recent tutorial overviews. In particle filtering, continuous distributions are approximated by … signals (using and developing associated particle filtering tools). Our work on these topics has been reported in seven (IEEE, SIAM) journal papers. Keywords: multidimensional scaling, tracking, intercept, particle filters.
Minimum Interference Channel Assignment Algorithm for Multicast in a Wireless Mesh Network.
Choi, Sangil; Park, Jong Hyuk
2016-12-02
Wireless mesh networks (WMNs) have been considered as one of the key technologies for the configuration of wireless machines since they emerged. In a WMN, wireless routers provide multi-hop wireless connectivity between hosts in the network and also allow them to access the Internet via gateway devices. Wireless routers are typically equipped with multiple radios operating on different channels to increase network throughput. Multicast is a form of communication that delivers data from a source to a set of destinations simultaneously. It is used in a number of applications, such as distributed games, distance education, and video conferencing. In this study, we address a channel assignment problem for multicast in multi-radio multi-channel WMNs. In a multi-radio multi-channel WMN, two nearby nodes will interfere with each other and cause a throughput decrease when they transmit on the same channel. Thus, an important goal for multicast channel assignment is to reduce the interference among networked devices. We have developed a minimum interference channel assignment (MICA) algorithm for multicast that accurately models the interference relationship between pairs of multicast tree nodes using the concept of the interference factor and assigns channels to tree nodes to minimize interference within the multicast tree. Simulation results show that MICA achieves higher throughput and lower end-to-end packet delay compared with an existing channel assignment algorithm named multi-channel multicast (MCM). In addition, MICA achieves much lower throughput variation among the destination nodes than MCM.
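A rough sketch of the kind of interference-aware assignment MICA performs is shown below: a greedy choice of the channel with the fewest same-channel neighbors within two hops of a tree node. The interference-factor definition and the algorithm here are simplified stand-ins, not the published MICA.

```python
# Greedy channel assignment over a multicast tree, minimizing a crude
# interference factor (same-channel nodes within two hops).  Illustrative
# stand-in for MICA, not the published algorithm.

TREE = {"root": ["a", "b"], "a": ["c", "d"], "b": ["e"], "c": [], "d": [], "e": []}
CHANNELS = [1, 6, 11]

def neighbors_within_two_hops(tree):
    # Build an undirected adjacency, then collect nodes within two hops,
    # a rough model of the interference relationship between tree nodes.
    adj = {n: set() for n in tree}
    for p, children in tree.items():
        for c in children:
            adj[p].add(c)
            adj[c].add(p)
    two_hop = {}
    for n in adj:
        reach = set(adj[n])
        for m in adj[n]:
            reach |= adj[m]
        reach.discard(n)
        two_hop[n] = reach
    return two_hop

def assign_channels(tree, channels):
    two_hop = neighbors_within_two_hops(tree)
    assignment = {}
    # Visit nodes roughly top-down and pick the channel with the smallest
    # interference factor, i.e. fewest already-assigned two-hop neighbors on it.
    order = ["root"] + [c for p in tree for c in tree[p]]
    for node in order:
        if node in assignment:
            continue
        factor = {ch: sum(1 for m in two_hop[node] if assignment.get(m) == ch)
                  for ch in channels}
        assignment[node] = min(channels, key=lambda ch: factor[ch])
    return assignment

print(assign_channels(TREE, CHANNELS))
```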
Dynamic multicast routing scheme in WDM optical network
NASA Astrophysics Data System (ADS)
Zhu, Yonghua; Dong, Zhiling; Yao, Hong; Yang, Jianyong; Liu, Yibin
2007-11-01
In the information era, the Internet and World Wide Web services are developing rapidly, so ever-wider bandwidth is required at ever-lower cost, and service demands are becoming diversified: data, images, video, and other transmission demands present both challenges and opportunities to service providers. At the same time, electronic equipment is approaching its limits. Optical communication based on wavelength division multiplexing (WDM) and optical cross-connects (OXCs) therefore shows great potential for building optical networks, owing to its technical advantages and multi-wavelength characteristics. In this paper, we propose a multi-layered graph model with inter-layer paths to solve the multicast routing and wavelength assignment (RWA) problem jointly, employing an efficient graph-theoretic formulation. An efficient dynamic multicast algorithm, the Distributed Message Copying Multicast (DMCM) mechanism, is also proposed. With the proposed scheme, a multicast tree with minimum hops can be constructed dynamically.
A high performance totally ordered multicast protocol
NASA Technical Reports Server (NTRS)
Montgomery, Todd; Whetten, Brian; Kaplan, Simon
1995-01-01
This paper presents the Reliable Multicast Protocol (RMP). RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service such as IP Multicasting. RMP is fully and symmetrically distributed so that no site bears an undue portion of the communication load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority resilient, and totally resilient atomic delivery. These QoS guarantees are selectable on a per packet basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, an implicit naming service, mutually exclusive handlers for messages, and mutually exclusive locks. It has commonly been held that a large performance penalty must be paid in order to implement total ordering -- RMP discounts this. On SparcStation 10's on a 1250 KB/sec Ethernet, RMP provides totally ordered packet delivery to one destination at 842 KB/sec throughput and with 3.1 ms packet latency. The performance stays roughly constant independent of the number of destinations. For two or more destinations on a LAN, RMP provides higher throughput than any protocol that does not use multicast or broadcast.
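To illustrate how total order can be layered on an unreliable multicast datagram service, the sketch below uses a fixed sequencer that assigns sequence numbers separately from the data. This is only a generic illustration: RMP itself is fully and symmetrically distributed rather than sequencer-based, and its mechanism is not reproduced here.

```python
# Fixed-sequencer total ordering over an unreliable multicast -- a generic
# illustration of layering total order on a datagram multicast service,
# not RMP's own symmetric design.

class Sequencer:
    def __init__(self):
        self.next_seq = 0
    def order(self, msg_id):
        # In a real system (msg_id, seq) would itself be multicast to the group.
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        return msg_id, seq

class Receiver:
    def __init__(self):
        self.data = {}        # msg_id -> payload, received in any order
        self.orders = {}      # seq -> msg_id, from the sequencer
        self.next_deliver = 0
        self.delivered = []
    def on_data(self, msg_id, payload):
        self.data[msg_id] = payload
        self._try_deliver()
    def on_order(self, msg_id, seq):
        self.orders[seq] = msg_id
        self._try_deliver()
    def _try_deliver(self):
        # Deliver strictly in sequence-number order, regardless of arrival order.
        while (self.next_deliver in self.orders
               and self.orders[self.next_deliver] in self.data):
            self.delivered.append(self.data[self.orders[self.next_deliver]])
            self.next_deliver += 1

seq = Sequencer()
r = Receiver()
r.on_data("m2", "second payload")          # data can arrive in any order
r.on_order(*seq.order("m1"))               # m1 gets sequence 0
r.on_order(*seq.order("m2"))               # m2 gets sequence 1
r.on_data("m1", "first payload")
print(r.delivered)                          # ['first payload', 'second payload']
```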
NASA Technical Reports Server (NTRS)
Birman, Kenneth; Schiper, Andre; Stephenson, Pat
1990-01-01
A new protocol is presented that efficiently implements a reliable, causally ordered multicast primitive and is easily extended into a totally ordered one. Intended for use in the ISIS toolkit, it offers a way to bypass the most costly aspects of ISIS while benefiting from virtual synchrony. The facility scales with bounded overhead. Measured speedups of more than an order of magnitude were obtained when the protocol was implemented within ISIS. One conclusion is that systems such as ISIS can achieve performance competitive with the best existing multicast facilities--a finding contradicting the widespread concern that fault-tolerance may be unacceptably costly.
The multidriver: A reliable multicast service using the Xpress Transfer Protocol
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Fenton, John C.; Weaver, Alfred C.
1990-01-01
A reliable multicast facility extends traditional point-to-point virtual circuit reliability to one-to-many communication. Such services can provide more efficient use of network resources, a powerful distributed name binding capability, and reduced latency in multidestination message delivery. These benefits will be especially valuable in real-time environments where reliable multicast can enable new applications and increase the availability and the reliability of data and services. We present a unique multicast service that exploits features in the next-generation, real-time transfer layer protocol, the Xpress Transfer Protocol (XTP). In its reliable mode, the service offers error, flow, and rate-controlled multidestination delivery of arbitrary-sized messages, with provision for the coordination of reliable reverse channels. Performance measurements on a single-segment Proteon ProNET-4 4 Mbps 802.5 token ring with heterogeneous nodes are discussed.
Finding idle machines in a workstation-based distributed system
NASA Technical Reports Server (NTRS)
Theimer, Marvin M.; Lantz, Keith A.
1989-01-01
The authors describe the design and performance of scheduling facilities for finding idle hosts in a workstation-based distributed system. They focus on the tradeoffs between centralized and decentralized architectures with respect to scalability, fault tolerance, and simplicity of design, as well as several implementation issues of interest when multicast communication is used. They conclude that the principal tradeoff between the two approaches is that a centralized architecture can be scaled to a significantly greater degree and can more easily monitor global system statistics, whereas a decentralized architecture is simpler to implement.
Multicast for savings in cache-based video distribution
NASA Astrophysics Data System (ADS)
Griwodz, Carsten; Zink, Michael; Liepert, Michael; On, Giwon; Steinmetz, Ralf
1999-12-01
Internet video-on-demand (VoD) today streams videos directly from server to clients, because re-distribution is not yet established. Intranet solutions exist but are typically managed centrally. Caching may overcome these management needs; however, existing web caching strategies are not applicable because they were designed for different conditions. We propose movie distribution by means of caching, and study its feasibility from the service providers' point of view. We introduce the combination of our reliable multicast protocol LCRTP for caching hierarchies with our enhancement of the patching technique for bandwidth-friendly True VoD, which does not depend on network resource guarantees.
Bulk data transfer distributer: a high performance multicast model in ALMA ACS
NASA Astrophysics Data System (ADS)
Cirami, R.; Di Marcantonio, P.; Chiozzi, G.; Jeram, B.
2006-06-01
A high performance multicast model for the bulk data transfer mechanism in the ALMA (Atacama Large Millimeter Array) Common Software (ACS) is presented. The ALMA astronomical interferometer will consist of at least 50 12-m antennas operating at millimeter wavelength. The whole software infrastructure for ALMA is based on ACS, which is a set of application frameworks built on top of CORBA. To cope with the very strong requirements for the amount of data that needs to be transported by the software communication channels of the ALMA subsystems (a typical output data rate expected from the Correlator is of the order of 64 MB per second) and with the potential CORBA bottleneck due to parameter marshalling/de-marshalling, usage of IIOP protocol, etc., a transfer mechanism based on the ACE/TAO CORBA Audio/Video (A/V) Streaming Service has been developed. The ACS Bulk Data Transfer architecture bypasses the CORBA protocol with an out-of-bound connection for the data streams (transmitting data directly in TCP or UDP format), using at the same time CORBA for handshaking and leveraging the benefits of ACS middleware. Such a mechanism has proven to be capable of high performances, of the order of 800 Mbits per second on a 1Gbit Ethernet network. Besides a point-to-point communication model, the ACS Bulk Data Transfer provides a multicast model. Since the TCP protocol does not support multicasting and all the data must be correctly delivered to all ALMA subsystems, a distributer mechanism has been developed. This paper focuses on the ACS Bulk Data Distributer, which mimics a multicast behaviour managing data dispatching to all receivers willing to get data from the same sender.
A heuristic for efficient data distribution management in distributed simulation
NASA Astrophysics Data System (ADS)
Gupta, Pankaj; Guha, Ratan K.
2005-05-01
In this paper, we propose an algorithm for reducing the complexity of region matching and for efficient multicasting in the data distribution management component of the High Level Architecture (HLA) Run Time Infrastructure (RTI). Current data distribution management (DDM) techniques rely on computing the intersection between the subscription and update regions. When a subscription region and an update region of different federates overlap, RTI establishes communication between the publisher and the subscriber, and subsequently routes the updates from the publisher to the subscriber. The proposed algorithm computes the update/subscription region matching for dynamic allocation of multicast groups. It provides new multicast routines that exploit the connectivity of the federation by communicating updates regarding interactions and routing information only to those federates that require them. The region-matching problem in DDM reduces to the clique-covering problem using the connection graph abstraction, where the federations represent the vertices and the update/subscribe relations represent the edges. We develop an abstract model based on the connection graph for data distribution management. Using this abstract model, we propose a heuristic for solving the region-matching problem of DDM. We also provide a complexity analysis of the proposed heuristic.
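The basic region-matching step the abstract refers to, computing intersections between update and subscription regions and then grouping the matches into multicast groups, can be sketched as follows (illustrative only; the paper's heuristic operates on the connection-graph/clique-covering abstraction):

```python
# Region matching for DDM: an update region and a subscription region overlap
# if their extents intersect in every dimension.  Overlapping pairs are then
# grouped so that each connected set of federates shares one multicast group.

def overlaps(region_a, region_b):
    # Each region is a list of (low, high) extents, one per dimension.
    return all(lo_a <= hi_b and lo_b <= hi_a
               for (lo_a, hi_a), (lo_b, hi_b) in zip(region_a, region_b))

def match_regions(updates, subscriptions):
    # updates / subscriptions: {federate_name: region}
    matches = []
    for pub, u in updates.items():
        for sub, s in subscriptions.items():
            if overlaps(u, s):
                matches.append((pub, sub))
    return matches

def multicast_groups(matches):
    # Union-find style grouping: federates connected by any match share a group.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    for pub, sub in matches:
        union(pub, sub)
    groups = {}
    for node in parent:
        groups.setdefault(find(node), set()).add(node)
    return list(groups.values())

updates = {"fed1": [(0, 10), (0, 10)], "fed2": [(20, 30), (0, 5)]}
subscriptions = {"fed3": [(5, 15), (5, 15)], "fed4": [(25, 35), (0, 2)]}
m = match_regions(updates, subscriptions)
print(m)                        # [('fed1', 'fed3'), ('fed2', 'fed4')]
print(multicast_groups(m))      # e.g. [{'fed1', 'fed3'}, {'fed2', 'fed4'}]
```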
NASA Technical Reports Server (NTRS)
Birman, Kenneth; Cooper, Robert; Marzullo, Keith
1990-01-01
The ISIS project has developed a new methodology, virtual synchrony, for writing robust distributed software. High performance multicast, large scale applications, and wide area networks are the focus of interest. Several interesting applications that exploit the strengths of ISIS, including an NFS-compatible replicated file system, are being developed. The META project addresses distributed control in a soft real-time environment incorporating feedback. This domain encompasses examples as diverse as monitoring inventory and consumption on a factory floor, and performing load-balancing on a distributed computing system. One of the first uses of META is for distributed application management: the tasks of configuring a distributed program, dynamically adapting to failures, and monitoring its performance. Recent progress and current plans are reported.
Programming with process groups: Group and multicast semantics
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Cooper, Robert; Gleeson, Barry
1991-01-01
Process groups are a natural tool for distributed programming and are increasingly important in distributed computing environments. Discussed here is a new architecture that arose from an effort to simplify Isis process group semantics. The findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. A system based on this architecture is now being implemented in collaboration with the Chorus and Mach projects.
NASA Technical Reports Server (NTRS)
Deardorff, Glenn; Djomehri, M. Jahed; Freeman, Ken; Gambrel, Dave; Green, Bryan; Henze, Chris; Hinke, Thomas; Hood, Robert; Kiris, Cetin; Moran, Patrick;
2001-01-01
A series of NASA presentations for the Supercomputing 2001 conference are summarized. The topics include: (1) Mars Surveyor Landing Sites "Collaboratory"; (2) Parallel and Distributed CFD for Unsteady Flows with Moving Overset Grids; (3) IP Multicast for Seamless Support of Remote Science; (4) Consolidated Supercomputing Management Office; (5) Growler: A Component-Based Framework for Distributed/Collaborative Scientific Visualization and Computational Steering; (6) Data Mining on the Information Power Grid (IPG); (7) Debugging on the IPG; (8) Debakey Heart Assist Device; (9) Unsteady Turbopump for Reusable Launch Vehicle; (10) Exploratory Computing Environments Component Framework; (11) OVERSET Computational Fluid Dynamics Tools; (12) Control and Observation in Distributed Environments; (13) Multi-Level Parallelism Scaling on NASA's Origin 1024 CPU System; (14) Computing, Information, & Communications Technology; (15) NAS Grid Benchmarks; (16) IPG: A Large-Scale Distributed Computing and Data Management System; and (17) ILab: Parameter Study Creation and Submission on the IPG.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ishihara, T
Currently, the problem at hand is distributing identical copies of OEP and filter software to a large number of farm nodes. One common method used to transfer this software is unicast. The unicast protocol faces the problem of repeatedly sending the same data over the network; since the sending rate is limited, this process becomes a bottleneck. Therefore, one possible solution to this problem lies in creating a reliable multicast protocol. A specific type of multicast protocol is the Bulk Multicast Protocol [4]. This system consists of one sender distributing data to many receivers. The sender delivers data packets at a given rate. In response, each receiver replies to the sender with a status packet containing information about packet loss in the form of Negative Acknowledgments. The probability that a receiver sends a status packet back to the sender is 1/N, where N is the number of receivers, so the protocol is designed to have approximately 1 status packet for each data packet sent. In this project, we were able to show that the complete transfer of a file to multiple receivers was about 12 times faster with multicast than with unicast. The implementation of this experimental protocol shows remarkable improvement in mass data transfer to a large number of farm machines.
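The feedback-suppression idea, where each of N receivers returns a status packet with probability 1/N so that roughly one status packet comes back per data packet, can be sketched as follows (hypothetical structure, not the Bulk Multicast Protocol of [4]):

```python
import random

# Receiver-side sketch of probabilistic status feedback: with N receivers,
# each one replies to a data packet with probability 1/N, so on average one
# status packet (carrying NAKs for missing sequence numbers) returns per
# data packet sent.  Hypothetical structure, for illustration only.

class BulkMulticastReceiver:
    def __init__(self, num_receivers):
        self.p_reply = 1.0 / num_receivers
        self.highest_seen = -1
        self.missing = set()          # sequence numbers to report as NAKs

    def on_data(self, seq):
        # Track gaps in the received sequence numbers.
        if seq > self.highest_seen:
            self.missing.update(range(self.highest_seen + 1, seq))
            self.highest_seen = seq
        else:
            self.missing.discard(seq)
        # Probabilistic feedback keeps aggregate status traffic ~1 per data packet.
        if random.random() < self.p_reply:
            return {"nak": sorted(self.missing)}
        return None

rx = BulkMulticastReceiver(num_receivers=10)
for seq in [0, 1, 3, 4, 2, 5]:        # packet 2 arrives late
    status = rx.on_data(seq)
    if status is not None:
        print("status packet:", status)
```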
An FEC Adaptive Multicast MAC Protocol for Providing Reliability in WLANs
NASA Astrophysics Data System (ADS)
Basalamah, Anas; Sato, Takuro
For wireless multicast applications like multimedia conferencing, voice over IP and video/audio streaming, a reliable transmission of packets within short delivery delay is needed. Moreover, reliability is crucial to the performance of error intolerant applications like file transfer, distributed computing, chat and whiteboard sharing. Forward Error Correction (FEC) is frequently used in wireless multicast to enhance Packet Error Rate (PER) performance, but cannot assure full reliability unless coupled with Automatic Repeat Request, forming what is known as Hybrid-ARQ. While reliable FEC can be deployed at different levels of the protocol stack, it cannot be deployed on the MAC layer of the unreliable IEEE802.11 WLAN due to its inability to exchange ACKs with multiple recipients. In this paper, we propose a Multicast MAC protocol that enhances WLAN reliability by using Adaptive FEC and study its performance through mathematical analysis and simulation. Our results show that our protocol can deliver high reliability and throughput performance.
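As a toy illustration of the packet-level FEC building block such a MAC protocol adapts, the sketch below adds a single XOR parity packet per block so a receiver can recover one lost packet without retransmission; the paper's actual code and adaptation policy are not modeled.

```python
# Toy packet-level FEC: one XOR parity packet per block lets a receiver
# recover any single lost packet without a retransmission.  This only
# illustrates the FEC building block; how much redundancy to add per block
# (the adaptive part) is not modeled here.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode_block(packets):
    # All packets padded to equal length; parity is the XOR of the block.
    size = max(len(p) for p in packets)
    padded = [p.ljust(size, b"\x00") for p in packets]
    parity = padded[0]
    for p in padded[1:]:
        parity = xor_bytes(parity, p)
    return padded, parity

def recover(received, parity):
    # `received` maps packet index -> packet (at most one index missing).
    missing = parity
    for pkt in received.values():
        missing = xor_bytes(missing, pkt)
    return missing

packets = [b"multicast", b"reliabili", b"ty-in-WLA"]
block, parity = encode_block(packets)
received = {0: block[0], 2: block[2]}        # packet 1 was lost
print(recover(received, parity))              # b'reliabili'
```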
Demonstration of flexible multicasting and aggregation functionality for TWDM-PON
NASA Astrophysics Data System (ADS)
Chen, Yuanxiang; Li, Juhao; Zhu, Paikun; Zhu, Jinglong; Tian, Yu; Wu, Zhongying; Peng, Huangfa; Xu, Yongchi; Chen, Jingbiao; He, Yongqi; Chen, Zhangyuan
2017-06-01
The time- and wavelength-division multiplexed passive optical network (TWDM-PON) has been recognized as an attractive solution to provide broadband access for next-generation networks. In this paper, we propose flexible service multicasting and aggregation functionality for TWDM-PON utilizing multiple-pump four-wave mixing (FWM) and a cyclic arrayed waveguide grating (AWG). With the proposed scheme, multiple TWDM-PON links share a single optical line terminal (OLT), which can greatly reduce the network deployment expense and achieve efficient network resource utilization by load balancing among different optical distribution networks (ODNs). The proposed scheme is compatible with existing TDM-PON infrastructure with a fixed-wavelength OLT transmitter, so smooth service upgrade can be achieved. Utilizing the proposed scheme, we demonstrate a proof-of-concept experiment with 10-Gb/s OOK and 10-Gb/s QPSK orthogonal frequency division multiplexing (OFDM) signals multicast and aggregated to seven PON links. Compared with the back-to-back (BTB) channel, the newly generated multicast OOK signal and OFDM signal have power penalties of 1.6 dB and 2 dB at a BER of 10^-3, respectively. For the aggregation of multiple channels, no obvious power penalty is observed. Moreover, to verify the flexibility of the proposed scheme, we reconfigure the wavelength selective switch (WSS) and adjust the number of pumps to realize flexible multicasting functionality. One-to-three, one-to-seven, one-to-thirteen and one-to-twenty-one multicasting are achieved without modifying the OLT structure.
Lu, Guo-Wei; Qin, Jun; Wang, Hongxiang; Ji, XuYuefeng; Sharif, Gazi Mohammad; Yamaguchi, Shigeru
2016-02-08
Optical logic gates, especially the exclusive-or (XOR) gate, play an important role in accomplishing photonic computing and various network functionalities in future optical networks. On the other hand, optical multicast is another indispensable functionality to efficiently deliver information in optical networks. In this paper, for the first time, we propose and experimentally demonstrate a flexible optical three-input XOR gate scheme for multiple input phase-modulated signals with a 1-to-2 multicast functionality for each XOR operation, using the four-wave mixing (FWM) effect in a single piece of highly nonlinear fiber (HNLF). Through FWM in the HNLF, all of the possible XOR operations among the input signals can be simultaneously realized by sharing a single piece of HNLF. By selecting the obtained XOR components using a subsequent wavelength-selective component, the number of XOR gates and the light waves participating in the XOR operations can be flexibly configured. The reconfigurability of the proposed XOR gate and the integration of the optical logic gate and multicast functions in a single device offer flexibility in network design and improve network efficiency. We experimentally demonstrate a flexible 3-input XOR gate for four 10-Gbaud binary phase-shift keying signals with a multicast scale of 2. Error-free operations for the obtained XOR results are achieved. A potential application of the integrated XOR and multicast function in network coding is also discussed.
Evaluation of multicast schemes in optical burst-switched networks: the case with dynamic sessions
NASA Astrophysics Data System (ADS)
Jeong, Myoungki; Qiao, Chunming; Xiong, Yijun; Vandenhoute, Marc
2000-10-01
In this paper, we evaluate the performance of several multicast schemes in optical burst-switched WDM networks, taking into account the overheads due to control packets and guard bands (GBs) of bursts on separate channels (wavelengths). A straightforward scheme is called Separate Multicasting (S-MCAST), where each source node constructs separate bursts for its multicast (per multicast session) and unicast traffic. To reduce the overhead due to GBs (and control packets), one may piggyback the multicast traffic in bursts containing unicast traffic using a scheme called Multiple Unicasting (M-UCAST). The third scheme is called Tree-Shared Multicasting (TS-MCAST), whereby multicast traffic belonging to multiple multicast sessions can be mixed together in a burst, which is delivered via a shared multicast tree. In [1], we evaluated several multicast schemes with static sessions at the flow level. In this paper, we perform a simple analysis of the multicast schemes and evaluate the performance of the three schemes, focusing on the case with dynamic sessions, in terms of link utilization, bandwidth consumption, blocking (loss) probability, goodput and processing load.
Fingerprint multicast in secure video streaming.
Zhao, H Vicky; Liu, K J Ray
2006-01-01
Digital fingerprinting is an emerging technology to protect multimedia content from illegal redistribution, where each distributed copy is labeled with unique identification information. In video streaming, huge amounts of data have to be transmitted to a large number of users under stringent latency constraints, so the bandwidth-efficient distribution of uniquely fingerprinted copies is crucial. This paper investigates the secure multicast of anticollusion fingerprinted video in streaming applications and analyzes its performance. We first propose a general fingerprint multicast scheme that can be used with most spread spectrum embedding-based multimedia fingerprinting systems. To further improve the bandwidth efficiency, we explore the special structure of the fingerprint design and propose a joint fingerprint design and distribution scheme. From our simulations, the two proposed schemes can reduce the bandwidth requirement by 48% to 87%, depending on the number of users, the characteristics of video sequences, and the network and computation constraints. We also show that under the constraint that all colluders have the same probability of detection, the embedded fingerprints in the two schemes have approximately the same collusion resistance. Finally, we propose a fingerprint drift compensation scheme to improve the quality of the reconstructed sequences at the decoder's side without introducing extra communication overhead.
Research on Collaborative Technology in Distributed Virtual Reality System
NASA Astrophysics Data System (ADS)
Lei, ZhenJiang; Huang, JiJie; Li, Zhao; Wang, Lei; Cui, JiSheng; Tang, Zhi
2018-01-01
Distributed virtual reality technology applied to joint training simulation needs CSCW (Computer Supported Cooperative Work) terminal multicast technology for display and HLA (High Level Architecture) technology to ensure the temporal and spatial consistency of the simulation, so as to achieve collaborative display and collaborative computing. In this paper, CSCW terminal multicast technology is used to modify and extend the HLA implementation framework. During simulation initialization, the HLA declaration and object management service interfaces are used to establish and manage the CSCW network topology, and the HLA data filtering mechanism is used to build the corresponding mesh tree for each federate. During simulation execution, a new thread for CSCW real-time multicast interaction is added to the RTI, so that the RTI can also use the window message mechanism to notify the application to update the display. Applications to immersive substation simulation training under large power grid operation show that the proposed collaborative technology for distributed virtual reality simulation achieves a satisfactory training effect.
Multicasting in Wireless Communications (Ad-Hoc Networks): Comparison against a Tree-Based Approach
NASA Astrophysics Data System (ADS)
Rizos, G. E.; Vasiliadis, D. C.
2007-12-01
We examine on-demand multicasting in ad hoc networks. The Core Assisted Mesh Protocol (CAMP) is a well-known protocol for multicast routing in ad-hoc networks, generalizing the notion of core-based trees employed for internet multicasting into multicast meshes that have much richer connectivity than trees. On the other hand, wireless tree-based multicast routing protocols use much simpler structures for determining route paths, using only parent-child relationships. In this work, we compare the performance of the CAMP protocol against the performance of wireless tree-based multicast routing protocols, in terms of two important factors, namely packet delay and ratio of dropped packets.
Design, Implementation, and Verification of the Reliable Multicast Protocol. Thesis
NASA Technical Reports Server (NTRS)
Montgomery, Todd L.
1995-01-01
This document describes the Reliable Multicast Protocol (RMP) design, first implementation, and formal verification. RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service. RMP is fully and symmetrically distributed so that no site bears an undue portion of the communications load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority resilient, and totally resilient atomic delivery. These guarantees are selectable on a per message basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, a client/server model of delivery, mutually exclusive handlers for messages, and mutually exclusive locks. It has been commonly believed that total ordering of messages can only be achieved at great performance expense. RMP discounts this. The first implementation of RMP has been shown to provide high throughput performance on Local Area Networks (LANs). For two or more destinations on a single LAN, RMP provides higher throughput than any other protocol that does not use multicast or broadcast technology. The design, implementation, and verification activities of RMP have occurred concurrently. This has allowed the verification to maintain a high fidelity between the design model, implementation model, and verification model. The restrictions of implementation have influenced the design earlier than in normal sequential approaches. The protocol as a whole has matured more smoothly through the inclusion of several different perspectives in the product development.
IPTV multicast with peer-assisted lossy error control
NASA Astrophysics Data System (ADS)
Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd
2010-07-01
Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over the error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate the impulse noises in DSL links. In existing systems, the retransmission function is provided by Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution where the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how packet repairs can be delivered in a timely, reliable and decentralized manner using the combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves the resistance to impulse noise.
NASA Astrophysics Data System (ADS)
Li, Ze; Zhang, Min; Wang, Danshi; Cui, Yue
2017-09-01
We propose a flexible and reconfigurable wavelength-division multiplexing (WDM) multicast scheme supporting downstream emergency multicast communication for WDM optical access network (WDM-OAN) via a multicast module (MM) based on four-wave mixing (FWM) in a semiconductor optical amplifier. It serves as an emergency measure to dispose of the burst, large bandwidth, and real-time multicast service with fast service provisioning and high resource efficiency. It also plays the role of physical backup in cases of big data migration or network disaster caused by invalid lasers or modulator failures. It provides convenient and reliable multicast service and emergency protection for WDM-OAN without modifying WDM-OAN structure. The strategies of an MM setting at the optical line terminal and remote node are discussed to apply this scheme to passive optical networks and active optical networks, respectively. Utilizing the proposed scheme, we demonstrate a proof-of-concept experiment in which one-to-six/eight 10-Gbps nonreturn-to-zero-differential phase-shift keying WDM multicasts in both strategies are successfully transmitted over single-mode fiber of 20.2 km. One-to-many reconfigurable WDM multicasts dealing with higher data rate and other modulation formats of multicast service are possible through the proposed scheme. It can be applied to different WDM access technologies, e.g., time-wavelength-division multiplexing-OAN and coherent WDM-OAN, and upgraded smoothly.
Degree-constrained multicast routing for multimedia communications
NASA Astrophysics Data System (ADS)
Wang, Yanlin; Sun, Yugeng; Li, Guidan
2005-02-01
Multicast services have been increasingly used by many multimedia applications. As one of the key techniques supporting multimedia applications, rational and effective multicast routing algorithms are very important to network performance. When switch nodes in a network have different multicast capabilities, the multicast routing problem is modeled as the degree-constrained Steiner problem. We present two heuristic algorithms, named BMSTA and BSPTA, for the degree-constrained case in multimedia communications. Both algorithms generate degree-constrained multicast trees with bandwidth and end-to-end delay bounds. Simulations over random networks were carried out to compare the performance of the two proposed algorithms. Experimental results show that the proposed algorithms have advantages in traffic load balancing, which can avoid link blocking and enhance network performance efficiently. BMSTA is better than BSPTA at finding unsaturated links and/or unsaturated nodes when generating multicast trees. The performance of BMSTA is affected by the variation of degree constraints.
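A rough sketch in the same spirit as a degree-constrained multicast tree heuristic is shown below: Prim-style growth that never attaches through a node whose multicast degree budget is exhausted. This is illustrative only, not BMSTA or BSPTA, and it ignores bandwidth and delay bounds.

```python
import heapq

# Prim-style construction of a degree-constrained multicast tree: grow the
# tree from the source, always attaching the cheapest reachable node, but
# never through a node whose remaining multicast degree is exhausted.
# Illustrative heuristic only, not the BMSTA/BSPTA algorithms of the paper.

def degree_constrained_tree(graph, degree_cap, source, destinations):
    # graph: {node: {neighbor: cost}}, degree_cap: {node: max tree degree}
    in_tree = {source}
    remaining = dict(degree_cap)
    edges = []
    frontier = [(cost, source, nbr) for nbr, cost in graph[source].items()]
    heapq.heapify(frontier)
    while frontier and not destinations <= in_tree:
        cost, u, v = heapq.heappop(frontier)
        if v in in_tree or remaining[u] <= 0:
            continue                      # skip saturated attachment points
        in_tree.add(v)
        edges.append((u, v, cost))
        remaining[u] -= 1                 # u fans out to one more child
        remaining[v] -= 1                 # v uses one port toward its parent
        for nbr, c in graph[v].items():
            if nbr not in in_tree:
                heapq.heappush(frontier, (c, v, nbr))
    return edges

graph = {
    "s": {"a": 1, "b": 2},
    "a": {"s": 1, "c": 1, "d": 4},
    "b": {"s": 2, "d": 1},
    "c": {"a": 1},
    "d": {"a": 4, "b": 1},
}
cap = {"s": 2, "a": 2, "b": 2, "c": 1, "d": 1}
print(degree_constrained_tree(graph, cap, "s", {"c", "d"}))
```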
A Stateful Multicast Access Control Mechanism for Future Metro-Area-Networks.
ERIC Educational Resources Information Center
Sun, Wei-qiang; Li, Jin-sheng; Hong, Pei-lin
2003-01-01
Multicasting is a necessity for a broadband metro-area-network; however security problems exist with current multicast protocols. A stateful multicast access control mechanism, based on MAPE, is proposed. The architecture of MAPE is discussed, as well as the states maintained and messages exchanged. The scheme is flexible and scalable. (Author/AEF)
Multipoint to multipoint routing and wavelength assignment in multi-domain optical networks
NASA Astrophysics Data System (ADS)
Qin, Panke; Wu, Jingru; Li, Xudong; Tang, Yongli
2018-01-01
In multipoint-to-multipoint (MP2MP) routing and wavelength assignment (RWA) problems, researchers usually assume the optical network to be a single domain. In practice, however, optical networks are evolving toward multi-domain and larger-scale deployments. In this context, multi-core shared tree (MST)-based MP2MP RWA introduces new problems, including selecting the optimal multicast domain sequence and deciding in which domains the core nodes should reside. In this letter, we focus on MST-based MP2MP RWA problems in multi-domain optical networks, and mixed integer linear programming (MILP) formulations for optimally constructing MP2MP multicast trees are presented. A heuristic algorithm based on network virtualization and a weighted clustering algorithm (NV-WCA) is proposed. Simulation results show that, under different traffic patterns, the proposed algorithm achieves significant improvements in network resource occupation and multicast tree setup latency compared with conventional algorithms designed for single-domain network environments.
Optical multicast system for data center networks.
Samadi, Payman; Gupta, Varun; Xu, Junjie; Wang, Howard; Zussman, Gil; Bergman, Keren
2015-08-24
We present the design and experimental evaluation of an Optical Multicast System for Data Center Networks, a hardware-software system architecture that uniquely integrates passive optical splitters in a hybrid network architecture for faster and simpler delivery of multicast traffic flows. An application-driven control plane manages the integrated optical and electronic switched traffic routing in the data plane layer. The control plane includes a resource allocation algorithm to optimally assign optical splitters to the flows. The hardware architecture is built on a hybrid network with both Electronic Packet Switching (EPS) and Optical Circuit Switching (OCS) networks to aggregate Top-of-Rack switches. The OCS is also the connectivity substrate of splitters to the optical network. The optical multicast system implementation requires only commodity optical components. We built a prototype and developed a simulation environment to evaluate the performance of the system for bulk multicasting. Experimental and numerical results show simultaneous delivery of multicast flows to all receivers with steady throughput. Compared to IP multicast, its electronic counterpart, optical multicast performs with less protocol complexity and reduced energy consumption. Compared to peer-to-peer multicast methods, it achieves at minimum an order of magnitude higher throughput for flows under 250 MB with significantly less connection overhead. Furthermore, for delivering 20 TB of data containing only 15% multicast flows, it reduces the total delivery energy consumption by 50% and improves latency by 55% compared to a data center with a sole non-blocking EPS network.
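The resource-allocation step can be pictured with a small greedy sketch (an assumption, not the paper's optimal allocator): splitters are granted to the multicast flows that offload the most traffic from the EPS network, and everything else falls back to electronic delivery. Splitter port counts and flow fields below are illustrative.

```python
# Greedy splitter-to-flow assignment sketch (not the paper's algorithm).
from dataclasses import dataclass

@dataclass
class Flow:
    flow_id: str
    size_bytes: int
    receivers: int        # number of ToR switches that must receive the flow

def assign_splitters(flows, splitters):
    """splitters: list of available splitter fanouts, e.g. [4, 4, 8, 16]."""
    # Benefit of optical delivery grows with both flow size and fanout.
    ranked = sorted(flows, key=lambda f: f.size_bytes * f.receivers, reverse=True)
    free = sorted(splitters)                  # smallest-first to avoid waste
    optical, electronic = {}, []
    for flow in ranked:
        fit = next((s for s in free if s >= flow.receivers), None)
        if fit is None:
            electronic.append(flow.flow_id)   # served by the EPS network instead
        else:
            free.remove(fit)
            optical[flow.flow_id] = fit       # splitter fanout reserved for flow
    return optical, electronic

demo_flows = [Flow("f1", 500_000_000, 6), Flow("f2", 80_000_000, 3)]
print(assign_splitters(demo_flows, splitters=[4, 8]))
```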
A novel WDM passive optical network architecture supporting two independent multicast data streams
NASA Astrophysics Data System (ADS)
Qiu, Yang; Chan, Chun-Kit
2012-01-01
We propose a novel scheme to perform optical multicast overlay of two independent multicast data streams on a wavelength-division-multiplexed (WDM) passive optical network. By controlling a sinusoidal clock signal and shifting the wavelength at the optical line terminal (OLT), the delivery of the two multicast data streams, carried by the generated optical tones, can be independently and flexibly controlled. Simultaneous transmission of 10-Gb/s unicast downstream and upstream data as well as two independent 10-Gb/s multicast data streams was successfully demonstrated.
Lu, Guo-Wei; Bo, Tianwai; Sakamoto, Takahide; Yamamoto, Naokatsu; Chan, Calvin Chun-Kit
2016-10-03
Recently, the ever-growing demand for dynamic and high-capacity services in optical networks has resulted in new challenges that require improved network agility and flexibility in order for network resources to become more "consumable" and dynamic, or elastic, in response to requests from higher network layers. Flexible and scalable wavelength conversion or multicast is one of the most important technologies needed for developing agility in the physical layer. This paper investigates how, using a reconfigurable coherent multi-carrier as a pump, the multicast scalability and the flexibility in wavelength allocation of the converted signals can be effectively improved. Moreover, the coherence among the multiple carriers prevents phase-noise transfer from the local pump to the converted signals, which is imperative for phase-noise-sensitive multi-level single- or multi-carrier modulated signals. To verify the feasibility of the proposed scheme, we experimentally demonstrate the wavelength multicast of coherent optical orthogonal frequency division multiplexing (CO-OFDM) signals using a reconfigurable coherent multi-carrier pump, showing flexibility in wavelength allocation, scalability in multicast, and tolerance against pump phase noise. Power penalties of less than 0.5 dB and 1.8 dB at a bit-error rate (BER) of 10⁻³ are obtained for the converted CO-OFDM-quadrature phase-shift keying (QPSK) and CO-OFDM-16-ary quadrature amplitude modulation (16QAM) signals, respectively, even when using a distributed feedback laser (DFB) as a pump source. In contrast, with a free-running pumping scheme, the phase noise from DFB pumps severely deteriorates the CO-OFDM signals, resulting in a visible error floor at a BER of 10⁻² in the converted CO-OFDM-16QAM signals.
Remote software upload techniques in future vehicles and their performance analysis
NASA Astrophysics Data System (ADS)
Hossain, Irina
Updating software in vehicle Electronic Control Units (ECUs) will become a mandatory requirement for a variety of reasons, for example, to update or fix the functionality of an existing system, add new functionality, remove software bugs, and keep up with ITS infrastructure. Software modules of advanced vehicles can be updated using the Remote Software Upload (RSU) technique. RSU employs an infrastructure-based wireless communication technique in which the software supplier sends the software to the targeted vehicle via a roadside Base Station (BS). However, security is critically important in RSU to avoid any disasters due to malfunctions of the vehicle and to protect the proprietary algorithms from hackers, competitors, or people with malicious intent. In this thesis, a mechanism for secure software upload in advanced vehicles is presented that employs mutual authentication of the software provider and the vehicle using a pre-shared authentication key before sending the software. The software packets are sent encrypted with a secret key along with the Message Digest (MD). To increase the security level, it is proposed that the vehicle receive more than one copy of the software, each copy carrying its MD. The vehicle installs the new software only when it receives two or more identical copies of the software. To validate the proposal, analytical expressions for the average number of packet transmissions required for a successful software update are derived. Different cases are investigated depending on the vehicle's buffer size and verification method. The analytical and simulation results show that it is sufficient to send two copies of the software to the vehicle to thwart any security attack while uploading the software. The above unicast method for RSU is suitable when software needs to be uploaded to a single vehicle. Since multicasting is the most efficient method of group communication, updating software in an ECU of a large number of vehicles could benefit from it. However, as with unicast RSU, the security requirements of multicast communication, i.e., authenticity, confidentiality, and integrity of the transmitted software and access control of the group members, are challenging. In this thesis, an infrastructure-based mobile multicasting scheme for RSU in vehicle ECUs is proposed in which an ECU receives the software from a remote software distribution center using the roadside BSs as gateways. The Vehicular Software Distribution Network (VSDN) is divided into small regions, each administered by a Regional Group Manager (RGM). Two multicast Group Key Management (GKM) techniques are proposed based on the degree of trust placed in the BSs, named the Fully-trusted (FT) and Semi-trusted (ST) systems. Analytical models are developed to find the multicast session establishment latency and handover latency for these two protocols. The average latency to perform mutual authentication of the software vendor and a vehicle and to send the multicast session key during multicast session initialization, as well as the handoff latency during a multicast session, is calculated. Analytical and simulation results show that the link establishment latency per vehicle of the proposed schemes is in the range of a few seconds, with the ST system requiring a few milliseconds more than the FT system. The handoff latency is also in the range of a few seconds, and in some cases the ST system requires less handoff time than the FT system.
Thus, it is possible to build an efficient GKM protocol without placing too much trust in the BSs.
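The two-copy acceptance rule lends itself to a compact illustration; the sketch below is an assumption-laden stand-in (SHA-256 plays the role of the Message Digest, and all function and field names are invented), showing only the check that at least two verified, identical copies exist before installation.

```python
# Illustrative two-copy verification rule (names and digest choice are assumptions).
import hashlib

def digest(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

def ready_to_install(copies):
    """copies: list of (payload, received_md) tuples for one software image."""
    verified = [p for p, md in copies if digest(p) == md]  # integrity check per copy
    # Require two or more identical verified copies before installing.
    return any(verified.count(p) >= 2 for p in verified)

image = b"ecu-firmware-v2"
ok = ready_to_install([(image, digest(image)), (image, digest(image))])
print("install" if ok else "wait for more copies")
```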
Issues in designing transport layer multicast facilities
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Weaver, Alfred C.
1990-01-01
Multicasting denotes a facility in a communications system for providing efficient delivery from a message's source to some well-defined set of locations using a single logical address. While modern network hardware supports multidestination delivery, first-generation Transport Layer protocols (e.g., the DoD Transmission Control Protocol (TCP) (15) and ISO TP-4 (41)) did not anticipate the changes over the past decade in underlying network hardware, transmission speeds, and communication patterns that have enabled and driven the interest in reliable multicast. Much recent research has focused on integrating the underlying hardware multicast capability with the reliable services of Transport Layer protocols. Here, we explore the communication issues surrounding the design of such a reliable multicast mechanism. Approaches and solutions from the literature are discussed, and four experimental Transport Layer protocols that incorporate reliable multicast are examined.
NASA Astrophysics Data System (ADS)
Allani, Mouna; Garbinato, Benoît; Pedone, Fernando
An increasing number of Peer-to-Peer (P2P) Internet applications rely today on data dissemination as their cornerstone, e.g., audio or video streaming, multi-party games. These applications typically depend on some support for multicast communication, where peers interested in a given data stream can join a corresponding multicast group. As a consequence, the efficiency, scalability, and reliability guarantees of these applications are tightly coupled with that of the underlying multicast mechanism.
Issues in providing a reliable multicast facility
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Strayer, W. Timothy; Weaver, Alfred C.
1990-01-01
Issues involved in point-to-multipoint communication are presented and the literature for proposed solutions and approaches surveyed. Particular attention is focused on the ideas and implementations that align with the requirements of the environment of interest. The attributes of multicast receiver groups that might lead to useful classifications, what the functionality of a management scheme should be, and how the group management module can be implemented are examined. The services that multicasting facilities can offer are presented, followed by mechanisms within the communications protocol that implements these services. The metrics of interest when evaluating a reliable multicast facility are identified and applied to four transport layer protocols that incorporate reliable multicast.
Lee, Jong-Ho; Sohn, Illsoo; Kim, Yong-Hwa
2017-05-16
In this paper, we investigate simultaneous wireless power transfer and secure multicasting via cooperative decode-and-forward (DF) relays in the presence of multiple energy receivers and eavesdroppers. Two scenarios are considered under a total power budget: maximizing the minimum harvested energy among the energy receivers under a multicast secrecy rate constraint; and maximizing the multicast secrecy rate under a minimum harvested energy constraint. For both scenarios, we solve the transmit power allocation and relay beamformer design problems by using semidefinite relaxation and bisection technique. We present numerical results to analyze the energy harvesting and secure multicasting performances in cooperative DF relay networks.
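The outer loop of such SDR-plus-bisection designs can be sketched independently of the actual semidefinite program; below, the SDP feasibility test (in the paper, a semidefinite program over the power allocation and relay beamformer) is abstracted into a user-supplied predicate and replaced by a toy placeholder, so only the bisection logic over a monotone target is illustrated.

```python
# Outer bisection loop as commonly paired with SDR feasibility checks (sketch only).
def bisect_max_target(feasible, lo, hi, tol=1e-4):
    """Return the largest t in [lo, hi] with feasible(t) True, assuming
    feasibility is monotone (feasible for all t below some threshold)."""
    if not feasible(lo):
        raise ValueError("even the lowest target is infeasible")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid            # target achievable: push it higher
        else:
            hi = mid            # target too ambitious: back off
    return lo

# Placeholder oracle: pretend the total power budget P limits the secrecy rate.
P_budget = 10.0
is_feasible = lambda rate: rate <= 0.5 * P_budget ** 0.5   # illustrative only
print(bisect_max_target(is_feasible, 0.0, 10.0))
```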
Multicast routing for wavelength-routed WDM networks with dynamic membership
NASA Astrophysics Data System (ADS)
Huang, Nen-Fu; Liu, Te-Lung; Wang, Yao-Tzung; Li, Bo
2000-09-01
Future broadband networks must support integrated services and offer flexible bandwidth usage. In our previous work, we explored an optical link control layer on top of the optical layer that enables bandwidth-on-demand service directly over wavelength division multiplexed (WDM) networks. Today, more and more applications and services, such as video-conferencing software and Virtual LAN service, require multicast support from the underlying networks. Currently, it is difficult to provide wavelength multicast in optical switches without optical/electronic conversions, although such conversions incur extra cost. In this paper, based on the proposed wavelength router architecture (equipped with ATM switches to offer O/E and E/O conversions when necessary), a dynamic multicast routing algorithm is proposed to furnish multicast services over WDM networks. The goal is to join a new group member to the multicast tree so that the cost, including the link cost and the optical/electronic conversion cost, is kept as low as possible. The effectiveness of the proposed wavelength router architecture as well as the dynamic multicast algorithm is evaluated by simulation.
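A minimal sketch of the join step described above, assuming a generic graph library and an invented conversion-penalty model rather than the paper's ATM-assisted router details: the new member is grafted at the attachment point minimizing link cost plus an optical/electronic conversion penalty charged when the attachment node cannot split the signal purely optically.

```python
# Dynamic-join sketch for a multicast tree with O/E conversion penalties (illustrative).
import networkx as nx

def join_member(g, tree_nodes, new_member, conv_cost, can_split_optically):
    best = None
    for attach in tree_nodes:
        link_cost, path = nx.single_source_dijkstra(g, attach, new_member,
                                                    weight="cost")
        penalty = 0 if can_split_optically(attach) else conv_cost
        total = link_cost + penalty
        if best is None or total < best[0]:
            best = (total, path)
    return best   # (total cost, path to graft onto the tree)

g = nx.Graph()
g.add_weighted_edges_from([("a", "b", 1), ("b", "c", 2), ("a", "c", 5)],
                          weight="cost")
print(join_member(g, tree_nodes={"a"}, new_member="c", conv_cost=3,
                  can_split_optically=lambda n: n != "a"))
```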
Mobile Multicast in Hierarchical Proxy Mobile IPV6
NASA Astrophysics Data System (ADS)
Hafizah Mohd Aman, Azana; Hashim, Aisha Hassan A.; Mustafa, Amin; Abdullah, Khaizuran
2013-12-01
Mobile Internet Protocol Version 6 (MIPv6) environments have been developing very rapidly. Many challenges arise with the fast progress of MIPv6 technologies and its environment. Therefore the importance of improving the existing architecture and operations increases. One of the many challenges which need to be addressed is the need for performance improvement to support mobile multicast. Numerous approaches have been proposed to improve mobile multicast performance. This includes Context Transfer Protocol (CXTP), Hierarchical Mobile IPv6 (HMIPv6), Fast Mobile IPv6 (FMIPv6) and Proxy Mobile IPv6 (PMIPv6). This document describes multicast context transfer in hierarchical proxy mobile IPv6 (H-PMIPv6) to provide better multicasting performance in PMIPv6 domain.
Efficient Group Coordination in Multicast Trees
2001-01-01
describe a novel protocol to coordinate multipoint groupwork within the IP-multicast framework. The protocol supports Internet-wide coordination for large...and highly-interactive groupwork, relying on the dissemination of coordination directives among group members across a shared end-to-end multicast
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ennis, G.; Lala, T.K.
This document presents the results of a study undertaken by First Pacific Networks as part of EPRI Project RP-3567-01 regarding the support of broadcast services within the EPRI Utility Communications Architecture (UCA) protocols and the use of such services by UCA applications. This report has focused on the requirements and architectural implications of broadcast within UCA. A subsequent phase of this project is to develop specific recommendations for extending UCA so as to support broadcast. The conclusions of this report are presented in Section 5. The authors summarize the major conclusions as follows: broadcast and multicast support would be very useful within UCA, not only for utility-specific applications but also simply to support the network engineering of a large-scale communications system; in this regard, UCA is no different from other large network systems which have found broadcast and multicast to be of substantial benefit for a variety of system management purposes; the primary architectural impact of broadcast and multicast falls on the UCA network level (which would need to be enhanced) and the UCA application level (which would be the user of broadcast); there is a useful subset of MMS services which could take advantage of broadcast; the UCA network level would need to be enhanced both in the areas of addressing and routing so as to properly support broadcast. A subsequent analysis will be required to define the specific enhancements to UCA required to support broadcast and multicast.
WDM Network and Multicasting Protocol Strategies
Zaim, Abdul Halim
2014-01-01
Optical technology is receiving extensive attention and ever-increasing improvement because of the huge amount of network traffic generated by the growing number of Internet users and their rising demands. With wavelength division multiplexing (WDM) it is easier to take advantage of optical networks, and for constructing WDM networks with low delay and good data transparency, WDM combined with optical burst switching (OBS) is the best choice. Furthermore, multicasting in WDM is an urgent solution for bandwidth-intensive applications. In this paper, a new multicasting protocol with OBS is proposed. The protocol depends on a leaf-initiated structure. The network is composed of a source, ingress switches, intermediate switches, edge switches, and client nodes. The performance of the protocol is examined with the Just Enough Time (JET) and Just In Time (JIT) reservation protocols. The paper also covers most of the recent advances in WDM multicasting in optical networks. WDM multicasting in optical networks is discussed under three common headings: broadcast-and-select networks, wavelength-routed networks, and OBS networks. In addition, multicast routing protocols are briefly summarized and optical burst switched WDM networks are investigated with the proposed multicast schemes. PMID:24744683
Point-to-Point Multicast Communications Protocol
NASA Technical Reports Server (NTRS)
Byrd, Gregory T.; Nakano, Russell; Delagi, Bruce A.
1987-01-01
This paper describes a protocol to support point-to-point interprocessor communications with multicast. Dynamic, cut-through routing with local flow control is used to provide a high-throughput, low-latency communications path between processors. In addition, multicast transmissions are available, in which copies of a packet are sent to multiple destinations using common resources as much as possible. Special packet terminators and selective buffering are introduced to avoid deadlock during multicasts. A simulated implementation of the protocol is also described.
High-Performance, Reliable Multicasting: Foundations for Future Internet Groupware Applications
NASA Technical Reports Server (NTRS)
Callahan, John; Montgomery, Todd; Whetten, Brian
1997-01-01
Network protocols that provide efficient, reliable, and totally-ordered message delivery to large numbers of users will be needed to support many future Internet applications. The Reliable Multicast Protocol (RMP) is implemented on top of IP multicast to facilitate reliable transfer of data for replicated databases and groupware applications that will emerge on the Internet over the next decade. This paper explores some of the basic questions and applications of reliable multicasting in the context of the development and analysis of RMP.
Qin, Jun; Lu, Guo-Wei; Sakamoto, Takahide; Akahane, Kouichi; Yamamoto, Naokatsu; Wang, Danshi; Wang, Cheng; Wang, Hongxiang; Zhang, Min; Kawanishi, Tetsuya; Ji, Yuefeng
2014-12-01
In this paper, we experimentally demonstrate simultaneous multichannel wavelength multicasting (MWM) and exclusive-OR logic gate multicasting (XOR-LGM) for three 10-Gbps non-return-to-zero differential phase-shift-keying (NRZ-DPSK) signals in a quantum-dot semiconductor optical amplifier (QD-SOA) by exploiting the four-wave mixing (FWM) process. No additional pump is needed in the scheme. Through the interaction of the three input 10-Gbps DPSK signal lights in the QD-SOA, each channel is successfully multicasted to three wavelengths (1-to-3 for each), 3-to-9 MWM in total, and at the same time three-output XOR-LGM is obtained at three different wavelengths. All the newly generated channels exhibit a power penalty of less than 1.2 dB at a BER of 10⁻⁹. Degenerate and non-degenerate FWM components are fully used in the experiment for data and logic multicasting.
MTP: An atomic multicast transport protocol
NASA Technical Reports Server (NTRS)
Freier, Alan O.; Marzullo, Keith
1990-01-01
The Multicast Transport Protocol (MTP), a reliable transport protocol that utilizes the multicast strategy of applicable lower-layer network architectures, is described. In addition to transporting data reliably and efficiently, MTP provides the client synchronization necessary for agreement on the receipt of data and on the joining of the group of communicants.
Design alternatives for process group membership and multicast
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Cooper, Robert; Gleeson, Barry
1991-01-01
Process groups are a natural tool for distributed programming, and are increasingly important in distributed computing environments. However, there is little agreement on the most appropriate semantics for process group membership and group communication. These issues are of special importance in the Isis system, a toolkit for distributed programming. Isis supports several styles of process group, and a collection of group communication protocols spanning a range of atomicity and ordering properties. This flexibility makes Isis adaptable to a variety of applications, but is also a source of complexity that limits performance. This paper reports on a new architecture that arose from an effort to simplify Isis process group semantics. Our findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. As an illustration, we apply the architecture to the problem of converting processes into fault-tolerant process groups in a manner that is 'transparent' to other processes in the system.
NASA Astrophysics Data System (ADS)
Li, Xin; Zhang, Lu; Tang, Ying; Huang, Shanguo
2018-03-01
The light-tree-based optical multicasting (LT-OM) scheme provides a spectrum- and energy-efficient method to accommodate emerging multicast services. Some studies focus on survivability technologies for LTs against a fixed number of link failures, such as a single-link failure. However, few studies consider failure-probability constraints when building LTs. It is worth noting that each link of an LT plays a role of different importance under failure scenarios. When calculating the failure probability of an LT, the importance of each of its links should be considered. We design a link importance incorporated failure probability measuring solution (LIFPMS) for multicast LTs under an independent failure model and a shared risk link group failure model. Based on the LIFPMS, we put forward the minimum failure probability (MFP) problem for the LT-OM scheme. Heuristic approaches are developed to address the MFP problem in elastic optical networks. Numerical results show that the LIFPMS provides an accurate metric for calculating the failure probability of multicast LTs and enhances the reliability of the LT-OM scheme while accommodating multicast services.
Mobility based key management technique for multicast security in mobile ad hoc networks.
Madhusudhanan, B; Chitra, S; Rajan, C
2015-01-01
In MANET multicasting, maintaining forward and backward secrecy results in an increased packet drop rate owing to mobility. Frequent rekeying causes large message overhead, which increases energy consumption and end-to-end delay. In particular, the prevailing group key management techniques suffer from frequent mobility and disconnections. So there is a need to design a multicast key management technique that overcomes these problems. In this paper, we propose a mobility-based key management technique for multicast security in MANETs. Initially, the nodes are categorized according to their stability index, which is estimated based on link availability and mobility. A multicast tree is constructed such that every weak node has a strong parent node. A session key-based encryption technique is utilized to transmit the multicast data. The rekeying process is performed periodically by the initiator node. The rekeying interval is fixed depending on the node category, so the technique greatly minimizes the rekeying overhead. Simulation results show that the proposed approach reduces the packet drop rate and improves data confidentiality.
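The categorization and rekeying-interval idea can be illustrated with a short sketch; the formulas, thresholds, and interval multipliers below are assumptions, not the paper's exact parameters.

```python
# Hedged sketch of stability-based node categorisation and adaptive rekeying.
def stability_index(link_availability, speed, max_speed):
    """link_availability in [0,1]; speed/max_speed normalises mobility."""
    mobility = min(speed / max_speed, 1.0)
    return link_availability * (1.0 - mobility)

def rekey_interval(index, base_interval=30.0):
    """Weak nodes rekey often; strong (stable) nodes rekey rarely."""
    if index >= 0.7:
        return 4 * base_interval    # strong node
    if index >= 0.4:
        return 2 * base_interval    # medium node
    return base_interval            # weak node: keep keys fresh

idx = stability_index(link_availability=0.9, speed=2.0, max_speed=20.0)
print(idx, rekey_interval(idx), "seconds")
```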
NASA Astrophysics Data System (ADS)
Liao, Luhua; Li, Lemin; Wang, Sheng
2006-12-01
We investigate the protection approach for dynamic multicast traffic under shared risk link group (SRLG) constraints in meshed wavelength-division-multiplexing optical networks. We present a shared protection algorithm called dynamic segment shared protection for multicast traffic (DSSPM), which can dynamically adjust the link cost according to the current network state and can establish a primary light-tree as well as corresponding SRLG-disjoint backup segments for a dependable multicast connection. A backup segment can efficiently share the wavelength capacity of its working tree and the common resources of other backup segments based on SRLG-disjoint constraints. The simulation results show that DSSPM not only can protect the multicast sessions against a single-SRLG breakdown, but can make better use of the wavelength resources and also lower the network blocking probability.
NASA Astrophysics Data System (ADS)
Wu, Fei; Shao, Shihai; Tang, Youxi
2016-10-01
We consider simultaneous multicast downlink transmit and receive operations on the same frequency band, i.e., full-duplex links between an access point and mobile users. The problem of minimizing the total power of multicast transmit beamforming is considered from the viewpoint of ensuring a required amount of suppression of the near-field line-of-sight self-interference and guaranteeing a prescribed minimum signal-to-interference-plus-noise ratio (SINR) at each receiver of the multicast groups. Based on earlier results on multicast group beamforming, the joint problem is easily shown to be NP-hard. A semidefinite relaxation (SDR) technique with a linear-programming power adjustment method is proposed to solve the NP-hard problem. Simulation shows that the proposed method is feasible even when the local receive antenna in the near field and the mobile user in the far field are in the same direction.
An efficient group multicast routing for multimedia communication
NASA Astrophysics Data System (ADS)
Wang, Yanlin; Sun, Yugen; Yan, Xinfang
2004-04-01
Group multicasting is a communication mechanism whereby each member of a group sends messages to all the other members of the same group. Group multicast routing algorithms capable of satisfying the quality of service (QoS) requirements of multimedia applications are essential for high-speed networks. We present a heuristic algorithm for group multicast routing with an end-to-end delay constraint. Source-specific routing trees for each member are generated by our algorithm, satisfying each member's bandwidth and end-to-end delay requirements. Simulations over random networks were carried out to compare the proposed algorithm's performance with Low and Song's. The experimental results show that our proposed algorithm performs better in terms of network cost and ability to construct feasible multicast trees for group members. Moreover, our algorithm achieves good performance in balancing traffic, which avoids link blocking and enhances network behavior efficiently.
Group-multicast capable optical virtual private ring with contention avoidance
NASA Astrophysics Data System (ADS)
Peng, Yunfeng; Du, Shu; Long, Keping
2008-11-01
A ring based optical virtual private network (OVPN) employing contention sensing and avoidance is proposed to deliver multiple-to-multiple group-multicast traffic. The network architecture is presented and its operation principles as well as performance are investigated. The main contribution of this article is the presentation of an innovative group-multicast capable OVPN architecture with technologies available today.
NASA Astrophysics Data System (ADS)
Woradit, Kampol; Guyot, Matthieu; Vanichchanunt, Pisit; Saengudomlert, Poompat; Wuttisittikulkij, Lunchakorn
While the problem of multicast routing and wavelength assignment (MC-RWA) in optical wavelength division multiplexing (WDM) networks has been investigated, relatively few researchers have considered network survivability for multicasting. This paper provides an optimization framework to solve the MC-RWA problem in a multi-fiber WDM network that can recover from a single-link failure with shared protection. Using the light-tree (LT) concept to support multicast sessions, we consider two protection strategies that try to reduce service disruptions after a link failure. The first strategy, called light-tree reconfiguration (LTR) protection, computes a new multicast LT for each session affected by the failure. The second strategy, called optical branch reconfiguration (OBR) protection, tries to restore a logical connection between two adjacent multicast members disconnected by the failure. To solve the MC-RWA problem optimally, we propose an integer linear programming (ILP) formulation that minimizes the total number of fibers required for both working and backup traffic. The ILP formulation takes into account joint routing of working and backup traffic, the wavelength continuity constraint, and the limited splitting degree of multicast-capable optical cross-connects (MC-OXCs). After showing some numerical results for optimal solutions, we propose heuristic algorithms that reduce the computational complexity and make the problem solvable for large networks. Numerical results suggest that the proposed heuristic yields efficient solutions compared to optimal solutions obtained from exact optimization.
QoS Adaptation in Multimedia Multicast Conference Applications for E-Learning Services
ERIC Educational Resources Information Center
Deusdado, Sérgio; Carvalho, Paulo
2006-01-01
The evolution of the World Wide Web service has incorporated new distributed multimedia conference applications, powering a new generation of e-learning development and allowing improved interactivity and prohuman relations. Groupware applications are increasingly representative in the Internet home applications market, however, the Quality of…
Saguaro: A Distributed Operating System Based on Pools of Servers.
1988-03-25
asynchronous message passing, multicast, and semaphores are supported. We have found this flexibility to be very useful for distributed programming. The...variety of communication primitives provided by SR has facilitated the research of Stella Atkins, who was a visiting professor at Arizona during Spring...data bits in a raw communication channel to help keep the source and destination synchronized, Psync explicitly embeds timing information drawn from the
Extensible Interest Management for Scalable Persistent Distributed Virtual Environments
1999-12-01
Calvin, Cebula et al. 1995; Morse, Bic et al. 2000) uses a two-grid scheme, with each grid cell having two multicast addresses. An entity expresses interest... [Figure: Entity distribution for experimental runs] ...Multiple Users and Shared Applications with VRML. VRML 97, Monterey, CA. pp. 33-40. Calvin, J. O., D. P. Cebula, et al. (1995). Data Subscription in
Performance investigation of optical multicast overlay system using orthogonal modulation format
NASA Astrophysics Data System (ADS)
Singh, Simranjit; Singh, Sukhbir; Kaur, Ramandeep; Kaler, R. S.
2015-03-01
We propose a bandwidth-efficient wavelength division multiplexed passive optical network (WDM-PON) to simultaneously transmit 60 Gb/s unicast and 10 Gb/s multicast services with 10 Gb/s upstream. The differential phase shift keying (DPSK) multicast signal is superimposed onto multiplexed non-return-to-zero/polarization shift keying (NRZ/PolSK) orthogonally modulated data signals. Upstream amplitude shift keying (ASK) signals are formed without the use of any additional light source and are superimposed onto the received unicast NRZ/PolSK signal before being transmitted back to the optical line terminal (OLT). We also investigated the proposed WDM-PON system for variable optical input power and single-mode-fiber transmission distance, with multicast enabled and disabled. The measured quality factor for all unicast and multicast signals is in the acceptable range (>6). The original contribution of this paper is a bandwidth-efficient WDM-PON system that can be extended to high-speed scenarios at reduced channel spacing and is expected to be more technically viable due to the use of optical orthogonal modulation formats.
Scalable Multicast Protocols for Overlapped Groups in Broker-Based Sensor Networks
NASA Astrophysics Data System (ADS)
Kim, Chayoung; Ahn, Jinho
In sensor networks, there are many overlapped multicast groups because many subscribers, each with potentially varying specific interests, query every event from sensors/publishers. Gossip-based communication protocols are promising as one potential solution providing scalability in the Publish/Subscribe (P/S) paradigm in sensor networks. Moreover, despite the importance of both guaranteeing message delivery order and supporting overlapped multicast groups in sensor or P2P networks, there is little research on gossip-based protocols that satisfy all these requirements. In this paper, we present two versions of causally ordered delivery guaranteeing protocols for overlapped multicast groups. One is based on sensor-brokers as delegates and the other is based on local views and delegates representing subscriber subgroups. In the sensor-broker based protocol, sensor-brokers organize overlapped multicast networks according to subscribers' interests. The message delivery order is guaranteed consistently and all multicast messages are delivered to overlapped subscribers using gossip-based protocols through the sensor-broker. These features of the sensor-broker based protocol may therefore scale significantly better than protocols based on hierarchical membership lists of dedicated groups, such as traditional committee protocols. The subscriber-delegate based protocol is stronger than fully decentralized protocols that guarantee causally ordered delivery based only on local views, because the message delivery order is guaranteed consistently by all corresponding members of the groups, including delegates. This feature of the subscriber-delegate protocol is thus a hybrid approach that improves the inherent scalability of the multicast nature of gossip-based techniques in all communications.
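The causal-ordering guarantee at a single subscriber can be sketched with vector clocks; the gossip, broker, and overlapped-group machinery of the two protocols is deliberately omitted here, so the code below only shows the standard delivery condition assumed in such schemes.

```python
# Minimal causal-delivery sketch using vector clocks (single receiver, no gossip).
class CausalReceiver:
    def __init__(self, n_processes):
        self.vc = [0] * n_processes     # messages delivered so far, per sender
        self.pending = []

    def _deliverable(self, sender, msg_vc):
        if msg_vc[sender] != self.vc[sender] + 1:
            return False                # not the next message from that sender
        return all(msg_vc[k] <= self.vc[k]
                   for k in range(len(self.vc)) if k != sender)

    def receive(self, sender, msg_vc, payload):
        self.pending.append((sender, msg_vc, payload))
        delivered = []
        progress = True
        while progress:                 # keep draining as messages become ready
            progress = False
            for item in list(self.pending):
                s, vc, data = item
                if self._deliverable(s, vc):
                    self.vc[s] += 1
                    delivered.append(data)
                    self.pending.remove(item)
                    progress = True
        return delivered

r = CausalReceiver(2)
print(r.receive(1, [1, 1], "reply"))    # buffered: depends on msg [1,0] from 0
print(r.receive(0, [1, 0], "event"))    # delivers "event" then "reply"
```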
NASA Astrophysics Data System (ADS)
Emmerson, S. R.; Veeraraghavan, M.; Chen, S.; Ji, X.
2015-12-01
Results of a pilot deployment of a major new version of the Unidata Local Data Manager (LDM-7) are presented. The Unidata LDM was developed by the University Corporation for Atmospheric Research (UCAR) and comprises a suite of software for the distribution and local processing of data in near real time. It is widely used in the geoscience community to distribute observational data and model output, most notably as the foundation of the Unidata Internet Data Distribution (IDD) system run by UCAR, but also in private networks operated by NOAA, NASA, USGS, etc. The current version, LDM-6, uses at least one unicast TCP connection per receiving host. With over 900 connections, the bit rate of total outgoing IDD traffic from UCAR averages approximately 3.0 Gb/s, with peak data rates exceeding 6.6 Gb/s. Expected increases in data volume suggest that a more efficient distribution mechanism will be required in the near future. LDM-7 greatly reduces the outgoing bandwidth requirement by incorporating a recently developed "semi-reliable" IP multicast protocol while retaining the unicast TCP mechanism for reliability. During the summer of 2015, UCAR and the University of Virginia conducted a pilot deployment of the Unidata LDM-7 among U.S. university participants with access to the Internet2 network. Results of this pilot program, along with comparisons to the existing Unidata LDM-6 system, are presented.
Inertial Motion Tracking for Inserting Humans into a Networked Synthetic Environment
2007-08-31
tracking methods. One method requires markers on the tracked human body, and the other method does not use markers. OPTOTRAK from Northern Digital Inc. is a...of using multicasting protocols. Unfortunately, most routers on the Internet are not configured for multicasting. A technique called tunneling is...used to overcome this problem. Tunneling is a software solution that runs on the end-point routers/computers and allows multicast packets to traverse
Mobility based multicast routing in wireless mesh networks
NASA Astrophysics Data System (ADS)
Jain, Sanjeev; Tripathi, Vijay S.; Tiwari, Sudarshan
2013-01-01
There are two fundamental approaches to multicast routing, namely minimum cost trees and shortest path trees. The minimum cost tree (MCT) connects receivers and sources with a minimum number of transmissions (MNTs); the MNT approach is generally used for energy-constrained sensor and mobile ad hoc networks. In this paper we consider node mobility and present a simulation-based comparison of shortest path trees (SPTs), minimum Steiner trees (MSTs), and minimum-number-of-transmission trees in wireless mesh networks, using performance metrics such as end-to-end delay, average jitter, throughput, packet delivery ratio, and average unicast packet delivery ratio. We also evaluate multicast performance in small and large wireless mesh networks. For multicast in small networks, we find that when the traffic load is moderate or high, the SPTs outperform the MSTs and MNTs in all cases, and the SPTs have the lowest end-to-end delay and average jitter in almost all cases. For multicast in large networks, we observe that the MSTs provide minimum total edge cost and a minimum number of transmissions. We also find one drawback of SPTs: when the group size is large and the multicast sending rate is high, SPTs cause more packet losses to other flows than MCTs.
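A small sketch of the two tree families being compared, using networkx's approximate Steiner tree as a stand-in for the MST/MNT side and a union of shortest paths for the SPT side; the topology and weights are made up for illustration.

```python
# SPT vs. approximate Steiner tree on a toy topology (illustrative comparison).
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

g = nx.Graph()
g.add_weighted_edges_from([("s", "a", 1), ("a", "r1", 1), ("a", "r2", 1),
                           ("s", "r1", 2), ("s", "r2", 2), ("r1", "r2", 1)])
source, receivers = "s", ["r1", "r2"]

# Shortest-path tree: union of each receiver's shortest path from the source.
spt = nx.Graph()
for r in receivers:
    nx.add_path(spt, nx.shortest_path(g, source, r, weight="weight"))

# Approximate minimum Steiner tree over {source} + receivers.
mst_like = steiner_tree(g, [source] + receivers, weight="weight")

cost = lambda t: sum(g[u][v]["weight"] for u, v in t.edges())
print("SPT cost:", cost(spt), "edges:", sorted(spt.edges()))
print("Steiner cost:", cost(mst_like), "edges:", sorted(mst_like.edges()))
```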
Integrating security in a group oriented distributed system
NASA Technical Reports Server (NTRS)
Reiter, Michael; Birman, Kenneth; Gong, LI
1992-01-01
A distributed security architecture is proposed for incorporation into group oriented distributed systems, and in particular, into the Isis distributed programming toolkit. The primary goal of the architecture is to make common group oriented abstractions robust in hostile settings, in order to facilitate the construction of high performance distributed applications that can tolerate both component failures and malicious attacks. These abstractions include process groups and causal group multicast. Moreover, a delegation and access control scheme is proposed for use in group oriented systems. The focus is the security architecture; particular cryptosystems and key exchange protocols are not emphasized.
The Verification-based Analysis of Reliable Multicast Protocol
NASA Technical Reports Server (NTRS)
Wu, Yunqing
1996-01-01
Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP multicasting. In this paper, we develop formal models for RMP using existing automatic verification systems and perform verification-based analysis on the formal RMP specifications. We also use the formal models of the RMP specifications to generate a test suite for conformance testing of the RMP implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress between the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding of the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.
Comparison of neuronal spike exchange methods on a Blue Gene/P supercomputer.
Hines, Michael; Kumar, Sameer; Schürmann, Felix
2011-01-01
For neural network simulations on parallel machines, interprocessor spike communication can be a significant portion of the total simulation time. The performance of several spike exchange methods using a Blue Gene/P (BG/P) supercomputer has been tested with 8-128 K cores using randomly connected networks of up to 32 M cells with 1 k connections per cell and 4 M cells with 10 k connections per cell, i.e., on the order of 4·10^10 connections (K is 1024, M is 1024^2, and k is 1000). The spike exchange methods used are the standard Message Passing Interface (MPI) collective, MPI_Allgather, and several variants of the non-blocking Multisend method either implemented via non-blocking MPI_Isend, or exploiting the possibility of very low overhead direct memory access (DMA) communication available on the BG/P. In all cases, the worst performing method was that using MPI_Isend due to the high overhead of initiating a spike communication. The two best performing methods, the persistent Multisend method using the Record-Replay feature of the Deep Computing Messaging Framework DCMF_Multicast, and a two-phase multisend in which a DCMF_Multicast is used to first send to a subset of phase one destination cores, which then pass it on to their subset of phase two destination cores, had similar performance with very low overhead for the initiation of spike communication. Departure from ideal scaling for the Multisend methods is almost completely due to load imbalance caused by the large variation in number of cells that fire on each processor in the interval between synchronization. Spike exchange time itself is negligible since transmission overlaps with computation and is handled by a DMA controller. We conclude that ideal performance scaling will be ultimately limited by imbalance between incoming processor spikes between synchronization intervals. Thus, counterintuitively, maximization of load balance requires that the distribution of cells on processors should not reflect neural net architecture but be randomly distributed so that sets of cells which are burst firing together should be on different processors with their targets on as large a set of processors as possible.
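The simplest of the exchange patterns above, an MPI_Allgather of locally generated spikes each synchronization interval, can be sketched with mpi4py; the Multisend/DCMF variants are Blue Gene-specific and are not reproduced here.

```python
# Allgather-based spike exchange sketch using mpi4py (not the DCMF Multisend path).
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def exchange_spikes(local_spikes):
    """local_spikes: list of (global_cell_id, spike_time) fired on this rank."""
    gathered = comm.allgather(local_spikes)      # every rank gets every list
    return [spike for rank_spikes in gathered for spike in rank_spikes]

# Toy interval: each rank 'fires' one cell; all ranks see the merged spike list.
all_spikes = exchange_spikes([(rank, 0.1 * rank)])
if rank == 0:
    print(sorted(all_spikes))
```

Run under an MPI launcher, e.g. `mpiexec -n 4 python spikes.py`; only rank 0 prints the merged list.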
The reliable multicast protocol application programming interface
NASA Technical Reports Server (NTRS)
Montgomery , Todd; Whetten, Brian
1995-01-01
The Application Programming Interface for the Berkeley/WVU implementation of the Reliable Multicast Protocol is described. This transport layer protocol is implemented as a user library that applications and software buses link against.
Multicast backup reprovisioning problem for Hamiltonian cycle-based protection on WDM networks
NASA Astrophysics Data System (ADS)
Din, Der-Rong; Huang, Jen-Shen
2014-03-01
As networks grow in size and complexity, the chance and the impact of failures increase dramatically. The pre-allocated backup resources cannot provide 100% protection guarantee when continuous failures occur in a network. In this paper, the multicast backup re-provisioning problem (MBRP) for Hamiltonian cycle (HC)-based protection on WDM networks for the link-failure case is studied. We focus on how to recover the protecting capabilities of Hamiltonian cycle against the subsequent link-failures on WDM networks for multicast transmissions, after recovering the multicast trees affected by the previous link-failure. Since this problem is a hard problem, an algorithm, which consists of several heuristics and a genetic algorithm (GA), is proposed to solve it. The simulation results of the proposed method are also given. Experimental results indicate that the proposed algorithm can solve this problem efficiently.
The specification-based validation of reliable multicast protocol: Problem Report. M.S. Thesis
NASA Technical Reports Server (NTRS)
Wu, Yunqing
1995-01-01
Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP multicasting. In this report, we develop formal models for RMP using existing automated verification systems and perform validation on the formal RMP specifications. The validation analysis helps identify some minor specification and design problems. We also use the formal models of RMP to generate a test suite for conformance testing of the implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress of the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding of the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.
Digital multi-channel stabilization of four-mode phase-sensitive parametric multicasting.
Liu, Lan; Tong, Zhi; Wiberg, Andreas O J; Kuo, Bill P P; Myslivets, Evgeny; Alic, Nikola; Radic, Stojan
2014-07-28
A stable four-mode phase-sensitive (4MPS) process was investigated as a means to enhance the conversion efficiency (CE) and signal-to-noise ratio (SNR) of two-pump driven parametric multicasting. The instability of a multi-beam phase-sensitive (PS) device, which inherently behaves as an interferometer with output subject to ambient-induced fluctuations, was addressed theoretically and experimentally. A new stabilization technique that controls the phases of the three input waves of the 4MPS multicaster and maximizes the CE was developed and described. The stabilization relies on a digital phase-locked loop (DPLL) specifically developed to control the pump phases and guarantee stable 4MPS operation that is independent of environmental fluctuations. The technique also controls a single (signal) input phase to optimize the PS-induced improvement of the CE and SNR. The new, continuous-operation DPLL has allowed fully stabilized PS parametric broadband multicasting, demonstrating a CE improvement over 20 signal copies in excess of 10 dB.
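A toy control loop in the spirit of the digital stabilization described above: the "plant" is a synthetic conversion-efficiency function and the dither step size is an arbitrary assumption, so this only illustrates the hill-climbing phase adjustment, not the actual DPLL.

```python
# Dither-based phase control toy: climb toward maximum conversion efficiency.
import math

def measured_ce(phases):
    # Stand-in for the photodetector reading: CE peaks when the three
    # controlled phases satisfy a simple phase-matching relation.
    return math.cos(phases[0] + phases[1] - 2 * phases[2]) ** 2

def dither_step(phases, delta=0.01):
    """One control iteration: perturb each phase and keep improving moves."""
    for i in range(len(phases)):
        base = measured_ce(phases)
        for sign in (+1, -1):
            trial = list(phases)
            trial[i] += sign * delta
            if measured_ce(trial) > base:
                phases = trial
                break
    return phases

phases = [0.3, -0.2, 0.4]
for _ in range(2000):           # the loop would run continuously in hardware
    phases = dither_step(phases)
print(round(measured_ce(phases), 4))   # should approach 1.0
```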
Multicast Routing of Hierarchical Data
NASA Technical Reports Server (NTRS)
Shacham, Nachum
1992-01-01
The issue of multicast of broadband, real-time data in a heterogeneous environment, in which the data recipients differ in their reception abilities, is considered. Traditional multicast schemes, which are designed to deliver all the source data to all recipients, offer limited performance in such an environment, since they must either force the source to overcompress its signal or restrict the destination population to those who can receive the full signal. We present an approach for resolving this issue by combining hierarchical source coding techniques, which allow recipients to trade off reception bandwidth for signal quality, and sophisticated routing algorithms that deliver to each destination the maximum possible signal quality. The field of hierarchical coding is briefly surveyed and new multicast routing algorithms are presented. The algorithms are compared in terms of network utilization efficiency, lengths of paths, and the required mechanisms for forwarding packets on the resulting paths.
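The layered-delivery idea can be made concrete with a small sketch (not the paper's routing algorithms): each destination receives as many cumulative layers as the bottleneck bandwidth on its best path allows, with made-up layer rates and topology.

```python
# Layered multicast delivery sketch: layers per destination from path bottleneck.
import networkx as nx

LAYER_RATES = [1.0, 0.5, 0.25]       # Mb/s needed by layers 1, 2, 3 (cumulative)

def layers_deliverable(g, source, dest):
    # Maximum-bottleneck path, found by brute force over this tiny topology.
    best_bottleneck = max(min(g[u][v]["cap"] for u, v in zip(p, p[1:]))
                          for p in nx.all_simple_paths(g, source, dest))
    n, need = 0, 0.0
    for rate in LAYER_RATES:
        need += rate
        if need > best_bottleneck:
            break
        n += 1
    return n

g = nx.Graph()
g.add_edge("src", "x", cap=2.0)
g.add_edge("x", "d1", cap=1.6)       # gets layers 1 and 2
g.add_edge("x", "d2", cap=1.2)       # gets layer 1 only
for d in ("d1", "d2"):
    print(d, "receives", layers_deliverable(g, "src", d), "layer(s)")
```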
Wang, Danshi; Zhang, Min; Qin, Jun; Lu, Guo-Wei; Wang, Hongxiang; Huang, Shanguo
2014-09-08
We propose a multifunctional optical switching unit based on a bidirectional liquid crystal on silicon (LCoS) and semiconductor optical amplifier (SOA) architecture. Add/drop, wavelength conversion, format conversion, and WDM multicast are experimentally demonstrated. Due to its bidirectional characteristic, the LCoS device can not only multiplex the input signals, but also de-multiplex the converted signals. Dual-channel wavelength conversion and format conversion from 2 × 25 Gbps differential quadrature phase-shift keying (DQPSK) to 2 × 12.5 Gbps differential phase-shift keying (DPSK) based on four-wave mixing (FWM) in the SOA are obtained with only one pump. One-to-six WDM multicast of 25 Gbps DQPSK signals with two pumps is also achieved. All of the multicast channels have a power penalty of less than 1.1 dB at the FEC threshold of 3.8 × 10⁻³.
Design and Implementation of Replicated Object Layer
NASA Technical Reports Server (NTRS)
Koka, Sudhir
1996-01-01
One of the widely used techniques for the construction of fault-tolerant applications is the replication of resources, so that if one copy fails, sufficient copies may still remain operational to allow the application to continue to function. This thesis involves the design and implementation of an object-oriented framework for replicating data on multiple sites and across different platforms. Our approach, called the Replicated Object Layer (ROL), provides a mechanism for consistent replication of data over dynamic networks. ROL uses the Reliable Multicast Protocol (RMP) as a communication protocol that provides reliable delivery, serialization, and fault tolerance. Besides providing type registration, this layer facilitates distributed atomic transactions on replicated data. A novel algorithm called the RMP Commit Protocol, which commits transactions efficiently in a reliable multicast environment, is presented. ROL provides recovery procedures to ensure that site and communication failures do not corrupt persistent data, and to make the system fault tolerant to network partitions. ROL will facilitate building distributed fault-tolerant applications by performing the burdensome details of replica consistency operations and making them completely transparent to the application. Replicated databases are a major class of applications which could be built on top of ROL.
Optimization of multicast optical networks with genetic algorithm
NASA Astrophysics Data System (ADS)
Lv, Bo; Mao, Xiangqiao; Zhang, Feng; Qin, Xi; Lu, Dan; Chen, Ming; Chen, Yong; Cao, Jihong; Jian, Shuisheng
2007-11-01
In this letter, aiming to obtain the best multicast performance of an optical network in which video-conference information is carried on a specified wavelength, we extend the solutions of matrix games with network coding theory and devise a new method to solve the complex problem of multicast network switching. In addition, an experimental optical network has been tested with the best switching strategies by employing the novel numerical solution designed with an effective genetic algorithm. The result shows that the optimal solutions obtained with the genetic algorithm are in accordance with those obtained with the traditional fictitious play method.
Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications
NASA Astrophysics Data System (ADS)
Zu, Yue
A convex optimization problem can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems with information exchanged among connected neighbors, which leads to great improvement in system fault tolerance. Thus, a task within a multi-agent system can be completed in the presence of partial agent failures. By problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that can be solved in sequence or in parallel. Hence, the computational complexity is greatly reduced by distributed algorithms in multi-agent systems. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, which successfully overcomes the drawbacks of using multicast due to bandwidth limitations. Distributed algorithms have been applied to solving a variety of real-world problems. Our research focuses on the framework and local optimizer design in practical engineering applications. In the first application, we propose a multi-sensor and multi-agent scheme for spatial motion estimation of a rigid body; estimation performance is improved in terms of accuracy and convergence speed. Second, we develop a cyber-physical system and implement distributed computation devices to optimize the in-building evacuation path when a hazard occurs; the proposed Bellman-Ford Dual-Subgradient path planning method relieves congestion in the corridor and exit areas. In the third project, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time. The optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation. Moreover, a hybrid control scheme is presented for highway network travel time minimization. Compared with the uncontrolled case or a conventional highway traffic control strategy, the proposed hybrid control strategy greatly reduces the total travel time on the test highway network.
Back pressure based multicast scheduling for fair bandwidth allocation.
Sarkar, Saswati; Tassiulas, Leandros
2005-09-01
We study the fair allocation of bandwidth in multicast networks with multirate capabilities. In multirate transmission, each source encodes its signal in layers. The lowest layer contains the most important information and all receivers of a session should receive it. If a receiver's data path has additional bandwidth, it receives higher layers which leads to a better quality of reception. The bandwidth allocation objective is to distribute the layers fairly. We present a computationally simple, decentralized scheduling policy that attains the maxmin fair rates without using any knowledge of traffic statistics and layer bandwidths. This policy learns the congestion level from the queue lengths at the nodes, and adapts the packet transmissions accordingly. When the network is congested, packets are dropped from the higher layers; therefore, the more important lower layers suffer negligible packet loss. We present analytical and simulation results that guarantee the maxmin fairness of the resulting rate allocation, and upper bound the packet loss rates for different layers.
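A hedged sketch of the queue-length-driven policy described above; the thresholds, data structures, and drop rule are assumptions rather than the paper's exact scheduling policy: a node forwards the packet with the largest backlog differential toward its next hop, and under congestion it sheds packets from the highest layer first so the base layer keeps its negligible loss rate.

```python
# Backpressure-style layered multicast scheduling sketch (illustrative policy).
def pick_transmission(local_q, downstream_q):
    """Queues are dicts keyed by (session, layer) -> backlog (packets)."""
    best, best_diff = None, 0
    for key, backlog in local_q.items():
        diff = backlog - downstream_q.get(key, 0)   # backpressure differential
        if diff > best_diff:
            best, best_diff = key, diff
    return best          # None means: stay idle this slot

def shed_if_congested(local_q, limit=100):
    total = sum(local_q.values())
    while total > limit:
        # Drop from the highest layer with a non-empty queue.
        key = max((k for k, v in local_q.items() if v > 0), key=lambda k: k[1])
        local_q[key] -= 1
        total -= 1

local = {("s1", 1): 60, ("s1", 2): 30, ("s1", 3): 25}
shed_if_congested(local)                       # trims layer 3 down to the limit
print(local, pick_transmission(local, {("s1", 1): 58}))
```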
XML Tactical Chat (XTC): The Way Ahead for Navy Chat
2007-09-01
multicast transmissions via sophisticated pruning algorithms, while allowing multicast packets to "tunnel" through IP routers. [Macedonia, Brutzman 1994...conference was Jabber Inc., who added some great insight into the power of Jabber. • Great features including BlackBerry handheld connectivity and
Reliable WDM multicast in optical burst-switched networks
NASA Astrophysics Data System (ADS)
Jeong, Myoungki; Qiao, Chunming; Xiong, Yijun
2000-09-01
In this paper, we present a reliable WDM (Wavelength-Division Multiplexing) multicast protocol for optical burst-switched (OBS) networks. Since the burst dropping (loss) probability may be potentially high in a heavily loaded OBS backbone network, reliable multicast protocols developed for IP networks at the transport (or application) layer may incur heavy overheads, such as a large number of duplicate retransmissions. In addition, it may take a longer time for an end host to detect and then recover from burst dropping (loss) that occurred at the WDM layer. For efficiency reasons, we propose burst loss recovery within the OBS backbone (i.e., at the WDM link layer). The proposed protocol requires two additional functions to be performed by the WDM switch controller, subcasting and maintaining burst states, when the WDM switch has more than one downstream branch on the WDM multicast tree. We show that these additional functions are simple to implement and that the overhead associated with them is manageable.
Analysis on Multicast Routing Protocols for Mobile Ad Hoc Networks
NASA Astrophysics Data System (ADS)
Xiang, Ma
Since Mobile Ad Hoc Network technologies face a series of challenges, such as dynamic changes of topological structure, the existence of unidirectional channels, limited wireless transmission bandwidth, and the capability limitations of mobile terminals, research on mobile Ad Hoc network routing inevitably undertakes a more important task than routing research for other networks. Multicast is a mode of communication transmission oriented to group computing, which sends data to a group of host computers using a single group address. In a typical mobile Ad Hoc network environment, multicast is highly significant. On the one hand, the users of a mobile Ad Hoc network usually need to form collaborative working groups; on the other hand, multicast is also an important means of fully exploiting the broadcast nature of wireless communication and effectively using the limited wireless channel resources. This paper summarizes and comparatively analyzes the routing mechanisms of various existing multicast routing protocols according to the characteristics of mobile Ad Hoc networks.
NASA Astrophysics Data System (ADS)
Bock, Carlos; Prat, Josep
2005-04-01
A hybrid WDM/TDM PON architecture implemented by means of two cascaded Arrayed Waveguide Gratings (AWG) is presented. Using the Free Spectral Range (FSR) periodicity of AWGs, we transmit unicast and multicast traffic on different wavelengths to each Optical Network Unit (ONU). The OLT is equipped with two laser stacks, a tunable one for unicast transmission and a fixed one for multicast transmission. We propose a reflective ONU in order to avoid any light source at the Customer Premises Equipment (CPE). Optical transmission tests demonstrate correct transmission at 2.5 Gbps over up to 30 km.
Enhanced Performance & Functionality of Tunable Delay Lines
2012-08-01
Figure 6. Experimental setup. Transmitter is capable of generating 80-Gb/s RZ-DQPSK, 40-Gb/s RZ-DPSK and 40-Gb/s RZ-OOK modulation formats. Phase ... Power penalty with respect to B2B of each channel for 2-, 4-, 8-fold multicasting. (c) Pulsewidth as a function of DGD along with eye diagrams of 2 ... Figure 99. Concept. (a) A distributed optical network; (b) NOLMs for
Scalable Technology for a New Generation of Collaborative Applications
2007-04-01
of the International Symposium on Distributed Computing (DISC), Cracow, Poland, September 2005. Classic Paxos vs. Fast Paxos: Caveat Emptor, Flavio ... grou or able and fast multicast primitive to layer under high-level latency across dimensions as varied as group size [10, 17], abstractions such as ... servers, networked via fast, dedicated interconnects. The system to subscribe to a fraction of the equities on the software stack running on a single
Performance Evaluation of Reliable Multicast Protocol for Checkout and Launch Control Systems
NASA Technical Reports Server (NTRS)
Shu, Wei Wennie; Porter, John
2000-01-01
The overall objective of this project is to study reliability and performance of Real Time Critical Network (RTCN) for checkout and launch control systems (CLCS). The major tasks include reliability and performance evaluation of Reliable Multicast (RM) package and fault tolerance analysis and design of dual redundant network architecture.
2003-09-01
This restriction limits the deployment to small and medium sized enterprises. The Internet cannot universally use DVMRP for this reason. In addition ...
A Low-Complexity Subgroup Formation with QoS-Aware for Enhancing Multicast Services in LTE Networks
NASA Astrophysics Data System (ADS)
Algharem, M.; Omar, M. H.; Rahmat, R. F.; Budiarto, R.
2018-03-01
The high demand for multimedia services in Long Term Evolution (LTE) and beyond networks forces network operators to find a solution that can handle the huge traffic. Along with this, subgroup formation techniques have been introduced to overcome the limitations of the Conventional Multicast Scheme (CMS) by splitting the multicast users into several subgroups based on the users' channel signal quality. However, finding the best subgroup configuration with low complexity requires further investigation. In this paper, efficient and simple subgroup formation mechanisms are proposed. The proposed mechanisms take the transmitter MAC queue into account. The effectiveness of the proposed mechanisms is evaluated and compared with CMS in terms of throughput, fairness, delay, and Block Error Rate (BLER).
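As a rough illustration of why subgrouping can beat CMS, the sketch below (assumptions: users are partitioned by sorted reported rate, subgroups time-share the frame equally, and each subgroup is served at its worst member's rate; none of this is taken from the paper) exhaustively tries every two-subgroup split and keeps the one with the highest aggregate throughput.

# Illustrative sketch (not the paper's mechanism): partition multicast users,
# sorted by their reported channel rate, into two subgroups and pick the split
# that maximizes aggregate throughput under equal time-sharing.

def best_two_subgroups(user_rates):
    rates = sorted(user_rates)
    n = len(rates)
    # CMS baseline: everyone is served at the worst user's rate.
    best = ("CMS", rates[0] * n)
    for split in range(1, n):
        low, high = rates[:split], rates[split:]
        # Each subgroup is served at its own worst rate and gets half the frame.
        throughput = 0.5 * (low[0] * len(low) + high[0] * len(high))
        if throughput > best[1]:
            best = (f"split at index {split}", throughput)
    return best

print(best_two_subgroups([1.2, 1.5, 3.0, 3.2, 5.5, 6.0]))  # rates in Mb/s, illustrative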
Proxy-assisted multicasting of video streams over mobile wireless networks
NASA Astrophysics Data System (ADS)
Nguyen, Maggie; Pezeshkmehr, Layla; Moh, Melody
2005-03-01
This work addresses the challenge of providing seamless multimedia services to mobile users by proposing a proxy-assisted multicast architecture for delivery of video streams. We propose a hybrid system of streaming proxies, interconnected by an application-layer multicast tree, where each proxy acts as a cluster head to stream out content to its stationary and mobile users. The architecture is based on our previously proposed Enhanced-NICE protocol, which uses an application-layer multicast tree to deliver layered video streams to multiple heterogeneous receivers. We focused the study on the placement of streaming proxies to enable efficient delivery of live and on-demand video, supporting both stationary and mobile users. The simulation results are evaluated and compared with two other baseline scenarios: one with a centralized proxy system serving the entire population and one with mini-proxies each serving its local users. The simulations are implemented using the J-SIM simulator. The results show that even though proxies in the hybrid scenario experienced a slightly longer delay, they had the lowest drop rate of video content. This finding illustrates the significance of task sharing among multiple proxies. The resulting load balancing among proxies provides better video quality delivered to a larger audience.
A Secure Group Communication Architecture for a Swarm of Autonomous Unmanned Aerial Vehicles
2008-03-01
members to use the same decryption key. This shared decryption key is called the Session Encryption Key (SEK) or Traffic Encryption Key (TEK) ... Since everyone shares the SEK, members need to hold additional Key Encryption Keys (KEK) that are used to securely distribute the SEK to each valid ... managing this process. To preserve the secrecy of the multicast data, the SEK needs to be updated upon certain events such as a member joining and
NASA Technical Reports Server (NTRS)
Stoenescu, Tudor M.; Woo, Simon S.
2009-01-01
In this work, we consider information dissemination and sharing in a distributed peer-to-peer (P2P) highly dynamic communication network. In particular, we explore a network coding technique for transmission and a rank-based peer selection method for network formation. The combined approach has been shown to improve information sharing and delivery to all users when considering the challenges imposed by space network environments.
Multicasting based optical inverse multiplexing in elastic optical network.
Guo, Bingli; Xu, Yingying; Zhu, Paikun; Zhong, Yucheng; Chen, Yuanxiang; Li, Juhao; Chen, Zhangyuan; He, Yongqi
2014-06-16
Optical multicasting based inverse multiplexing (IM) is introduced into the spectrum allocation of elastic optical networks to resolve the spectrum fragmentation problem, where superchannels can be split and fitted into several discrete spectrum blocks at the intermediate node. We experimentally demonstrate it with a 1-to-7 optical superchannel multicasting module and selecting/coupling components. Also, simulation results show that, compared with several emerging spectrum defragmentation solutions (e.g., spectrum conversion, split spectrum), IM can reduce blocking significantly without adding as much system complexity as split spectrum. On the other hand, service fairness for traffic with different granularities under these schemes is investigated for the first time, and it shows that IM performs better than spectrum conversion and almost as well as split spectrum, especially for smaller-size traffic under light traffic intensity.
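The inverse-multiplexing idea can be sketched as a spectrum-assignment routine (illustrative only; the block representation and the largest-block-first splitting rule are assumptions, not the authors' algorithm): a request that cannot fit contiguously is split across several free blocks instead of being blocked.

# Simple sketch of the inverse-multiplexing idea: try to fit a superchannel of
# `demand` slots into one contiguous free block; if that fails, split it across
# several free blocks, as IM allows.

def allocate(free_blocks, demand):
    """free_blocks: list of (start_slot, length). Returns list of (start, used)."""
    # Contiguous first-fit, as a conventional elastic-optical allocation would do.
    for start, length in free_blocks:
        if length >= demand:
            return [(start, demand)]
    # Inverse multiplexing: split the superchannel over discrete blocks.
    assignment, remaining = [], demand
    for start, length in sorted(free_blocks, key=lambda b: -b[1]):
        if remaining == 0:
            break
        used = min(length, remaining)
        assignment.append((start, used))
        remaining -= used
    return assignment if remaining == 0 else None   # None -> request blocked

print(allocate([(0, 3), (10, 4), (20, 2)], demand=6))   # split over two blocks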
NASA Astrophysics Data System (ADS)
Huang, Feng; Sun, Lifeng; Zhong, Yuzhuo
2006-01-01
Robust transmission of live video over ad hoc wireless networks presents new challenges: high bandwidth requirements are coupled with delay constraints; even a single packet loss causes error propagation until a complete video frame is coded in the intra-mode; and ad hoc wireless networks suffer from bursty packet losses that drastically degrade the viewing experience. Accordingly, we propose a novel UMD coder capable of quickly recovering from losses and ensuring continuous playout. It uses 'peg' frames to prevent error propagation in the High-Resolution (HR) description and improve the robustness of key frames. The Low-Resolution (LR) coder works independently of the HR one, but they can also help each other recover from losses. Like many UMD coders, our UMD coder is drift-free, disruption-tolerant and able to make good use of the asymmetric available bandwidths of multiple paths. The simulation results under different conditions show that the proposed UMD coder has the highest decoded quality and lowest probability of pause when compared with concurrent UMDC techniques. The coder also has comparable decoded quality, lower startup delay and lower probability of pause compared with a state-of-the-art FEC-based scheme. To provide robustness for video multicast applications, we propose non-end-to-end UMDC-based video distribution over a multi-tree multicast network. The multiplicity of parents decorrelates losses and the non-end-to-end feature increases the throughput of UMDC video data. We deploy an application-level service of LR description reconstruction in some intermediate nodes of the LR multicast tree. The principle behind this is to reconstruct the disrupted LR frames from the correctly received HR frames. As a result, the viewing experience at the downstream nodes benefits from the protection reconstruction at the upstream nodes.
Reliable multicast protocol specifications protocol operations
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd; Whetten, Brian
1995-01-01
This appendix contains the complete state tables for Reliable Multicast Protocol (RMP) Normal Operation, Multi-RPC Extensions, Membership Change Extensions, and Reformation Extensions. First the event types are presented. Afterwards, each RMP operation state, normal and extended, is presented individually and its events shown. Events in the RMP specification are one of several things: (1) arriving packets, (2) expired alarms, (3) user events, (4) exceptional conditions.
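A state table of this kind is naturally encoded as a dispatch map from (state, event) pairs to handlers. The sketch below uses the four event classes listed above, but the state names and handler actions are hypothetical placeholders rather than the RMP specification.

# Sketch of how an RMP-style state table can be encoded in software; the state
# names and handler actions are hypothetical placeholders, not the RMP spec.

from enum import Enum, auto

class Event(Enum):
    PACKET_ARRIVED = auto()     # (1) arriving packets
    ALARM_EXPIRED = auto()      # (2) expired alarms
    USER_REQUEST = auto()       # (3) user events
    EXCEPTION = auto()          # (4) exceptional conditions

class State(Enum):
    IDLE = auto()
    MEMBER = auto()
    REFORMING = auto()

def log(msg):
    print(msg)

# (state, event) -> (handler, next_state); unlisted pairs are ignored here.
TABLE = {
    (State.IDLE, Event.USER_REQUEST):     (lambda: log("join group"), State.MEMBER),
    (State.MEMBER, Event.PACKET_ARRIVED): (lambda: log("order and deliver packet"), State.MEMBER),
    (State.MEMBER, Event.ALARM_EXPIRED):  (lambda: log("send NACK / retransmit"), State.MEMBER),
    (State.MEMBER, Event.EXCEPTION):      (lambda: log("start reformation"), State.REFORMING),
}

def dispatch(state, event):
    handler, next_state = TABLE.get((state, event), (lambda: log("ignored"), state))
    handler()
    return next_state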
Tera-node Network Technology (Task 3) Scalable Personal Telecommunications
2000-03-14
Simulation results of this work may be found in http://north.east.isi.edu/spt/audio.html. 6. Internet Research Task Force Reliable Multicast ... Adaptation, 4. Multimedia Proxy Caching, 5. Experiments with the Rate Adaptation Protocol (RAP), 6. Providing leadership and innovation to the Internet ... Research Task Force (IRTF) Reliable Multicast Research Group (RMRG), 1. End-to-end Architecture for Quality-adaptive Streaming Applications over the
Development of a Web-Based Distributed Interactive Simulation (DIS) Environment Using JavaScript
2014-09-01
scripting that lets users change or interact with web content depending on user input, which is in contrast with server-side scripts such as PHP, Java and ... transfer, DIS usually broadcasts or multicasts its PDUs based on UDP sockets. 3. JavaScript JavaScript is the scripting language of the web, and all ... IDE) for developing desktop, mobile and web applications with Java, C++, HTML5, JavaScript and more. b. Framework The DIS implementation of
Overview of AMS (CCSDS Asynchronous Message Service)
NASA Technical Reports Server (NTRS)
Burleigh, Scott
2006-01-01
This viewgraph presentation gives an overview of the Consultative Committee for Space Data Systems (CCSDS) Asynchronous Message Service (AMS). The topics include: 1) Key Features; 2) A single AMS continuum; 3) The AMS Protocol Suite; 4) A multi-continuum venture; 5) Constraining transmissions; 6) Security; 7) Fault Tolerance; 8) Performance of Reference Implementation; 9) AMS vs Multicast (1); 10) AMS vs Multicast (2); 11) RAMS testing exercise; and 12) Results.
Robust Group Sparse Beamforming for Multicast Green Cloud-RAN With Imperfect CSI
NASA Astrophysics Data System (ADS)
Shi, Yuanming; Zhang, Jun; Letaief, Khaled B.
2015-09-01
In this paper, we investigate the network power minimization problem for the multicast cloud radio access network (Cloud-RAN) with imperfect channel state information (CSI). The key observation is that network power minimization can be achieved by adaptively selecting active remote radio heads (RRHs) via controlling the group-sparsity structure of the beamforming vector. However, this yields a non-convex combinatorial optimization problem, for which we propose a three-stage robust group sparse beamforming algorithm. In the first stage, a quadratic variational formulation of the weighted mixed l1/l2-norm is proposed to induce the group-sparsity structure in the aggregated beamforming vector, which indicates those RRHs that can be switched off. A perturbed alternating optimization algorithm is then proposed to solve the resultant non-convex group-sparsity inducing optimization problem by exploiting its convex substructures. In the second stage, we propose a PhaseLift technique based algorithm to solve the feasibility problem with a given active RRH set, which helps determine the active RRHs. Finally, the semidefinite relaxation (SDR) technique is adopted to determine the robust multicast beamformers. Simulation results demonstrate the convergence of the perturbed alternating optimization algorithm, as well as the effectiveness of the proposed algorithm in minimizing the network power consumption for multicast Cloud-RAN.
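For reference, the group-sparsity-inducing term referred to above is the weighted mixed l1/l2-norm, which in its standard form (the per-RRH grouping and the weights follow the description above; the paper's quadratic variational reformulation is not reproduced here) reads:

\Omega(\mathbf{w}) \;=\; \sum_{l=1}^{L} \omega_l \, \bigl\| \mathbf{w}_l \bigr\|_2 ,
\qquad
\mathbf{w}_l = \bigl[ \mathbf{w}_{l1}^{\mathsf{T}}, \ldots, \mathbf{w}_{lM}^{\mathsf{T}} \bigr]^{\mathsf{T}} ,

where \mathbf{w}_l collects the beamforming coefficients of RRH l across the multicast groups. Driving \| \mathbf{w}_l \|_2 to zero for some l corresponds to switching off RRH l, which is what ties the sparsity pattern to network power consumption.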
NASA Astrophysics Data System (ADS)
Liu, Yu; Lin, Xiaocheng; Fan, Nianfei; Zhang, Lin
2016-01-01
Wireless video multicast has become one of the key technologies in wireless applications. But the main challenge of conventional wireless video multicast, i.e., the cliff effect, remains unsolved. To overcome the cliff effect, a hybrid digital-analog (HDA) video transmission framework based on SoftCast, which transmits the digital bitstream with the quantization residuals, is proposed. With an effective power allocation algorithm and appropriate parameter settings, the residual gains can be maximized; meanwhile, the digital bitstream can assure transmission of a basic video to the multicast receiver group. In the multiple-input multiple-output (MIMO) system, since nonuniform noise interference on different antennas can be regarded as the cliff effect problem, ParCast, which is a variation of SoftCast, is also applied to video transmission to solve it. The HDA scheme with corresponding power allocation algorithms is also applied to improve video performance. Simulations show that the proposed HDA scheme can overcome the cliff effect completely with the transmission of residuals. What is more, it outperforms the compared WSVC scheme by more than 2 dB when transmitting under the same bandwidth, and it can further improve performance by nearly 8 dB in MIMO when compared with the ParCast scheme.
Apply network coding for H.264/SVC multicasting
NASA Astrophysics Data System (ADS)
Wang, Hui; Kuo, C.-C. Jay
2008-08-01
In a packet erasure network environment, video streaming benefits from error control in two ways to achieve graceful degradation. The first approach is application-level (or link-level) forward error correction (FEC) to provide erasure protection. The second error control approach is error concealment at the decoder end to compensate for lost packets. A large amount of research work has been done in the above two areas. More recently, network coding (NC) techniques have been proposed for efficient data multicast over networks. It was shown in our previous work that multicast video streaming benefits from NC through its throughput improvement. An algebraic model is given to analyze the performance in this work. By exploiting the linear combination of video packets along nodes in a network and the SVC video format, the system achieves path diversity automatically and enables efficient video delivery to heterogeneous receivers over packet erasure channels. The application of network coding can protect video packets against the erasure network environment. However, the rank deficiency problem of random linear network coding makes error concealment inefficient. It is shown by computer simulation that the proposed NC video multicast scheme enables heterogeneous receivers to receive according to their capacity constraints, but special design is needed to improve video transmission performance when applying network coding.
Multicast Parametric Synchronous Sampling
2011-09-01
enhancement in a parametric mixer device. Fig. 4 shows the principle of generating uniform, high quality replicas extending over previously un-attainable ... critical part of the MPASS architecture and is responsible for the direct and continuous acquisition of data across all of the multicast signal copies ... (ii) ability to copy THz signals with impunity to tens of replicas; (iii) all-optical delays > 1.9 us; (iv) 10's of THz-fast all-optical sampling of
Reliable multicast protocol specifications flow control and NACK policy
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.; Whetten, Brian
1995-01-01
This appendix presents the flow and congestion control schemes recommended for RMP and a NACK policy based on the whiteboard tool. Because RMP uses a primarily NACK-based error detection scheme, there is no direct feedback path through which receivers can signal losses caused by low buffer space or congestion. Reliable multicast protocols also suffer from the fact that throughput for a multicast group must be divided among the members of the group. This division is usually very dynamic in nature and therefore does not lend itself well to a priori determination. These facts have led the flow and congestion control schemes of RMP to be made completely orthogonal to the protocol specification. This allows several differing schemes to be used in different environments to produce the best results. As a default, a modified sliding window scheme based on previous algorithms is suggested and described below.
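A sender-side sliding window of the general kind mentioned above can be sketched as follows (a generic AIMD-style window, not the scheme recommended in the appendix; all constants are illustrative): the window shrinks when any receiver NACKs and grows slowly otherwise.

# Minimal sketch of sliding-window sender rate control in the spirit of the
# text (not RMP's recommended scheme): shrink the window when NACKs arrive,
# grow it additively otherwise.

class SlidingWindowSender:
    def __init__(self, window=16, min_window=1, max_window=256):
        self.window = window
        self.min_window = min_window
        self.max_window = max_window
        self.outstanding = 0          # packets sent but not yet acknowledged

    def can_send(self) -> bool:
        return self.outstanding < self.window

    def on_send(self):
        self.outstanding += 1

    def on_ack(self):
        self.outstanding = max(0, self.outstanding - 1)
        # Additive increase while the group reports no losses.
        self.window = min(self.max_window, self.window + 1)

    def on_nack(self):
        # Multiplicative decrease on any receiver's loss report.
        self.window = max(self.min_window, self.window // 2)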
Fixed-rate layered multicast congestion control
NASA Astrophysics Data System (ADS)
Bing, Zhang; Bing, Yuan; Zengji, Liu
2006-10-01
A new fixed-rate layered multicast congestion control algorithm called FLMCC is proposed. The sender of a multicast session transmits data packets at a fixed rate on each layer, while receivers each obtain different throughput by cumulatively subscribing to different numbers of layers based on their expected rates. In order to provide TCP-friendliness and estimate the expected rate accurately, a window-based mechanism implemented at the receivers is presented. To achieve this, each receiver maintains a congestion window, adjusts it based on the GAIMD algorithm, and calculates an expected rate from the congestion window. To measure RTT, a new method is presented which combines an accurate measurement with a rough estimation. A feedback suppression scheme based on a random timer mechanism is given to avoid feedback implosion during the accurate measurement. The protocol is simple to implement. Simulations indicate that FLMCC shows good TCP-friendliness, responsiveness as well as intra-protocol fairness, and provides high link utilization.
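The receiver-side logic described above can be sketched as follows (the GAIMD parameters, packet size, and per-layer rates are illustrative assumptions, not values from the paper): a congestion window is updated per RTT, converted to an expected rate, and mapped to the largest set of cumulative layers whose total rate fits under it.

# Rough sketch of the receiver-side idea described above. All constants are
# assumptions for illustration only.

LAYER_RATES = [64, 128, 256, 512]   # kb/s per layer, cumulative subscription

class FLMCCReceiver:
    def __init__(self, rtt_s, alpha=1.0, beta=0.5, pkt_kbits=8.0):
        self.cwnd = 1.0          # congestion window, in packets
        self.rtt = rtt_s
        self.alpha = alpha       # additive increase per RTT
        self.beta = beta         # multiplicative decrease on loss
        self.pkt_kbits = pkt_kbits

    def on_rtt_no_loss(self):
        self.cwnd += self.alpha

    def on_loss(self):
        self.cwnd = max(1.0, self.cwnd * self.beta)

    def expected_rate_kbps(self):
        return self.cwnd * self.pkt_kbits / self.rtt

    def layers_to_subscribe(self):
        rate, total, layers = self.expected_rate_kbps(), 0.0, 0
        for r in LAYER_RATES:
            if total + r > rate:
                break
            total += r
            layers += 1
        return layers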
Ahlawat, Meenu; Bostani, Ameneh; Tehranchi, Amirhossein; Kashyap, Raman
2013-08-01
We experimentally demonstrate the possibility of agile multicasting for wavelength division multiplexing (WDM) networks, of a single-channel to two and seven channels over the C band, also extendable to S and L bands. This is based on cascaded χ(2) nonlinear mixing processes, namely, second-harmonic generation (SHG)-sum-frequency generation (SFG) and difference-frequency generation (DFG) in a 20-mm-long step-chirped periodically poled lithium niobate crystal, specially designed and fabricated for a 28-nm-wide SH-SF bandwidth centered at around 1.55 μm. The multiple idlers are simultaneously tuned by detuning the pump wavelengths within the broad SH-SF bandwidth. By selectively tuning the pump wavelengths over less than 10 and 6 nm, respectively, multicasting into two and seven idlers is successfully achieved across ~70 WDM channels within the 50 GHz International Telecommunication Union grid spacing.
AF-TRUST, Air Force Team for Research in Ubiquitous Secure Technology
2010-07-26
Charles Sutton, J. D. Tygar, and Kai Xia. Book chapter in Jeffrey J. P. Tsai and Philip S. Yu (eds.) Machine Learning in Cyber Trust: Security, Privacy ... enterprise, tactical, embedded systems and command and control levels. From these studies, commissioned by Dr. Sekar Chandersekaran of the Secretary of the ... Data centers avoid IP Multicast because of a series of problems with the technology. • Dr. Multicast (the MCMD), a system that maps traditional IPMC
Secure Hierarchical Multicast Routing and Multicast Internet Anonymity
1998-06-01
Multimedia, Summer 94, pages 76-79, 94. [15] David Chaum. Blind signatures for untraceable payments. In Proc. Crypto, pages 199-203, 1982. [16] David L ... use of digital signatures, which consist of a cryptographic hash of the message encrypted with the private key of the signer. Digitally-signed messages ... signature on the request and on the certificate it contains. Notice that the location service need not retrieve the initiator's public key as it is contained
Corridor One: An Integrated Distance Visualization Environment for SSI+ASCI Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christopher R. Johnson, Charles D. Hansen
2001-10-29
The goal of Corridor One: An Integrated Distance Visualization Environment for ASCI and SSI Application was to combine the forces of six leading edge laboratories working in the areas of visualization and distributed computing and high performance networking (Argonne National Laboratory, Lawrence Berkeley National Laboratory, Los Alamos National Laboratory, University of Illinois, University of Utah and Princeton University) to develop and deploy the most advanced integrated distance visualization environment for large-scale scientific visualization and demonstrate it on applications relevant to the DOE SSI and ASCI programs. The Corridor One team brought world class expertise in parallel rendering, deep image-based rendering, immersive environment technology, large-format multi-projector wall based displays, volume and surface visualization algorithms, collaboration tools and streaming media technology, network protocols for image transmission, high-performance networking, quality of service technology and distributed computing middleware. Our strategy was to build on the very successful teams that produced the I-WAY, ''Computational Grids'' and CAVE technology and to add these to the teams that have developed the fastest parallel visualization systems and the most widely used networking infrastructure for multicast and distributed media. Unfortunately, just as we were getting going on the Corridor One project, DOE cut the program after the first year. As such, our final report consists of our progress during year one of the grant.
An Economic Case for End System Multicast
NASA Astrophysics Data System (ADS)
Analoui, Morteza; Rezvani, Mohammad Hossein
This paper presents a non-strategic model for end-system multicast networks based on the concept of a replica exchange economy. We believe that microeconomics is a good candidate for investigating the problem of selfishness of the end-users (peers) in order to maximize the aggregate throughput. In this solution concept, the decisions that a peer might make do not affect the actions of the other peers at all. The proposed mechanism tunes the price of the service in such a way that general equilibrium holds.
DDS as middleware of the Southern African Large Telescope control system
NASA Astrophysics Data System (ADS)
Maartens, Deneys S.; Brink, Janus D.
2016-07-01
The Southern African Large Telescope (SALT) software control system is realised as a distributed control system, implemented predominantly in National Instruments' LabVIEW. The telescope control subsystems communicate using cyclic, state-based messages. Currently, transmitting a message is accomplished by performing an HTTP PUT request to a WebDAV directory on a centralised Apache web server, while receiving is based on polling the web server for new messages. While the method works, it presents a number of drawbacks; a scalable distributed communication solution with minimal overhead is a better fit for control systems. This paper describes our exploration of the Data Distribution Service (DDS). DDS is a formal standard specification, defined by the Object Management Group (OMG), that presents a data-centric publish-subscribe model for distributed application communication and integration. It provides an infrastructure for platform-independent many-to-many communication. A number of vendors provide implementations of the DDS standard; RTI, in particular, provides a DDS toolkit for LabVIEW. This toolkit has been evaluated against the needs of SALT, and a few deficiencies have been identified. We have developed our own implementation that interfaces LabVIEW to DDS in order to address our specific needs. Our LabVIEW DDS interface implementation is built against the RTI DDS Core component, provided by RTI under their Open Community Source licence. Our needs dictate that the interface implementation be platform independent. Since we have access to the RTI DDS Core source code, we are able to build the RTI DDS libraries for any of the platforms on which we require support. The communications functionality is based on UDP multicasting. Multicasting is an efficient communications mechanism with low overheads which avoids duplicated point-to-point transmission of data on a network where there are multiple recipients of the data. In the paper we present a performance evaluation of DDS against the current HTTP-based implementation as well as the historical DataSocket implementation. We conclude with a summary and describe future work.
Digital Multicasting of Multiple Audio Streams
NASA Technical Reports Server (NTRS)
Macha, Mitchell; Bullock, John
2007-01-01
The Mission Control Center Voice Over Internet Protocol (MCC VOIP) system (see figure) comprises hardware and software that effect simultaneous, nearly real-time transmission of as many as 14 different audio streams to authorized listeners via the MCC intranet and/or the Internet. The original version of the MCC VOIP system was conceived to enable flight-support personnel located in offices outside a spacecraft mission control center to monitor audio loops within the mission control center. Different versions of the MCC VOIP system could be used for a variety of public and commercial purposes - for example, to enable members of the general public to monitor one or more NASA audio streams through their home computers, to enable air-traffic supervisors to monitor communication between airline pilots and air-traffic controllers in training, and to monitor conferences among brokers in a stock exchange. At the transmitting end, the audio-distribution process begins with feeding the audio signals to analog-to-digital converters. The resulting digital streams are sent through the MCC intranet, using a user datagram protocol (UDP), to a server that converts them to encrypted data packets. The encrypted data packets are then routed to the personal computers of authorized users by use of multicasting techniques. The total data-processing load on the portion of the system upstream of and including the encryption server is the total load imposed by all of the audio streams being encoded, regardless of the number of the listeners or the number of streams being monitored concurrently by the listeners. The personal computer of a user authorized to listen is equipped with special-purpose MCC audio-player software. When the user launches the program, the user is prompted to provide identification and a password. In one of two access-control provisions, the program is hard-coded to validate the user's identity and password against a list maintained on a domain-controller computer at the MCC. In the other access-control provision, the program verifies that the user is authorized to have access to the audio streams. Once both access-control checks are completed, the audio software presents a graphical display that includes audiostream-selection buttons and volume-control sliders. The user can select all or any subset of the available audio streams and can adjust the volume of each stream independently of that of the other streams. The audio-player program spawns a "read" process for the selected stream(s). The spawned process sends, to the router(s), a "multicast-join" request for the selected streams. The router(s) responds to the request by sending the encrypted multicast packets to the spawned process. The spawned process receives the encrypted multicast packets and sends a decryption packet to audio-driver software. As the volume or muting features are changed by the user, interrupts are sent to the spawned process to change the corresponding attributes sent to the audio-driver software. The total latency of this system - that is, the total time from the origination of the audio signals to generation of sound at a listener's computer - lies between four and six seconds.
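The "multicast-join" step that the spawned read process performs corresponds to a standard IP multicast group subscription. The sketch below shows that step in Python; the group address and port are placeholders, and the real client additionally decrypts each packet and feeds it to the audio driver.

# Sketch of a UDP multicast subscription; the group, port, and handling of the
# received packets are placeholders, not the MCC VOIP implementation.

import socket
import struct

GROUP, PORT = "239.1.2.3", 5004      # hypothetical audio-stream group

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group causes the router(s) to forward the stream to this host.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    packet, addr = sock.recvfrom(2048)
    # Real client: decrypt the packet, then hand samples to the audio driver.
    print(f"received {len(packet)} bytes from {addr}")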
Miao, Wang; Luo, Jun; Di Lucente, Stefano; Dorren, Harm; Calabretta, Nicola
2014-02-10
We propose and demonstrate an optical flat datacenter network based on a scalable optical switch system with optical flow control. A modular structure with distributed control results in port-count-independent optical switch reconfiguration time. An RF tone in-band labeling technique allowing parallel processing of the label bits ensures low-latency operation regardless of the switch port count. Hardware flow control is conducted at the optical level by re-using the label wavelength without occupying extra bandwidth, space, and network resources, which further improves the latency performance within a simple structure. Dynamic switching including multicasting operation is validated for a 4 x 4 system. Error-free operation of 40 Gb/s data packets has been achieved with only 1 dB penalty. The system can handle an input load up to 0.5 while providing a packet loss lower than 10^-5 and an average latency less than 500 ns when a buffer size of 16 packets is employed. Investigation of scalability also indicates that the proposed system could potentially scale up to a large port count with limited power penalty.
A framework using cluster-based hybrid network architecture for collaborative virtual surgery.
Qin, Jing; Choi, Kup-Sze; Poon, Wai-Sang; Heng, Pheng-Ann
2009-12-01
Research on collaborative virtual environments (CVEs) opens the opportunity for simulating the cooperative work in surgical operations. It is however a challenging task to implement a high performance collaborative surgical simulation system because of the difficulty in maintaining state consistency with minimum network latencies, especially when sophisticated deformable models and haptics are involved. In this paper, an integrated framework using cluster-based hybrid network architecture is proposed to support collaborative virtual surgery. Multicast transmission is employed to transmit updated information among participants in order to reduce network latencies, while system consistency is maintained by an administrative server. Reliable multicast is implemented using distributed message acknowledgment based on cluster cooperation and sliding window technique. The robustness of the framework is guaranteed by the failure detection chain which enables smooth transition when participants join and leave the collaboration, including normal and involuntary leaving. Communication overhead is further reduced by implementing a number of management approaches such as computational policies and collaborative mechanisms. The feasibility of the proposed framework is demonstrated by successfully extending an existing standalone orthopedic surgery trainer into a collaborative simulation system. A series of experiments have been conducted to evaluate the system performance. The results demonstrate that the proposed framework is capable of supporting collaborative surgical simulation.
Distributed Ship Navigation Control System Based on Dual Network
NASA Astrophysics Data System (ADS)
Yao, Ying; Lv, Wu
2017-10-01
The navigation system is very important for a ship's normal operation. There are many devices and sensors in the navigation system to guarantee the ship's regular work. In the past, these devices and sensors were usually connected via a CAN bus for high performance and reliability. However, with the development of related devices and sensors, the navigation system also needs high information throughput and remote data sharing. To meet these new requirements, we propose a communication method based on a dual network which contains a CAN bus and industrial Ethernet. Also, we introduce multiple distributed control terminals with a cooperative strategy based on the idea of synchronizing status by multicasting UDP messages containing operation timestamps, to make the system more efficient and reliable.
Multicasting for all-optical multifiber networks
NASA Astrophysics Data System (ADS)
Köksal, Fatih; Ersoy, Cem
2007-02-01
All-optical wavelength-routed WDM WANs can support the high bandwidth and the long session duration requirements of application scenarios such as interactive distance learning or on-line diagnosis of patients simultaneously in different hospitals. However, multifiber and limited sparse light splitting and wavelength conversion capabilities of switches result in a difficult optimization problem. We attack this problem using a layered graph model. The problem is defined as a k-edge-disjoint degree-constrained Steiner tree problem for routing and fiber and wavelength assignment of k multicasts. A mixed integer linear programming formulation for the problem is given, and a solution using CPLEX is provided. However, the complexity of the problem grows quickly with respect to the number of edges in the layered graph, which depends on the number of nodes, fibers, wavelengths, and multicast sessions. Hence, we propose two heuristics [layered all-optical multicast algorithm (LAMA) and conservative fiber and wavelength assignment (C-FWA)] to compare with CPLEX, existing work, and unicasting. Extensive computational experiments show that LAMA's performance is very close to CPLEX, and it is significantly better than existing work and C-FWA for nearly all metrics, since LAMA jointly optimizes the routing and fiber-wavelength assignment phases, whereas the other candidates attack the problem by decomposing it into two phases. Experiments also show that important metrics (e.g., session and group blocking probability, transmitter wavelength, and fiber conversion resources) are adversely affected by the separation of the two phases. Finally, the fiber-wavelength assignment strategy of C-FWA (Ex-Fit) uses wavelength and fiber conversion resources more effectively than First Fit.
Multicast Delayed Authentication For Streaming Synchrophasor Data in the Smart Grid
Câmara, Sérgio; Anand, Dhananjay; Pillitteri, Victoria; Carmo, Luiz
2017-01-01
Multicast authentication of synchrophasor data is challenging due to the design requirements of Smart Grid monitoring systems such as low security overhead, tolerance of lossy networks, time-criticality and high data rates. In this work, we propose inf-TESLA, Infinite Timed Efficient Stream Loss-tolerant Authentication, a multicast delayed authentication protocol for communication links used to stream synchrophasor data for wide area control of electric power networks. Our approach is based on the authentication protocol TESLA but is augmented to accommodate high frequency transmissions of unbounded length. The inf-TESLA protocol utilizes the Dual Offset Key Chains mechanism to reduce authentication delay and the computational cost associated with key chain commitment. We provide a description of the mechanism using two different modes for disclosing keys and demonstrate its security against a man-in-the-middle attack attempt. We compare our approach against the TESLA protocol in a 2-day simulation scenario, showing a reduction of 15.82% and 47.29% in computational cost, for sender and receiver respectively, and a cumulative reduction in the communication overhead. PMID:28736582
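The key-chain commitment that TESLA-style protocols (including inf-TESLA) build on is a one-way hash chain with delayed key disclosure. The sketch below shows only that generic building block, not the Dual Offset Key Chains mechanism itself; the chain length, hash choice, and frame contents are illustrative.

# Sketch of the one-way key chain underlying TESLA-style delayed authentication.

import hashlib, hmac, os

def make_key_chain(length: int):
    """K[i] = H(K[i+1]); the sender discloses keys in increasing i order."""
    chain = [b""] * length
    chain[-1] = os.urandom(32)                       # random seed key
    for i in range(length - 2, -1, -1):
        chain[i] = hashlib.sha256(chain[i + 1]).digest()
    return chain

def verify_disclosed_key(disclosed, commitment, max_steps):
    """A receiver holding the commitment K[0] checks a later disclosed key."""
    k = disclosed
    for _ in range(max_steps):
        if k == commitment:
            return True
        k = hashlib.sha256(k).digest()
    return False

chain = make_key_chain(8)
assert verify_disclosed_key(chain[5], commitment=chain[0], max_steps=8)

# Per-interval MAC over a data frame, keyed with that interval's (later
# disclosed) chain key.
frame = b"phasor measurement frame"
tag = hmac.new(chain[5], frame, hashlib.sha256).digest()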
NASA Astrophysics Data System (ADS)
Singh, Sukhbir; Singh, Surinder
2017-11-01
This paper investigates the effect of FWM and its suppression using optical phase conjugation (OPC) modules in a dispersion-managed hybrid WDM-OTDM multicast overlay system. Interaction between propagating wavelength signals at higher power levels generates new FWM components that can significantly limit system performance. The OPC module consists of a pump signal and 0.6 km of HNLF placed midway along the optical link to generate destructive-phase FWM components. The investigation reveals that using the OPC module in the optical link reduces the FWM power and mitigates the interaction between wavelength signals at varying signal input power, dispersion parameter (β2) and transmission distance. System performance is also compared without the DM-OPC module, with DM only, and with the DM-OPC module in terms of FWM tolerance. The BER performance of the hybrid WDM-OTDM multicast system using the OPC module is improved by a factor of 2 compared to the dispersion-managed case, and the coverage distance is increased by a factor of 2 relative to Singh and Singh (2016).
Experimental Evaluation of Unicast and Multicast CoAP Group Communication
Ishaq, Isam; Hoebeke, Jeroen; Moerman, Ingrid; Demeester, Piet
2016-01-01
The Internet of Things (IoT) is expanding rapidly to new domains in which embedded devices play a key role and gradually outnumber traditionally-connected devices. These devices are often constrained in their resources and are thus unable to run standard Internet protocols. The Constrained Application Protocol (CoAP) is a new alternative standard protocol that implements the same principles as the Hypertext Transfer Protocol (HTTP), but is tailored towards constrained devices. In many IoT application domains, devices need to be addressed in groups in addition to being addressable individually. Two main approaches are currently being proposed in the IoT community for CoAP-based group communication. The main difference between the two approaches lies in the underlying communication type: multicast versus unicast. In this article, we experimentally evaluate those two approaches using two wireless sensor testbeds and under different test conditions. We highlight the pros and cons of each of them and propose combining these approaches in a hybrid solution to better suit certain use case requirements. Additionally, we provide a solution for multicast-based group membership management using CoAP. PMID:27455262
Hybrid ARQ Scheme with Autonomous Retransmission for Multicasting in Wireless Sensor Networks.
Jung, Young-Ho; Choi, Jihoon
2017-02-25
A new hybrid automatic repeat request (HARQ) scheme for multicast service in wireless sensor networks is proposed in this study. In the proposed algorithm, the HARQ operation is combined with an autonomous retransmission method that ensures a data packet is transmitted irrespective of whether or not the packet is successfully decoded at the receivers. The optimal number of autonomous retransmissions is determined to ensure maximum spectral efficiency, and a practical method that adjusts the number of autonomous retransmissions for realistic conditions is developed. Simulation results show that the proposed method achieves higher spectral efficiency than existing HARQ techniques.
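A toy calculation of the "optimal number of autonomous retransmissions" idea is sketched below. The model is an assumption made purely for illustration (each retransmission gives every receiver an independent decoding chance p, and efficiency is counted only when the whole group has decoded); it is not the paper's analysis.

# Toy model: pick the autonomous retransmission count n that maximizes
# Pr[every receiver decodes within n transmissions] / n.

def best_autonomous_count(p_decode, n_receivers, n_max=16):
    best_n, best_eff = 1, 0.0
    for n in range(1, n_max + 1):
        p_all = (1.0 - (1.0 - p_decode) ** n) ** n_receivers  # whole group done
        eff = p_all / n                                        # packets per slot
        if eff > best_eff:
            best_n, best_eff = n, eff
    return best_n, best_eff

print(best_autonomous_count(p_decode=0.4, n_receivers=20))   # roughly n = 9 here

Under these toy numbers the efficiency peaks at an interior value of n, which is the trade-off an autonomous-retransmission scheme exploits: too few transmissions leave most of the group undecoded, too many waste channel uses.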
A Loss Tolerant Rate Controller for Reliable Multicast
NASA Technical Reports Server (NTRS)
Montgomery, Todd
1997-01-01
This paper describes the design, specification, and performance of a Loss Tolerant Rate Controller (LTRC) for use in controlling reliable multicast senders. The purpose of this rate controller is not to adapt to congestion (or loss) on a per loss report basis (such as per received negative acknowledgment), but instead to use loss report information and perceived state to decide more prudent courses of action for both the short and long term. The goal of this controller is to be responsive to congestion, but not overly reactive to spurious independent loss. Performance of the controller is verified through simulation results.
Hybrid monitoring scheme for end-to-end performance enhancement of multicast-based real-time media
NASA Astrophysics Data System (ADS)
Park, Ju-Won; Kim, JongWon
2004-10-01
As real-time media applications based on IP multicast networks spread widely, end-to-end QoS (quality of service) provisioning for these applications has become very important. To guarantee the end-to-end QoS of multi-party media applications, it is essential to monitor the time-varying status of both network metrics (i.e., delay, jitter and loss) and system metrics (i.e., CPU and memory utilization). In this paper, targeting the multicast-enabled AG (Access Grid), a next-generation group collaboration tool based on multi-party media services, the applicability of a hybrid monitoring scheme that combines active and passive monitoring is investigated. The active monitoring measures network-layer metrics (i.e., network condition) with probe packets, while the passive monitoring checks both application-layer metrics (i.e., user traffic condition, by analyzing RTCP packets) and system metrics. By comparing these hybrid results, we attempt to pinpoint the causes of performance degradation and explore corresponding reactions to improve the end-to-end performance. The experimental results show that the proposed hybrid monitoring can provide useful information to coordinate the performance improvement of multi-party real-time media applications.
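The way active and passive measurements can be cross-checked to localize a problem is sketched below; the thresholds and the three-way classification are assumptions for illustration, not the paper's diagnosis rules.

# Illustrative cross-check of active probe results against passive RTCP and
# host statistics; thresholds and categories are assumptions only.

def diagnose(probe_loss, rtcp_loss, cpu_util):
    if rtcp_loss > 0.02 and probe_loss > 0.02:
        return "network congestion on the path (probe and media both lose packets)"
    if rtcp_loss > 0.02 and cpu_util > 0.9:
        return "end-system overload (media loss without probe loss, CPU saturated)"
    if rtcp_loss > 0.02:
        return "application-level problem (media loss only)"
    return "no significant degradation"

print(diagnose(probe_loss=0.05, rtcp_loss=0.06, cpu_util=0.4))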
Efficient Network Coding-Based Loss Recovery for Reliable Multicast in Wireless Networks
NASA Astrophysics Data System (ADS)
Chi, Kaikai; Jiang, Xiaohong; Ye, Baoliu; Horiguchi, Susumu
Recently, network coding has been applied to the loss recovery of reliable multicast in wireless networks [19], where multiple lost packets are XOR-ed together as one packet and forwarded via single retransmission, resulting in a significant reduction of bandwidth consumption. In this paper, we first prove that maximizing the number of lost packets for XOR-ing, which is the key part of the available network coding-based reliable multicast schemes, is actually a complex NP-complete problem. To address this limitation, we then propose an efficient heuristic algorithm for finding an approximately optimal solution of this optimization problem. Furthermore, we show that the packet coding principle of maximizing the number of lost packets for XOR-ing sometimes cannot fully exploit the potential coding opportunities, and we then further propose new heuristic-based schemes with a new coding principle. Simulation results demonstrate that the heuristic-based schemes have very low computational complexity and can achieve almost the same transmission efficiency as the current coding-based high-complexity schemes. Furthermore, the heuristic-based schemes with the new coding principle not only have very low complexity, but also slightly outperform the current high-complexity ones.
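The packet-combining constraint behind such schemes is that lost packets can be XOR-ed into one retransmission only if no receiver is missing more than one of them; each receiver then recovers its missing packet by XOR-ing the retransmission with the packets it already holds. The greedy selection below is one possible low-complexity heuristic in that spirit, not the algorithm proposed in the paper.

# Greedy sketch of coding-set selection: add a lost packet to the XOR set only
# if none of its losers already has another packet in the set.

def greedy_xor_set(loss_map):
    """loss_map: packet_id -> set of receivers that lost it."""
    chosen, covered_receivers = [], set()
    # Prefer packets lost by many receivers: they save the most retransmissions.
    for pkt, losers in sorted(loss_map.items(), key=lambda kv: -len(kv[1])):
        if losers.isdisjoint(covered_receivers):
            chosen.append(pkt)
            covered_receivers |= losers
    return chosen   # retransmit the XOR of these packets in one transmission

losses = {"p1": {"A", "B"}, "p2": {"C"}, "p3": {"A"}, "p4": {"D", "E"}}
print(greedy_xor_set(losses))   # ['p1', 'p4', 'p2']; p3 conflicts with p1 via A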
Verification and validation of a reliable multicast protocol
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1995-01-01
This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.
Algorithm for protecting light-trees in survivable mesh wavelength-division-multiplexing networks
NASA Astrophysics Data System (ADS)
Luo, Hongbin; Li, Lemin; Yu, Hongfang
2006-12-01
Wavelength-division-multiplexing (WDM) technology is expected to facilitate bandwidth-intensive multicast applications such as high-definition television. A single fiber cut in a WDM mesh network, however, can disrupt the dissemination of information to several destinations on a light-tree based multicast session. Thus it is imperative to protect multicast sessions by reserving redundant resources. We propose a novel and efficient algorithm for protecting light-trees in survivable WDM mesh networks. The algorithm is called segment-based protection with sister node first (SSNF), whose basic idea is to protect a light-tree using a set of backup segments with a higher priority to protect the segments from a branch point to its children (sister nodes). The SSNF algorithm differs from the segment protection scheme proposed in the literature in how the segments are identified and protected. Our objective is to minimize the network resources used for protecting each primary light-tree such that the blocking probability can be minimized. To verify the effectiveness of the SSNF algorithm, we conduct extensive simulation experiments. The simulation results demonstrate that the SSNF algorithm outperforms existing algorithms for the same problem.
Optical network scaling: roles of spectral and spatial aggregation.
Arık, Sercan Ö; Ho, Keang-Po; Kahn, Joseph M
2014-12-01
As the bit rates of routed data streams exceed the throughput of single wavelength-division multiplexing channels, spectral and spatial traffic aggregation become essential for optical network scaling. These aggregation techniques reduce network routing complexity by increasing spectral efficiency to decrease the number of fibers, and by increasing switching granularity to decrease the number of switching components. Spectral aggregation yields a modest decrease in the number of fibers but a substantial decrease in the number of switching components. Spatial aggregation yields a substantial decrease in both the number of fibers and the number of switching components. To quantify routing complexity reduction, we analyze the number of multi-cast and wavelength-selective switches required in a colorless, directionless and contentionless reconfigurable optical add-drop multiplexer architecture. Traffic aggregation has two potential drawbacks: reduced routing power and increased switching component size.
A memetic optimization algorithm for multi-constrained multicast routing in ad hoc networks
Ramadan, Rahab M.; Gasser, Safa M.; El-Mahallawy, Mohamed S.; Hammad, Karim; El Bakly, Ahmed M.
2018-01-01
A mobile ad hoc network is a conventional self-configuring network where the routing optimization problem—subject to various Quality-of-Service (QoS) constraints—represents a major challenge. Unlike previously proposed solutions, in this paper, we propose a memetic algorithm (MA) employing an adaptive mutation parameter, to solve the multicast routing problem with higher search ability and computational efficiency. The proposed algorithm utilizes an updated scheme, based on statistical analysis, to estimate the best values for all MA parameters and enhance MA performance. The numerical results show that the proposed MA improved the delay and jitter of the network, while reducing computational complexity as compared to existing algorithms. PMID:29509760
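The overall shape of a memetic algorithm with adaptive mutation is sketched below; the bit-string encoding, the toy fitness function, and the stagnation-triggered mutation update are placeholders standing in for the paper's multicast-route representation, QoS cost, and statistically tuned parameters.

# Small memetic-algorithm skeleton: GA loop plus hill-climbing local search,
# with a mutation rate that adapts when the best fitness stagnates.

import random

def fitness(bits):                      # placeholder QoS cost: maximize ones
    return sum(bits)

def local_search(bits):                 # memetic step: greedy 1-bit improvement
    for i in range(len(bits)):
        flipped = bits[:i] + [1 - bits[i]] + bits[i + 1:]
        if fitness(flipped) > fitness(bits):
            bits = flipped
    return bits

def memetic(n_bits=20, pop_size=20, generations=50):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    mut_rate, best_prev = 0.05, -1
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        best = fitness(pop[0])
        # Adaptive mutation: explore more aggressively when progress stalls.
        mut_rate = min(0.5, mut_rate * 1.5) if best <= best_prev else 0.05
        best_prev = best
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randint(1, n_bits - 1)
            child = a[:cut] + b[cut:]                       # one-point crossover
            child = [1 - g if random.random() < mut_rate else g for g in child]
            children.append(local_search(child))
        pop = parents + children
    return max(pop, key=fitness)

print(fitness(memetic()))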
NASA Astrophysics Data System (ADS)
Rezvani, Mohammad Hossein; Analoui, Morteza
2010-11-01
We have designed a competitive economic mechanism for application-level multicast in which a number of independent services are provided to the end-users by a number of origin servers. Each offered service can be thought of as a commodity, and the origin servers and the users who relay the service to their downstream nodes can thus be thought of as producers of the economy. Also, the end-users can be viewed as consumers of the economy. The proposed mechanism regulates the price of each service in such a way that general equilibrium holds, so all allocations are Pareto optimal in the sense that the social welfare of the users is maximized.
Wang, Ke; Nirmalathas, Ampalavanapillai; Lim, Christina; Skafidas, Efstratios; Alameh, Kamal
2013-07-01
In this paper, we propose and experimentally demonstrate a free-space based high-speed reconfigurable card-to-card optical interconnect architecture with broadcast capability, which is required for control functionalities and efficient parallel computing applications. Experimental results show that 10 Gb/s data can be broadcast to all receiving channels for up to 30 cm with a worst-case receiver sensitivity better than -12.20 dBm. In addition, arbitrary multicasting with the same architecture is also investigated. 10 Gb/s reconfigurable point-to-point link and multicast channels are simultaneously demonstrated with a measured receiver sensitivity power penalty of ~1.3 dB due to crosstalk.
Secure distribution for high resolution remote sensing images
NASA Astrophysics Data System (ADS)
Liu, Jin; Sun, Jing; Xu, Zheng Q.
2010-09-01
The use of remote sensing images collected by space platforms is becoming more and more widespread. The increasing value of space data and its use in critical scenarios call for the adoption of proper security measures to protect these data against unauthorized access and fraudulent use. In this paper, based on the characteristics of remote sensing image data and the application requirements for secure distribution, a secure distribution method is proposed, including user and region classification, hierarchical control and key generation, and region-based multi-level encryption. The combination of the three parts ensures that the same multi-level-encrypted remote sensing image can be distributed to users with different permissions through multicast, while users obtain different degrees of information after decryption with their own keys. This meets the access control and security needs of high resolution remote sensing image distribution well. The experimental results prove the effectiveness of the proposed method, which is suitable for practical use in the secure transmission of remote sensing images containing confidential information over the Internet.
Bahşi, Hayretdin; Levi, Albert
2010-01-01
Wireless sensor networks (WSNs) generally have a many-to-one structure so that event information flows from sensors to a unique sink. In recent WSN applications, many-to-many structures have evolved due to the need for conveying collected event information to multiple sinks. Privacy-preserving data collection models in the literature do not solve the problems of WSN applications in which the network has multiple untrusted sinks with different levels of privacy requirements. This study proposes a data collection framework based on k-anonymity for preventing record disclosure of collected event information in WSNs. The proposed method takes the anonymity requirements of multiple sinks into consideration by providing different levels of privacy for each destination sink. Attributes which may identify an event owner are generalized or encrypted in order to meet the different anonymity requirements of the sinks in the same anonymized output. If the same output is formed, it can be multicast to all sinks. The other, trivial, solution is to produce different anonymized outputs for each sink and send them to the related sinks. Multicasting is an energy-efficient data sending alternative for some sensor nodes. Since minimization of energy consumption is an important design criterion for WSNs, multicasting the same event information to multiple sinks reduces the energy consumption of the overall network.
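The single-output idea, i.e., generalizing quasi-identifiers until the strictest sink's k is satisfied so that one anonymized stream can be multicast to all sinks, can be sketched as follows (a single numeric attribute and fixed bucket widths are assumptions for illustration, not the paper's generalization hierarchy).

# Sketch: coarsen one quasi-identifier until every equivalence class has at
# least max(k_i) members, so one output satisfies all sinks at once.

from collections import Counter

def generalize(values, bucket):
    """Map each value to a range of width `bucket`, e.g. 23 -> '20-29'."""
    return [f"{(v // bucket) * bucket}-{(v // bucket) * bucket + bucket - 1}"
            for v in values]

def anonymize_for_all_sinks(values, sink_k_requirements, buckets=(1, 5, 10, 20, 50)):
    k_needed = max(sink_k_requirements)        # strictest sink dominates
    for bucket in buckets:
        out = generalize(values, bucket)
        if min(Counter(out).values()) >= k_needed:
            return out                         # same output can be multicast
    return ["*"] * len(values)                 # full suppression as a fallback

ages = [21, 22, 23, 25, 26, 27, 41, 42, 43, 44, 45, 46]
print(anonymize_for_all_sinks(ages, sink_k_requirements=[2, 3]))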
Admission and Preventive Load Control for Delivery of Multicast and Broadcast Services via S-UMTS
NASA Astrophysics Data System (ADS)
Angelou, E.; Koutsokeras, N.; Andrikopoulos, I.; Mertzanis, I.; Karaliopoulos, M.; Henrio, P.
2003-07-01
An Admission Control strategy is proposed for unidirectional satellite systems delivering multicast and broadcast services to mobile users. In such systems, both the radio interface and the targeted services impose particular requirements on the RRM task. We briefly discuss the RRM requirements that stem from the services point of view and from the features of the SATIN access scheme that differentiate it from the conventional T-UMTS radio interface. The main functional entities of RRM and the alternative modes of operation are outlined and the proposed Admission Control algorithm is described in detail. The results from the simulation study that demonstrate its performance for a number of different scenarios are finally presented and conclusions derived.
NASA Astrophysics Data System (ADS)
Cheng, Yuh-Jiuh; Yeh, Tzuoh-Chyau; Cheng, Shyr-Yuan
2011-09-01
In this paper, a non-blocking multicast optical packet switch based on fiber Bragg grating technology with optical output buffers is proposed. Only the header of each optical packet is converted to electronic signals to control the fiber Bragg grating array of the input ports, while the packet payloads are transparently delivered to their output ports, so that the proposed switch can reduce the number of electronic interfaces as well as their bit rate. The modulation and the format of packet payloads may be non-standard, and packet payloads could also include different wavelengths to increase the volume of traffic. The advantage is obvious: the proposed switch can transport various types of traffic. An easily implemented architecture which can provide multicast services is also presented. An optical output buffer is designed to queue packets when more than one incoming packet is destined for the same output port, or when packets already waiting in the optical output buffer are to be sent to that output port in a time slot. For preserving service-packet sequencing and fairness of the routing sequence, a priority scheme and a round-robin algorithm are adopted at the optical output buffer. The fiber Bragg grating arrays for both input ports and output ports are designed to route incoming packets using optical code division multiple access technology.
Dynamic Network Selection for Multicast Services in Wireless Cooperative Networks
NASA Astrophysics Data System (ADS)
Chen, Liang; Jin, Le; He, Feng; Cheng, Hanwen; Wu, Lenan
In next generation mobile multimedia communications, different wireless access networks are expected to cooperate. However, it is a challenging task to choose an optimal transmission path in this scenario. This paper focuses on the problem of selecting the optimal access network for multicast services in cooperative mobile and broadcasting networks. An algorithm is proposed which considers multiple decision factors and multiple optimization objectives. An analytic hierarchy process (AHP) method is applied to schedule the service queue, and an artificial neural network (ANN) is used to improve the flexibility of the algorithm. Simulation results show that by applying the AHP method, a group of weight ratios can be obtained that improves performance across multiple objectives, and that the ANN method is effective in adaptively adjusting the weight ratios when a new user waiting threshold is generated.
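As a reference point for the weighting step the abstract relies on, the following sketch derives AHP weight ratios from a pairwise comparison matrix via its principal eigenvector, together with the usual consistency check; the decision factors and judgments are purely illustrative, and the paper's actual factor set and ANN adjustment are not reproduced.

```python
# Sketch of the standard AHP weighting step the paper builds on: derive weight
# ratios for the decision factors from a pairwise comparison matrix via its
# principal eigenvector. Factor names and judgments here are only illustrative.
import numpy as np

factors = ["bandwidth", "delay", "cost"]
# A[i, j] = how much more important factor i is than factor j (Saaty's 1-9 scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, principal].real)
w /= w.sum()                                   # normalized weight ratios

# Consistency ratio (CR < 0.1 is the usual acceptance threshold).
n = len(factors)
ci = (eigvals.real[principal] - n) / (n - 1)
cr = ci / 0.58                                 # 0.58 = random index for n = 3
print(dict(zip(factors, w.round(3))), "CR =", round(cr, 3))
```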
Polarization-insensitive PAM-4-carrying free-space orbital angular momentum (OAM) communications.
Liu, Jun; Wang, Jian
2016-02-22
We present a simple configuration incorporating a single polarization-sensitive phase-only liquid crystal spatial light modulator (SLM) to facilitate polarization-insensitive free-space optical communications employing orbital angular momentum (OAM) modes. We experimentally demonstrate several polarization-insensitive optical communication subsystems by propagating a single OAM mode, multicasting 4 and 10 OAM modes, and multiplexing 8 OAM modes, respectively. Free-space polarization-insensitive optical communication links using OAM modes that carry a four-level pulse-amplitude modulation (PAM-4) signal are demonstrated in the experiment. The observed optical signal-to-noise ratio (OSNR) penalties are less than 1 dB in both polarization-insensitive N-fold OAM modes multicasting and multiple OAM modes multiplexing at a bit-error rate (BER) of 2e-3 (enhanced forward-error correction (EFEC) threshold).
Combined Wavelet Video Coding and Error Control for Internet Streaming and Multicast
NASA Astrophysics Data System (ADS)
Chu, Tianli; Xiong, Zixiang
2003-12-01
This paper proposes an integrated approach to Internet video streaming and multicast (e.g., receiver-driven layered multicast (RLM) by McCanne) based on combined wavelet video coding and error control. We design a packetized wavelet video (PWV) coder to facilitate its integration with error control. The PWV coder produces packetized layered bitstreams that are independent among layers while being embedded within each layer. Thus, a lost packet only renders the following packets in the same layer useless. Based on the PWV coder, we search for a multilayered error-control strategy that optimally trades off source and channel coding for each layer under a given transmission rate to mitigate the effects of packet loss. While both the PWV coder and the error-control strategy are new—the former incorporates embedded wavelet video coding and packetization and the latter extends the single-layered approach for RLM by Chou et al.—the main distinction of this paper lies in the seamless integration of the two parts. Theoretical analysis shows a gain of up to 1 dB on a channel with 20% packet loss using our combined approach over separate designs of the source coder and the error-control mechanism. This is also substantiated by our simulations with a gain of up to 0.6 dB. In addition, our simulations show a gain of up to 2.2 dB over previous results reported by Chou et al.
Exact and heuristic algorithms for Space Information Flow.
Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing; Li, Zongpeng
2018-01-01
Space Information Flow (SIF) is a new promising research area that studies network coding in geometric space, such as Euclidean space. The design of algorithms that compute the optimal SIF solutions remains one of the key open problems in SIF. This work proposes the first exact SIF algorithm and a heuristic SIF algorithm that compute min-cost multicast network coding for N (N ≥ 3) given terminal nodes in 2-D Euclidean space. Furthermore, we find that the Butterfly network in Euclidean space is the second example besides the Pentagram network where SIF is strictly better than Euclidean Steiner minimal tree. The exact algorithm design is based on two key techniques: Delaunay triangulation and linear programming. Delaunay triangulation technique helps to find practically good candidate relay nodes, after which a min-cost multicast linear programming model is solved over the terminal nodes and the candidate relay nodes, to compute the optimal multicast network topology, including the optimal relay nodes selected by linear programming from all the candidate relay nodes and the flow rates on the connection links. The heuristic algorithm design is also based on Delaunay triangulation and linear programming techniques. The exact algorithm can achieve the optimal SIF solution with an exponential computational complexity, while the heuristic algorithm can achieve the sub-optimal SIF solution with a polynomial computational complexity. We prove the correctness of the exact SIF algorithm. The simulation results show the effectiveness of the heuristic SIF algorithm.
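A minimal sketch of the candidate-relay step (the Delaunay part only; the min-cost multicast linear program that selects relays and flow rates is omitted) might look like the following, where taking each triangle's centroid as a candidate is our simplification rather than the paper's exact construction.

```python
# Sketch of the candidate-relay step only (the LP that selects relays and flow
# rates is omitted): Delaunay-triangulate the terminals and take each triangle's
# centroid as a candidate relay node, as a stand-in for the paper's construction.
import numpy as np
from scipy.spatial import Delaunay

terminals = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0], [2.0, 1.0], [5.0, 2.0]])
tri = Delaunay(terminals)
candidates = terminals[tri.simplices].mean(axis=1)   # one centroid per triangle

nodes = np.vstack([terminals, candidates])           # LP would run over this node set
print(f"{len(terminals)} terminals, {len(candidates)} candidate relays")
```

The LP would then be solved over `nodes`, with each link's cost equal to its Euclidean length times the rate routed on it, and only candidates that carry positive flow would be kept as relays.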
Parallel and Distributed Methods for Constrained Nonconvex Optimization—Part I: Theory
NASA Astrophysics Data System (ADS)
Scutari, Gesualdo; Facchinei, Francisco; Lampariello, Lorenzo
2017-04-01
In Part I of this paper, we proposed and analyzed a novel algorithmic framework for the minimization of a nonconvex (smooth) objective function, subject to nonconvex constraints, based on inner convex approximations. This Part II is devoted to the application of the framework to some resource allocation problems in communication networks. In particular, we consider two non-trivial case-study applications, namely: (generalizations of) i) the rate profile maximization in MIMO interference broadcast networks; and ii) the max-min fair multicast multigroup beamforming problem in a multi-cell environment. We develop a new class of algorithms enjoying the following distinctive features: i) they are distributed across the base stations (with limited signaling) and lead to subproblems whose solutions are computable in closed form; and ii) differently from current relaxation-based schemes (e.g., semidefinite relaxation), they are proved to always converge to d-stationary solutions of the aforementioned class of nonconvex problems. Numerical results show that the proposed (distributed) schemes achieve larger worst-case rates (resp. signal-to-noise interference ratios) than state-of-the-art centralized ones while having comparable computational complexity.
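For orientation, the generic successive inner convex approximation step that this framework is built around can be written as follows; the notation is ours, and the surrogate choices that make the subproblems solvable in closed form are problem-specific and not shown.

```latex
% Generic successive inner convex approximation step (notation ours, not the paper's):
% at iteration k, replace the nonconvex objective U and feasible set X by convex
% surrogates built around the current iterate x^k, solve, then take a damped step.
\begin{align}
  \hat{x}(x^k) &\in \arg\min_{x \in \widetilde{\mathcal{X}}(x^k)} \; \widetilde{U}(x; x^k),
  \qquad \widetilde{\mathcal{X}}(x^k) \subseteq \mathcal{X} \ \text{convex},\ x^k \in \widetilde{\mathcal{X}}(x^k), \\
  x^{k+1} &= x^k + \gamma^k \bigl( \hat{x}(x^k) - x^k \bigr), \qquad \gamma^k \in (0, 1].
\end{align}
```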
TeCo3D: a 3D telecooperation application based on VRML and Java
NASA Astrophysics Data System (ADS)
Mauve, Martin
1998-12-01
In this paper we present a method for sharing collaboration-unaware VRML content, e.g. 3D models which were not specifically developed for use in a distributed environment. This functionality is an essential requirement for the inclusion of arbitrary VRML content, as generated by standard CAD or animation software, into teleconferencing sessions. We have developed a 3D TeleCooperation (TeCo3D) prototype to demonstrate the feasibility of our approach. The basic services provided by the prototype are the distribution of cooperation-unaware VRML content, the sharing of user interactions, and the joint viewing of the content. In order to achieve maximum portability, the prototype was developed completely in Java. This paper presents general aspects of sharing VRML content as well as the concepts, the architecture and the services of the TeCo3D prototype. Our approach relies on existing VRML browsers as the VRML presentation and execution engines while reliable multicast is used as the means of communication to provide for scalability.
An Approach to Verification and Validation of a Reliable Multicasting Protocol
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1994-01-01
This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test was different between the model and implementation, then the differences helped identify inconsistencies between the model and implementation. The dialogue between both teams drove the co-evolution of the model and implementation. Testing served as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP.
An approach to verification and validation of a reliable multicasting protocol
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1995-01-01
This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test was different between the model and implementation, then the differences helped identify inconsistencies between the model and implementation. The dialogue between both teams drove the co-evolution of the model and implementation. Testing served as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP.
Guest Editor's introduction: Special issue on distributed virtual environments
NASA Astrophysics Data System (ADS)
Lea, Rodger
1998-09-01
Distributed virtual environments (DVEs) combine technology from 3D graphics, virtual reality and distributed systems to provide an interactive 3D scene that supports multiple participants. Each participant has a representation in the scene, often known as an avatar, and is free to navigate through the scene and interact with both the scene and other viewers of the scene. Changes to the scene, for example, position changes of one avatar as the associated viewer navigates through the scene, or changes to objects in the scene via manipulation, are propagated in real time to all viewers. This ensures that all viewers of a shared scene `see' the same representation of it, allowing sensible reasoning about the scene. Early work on such environments was restricted to their use in simulation, in particular in military simulation. However, over recent years a number of interesting and potentially far-reaching attempts have been made to exploit the technology for a range of other uses, including: Social spaces. Such spaces can be seen as logical extensions of the familiar text chat space. In 3D social spaces avatars, representing participants, can meet in shared 3D scenes and in addition to text chat can use visual cues and even in some cases spatial audio. Collaborative working. A number of recent projects have attempted to explore the use of DVEs to facilitate computer-supported collaborative working (CSCW), where the 3D space provides a context and work space for collaboration. Gaming. The shared 3D space is already familiar, albeit in a constrained manner, to the gaming community. DVEs are a logical superset of existing 3D games and can provide a rich framework for advanced gaming applications. e-commerce. The ability to navigate through a virtual shopping mall and to look at, and even interact with, 3D representations of articles has appealed to the e-commerce community as it searches for the best method of presenting merchandise to electronic consumers. The technology needed to support these systems crosses a number of disciplines in computer science. These include, but are certainly not limited to, real-time graphics for the accurate and realistic representation of scenes, group communications for the efficient update of shared consistent scene data, user interface modelling to exploit the use of the 3D representation and multimedia systems technology for the delivery of streamed graphics and audio-visual data into the shared scene. It is this intersection of technologies and the overriding need to provide visual realism that places such high demands on the underlying distributed systems infrastructure and makes DVEs such fertile ground for distributed systems research. Two examples serve to show how DVE developers have exploited the unique aspects of their domain. Communications. The usual tension between latency and throughput is particularly noticeable within DVEs. To ensure the timely update of multiple viewers of a particular scene requires that such updates be propagated quickly. However, the sheer volume of changes to any one scene calls for techniques that minimize the number of distinct updates that are sent to the network. Several techniques have been used to address this tension; these include the use of multicast communications, and in particular multicast in wide-area networks to reduce actual message traffic. Multicast has been combined with general group communications to partition updates to related objects or users of a scene. 
A less traditional approach has been the use of dead reckoning whereby a client application that visualizes the scene calculates position updates by extrapolating movement based on previous information. This allows the system to reduce the number of communications needed to update objects that move in a stable manner within the scene. Scaling. DVEs, especially those used for social spaces, are required to support large numbers of simultaneous users in potentially large shared scenes. The desire for scalability has driven different architectural designs, for example, the use of fully distributed architectures which scale well but often suffer performance costs versus centralized and hierarchical architectures in which the inverse is true. However, DVEs have also exploited the spatial nature of their domain to address scalability and have pioneered techniques that exploit the semantics of the shared space to reduce data updates and so allow greater scalability. Several of the systems reported in this special issue apply a notion of area of interest to partition the scene and so reduce the participants in any data updates. The specification of area of interest differs between systems. One approach has been to exploit a geographical notion, i.e. a regular portion of a scene, or a semantic unit, such as a room or building. Another approach has been to define the area of interest as a spatial area associated with an avatar in the scene. The five papers in this special issue have been chosen to highlight the distributed systems aspects of the DVE domain. The first paper, on the DIVE system, described by Emmanuel Frécon and Mårten Stenius explores the use of multicast and group communication in a fully peer-to-peer architecture. The developers of DIVE have focused on its use as the basis for collaborative work environments and have explored the issues associated with maintaining and updating large complicated scenes. The second paper, by Hiroaki Harada et al, describes the AGORA system, a DVE concentrating on social spaces and employing a novel communication technique that incorporates position update and vector information to support dead reckoning. The paper by Simon Powers et al explores the application of DVEs to the gaming domain. They propose a novel architecture that separates out higher-level game semantics - the conceptual model - from the lower-level scene attributes - the dynamic model, both running on servers, from the actual visual representation - the visual model - running on the client. They claim a number of benefits from this approach, including better predictability and consistency. Wolfgang Broll discusses the SmallView system which is an attempt to provide a toolkit for DVEs. One of the key features of SmallView is a sophisticated application level protocol, DWTP, that provides support for a variety of communication models. The final paper, by Chris Greenhalgh, discusses the MASSIVE system which has been used to explore the notion of awareness in the 3D space via the concept of `auras'. These auras define an area of interest for users and support a mapping between what a user is aware of, and what data update rate the communications infrastructure can support. We hope that this selection of papers will serve to provide a clear introduction to the distributed system issues faced by the DVE community and the approaches they have taken in solving them. 
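The dead-reckoning technique described above reduces to a simple rule: remote viewers extrapolate an avatar's position from its last reported state, and the owning client transmits a fresh update only when the true position drifts beyond a tolerance. A minimal sketch, with hypothetical class and threshold names, follows.

```python
# Sketch of the dead-reckoning idea described above: peers extrapolate an avatar's
# position from its last reported state, and the owner sends an update only when
# the true position drifts from that extrapolation by more than a threshold.
import math

class DeadReckoner:
    def __init__(self, threshold):
        self.threshold = threshold
        self.last_pos = None       # last transmitted position
        self.last_vel = None       # last transmitted velocity
        self.last_time = None

    def predict(self, t):
        """What every remote viewer renders at time t."""
        dt = t - self.last_time
        return tuple(p + v * dt for p, v in zip(self.last_pos, self.last_vel))

    def maybe_send(self, t, true_pos, true_vel):
        """Called by the owning client; returns an update message or None."""
        if self.last_pos is not None:
            err = math.dist(true_pos, self.predict(t))
            if err < self.threshold:
                return None                      # remote extrapolation is still good enough
        self.last_pos, self.last_vel, self.last_time = true_pos, true_vel, t
        return {"t": t, "pos": true_pos, "vel": true_vel}

dr = DeadReckoner(threshold=0.5)
for t, pos in enumerate([(0, 0), (1, 0.1), (2, 0.3), (3, 1.5)]):
    print(t, dr.maybe_send(float(t), pos, (1.0, 0.2)))
```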
Finally, we wish to thank Hubert Le Van Gong for his tireless efforts in pulling together all these papers and both the referees and the authors of the papers for the time and effort in ensuring that their contributions teased out the interesting distributed systems issues for this special issue.
Where-Fi: a dynamic energy-efficient multimedia distribution framework for MANETs
NASA Astrophysics Data System (ADS)
Mohapatra, Shivajit; Carbunar, Bogdan; Pearce, Michael; Chaudhri, Rohit; Vasudevan, Venu
2008-01-01
Next generation mobile ad-hoc applications will revolve around users' need for sharing content/presence information with co-located devices. However, keeping such information fresh requires frequent meta-data exchanges, which could result in significant energy overheads. To address this issue, we propose distributed algorithms for energy efficient dissemination of presence and content usage information between nodes in mobile ad-hoc networks. First, we introduce a content dissemination protocol (called CPMP) for effectively distributing frequent small meta-data updates between co-located devices using multicast. We then develop two distributed algorithms that use the CPMP protocol to achieve "phase locked" wake up cycles for all the participating nodes in the network. The first algorithm is designed for fully-connected networks and then extended in the second to handle hidden terminals. The "phase locked" schedules are then exploited to adaptively transition the network interface to a deep sleep state for energy savings. We have implemented a prototype system (called "Where-Fi") on several Motorola Linux-based cell phone models. Our experimental results show that for all network topologies our algorithms were able to achieve "phase locking" between nodes even in the presence of hidden terminals. Moreover, we achieved battery lifetime extensions of as much as 28% for fully connected networks and about 20% for partially connected networks.
Scalable Active Optical Access Network Using Variable High-Speed PLZT Optical Switch/Splitter
NASA Astrophysics Data System (ADS)
Ashizawa, Kunitaka; Sato, Takehiro; Tokuhashi, Kazumasa; Ishii, Daisuke; Okamoto, Satoru; Yamanaka, Naoaki; Oki, Eiji
This paper proposes a scalable active optical access network using a high-speed Plumbum Lanthanum Zirconate Titanate (PLZT) optical switch/splitter. The Active Optical Network, called ActiON, using PLZT switching technology has been presented to increase the number of subscribers and the maximum transmission distance compared to the Passive Optical Network (PON). ActiON supports multicast slot allocation by running the PLZT switch elements in the splitter mode, which forces the switch to behave as an optical splitter. However, the previous ActiON creates a tradeoff between network scalability and the power loss experienced by the optical signal to each user. It does not use the optical power efficiently because the power is simply split 0.5:0.5 without considering the transmission distance from the OLT to each ONU. The proposed network adopts PLZT switch elements in a variable splitter mode, which controls the split ratio of the optical power according to the transmission distance from the OLT to each ONU, in addition to the two existing modes, the switching mode and the splitter mode. The proposed network introduces flexible multicast slot allocation according to the transmission distance from the OLT to each user and the number of required users using the three modes, while keeping the advantages of ActiON, namely support for scalable and secure access services. Numerical results show that, compared to the previous ActiON, the proposed network dramatically reduces the required number of slots, supports high bandwidth-efficiency services, and extends the coverage of the access network, and that the required computation time for selecting multicast users is less than 30 ms, which is acceptable for on-demand broadcast services.
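One way to read "controls the split ratio of the optical power considering the transmission distance" is to weight each branch inversely to its fiber transmittance so that all ONUs receive roughly equal power; the sketch below illustrates that reading with an assumed attenuation coefficient and is not the paper's allocation algorithm.

```python
# Sketch (one reading of "split ratio considering transmission distance"): give
# each ONU a power fraction inversely proportional to its fiber transmittance so
# that all ONUs receive approximately equal power. The attenuation value is illustrative.
ALPHA_DB_PER_KM = 0.35   # assumed fiber attenuation

def split_ratios(distances_km):
    transmittance = [10 ** (-ALPHA_DB_PER_KM * d / 10) for d in distances_km]
    inv = [1 / t for t in transmittance]
    total = sum(inv)
    return [x / total for x in inv]        # farther ONUs get a larger share

dists = [2.0, 10.0, 20.0]
ratios = split_ratios(dists)
received = [r * 10 ** (-ALPHA_DB_PER_KM * d / 10) for r, d in zip(ratios, dists)]
print([round(r, 3) for r in ratios])       # unequal split, unlike the fixed 0.5:0.5
print([round(p, 4) for p in received])     # (relative) received power is equalized
```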
MDP: Reliable File Transfer for Space Missions
NASA Technical Reports Server (NTRS)
Rash, James; Criscuolo, Ed; Hogie, Keith; Parise, Ron; Hennessy, Joseph F. (Technical Monitor)
2002-01-01
This paper presents work being done at NASA/GSFC by the Operating Missions as Nodes on the Internet (OMNI) project to demonstrate the application of the Multicast Dissemination Protocol (MDP) to space missions to reliably transfer files. This work builds on previous work by the OMNI project to apply Internet communication technologies to space communication. The goal of this effort is to provide an inexpensive, reliable, standard, and interoperable mechanism for transferring files in the space communication environment. Limited bandwidth, noise, delay, intermittent connectivity, link asymmetry, and one-way links are all possible issues for space missions. Although these are link-layer issues, they can have a profound effect on the performance of transport and application level protocols. MDP, a UDP-based reliable file transfer protocol, was designed for multicast environments which have to address these same issues, and it has done so successfully. Developed by the Naval Research Lab in the mid-1990s, MDP is now in daily use by both the US Post Office and the DoD. This paper describes the use of MDP to provide automated end-to-end data flow for space missions. It examines the results of a parametric study of MDP in a simulated space link environment and discusses the results in terms of their implications for space missions. Lessons learned are addressed, which suggest minor enhancements to the MDP user interface to add specific features for space mission requirements, such as dynamic control of data rate, and a checkpoint/resume capability. These are features that are provided for in the protocol, but are not implemented in the sample MDP application that was provided. A brief look is also taken at the status of standardization. A version of MDP known as NORM (NACK-Oriented Reliable Multicast) is in the process of becoming an IETF standard.
47 CFR 73.1201 - Station identification.
Code of Federal Regulations, 2010 CFR
2010-10-01
... offerings. Television and Class A television broadcast stations may make these announcements visually or... multicast audio programming streams, in a manner that appropriately alerts its audience to the fact that it is listening to a digital audio broadcast. No other insertion between the station's call letters and...
Secure Multicast Tree Structure Generation Method for Directed Diffusion Using A* Algorithms
NASA Astrophysics Data System (ADS)
Kim, Jin Myoung; Lee, Hae Young; Cho, Tae Ho
The application of wireless sensor networks to areas such as combat field surveillance, terrorist tracking, and highway traffic monitoring requires secure communication among the sensor nodes within the networks. Logical key hierarchy (LKH) is a tree-based key management model which provides secure group communication. When a sensor node is added to or evicted from the communication group, LKH updates the group key in order to ensure the security of the communications. In order to efficiently update the group key in directed diffusion, we propose a method for secure multicast tree structure generation, an extension to LKH that reduces the number of re-keying messages by considering the addition and eviction ratios of the history data. For the generation of the proposed key tree structure the A* algorithm is applied, in which the branching factor at each level can take on a different value. The experimental results demonstrate the efficiency of the proposed key tree structure against existing key tree structures with fixed branching factors.
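To make the design space concrete, the sketch below scores per-level branching factors with a simplified LKH re-keying cost weighted by join and eviction ratios, using brute-force enumeration in place of the paper's A* search; both the cost accounting and the parameter values are assumptions.

```python
# Sketch only: a brute-force stand-in for the paper's A* search over per-level
# branching factors. The message-count model below is a common simplified LKH
# accounting (leave: re-key every key on the path, one ciphertext per sibling key;
# join: one re-key broadcast per path level), not the paper's exact cost function.
from itertools import product
from math import prod

def rekey_cost(branching, join_ratio, leave_ratio):
    leave_msgs = sum(branching) - 1          # (b at the leaf level - 1) + sum of the other b_l
    join_msgs = len(branching) + 1           # one broadcast per level + keys for the newcomer
    return join_ratio * join_msgs + leave_ratio * leave_msgs

def best_tree(num_members, join_ratio, leave_ratio, max_depth=4, max_degree=8):
    best = None
    for depth in range(1, max_depth + 1):
        for branching in product(range(2, max_degree + 1), repeat=depth):
            if prod(branching) < num_members:
                continue                     # tree too small to hold all members
            cost = rekey_cost(branching, join_ratio, leave_ratio)
            if best is None or cost < best[0]:
                best = (cost, branching)
    return best

# Search a small design space; eviction-heavy group histories weight the leave cost more.
print(best_tree(num_members=64, join_ratio=0.2, leave_ratio=0.8))
```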
Context-based user grouping for multi-casting in heterogeneous radio networks
NASA Astrophysics Data System (ADS)
Mannweiler, C.; Klein, A.; Schneider, J.; Schotten, H. D.
2011-08-01
Along with the rise of sophisticated smartphones and smart spaces, the availability of both static and dynamic context information has steadily been increasing in recent years. Due to the popularity of social networks, these data are complemented by profile information about individual users. Making use of this information by classifying users in wireless networks enables targeted content and advertisement delivery as well as optimizing network resources, in particular bandwidth utilization, by facilitating group-based multi-casting. In this paper, we present the design and implementation of a web service for advanced user classification based on user, network, and environmental context information. The service employs simple and advanced clustering algorithms for forming classes of users. Available service functionalities include group formation, context-aware adaptation, and deletion as well as the exposure of group characteristics. Moreover, the results of a performance evaluation, where the service has been integrated in a simulator modeling user behavior in heterogeneous wireless systems, are presented.
NADIR: A Flexible Archiving System Current Development
NASA Astrophysics Data System (ADS)
Knapic, C.; De Marco, M.; Smareglia, R.; Molinaro, M.
2014-05-01
The New Archiving Distributed InfrastructuRe (NADIR) is under development at the Italian center for Astronomical Archives (IA2) to increase the performance of the current archival software tools at the data center. Traditional software usually offers simple and robust solutions for data archiving and distribution but is awkward to adapt and reuse in projects that have different purposes. Data evolution in terms of data model, format, publication policy, version, and meta-data content is the main threat to re-use. NADIR, using stable and mature framework features, addresses these challenging issues. Its main characteristics are a configuration database, a multi-threading and multi-language environment (C++, Java, Python), special features to guarantee high scalability, modularity, robustness, and error tracking, and tools to monitor with confidence the status of each project at each archiving site. In this contribution, the development of the core components is presented, commenting also on some performance and innovative features (multi-cast and publisher-subscriber paradigms). NADIR is planned to be developed as simply as possible, with default configurations for every project, first of all for LBT and other IA2 projects.
The Development of Interactive Distance Learning in Taiwan: Challenges and Prospects.
ERIC Educational Resources Information Center
Chu, Clarence T.
1999-01-01
Describes three types of interactive distance-education systems under development in Taiwan: real-time multicast systems; virtual-classroom systems; and curriculum-on-demand systems. Discusses the use of telecommunications and computer technology in higher education, problems and challenges, and future prospects. (Author/LRW)
Australian DefenceScience. Volume 16, Number 1, Autumn
2008-01-01
are carried via VOIP technology, and multicast IP traffic for audio-visual communications is also supported. The SSATIN system overall is seen to...
Global Interoperability of High Definition Video Streams Via ACTS and Intelsat
NASA Technical Reports Server (NTRS)
Hsu, Eddie; Wang, Charles; Bergman, Larry; Pearman, James; Bhasin, Kul; Clark, Gilbert; Shopbell, Patrick; Gill, Mike; Tatsumi, Haruyuki; Kadowaki, Naoto
2000-01-01
In 1993, a proposal at the Japan-U.S. Cooperation in Space Program Workshop led to a subsequent series of satellite communications experiments and demonstrations, under the title of Trans-Pacific High Data Rate Satellite Communications Experiments. The first of these was a joint collaboration between government and industry teams in the United States and Japan that successfully demonstrated distributed high definition video (HDV) post-production on a global scale using a combination of high data rate satellites and terrestrial fiber optic asynchronous transfer mode (ATM) networks. The HDV experiment is the first GIBN experiment to establish a dual-hop broadband satellite link for the transmission of digital HDV over ATM. This paper describes the team's effort in using the NASA Advanced Communications Technology Satellite (ACTS) at rates up to OC-3 (155 Mbps) between Los Angeles and Honolulu, and using Intelsat at rates up to DS-3 (45 Mbps) between Kapolei and Tokyo, with which HDV source material was transmitted between Sony Pictures High Definition Center (SPHDC) in Los Angeles and Sony Visual Communication Center (VCC) in Shinagawa, Tokyo. The global-scale connection also used terrestrial networks in Japan and the States of Hawaii and California. The 1.2 Gbps digital HDV stream was compressed down to 22.5 Mbps using a proprietary Mitsubishi MPEG-2 codec that was ATM AAL-5 compatible. The codec employed four-way parallel processing. Improved versions of the codec are now commercially available. The successful post-production activity performed in Tokyo with an HDV clip transmitted from Los Angeles was predicated on the seamless interoperation of all the equipment between the sites, and was an exciting example in deploying a global-scale information infrastructure involving a combination of broadband satellites and terrestrial fiber optic networks. Correlations of atmospheric effects with cell loss, codec drop-out, and picture quality were made. Current efforts in the Trans-Pacific series plan to examine the use of Internet Protocol (IP)-related technologies over such an infrastructure. The use of IP allows the general public to be an integral part of the exciting activities, helps to examine issues in constructing the solar-system internet, and affords an opportunity to tap the research results from the (reliable) multicast and distributed systems communities. The current Trans-Pacific projects, including remote astronomy and digital library (visible human), are briefly described.
Virtually-synchronous communication based on a weak failure suspector
NASA Technical Reports Server (NTRS)
Schiper, Andre; Ricciardi, Aleta
1993-01-01
Failure detectors (or, more accurately, Failure Suspectors (FS)) appear to be a fundamental service upon which to build fault-tolerant, distributed applications. This paper shows that an FS with very weak semantics (i.e., that delivers failure and recovery information in no specific order) suffices to implement virtually-synchronous communication (VSC) in an asynchronous system subject to process crash failures and network partitions. The VSC paradigm is particularly useful in asynchronous systems and greatly simplifies building fault-tolerant applications that mask failures by replicating processes. We suggest a three-component architecture to implement virtually-synchronous communication: (1) at the lowest level, the FS component; (2a) on top of it, a component that defines new views; and (2b) a component that reliably multicasts messages within a view. The issues covered in this paper also lead to a better understanding of the various membership service semantics proposed in recent literature.
Bhanot, Gyan [Princeton, NJ; Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY
2009-09-08
Class network routing is implemented in a network such as a computer network comprising a plurality of parallel compute processors at nodes thereof. Class network routing allows a compute processor to broadcast a message to a range (one or more) of other compute processors in the computer network, such as processors in a column or a row. Normally this type of operation requires a separate message to be sent to each processor. With class network routing pursuant to the invention, a single message is sufficient, which generally reduces the total number of messages in the network as well as the latency to do a broadcast. Class network routing is also applied to dense matrix inversion algorithms on distributed memory parallel supercomputers with hardware class function (multicast) capability. This is achieved by exploiting the fact that the communication patterns of dense matrix inversion can be served by hardware class functions, which results in faster execution times.
Internet technologies and requirements for telemedicine
NASA Technical Reports Server (NTRS)
Lamaster, H.; Meylor, J.; Meylor, F.
1997-01-01
Internet technologies are briefly introduced and those applicable for telemedicine are reviewed. Multicast internet technologies are described. The National Aeronautics and Space Administration (NASA) 'Telemedicine Space-bridge to Russia' project is described and used to derive requirements for internet telemedicine. Telemedicine privacy and Quality of Service (QoS) requirements are described.
47 CFR 76.66 - Satellite broadcast signal carriage.
Code of Federal Regulations, 2010 CFR
2010-10-01
... free over-the-air signal, including multicast and high definition digital signals. (c) Election cycle... first retransmission consent-mandatory carriage election cycle shall be for a four-year period... carriage election cycle, and all cycles thereafter, shall be for a period of three years (e.g. the second...
47 CFR 73.1201 - Station identification.
Code of Federal Regulations, 2014 CFR
2014-10-01
...; Provided, That the name of the licensee, the station's frequency, the station's channel number, as stated... number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station with major channel number 26 may use 26.1 to identify...
47 CFR 73.1201 - Station identification.
Code of Federal Regulations, 2013 CFR
2013-10-01
...; Provided, That the name of the licensee, the station's frequency, the station's channel number, as stated... number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station with major channel number 26 may use 26.1 to identify...
47 CFR 73.1201 - Station identification.
Code of Federal Regulations, 2011 CFR
2011-10-01
...; Provided, That the name of the licensee, the station's frequency, the station's channel number, as stated... number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station with major channel number 26 may use 26.1 to identify...
47 CFR 73.1201 - Station identification.
Code of Federal Regulations, 2012 CFR
2012-10-01
...; Provided, That the name of the licensee, the station's frequency, the station's channel number, as stated... number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station with major channel number 26 may use 26.1 to identify...
Web server for priority ordered multimedia services
NASA Astrophysics Data System (ADS)
Celenk, Mehmet; Godavari, Rakesh K.; Vetnes, Vermund
2001-10-01
In this work, our aim is to provide finer priority levels in the design of a general-purpose Web multimedia server with provisions for CM services. The types of services provided include reading/writing a web page, downloading/uploading an audio/video stream, navigating the Web through browsing, and interactive video teleconferencing. The selected priority encoding levels for such operations follow the order of admin read/write, hot page CM and Web multicasting, CM read, Web read, CM write and Web write. Hot pages are the most requested CM streams (e.g., the newest movies, video clips, and HDTV channels) and Web pages (e.g., portal pages of the commercial Internet search engines). Maintaining a list of these hot Web pages and CM streams in a content-addressable buffer enables a server to multicast hot streams with lower latency and higher system throughput. Cold Web pages and CM streams are treated as regular Web and CM requests. Interactive CM operations such as pause (P), resume (R), fast-forward (FF), and rewind (RW) have to be executed without allocation of extra resources. The proposed multimedia server model is a part of the distributed network with load balancing schedulers. The SM is connected to an integrated disk scheduler (IDS), which supervises an allocated disk manager. The IDS follows the same priority handling as the SM, and implements a SCAN disk-scheduling method for improved disk access and higher throughput. Different disks are used for the Web and CM services in order to meet the QoS requirements of CM services. The IDS output is forwarded to an Integrated Transmission Scheduler (ITS). The ITS creates a priority-ordered buffering of the retrieved Web pages and CM data streams that are fed into an autoregressive moving average (ARMA) based traffic shaping circuitry before being transmitted through the network.
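The SCAN discipline mentioned for the integrated disk scheduler orders pending requests as an elevator sweep; a minimal sketch follows (cylinder numbers and the single-sweep model are illustrative, and the priority classes listed above would be applied before this per-request ordering).

```python
# Sketch of the SCAN ("elevator") ordering mentioned for the integrated disk
# scheduler: serve pending requests in increasing cylinder order from the current
# head position, then sweep back through the remaining ones in decreasing order.
def scan_order(head, requests, direction="up"):
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down if direction == "up" else down + up

pending = [98, 183, 37, 122, 14, 124, 65, 67]
print(scan_order(head=53, requests=pending))
# [65, 67, 98, 122, 124, 183, 37, 14]
```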
The Development of CyberLearning in Dual-Mode: Higher Education Institutions in Taiwan.
ERIC Educational Resources Information Center
Chen, Yau Jane
2002-01-01
Open and distance education in Taiwan has evolved into cyberlearning. Over half (56 percent) of the conventional universities and colleges have been upgraded to dual-mode institutions offering real-time multicast instructional systems using videoconferencing, cable television, virtual classrooms, and curriculum-on-demand systems. The Ministry of…
Digital Video and the Internet: A Powerful Combination.
ERIC Educational Resources Information Center
Barron, Ann E.; Orwig, Gary W.
1995-01-01
Provides an overview of digital video and outlines hardware and software necessary for interactive training on the World Wide Web and for videoconferences via the Internet. Lists sites providing additional information on digital video, on CU-SeeMe software, and on MBONE (Multicast BackBONE), a technology that permits real-time transmission of…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-30
...'s channel number, as stated on the station's license, and/or the station's network affiliation may... Stations, choosing to include the station's channel number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station...
Multimedia C for Remote Language Teaching over SuperJANET.
ERIC Educational Resources Information Center
Matthews, E.; And Others
1996-01-01
Describes work carried out as part of a remote language teaching research investigation, which is looking into the use of multicast, multimedia conferencing over SuperJANET. The fundamental idea is to investigate the feasibility of sharing language teaching resources among universities within the United Kingdom by using the broadband SuperJANET…
37 CFR 386.2 - Royalty fee for secondary transmission by satellite carriers.
Code of Federal Regulations, 2011 CFR
2011-07-01
... BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES ADJUSTMENT OF ROYALTY FEES FOR... a given month. (2) In the case of a station engaged in digital multicasting, the rates set forth in paragraph (b) of this section shall apply to each digital stream that a satellite carrier or distributor...
37 CFR 386.2 - Royalty fee for secondary transmission by satellite carriers.
Code of Federal Regulations, 2013 CFR
2013-07-01
... BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES ADJUSTMENT OF ROYALTY FEES FOR... a given month. (2) In the case of a station engaged in digital multicasting, the rates set forth in paragraph (b) of this section shall apply to each digital stream that a satellite carrier or distributor...
75 FR 53198 - Rate Adjustment for the Satellite Carrier Compulsory License
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-31
... LIBRARY OF CONGRESS Copyright Royalty Board 37 CFR Part 386 [Docket No. 2010-4 CRB Satellite Rate] Rate Adjustment for the Satellite Carrier Compulsory License AGENCY: Copyright Royalty Board, Library... last day of a given month. (2) In the case of a station engaged in digital multicasting, the rates set...
37 CFR 386.2 - Royalty fee for secondary transmission by satellite carriers.
Code of Federal Regulations, 2014 CFR
2014-07-01
... BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES ADJUSTMENT OF ROYALTY FEES FOR... a given month. (2) In the case of a station engaged in digital multicasting, the rates set forth in paragraph (b) of this section shall apply to each digital stream that a satellite carrier or distributor...
37 CFR 386.2 - Royalty fee for secondary transmission by satellite carriers.
Code of Federal Regulations, 2012 CFR
2012-07-01
... BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES ADJUSTMENT OF ROYALTY FEES FOR... a given month. (2) In the case of a station engaged in digital multicasting, the rates set forth in paragraph (b) of this section shall apply to each digital stream that a satellite carrier or distributor...
Multipoint Multimedia Conferencing System with Group Awareness Support and Remote Management
ERIC Educational Resources Information Center
Osawa, Noritaka; Asai, Kikuo
2008-01-01
A multipoint, multimedia conferencing system called FocusShare is described that uses IPv6/IPv4 multicasting for real-time collaboration, enabling video, audio, and group awareness information to be shared. Multiple telepointers provide group awareness information and make it easy to share attention and intention. In addition to pointing with the…
Using Interactive Broadband Multicasting in a Museum Lifelong Learning Program.
ERIC Educational Resources Information Center
Steinbach, Leonard
The Cleveland Museum of Art has embarked on an innovative approach for delivering high quality video-on-demand and live interactive cultural programming, along with Web-based complementary material, to seniors in assisted living residence facilities, community-based centers, and disabled persons in their homes. The project is made possible in part…
Cooperation and information replication in wireless networks.
Poularakis, Konstantinos; Tassiulas, Leandros
2016-03-06
A significant portion of today's network traffic is due to recurring downloads of a few popular contents. It has been observed that replicating the latter in caches installed at network edges, close to users, can drastically reduce network bandwidth usage and improve content access delay. Such caching architectures are gaining increasing interest in recent years as a way of dealing with the explosive traffic growth, fuelled further by the downward slope in storage space price. In this work, we provide an overview of caching with a particular emphasis on emerging network architectures that enable caching at the radio access network. In this context, novel challenges arise due to the broadcast nature of the wireless medium, which allows simultaneously serving multiple users tuned into a multicast stream, and the mobility of the users who may be frequently handed off from one cell tower to another. Existing results indicate that caching at the wireless edge has a great potential in removing bottlenecks on the wired backbone networks. Taking into consideration the schedule of multicast service and mobility profiles is crucial to extract maximum benefit in network performance. © 2016 The Author(s).
Protocol Architecture Model Report
NASA Technical Reports Server (NTRS)
Dhas, Chris
2000-01-01
NASA's Glenn Research Center (GRC) defines and develops advanced technology for high priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to examine protocols and architectures for an In-Space Internet Node. CNS has developed a methodology for network reference models to support NASA's four mission areas: Earth Science, Space Science, Human Exploration and Development of Space (HEDS), and Aerospace Technology. This report applies the methodology to three space Internet-based communications scenarios for future missions. CNS has conceptualized, designed, and developed space Internet-based communications protocols and architectures for each of the independent scenarios. The scenarios are: Scenario 1: Unicast communications between a Low-Earth-Orbit (LEO) spacecraft in-space Internet node and a ground terminal Internet node via a Tracking and Data Relay Satellite (TDRS) transfer; Scenario 2: Unicast communications between a Low-Earth-Orbit (LEO) International Space Station and a ground terminal Internet node via a TDRS transfer; Scenario 3: Multicast Communications (or "Multicasting"), 1 Spacecraft to N Ground Receivers, N Ground Transmitters to 1 Ground Receiver via a Spacecraft.
Lee, Chaewoo
2014-01-01
Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862
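The assignment problem the ILP captures can be illustrated with a toy brute-force search: pick an MCS per SVC layer, check the frame-time budget, and count how many layers each user can decode. All rates, MCS capacities, and user channel classes below are invented for illustration, and the formulation is a simplification of the paper's ILP.

```python
# Toy brute-force version of the layer-to-MCS assignment that the paper's ILP
# formalizes. Layer rates, MCS capacities and user channel classes are illustrative.
from itertools import product

LAYER_RATES = [400, 300, 300]          # kbps needed by SVC layers 0 (base), 1, 2
MCS_CAPACITY = [500, 1000, 2000]       # kbps if an MCS used the whole frame
USER_MAX_MCS = [0, 0, 1, 1, 2, 2, 2]   # highest MCS index each user can decode

def evaluate(assignment):
    """assignment[l] = MCS index used for layer l. Returns (utility, time_used) or None."""
    time_used = sum(r / MCS_CAPACITY[m] for r, m in zip(LAYER_RATES, assignment))
    if time_used > 1.0:
        return None                    # violates the frame-time budget
    utility = 0
    for max_mcs in USER_MAX_MCS:       # a user gets layers 0..k while it can decode them all
        for m in assignment:
            if m > max_mcs:
                break
            utility += 1
    return utility, time_used

best = max((evaluate(a), a) for a in product(range(len(MCS_CAPACITY)),
                                             repeat=len(LAYER_RATES))
           if evaluate(a) is not None)
print(best)   # ((total layers delivered, frame fraction used), MCS per layer)
```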
CLON: Overlay Networks and Gossip Protocols for Cloud Environments
NASA Astrophysics Data System (ADS)
Matos, Miguel; Sousa, António; Pereira, José; Oliveira, Rui; Deliot, Eric; Murray, Paul
Although epidemic or gossip-based multicast is a robust and scalable approach to reliable data dissemination, its inherent redundancy results in high resource consumption on both links and nodes. This problem is aggravated in settings that have costlier or resource-constrained links, as happens in Cloud Computing infrastructures composed of several interconnected data centers across the globe.
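The redundancy referred to here is easy to see in a toy push-gossip simulation, where every informed node forwards to a few random peers each round and most nodes end up receiving the message several times; the parameters are illustrative, and this is not CLON's protocol.

```python
# Small simulation of the redundancy inherent in push gossip: every informed node
# forwards the message to `fanout` random peers each round, so many nodes receive
# the same payload several times. Parameters are illustrative.
import random

def push_gossip(num_nodes=1000, fanout=4, rounds=8, seed=1):
    random.seed(seed)
    informed = {0}                       # node 0 multicasts the message
    deliveries = 0                       # total transmissions, duplicates included
    for _ in range(rounds):
        newly = set()
        for node in informed:
            for peer in random.sample(range(num_nodes), fanout):
                deliveries += 1
                if peer not in informed:
                    newly.add(peer)
        informed |= newly
    return len(informed), deliveries

reached, sent = push_gossip()
print(f"reached {reached}/1000 nodes with {sent} transmissions "
      f"({sent / reached:.1f} per informed node)")
```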
A Security Architecture for Fault-Tolerant Systems
1993-06-03
aspect of our effort to achieve better performance is integrating the system into microkernel-based operating systems. 4 Summary and discussion In...135-171, June 1983. [vRBC+92] R. van Renesse, K. Birman, R. Cooper, B. Glade, and P. Stephenson. Reliable multicast between microkernels. In...Proceedings of the USENIX Microkernels and Other Kernel Architectures Workshop, April 1992.
The Xpress Transfer Protocol (XTP): A tutorial (expanded version)
NASA Technical Reports Server (NTRS)
Sanders, Robert M.; Weaver, Alfred C.
1990-01-01
The Xpress Transfer Protocol (XTP) is a reliable, real-time, lightweight transfer layer protocol. Current transport layer protocols such as DoD's Transmission Control Protocol (TCP) and ISO's Transport Protocol (TP) were not designed for the next generation of high speed, interconnected reliable networks such as fiber distributed data interface (FDDI) and the gigabit/second wide area networks. Unlike all previous transport layer protocols, XTP is being designed to be implemented in hardware as a VLSI chip set. By streamlining the protocol, combining the transport and network layers and utilizing the increased speed and parallelization possible with a VLSI implementation, XTP will be able to provide the end-to-end data transmission rates demanded in high speed networks without compromising reliability and functionality. This paper describes the operation of the XTP protocol and, in particular, its error, flow and rate control; inter-networking addressing mechanisms; and multicast support features, as defined in the XTP Protocol Definition Revision 3.4.
Reliable communication in the presence of failures
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Joseph, Thomas A.
1987-01-01
The design and correctness of a communication facility for a distributed computer system are reported on. The facility provides support for fault-tolerant process groups in the form of a family of reliable multicast protocols that can be used in both local- and wide-area networks. These protocols attain high levels of concurrency, while respecting application-specific delivery ordering constraints, and have varying cost and performance that depend on the degree of ordering desired. In particular, a protocol that enforces causal delivery orderings is introduced and shown to be a valuable alternative to conventional asynchronous communication protocols. The facility also ensures that the processes belonging to a fault-tolerant process group will observe consistent orderings of events affecting the group as a whole, including process failures, recoveries, migration, and dynamic changes to group properties like member rankings. A review of several uses for the protocols in the ISIS system, which supports fault-tolerant resilient objects and bulletin boards, illustrates the significant simplification of higher-level algorithms made possible by our approach.
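The causal delivery constraint such protocols enforce is commonly expressed with vector timestamps: a message from sender j is delivered once it is the next expected message from j and everything it causally depends on has been delivered. The sketch below illustrates that delivery rule only (class names ours), not the facility's actual protocol.

```python
# Sketch of the causal delivery rule such protocols enforce, using vector
# timestamps: a message from sender j is delivered once it is the next message
# from j and every message it causally depends on has already been delivered.
class CausalReceiver:
    def __init__(self, num_procs):
        self.vc = [0] * num_procs        # messages delivered from each process
        self.pending = []                # messages that arrived out of causal order

    def _deliverable(self, sender, ts):
        return (ts[sender] == self.vc[sender] + 1 and
                all(ts[k] <= self.vc[k] for k in range(len(ts)) if k != sender))

    def receive(self, sender, ts, payload):
        self.pending.append((sender, ts, payload))
        delivered = []
        progress = True
        while progress:                  # drain everything that has become deliverable
            progress = False
            for msg in list(self.pending):
                s, t, p = msg
                if self._deliverable(s, t):
                    self.vc[s] += 1
                    delivered.append(p)
                    self.pending.remove(msg)
                    progress = True
        return delivered

r = CausalReceiver(num_procs=3)
print(r.receive(1, [1, 1, 0], "reply"))      # depends on p0's first message: held back
print(r.receive(0, [1, 0, 0], "question"))   # delivers the question, then the reply
```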
OAM-labeled free-space optical flow routing.
Gao, Shecheng; Lei, Ting; Li, Yangjin; Yuan, Yangsheng; Xie, Zhenwei; Li, Zhaohui; Yuan, Xiaocong
2016-09-19
Space-division multiplexing allows unprecedented scaling of bandwidth density for optical communication. Routing spatial channels among transmission ports is critical for future scalable optical networks; however, there is still no characteristic parameter to label the overlapped optical carriers. Here we propose a free-space optical flow routing (OFR) scheme by using optical orbital angular momentum (OAM) states to label optical flows and simultaneously steer each flow according to its OAM state. With an OAM multiplexer and a reconfigurable OAM demultiplexer, massive individual optical flows can be routed to the demanded optical ports. In the routing process, the OAM beams act as data carriers while their topological charges act as each carrier's labels. Using this scheme, we experimentally demonstrate switching, multicasting and filtering network functions by simultaneously steering 10 input optical flows on demand to 10 output ports. The demonstration of data-carrying OFR with nonreturn-to-zero signals shows that this process enables synchronous processing of massive spatial channels and flexible optical networking.
Performance Evaluation of Peer-to-Peer Progressive Download in Broadband Access Networks
NASA Astrophysics Data System (ADS)
Shibuya, Megumi; Ogishi, Tomohiko; Yamamoto, Shu
P2P (Peer-to-Peer) file sharing architectures have scalable and cost-effective features. Hence, the application of P2P architectures to media streaming is attractive and expected to be an alternative to the current video streaming using IP multicast or content delivery systems because the current systems require expensive network infrastructures and large scale centralized cache storage systems. In this paper, we investigate the P2P progressive download enabling Internet video streaming services. We demonstrated the capability of the P2P progressive download in both laboratory test network as well as in the Internet. Through the experiments, we clarified the contribution of the FTTH links to the P2P progressive download in the heterogeneous access networks consisting of FTTH and ADSL links. We analyzed the cause of some download performance degradation occurred in the experiment and discussed about the effective methods to provide the video streaming service using P2P progressive download in the current heterogeneous networks.
75 FR 52267 - Waiver of Statement of Account Filing Deadline for the 2010/1 Period
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-25
... available in a print format, a PDF format, and a software ``fill-in'' format created by Gralin Associates... retransmission of multicast streams. The paper and PDF versions of the form have been available to cable... recognize that the paper and PDF versions of the SOA have been available since July, many large and small...
High Performance Computing Multicast
2012-02-01
responsiveness, first-tier applications often implement replicated in-memory key-value stores, using them to store state or to cache data from services...alternative that replicates data, combines agreement on update ordering with amnesia freedom, and supports both good scalability and fast response.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-29
... to offer remote multi-cast ITCH Wave Ports for clients co-located at other third party data centers... delivery of third party market data to market center clients via a wireless network using millimeter wave... Multi-cast ITCH Wave Ports for clients co-located at other third-party data centers, through which...
Dhamodharan, Udaya Suriya Raj Kumar; Vayanaperumal, Rajamani
2015-01-01
Securing wireless sensor networks is highly indispensable. Many researchers have documented highly critical attacks of various kinds against wireless sensor networks. The Sybil attack is a massively destructive attack on a sensor network in which numerous forged identities, alongside genuine ones, are used to gain illegal entry into the network. Discerning Sybil, sinkhole, and wormhole attacks during multicasting is a demanding job in a wireless sensor network. Basically, a Sybil attack is one in which a node misrepresents its identity to other nodes. Communication with an illegal node results in data loss and endangers the network. The existing Random Password Comparison method offers only a scheme that verifies node identities by analyzing their neighbors. To resolve this problem, we surveyed the Sybil attack and propose a combined CAM-PVM (compare and match-position verification method) with MAP (message authentication and passing) for detecting, eliminating, and eventually preventing the entry of Sybil nodes into the network. We propose a scheme that assures security for wireless sensor networks, dealing with attacks of these kinds in both unicasting and multicasting. PMID:26236773
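For illustration only, the position-consistency idea behind a scheme such as CAM-PVM can be sketched as follows; the data layout, the ranging tolerance, and the co-location threshold are assumptions for the sketch, not the authors' protocol.

    # Hypothetical sketch of position verification against Sybil identities:
    # an identity is suspect if its claimed position disagrees with ranging
    # measurements, or if several identities claim (almost) the same spot.
    import math

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def verify_claims(claims, ranging, tol=2.0, colocate=0.5):
        """claims: {node_id: (x, y)}; ranging: {(verifier, node): measured distance}."""
        suspects = set()
        # Claimed positions should match distances measured by verifier nodes.
        for (verifier, node), measured in ranging.items():
            if verifier in claims and node in claims:
                expected = dist(claims[verifier], claims[node])
                if abs(expected - measured) > tol:
                    suspects.add(node)
        # Many identities packed onto one physical position suggest a Sybil node.
        ids = list(claims)
        for i in range(len(ids)):
            for j in range(i + 1, len(ids)):
                if dist(claims[ids[i]], claims[ids[j]]) < colocate:
                    suspects.update({ids[i], ids[j]})
        return suspects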
Toward fidelity between specification and implementation
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.; Morrison, Jeff; Wu, Yunqing
1994-01-01
This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.
A Novel Group Coordination Protocol for Collaborative Multimedia Systems
1998-01-01
technology have advanced considerably, efficient group coordination support for applications characterized by synchronous and wide-area groupwork is...As a component within a general coordination architecture for many-to-many groupwork, floor control coexists with protocols for reliable ordered...multicast and media synchronization at a sub-application level. Orchestration of multiparty groupwork with fine-grained and fair floor control is an
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-28
... that is in test mode in excess of one. (c)-(f) No change. (g) Other Port Fees Remote Multi-cast ITCH... environment to test upcoming NASDAQ releases and product enhancements, as well as test software prior to... public in accordance with the provisions of 5 U.S.C. 552, will be available for Web site viewing and...
Internetworking satellite and local exchange networks for personal communications applications
NASA Technical Reports Server (NTRS)
Wolff, Richard S.; Pinck, Deborah
1993-01-01
The demand for personal communications services has shown unprecedented growth, and the next decade and beyond promise an era in which the needs for ubiquitous, transparent and personalized access to information will continue to expand in both scale and scope. The exchange of personalized information is growing from two-way voice to include data communications, electronic messaging and information services, image transfer, video, and interactive multimedia. The emergence of new land-based and satellite-based wireless networks illustrates the expanding scale and trend toward globalization and the need to establish new local exchange and exchange access services to meet the communications needs of people on the move. An important issue is to identify the roles that satellite networking can play in meeting these new communications needs. The unique capabilities of satellites, in providing coverage to large geographic areas, reaching widely dispersed users, for position location determination, and in offering broadcast and multicast services, can complement and extend the capabilities of terrestrial networks. As an initial step in exploring the opportunities afforded by the merger of satellite-based and land-based networks, several experiments utilizing the NASA ACTS satellite and the public switched local exchange network were undertaken to demonstrate the use of satellites in the delivery of personal communications services.
Improved Lower Bounds on the Price of Stability of Undirected Network Design Games
NASA Astrophysics Data System (ADS)
Bilò, Vittorio; Caragiannis, Ioannis; Fanelli, Angelo; Monaco, Gianpiero
Bounding the price of stability of undirected network design games with fair cost allocation is a challenging open problem in the Algorithmic Game Theory research agenda. Even though the generalization of such games in directed networks is well understood in terms of the price of stability (it is exactly H_n, the n-th harmonic number, for games with n players), far less is known for network design games in undirected networks. The upper bound carries over to this case as well, while the best known lower bound is 42/23 ≈ 1.826. For more restricted but interesting variants of such games such as broadcast and multicast games, sublogarithmic upper bounds are known, while the best known lower bound is 12/7 ≈ 1.714. In the current paper, we improve the lower bounds as follows. We break the psychological barrier of 2 by showing that the price of stability of undirected network design games is at least 348/155 ≈ 2.245. Our proof uses a recursive construction of a network design game with a simple gadget as the main building block. For broadcast and multicast games, we present new lower bounds of 20/11 ≈ 1.818 and 1.862, respectively.
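For reference (a standard definition, not part of the paper), the harmonic number cited above is

    % n-th harmonic number: the exact price of stability for directed network
    % design games with n players; it grows like the natural logarithm of n.
    H_n = \sum_{i=1}^{n} \frac{1}{i} = 1 + \frac{1}{2} + \cdots + \frac{1}{n} \approx \ln n + \gamma .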
Traffic Generator (TrafficGen) Version 1.4.2: Users Guide
2016-06-01
events, the user has to enter them manually. We will research and implement a way to better define and organize the multicast addresses so they can be...the network with Transmission Control Protocol and User Datagram Protocol Internet Protocol traffic. Each node generating network traffic in an...TrafficGen Graphical User Interface (GUI); 3.1 Anatomy of the User Interface; 3.2 Scenario Configuration and MGEN Files; 4. Working with
GUMP: Adapting Client/Server Messaging Protocols into Peer-to-Peer Serverless Environments
2010-06-11
and other related metadata, such as message receiver ID (for supporting multiple connections) and so forth. The Proxy consumes the message and uses...the underlying discovery subsystem and multicast to process the message and translate the request into behaviour suitable for the underlying...communication, i.e. a chat. Jingle (XEP-0166) [26] is a related specification that defines an extension to the XMPP protocol for initiating and
Tactical Mobile Communications (Communications tactiques mobiles)
1999-11-01
[13]. ...randomly at the network nodes. Each multicast group consists of the source node plus at least... Our studies do, in fact, support this conjecture. ...investigate the MMR concept in some more detail. The study was contracted to a group which... Multi-role denotes the capability to support a...through the HW- and SW-resources of the frontends can be incorporated in a task-dedicated GPU. Functions can be grouped into four categories: MMR
Multimedia Data Capture with Multicast Dissemination for Online Distance Learning
2001-12-01
Juan Gril and Dr. Don Brutzman to wrap the multiple videos in a user-friendly environment. The web pages also contain the original PowerPoint...this CD, Juan Gril, a volunteer for the Siggraph 2001 Online Committee, created web pages that match the style and functionality desired by the...leader. The Committee for 2001 consisted of Don Brutzman, Stephen Matsuba, Mike Collins, Allen Dutton, Juan Gril, Mike Hunsberger, Jerry Isdale
2002-09-01
Secure Multicast... i. Message Digests and Message Authentication Codes (MACs)...that is, the needs of the VE will determine what the design will look like (e.g., reliable vs. unreliable data communications). In general, there...[Molva00] and [Abdalla00]. i. Message Digests and Message Authentication Codes (MACs) Message digests and MACs are used for data integrity verification
Robust Airborne Networking Extensions (RANGE)
2008-02-01
IMUNES [13] project, which provides an entire network stack virtualization and topology control inside a single FreeBSD machine. The emulated topology..."Multicast versus broadcast in a MANET," in ADHOC-NOW, 2004, pp. 14–27. [9] J. Mukherjee, R. Atwood, "Rendezvous point relocation in protocol independent...computer with an Ethernet connection, or a Linux virtual machine on some other (e.g., Windows) operating system, should work. 2.1 Patching the source code
Contact Graph Routing Enhancements Developed in ION for DTN
NASA Technical Reports Server (NTRS)
Segui, John S.; Burleigh, Scott
2013-01-01
The Interplanetary Overlay Network (ION) software suite is an open-source, flight-ready implementation of networking protocols including the Delay/Disruption Tolerant Networking (DTN) Bundle Protocol (BP), the CCSDS (Consultative Committee for Space Data Systems) File Delivery Protocol (CFDP), and many others, including the Contact Graph Routing (CGR) DTN routing system. While DTN offers the capability to tolerate disruption and long signal propagation delays in transmission, without an appropriate routing protocol, no data can be delivered. CGR was built for space exploration networks with scheduled communication opportunities (typically based on trajectories and orbits), represented as a contact graph. Since CGR uses knowledge of future connectivity, the contact graph can grow rather large, and so efficient processing is desired. These enhancements allow CGR to scale to predicted NASA space network complexities and beyond. This software improves upon CGR by adopting an earliest-arrival-time cost metric and using the Dijkstra path selection algorithm. Moving to Dijkstra path selection also enables construction of an earliest-arrival-time tree for multicast routing. The enhancements have been rolled into ION 3.0, available on sourceforge.net.
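A minimal sketch of the earliest-arrival-time idea over a contact plan is shown below; the contact-tuple layout and function names are illustrative assumptions and do not reproduce the ION/CGR implementation.

    # Dijkstra-style earliest-arrival search over a contact plan (illustrative only).
    import heapq

    def earliest_arrival(contacts, source, dest, t0):
        """contacts: list of (from_node, to_node, start, end, owlt) tuples."""
        best = {source: t0}                      # earliest known arrival time per node
        heap = [(t0, source)]
        while heap:
            t, node = heapq.heappop(heap)
            if node == dest:
                return t
            if t > best.get(node, float("inf")):
                continue
            for frm, to, start, end, owlt in contacts:
                if frm != node or t > end:
                    continue                     # contact is already over
                depart = max(t, start)           # wait for the contact to open
                arrive = depart + owlt           # add the one-way light time
                if arrive < best.get(to, float("inf")):
                    best[to] = arrive
                    heapq.heappush(heap, (arrive, to))
        return None                              # destination unreachable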
Connectivity Restoration in Wireless Sensor Networks via Space Network Coding.
Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing
2017-04-20
The problem of finding the number and optimal positions of relay nodes for restoring the network connectivity in partitioned Wireless Sensor Networks (WSNs) is Non-deterministic Polynomial-time hard (NP-hard), and thus heuristic methods are preferred to solve it. This paper proposes a novel polynomial time heuristic algorithm, namely, Relay Placement using Space Network Coding (RPSNC), to solve this problem, where Space Network Coding, also called Space Information Flow (SIF), is a new research paradigm that studies network coding in Euclidean space, in which extra relay nodes can be introduced to reduce the cost of communication. Unlike contemporary schemes that are often based on a Minimum Spanning Tree (MST), a Euclidean Steiner Minimal Tree (ESMT) or a combination of MST with ESMT, RPSNC is a new min-cost multicast space network coding approach that combines Delaunay triangulation and non-uniform partitioning techniques for generating a number of candidate relay nodes; linear programming is then applied for choosing the optimal relay nodes and computing their connection links with terminals. Subsequently, an equilibrium method is used to refine the locations of the optimal relay nodes by moving them to balanced positions. RPSNC can adapt to any density distribution of relay nodes and terminals. The performance and complexity of RPSNC are analyzed, and its performance is validated through simulation experiments.
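The candidate-generation step can be pictured with a small sketch, assuming SciPy is available; proposing the centroid of each Delaunay triangle is an illustrative simplification, and the paper's non-uniform partitioning and linear-programming selection steps are not shown.

    # Illustrative candidate-relay generation: triangulate the terminals and
    # propose each Delaunay triangle's centroid as a candidate relay position.
    import numpy as np
    from scipy.spatial import Delaunay

    def candidate_relays(terminals):
        pts = np.asarray(terminals, dtype=float)
        tri = Delaunay(pts)
        # One candidate per triangle: the mean of its three vertices.
        return pts[tri.simplices].mean(axis=1)

    print(candidate_relays([(0, 0), (10, 0), (5, 8), (12, 7), (3, 12)]))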
Collaboration Services: Enabling Chat in Disadvantaged Grids
2014-06-01
grids in the tactical domain" [2]. The main focus of this group is to identify what we call tactical SOA foundation services. By this we mean which...Here, only IPv4 is supported, as differences relating to IPv4 and IPv6 addressing meant that this functionality was not easily extended to use IPv6 ...multicast groups. Our IPv4 implementation is fully compliant with the specification, whereas the IPv6 implementation uses our own interpretation of
Design and Implementation of the MARG Human Body Motion Tracking System
2004-10-01
OPTOTRAK from Northern Digital Inc. is a typical example of a marker-based system [10]. Another is the...technique called tunneling is used to overcome this problem. Tunneling is a software solution that runs on the end point routers/computers and allows...multicast packets to traverse the network by putting them into unicast packets. MUTUP overcomes the tunneling problem using shared memory in the
Network connectivity enhancement by exploiting all optical multicast in semiconductor ring laser
NASA Astrophysics Data System (ADS)
Siraj, M.; Memon, M. I.; Shoaib, M.; Alshebeili, S.
2015-03-01
Smartphone and tablet applications will allow troops to execute, control, and analyze sophisticated operations, with commanders providing crucial documents directly to troops wherever and whenever needed. Wireless mesh networks (WMNs) are a cutting-edge networking technology capable of supporting the Joint Tactical Radio System (JTRS). WMNs can provide the much-needed bandwidth for applications such as handheld radios and communication for airborne and ground vehicles. Routing management tasks can be handled efficiently in WMNs through a central command and control center. Because the spectrum is congested, cognitive radios (CRs) are a welcome technology that provides much-needed bandwidth. They can self-configure, adapt to user requirements, provide dynamic spectrum access to minimize interference, and deliver optimal power output. Indoor environments sometimes suffer from poor signal quality and reduced coverage. In this paper, a solution utilizing CR WMNs over an optical network is presented by creating nanocells (PCs) inside the indoor environment. The phenomenon of four-wave mixing (FWM) in a semiconductor ring laser (SRL) is exploited to generate all-optical multicast, so that the same signal is transmitted at different wavelengths. Every PC is assigned a unique wavelength. Using CR technology in conjunction with PCs will not only solve the network coverage issue but also provide good bandwidth to the secondary users.
NASA Astrophysics Data System (ADS)
Duan, Haoran
1997-12-01
This dissertation presents the concepts, principles, performance, and implementation of input queuing and cell-scheduling modules for the Illinois Pulsar-based Optical INTerconnect (iPOINT) input-buffered Asynchronous Transfer Mode (ATM) testbed. Input queuing (IQ) ATM switches are well suited to meet the requirements of current and future ultra-broadband ATM networks. The IQ structure imposes minimum memory bandwidth requirements for cell buffering, tolerates bursty traffic, and utilizes memory efficiently for multicast traffic. The lack of efficient cell queuing and scheduling solutions has been a major barrier to building high-performance, scalable IQ-based ATM switches. This dissertation proposes a new Three-Dimensional Queue (3DQ) and a novel Matrix Unit Cell Scheduler (MUCS) to remove this barrier. 3DQ uses a linked-list architecture based on Synchronous Random Access Memory (SRAM) to combine the individual advantages of per-virtual-circuit (per-VC) queuing, priority queuing, and N-destination queuing. It avoids Head of Line (HOL) blocking and provides per-VC Quality of Service (QoS) enforcement mechanisms. Computer simulation results verify the QoS capabilities of 3DQ. For multicast traffic, 3DQ provides efficient usage of cell buffering memory by storing multicast cells only once. Further, the multicast mechanism of 3DQ prevents a congested destination port from blocking other less-loaded ports. The 3DQ principle has been prototyped in the Illinois Input Queue (iiQueue) module. Using Field Programmable Gate Array (FPGA) devices and SRAM modules, integrated on a Printed Circuit Board (PCB), iiQueue can process incoming traffic at 800 Mb/s. Using faster circuit technology, the same design is expected to operate at the OC-48 rate (2.5 Gb/s). MUCS resolves the output contention by evaluating the weight index of each candidate and selecting the heaviest. It achieves near-optimal scheduling and has a very short response time. The algorithm originates from a heuristic strategy that leads to 'socially optimal' solutions, yielding a maximum number of contention-free cells being scheduled. A novel mixed digital-analog circuit has been designed to implement the MUCS core functionality. The MUCS circuit maps the cell scheduling computation to capacitor charging and discharging procedures that are conducted fully in parallel. The design has a uniform circuit structure, low interconnect counts, and low chip I/O counts. Using 2 μm CMOS technology, the design operates on a 100 MHz clock and finds a near-optimal solution within a linear processing time. The circuit has been verified at the transistor level by HSPICE simulation. During this research, a five-port IQ-based optoelectronic iPOINT ATM switch has been developed and demonstrated. It has been fully functional with an aggregate throughput of 800 Mb/s. The second-generation IQ-based switch is currently under development. Equipped with iiQueue modules and the MUCS module, the new switch system will deliver a multi-gigabit aggregate throughput, eliminate HOL blocking, provide per-VC QoS, and achieve near-100% link bandwidth utilization. Complete documentation of the input modules and trunk module for the existing testbed, and complete documentation of 3DQ, iiQueue, and MUCS for the second-generation testbed, are given in this dissertation.
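A software sketch of the "select the heaviest contention-free candidate" idea is given below; it is only loosely in the spirit of MUCS, which is realized as a parallel mixed digital-analog circuit rather than sequential code, and the weight-matrix convention is an assumption.

    # Greedy "heaviest first" contention resolution for an input-queued switch.
    def greedy_schedule(weights):
        """weights[i][j]: weight index of input i contending for output j (0 = no cell)."""
        n = len(weights)
        cells = [(weights[i][j], i, j)
                 for i in range(n) for j in range(n) if weights[i][j] > 0]
        cells.sort(reverse=True)                 # heaviest candidates first
        used_in, used_out, grants = set(), set(), []
        for w, i, j in cells:
            if i not in used_in and j not in used_out:
                grants.append((i, j))            # contention-free grant
                used_in.add(i)
                used_out.add(j)
        return grants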
The Use of End-to-End Multicast Measurements for Characterizing Internal Network Behavior
2002-08-01
dropping on the basis of Random Early Detection (RED) [17] is another mechanism by which packet loss may become decorrelated. It remains to be seen whether...this mechanism will be widely deployed in communications networks. On the other hand, the use of RED to merely mark packets will not break correlations...Tail and Random Early Detection (RED) buffer discard methods [17]. We compared the inferred loss and delay with actual probe loss and delay. We found
Chip-set for quality of service support in passive optical networks
NASA Astrophysics Data System (ADS)
Ringoot, Edwin; Hoebeke, Rudy; Slabbinck, B. Hans; Verhaert, Michel
1998-10-01
In this paper the design of a chip-set for QoS provisioning in ATM-based Passive Optical Networks is discussed. The implementation of a general-purpose switch chip on the Optical Network Unit is presented, with focus on the design of the cell scheduling and buffer management logic. The cell scheduling logic supports "colored" grants, priority jumping, and weighted round-robin scheduling. The switch chip offers powerful buffer management capabilities enabling the efficient support of GFR and UBR services. Multicast forwarding is also supported. In addition, the architecture of a MAC controller chip developed for a SuperPON access network is introduced. In particular, the permit scheduling logic and its implementation on the Optical Line Termination will be discussed. The chip-set enables the efficient support of services with different service requirements on the SuperPON. The permit scheduling logic built into the MAC controller chip, in combination with the cell scheduling and buffer management capabilities of the switch chip, can be used by network operators to offer guaranteed service performance to delay-sensitive services, and to efficiently and fairly distribute any spare capacity to delay-insensitive services.
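As a rough illustration of weighted round-robin cell scheduling (the scheduling discipline named above, not the chip's actual logic, and with made-up class names and weights):

    # Weighted round-robin: each class may send up to `weight` cells per round.
    from collections import deque

    def wrr_round(queues, weights):
        """queues: {class_name: deque of cells}; weights: {class_name: slots per round}."""
        sent = []
        for cls, weight in weights.items():
            q = queues[cls]
            for _ in range(weight):
                if not q:
                    break                        # empty queue yields its remaining slots
                sent.append(q.popleft())
        return sent

    queues = {"high": deque("AB"), "low": deque("xyz")}
    print(wrr_round(queues, {"high": 3, "low": 1}))   # ['A', 'B', 'x']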
A Real-Time Executive for Multiple-Computer Clusters.
1984-12-01
in a real-time environment is tantamount to speed and efficiency. By effectively co-locating real-time sensors and related processing modules, real...of which there are two kinds: multicast group address - virtually any number of node groups can be assigned a group address so they are all able...interface loopback, internal loopback, clear loopback, go offline, go online, onboard diagnostic, cdr
Automation Hooks Architecture for Flexible Test Orchestration - Concept Development and Validation
NASA Technical Reports Server (NTRS)
Lansdowne, C. A.; Maclean, John R.; Winton, Chris; McCartney, Pat
2011-01-01
The Automation Hooks Architecture Trade Study for Flexible Test Orchestration sought a standardized data-driven alternative to conventional automated test programming interfaces. The study recommended composing the interface using multicast DNS (mDNS/SD) service discovery, Representational State Transfer (RESTful) Web Services, and Automatic Test Markup Language (ATML). We describe additional efforts to rapidly mature the Automation Hooks Architecture candidate interface definition by validating it in a broad spectrum of applications. These activities have allowed us to further refine our concepts and provide observations directed toward objectives of economy, scalability, versatility, performance, severability, maintainability, scriptability and others.
An approach to verification and validation of a reliable multicasting protocol: Extended Abstract
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1995-01-01
This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. This initial version did not handle off-nominal cases such as network partitions or site failures. Meanwhile, the V&V team concurrently developed a formal model of the requirements using a variant of SCR-based state tables. Based on these requirements tables, the V&V team developed test cases to exercise the implementation. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test in the model and implementation agreed, then the test either found a potential problem or verified a required behavior. However, if the execution of a test was different in the model and implementation, then the differences helped identify inconsistencies between the model and implementation. In either case, the dialogue between both teams drove the co-evolution of the model and implementation. We have found that this interactive, iterative approach to development allows software designers to focus on delivery of nominal functionality while the V&V team can focus on analysis of off nominal cases. Testing serves as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP. Although RMP has provided our research effort with a rich set of test cases, it also has practical applications within NASA. For example, RMP is being considered for use in the NASA EOSDIS project due to its significant performance benefits in applications that need to replicate large amounts of data to many network sites.
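The flavour of deriving test cases from a state model can be sketched as follows; the transition-table representation and coverage criterion are simplifying assumptions, not the SCR-based process used for RMP.

    # Derive test event sequences from a state-transition table so that every
    # transition is exercised at least once (breadth-first enumeration).
    from collections import deque

    def derive_tests(transitions, start, max_len=4):
        """transitions: {(state, event): next_state}; returns event sequences."""
        tests, covered = [], set()
        frontier = deque([(start, [])])
        while frontier:
            state, path = frontier.popleft()
            if len(path) >= max_len:
                continue
            for (s, event), nxt in transitions.items():
                if s != state or (s, event) in covered:
                    continue
                covered.add((s, event))          # cover each transition once
                tests.append(path + [event])
                frontier.append((nxt, path + [event]))
        return tests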
NASA Astrophysics Data System (ADS)
Hwang, Eunju; Kim, Kyung Jae; Roijers, Frank; Choi, Bong Dae
In the centralized polling mode in IEEE 802.16e, a base station (BS) polls mobile stations (MSs) for bandwidth reservation in one of three polling modes: unicast, multicast, or broadcast polling. In unicast polling, the BS polls each individual MS to allow it to transmit a bandwidth request packet. This paper presents an analytical model for the unicast polling of bandwidth requests in IEEE 802.16e networks over a Gilbert-Elliott error channel. We derive the probability distribution of the delay of bandwidth requests due to wireless transmission errors and find the loss probability of request packets due to finite retransmission attempts. Using the delay distribution and the loss probability, we optimize the number of polling slots within a frame and the maximum retransmission number while satisfying a QoS constraint on the total loss probability, which combines two losses: packet loss due to exceeding the maximum number of retransmissions and delay-outage loss due to exceeding the maximum tolerable delay bound. In addition, we obtain the utilization of polling slots, defined as the ratio of the number of polling slots used for the MS's successful transmissions to the total number of polling slots used by the MS over a long run time. Analytical results are shown to match well with simulation results. Numerical results give examples of the optimal number of polling slots within a frame and the optimal maximum retransmission number depending on delay bounds, the number of MSs, and the channel conditions.
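A Monte Carlo sketch of the loss mechanism analysed above is shown below; the channel parameters are arbitrary examples, and the simplified state update (evaluated only between failed attempts) is an assumption of the sketch rather than the paper's model.

    # Estimate the probability that a bandwidth request is lost because every
    # (re)transmission fails on a two-state Gilbert-Elliott channel.
    import random

    def loss_probability(p_gb, p_bg, e_good, e_bad, max_retx, trials=100000):
        lost = 0
        for _ in range(trials):
            state = "good"
            for _ in range(max_retx + 1):        # first attempt plus retransmissions
                err = e_good if state == "good" else e_bad
                if random.random() >= err:
                    break                        # request delivered successfully
                # Markov transition between good and bad channel states.
                if state == "good" and random.random() < p_gb:
                    state = "bad"
                elif state == "bad" and random.random() < p_bg:
                    state = "good"
            else:
                lost += 1                        # all attempts failed
        return lost / trials

    print(loss_probability(0.1, 0.3, 0.01, 0.4, max_retx=3))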
Internet Voice Distribution System (IVoDS) Utilization in Remote Payload Operations
NASA Technical Reports Server (NTRS)
Best, Susan; Bradford, Bob; Chamberlain, Jim; Nichols, Kelvin; Bailey, Darrell (Technical Monitor)
2002-01-01
Due to limited crew availability to support science and the large number of experiments to be operated simultaneously, telescience is key to a successful International Space Station (ISS) science program. Crew, operations personnel at NASA centers, and researchers at universities and companies around the world must work closely together to perform scientific experiments on-board ISS. NASA has initiated use of Voice over Internet Protocol (VoIP) to supplement the existing HVoDS mission voice communications system used by researchers. The Internet Voice Distribution System (IVoDS) connects researchers to mission support "loops" or conferences via Internet Protocol networks such as the high-speed Internet2. Researchers use IVoDS software on personal computers to talk with operations personnel at NASA centers. IVoDS also has the capability, if authorized, to allow researchers to communicate with the ISS crew during experiment operations. IVoDS was developed by Marshall Space Flight Center with contractors A2 Technology, Inc., FVC, Lockheed-Martin, and VoIP Group. IVoDS is currently undergoing field-testing, with full deployment for up to 50 simultaneous users expected in 2002. Research is currently being performed to take full advantage of the digital world - the personal computer and Internet Protocol networks - to qualitatively enhance communications among ISS operations personnel. In addition to the current voice capability, video and data-sharing capabilities are being investigated. Major obstacles being addressed include network bandwidth capacity and strict security requirements. Techniques being investigated to reduce and overcome these obstacles include emerging audio-video protocols and network technology, including multicast and quality-of-service.
Advanced Map For Real-Time Process Control
NASA Astrophysics Data System (ADS)
Shiobara, Yasuhisa; Matsudaira, Takayuki; Sashida, Yoshio; Chikuma, Makoto
1987-10-01
MAP, a communications protocol for factory automation proposed by General Motors [1], has been accepted by users throughout the world and is rapidly becoming a user standard. In fact, it is now a LAN standard for factory automation. MAP is intended to interconnect different devices, such as computers and programmable devices, made by different manufacturers, enabling them to exchange information. It is based on the OSI intercomputer communications protocol standard under development by the ISO. With progress and standardization, MAP is being investigated for application to process control fields other than factory automation [2]. The transmission response time of the network system and centralized management of data exchanged with various devices for distributed control are important in the case of real-time process control with programmable controllers, computers, and instruments connected to a LAN system. MAP/EPA and MINI MAP aim at reduced overhead in protocol processing and enhanced transmission response. If applied to real-time process control, a protocol based on point-to-point and request-response transactions limits throughput and transmission response. This paper describes an advanced MAP LAN system applied to real-time process control by adding a new data transmission control that performs multicasting communication voluntarily and periodically in the priority order of the data to be exchanged.
A generic flexible and robust approach for intelligent real-time video-surveillance systems
NASA Astrophysics Data System (ADS)
Desurmont, Xavier; Delaigle, Jean-Francois; Bastide, Arnaud; Macq, Benoit
2004-05-01
In this article we present a generic, flexible and robust approach for an intelligent real-time video-surveillance system. A previous version of the system was presented in [1]. The goal of these advanced tools is to help operators by detecting events of interest in visual scenes, highlighting alarms and computing statistics. The proposed system is a multi-camera platform able to handle different standards of video inputs (composite, IP, IEEE 1394) and which can compress (MPEG4), store and display them. This platform also integrates advanced video analysis tools, such as motion detection, segmentation, tracking and interpretation. The design of the architecture is optimised for playback, display, and processing of video flows in an efficient way for video-surveillance applications. The implementation is distributed on a scalable computer cluster based on Linux and an IP network. It relies on POSIX threads for multitasking scheduling. Data flows are transmitted between the different modules using multicast technology and under the control of a TCP-based command network (e.g. for bandwidth occupation control). We report some results and show the potential use of such a flexible system in third-generation video-surveillance systems. We illustrate the interest of the system in a real case study, namely indoor surveillance.
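A generic example of the kind of IP multicast data distribution mentioned above is sketched below using plain UDP sockets; the group address, port, and payload are arbitrary, and this is not the platform's actual transport code.

    # Minimal IP multicast sender/receiver pair (illustrative only).
    import socket
    import struct

    GROUP, PORT = "239.1.2.3", 5004              # example administratively scoped group

    def receiver():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)  # join group
        while True:
            data, addr = sock.recvfrom(65535)
            print(len(data), "bytes from", addr)

    def sender(payload=b"frame"):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)      # stay on the LAN
        sock.sendto(payload, (GROUP, PORT))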
Ultra-High Capacity Silicon Photonic Interconnects through Spatial Multiplexing
NASA Astrophysics Data System (ADS)
Chen, Christine P.
The market for higher data rate communication is driving the semiconductor industry to develop new techniques of writing at smaller scales, while continuing to scale bandwidth at low power consumption. Silicon photonic (SiPh) devices offer a potential solution to the electronic interconnect bandwidth bottleneck. SiPh leverages the technology commensurate of decades of fabrication development with the unique functionality of next-generation optical interconnects. Finer fabrication techniques have allowed for manufacturing physical characteristics of waveguide structures that can support multiple modes in a single waveguide. By refining modal characteristics in photonic waveguide structures, through mode multiplexing with the asymmetric y-junction and microring resonator, higher aggregate data bandwidth is demonstrated via various combinations of spatial multiplexing, broadening applications supported by the integrated platform. The main contributions of this dissertation are summarized as follows. Experimental demonstrations of new forms of spatial multiplexing combined together exhibit feasibility of data transmission through mode-division multiplexing (MDM), mode-division and wavelength-division multiplexing (MDM-WDM), and mode-division and polarization-division multiplexing (MDM-PDM) through a C-band, Si photonic platform. Error-free operation through mode multiplexers and demultiplexers show how data can be viably scaled on multiple modes and with existing spatial domains simultaneously. Furthermore, we explore expanding device channel support from two to three arms. Finding that a slight mismatch in the third arm can increase crosstalk contributions considerably, especially when increasing data rate, we explore a methodical way to design the asymmetric y-junction device by considering its angles and multiplexer/demultiplexer arm width. By taking into consideration device fabrication variations, we turn towards optimizing device performance post-fabrication. Through ModePROP simulations, optimizing device performance dynamically post-fabrication is analyzed, through either electro-optical or thermo-optical means. By biasing the arm introducing the slight spectral offset, we can quantifiably improve device performance. Scaling bandwidth is experimentally demonstrated through the device at 3 modes, 2 wavelengths, and 40 Gb/s data rate for 240 Gb/s aggregate bandwidth, with the potential to reduce power penalty per the device optimization process we described. A main motivation for this on-chip spatial multiplexing is the need to reduce costs. As the laser source serves as the greatest power consumer in an optical system, mode-division multiplexing and other forms of spatial multiplexing can be implemented to push its potentially prohibitive cost metrics down. In order to demonstrate an intelligent platform capable of dynamically multicasting data and reallocating power as needed by the system, we must first initialize the switch fabric to control with an electronic interface. A dithering mechanism, whereby exact cross, bar, and sub-percentage states are enforced through the device, is described here. Such a method could be employed for actuating the device table of bias values to states automatically. We then employ a dynamic power reallocation algorithm through a data acquisition unit, showing real-time channel recovery for channels experiencing power loss by diverting power from paths that could tolerate it. 
The data that is being multicast through the system is experimentally shown to be error-free at a 40 Gb/s data rate, when transmitting from one to three clients and going from automatic bar/cross states to equalized power distribution. For the last portion of this topic, the switch fabric was inserted into a high-performance computing system. In order to run benchmarks at 10 Gb/s on top of the switch fabric, a newer model of the control plane was implemented to toggle states according to the command issued by the server. Such a programmable mechanism will prove necessary in future implementations of optical subsystems embedded inside larger systems, like data centers. Beyond the specific control plane demonstrated, the idea of an intelligent photonic layer can be applied to alleviate many kinds of optical channel abnormalities or to accommodate switching based on different patterns in data transmission. Finally, the experimental demonstration of a coherent perfect absorption Si modulator is exhibited, showing a viable extinction ratio of 24.5 dB. Using this coherent perfect absorption mechanism to demodulate signals, there is the added benefit of differential reception. Currently, an automated process for data collection is employed at a faster time scale than the instabilities present in the fibers of the setup, with future implementations eliminating the off-chip phase modulator for greater signal stability. The field of SiPh has developed to a stage where specific application domains can take off and compete according to industrial-level standards. The work in this dissertation contributes to experimental demonstration of a newly developing area of mode-division multiplexing for substantially increasing bandwidth on-chip. While implementing the discussed photonic devices in dynamic systems, various attributes of integrated photonics are leveraged with existing electronic technologies. Future generations of computing systems should then be designed by implementing both system and device level considerations. (Abstract shortened by ProQuest.).
Lightweight and Statistical Techniques for Petascale Debugging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Barton
2014-06-30
This project investigated novel techniques for debugging scientific applications on petascale architectures. In particular, we developed lightweight tools that narrow the problem space when bugs are encountered. We also developed techniques that either limit the number of tasks and the code regions to which a developer must apply a traditional debugger or that apply statistical techniques to provide direct suggestions of the location and type of error. We extended previous work on the Stack Trace Analysis Tool (STAT), which has already demonstrated scalability to over one hundred thousand MPI tasks. We also extended statistical techniques developed to isolate programming errors in widely used sequential or threaded applications in the Cooperative Bug Isolation (CBI) project to large-scale parallel applications. Overall, our research substantially improved productivity on petascale platforms through a tool set for debugging that complements existing commercial tools. Previously, Office of Science application developers relied either on primitive manual debugging techniques based on printf or on tools, such as TotalView, that do not scale beyond a few thousand processors. However, bugs often arise at scale, and substantial effort and computation cycles are wasted in either reproducing the problem in a smaller run that can be analyzed with the traditional tools or in repeated runs at scale that use the primitive techniques. New techniques that work at scale and automate the process of identifying the root cause of errors were needed. These techniques significantly reduced the time spent debugging petascale applications, thus leading to a greater overall amount of time for application scientists to pursue the scientific objectives for which the systems are purchased. We developed a new paradigm for debugging at scale: techniques that reduce the debugging scenario to a scale suitable for traditional debuggers, e.g., by narrowing the search for the root-cause analysis to a small set of nodes or by identifying equivalence classes of nodes and sampling our debug targets from them. We implemented these techniques as lightweight tools that work efficiently at the full scale of the target machine. We explored four lightweight debugging refinements: generic classification parameters, such as stack traces; application-specific classification parameters, such as global variables; statistical data acquisition techniques; and machine learning based approaches to perform root cause analysis. Work done under this project can be divided into two categories: new algorithms and techniques for scalable debugging, and foundation infrastructure work on our MRNet multicast-reduction framework for scalability and the Dyninst binary analysis and instrumentation toolkits.
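The equivalence-class idea can be illustrated with a small sketch; the trace representation is an assumption, and this is not the STAT/MRNet implementation.

    # Group tasks by stack-trace signature and keep one debug target per class,
    # shrinking the debugging scenario to a handful of representative ranks.
    from collections import defaultdict

    def representatives(task_traces):
        """task_traces: {rank: ("main", "solver", "mpi_wait", ...)}."""
        classes = defaultdict(list)
        for rank, trace in task_traces.items():
            classes[tuple(trace)].append(rank)
        # One representative rank per distinct behaviour.
        return {sig: ranks[0] for sig, ranks in classes.items()}

    traces = {0: ("main", "compute"), 1: ("main", "compute"), 2: ("main", "mpi_wait")}
    print(representatives(traces))               # two classes -> two debug targets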
Addressing practical challenges in utility optimization of mobile wireless sensor networks
NASA Astrophysics Data System (ADS)
Eswaran, Sharanya; Misra, Archan; La Porta, Thomas; Leung, Kin
2008-04-01
This paper examines the practical challenges in the application of the distributed network utility maximization (NUM) framework to the problem of resource allocation and sensor device adaptation in a mission-centric wireless sensor network (WSN) environment. By providing rich (multi-modal), real-time information about a variety of (often inaccessible or hostile) operating environments, sensors such as video, acoustic and short-aperture radar enhance the situational awareness of many battlefield missions. Prior work on the applicability of the NUM framework to mission-centric WSNs has focused on tackling the challenges introduced by i) the definition of an individual mission's utility as a collective function of multiple sensor flows and ii) the dissemination of an individual sensor's data via a multicast tree to multiple consuming missions. However, the practical application and performance of this framework is influenced by several parameters internal to the framework and also by implementation-specific decisions. This is made further complex due to mobile nodes. In this paper, we use discrete-event simulations to study the effects of these parameters on the performance of the protocol in terms of speed of convergence, packet loss, and signaling overhead thereby addressing the challenges posed by wireless interference and node mobility in ad-hoc battlefield scenarios. This study provides better understanding of the issues involved in the practical adaptation of the NUM framework. It also helps identify potential avenues of improvement within the framework and protocol.
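A toy instance of the NUM framework is sketched below for orientation: log utilities and a single shared link solved by price-based dual gradient iteration; the numbers are arbitrary, and the paper's mission utilities, multicast trees, and wireless constraints are far richer.

    # Dual decomposition on one shared link: maximize sum(log(x_s)) s.t. sum(x_s) <= C.
    def num_single_link(num_sources=3, capacity=6.0, step=0.05, iters=2000):
        price = 1.0
        for _ in range(iters):
            # Each source's rate that maximizes log(x) - price*x is 1/price.
            rates = [1.0 / price for _ in range(num_sources)]
            # The link adjusts its price according to excess demand (dual ascent).
            price = max(1e-6, price + step * (sum(rates) - capacity))
        return rates, price

    rates, price = num_single_link()
    print(rates, price)                          # rates approach capacity / num_sources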
Multimedia And Internetworking Architecture Infrastructure On Interactive E-Learning System
NASA Astrophysics Data System (ADS)
Indah, K. A. T.; Sukarata, G.
2018-01-01
Interactive e-learning is a distance learning method that involves information technology, electronic systems or computers as a means of a learning system used for teaching and learning processes implemented without direct face-to-face contact between teacher and student. A strong dependence on emerging technologies greatly influences the way in which the architecture is designed to produce a powerful interactive e-learning network. In this paper, an architecture model is analyzed in which learning can be done interactively, involving many participants (N-way synchronized distance learning), using video conferencing technology. A broadband Internet connection and multicast techniques are also used so that bandwidth usage can be kept efficient.
Representation and Integration of Scientific Information
NASA Technical Reports Server (NTRS)
1998-01-01
The objective of this Joint Research Interchange with NASA-Ames was to investigate how the Tsimmis technology could be used to represent and integrate scientific information. The main goal of the Tsimmis project is to allow a decision maker to find information of interest from such sources, fuse it, and process it (e.g., summarize it, visualize it, discover trends). Another important goal is the easy incorporation of new sources, as well as the ability to deal with sources whose structure or services evolve. During the Interchange we had research meetings approximately every month or two. The funds provided by NASA supported work that led to the following two papers: Fusion Queries over Internet Databases; Efficient Query Subscription Processing in a Multicast Environment.
NASA Technical Reports Server (NTRS)
Oliger, Joseph
1997-01-01
Topics considered include: high-performance computing; cognitive and perceptual prostheses (computational aids designed to leverage human abilities); autonomous systems. Also included: development of a 3D unstructured grid code based on a finite volume formulation and applied to the Navier-Stokes equations; Cartesian grid methods for complex geometry; multigrid methods for solving elliptic problems on unstructured grids; algebraic non-overlapping domain decomposition methods for compressible fluid flow problems on unstructured meshes; numerical methods for the compressible Navier-Stokes equations with application to aerodynamic flows; research in aerodynamic shape optimization; S-HARP: a parallel dynamic spectral partitioner; numerical schemes for the Hamilton-Jacobi and level set equations on triangulated domains; application of high-order shock capturing schemes to direct simulation of turbulence; multicast technology; network testbeds; supercomputer consolidation project.
Security Enhancement Using Cache Based Reauthentication in WiMAX Based E-Learning System
Rajagopal, Chithra; Bhuvaneshwaran, Kalaavathi
2015-01-01
WiMAX networks are the most suitable for E-Learning in rural areas through their Broadcast and Multicast Services. Authentication of users is carried out by the AAA server in WiMAX. In E-Learning systems, users must be forced to perform reauthentication to overcome the session hijacking problem. Reauthentication of users introduces frequent delays in data access, which is critical for delay-sensitive applications such as E-Learning. In order to perform fast reauthentication, a caching mechanism known as the Key Caching Based Authentication scheme is introduced in this paper. Even though the cache mechanism requires extra storage to keep the user credentials, this type of mechanism reduces the delay occurring during reauthentication by about 50%. PMID:26351658
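The caching idea can be pictured with a short sketch; the class name, TTL, and AAA stub are assumptions for illustration, not the scheme's actual message exchange.

    # Cache-based reauthentication: try the local key cache before a full AAA exchange.
    import time

    class KeyCache:
        def __init__(self, ttl=300.0):
            self.ttl = ttl
            self.entries = {}                    # user -> (key, expiry time)

        def reauthenticate(self, user, key, aaa_lookup):
            cached = self.entries.get(user)
            if cached and cached[0] == key and cached[1] > time.time():
                return True                      # fast path: no AAA round trip
            if aaa_lookup(user, key):            # slow path: full AAA exchange
                self.entries[user] = (key, time.time() + self.ttl)
                return True
            return False

    cache = KeyCache()
    print(cache.reauthenticate("student42", "k3y", aaa_lookup=lambda u, k: True))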
Robertson, Brian; Zhang, Zichen; Yang, Haining; Redmond, Maura M; Collings, Neil; Liu, Jinsong; Lin, Ruisheng; Jeziorska-Chapman, Anna M; Moore, John R; Crossland, William A; Chu, D P
2012-04-20
It is shown that reflective liquid crystal on silicon (LCOS) spatial light modulator (SLM) based interconnects or fiber switches that use defocus to reduce crosstalk can be evaluated and optimized using a fractional Fourier transform if certain optical symmetry conditions are met. Theoretically, the maximum allowable linear hologram phase error compared with a Fourier switch is increased by a factor of six before the target crosstalk for telecom applications of -40 dB is exceeded. A Gerchberg-Saxton algorithm incorporating a fractional Fourier transform, modified for use with a reflective LCOS SLM, is used to optimize multi-casting holograms in a prototype telecom switch. Experiments are in close agreement with predicted performance.
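For orientation, a plain Fourier-domain Gerchberg-Saxton iteration is sketched below with NumPy; the paper's variant substitutes a fractional Fourier transform to model the defocused replay plane, which is not reproduced here, and the two-spot target is an arbitrary example.

    # Standard Gerchberg-Saxton phase retrieval for hologram design (ordinary FFT).
    import numpy as np

    def gerchberg_saxton(source_amp, target_amp, iters=50):
        """Return a hologram phase whose far field approximates target_amp."""
        field = source_amp * np.exp(1j * 2 * np.pi * np.random.rand(*source_amp.shape))
        for _ in range(iters):
            far = np.fft.fft2(field)
            far = target_amp * np.exp(1j * np.angle(far))      # impose target amplitude
            field = np.fft.ifft2(far)
            field = source_amp * np.exp(1j * np.angle(field))  # impose source amplitude
        return np.angle(field)

    N = 64
    src = np.ones((N, N))
    tgt = np.zeros((N, N))
    tgt[10, 10] = tgt[10, 50] = 1.0              # two-spot multicasting target
    hologram_phase = gerchberg_saxton(src, tgt)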
Modelling and temporal performances evaluation of networked control systems using (max, +) algebra
NASA Astrophysics Data System (ADS)
Ammour, R.; Amari, S.
2015-01-01
In this paper, we address the problem of temporal performance evaluation of producer/consumer networked control systems. The aim is to develop a formal method for evaluating the response time of this type of control system. Our approach consists of modelling, using Petri net classes, the behaviour of the whole architecture, including the switches that support the multicast communications used by this protocol. The (max, +) algebra formalism is then exploited to obtain analytical formulas for the response time and its maximal and minimal bounds. The main novelty is that our approach takes into account all delays experienced at the different stages of networked automation systems. Finally, we show how to apply the obtained results through an example of a networked control system.
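As a reminder of the formalism (standard definitions, not the paper's exact model), the (max, +) operations and the usual linear state form read:

    % (max,+) operations and the linear state equations of a timed event graph
    a \oplus b = \max(a, b), \qquad a \otimes b = a + b,
    x(k) = A \otimes x(k-1) \oplus B \otimes u(k), \qquad y(k) = C \otimes x(k).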
Alverson, Dale C; Saiki, Stanley M; Jacobs, Joshua; Saland, Linda; Keep, Marcus F; Norenberg, Jeffrey; Baker, Rex; Nakatsu, Curtis; Kalishman, Summers; Lindberg, Marlene; Wax, Diane; Mowafi, Moad; Summers, Kenneth L; Holten, James R; Greenfield, John A; Aalseth, Edward; Nickles, David; Sherstyuk, Andrei; Haines, Karen; Caudell, Thomas P
2004-01-01
Medical knowledge and skills essential for tomorrow's healthcare professionals continue to change faster than ever before creating new demands in medical education. Project TOUCH (Telehealth Outreach for Unified Community Health) has been developing methods to enhance learning by coupling innovations in medical education with advanced technology in high performance computing and next generation Internet2 embedded in virtual reality environments (VRE), artificial intelligence and experiential active learning. Simulations have been used in education and training to allow learners to make mistakes safely in lieu of real-life situations, learn from those mistakes and ultimately improve performance by subsequent avoidance of those mistakes. Distributed virtual interactive environments are used over distance to enable learning and participation in dynamic, problem-based, clinical, artificial intelligence rules-based, virtual simulations. The virtual reality patient is programmed to dynamically change over time and respond to the manipulations by the learner. Participants are fully immersed within the VRE platform using a head-mounted display and tracker system. Navigation, locomotion and handling of objects are accomplished using a joy-wand. Distribution is managed via the Internet2 Access Grid using point-to-point or multi-casting connectivity through which the participants can interact. Medical students in Hawaii and New Mexico (NM) participated collaboratively in problem solving and managing of a simulated patient with a closed head injury in VRE; dividing tasks, handing off objects, and functioning as a team. Students stated that opportunities to make mistakes and repeat actions in the VRE were extremely helpful in learning specific principles. VRE created higher performance expectations and some anxiety among VRE users. VRE orientation was adequate but students needed time to adapt and practice in order to improve efficiency. This was also demonstrated successfully between Western Australia and UNM. We successfully demonstrated the ability to fully immerse participants in a distributed virtual environment independent of distance for collaborative team interaction in medical simulation designed for education and training. The ability to make mistakes in a safe environment is well received by students and has a positive impact on their understanding, as well as memory of the principles involved in correcting those mistakes. Bringing people together as virtual teams for interactive experiential learning and collaborative training, independent of distance, provides a platform for distributed "just-in-time" training, performance assessment and credentialing. Further validation is necessary to determine the potential value of the distributed VRE in knowledge transfer, improved future performance and should entail training participants to competence in using these tools.
Ubiquity of Benford's law and emergence of the reciprocal distribution
Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.
2016-04-07
In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.
A Protocol for Scalable Loop-Free Multicast Routing
1997-01-01
NASA Astrophysics Data System (ADS)
Deng, Ning
In recent years, optical phase modulation has attracted much research attention in the field of fiber optic communications. Compared with the traditional optical intensity-modulated signal, one of the main merits of the optical phase-modulated signal is the better transmission performance. For optical phase modulation, in spite of the comprehensive study of its transmission performance, only a little research has been carried out in terms of its functions, applications and signal processing for future optical networks. These issues are systematically investigated in this thesis. The research findings suggest that optical phase modulation and its signal processing can greatly facilitate flexible network functions and high bandwidth which can be enjoyed by end users. In the thesis, the most important physical-layer technology, signal processing and multiplexing, are investigated with optical phase-modulated signals. Novel and advantageous signal processing and multiplexing approaches are proposed and studied. Experimental investigations are also reported and discussed in the thesis. Optical time-division multiplexing and demultiplexing. With the ever-increasing demand on communication bandwidth, optical time division multiplexing (OTDM) is an effective approach to upgrade the capacity of each wavelength channel in current optical systems. OTDM multiplexing can be simply realized, however, the demultiplexing requires relatively complicated signal processing and stringent timing control, and thus hinders its practicability. To tackle this problem, in this thesis a new OTDM scheme with hybrid DPSK and OOK signals is proposed. Experimental investigation shows this scheme can greatly enhance the demultiplexing timing misalignment and improve the demultiplexing performance, and thus make OTDM more practical and cost effective. All-optical signal processing. In current and future optical communication systems and networks, the data rate per wavelength has been approaching the speed limitation of electronics. Thus, all-optical signal processing techniques are highly desirable to support the necessary optical switching functionalities in future ultrahigh-speed optical packet-switching networks. To cope with the wide use of optical phase-modulated signals, in the thesis, an all-optical logic for DPSK or PSK input signals is developed, for the first time. Based on four-wave mixing in semiconductor optical amplifier, the structure of the logic gate is simple, compact, and capable of supporting ultrafast operation. In addition to the general logic processing, a simple label recognition scheme, as a specific signal processing function, is proposed for phase-modulated label signals. The proposed scheme can recognize any incoming label pattern according to the local pattern, and is potentially capable of handling variable-length label patterns. Optical access network with multicast overlay and centralized light sources. In the arena of optical access networks, wavelength division multiplexing passive optical network (WDM-PON) is a promising technology to deliver high-speed data traffic. However, most of proposed WDM-PONs only support conventional point-to-point service, and cannot meet the requirement of increasing demand on broadcast and multicast service. In this thesis, a simple network upgrade is proposed based on the traditional PON architecture to support both point-to-point and multicast service. In addition, the two service signals are modulated on the same lightwave carrier. 
The upstream signal is also remodulated on the same carrier at the optical network unit, which can significantly relax the requirement on wavelength management at the network unit.
NASA Astrophysics Data System (ADS)
Cannon, Brice M.
This thesis investigates the all-optical combination of amplitude and phase modulated signals into one unified multi-level phase modulated signal, utilizing the Kerr nonlinearity of cross-phase modulation (XPM). Predominantly, the first experimental demonstration of simultaneous polarization-insensitive phase-transmultiplexing and multicasting (PI-PTMM) will be discussed. The PI-PTMM operation combines the data of a single 10-Gbaud carrier-suppressed return-to-zero (CSRZ) on-off keyed (OOK) pump signal and 4x10-Gbaud return-to-zero (RZ) binary phase-shift keyed (BPSK) probe signals to generate 4x10-GBd RZ-quadrature phase-shift keyed (QPSK) signals utilizing a highly nonlinear, birefringent photonic crystal fiber (PCF). Since XPM is a highly polarization dependent nonlinearity, a polarization sensitivity reduction technique was used to alleviate the fluctuations due to the remotely generated signals' unpredictable states of polarization (SOP). The measured amplified spontaneous emission (ASE) limited receiver sensitivity optical signal-to-noise ratio (OSNR) penalty of the PI-PTMM signal relative to the field-programmable gate array (FPGA) pre-coded RZ-DQPSK baseline at a forward-error correction (FEC) limit of 10^-3 BER was ≈ 0.3 dB. In addition, the OSNR of the remotely generated CSRZ-OOK signal could be degraded to ≈ 29 dB/0.1nm, before the bit error rate (BER) performance of the PI-PTMM operation began to exponentially degrade. A 138-km dispersion-managed recirculating loop system with a 100-GHz, 13-channel mixed-format dense-wavelength-division multiplexed (DWDM) transmitter was constructed to investigate the effect of metro/long-haul transmission impairments. The PI-PTMM DQPSK and the FPGA pre-coded RZ-DQPSK baseline signals were transmitted 1,900 km and 2,400 km in the nonlinearity-limited transmission regime before reaching the 10^-3 BER FEC limit. The relative reduction in transmission distance for the PI-PTMM signal was due to the additional transmitter impairments in the PCF that interact negatively with the transmission fiber.
NASA Technical Reports Server (NTRS)
Stehle, Roy H.; Ogier, Richard G.
1993-01-01
Alternatives for realizing a packet-based network switch for use on a frequency division multiple access/time division multiplexed (FDMA/TDM) geostationary communication satellite were investigated. Each of the eight downlink beams supports eight directed dwells. The design needed to accommodate multicast packets with very low probability of loss due to contention. Three switch architectures were designed and analyzed. An output-queued, shared bus system yielded a functionally simple system, utilizing a first-in, first-out (FIFO) memory per downlink dwell, but at the expense of a large total memory requirement. A shared memory architecture offered the most efficiency in memory requirements, requiring about half the memory of the shared bus design. The processing requirement for the shared-memory system adds system complexity that may offset the benefits of the smaller memory. An alternative design using a shared memory buffer per downlink beam decreases circuit complexity through a distributed design, and requires at most 1000 packets of memory more than the completely shared memory design. Modifications to the basic packet switch designs were proposed to accommodate circuit-switched traffic, which must be served on a periodic basis with minimal delay. Methods for dynamically controlling the downlink dwell lengths were developed and analyzed. These methods adapt quickly to changing traffic demands, and do not add significant complexity or cost to the satellite and ground station designs. Methods for reducing the memory requirement by not requiring the satellite to store full packets were also proposed and analyzed. In addition, optimal packet and dwell lengths were computed as functions of memory size for the three switch architectures.
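As a rough illustration of the buffer-sizing trade-off described above, the following Python sketch simulates per-dwell output queues under a toy arrival model (the traffic model, load level and dwell count are assumptions for illustration, not values from the report) and compares the memory needed when each dwell has its own dedicated FIFO against a completely shared memory pool.

```python
import random
from collections import deque

random.seed(1)

N_DWELLS = 8          # dwells of one downlink beam (illustrative subset)
SLOTS = 10_000        # simulated time slots
LOAD = 0.9            # probability of an arrival burst per dwell per slot (assumed)

queues = [deque() for _ in range(N_DWELLS)]
peak_per_queue = [0] * N_DWELLS   # sizing if every dwell gets a dedicated FIFO
peak_total = 0                    # sizing if all dwells share one memory pool

for t in range(SLOTS):
    # crude bursty arrival model (an assumption of this sketch)
    for q in queues:
        if random.random() < LOAD:
            for _ in range(random.randint(0, 2)):
                q.append(t)
    # each dwell transmits at most one packet per slot
    for q in queues:
        if q:
            q.popleft()
    for i, q in enumerate(queues):
        peak_per_queue[i] = max(peak_per_queue[i], len(q))
    peak_total = max(peak_total, sum(len(q) for q in queues))

print("dedicated FIFOs need", sum(peak_per_queue), "packet buffers in total")
print("shared memory needs ", peak_total, "packet buffers")
```

Because the per-dwell peaks rarely coincide, the shared pool sized for the peak of the sum is noticeably smaller than the sum of the per-queue peaks, which is the qualitative effect the abstract reports.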
NASA Technical Reports Server (NTRS)
2007-01-01
Topics include: Advanced Systems for Monitoring Underwater Sounds; Wireless Data-Acquisition System for Testing Rocket Engines; Processing Raw HST Data With Up-to-Date Calibration Data; Mobile Collection and Automated Interpretation of EEG Data; System for Secure Integration of Aviation Data; Servomotor and Controller Having Large Dynamic Range; Digital Multicasting of Multiple Audio Streams; Translator for Optimizing Fluid-Handling Components; AIRSAR Web-Based Data Processing; Pattern Matcher for Trees Constructed From Lists; Reducing a Knowledge-Base Search Space When Data Are Missing; Ground-Based Correction of Remote-Sensing Spectral Imagery; State-Chart Autocoder; Pointing History Engine for the Spitzer Space Telescope; Low-Friction, High-Stiffness Joint for Uniaxial Load Cell; Magnet-Based System for Docking of Miniature Spacecraft; Electromechanically Actuated Valve for Controlling Flow Rate; Plumbing Fixture for a Microfluidic Cartridge; Camera Mount for a Head-Up Display; Core-Cutoff Tool; Recirculation of Laser Power in an Atomic Fountain; Simplified Generation of High-Angular-Momentum Light Beams; Imaging Spectrometer on a Chip; Interferometric Quantum-Nondemolition Single-Photon Detectors; Ring-Down Spectroscopy for Characterizing a CW Raman Laser; Complex Type-II Interband Cascade MQW Photodetectors; Single-Point Access to Data Distributed on Many Processors; Estimating Dust and Water Ice Content of the Martian Atmosphere From THEMIS Data; Computing a Stability Spectrum by Use of the HHT; Theoretical Studies of Routes to Synthesis of Tetrahedral N4; Estimation Filter for Alignment of the Spitzer Space Telescope; Antenna for Measuring Electric Fields Within the Inner Heliosphere; Improved High-Voltage Gas Isolator for Ion Thruster; and Hybrid Mobile Communication Networks for Planetary Exploration.
The distribution of free electrons in the inner galaxy from pulsar dispersion measures
NASA Technical Reports Server (NTRS)
Harding, D. S.; Harding, A. K.
1981-01-01
The dispersion measures of a sample of 149 pulsars in the inner Galaxy (absolute value of l < 50 deg) were statistically analyzed to deduce the large-scale distribution of free thermal electrons in this region. The dispersion measure distribution of these pulsars shows significant evidence for a decrease in the electron scale height from a local value greater than the pulsar scale height to a value less than the pulsar scale height at galactocentric radii inside of approximately 7 kpc. An increase in the electron density (to a value around 0.15/cu cm at 4 to 5 kpc) must accompany such a decrease in scale height. There is also evidence for a large-scale warp in the electron distribution below the b = 0 deg plane inside the Solar circle. A model is proposed for the electron distribution which incorporates these features, and Monte Carlo generated dispersion measure distributions are presented for parameters which best reproduce the observed pulsar distributions.
Load balancing for massively-parallel soft-real-time systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hailperin, M.
1988-09-01
Global load balancing, if practical, would allow the effective use of massively-parallel ensemble architectures for large soft-real-time problems. The challenge is to replace quick global communication, which is impractical in a massively-parallel system, with statistical techniques. In this vein, the author proposes a novel approach to decentralized load balancing based on statistical time-series analysis. Each site estimates the system-wide average load using information about past loads of individual sites and attempts to equal that average. This estimation process is practical because the soft-real-time systems of interest naturally exhibit loads that are periodic, in a statistical sense akin to seasonality in econometrics. It is shown how this load-characterization technique can be the foundation for a load-balancing system in an architecture employing cut-through routing and an efficient multicast protocol.
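As a rough illustration of the estimation idea described above, the following Python sketch has each site predict the current system-wide average load from stale, one-period-old samples of the other sites' loads, exploiting the assumed periodicity. The synthetic workload, period length and site count are assumptions of this sketch, not details from the report.

```python
import math
import random

random.seed(0)
PERIOD = 20            # assumed statistical period of the workload
N_SITES = 16

def true_load(site, t):
    """Synthetic periodic workload with noise (purely illustrative)."""
    return 10 + 5 * math.sin(2 * math.pi * (t + site) / PERIOD) + random.gauss(0, 1)

def estimate_system_average(me, t, history):
    """Estimate the system-wide average using only local knowledge plus stale samples."""
    est = []
    for s in range(N_SITES):
        if s == me:
            est.append(history[s][t])          # own load is known exactly
        else:
            est.append(history[s][t - PERIOD]) # one-period-old sample, valid if loads are periodic
    return sum(est) / N_SITES

history = {s: [true_load(s, t) for t in range(3 * PERIOD)] for s in range(N_SITES)}
t = 2 * PERIOD + 5
for me in range(3):
    est = estimate_system_average(me, t, history)
    actual = sum(history[s][t] for s in range(N_SITES)) / N_SITES
    # a site whose own load exceeds its estimate would shed work toward under-loaded sites
    print(f"site {me}: estimated avg {est:.2f}, actual avg {actual:.2f}")
```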
Remote Observing and Automatic FTP on Kitt Peak
NASA Astrophysics Data System (ADS)
Seaman, Rob; Bohannan, Bruce
As part of KPNO's Internet-based observing services we experimented with the publicly available audio, video and whiteboard MBONE clients (vat, nv, wb and others) in both point-to-point and multicast modes. While bandwidth is always a constraint on the Internet, it is less of a constraint to operations than many might think. These experiments were part of two new Internet-based observing services offered to KPNO observers beginning with the Fall 1995 semester: a remote observing station and an automatic FTP data queue. The remote observing station seeks to duplicate the KPNO IRAF/ICE observing environment on a workstation at the observer's home institution. The automatic FTP queue is intended to support those observing programs that require quick transport of data back to the home institution, for instance, for near real time reductions to aid in observing tactics. We also discuss the early operational results of these services.
Efficient Assignment of Multiple E-MBMS Sessions towards LTE
NASA Astrophysics Data System (ADS)
Alexiou, Antonios; Bouras, Christos; Kokkinos, Vasileios
One of the major prerequisites for Long Term Evolution (LTE) networks is the mass provision of multimedia services to mobile users. To this end, Evolved - Multimedia Broadcast/Multicast Service (E-MBMS) is envisaged to play an instrumental role during the LTE standardization process and ensure LTE's proliferation in the mobile market. E-MBMS targets the economical delivery, in terms of power and spectral efficiency, of multimedia data from a single source entity to multiple destinations. This paper proposes a novel mechanism for efficient radio bearer selection during E-MBMS transmissions in LTE networks. The proposed mechanism is based on the concept of transport channel combination in any cell of the network. Most significantly, the mechanism manages to efficiently deliver multiple E-MBMS sessions. The performance of the proposed mechanism is evaluated and compared with several radio bearer selection mechanisms in order to highlight the enhancements that it provides.
Enabling Optical Network Test Bed for 5G Tests
NASA Astrophysics Data System (ADS)
Giuntini, Marco; Grazioso, Paolo; Matera, Francesco; Valenti, Alessandro; Attanasio, Vincenzo; Di Bartolo, Silvia; Nastri, Emanuele
2017-03-01
In this work, we show some experimental approaches concerning optical network design dedicated to 5G infrastructures. In particular, we show some implementations of network slicing based on Carrier Ethernet forwarding, which will be very suitable in the context of 5G heterogeneous networks, especially looking at services for vertical enterprises. We also show how to adopt a central unit (orchestrator) to automatically manage such logical paths according to quality-of-service requirements, which can be monitored at the user location. We also illustrate how novel all-optical processes, such as the ones based on all-optical wavelength conversion, can be used for multicasting, enabling development of TV broadcasting based on 4G-5G terminals. These managing and forwarding techniques, operating on optical links, are tested in a wireless environment on Wi-Fi cells and emulating LTE and WiMAX systems by means of the NS-3 code.
Trading leads to scale-free self-organization
NASA Astrophysics Data System (ADS)
Ebert, M.; Paul, W.
2012-12-01
Financial markets display scale-free behavior in many different aspects. The power-law behavior of part of the distribution of individual wealth has been recognized by Pareto as early as the nineteenth century. Heavy-tailed and scale-free behavior of the distribution of returns of different financial assets has been confirmed in a series of works. The existence of a Pareto-like distribution of the wealth of market participants has been connected with the scale-free distribution of trading volumes and price-returns. The origin of the Pareto-like wealth distribution, however, remained obscure. Here we show that in a market where the imbalance of supply and demand determines the direction of price changes, it is the process of trading itself that spontaneously leads to a self-organization of the market with a Pareto-like wealth distribution for the market participants and at the same time to a scale-free behavior of return fluctuations and trading volume distributions.
Nolen, Matthew S.; Magoulick, Daniel D.; DiStefano, Robert J.; Imhoff, Emily M.; Wagner, Brian K.
2014-01-01
We found that a range of environmental variables were important in predicting crayfish distribution and abundance at multiple spatial scales, and their importance was species-, response variable- and scale-dependent. We would encourage others to examine the influence of spatial scale on species distribution and abundance patterns.
A CBLT and MCST capable VME slave interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wuerthwein, F.; Strohman, C.; Honscheid, K.
1996-12-31
We report on the development of a VME slave interface for the CLEO III detector implemented in an ALTERA EPM7256 CPLD. This includes the first implementation of the chained block transfer protocol (CBLT) and multi-cast cycles (MCST) as defined by the VME-P task group of VIPA. Within VME64 there is no operation that guarantees efficient readout of large blocks of data that are sparsely distributed among a series of slave modules in a VME crate. This has led the VME-P task group of VIPA to specify protocols that enable a master to address many slaves at a single address. Which slave is to drive the data bus is determined by a token passing mechanism that uses the *IACKOUT, *IACKIN daisy chain. This protocol requires no special features from the master besides conformance to VME64. Non-standard features are restricted to the VME slave interface. The CLEO III detector comprises approximately 400,000 electronic channels that have to be digitized, sparsified, and stored within 20 μs in order to incur less than 2% dead time at an anticipated trigger rate of 1000 Hz. 95% of these channels are accounted for by only two detector subsystems, the silicon microstrip detector (125,000 channels) and the ring imaging Cerenkov detector (RICH) (230,400 channels). After sparsification, either of these two detector subsystems is expected to provide event fragments on the order of 10 KBytes, spread over 4 and 8 VME crates, respectively. We developed a chip set that sparsifies, tags, and stores the incoming digital data on the data boards, and includes a VME slave interface that implements the MCST and CBLT protocols. In this poster, we briefly describe this chip set and then discuss the VME slave interface in detail.
NASA Astrophysics Data System (ADS)
Analoui, Morteza; Rezvani, Mohammad Hossein
2011-01-01
Behavior modeling has recently been investigated for designing self-organizing mechanisms in the context of communication networks in order to exploit the natural selfishness of the users with the goal of maximizing the overall utility. In strategic behavior modeling, the users of the network are assumed to be game players who seek to maximize their utility while taking into account the decisions that the other players might make. The essential difference between the aforementioned research and this work is that the latter incorporates non-strategic decisions in order to design the mechanism for the overlay network. In this solution concept, the decisions that a peer might make do not affect the actions of the other peers at all. The theory of consumer-firm developed in microeconomics is the model of non-strategic behavior that we have adopted in our research. Based on it, we have presented distributed algorithms for peers' "joining" and "leaving" operations. We have modeled the overlay network as a competitive economy in which the content provided by an origin server can be viewed as a commodity, and the origin server and the peers who multicast the content to their downstream peers are considered as the firms. On the other hand, due to the dual role of the peers in the overlay network, they can be considered as consumers as well. On joining the overlay economy, each peer is provided with an income and tries to get hold of the service regardless of the behavior of the other peers. We have designed the scalable algorithms in such a way that the existence of an equilibrium price (known as the Walrasian equilibrium price) is guaranteed.
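A minimal sketch of the competitive-economy view described above, assuming a single divisible "stream" commodity, Cobb-Douglas demands and illustrative incomes and supply (none of which are specified in the abstract); the price is adjusted by excess demand until it settles at the Walrasian equilibrium.

```python
# incomes, preference shares and supply are illustrative assumptions, not from the paper
incomes = [10.0, 6.0, 8.0, 4.0]
shares  = [0.5, 0.7, 0.4, 0.6]   # fraction of income each peer spends on the stream
SUPPLY  = 12.0                   # capacity offered by the origin server and relaying peers

def excess_demand(p):
    demand = sum(w * a / p for w, a in zip(incomes, shares))  # Cobb-Douglas demands
    return demand - SUPPLY

p, step = 1.0, 0.05
for _ in range(500):
    z = excess_demand(p)
    p = max(1e-6, p + step * z)      # raise the price when demand exceeds supply
    if abs(z) < 1e-9:
        break

analytic = sum(w * a for w, a in zip(incomes, shares)) / SUPPLY
print(f"tatonnement price {p:.4f}, analytic Walrasian price {analytic:.4f}")
```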
Next Generation Integrated Environment for Collaborative Work Across Internets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harvey B. Newman
2009-02-24
We are now well-advanced in our development, prototyping and deployment of a high performance next generation Integrated Environment for Collaborative Work. The system, aimed at using the capability of ESnet and Internet2 for rapid data exchange, is based on the Virtual Room Videoconferencing System (VRVS) developed by Caltech. The VRVS system has been chosen by the Internet2 Digital Video (I2-DV) Initiative as a preferred foundation for the development of advanced video, audio and multimedia collaborative applications by the Internet2 community. Today, the system supports high-end, broadcast-quality interactivity, while enabling a wide variety of clients (Mbone, H.323) to participate in the same conference by running different standard protocols in different contexts with different bandwidth connection limitations, has a fully Web-integrated user interface, developers and administrative APIs, a widely scalable video network topology based on both multicast domains and unicast tunnels, and demonstrated multiplatform support. This has led to its rapidly expanding production use for national and international scientific collaborations in more than 60 countries. We are also in the process of creating a 'testbed video network' and developing the necessary middleware to support a set of new and essential requirements for rapid data exchange, and a high level of interactivity in large-scale scientific collaborations. These include a set of tunable, scalable differentiated network services adapted to each of the data streams associated with a large number of collaborative sessions, policy-based and network state-based resource scheduling, authentication, and optional encryption to maintain confidentiality of inter-personal communications. High performance testbed video networks will be established in ESnet and Internet2 to test and tune the implementation, using a few target application-sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drotar, Alexander P.; Quinn, Erin E.; Sutherland, Landon D.
2012-07-30
Project description is: (1) Build a high performance computer; and (2) Create a tool to monitor node applications in the Component Based Tool Framework (CBTF) using code from the Lightweight Data Metric Service (LDMS). The importance of this project is that: (1) there is a need for a scalable, parallel tool to monitor nodes on clusters; and (2) new LDMS plugins need to be able to be easily added to the tool. CBTF stands for Component Based Tool Framework. It's scalable and adjusts to different topologies automatically. It uses the MRNet (Multicast/Reduction Network) mechanism for information transport. CBTF is flexible and general enough to be used for any tool that needs to do a task on many nodes. Its components are reusable and 'EASILY' added to a new tool. There are three levels of CBTF: (1) frontend node - interacts with users; (2) filter nodes - filter or concatenate information from backend nodes; and (3) backend nodes - where the actual work of the tool is done. LDMS stands for Lightweight Data Metric Services. It's a tool used for monitoring nodes. Ltool is the name of the tool we derived from LDMS. It's dynamically linked and includes the following components: Vmstat, Meminfo, Procinterrupts and more. It works as follows: the Ltool command is run on the frontend node; Ltool collects information from the backend nodes; the backend nodes send information to the filter nodes; and the filter nodes concatenate the information and send it to a database on the frontend node. Ltool is a useful tool when it comes to monitoring nodes on a cluster because the overhead involved with running the tool is not particularly high and it will automatically scale to any size cluster.
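A toy Python sketch of the frontend/filter/backend data flow described above; the node names, metric fields and fan-out grouping are illustrative assumptions, not CBTF or LDMS APIs.

```python
import random

def backend_collect(node):
    """Each backend node samples local metrics (stand-ins for vmstat/meminfo plugins)."""
    return {"node": node, "load": round(random.random(), 2),
            "mem_free_mb": random.randint(500, 8000)}

def filter_concatenate(reports):
    """A filter node simply concatenates what its backend nodes send up."""
    return list(reports)

def frontend(all_backends, fanout=4):
    """The frontend fans the request out, gathers results via filter nodes,
    and appends them to an in-memory stand-in for the database."""
    database = []
    for i in range(0, len(all_backends), fanout):
        group = all_backends[i:i + fanout]              # backends under one filter node
        database.extend(filter_concatenate(backend_collect(n) for n in group))
    return database

print(frontend([f"node{i:03d}" for i in range(10)]))
```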
Architectural Optimization of Digital Libraries
NASA Technical Reports Server (NTRS)
Biser, Aileen O.
1998-01-01
This work investigates performance and scaling issues relevant to large scale distributed digital libraries. Presently, performance and scaling studies focus on specific implementations of production or prototype digital libraries. Although useful information is gained to aid these designers and other researchers with insights to performance and scaling issues, the broader issues relevant to very large scale distributed libraries are not addressed. Specifically, no current studies look at the extreme or worst case possibilities in digital library implementations. A survey of digital library research issues is presented. Scaling and performance issues are mentioned frequently in the digital library literature but are generally not the focus of much of the current research. In this thesis a model for a Generic Distributed Digital Library (GDDL) and nine cases of typical user activities are defined. This model is used to facilitate some basic analysis of scaling issues. Specifically, the calculation of Internet traffic generated for different configurations of the study parameters and an estimate of the future bandwidth needed for a large scale distributed digital library implementation. This analysis demonstrates the potential impact a future distributed digital library implementation would have on the Internet traffic load and raises questions concerning the architecture decisions being made for future distributed digital library designs.
Polling-Based High-Bit-Rate Packet Transfer in a Microcellular Network to Allow Fast Terminals
NASA Astrophysics Data System (ADS)
Hoa, Phan Thanh; Lambertsen, Gaute; Yamada, Takahiko
A microcellular network will be a good candidate for the future broadband mobile network. It is expected to support high-bit-rate connections for many fast mobile users if the handover is processed fast enough to lessen its impact on QoS requirements. One of the techniques believed promising for the wireless interface in such a microcellular network is the WLAN (Wireless LAN) technique, due to its very high wireless channel rate. However, its limited mobility support must be improved before its use can be expanded to the microcellular environment. The cause of this limited mobility support is the large handover latency, arising from contention-based handover to the new BS (base station) and from the delay of re-forwarding data from the old BS to the new one. This paper presents a proposal of multi-polling and dynamic LMC (Logical Macro Cell) to reduce the delays mentioned above. A polling frame for an MT (Mobile Terminal) is sent from every BS belonging to the same LMC — a virtual single macro cell that is a multicast group of several adjacent micro-cells in which an MT is communicating. Instead of contending for the medium of a new BS during handover, the MT responds to the polling sent from that new BS to enable the transition. Because only one BS of the LMC receives the polling ACK (acknowledgement) directly from the MT, this ACK frame has to be multicast to all BSs of the same LMC through the terrestrial network so that each BS can continue the next polling cycle. Moreover, when an MT hands over to a new cell, its current LMC is switched over to a new corresponding LMC to prevent future contention for a new LMC. In this way, an MT can hand over between micro-cells of an LMC smoothly because redundant resources are reserved for it at neighboring cells, with no need to contend with others. Our simulation results using the OMNeT++ simulator illustrate the performance achievements of the multi-polling and dynamic LMC scheme in eliminating handover latency and packet loss and in keeping mobile users' throughput stable under high traffic load, though it imposes some overhead on the neighboring cells.
Operational surface UV radiation product from GOME-2 and AVHRR/3 data
NASA Astrophysics Data System (ADS)
Kujanpää, J.; Kalakoski, N.
2015-05-01
The surface ultraviolet (UV) radiation product, version 1.20, generated operationally in the framework of the Satellite Application Facility on Ozone and Atmospheric Chemistry Monitoring (O3M SAF) of the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) is described. The product is based on the total ozone column derived from the measurements of the second Global Ozone Monitoring Experiment (GOME-2) instrument aboard EUMETSAT's polar orbiting meteorological operational (Metop) satellites. The input total ozone product is generated by the German Aerospace Center (DLR) also within the O3M SAF framework. Polar orbiting satellites provide global coverage but infrequent sampling of the diurnal cloud cover. The diurnal variation of the surface UV radiation is extremely strong due to modulation by solar elevation and rapidly changing cloud cover. At the minimum, one sample of the cloud cover in the morning and another in the afternoon are needed to derive daily maximum and daily integrated surface UV radiation quantities. This is achieved by retrieving cloud optical depth from the channel 1 reflectance of the third Advanced Very High Resolution Radiometer (AVHRR/3) instrument aboard both Metop in the morning orbit (daytime descending node around 09:30 LT) and Polar Orbiting Environmental Satellites (POES) of the National Oceanic and Atmospheric Administration (NOAA) in the afternoon orbit (daytime ascending node around 14:30 LT). In addition, more overpasses are used at high latitudes where the swaths of consecutive orbits overlap. The input satellite data are received from EUMETSAT's Multicast Distribution System (EUMETCast) using commercial telecommunication satellites for broadcasting the data to the user community. The surface UV product includes daily maximum dose rates and integrated daily doses with different biological weighting functions, integrated UVB and UVA radiation, solar noon UV Index and daily maximum photolysis frequencies of ozone and nitrogen dioxide at the surface level. The quantities are computed in a 0.5° × 0.5° regular latitude-longitude grid and stored as daily files in the hierarchical data format (HDF5) within two weeks from sensing. The product files are archived in the O3M SAF distributed archive and can be ordered via the EUMETSAT Data Centre.
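For orientation on how the weighted UV quantities in such a product are formed, here is a small Python sketch that applies the CIE erythemal action spectrum to a spectral irradiance and converts the result to a UV Index (the UV Index is 40 m²/W times the erythemally weighted irradiance). The input spectrum below is purely illustrative, not a GOME-2/AVHRR retrieval, and the sketch is not the operational O3M SAF code.

```python
import numpy as np

def erythemal_weight(lam_nm):
    """CIE erythemal action spectrum (McKinlay & Diffey), zero outside 250-400 nm."""
    lam = np.asarray(lam_nm, dtype=float)
    w = np.where(lam <= 298.0, 1.0,
        np.where(lam <= 328.0, 10.0 ** (0.094 * (298.0 - lam)),
                               10.0 ** (0.015 * (140.0 - lam))))
    return np.where((lam < 250.0) | (lam > 400.0), 0.0, w)

def uv_index(wavelength_nm, irradiance_w_m2_nm):
    """UV Index = 40 m^2/W times the erythemally weighted integrated irradiance."""
    weighted = irradiance_w_m2_nm * erythemal_weight(wavelength_nm)
    return 40.0 * np.trapz(weighted, wavelength_nm)

# toy spectrum (a smooth bump in the UVA/UVB range), not a real solar spectrum
lam = np.arange(290.0, 400.5, 0.5)
toy_spectrum = 0.05 * np.exp(-((lam - 330.0) / 30.0) ** 2)   # W m^-2 nm^-1
print(f"UV index of the toy spectrum: {uv_index(lam, toy_spectrum):.1f}")
```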
Scaling and universality in heart rate variability distributions
NASA Technical Reports Server (NTRS)
Rosenblum, M. G.; Peng, C. K.; Mietus, J. E.; Havlin, S.; Stanley, H. E.; Goldberger, A. L.
1998-01-01
We find that a universal homogeneous scaling form describes the distribution of cardiac variations for a group of healthy subjects, which is stable over a wide range of time scales. However, a similar scaling function does not exist for a group with a common cardiopulmonary instability associated with sleep apnea. Subtle differences in the distributions for the day- and night-phase dynamics for healthy subjects are detected.
Scaling and universality in heart rate variability distributions
NASA Astrophysics Data System (ADS)
Ivanov, P. Ch; Rosenblum, M. G.; Peng, C.-K.; Mietus, J. E.; Havlin, S.; Stanley, H. E.; Goldberger, A. L.
We find that a universal homogeneous scaling form describes the distributions of cardiac variations for a group of healthy subjects, which is stable over a wide range of time scales. However, a similar scaling function does not exist for a group with a common cardiopulmonary instability associated with sleep apnea. Subtle differences in the distributions for the day- and night-phase dynamics for healthy subjects are detected.
Bean, William T.; Stafford, Robert; Butterfield, H. Scott; Brashares, Justin S.
2014-01-01
Species distributions are known to be limited by biotic and abiotic factors at multiple temporal and spatial scales. Species distribution models, however, frequently assume a population at equilibrium in both time and space. Studies of habitat selection have repeatedly shown the difficulty of estimating resource selection if the scale or extent of analysis is incorrect. Here, we present a multi-step approach to estimate the realized and potential distribution of the endangered giant kangaroo rat. First, we estimate the potential distribution by modeling suitability at a range-wide scale using static bioclimatic variables. We then examine annual changes in extent at a population level. We define "available" habitat based on the total suitable potential distribution at the range-wide scale. Then, within the available habitat, we model changes in population extent driven by multiple measures of resource availability. By modeling distributions for a population with robust estimates of population extent through time, and ecologically relevant predictor variables, we improved the predictive ability of SDMs, as well as revealed an unanticipated relationship between population extent and precipitation at multiple scales. At a range-wide scale, the best model indicated the giant kangaroo rat was limited to areas that received little to no precipitation in the summer months. In contrast, the best model for shorter time scales showed a positive relation with resource abundance, driven by precipitation, in the current and previous year. These results suggest that the distribution of the giant kangaroo rat was limited to the wettest parts of the drier areas within the study region. This multi-step approach reinforces the differing relationship species may have with environmental variables at different scales, provides a novel method for defining "available" habitat in habitat selection studies, and suggests a way to create distribution models at spatial and temporal scales relevant to theoretical and applied ecologists. PMID:25237807
Eiserhardt, Wolf L.; Svenning, Jens-Christian; Kissling, W. Daniel; Balslev, Henrik
2011-01-01
Background The palm family occurs in all tropical and sub-tropical regions of the world. Palms are of high ecological and economical importance, and display complex spatial patterns of species distributions and diversity. Scope This review summarizes empirical evidence for factors that determine palm species distributions, community composition and species richness such as the abiotic environment (climate, soil chemistry, hydrology and topography), the biotic environment (vegetation structure and species interactions) and dispersal. The importance of contemporary vs. historical impacts of these factors and the scale at which they function is discussed. Finally a hierarchical scale framework is developed to guide predictor selection for future studies. Conclusions Determinants of palm distributions, composition and richness vary with spatial scale. For species distributions, climate appears to be important at landscape and broader scales, soil, topography and vegetation at landscape and local scales, hydrology at local scales, and dispersal at all scales. For community composition, soil appears important at regional and finer scales, hydrology, topography and vegetation at landscape and local scales, and dispersal again at all scales. For species richness, climate and dispersal appear to be important at continental to global scales, soil at landscape and broader scales, and topography at landscape and finer scales. Some scale–predictor combinations have not been studied or deserve further attention, e.g. climate on regional to finer scales, and hydrology and topography on landscape and broader scales. The importance of biotic interactions – apart from general vegetation structure effects – for the geographic ecology of palms is generally underexplored. Future studies should target scale–predictor combinations and geographic domains not studied yet. To avoid biased inference, one should ideally include at least all predictors previously found important at the spatial scale of investigation. PMID:21712297
Growth models and the expected distribution of fluctuating asymmetry
Graham, John H.; Shimizu, Kunio; Emlen, John M.; Freeman, D. Carl; Merkel, John
2003-01-01
Multiplicative error accounts for much of the size-scaling and leptokurtosis in fluctuating asymmetry. It arises when growth involves the addition of tissue to that which is already present. Such errors are lognormally distributed. The distribution of the difference between two lognormal variates is leptokurtic. If those two variates are correlated, then the asymmetry variance will scale with size. Inert tissues typically exhibit additive error and have a gamma distribution. Although their asymmetry variance does not exhibit size-scaling, the distribution of the difference between two gamma variates is nevertheless leptokurtic. Measurement error is also additive, but has a normal distribution. Thus, the measurement of fluctuating asymmetry may involve the mixing of additive and multiplicative error. When errors are multiplicative, we recommend computing log E(l) − log E(r), the difference between the logarithms of the expected values of left and right sides, even when size-scaling is not obvious. If l and r are lognormally distributed, and measurement error is nil, the resulting distribution will be normal, and multiplicative error will not confound size-related changes in asymmetry. When errors are additive, such a transformation to remove size-scaling is unnecessary. Nevertheless, the distribution of l − r may still be leptokurtic.
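A small Monte Carlo sketch of the multiplicative-error case discussed above: left and right sides are drawn as correlated lognormals, so var(l − r) grows with trait size and l − r is leptokurtic, while log l − log r is normal. The means, error variance and correlation below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200_000

# correlated multiplicative (lognormal) growth errors for the left and right sides
cov = 0.2 * np.array([[1.0, 0.6], [0.6, 1.0]])       # log-scale variance and correlation (assumed)
for mu in (2.0, 3.0, 4.0):                           # increasing trait size (log scale)
    log_lr = rng.multivariate_normal([mu, mu], cov, size=n)
    l, r = np.exp(log_lr[:, 0]), np.exp(log_lr[:, 1])
    d = l - r
    print(f"size ~{np.exp(mu):7.1f}: var(l-r) = {d.var():9.2f}, "
          f"kurtosis(l-r) = {stats.kurtosis(d, fisher=False):.2f}, "
          f"kurtosis(log l - log r) = {stats.kurtosis(np.log(l) - np.log(r), fisher=False):.2f}")
```

The asymmetry variance scales with size and the raw difference shows kurtosis well above 3, while the log-transformed difference is essentially normal, matching the recommendation in the abstract.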
NASA Technical Reports Server (NTRS)
Elizalde, E.; Gaztanaga, E.
1992-01-01
The dependence of counts in cells on the shape of the cell for the large scale galaxy distribution is studied. A very concrete prediction can be made concerning the void distribution for scale invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell to be occupied is bigger for some elongated cells. A phenomenological scale invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
Scale Mixture Models with Applications to Bayesian Inference
NASA Astrophysics Data System (ADS)
Qin, Zhaohui S.; Damien, Paul; Walker, Stephen
2003-11-01
Scale mixtures of uniform distributions are used to model non-normal data in time series and econometrics in a Bayesian framework. Heteroscedastic and skewed data models are also tackled using scale mixtures of uniform distributions.
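As a concrete illustration of the construction (not code from the paper), the following Python sketch uses the well-known uniform scale-mixture representation of the normal distribution: if V ~ Gamma(shape 3/2, scale 2) and X | V ~ Uniform(μ − σ√V, μ + σ√V), then X ~ N(μ, σ²). The values of μ and σ are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
mu, sigma, n = 1.5, 2.0, 500_000

# scale-mixture-of-uniforms representation of the normal
v = rng.gamma(shape=1.5, scale=2.0, size=n)
half_width = sigma * np.sqrt(v)
x = rng.uniform(mu - half_width, mu + half_width)

print("sample mean / sd:", x.mean().round(3), x.std().round(3))
ks = stats.kstest((x - mu) / sigma, "norm")
print("KS test against N(0,1): statistic", round(ks.statistic, 4), "p-value", round(ks.pvalue, 3))
```

The same device, with other mixing densities, yields the heavier-tailed and skewed models the abstract refers to.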
Ho, Andrew D; Yu, Carol C
2015-06-01
Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micerri similarly showed that the normality assumption is met rarely in educational and psychological practice. In this article, the authors extend these previous analyses to state-level educational test score distributions that are an increasingly common target of high-stakes analysis and interpretation. Among 504 scale-score and raw-score distributions from state testing programs from recent years, nonnormal distributions are common and are often associated with particular state programs. The authors explain how scaling procedures from item response theory lead to nonnormal distributions as well as unusual patterns of discreteness. The authors recommend that distributional descriptive statistics be calculated routinely to inform model selection for large-scale test score data, and they illustrate consequences of nonnormality using sensitivity studies that compare baseline results to those from normalized score scales.
Design and implementation of a distributed large-scale spatial database system based on J2EE
NASA Astrophysics Data System (ADS)
Gong, Jianya; Chen, Nengcheng; Zhu, Xinyan; Zhang, Xia
2003-03-01
With the increasing maturity of distributed object technology, CORBA, .NET and EJB are universally used in the traditional IT field. However, theories and practices of distributed spatial databases need further improvement owing to contradictions between large-scale spatial data and limited network bandwidth, or between transitory sessions and long transaction processing. Differences and trends among CORBA, .NET and EJB are discussed in detail; afterwards, the concept, architecture and characteristics of a distributed large-scale seamless spatial database system based on J2EE are provided, which contains a GIS client application, web server, GIS application server and spatial data server. Moreover, the design and implementation of the components are explained: the GIS client application based on JavaBeans, the GIS engine based on servlets, and the GIS application server based on GIS enterprise JavaBeans (containing session beans and entity beans). Besides, experiments on the relation between spatial data volume and response time under different conditions are conducted, which prove that a distributed spatial database system based on J2EE can be used to manage, distribute and share large-scale spatial data on the Internet. Lastly, a distributed large-scale seamless image database based on the Internet is presented.
Jeffrey E. Schneiderman; Hong S. He; Frank R. Thompson; William D. Dijak; Jacob S. Fraser
2015-01-01
Tree species distribution and abundance are affected by forces operating across a hierarchy of ecological scales. Process and species distribution models have been developed emphasizing forces at different scales. Understanding model agreement across hierarchical scales provides perspective on prediction uncertainty and ultimately enables policy makers and managers to...
Inferred Lunar Boulder Distributions at Decimeter Scales
NASA Technical Reports Server (NTRS)
Baloga, S. M.; Glaze, L. S.; Spudis, P. D.
2012-01-01
Block size distributions of impact deposits on the Moon are diagnostic of the impact process and environmental effects, such as target lithology and weathering. Block size distributions are also important factors in trafficability, habitability, and possibly the identification of indigenous resources. Lunar block sizes have been investigated for many years for many purposes [e.g., 1-3]. An unresolved issue is the extent to which lunar block size distributions can be extrapolated to scales smaller than limits of resolution of direct measurement. This would seem to be a straightforward statistical application, but it is complicated by two issues. First, the cumulative size frequency distribution of observable boulders rolls over due to resolution limitations at the small end. Second, statistical regression provides the best fit only around the centroid of the data [4]. Confidence and prediction limits splay away from the best fit at the endpoints resulting in inferences in the boulder density at the CPR scale that can differ by many orders of magnitude [4]. These issues were originally investigated by Cintala and McBride [2] using Surveyor data. The objective of this study was to determine whether the measured block size distributions from Lunar Reconnaissance Orbiter Camera - Narrow Angle Camera (LROC-NAC) images (m-scale resolution) can be used to infer the block size distribution at length scales comparable to Mini-RF Circular Polarization Ratio (CPR) scales, nominally taken as 10 cm. This would set the stage for assessing correlations of inferred block size distributions with CPR returns [6].
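To make the regression point above concrete, the following Python sketch fits a cumulative power-law size-frequency distribution to synthetic metre-scale block counts (invented numbers, not LROC-NAC measurements) and shows how the 95% prediction band widens when extrapolating down toward the 10 cm CPR scale, far from the centroid of the data.

```python
import numpy as np
from scipy import stats

# synthetic "measured" cumulative block counts at metre scales; true slope here is -2.5
diam = np.array([1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0])           # block diameter, m
counts = 500.0 * diam ** -2.5 * np.exp(np.random.default_rng(1).normal(0, 0.3, diam.size))

x, y = np.log10(diam), np.log10(counts)
n = x.size
slope, intercept = np.polyfit(x, y, 1)
resid = y - (intercept + slope * x)
s = np.sqrt(resid @ resid / (n - 2))          # residual standard error
sxx = ((x - x.mean()) ** 2).sum()

def prediction_interval(x0, conf=0.95):
    """Best fit and prediction limits for log10 N(>D) at log10 diameter x0."""
    t = stats.t.ppf(0.5 + conf / 2, n - 2)
    yhat = intercept + slope * x0
    half = t * s * np.sqrt(1 + 1 / n + (x0 - x.mean()) ** 2 / sxx)
    return 10 ** (yhat - half), 10 ** yhat, 10 ** (yhat + half)

for d in (2.0, 0.1):   # inside the measured range vs. extrapolated to the 10 cm scale
    lo, best, hi = prediction_interval(np.log10(d))
    print(f"D = {d:4.1f} m: N(>D) best fit {best:10.1f}, 95% prediction band [{lo:.1f}, {hi:.1f}]")
```

The widening band at 0.1 m illustrates why extrapolated boulder densities at CPR scales can differ by large factors even when the metre-scale fit looks tight.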
Statistical analyses support power law distributions found in neuronal avalanches.
Klaus, Andreas; Yu, Shan; Plenz, Dietmar
2011-01-01
The size distribution of neuronal avalanches in cortical networks has been reported to follow a power law distribution with exponent close to -1.5, which is a reflection of long-range spatial correlations in spontaneous neuronal activity. However, identifying power law scaling in empirical data can be difficult and sometimes controversial. In the present study, we tested the power law hypothesis for neuronal avalanches by using more stringent statistical analyses. In particular, we performed the following steps: (i) analysis of finite-size scaling to identify scale-free dynamics in neuronal avalanches, (ii) model parameter estimation to determine the specific exponent of the power law, and (iii) comparison of the power law to alternative model distributions. Consistent with critical state dynamics, avalanche size distributions exhibited robust scaling behavior in which the maximum avalanche size was limited only by the spatial extent of sampling ("finite size" effect). This scale-free dynamics suggests the power law as a model for the distribution of avalanche sizes. Using both the Kolmogorov-Smirnov statistic and a maximum likelihood approach, we found the slope to be close to -1.5, which is in line with previous reports. Finally, the power law model for neuronal avalanches was compared to the exponential and to various heavy-tail distributions based on the Kolmogorov-Smirnov distance and by using a log-likelihood ratio test. Both the power law distribution without and with exponential cut-off provided significantly better fits to the cluster size distributions in neuronal avalanches than the exponential, the lognormal and the gamma distribution. In summary, our findings strongly support the power law scaling in neuronal avalanches, providing further evidence for critical state dynamics in superficial layers of cortex.
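A compact Python sketch of the kind of estimation described above: the continuous maximum-likelihood estimator for the power-law exponent, alpha = 1 + n / sum(ln(x_i / x_min)), together with a Kolmogorov-Smirnov distance against the fitted power law. The synthetic avalanche sizes and the choice of x_min are assumptions for illustration, not the study's data or pipeline.

```python
import numpy as np

def fit_power_law(sizes, x_min):
    """Continuous MLE for P(x) ~ x^-alpha above x_min, plus the KS distance
    between the empirical and fitted cumulative distributions."""
    x = np.asarray(sizes, dtype=float)
    x = x[x >= x_min]
    n = x.size
    alpha = 1.0 + n / np.log(x / x_min).sum()
    xs = np.sort(x)
    emp = np.arange(1, n + 1) / n
    fit = 1.0 - (xs / x_min) ** (1.0 - alpha)
    return alpha, np.abs(emp - fit).max()

# synthetic avalanche sizes drawn from a pure power law with alpha = 1.5 (illustrative)
rng = np.random.default_rng(7)
u = rng.random(50_000)
sizes = (1.0 - u) ** (-1.0 / 0.5)      # inverse-CDF sampling, x_min = 1, alpha = 1.5

alpha_hat, ks = fit_power_law(sizes, x_min=1.0)
print(f"estimated exponent {alpha_hat:.3f} (true 1.5), KS distance {ks:.4f}")
```

Comparing this KS distance (or a log-likelihood ratio) against fits of exponential, lognormal or gamma models is the model-comparison step the abstract describes.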
Reusser, D.A.; Lee, H.
2008-01-01
Habitat models can be used to predict the distributions of marine and estuarine non-indigenous species (NIS) over several spatial scales. At an estuary scale, our goal is to predict the estuaries most likely to be invaded, but at a habitat scale, the goal is to predict the specific locations within an estuary that are most vulnerable to invasion. As an initial step in evaluating several habitat models, model performance for a suite of benthic species with reasonably well-known distributions on the Pacific coast of the US needs to be compared. We discuss the utility of non-parametric multiplicative regression (NPMR) for predicting habitat- and estuary-scale distributions of native and NIS. NPMR incorporates interactions among variables, allows qualitative and categorical variables, and utilizes data on absence as well as presence. Preliminary results indicate that NPMR generally performs well at both spatial scales and that distributions of NIS are predicted as well as those of native species. For most species, latitude was the single best predictor, although similar model performance could be obtained at both spatial scales with combinations of other habitat variables. Errors of commission were more frequent at a habitat scale, with omission and commission errors approximately equal at an estuary scale. © 2008 International Council for the Exploration of the Sea. Published by Oxford Journals. All rights reserved.
A Method for Estimating Noise from Full-Scale Distributed Exhaust Nozzles
NASA Technical Reports Server (NTRS)
Kinzie, Kevin W.; Schein, David B.
2004-01-01
A method to estimate the full-scale noise suppression from a scale model distributed exhaust nozzle (DEN) is presented. For a conventional scale model exhaust nozzle, Strouhal number scaling using a scale factor related to the nozzle exit area is typically applied that shifts model scale frequency in proportion to the geometric scale factor. However, model scale DEN designs have two inherent length scales. One is associated with the mini-nozzles, whose size do not change in going from model scale to full scale. The other is associated with the overall nozzle exit area which is much smaller than full size. Consequently, lower frequency energy that is generated by the coalesced jet plume should scale to lower frequency, but higher frequency energy generated by individual mini-jets does not shift frequency. In addition, jet-jet acoustic shielding by the array of mini-nozzles is a significant noise reduction effect that may change with DEN model size. A technique has been developed to scale laboratory model spectral data based on the premise that high and low frequency content must be treated differently during the scaling process. The model-scale distributed exhaust spectra are divided into low and high frequency regions that are then adjusted to full scale separately based on different physics-based scaling laws. The regions are then recombined to create an estimate of the full-scale acoustic spectra. These spectra can then be converted to perceived noise levels (PNL). The paper presents the details of this methodology and provides an example of the estimated noise suppression by a distributed exhaust nozzle compared to a round conic nozzle.
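The two-region idea described above can be sketched as follows; the crossover frequency, the example spectrum and the crude band-summing recombination are all assumptions of this illustration, not the paper's procedure or data.

```python
import numpy as np

def scale_den_spectrum(freq_hz, psd, scale_factor, crossover_hz):
    """Split a model-scale DEN spectrum at a crossover frequency, Strouhal-shift only the
    low-frequency (coalesced-plume) part down by the geometric scale factor, keep the
    high-frequency (mini-jet) part unshifted, then recombine on a common log-spaced grid."""
    low = freq_hz < crossover_hz
    shifted_f = np.concatenate([freq_hz[low] / scale_factor, freq_hz[~low]])
    shifted_p = np.concatenate([psd[low], psd[~low]])
    # crude recombination: sum spectral values falling into common log-spaced bins
    bins = np.geomspace(shifted_f.min(), shifted_f.max(), 40)
    idx = np.digitize(shifted_f, bins)
    full_f = np.array([shifted_f[idx == k].mean() for k in np.unique(idx)])
    full_p = np.array([shifted_p[idx == k].sum() for k in np.unique(idx)])
    return full_f, full_p

# illustrative model-scale spectrum (not measured data): a broadband hump near 5 kHz
f = np.geomspace(200.0, 80_000.0, 300)
psd = 1e-3 * (f / 5_000.0) ** 2 / (1 + (f / 5_000.0) ** 4)
ff, pp = scale_den_spectrum(f, psd, scale_factor=8.0, crossover_hz=2_000.0)
print(f"{ff.size} recombined bands, peak now near {ff[pp.argmax()]:.0f} Hz")
```

A real analysis would integrate band pressure levels and add the usual distance and atmospheric-absorption corrections before computing PNL; the sketch only shows the split-shift-recombine structure.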
NASA Astrophysics Data System (ADS)
Ma, Xin-Xin; Lin, Zhan; Jin, Hong-Lin; Chen, Hua-Ran; Jiao, Li-Guo
2017-11-01
In this study, the distribution characteristics of scale height at various solar activity levels were statistically analyzed using the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) radio occultation data for 2007-2013. The results show that: (1) in the mid-high latitude region, the daytime (06-17LT) scale height exhibits annual variations in the form of a single peak structure with the crest appearing in summer. At the high latitude region, an annual variation is also observed for nighttime (18-05LT) scale height; (2) changes in the spatial distribution of the scale height occur. The crests are deflected towards the north during daytime (12-14LT) at a geomagnetic longitude of 60°W-180°W, and they are distributed roughly along the geomagnetic equator at 60°W-180°E. In the approximate region of 120°W-150°E and 50°S-80°S, the scale height values are significantly higher than those in other mid-latitude areas. This region enlarges with increased solar activity, and shows an approximately symmetric distribution about 0° geomagnetic longitude. Nighttime (00-02LT) scale height values in the high-latitude region are larger than those in the low-mid latitude region. These results could serve as reference for the study of ionosphere distribution and construction of the corresponding profile model.
NASA Astrophysics Data System (ADS)
Wang, Honghuan; Xing, Fangyuan; Yin, Hongxi; Zhao, Nan; Lian, Bizhan
2016-02-01
With the explosive growth of network services, reasonable traffic scheduling and efficient configuration of network resources are of great significance for increasing the efficiency of the network. In this paper, an adaptive traffic scheduling policy based on priority and a time window is proposed, and the performance of this algorithm is evaluated in terms of scheduling ratio. Routing and spectrum allocation are achieved by using the Floyd shortest-path algorithm and by establishing a node spectrum resource allocation model based on a greedy algorithm, which we propose. A fairness index is introduced to improve the capability of spectrum configuration. The results show that the designed traffic scheduling strategy can be applied to networks with multicast and broadcast functionalities and enables them to respond efficiently in real time. The node spectrum configuration scheme improves frequency resource utilization and brings out the efficiency of the network.
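A minimal Python sketch of the routing-plus-allocation step, assuming Floyd-Warshall all-pairs shortest paths and a first-fit greedy slot assignment with spectrum continuity along the route; the topology, slot count and first-fit rule are illustrative assumptions, since the abstract does not give the model's details.

```python
import itertools

INF = float("inf")

def floyd_warshall(n, edges):
    """All-pairs shortest paths with next-hop reconstruction (Floyd algorithm)."""
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    nxt = [[j if i != j else None for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        dist[u][v] = dist[v][u] = w
        nxt[u][v], nxt[v][u] = v, u
    for k, i, j in itertools.product(range(n), repeat=3):   # k is the outermost loop
        if dist[i][k] + dist[k][j] < dist[i][j]:
            dist[i][j] = dist[i][k] + dist[k][j]
            nxt[i][j] = nxt[i][k]
    return dist, nxt

def path(nxt, u, v):
    p = [u]
    while u != v:
        u = nxt[u][v]
        p.append(u)
    return p

def first_fit_spectrum(link_usage, route, n_slots):
    """Greedy (first-fit) assignment of one spectrum slot, identical on every hop."""
    for s in range(n_slots):
        if all(s not in link_usage.setdefault(frozenset(hop), set())
               for hop in zip(route, route[1:])):
            for hop in zip(route, route[1:]):
                link_usage[frozenset(hop)].add(s)
            return s
    return None   # demand is blocked

edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (0, 3, 5), (1, 3, 3)]   # toy topology
dist, nxt = floyd_warshall(4, edges)
usage = {}
for src, dst in [(0, 3), (0, 2), (1, 3)]:
    r = path(nxt, src, dst)
    print(f"demand {src}->{dst}: route {r}, slot {first_fit_spectrum(usage, r, n_slots=8)}")
```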
Research on performance of three-layer MG-OXC system based on MLAG and OCDM
NASA Astrophysics Data System (ADS)
Wang, Yubao; Ren, Yanfei; Meng, Ying; Bai, Jian
2017-10-01
At present, as the traffic volume conveyed by optical transport networks and the variety of traffic grooming methods increase rapidly, optical switching techniques face a series of issues, such as growing demand for wavelengths and complicated structure management and implementation. This work introduces optical code switching based on wavelength switching, constructs a three-layer multi-granularity optical cross-connect (MG-OXC) system on the basis of optical code division multiplexing (OCDM) and presents a new traffic grooming algorithm. The proposed architecture can improve the flexibility of traffic grooming, reduce the number of used wavelengths and save the number of consumed ports; hence, it can simplify the routing device and significantly enhance the performance of the system. Through analysis of the switching structure's network model on a multicast layered auxiliary graph (MLAG) and of the establishment of traffic grooming links, together with simulations of blocking probability and throughput, this paper shows the excellent performance of the proposed architecture.
A group communication approach for mobile computing mobile channel: An ISIS tool for mobile services
NASA Astrophysics Data System (ADS)
Cho, Kenjiro; Birman, Kenneth P.
1994-05-01
This paper examines group communication as an infrastructure to support mobility of users, and presents a simple scheme to support user mobility by means of switching a control point between replicated servers. We describe the design and implementation of a set of tools, called Mobile Channel, for use with the ISIS system. Mobile Channel is based on a combination of the two replication schemes: the primary-backup approach and the state machine approach. Mobile Channel implements a reliable one-to-many FIFO channel, in which a mobile client sees a single reliable server; servers, acting as a state machine, see multicast messages from clients. Migrations of mobile clients are handled as an intentional primary switch, and hand-offs or server failures are completely masked from mobile clients. To achieve high performance, servers are replicated at a sliding-window level. Our scheme provides a simple abstraction of migration, eliminates complicated hand-off protocols, provides fault-tolerance and is implemented within the existing group communication mechanism.
NASA Technical Reports Server (NTRS)
Iannicca, Dennis; Hylton, Alan; Ishac, Joseph
2012-01-01
Delay-Tolerant Networking (DTN) is an active area of research in the space communications community. DTN uses a standard layered approach with the Bundle Protocol operating on top of transport layer protocols known as convergence layers that actually transmit the data between nodes. Several different common transport layer protocols have been implemented as convergence layers in DTN implementations including User Datagram Protocol (UDP), Transmission Control Protocol (TCP), and Licklider Transmission Protocol (LTP). The purpose of this paper is to evaluate several stand-alone implementations of negative-acknowledgment based transport layer protocols to determine how they perform in a variety of different link conditions. The transport protocols chosen for this evaluation include Consultative Committee for Space Data Systems (CCSDS) File Delivery Protocol (CFDP), Licklider Transmission Protocol (LTP), NACK-Oriented Reliable Multicast (NORM), and Saratoga. The test parameters that the protocols were subjected to are characteristic of common communications links ranging from terrestrial to cis-lunar and apply different levels of delay, line rate, and error.
Void probability as a function of the void's shape and scale-invariant models
NASA Technical Reports Server (NTRS)
Elizalde, E.; Gaztanaga, E.
1991-01-01
The dependence of counts in cells on the shape of the cell for the large scale galaxy distribution is studied. A very concrete prediction can be made concerning the void distribution for scale invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell to be occupied is bigger for some elongated cells. A phenomenological scale invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
Implications of the IRAS data for galactic gamma-ray astronomy and EGRET
NASA Technical Reports Server (NTRS)
Stecker, F. W.
1990-01-01
Using the results of gamma-ray, millimeter wave and far infrared surveys of the galaxy, one can derive a logically consistent picture of the large scale distribution of galactic gas and cosmic rays, one tied to the overall processes of stellar birth and destruction on a galactic scale. Using the results of the IRAS far-infrared survey of the galaxy, the large scale radial distribution of galactic far-infrared emission was obtained independently for both the Northern and Southern Hemisphere sides of the Galaxy. The dominant feature in these distributions was found to be a broad peak coincident with the 5 kpc molecular gas cloud ring. Also found was evidence of spiral arm features. Strong correlations are evident between the large scale galactic distributions of far infrared emission, gamma-ray emission and total CO emission. There is a particularly tight correlation between the distribution of warm molecular clouds and far-infrared emission on a galactic scale.
Statistical properties of world investment networks
NASA Astrophysics Data System (ADS)
Song, Dong-Ming; Jiang, Zhi-Qiang; Zhou, Wei-Xing
2009-06-01
We have performed a detailed investigation on the world investment networks constructed from the Coordinated Portfolio Investment Survey (CPIS) data of the International Monetary Fund, ranging from 2001 to 2006. The distributions of degrees and node strengths are scale-free. The weight distributions can be well modeled by the Weibull distribution. The maximum flow spanning trees of the world investment networks possess two universal allometric scaling relations, independent of time and the investment type. The topological scaling exponent is 1.17±0.02 and the flow scaling exponent is 1.03±0.01.
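As a quick illustration of the weight-distribution modeling mentioned above, the hypothetical Python snippet below fits a Weibull distribution to a synthetic sample of edge weights; the sample is generated on the spot and is not CPIS data.

```python
import numpy as np
from scipy import stats

# illustrative edge weights: draw from a Weibull and re-fit it
rng = np.random.default_rng(3)
weights = stats.weibull_min.rvs(c=0.6, scale=2.0e6, size=5_000, random_state=rng)

shape, loc, scale = stats.weibull_min.fit(weights, floc=0)   # fix the location at zero
print(f"fitted Weibull shape {shape:.3f}, scale {scale:.3e}")
```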
Empirical scaling of the length of the longest increasing subsequences of random walks
NASA Astrophysics Data System (ADS)
Mendonça, J. Ricardo G.
2017-02-01
We provide Monte Carlo estimates of the scaling of the length L_n of the longest increasing subsequences of n-step random walks for several different distributions of step lengths, short and heavy-tailed. Our simulations indicate that, barring possible logarithmic corrections, L_n ∼ n^θ with the leading scaling exponent 0.60 ≲ θ ≲ 0.69 for the heavy-tailed distributions of step lengths examined, with values increasing as the distribution becomes more heavy-tailed, and θ ≃ 0.57 for distributions of finite variance, irrespective of the particular distribution. The results are consistent with existing rigorous bounds for θ, although in a somewhat surprising manner. For random walks with step lengths of finite variance, we conjecture that the correct asymptotic behavior of L_n is given by √n ln n, and also propose the form for the subleading asymptotics. The distribution of L_n was found to follow a simple scaling form with scaling functions that vary with θ. Accordingly, when the step lengths are of finite variance they seem to be universal. The nature of this scaling remains unclear, since we lack a working model, microscopic or hydrodynamic, for the behavior of the length of the longest increasing subsequences of random walks.
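The quantity studied above is easy to reproduce numerically; the sketch below computes L_n with O(n log n) patience sorting for Gaussian-step walks (an assumed finite-variance case, not the paper's full set of step distributions) and prints the ratio against the conjectured √n ln n form.

```python
import bisect
import math
import random

def lis_length(seq):
    """Length of the longest increasing subsequence via patience sorting, O(n log n)."""
    tails = []
    for x in seq:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

def gaussian_walk(n):
    """Positions of an n-step random walk with unit-variance Gaussian steps."""
    pos, walk = 0.0, []
    for _ in range(n):
        pos += random.gauss(0.0, 1.0)
        walk.append(pos)
    return walk

random.seed(5)
for n in (1_000, 10_000, 100_000):
    mean_L = sum(lis_length(gaussian_walk(n)) for _ in range(20)) / 20
    ratio = mean_L / (math.sqrt(n) * math.log(n))
    print(f"n = {n:6d}: <L_n> = {mean_L:8.1f}, <L_n>/(sqrt(n) ln n) = {ratio:.3f}")
```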
Anomalous dispersion in correlated porous media: a coupled continuous time random walk approach
NASA Astrophysics Data System (ADS)
Comolli, Alessandro; Dentz, Marco
2017-09-01
We study the causes of anomalous dispersion in Darcy-scale porous media characterized by spatially heterogeneous hydraulic properties. Spatial variability in hydraulic conductivity leads to spatial variability in the flow properties through Darcy's law and thus impacts on solute and particle transport. We consider purely advective transport in heterogeneity scenarios characterized by broad distributions of heterogeneity length scales and point values. Particle transport is characterized in terms of the stochastic properties of equidistantly sampled Lagrangian velocities, which are determined by the flow and conductivity statistics. The persistence length scales of flow and transport velocities are imprinted in the spatial disorder and reflect the distribution of heterogeneity length scales. Particle transitions over the velocity length scales are kinematically coupled with the transition time through velocity. We show that the average particle motion follows a coupled continuous time random walk (CTRW), which is fully parameterized by the distribution of flow velocities and the medium geometry in terms of the heterogeneity length scales. The coupled CTRW provides a systematic framework for the investigation of the origins of anomalous dispersion in terms of heterogeneity correlation and the distribution of conductivity point values. We derive analytical expressions for the asymptotic scaling of the moments of the spatial particle distribution and first arrival time distribution (FATD), and perform numerical particle tracking simulations of the coupled CTRW to capture the full average transport behavior. Broad distributions of heterogeneity point values and length scales may lead to very similar dispersion behaviors in terms of the spatial variance. Their mechanisms, however, are very different, which manifests in the distributions of particle positions and arrival times, which play a central role for the prediction of the fate of dissolved substances in heterogeneous natural and engineered porous materials. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
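A bare-bones Python sketch of a coupled CTRW of the kind described above, under assumed Pareto-distributed transition lengths and a broad velocity distribution (neither taken from the paper): each transition covers one heterogeneity length ℓ at velocity v and takes the coupled time ℓ/v.

```python
import numpy as np

rng = np.random.default_rng(11)

def sample_length():
    return (1.0 - rng.random()) ** (-1.0 / 2.5)      # Pareto lengths, exponent 2.5 (assumed)

def sample_velocity():
    return (1.0 - rng.random()) ** 2.0               # broad velocity distribution, density ~ v^(-1/2)

def positions_at(times, n_particles=2_000):
    """Track coupled-CTRW particles and record positions at the requested observation times
    (positions are taken at the end of the transition that crosses each time, a simplification)."""
    out = np.zeros((len(times), n_particles))
    for p in range(n_particles):
        x = t = 0.0
        for i, horizon in enumerate(times):
            while t < horizon:
                ell, v = sample_length(), sample_velocity()
                x += ell                 # advective transition over one heterogeneity length
                t += ell / v             # coupled transition time: slow velocities mean long waits
            out[i, p] = x
    return out

times = [10.0, 100.0, 1000.0]
for horizon, row in zip(times, positions_at(times)):
    print(f"t = {horizon:7.1f}: mean displacement {row.mean():9.1f}, variance {row.var():12.1f}")
```

Tracking how the printed variance grows with time (faster or slower than linearly) is the dispersion diagnostic the abstract discusses; the specific growth here reflects only the assumed toy distributions.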
NASA Astrophysics Data System (ADS)
Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire
2017-04-01
Nowadays, there is a growing interest in small-scale rainfall information, provided by weather radars, for use in urban water management and decision-making. In parallel, increasing attention is devoted to the development of fully distributed and grid-based models, following the increase in computation capabilities and the availability of the high-resolution GIS information needed to implement such models. However, the choice of an appropriate implementation scale, able to integrate the catchment heterogeneity and the full rainfall variability measured by high-resolution radar technologies, remains an open issue. This work proposes a two-step investigation of scale effects in urban hydrology and of their consequences for modeling. In the first step, fractal tools are used to highlight the scale dependency observed within the distributed data used to describe the catchment heterogeneity; both the structure of the sewer network and the distribution of impervious areas are analyzed. Then an intensive multi-scale modeling exercise is carried out to understand scaling effects on hydrological model performance. Investigations were conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model was implemented at 17 spatial resolutions ranging from 100 m to 5 m, and modeling investigations were performed using both rain gauge rainfall information and high-resolution X-band radar data in order to assess the sensitivity of the model to small-scale rainfall variability. The results demonstrate scale-effect challenges in urban hydrology modeling. The fractal analysis highlights the scale dependency observed within the distributed data used to implement hydrological models: patterns of geophysical data change when the observation pixel size changes. The multi-scale modeling investigation performed with the Multi-Hydro model at 17 spatial resolutions confirms the scaling effect on hydrological model performance. Results were analyzed at three ranges of scales identified in the fractal analysis and confirmed in the modeling work. The sensitivity of the model to small-scale rainfall variability is discussed as well.
Scale effect challenges in urban hydrology highlighted with a distributed hydrological model
NASA Astrophysics Data System (ADS)
Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire
2018-01-01
Hydrological models are extensively used in urban water management, development and evaluation of future scenarios and research activities. There is a growing interest in the development of fully distributed and grid-based models. However, some complex questions related to scale effects are not yet fully understood and still remain open issues in urban hydrology. In this paper we propose a two-step investigation framework to illustrate the extent of scale effects in urban hydrology. First, fractal tools are used to highlight the scale dependence observed within distributed data input into urban hydrological models. Then an intensive multi-scale modelling work is carried out to understand scale effects on hydrological model performance. Investigations are conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model is implemented at 17 spatial resolutions ranging from 100 to 5 m. Results clearly exhibit scale effect challenges in urban hydrology modelling. The applicability of fractal concepts highlights the scale dependence observed within distributed data. Patterns of geophysical data change when the size of the observation pixel changes. The multi-scale modelling investigation confirms scale effects on hydrological model performance. Results are analysed over three ranges of scales identified in the fractal analysis and confirmed through modelling. This work also discusses some remaining issues in urban hydrology modelling related to the availability of high-quality data at high resolutions, model numerical instabilities, and computation time requirements. The main findings of this paper enable a replacement of traditional methods of model calibration by innovative methods of model resolution alteration based on the spatial data variability and scaling of flows in urban hydrology.
NASA Astrophysics Data System (ADS)
Pedretti, Daniele
2017-04-01
Power-law (PL) distributions are widely adopted to define the late-time scaling of solute breakthrough curves (BTCs) during transport experiments in highly heterogeneous media. However, from a statistical perspective, distinguishing between a PL distribution and another tailed distribution is difficult, particularly when a qualitative assessment based on visual analysis of double-logarithmic plotting is used. This presentation discusses the results from a recent analysis in which a suite of statistical tools was applied to rigorously evaluate the scaling of BTCs from experiments that generate tailed distributions typically described as PL at late time. To this end, a set of BTCs from numerical simulations in highly heterogeneous media was generated using a transition probability approach (T-PROGS) coupled to a finite-difference numerical solver of the flow equation (MODFLOW) and a random walk particle tracking approach for Lagrangian transport (RW3D). The T-PROGS fields assumed randomly distributed hydraulic heterogeneities with long correlation scales creating solute channeling and anomalous transport. For simplicity, transport was simulated as purely advective. This combination of tools generates strongly non-symmetric BTCs visually resembling PL distributions at late time when plotted in double-log scales. Unlike other combinations of modeling parameters and boundary conditions (e.g. matrix diffusion in fractures), at late time no direct link exists between the mathematical functions describing the scaling of these curves and the physical parameters controlling transport. The results suggest that the statistical tests fail to describe the majority of curves as PL distributed. Moreover, they suggest that PL and lognormal distributions have the same likelihood to represent parametrically the shape of the tails. It is noticeable that forcing a model to reproduce the tail as a PL function results in a distribution of PL slopes between 1.2 and 4, which are the typical values observed during field experiments. We conclude that care must be taken when defining a BTC late-time distribution as a power-law function. Even though the estimated scaling factors are found to fall in traditional ranges, the actual distribution controlling the scaling of concentration may differ from a power-law function, with direct consequences, for instance, for the selection of effective parameters in upscaled modeling solutions.
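One way to make the power-law versus lognormal comparison concrete is an information-criterion check on the tail samples; the sketch below uses synthetic lognormal "late-time" data and SciPy's generic maximum-likelihood fitters (the data and the AIC-based comparison are illustrative assumptions, not the presenter's actual workflow).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    tail = rng.lognormal(mean=1.0, sigma=1.2, size=5000)   # stand-in for late-time BTC samples

    def aic(dist, data, **fit_kwargs):
        # Akaike information criterion from a maximum-likelihood fit of `dist` to `data`.
        params = dist.fit(data, **fit_kwargs)
        log_lik = dist.logpdf(data, *params).sum()
        return 2 * len(params) - 2 * log_lik

    print("Pareto (power-law) AIC:", aic(stats.pareto, tail, floc=0))
    print("Lognormal AIC         :", aic(stats.lognorm, tail, floc=0))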
Response of deep and shallow tropical maritime cumuli to large-scale processes
NASA Technical Reports Server (NTRS)
Yanai, M.; Chu, J.-H.; Stark, T. E.; Nitta, T.
1976-01-01
The bulk diagnostic method of Yanai et al. (1973) and a simplified version of the spectral diagnostic method of Nitta (1975) are used for a more quantitative evaluation of the response of various types of cumuliform clouds to large-scale processes, using the same data set in the Marshall Islands area for a 100-day period in 1956. The dependence of the cloud mass flux distribution on radiative cooling, large-scale vertical motion, and evaporation from the sea is examined. It is shown that typical radiative cooling rates in the tropics tend to produce a bimodal distribution of the mass spectrum exhibiting deep and shallow clouds. The bimodal distribution is further enhanced when the large-scale vertical motion is upward, and a nearly unimodal distribution of shallow clouds prevails when the radiative cooling is compensated by the heating due to large-scale subsidence. Both deep and shallow clouds are modulated by large-scale disturbances. The primary role of surface evaporation is to maintain the moisture flux at the cloud base.
A quark model analysis of orbital angular momentum
NASA Astrophysics Data System (ADS)
Scopetta, Sergio; Vento, Vicente
1999-08-01
Orbital Angular Momentum (OAM) twist-two parton distributions are studied. At the low-energy (hadronic) scale we calculate them in the relativistic MIT bag model and in non-relativistic potential quark models. We reach the scale of the data by leading-order evolution using the OPE and perturbative QCD. We confirm that the contribution of quark and gluon OAM to the nucleon spin grows with Q², and that it can be relevant at the experimental scale even if it is negligible at the hadronic scale, irrespective of the model used. The sign and shape of the quark OAM distribution at high Q² may depend strongly on the relative size of the OAM and spin distributions at the hadronic scale. Sizeable quark OAM distributions at the hadronic scale, as proposed by several authors, can produce the dominant contribution to the nucleon spin at high Q². As expected from general arguments, we find that the large gluon OAM contribution is almost cancelled by the gluon spin contribution.
NASA Astrophysics Data System (ADS)
Yang, Liping; Zhang, Lei; He, Jiansen; Tu, Chuanyi; Li, Shengtai; Wang, Xin; Wang, Linghua
2018-03-01
Multi-order structure functions in the solar wind are reported to display a monofractal scaling when sampled parallel to the local magnetic field and a multifractal scaling when measured perpendicularly. Whether, and to what extent, this scaling anisotropy is weakened by an enhancement of the turbulence amplitude relative to the background magnetic field strength remains an open question. In this study, based on two runs of a magnetohydrodynamic (MHD) turbulence simulation with different relative levels of turbulence amplitude, we investigate and compare the scaling of multi-order magnetic structure functions and magnetic probability distribution functions (PDFs) as well as their dependence on the direction of the local field. The numerical results show that for the case of large-amplitude MHD turbulence, the multi-order structure functions display a multifractal scaling at all angles to the local magnetic field, with PDFs deviating significantly from the Gaussian distribution and a flatness larger than 3 at all angles. In contrast, for the case of small-amplitude MHD turbulence, the multi-order structure functions and PDFs have different features in the quasi-parallel and quasi-perpendicular directions: a monofractal scaling and Gaussian-like distribution in the former, and a conversion of a monofractal scaling and Gaussian-like distribution into a multifractal scaling and non-Gaussian tail distribution in the latter. These results hint that when intermittencies are abundant and intense, a multifractal scaling of the structure functions can appear even in the quasi-parallel direction; otherwise, a monofractal scaling of the structure functions persists even in the quasi-perpendicular direction.
Shape of growth-rate distribution determines the type of Non-Gibrat’s Property
NASA Astrophysics Data System (ADS)
Ishikawa, Atushi; Fujimoto, Shouji; Mizuno, Takayuki
2011-11-01
In this study, the authors examine exhaustive business data on Japanese firms, which cover nearly all companies in the mid- and large-scale ranges in terms of firm size, to reach several key findings on profits/sales distribution and business growth trends. Here, profits denote net profits. First, detailed balance is observed not only in profits data but also in sales data. Furthermore, the growth-rate distribution of sales has wider tails than the linear growth-rate distribution of profits in log-log scale. On the one hand, in the mid-scale range of profits, the probability of positive growth decreases and the probability of negative growth increases symmetrically as the initial value increases. This is called Non-Gibrat’s First Property. On the other hand, in the mid-scale range of sales, the probability of positive growth decreases as the initial value increases, while the probability of negative growth hardly changes. This is called Non-Gibrat’s Second Property. Under detailed balance, Non-Gibrat’s First and Second Properties are analytically derived from the linear and quadratic growth-rate distributions in log-log scale, respectively. In both cases, the log-normal distribution is inferred from Non-Gibrat’s Properties and detailed balance. These analytic results are verified by empirical data. Consequently, this clarifies the notion that the difference in shapes between growth-rate distributions of sales and profits is closely related to the difference between the two Non-Gibrat’s Properties in the mid-scale range.
Niche models can be used to predict the distributions of marine/estuarine nonindigenous species (NIS) over three spatial scales. The goal at the biogeographic scale is to predict whether a species is likely to invade a geographic region. At the regional scale, the goal is to pr...
Generalised Central Limit Theorems for Growth Rate Distribution of Complex Systems
NASA Astrophysics Data System (ADS)
Takayasu, Misako; Watanabe, Hayafumi; Takayasu, Hideki
2014-04-01
We introduce a solvable model of randomly growing systems consisting of many independent subunits. Scaling relations and growth rate distributions in the limit of infinite subunits are analysed theoretically. Various types of scaling properties and distributions reported for growth rates of complex systems in a variety of fields can be derived from this basic physical model. Statistical data of growth rates for about 1 million business firms are analysed as a real-world example of randomly growing systems. Not only are the scaling relations consistent with the theoretical solution, but the entire functional form of the growth rate distribution is fitted with a theoretical distribution that has a power-law tail.
Dependence of exponents on text length versus finite-size scaling for word-frequency distributions
NASA Astrophysics Data System (ADS)
Corral, Álvaro; Font-Clos, Francesc
2017-08-01
Some authors have recently argued that a finite-size scaling law for the text-length dependence of word-frequency distributions cannot be conceptually valid. Here we give solid quantitative evidence for the validity of this scaling law, using both careful statistical tests and analytical arguments based on the generalized central-limit theorem applied to the moments of the distribution (and obtaining a novel derivation of Heaps' law as a by-product). We also find that the picture of word-frequency distributions with power-law exponents that decrease with text length [X. Yan and P. Minnhagen, Physica A 444, 828 (2016), 10.1016/j.physa.2015.10.082] does not stand up to rigorous statistical analysis. Instead, we show that the distributions are perfectly described by power-law tails with stable exponents, whose values are close to 2, in agreement with the classical Zipf's law. Some misconceptions about scaling are also clarified.
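As a toy illustration of estimating a stable power-law tail exponent close to 2, the following sketch applies a continuous-approximation maximum-likelihood (Hill-type) estimator to synthetic Zipf-distributed frequencies; the cutoff and sample size are arbitrary, and the estimator is only an approximation for discrete data.

    import numpy as np

    def hill_exponent(samples, xmin=5):
        # Continuous-approximation MLE of a power-law tail exponent for samples >= xmin.
        x = np.asarray([s for s in samples if s >= xmin], dtype=float)
        return 1.0 + len(x) / np.sum(np.log(x / xmin))

    rng = np.random.default_rng(3)
    freqs = rng.zipf(2.0, 50_000)      # synthetic word frequencies with exponent 2
    print(hill_exponent(freqs))        # should come out close to 2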
Weighted Scaling in Non-growth Random Networks
NASA Astrophysics Data System (ADS)
Chen, Guang; Yang, Xu-Hua; Xu, Xin-Li
2012-09-01
We propose a weighted model to explain the self-organizing formation of scale-free phenomenon in non-growth random networks. In this model, we use multiple-edges to represent the connections between vertices and define the weight of a multiple-edge as the total weights of all single-edges within it and the strength of a vertex as the sum of weights for those multiple-edges attached to it. The network evolves according to a vertex strength preferential selection mechanism. During the evolution process, the network always holds its total number of vertices and its total number of single-edges constantly. We show analytically and numerically that a network will form steady scale-free distributions with our model. The results show that a weighted non-growth random network can evolve into scale-free state. It is interesting that the network also obtains the character of an exponential edge weight distribution. Namely, coexistence of scale-free distribution and exponential distribution emerges.
Approaching the Linguistic Complexity
NASA Astrophysics Data System (ADS)
Drożdż, Stanisław; Kwapień, Jarosław; Orczyk, Adam
We analyze the rank-frequency distributions of words in selected English and Polish texts. We compare the scaling properties of these distributions in both languages. We also study a few small corpora of Polish literary texts and find that for a corpus consisting of texts written by different authors the basic scaling regime is broken more strongly than in the case of a comparable corpus consisting of texts written by the same author. Similarly, for a corpus consisting of texts translated into Polish from other languages the scaling regime is broken more strongly than for a comparable corpus of native Polish texts. Moreover, based on the British National Corpus, we consider the rank-frequency distributions of the grammatically basic forms of words (lemmas) tagged with their proper part of speech. We find that these distributions do not scale if each part of speech is analyzed separately. The only part of speech that independently develops a trace of scaling is verbs.
Quantifying Stock Return Distributions in Financial Markets
Botta, Federico; Moat, Helen Susannah; Stanley, H. Eugene; Preis, Tobias
2015-01-01
Being able to quantify the probability of large price changes in stock markets is of crucial importance in understanding financial crises that affect the lives of people worldwide. Large changes in stock market prices can arise abruptly, within a matter of minutes, or develop across much longer time scales. Here, we analyze a dataset comprising the stocks forming the Dow Jones Industrial Average at second-by-second resolution in the period from January 2008 to July 2010 in order to quantify the distribution of changes in market prices at a range of time scales. We find that the tails of the distributions of logarithmic price changes, or returns, exhibit power law decays for time scales ranging from 300 seconds to 3600 seconds. For larger time scales, we find that the tails of the distributions exhibit exponential decay. Our findings may inform the development of models of market behavior across varying time scales. PMID:26327593
Fractals in geology and geophysics
NASA Technical Reports Server (NTRS)
Turcotte, Donald L.
1989-01-01
The definition of a fractal distribution is that the number of objects N with a characteristic size greater than r scales as N ∼ r^(-D). The frequency-size distributions for islands, earthquakes, fragments, ore deposits, and oil fields often satisfy this relation. This application illustrates a fundamental aspect of fractal distributions, scale invariance. The requirement of an object to define a scale in photographs of many geological features is one indication of the wide applicability of scale invariance to geological problems; scale invariance can lead to fractal clustering. Geophysical spectra can also be related to fractals; these are self-affine fractals rather than self-similar fractals. Examples include the earth's topography and geoid.
Spatial distribution of GRBs and large scale structure of the Universe
NASA Astrophysics Data System (ADS)
Bagoly, Zsolt; Rácz, István I.; Balázs, Lajos G.; Tóth, L. Viktor; Horváth, István
We studied the spatial distribution of starburst galaxies from the Millennium XXL database at z = 0.82. We examined the starburst distribution in the classical Millennium I simulation (De Lucia et al. 2006), which uses a semi-analytical model for the genesis of galaxies. We simulated a sample of starburst galaxies with a Markov Chain Monte Carlo method. The connection between the homogeneity of the large-scale structure and the distribution of starburst groups (Kofman and Shandarin 1998; Suhhonenko et al. 2011; Liivamägi et al. 2012; Park et al. 2012; Horvath et al. 2014; Horvath et al. 2015) on a defined scale was also checked.
Yang, Haishui; Zang, Yanyan; Yuan, Yongge; Tang, Jianjun; Chen, Xin
2012-04-12
Arbuscular mycorrhizal fungi (AMF) can form obligate symbioses with the vast majority of land plants, and AMF distribution patterns have received increasing attention from researchers. At the local scale, the distribution of AMF is well documented. Studies at large scales, however, are limited because intensive sampling is difficult. Here, we used ITS rDNA sequence metadata obtained from public databases to study the distribution of AMF at continental and global scales. We also used these sequence metadata to investigate whether host plant is the main factor that affects the distribution of AMF at large scales. We defined 305 ITS virtual taxa (ITS-VTs) among all sequences of the Glomeromycota by using a comprehensive maximum likelihood phylogenetic analysis. Each host taxonomic order averaged about 53% specific ITS-VTs, and approximately 60% of the ITS-VTs were host specific. Those ITS-VTs with wide host range showed wide geographic distribution. Most ITS-VTs occurred in only one type of host functional group. The distributions of most ITS-VTs were limited across ecosystem, across continent, across biogeographical realm, and across climatic zone. Non-metric multidimensional scaling analysis (NMDS) showed that AMF community composition differed among functional groups of hosts, and among ecosystem, continent, biogeographical realm, and climatic zone. The Mantel test showed that AMF community composition was significantly correlated with plant community composition among ecosystem, among continent, among biogeographical realm, and among climatic zone. Structural equation modeling (SEM) showed that the effects of ecosystem, continent, biogeographical realm, and climatic zone on AMF distribution were mainly indirect, whereas host plants had strong direct effects on AMF. The distribution of AMF as indicated by ITS rDNA sequences showed a pattern of high endemism at large scales. This pattern indicates high specificity of AMF for host at different scales (plant taxonomic order and functional group) and high selectivity from host plants for AMF. The effects of ecosystemic, biogeographical, continental and climatic factors on AMF distribution might be mediated by host plants.
Map scale effects on estimating the number of undiscovered mineral deposits
Singer, D.A.; Menzie, W.D.
2008-01-01
Estimates of numbers of undiscovered mineral deposits, fundamental to assessing mineral resources, are affected by map scale. Where consistently defined deposits of a particular type are estimated, spatial and frequency distributions of deposits are linked in that some frequency distributions can be generated by processes that are random in space whereas others are generated by processes suggesting clustering in space. Possible spatial distributions of mineral deposits and their related frequency distributions are affected by map scale and the associated inclusion of non-permissive or covered geological settings. More generalized map scales are more likely to cause inclusion of geologic settings that are not really permissive for the deposit type, or that include unreported cover over permissive areas, resulting in the appearance of deposit clustering. Thus, overly generalized map scales can cause deposits to appear clustered. We propose a model that captures the effects of map scale and the related inclusion of non-permissive geologic settings on estimates of numbers of deposits: the zero-inflated Poisson distribution. Effects of map scale as represented by the zero-inflated Poisson distribution suggest that the appearance of deposit clustering should diminish as mapping becomes more detailed, because the number of inflated zeros would decrease with more detailed maps. Based on observed worldwide relationships between map scale and areas permissive for deposit types, mapping at a scale with twice the detail should cut the permissive area of a porphyry copper tract to 29% and of a volcanic-hosted massive sulfide tract to 50% of their original sizes. Thus some direct benefits of mapping an area at a more detailed scale are indicated by significant reductions in areas permissive for deposit types, increased deposit density and, as a consequence, reduced uncertainty in the estimate of the number of undiscovered deposits. Exploration enterprises benefit from reduced areas requiring detailed and expensive exploration, and land-use planners benefit from reduced areas of concern. © 2008 International Association for Mathematical Geology.
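For reference, the zero-inflated Poisson model invoked above mixes a point mass at zero (non-permissive or covered settings) with an ordinary Poisson count; a small sketch of its probability mass function, with made-up parameter values, is given below.

    import numpy as np
    from scipy import stats

    def zip_pmf(k, lam, pi0):
        # Zero-inflated Poisson: with probability pi0 a tract contributes a forced zero,
        # otherwise the number of deposits is Poisson(lam).
        base = (1.0 - pi0) * stats.poisson.pmf(k, lam)
        return np.where(np.asarray(k) == 0, pi0 + base, base)

    k = np.arange(6)
    print(zip_pmf(k, lam=1.5, pi0=0.4))   # illustrative parameters only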
Design of Availability-Dependent Distributed Services in Large-Scale Uncooperative Settings
ERIC Educational Resources Information Center
Morales, Ramses Victor
2009-01-01
Thesis Statement: "Availability-dependent global predicates can be efficiently and scalably realized for a class of distributed services, in spite of specific selfish and colluding behaviors, using local and decentralized protocols". Several types of large-scale distributed systems spanning the Internet have to deal with availability variations…
Spatial Distribution of Surface Soil Moisture in a Small Forested Catchment
Predicting the spatial distribution of soil moisture is an important hydrological question. We measured the spatial distribution of surface soil moisture (upper 6 cm) using an Amplitude Domain Reflectometry sensor at the plot scale (2 × 2 m) and small catchment scale (0.84 ha) in...
Fast Grid Frequency Support from Distributed Inverter-Based Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoke, Anderson F
This presentation summarizes power hardware-in-the-loop testing performed to evaluate the ability of distributed inverter-coupled generation to support grid frequency on the fastest time scales. The research found that distributed PV inverters and other DERs can effectively support the grid on sub-second time scales.
A test of the cross-scale resilience model: Functional richness in Mediterranean-climate ecosystems
Wardwell, D.A.; Allen, Craig R.; Peterson, G.D.; Tyre, A.J.
2008-01-01
Ecological resilience has been proposed to be generated, in part, in the discontinuous structure of complex systems. Environmental discontinuities are reflected in discontinuous, aggregated animal body mass distributions. Diversity of functional groups within body mass aggregations (scales) and redundancy of functional groups across body mass aggregations (scales) has been proposed to increase resilience. We evaluate that proposition by analyzing mammalian and avian communities of Mediterranean-climate ecosystems. We first determined that body mass distributions for each animal community were discontinuous. We then calculated the variance in richness of function across aggregations in each community, and compared observed values with distributions created by 1000 simulations using a null of random distribution of function, with the same n, number of discontinuities and number of functional groups as the observed data. Variance in the richness of functional groups across scales was significantly lower in real communities than in simulations in eight of nine sites. The distribution of function across body mass aggregations in the animal communities we analyzed was non-random, and supports the contentions of the cross-scale resilience model. © 2007 Elsevier B.V. All rights reserved.
The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model resp...
ERIC Educational Resources Information Center
Warfvinge, Per
2008-01-01
The ECTS grade transfer scale is an interface grade scale to help European universities, students and employers to understand the level of student achievement. Hence, the ECTS scale can be seen as an interface, transforming local scales to a common system where A-E denote passing grades. By definition, ECTS should distribute the passing students…
Supporting large scale applications on networks of workstations
NASA Technical Reports Server (NTRS)
Cooper, Robert; Birman, Kenneth P.
1989-01-01
Distributed applications on networks of workstations are an increasingly common way to satisfy computing needs. However, existing mechanisms for distributed programming exhibit poor performance and reliability as application size increases. Extension of the ISIS distributed programming system to support large scale distributed applications by providing hierarchical process groups is discussed. Incorporation of hierarchy in the program structure and exploitation of this to limit the communication and storage required in any one component of the distributed system is examined.
NASA Astrophysics Data System (ADS)
Cao, Zhanning; Li, Xiangyang; Sun, Shaohan; Liu, Qun; Deng, Guangxiao
2018-04-01
Aiming at the prediction of carbonate fractured-vuggy reservoirs, we put forward an integrated approach based on seismic and well data. We divide a carbonate fracture-cave system into four scales for study: micro-scale fracture, meso-scale fracture, macro-scale fracture and cave. Firstly, we analyze anisotropic attributes of prestack azimuth gathers based on multi-scale rock physics forward modeling. We select the frequency attenuation gradient attribute to calculate azimuth anisotropy intensity, and we constrain the result with Formation MicroScanner image data and trial production data to predict the distribution of both micro-scale and meso-scale fracture sets. Then, poststack seismic attributes (variance, curvature and ant algorithms) are used to predict the distribution of macro-scale fractures. We also constrain the results with trial production data for accuracy. Next, the distribution of caves is predicted by the amplitude corresponding to the instantaneous peak frequency of the seismic imaging data. Finally, the meso-scale fracture sets, macro-scale fractures and caves are combined to obtain an integrated result. This integrated approach is applied to a real field in the Tarim Basin in western China for the prediction of fracture-cave reservoirs. The results indicate that this approach can well explain the spatial distribution of carbonate reservoirs. It can solve the problem of non-uniqueness and improve fracture prediction accuracy.
NASA Astrophysics Data System (ADS)
McGowan, D. W.; Horne, J. K.
2016-02-01
As part of the Gulf of Alaska (GOA) Integrated Ecosystem Research Program (GOAIERP), scale-dependent relationships of capelin (Mallotus villosus) densities were quantified as a function of oceanographic gradients, zooplankton prey fields, predators, and a potential competitor (age-0 walleye pollock, Gadus chalcogrammus). Within GOA food webs, capelin occupy an intermediate trophic position as planktivores where they function as both predator and prey; facilitating energy transfer from secondary producers to higher trophic level piscivores. Variability in the distribution of capelin in the GOA has previously been attributed to physical and biological processes that operate across a range of spatial and temporal scales. Capelin distributions were quantified during an acoustic-trawl survey conducted in the summer and fall of 2011 and 2013. Densities were significantly higher in 2013 and primarily concentrated along the edges of shallow submarine banks over the continental shelf east of Kodiak in both years. Wavelet analysis was used to identify scales that maximize variability in capelin distributions. Wavelets decompose a data series in the frequency domain to identify periods that account for variance in the series while retaining nonstationary features that may be biologically meaningful. Variance peaks in capelin densities were identified along most transects at fine- (0.44 to 0.72 km) and coarse- (32.6 to 52.9 km) scales, likely corresponding to aggregation size and the width of banks. Linear and nonlinear models were used to identify factors and interactions that influence capelin distributions at the scale of a predator-prey interaction and at coarser environmental scales. Cross-wavelets measured coherence between capelin and individual factors across a range of scales. Preliminary results indicate that capelin distributions may be influenced by intrusions of cold, nutrient-rich water from the GOA basin on to the shelf.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poidevin, Frédérick; Ade, Peter A. R.; Hargrave, Peter C.
2014-08-10
Turbulence and magnetic fields are expected to be important for regulating molecular cloud formation and evolution. However, their effects on sub-parsec to 100 parsec scales, leading to the formation of starless cores, are not well understood. We investigate the prestellar core structure morphologies obtained from analysis of the Herschel-SPIRE 350 μm maps of the Lupus I cloud. This distribution is first compared on a statistical basis to the large-scale shape of the main filament. We find the distribution of the elongation position angle of the cores to be consistent with a random distribution, which means no specific orientation of the morphology of the cores is observed with respect to the mean orientation of the large-scale filament in Lupus I, nor relative to a large-scale bent filament model. This distribution is also compared to the mean orientation of the large-scale magnetic fields probed at 350 μm with the Balloon-borne Large Aperture Telescope for Polarimetry during its 2010 campaign. Here again we do not find any correlation between the core morphology distribution and the average orientation of the magnetic fields on parsec scales. Our main conclusion is that the local filament dynamics—including secondary filaments that often run orthogonally to the primary filament—and possibly small-scale variations in the local magnetic field direction, could be the dominant factors for explaining the final orientation of each core.
Assessing the Spatial Scale Effect of Anthropogenic Factors on Species Distribution
Mangiacotti, Marco; Scali, Stefano; Sacchi, Roberto; Bassu, Lara; Nulchis, Valeria; Corti, Claudia
2013-01-01
Patch context is a way to describe the effect that the surroundings exert on a landscape patch. Although anthropogenic context alteration may affect species distributions by reducing the accessibility of suitable patches, species distribution modelling has rarely accounted for its effects explicitly. We propose a general framework to statistically detect the occurrence and the extent of such a factor, by combining presence-only data, spatial distribution models and information-theoretic model selection procedures. After having established the spatial resolution of the analysis on the basis of the species characteristics, a measure of anthropogenic alteration that can be quantified at increasing distance from each patch has to be defined. Then the distribution of the species is modelled under competing hypotheses: H0 assumes that the distribution is uninfluenced by the anthropogenic variables; H1 assumes an effect of alteration at the species scale (resolution); and H2, H3 … Hn add the effect of context alteration at increasing radii. Models are compared using the Akaike Information Criterion to establish the best hypothesis, and consequently the occurrence (if any) and the spatial scale of the anthropogenic effect. As a study case we analysed the distribution data of two insular lizards (one endemic and one naturalised) using four alternative hypotheses: no alteration (H0), alteration at the species scale (H1), and alteration at two context scales (H2 and H3). H2 and H3 performed better than H0 and H1, highlighting the importance of context alteration. H2 performed better than H3, setting the spatial scale of the context at 1 km. The two species respond differently to context alteration, the introduced lizard being more tolerant than the endemic one. The proposed approach supplies reliable and interpretable results, uses easily available data on species distribution, and allows assessing the spatial scale at which human disturbance produces the heaviest effects. PMID:23825669
Performance of distributed multiscale simulations
Borgdorff, J.; Ben Belgacem, M.; Bona-Casas, C.; Fazendeiro, L.; Groen, D.; Hoenen, O.; Mizeranschi, A.; Suter, J. L.; Coster, D.; Coveney, P. V.; Dubitzky, W.; Hoekstra, A. G.; Strand, P.; Chopard, B.
2014-01-01
Multiscale simulations model phenomena across natural scales using monolithic or component-based code, running on local or distributed resources. In this work, we investigate the performance of distributed multiscale computing of component-based models, guided by six multiscale applications with different characteristics and from several disciplines. Three modes of distributed multiscale computing are identified: supplementing local dependencies with large-scale resources, load distribution over multiple resources, and load balancing of small- and large-scale resources. We find that the first mode has the apparent benefit of increasing simulation speed, and the second mode can increase simulation speed if local resources are limited. Depending on resource reservation and model coupling topology, the third mode may result in a reduction of resource consumption. PMID:24982258
He, Hui; Fan, Guotao; Ye, Jianwei; Zhang, Weizhe
2013-01-01
It is of great significance to research early warning systems for large-scale network security incidents. Such a system can improve the network's emergency response capabilities, alleviate the damage caused by cyber attacks, and strengthen the system's counterattack ability. A comprehensive early warning system is presented in this paper, which combines active measurement and anomaly detection. The key visualization algorithms and technology of the system are mainly discussed. Plane visualization of the large-scale network system is realized based on a divide-and-conquer approach. First, the topology of the large-scale network is divided into a number of small-scale networks by the MLkP/CR algorithm. Second, the subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks' topologies are combined into an overall topology based on an automatic distribution algorithm using force analysis. As the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale network topology plane visualization and distribution problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topologies.
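A rough sketch of the divide-and-conquer layout idea is given below, using NetworkX with greedy modularity communities as a stand-in for the MLkP/CR partitioning and a spring (force-directed) layout as a stand-in for the force-analysis distribution step; these substitutions do not reflect the paper's actual algorithms.

    import networkx as nx

    def divide_and_layout(G):
        # Partition the topology, lay out each small-scale subgraph separately,
        # then place the pieces side by side to form the full plane visualization.
        communities = nx.algorithms.community.greedy_modularity_communities(G)
        pos = {}
        for i, nodes in enumerate(communities):
            sub_pos = nx.spring_layout(G.subgraph(nodes), seed=0)
            for node, (x, y) in sub_pos.items():
                pos[node] = (x + 3.0 * i, y)   # simple horizontal offset between communities
        return pos

    G = nx.barabasi_albert_graph(300, 2, seed=1)
    print(len(divide_and_layout(G)))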
A new method for detecting, quantifying and monitoring diffuse contamination
NASA Astrophysics Data System (ADS)
Fabian, Karl; Reimann, Clemens; de Caritat, Patrice
2017-04-01
A new method is presented for detecting and quantifying diffuse contamination at the regional to continental scale. It is based on the analysis of cumulative distribution functions (CDFs) in cumulative probability (CP) plots for spatially representative datasets, preferably containing >1000 samples. Simulations demonstrate how different types of contamination influence elemental CDFs of different sample media. Contrary to common belief, diffuse contamination does not result in exceedingly high element concentrations in regional- to continental-scale datasets. Instead it produces a distinctive shift of concentrations in the background distribution of the studied element, resulting in a steeper data distribution in the CP plot. Via either (1) comparing the distribution of an element in top soil samples to the distribution of the same element in bottom soil samples from the same area, taking soil forming processes into consideration, or (2) comparing the distribution of the contaminating element (e.g., Pb) to that of an element with a geochemically comparable behaviour but no contamination source (e.g., Rb or Ba in case of Pb), the relative impact of diffuse contamination on the element concentration can be estimated either graphically in the CP plot via a best fit estimate or quantitatively via a Kolmogorov-Smirnov or Cramér-von Mises test. This is demonstrated using continental-scale geochemical soil datasets from Europe, Australia, and the USA, and a regional scale dataset from Norway. Several different datasets from Europe deliver comparable results at regional to continental scales. The method is also suitable for monitoring diffuse contamination based on the statistical distribution of repeat datasets at the continental scale in a cost-effective manner.
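The quantitative comparison step can be illustrated with SciPy's two-sample Kolmogorov-Smirnov and Cramér-von Mises tests on synthetic top-soil and bottom-soil concentrations (the lognormal background and the small multiplicative shift are assumptions for illustration; cramervonmises_2samp requires a reasonably recent SciPy).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    bottom = rng.lognormal(mean=2.0, sigma=0.5, size=1500)           # background (bottom soil)
    top = bottom * rng.lognormal(mean=0.15, sigma=0.10, size=1500)   # background plus diffuse input

    print(stats.ks_2samp(top, bottom))
    print(stats.cramervonmises_2samp(top, bottom))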
Deng, Li Ping; Bai, Xue Jiao; Qin, Sheng Jin; Wei, Ya Wei; Zhou, Yong Bin; Li, Lu Lu; Niu, Sha Sha; Han, Mei Na
2016-07-01
Taking secondary forest in the montane region of eastern Liaoning Province as the research object, this paper analyzed the spatial distribution and scale effect of the Gleason richness index, Simpson dominance index, Shannon diversity index and Pielou evenness index in a 4 hm² plot. The results showed that the spatial distributions of the four diversity indices had high spatial heterogeneity. The variance of the four diversity indices varied with increasing scale. The coefficients of variation of the four diversity indices decreased with increasing scale. The four diversity indices of the tree layer were higher than those of the shrub layer, and their variation tendency changed with increasing scale. The results indicated that sampling scale should be taken into account when studying species diversity in the montane region of eastern Liaoning Province.
Probing the statistics of primordial fluctuations and their evolution
NASA Technical Reports Server (NTRS)
Gaztanaga, Enrique; Yokoyama, Jun'ichi
1993-01-01
The statistical distribution of fluctuations on various scales is analyzed in terms of the counts in cells of smoothed density fields, using volume-limited samples of galaxy redshift catalogs. It is shown that the distribution on large scales, with volume average of the two-point correlation function of the smoothed field less than about 0.05, is consistent with Gaussian. Statistics are shown to agree remarkably well with the negative binomial distribution, which has hierarchical correlations and a Gaussian behavior at large scales. If these observed properties correspond to the matter distribution, they suggest that our universe started with Gaussian fluctuations and evolved keeping hierarchical form.
Distributions of the Kullback-Leibler divergence with applications.
Belov, Dmitry I; Armstrong, Ronald D
2011-05-01
The Kullback-Leibler divergence (KLD) is a widely used method for measuring the fit of two distributions. In general, the distribution of the KLD is unknown. Under reasonable assumptions, common in psychometrics, the distribution of the KLD is shown to be asymptotically distributed as a scaled (non-central) chi-square with one degree of freedom or a scaled (doubly non-central) F. Applications of the KLD for detecting heterogeneous response data are discussed with particular emphasis on test security. © The British Psychological Society.
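For completeness, a direct computation of the KLD between two discrete distributions is shown below; the probabilities are made up, and the sketch does not reproduce the paper's asymptotic chi-square or F approximations.

    import numpy as np

    def kld(p, q):
        # Kullback-Leibler divergence D(p || q) for discrete distributions on the same support.
        p = np.asarray(p, dtype=float)
        q = np.asarray(q, dtype=float)
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

    p = [0.20, 0.50, 0.30]   # e.g., response distribution under one model
    q = [0.25, 0.45, 0.30]   # e.g., response distribution under another
    print(kld(p, q))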
Assessing Young Children's Moral Development: A Standardized and Objective Scale.
ERIC Educational Resources Information Center
Enright, Robert D.; And Others
A paired-comparisons measure of distributive justice development, the Distributive Justice Scale (DJS), was developed and validated in four studies. Pictures were drawn to represent the different stages of distributive justice for a given dilemma and the DJS was scored by selecting the child's preferred stage via the picture comparisons for each…
Adam E. Duerr; Tricia A. Miller; Kerri L. Cornell Duerr; Michael J. Lanzone; Amy Fesnock; Todd E. Katzner
2015-01-01
Anthropogenic development has great potential to affect fragile desert environments. Large-scale development of renewable energy infrastructure is planned for many desert ecosystems. Development plans should account for anthropogenic effects on the distributions and abundance of rare or sensitive wildlife; however, baseline data on abundance and distribution of such...
NASA Technical Reports Server (NTRS)
Pendergraft, O. C., Jr.
1979-01-01
Static pressure coefficient distributions on the forebody, afterbody, and nozzles of a 1/12 scale F-15 propulsion model were determined. The effects of nozzle power setting and horizontal tail deflection angle on the pressure coefficient distributions were investigated.
ERIC Educational Resources Information Center
Turner, Henry J.
2014-01-01
This dissertation of practice utilized a multiple case-study approach to examine distributed leadership within five school districts that were attempting to gain acceptance of a large-scale 1:1 technology initiative. Using frame theory and distributed leadership theory as theoretical frameworks, this study interviewed each district's…
ERIC Educational Resources Information Center
Brown, Sara Ellen
2017-01-01
The purpose of this study was to conduct an exploratory factor analysis within the community college sector using the Distributed Leadership Scale. The instrument, which was designed from the original Distributed Leadership Readiness Scale(c) (CSDE, 2004), measured readiness and engagement in shared leadership practices within the community…
SAMPLING DISTRIBUTIONS OF ERROR IN MULTIDIMENSIONAL SCALING.
ERIC Educational Resources Information Center
STAKE, ROBERT E.; AND OTHERS
An empirical study was made of the error factors in multidimensional scaling (MDS) to refine the use of MDS for more expert manipulation of scales used in educational measurement. The purpose of the research was to generate tables of the sampling distributions that are necessary for discriminating between error and nonerror MDS dimensions. The…
Mapping the distribution of the denitrifier community at large scales (Invited)
NASA Astrophysics Data System (ADS)
Philippot, L.; Bru, D.; Ramette, A.; Dequiedt, S.; Ranjard, L.; Jolivet, C.; Arrouays, D.
2010-12-01
Little information is available regarding the landscape-scale distribution of microbial communities and its environmental determinants. Here we combined molecular approaches and geostatistical modeling to explore spatial patterns of the denitrifying community at large scales. The distribution of the denitrifying community was investigated over 107 sites in Burgundy, a 31 500 km² region of France, using a 16 × 16 km sampling grid. At each sampling site, the abundances of denitrifiers and 42 soil physico-chemical properties were measured. The relative contributions of land use, spatial distance, climatic conditions, time and soil physico-chemical properties to the denitrifier spatial distribution were analyzed by canonical variation partitioning. Our results indicate that 43% to 85% of the spatial variation in community abundances could be explained by the measured environmental parameters, with soil chemical properties (mostly pH) being the main driver. We found spatial autocorrelation up to 740 km and used geostatistical modelling to generate predictive maps of the distribution of denitrifiers at the landscape scale. Studying the distribution of denitrifiers at large scale can help close the artificial gap between the investigation of microbial processes and microbial community ecology, thereby facilitating our understanding of the relationships between the ecology of denitrifiers and N-fluxes by denitrification.
Fine Scale Baleen Whale Behavior Observed Via Tagging Over Daily Time Scales
2015-09-30
… followed over time scales of days from an oceanographic vessel so that environmental sampling can be conducted in proximity to the tagged whale … characterize the relationship between diel variability in the foraging behavior of baleen whales (North Atlantic right whales and sei whales) and the …
The Microphysical Structure of Extreme Precipitation as Inferred from Ground-Based Raindrop Spectra.
NASA Astrophysics Data System (ADS)
Uijlenhoet, Remko; Smith, James A.; Steiner, Matthias
2003-05-01
The controls on the variability of raindrop size distributions in extreme rainfall and the associated radar reflectivity-rain rate relationships are studied using a scaling-law formalism for the description of raindrop size distributions and their properties. This scaling-law formalism enables a separation of the effects of changes in the scale of the raindrop size distribution from those in its shape. Parameters controlling the scale and shape of the scaled raindrop size distribution may be related to the microphysical processes generating extreme rainfall. A global scaling analysis of raindrop size distributions corresponding to rain rates exceeding 100 mm h⁻¹, collected during the 1950s with the Illinois State Water Survey raindrop camera in Miami, Florida, reveals that extreme rain rates tend to be associated with conditions in which the variability of the raindrop size distribution is strongly number controlled (i.e., characteristic drop sizes are roughly constant). This means that changes in properties of raindrop size distributions in extreme rainfall are largely produced by varying raindrop concentrations. As a result, rainfall integral variables (such as radar reflectivity and rain rate) are roughly proportional to each other, which is consistent with the concept of the so-called equilibrium raindrop size distribution and has profound implications for radar measurement of extreme rainfall. A time series analysis for two contrasting extreme rainfall events supports the hypothesis that the variability of raindrop size distributions for extreme rain rates is strongly number controlled. However, this analysis also reveals that the actual shapes of the (measured and scaled) spectra may differ significantly from storm to storm. This implies that the exponents of power-law radar reflectivity-rain rate relationships may be similar, and close to unity, for different extreme rainfall events, but their prefactors may differ substantially. Consequently, there is no unique radar reflectivity-rain rate relationship for extreme rain rates, but the variability is essentially reduced to one free parameter (i.e., the prefactor). It is suggested that this free parameter may be estimated on the basis of differential reflectivity measurements in extreme rainfall.
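To make the link between drop spectra and the radar reflectivity-rain rate pair concrete, the sketch below computes Z and R as moments of a measured spectrum, using the common empirical fall-speed law v(D) ≈ 3.78 D^0.67; the diameter classes are invented and the constants are textbook approximations rather than the authors' processing.

    import numpy as np

    def z_and_r(diameters_mm, counts_per_m3):
        # Z (mm^6 m^-3) is the 6th moment of the spectrum; R (mm/h) is the flux-weighted 3rd moment.
        D = np.asarray(diameters_mm, dtype=float)
        N = np.asarray(counts_per_m3, dtype=float)      # drop concentration in each diameter class
        Z = np.sum(N * D**6)
        v = 3.78 * D**0.67                              # fall speed in m/s (empirical approximation)
        R = 6e-4 * np.pi * np.sum(N * D**3 * v)         # rain rate in mm/h
        return Z, R

    print(z_and_r([0.5, 1.0, 2.0, 3.0], [800.0, 300.0, 50.0, 5.0]))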
Power Scaling and Seasonal Evolution of Floe Areas in the Arctic East Siberian Sea
NASA Astrophysics Data System (ADS)
Barton, C. C.; Geise, G. R.; Tebbens, S. F.
2016-12-01
The size distribution of floes and its evolution during the Arctic summer season, together with a model of fragmentation that generates a power-law scaling distribution of fragment sizes, are the subject of this paper. This topic is of relevance to marine vessels that encounter floes, to the calculation of sea ice albedo, to the determination of Arctic heat exchange, which is strongly influenced by ice concentrations and the amount of open water between floes, and to photosynthetic marine organisms which are dependent upon sunlight penetrating the spaces between floes. Floes are 2-3 m thick and initially range in area from one to millions of square meters. The cumulative number versus floe area distribution of seasonal sea floes from six satellite images of the Arctic Ocean during the summer breakup and melting is well fit by two scale-invariant power-law scaling regimes for floe areas ranging from 30 m² to 28,400,000 m². Scaling exponents, B, for larger floe areas range from -0.6 to -1.0 with an average of -0.8. Scaling exponents, B, for smaller floe areas range from -0.3 to -0.6 with an average of -0.5. The inflection point between the two scaling regimes ranges from 283 × 10² m² to 4850 × 10² m² and generally moves from larger to smaller floe areas through the summer melting season. We observe that the two scaling regimes and the inflection between them are established during the initial breakup of sea ice solely by the process of fracture. The distributions of floe size regimes retain their scaling exponents as the floe pack evolves from larger to smaller floe areas from the initial breakup through the summer season, due to grinding, crushing, fracture, and melting. The scaling exponents for the floe area distribution are in the same range as those reported in previous studies of Arctic floes and as the single scaling exponents found for crushed and ground geologic materials including streambed gravel, lunar debris, and artificially crushed quartz. A probabilistic fragmentation model that produces a power-law distribution of particle sizes has been developed and will be presented.
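A minimal sketch of estimating a scaling exponent B from a cumulative number versus area distribution within one regime is given below; the synthetic Pareto-like floe areas and the chosen fitting window are assumptions for illustration.

    import numpy as np

    def powerlaw_exponent(areas, a_min, a_max):
        # Least-squares slope B of log10 N(>=A) versus log10 A inside one scaling regime.
        A = np.sort(np.asarray(areas, dtype=float))
        N_ge = len(A) - np.arange(len(A))          # cumulative number of floes with area >= A
        sel = (A >= a_min) & (A <= a_max)
        return np.polyfit(np.log10(A[sel]), np.log10(N_ge[sel]), 1)[0]

    rng = np.random.default_rng(5)
    areas = (1.0 - rng.random(20_000)) ** (-1.0 / 0.8)   # synthetic areas with tail exponent 0.8
    print(powerlaw_exponent(areas, 10.0, 1e4))           # slope should come out close to -0.8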
Scaling range sizes to threats for robust predictions of risks to biodiversity.
Keith, David A; Akçakaya, H Resit; Murray, Nicholas J
2018-04-01
Assessments of risk to biodiversity often rely on spatial distributions of species and ecosystems. Range-size metrics used extensively in these assessments, such as area of occupancy (AOO), are sensitive to measurement scale, prompting proposals to measure them at finer scales or at different scales based on the shape of the distribution or ecological characteristics of the biota. Despite its dominant role in red-list assessments for decades, appropriate spatial scales of AOO for predicting risks of species' extinction or ecosystem collapse remain untested and contentious. There are no quantitative evaluations of the scale-sensitivity of AOO as a predictor of risks, the relationship between optimal AOO scale and threat scale, or the effect of grid uncertainty. We used stochastic simulation models to explore risks to ecosystems and species with clustered, dispersed, and linear distribution patterns subject to regimes of threat events with different frequency and spatial extent. Area of occupancy was an accurate predictor of risk (0.81<|r|<0.98) and performed optimally when measured with grid cells 0.1-1.0 times the largest plausible area threatened by an event. Contrary to previous assertions, estimates of AOO at these relatively coarse scales were better predictors of risk than finer-scale estimates of AOO (e.g., when measurement cells are <1% of the area of the largest threat). The optimal scale depended on the spatial scales of threats more than the shape or size of biotic distributions. Although we found appreciable potential for grid-measurement errors, current IUCN guidelines for estimating AOO neutralize geometric uncertainty and incorporate effective scaling procedures for assessing risks posed by landscape-scale threats to species and ecosystems. © 2017 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
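The scale sensitivity of AOO discussed above comes simply from counting occupied grid cells at a chosen cell size; the sketch below, with synthetic clustered occurrences, shows how the estimate grows with the measurement grid.

    import numpy as np

    def aoo(points, cell_size):
        # Area of occupancy: number of occupied square grid cells times the cell area.
        cells = {tuple(np.floor(p / cell_size).astype(int)) for p in np.asarray(points, dtype=float)}
        return len(cells) * cell_size**2

    rng = np.random.default_rng(6)
    occurrences = rng.normal(size=(500, 2)) * 10.0       # synthetic clustered occurrences (km)
    for size in (0.5, 2.0, 10.0):
        print(size, aoo(occurrences, size))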
Emergence of scale-free close-knit friendship structure in online social networks.
Cui, Ai-Xiang; Zhang, Zi-Ke; Tang, Ming; Hui, Pak Ming; Fu, Yan
2012-01-01
Although the structural properties of online social networks have attracted much attention, the properties of the close-knit friendship structures remain an important question. Here, we mainly focus on how these mesoscale structures are affected by the local and global structural properties. Analyzing the data of four large-scale online social networks reveals several common structural properties. It is found that not only the local structures given by the indegree, outdegree, and reciprocal degree distributions follow a similar scaling behavior, the mesoscale structures represented by the distributions of close-knit friendship structures also exhibit a similar scaling law. The degree correlation is very weak over a wide range of the degrees. We propose a simple directed network model that captures the observed properties. The model incorporates two mechanisms: reciprocation and preferential attachment. Through rate equation analysis of our model, the local-scale and mesoscale structural properties are derived. In the local-scale, the same scaling behavior of indegree and outdegree distributions stems from indegree and outdegree of nodes both growing as the same function of the introduction time, and the reciprocal degree distribution also shows the same power-law due to the linear relationship between the reciprocal degree and in/outdegree of nodes. In the mesoscale, the distributions of four closed triples representing close-knit friendship structures are found to exhibit identical power-laws, a behavior attributed to the negligible degree correlations. Intriguingly, all the power-law exponents of the distributions in the local-scale and mesoscale depend only on one global parameter, the mean in/outdegree, while both the mean in/outdegree and the reciprocity together determine the ratio of the reciprocal degree of a node to its in/outdegree. Structural properties of numerical simulated networks are analyzed and compared with each of the four real networks. This work helps understand the interplay between structures on different scales in online social networks.
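A stripped-down simulation of the two mechanisms named above (reciprocation and in-degree preferential attachment) could look as follows; the growth rule, smoothing constant and parameter values are simplifications for illustration rather than the paper's model.

    import numpy as np

    def grow_directed_network(n_nodes=2000, m=3, r=0.4, seed=7):
        # Each new node sends m links chosen by in-degree preferential attachment;
        # every link is reciprocated with probability r.
        rng = np.random.default_rng(seed)
        indeg = np.ones(n_nodes)                     # +1 smoothing so early nodes can be picked
        edges = []
        for new in range(m, n_nodes):
            probs = indeg[:new] / indeg[:new].sum()
            targets = rng.choice(new, size=m, replace=False, p=probs)
            for t in targets:
                edges.append((new, t))
                indeg[t] += 1
                if rng.random() < r:                 # reciprocation
                    edges.append((t, new))
                    indeg[new] += 1
        return edges

    print(len(grow_directed_network()))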
Emergence of Scale-Free Close-Knit Friendship Structure in Online Social Networks
Cui, Ai-Xiang; Zhang, Zi-Ke; Tang, Ming; Hui, Pak Ming; Fu, Yan
2012-01-01
Although the structural properties of online social networks have attracted much attention, the properties of the close-knit friendship structures remain an important question. Here, we mainly focus on how these mesoscale structures are affected by the local and global structural properties. Analyzing the data of four large-scale online social networks reveals several common structural properties. It is found that not only do the local structures given by the indegree, outdegree, and reciprocal degree distributions follow a similar scaling behavior, but the mesoscale structures represented by the distributions of close-knit friendship structures also exhibit a similar scaling law. The degree correlation is very weak over a wide range of the degrees. We propose a simple directed network model that captures the observed properties. The model incorporates two mechanisms: reciprocation and preferential attachment. Through rate equation analysis of our model, the local-scale and mesoscale structural properties are derived. At the local scale, the same scaling behavior of the indegree and outdegree distributions stems from the indegree and outdegree of nodes both growing as the same function of the introduction time, and the reciprocal degree distribution also shows the same power law due to the linear relationship between the reciprocal degree and the in/outdegree of nodes. At the mesoscale, the distributions of four closed triples representing close-knit friendship structures are found to exhibit identical power laws, a behavior attributed to the negligible degree correlations. Intriguingly, all the power-law exponents of the distributions at the local scale and mesoscale depend only on one global parameter, the mean in/outdegree, while both the mean in/outdegree and the reciprocity together determine the ratio of the reciprocal degree of a node to its in/outdegree. Structural properties of numerically simulated networks are analyzed and compared with each of the four real networks. This work helps in understanding the interplay between structures on different scales in online social networks. PMID:23272067
Non-Gaussian Nature of Fracture and the Survival of Fat-Tail Exponents
NASA Astrophysics Data System (ADS)
Tallakstad, Ken Tore; Toussaint, Renaud; Santucci, Stephane; Måløy, Knut Jørgen
2013-04-01
We study the fluctuations of the global velocity Vl(t), computed at various length scales l, during the intermittent mode-I propagation of a crack front. The statistics converge to a non-Gaussian distribution, with an asymmetric shape and a fat tail. This breakdown of the central limit theorem (CLT) is due to the diverging variance of the underlying local crack front velocity distribution, displaying a power law tail. Indeed, by the application of a generalized CLT, the full shape of our experimental velocity distribution at large scale is shown to follow the stable Lévy distribution, which preserves the power law tail exponent under upscaling. This study aims to demonstrate in general for crackling noise systems how one can infer the complete scale dependence of the activity—and extreme event distributions—by measuring only at a global scale.
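The tail-preservation property invoked in this abstract can be checked directly with SciPy's alpha-stable sampler: block sums of alpha-stable variables, rescaled by block^(1/alpha), reproduce the original distribution, so the power-law tail exponent survives coarse-graining. The sketch below is a minimal illustration, not the authors' analysis; it uses a symmetric stable law and alpha = 1.6 purely as assumptions (the experimental distribution is asymmetric, and this alpha is not taken from the paper).

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 1.6                         # illustrative tail index, 1 < alpha < 2 (infinite variance)
n, block = 1_000_000, 100

# "Local velocities": symmetric alpha-stable samples (beta = 0), unit scale
v = stats.levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)

# Coarse-grained velocity: block sums rescaled by block**(1/alpha)
vb = v.reshape(-1, block).sum(axis=1) / block ** (1.0 / alpha)

# Stability: quantiles of the rescaled coarse-grained velocity match the local ones,
# so the power-law tail exponent alpha is preserved under upscaling
for q in (0.50, 0.90, 0.99):
    print(f"q={q:.2f}: local {np.quantile(v, q):8.2f}   coarse-grained {np.quantile(vb, q):8.2f}")

Under these assumptions the two columns agree within sampling noise at every quantile, which is the upscaling invariance the generalized CLT predicts.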
Spatial analysis of cities using Renyi entropy and fractal parameters
NASA Astrophysics Data System (ADS)
Chen, Yanguang; Feng, Jian
2017-12-01
The spatial distributions of cities fall into two groups: one is the simple distribution with a characteristic scale (e.g. exponential distribution), and the other is the complex distribution without a characteristic scale (e.g. power-law distribution). The latter belongs to scale-free distributions, which can be modeled with fractal geometry. However, fractal dimension is not suitable for the former distribution. In contrast, spatial entropy can be used to measure any type of urban distribution. This paper is devoted to generalizing multifractal parameters by means of the dual relation between Euclidean and fractal geometries. The main method is mathematical derivation and empirical analysis, and the theoretical foundation is the discovery that the normalized fractal dimension is equal to the normalized entropy. Based on this finding, a set of useful spatial indexes termed dummy multifractal parameters are defined for geographical analysis. These indexes can be employed to describe both simple and complex distributions. The dummy multifractal indexes are applied to the population density distribution of Hangzhou city, China. The calculation results reveal features of the spatio-temporal evolution of Hangzhou's urban morphology. This study indicates that fractal dimension and spatial entropy can be combined to produce a new methodology for spatial analysis of city development.
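As a concrete reference for the entropy side of this argument, the Rényi entropy of order q for a gridded density p_i is H_q = (1/(1-q)) ln Σ p_i^q, with the Shannon entropy recovered as q → 1. The sketch below is a generic calculation, not the paper's dummy multifractal indexes; the random test density on a 64 × 64 grid is purely illustrative.

import numpy as np

def renyi_entropy(p, q):
    """Renyi entropy H_q of a discrete probability distribution p (natural log)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    p = p / p.sum()
    if np.isclose(q, 1.0):            # q -> 1 limit: Shannon entropy
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** q)) / (1.0 - q)

# Illustrative "urban density" on a 64 x 64 grid of cells
rng = np.random.default_rng(1)
density = rng.lognormal(mean=0.0, sigma=1.0, size=(64, 64))
p = (density / density.sum()).ravel()

for q in (0.0, 1.0, 2.0):
    h = renyi_entropy(p, q)
    print(f"q = {q:.0f}: H_q = {h:.3f} nats (maximum ln N = {np.log(p.size):.3f})")

Dividing H_q by ln N gives a normalized entropy in [0, 1], which is the kind of quantity the abstract equates with a normalized fractal dimension.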
Scaling in the distribution of intertrade durations of Chinese stocks
NASA Astrophysics Data System (ADS)
Jiang, Zhi-Qiang; Chen, Wei; Zhou, Wei-Xing
2008-10-01
The distribution of intertrade durations, defined as the waiting times between two consecutive transactions, is investigated based upon the limit order book data of 23 liquid Chinese stocks listed on the Shenzhen Stock Exchange in the whole year 2003. A scaling pattern is observed in the distributions of intertrade durations, where the empirical density functions of the normalized intertrade durations of all 23 stocks collapse onto a single curve. The scaling pattern is also observed in the intertrade duration distributions for filled and partially filled trades and in the conditional distributions. The ensemble distributions for all stocks are modeled by the Weibull and the Tsallis q-exponential distributions. Maximum likelihood estimation shows that the Weibull distribution outperforms the q-exponential for not-too-large intertrade durations, which account for more than 98.5% of the data. Alternatively, nonlinear least-squares estimation selects the q-exponential as the better model, in which the optimization is conducted on the distance between empirical and theoretical values of the logarithmic probability densities. Overall, the distribution of intertrade durations is well described by a Weibull body followed by a power-law tail with an asymptotic tail exponent close to 3.
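To reproduce the flavor of this model comparison, the sketch below fits a Weibull distribution to normalized durations by maximum likelihood with SciPy and compares its log-likelihood against a hand-coded Tsallis q-exponential fitted by numerical optimization. It is a generic illustration on synthetic data, not the authors' estimation on the Shenzhen order-book records; the q-exponential density used, f(t) = (2-q) λ [1 - (1-q) λ t]^{1/(1-q)}, is the standard Tsallis form assumed valid for 1 < q < 2.

import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
# Synthetic normalized intertrade durations (stand-in for real data)
durations = rng.weibull(0.7, size=20_000)

# Weibull MLE with the location fixed at zero
c, loc, scale = stats.weibull_min.fit(durations, floc=0)
ll_weibull = np.sum(stats.weibull_min.logpdf(durations, c, loc, scale))

def neg_ll_qexp(params, t):
    """Negative log-likelihood of the Tsallis q-exponential, valid for 1 < q < 2."""
    q, lam = params
    if not (1.0 < q < 2.0 and lam > 0):
        return np.inf
    arg = 1.0 - (1.0 - q) * lam * t          # = 1 + (q-1)*lam*t > 0 for t >= 0
    logpdf = np.log(2.0 - q) + np.log(lam) + np.log(arg) / (1.0 - q)
    return -np.sum(logpdf)

res = optimize.minimize(neg_ll_qexp, x0=[1.3, 1.0], args=(durations,), method="Nelder-Mead")
print(f"Weibull:       shape={c:.3f}, scale={scale:.3f}, logL={ll_weibull:.1f}")
print(f"q-exponential: q={res.x[0]:.3f}, lambda={res.x[1]:.3f}, logL={-res.fun:.1f}")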
Regular Topologies for Gigabit Wide-Area Networks. Volume 1
NASA Technical Reports Server (NTRS)
Shacham, Nachum; Denny, Barbara A.; Lee, Diane S.; Khan, Irfan H.; Lee, Danny Y. C.; McKenney, Paul
1994-01-01
In general terms, this project aimed at the analysis and design of techniques for very high-speed networking. The formal objectives of the project were to: (1) Identify switch and network technologies for wide-area networks that interconnect a large number of users and can provide individual data paths at gigabit/s rates; (2) Quantitatively evaluate and compare existing and proposed architectures and protocols, identify their strengths and growth potential, and ascertain the compatibility of competing technologies; and (3) Propose new approaches to existing architectures and protocols, and identify opportunities for research to overcome deficiencies and enhance performance. The project was organized into two parts: 1. The design, analysis, and specification of techniques and protocols for very-high-speed network environments. In this part, SRI has focused on several key high-speed networking areas, including Forward Error Control (FEC) for high-speed networks in which data distortion is the result of packet loss, and the distribution of broadband, real-time traffic in multiple user sessions. 2. Congestion Avoidance Testbed Experiment (CATE). This part of the project was done within the framework of the DARTnet experimental T1 national network. The aim of the work was to advance the state of the art in benchmarking DARTnet's performance and traffic control by developing support tools for network experimentation, by designing benchmarks that allow various algorithms to be meaningfully compared, and by investigating new queueing techniques that better satisfy the needs of best-effort and reserved-resource traffic. This document is the final technical report describing the results obtained by SRI under this project. The report consists of three volumes: Volume 1 contains a technical description of the network techniques developed by SRI in the areas of FEC and multicast of real-time traffic. Volume 2 describes the work performed under CATE. Volume 3 contains the source code of all software developed under CATE.
A distribution model for the aerial application of granular agricultural particles
NASA Technical Reports Server (NTRS)
Fernandes, S. T.; Ormsbee, A. I.
1978-01-01
A model is developed to predict the shape of the distribution of granular agricultural particles applied by aircraft. The particle is assumed to have a random size and shape, and the model includes the effects of air resistance, distributor geometry, and the aircraft wake. General requirements for maintaining similarity of the distribution in scale-model tests are derived, with particular attention to the problem of a nongeneral drag law. It is shown that if the mean and variance of the particle diameter and density are scaled according to the scaling laws governing the system, the shape of the distribution will be preserved. Distributions are calculated numerically and show the effect of a random initial lateral position, particle size, and drag coefficient. A listing of the computer code is included.
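A minimal trajectory integrator of the kind such a model builds on is sketched below: a single spherical particle released from an aircraft falls under gravity with a quadratic drag law, integrated with explicit Euler steps in NumPy. All parameter values (particle diameter and density, drag coefficient, release height and speed) are illustrative assumptions, and the sketch ignores the distributor geometry and wake effects the abstract mentions.

import numpy as np

# Illustrative parameters (assumptions, not values from the report)
rho_air = 1.2          # air density, kg/m^3
rho_p   = 1500.0       # particle density, kg/m^3
d       = 2e-3         # particle diameter, m
cd      = 0.44         # drag coefficient of a sphere at high Reynolds number
g       = 9.81
m       = rho_p * np.pi * d**3 / 6.0
area    = np.pi * d**2 / 4.0

def trajectory(v0=(50.0, 0.0), h0=10.0, dt=1e-3):
    """Euler integration of a particle with quadratic drag until it hits the ground."""
    pos = np.array([0.0, h0])
    vel = np.array(v0, dtype=float)
    while pos[1] > 0.0:
        speed = np.linalg.norm(vel)
        drag = -0.5 * rho_air * cd * area * speed * vel   # force opposes velocity
        acc = drag / m + np.array([0.0, -g])
        vel += acc * dt
        pos += vel * dt
    return pos[0]

print(f"downrange landing distance ~ {trajectory():.1f} m")

Sampling the particle diameter and density from random distributions and repeating this integration would produce the kind of deposition pattern whose shape the report's scaling laws are meant to preserve.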
Cross-scale assessment of potential habitat shifts in a rapidly changing climate
Jarnevich, Catherine S.; Holcombe, Tracy R.; Bella, Elizabeth S.; Carlson, Matthew L.; Graziano, Gino; Lamb, Melinda; Seefeldt, Steven S.; Morisette, Jeffrey T.
2014-01-01
We assessed the ability of climatic, environmental, and anthropogenic variables to predict areas of high risk for plant invasion and consider the relative importance and contribution of these predictor variables by considering two spatial scales in a region of rapidly changing climate. We created predictive distribution models, using Maxent, for three highly invasive plant species (Canada thistle, white sweetclover, and reed canarygrass) in Alaska at both a regional scale and a local scale. Regional-scale models encompassed southern coastal Alaska and were developed from topographic and climatic data at a 2 km (1.2 mi) spatial resolution. Models were applied to future climate (2030). Local-scale models were spatially nested within the regional area; these models incorporated physiographic and anthropogenic variables at a 30 m (98.4 ft) resolution. Regional and local models performed well (AUC values > 0.7), with the exception of one species at each spatial scale. Regional models predict an increase in area of suitable habitat for all species by 2030 with a general shift to higher elevation areas; however, the distribution of each species was driven by different climate and topographical variables. In contrast, local models indicate that distance to rights-of-way and elevation are associated with habitat suitability for all three species at this spatial level. Combining results from regional models, capturing long-term distribution, and local models, capturing near-term establishment and distribution, offers a new and effective tool for highlighting at-risk areas and provides insight on how variables acting at different scales contribute to suitability predictions. The combination also provides an easy comparison, highlighting agreement between the two scales, where long-term distribution factors predict suitability while near-term factors do not, and vice versa.
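For orientation, the sketch below builds a toy presence-background habitat suitability model. It uses scikit-learn's regularized logistic regression as a stand-in for Maxent (a common approximation, not the software or data the authors used), with hypothetical predictors for elevation, distance to rights-of-way, and a climate variable.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

# Hypothetical predictors for 2,000 locations: elevation (m),
# distance to right-of-way (m), mean summer temperature (deg C)
n = 2000
X = np.column_stack([
    rng.uniform(0, 1500, n),
    rng.uniform(0, 5000, n),
    rng.normal(12, 3, n),
])
# Synthetic "presence" signal: species favors low elevation, roadsides, warmth
logit = 1.5 - 0.002 * X[:, 0] - 0.0006 * X[:, 1] + 0.2 * (X[:, 2] - 12)
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # presence (1) vs background (0)

model = LogisticRegression(C=1.0, max_iter=1000).fit(X, y)
suitability = model.predict_proba(X)[:, 1]
print(f"AUC on training locations: {roc_auc_score(y, suitability):.2f}")

The regional and local models described in the abstract would of course use separate predictor stacks and an independent evaluation set rather than a training-set AUC; the sketch only shows the presence-background modeling pattern.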
Intermittent particle distribution in synthetic free-surface turbulent flows.
Ducasse, Lauris; Pumir, Alain
2008-06-01
Tracer particles on the surface of a turbulent flow have a very intermittent distribution. This preferential concentration effect is studied in a two-dimensional synthetic compressible flow, both in the inertial (self-similar) and in the dissipative (smooth) range of scales, as a function of the compressibility C. The second moment of the concentration coarse-grained over a scale r, n_r^2, behaves as a power law in both the inertial and the dissipative ranges of scale, with two different exponents. The shapes of the probability distribution functions of the coarse-grained density n_r vary as a function of scale r and of compressibility C through the combination C/r^κ (κ ≈ 0.5), corresponding to the compressibility, coarse-grained over a domain of scale r, averaged over Lagrangian trajectories.
Maximum entropy approach to H -theory: Statistical mechanics of hierarchical systems
NASA Astrophysics Data System (ADS)
Vasconcelos, Giovani L.; Salazar, Domingos S. P.; Macêdo, A. M. S.
2018-02-01
A formalism, called H-theory, is applied to the problem of statistical equilibrium of a hierarchical complex system with multiple time and length scales. In this approach, the system is formally treated as being composed of a small subsystem—representing the region where the measurements are made—in contact with a set of "nested heat reservoirs" corresponding to the hierarchical structure of the system, where the temperatures of the reservoirs are allowed to fluctuate owing to the complex interactions between degrees of freedom at different scales. The probability distribution function (pdf) of the temperature of the reservoir at a given scale, conditioned on the temperature of the reservoir at the next largest scale in the hierarchy, is determined from a maximum entropy principle subject to appropriate constraints that describe the thermal equilibrium properties of the system. The marginal temperature distribution of the innermost reservoir is obtained by integrating over the conditional distributions of all larger scales, and the resulting pdf is written in analytical form in terms of certain special transcendental functions, known as the Fox H functions. The distribution of states of the small subsystem is then computed by averaging the quasiequilibrium Boltzmann distribution over the temperature of the innermost reservoir. This distribution can also be written in terms of H functions. The general family of distributions reported here recovers, as particular cases, the stationary distributions recently obtained by Macêdo et al. [Phys. Rev. E 95, 032315 (2017), 10.1103/PhysRevE.95.032315] from a stochastic dynamical approach to the problem.
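The backbone of any such maximum entropy construction is the standard variational result, reproduced here in generic form (this is the textbook statement, not the specific conditional temperature distribution or the Fox H-function result derived in the paper): maximizing the Shannon entropy of a density f subject to normalization and a set of expectation constraints yields an exponential family.

\[
\max_{f}\; S[f] = -\int f(x)\,\ln f(x)\,dx
\quad \text{s.t.} \quad \int f(x)\,dx = 1, \qquad \int g_i(x)\,f(x)\,dx = c_i \;\; (i=1,\dots,m),
\]
\[
\Longrightarrow \quad
f(x) = \frac{1}{Z(\lambda_1,\dots,\lambda_m)}\,\exp\!\Big(-\sum_{i=1}^{m} \lambda_i\, g_i(x)\Big),
\qquad
Z = \int \exp\!\Big(-\sum_{i=1}^{m} \lambda_i\, g_i(x)\Big)\,dx ,
\]

with the Lagrange multipliers λ_i fixed by the constraint values c_i. In the hierarchical construction described in the abstract, this step is applied to the temperature of each reservoir conditioned on the next scale up, with constraints chosen to encode the thermal equilibrium properties at that scale.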
Scale-dependency of effective hydraulic conductivity on fire-affected hillslopes
NASA Astrophysics Data System (ADS)
Langhans, Christoph; Lane, Patrick N. J.; Nyman, Petter; Noske, Philip J.; Cawson, Jane G.; Oono, Akiko; Sheridan, Gary J.
2016-07-01
Effective hydraulic conductivity (Ke) for Hortonian overland flow modeling has been defined as a function of rainfall intensity and runon infiltration, assuming a distribution of saturated hydraulic conductivities (Ks). However, the surface boundary condition during infiltration and its interactions with the distribution of Ks are not well represented in models. As a result, the mean value of the Ks distribution (K̄s), which is the central parameter for Ke, varies between scales. Here we quantify this discrepancy with a large infiltration data set comprising four different methods and scales from fire-affected hillslopes in SE Australia, using a relatively simple yet widely used conceptual model of Ke. Ponded disk (0.002 m2) and ring infiltrometers (0.07 m2) were used at the small scales, and rainfall simulations (3 m2) and small catchments (ca 3000 m2) at the larger scales. We compared K̄s between methods measured at the same time and place. Disk and ring infiltrometer measurements had on average 4.8 times higher values of K̄s than rainfall simulations and catchment-scale estimates. Furthermore, the distribution of Ks was not clearly log-normal and scale-independent, as supposed in the conceptual model. In our interpretation, water repellency and preferential flow paths increase the variance of the measured distribution of Ks and bias ponding toward areas of very low Ks during rainfall simulations and small catchment runoff events, while areas with high preferential flow capacity remain water supply-limited more than the conceptual model of Ke predicts. The study highlights problems in the current theory of scaling runoff generation.
Impact of Utility-Scale Distributed Wind on Transmission-Level System Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brancucci Martinez-Anido, C.; Hodge, B. M.
2014-09-01
This report presents a new renewable integration study that aims to assess the potential for adding distributed wind to the current power system with minimal or no upgrades to the distribution or transmission electricity systems. It investigates the impacts of integrating large amounts of utility-scale distributed wind power on bulk system operations by performing a case study on the power system of the Independent System Operator-New England (ISO-NE).
Silver hake tracks changes in Northwest Atlantic circulation.
Nye, Janet A; Joyce, Terrence M; Kwon, Young-Oh; Link, Jason S
2011-08-02
Recent studies documenting shifts in spatial distribution of many organisms in response to a warming climate highlight the need to understand the mechanisms underlying species distribution at large spatial scales. Here we present one noteworthy example of remote oceanographic processes governing the spatial distribution of adult silver hake, Merluccius bilinearis, a commercially important fish in the Northeast US shelf region. Changes in the spatial distribution of silver hake over the last 40 years are highly correlated with the position of the Gulf Stream. These changes in distribution are a direct response to local changes in bottom temperature on the continental shelf, which are responding to the same large-scale circulation change affecting the Gulf Stream path, namely changes in the Atlantic meridional overturning circulation (AMOC). If the AMOC weakens, as is suggested by global climate models, silver hake distribution will remain in a poleward position, the extent of which could be forecast at both decadal and multidecadal scales.
Model for non-Gaussian intraday stock returns
NASA Astrophysics Data System (ADS)
Gerig, Austin; Vicente, Javier; Fuentes, Miguel A.
2009-12-01
Stock prices are known to exhibit non-Gaussian dynamics, and there is much interest in understanding the origin of this behavior. Here, we present a model that explains the shape and scaling of the distribution of intraday stock price fluctuations (called intraday returns) and verify the model using a large database for several stocks traded on the London Stock Exchange. We provide evidence that the return distribution for these stocks is non-Gaussian and similar in shape and that the distribution appears stable over intraday time scales. We explain these results by assuming the volatility of returns is constant intraday but varies over longer periods such that its inverse square follows a gamma distribution. This produces returns that are Student distributed for intraday time scales. The predicted results show excellent agreement with the data for all stocks in our study and over all regions of the return distribution.
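The key mechanism in this abstract, a normal distribution whose inverse squared volatility is gamma distributed marginalizing to a Student's t, can be checked in a few lines. The sketch below uses synthetic data only; the gamma shape and rate values are arbitrary and not taken from the paper.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
a, b = 3.0, 1.0                      # gamma shape and rate for the precision 1/sigma^2

# Draw one precision per "period", then a Gaussian return conditional on it
precision = rng.gamma(shape=a, scale=1.0 / b, size=100_000)
returns = rng.normal(0.0, 1.0 / np.sqrt(precision))

# Marginally, returns / sqrt(b/a) follow Student's t with 2a degrees of freedom
nu = 2 * a
scaled = returns * np.sqrt(a / b)
for q in (0.95, 0.99):
    print(f"q={q}: empirical {np.quantile(scaled, q):.3f}   Student-t {stats.t.ppf(q, df=nu):.3f}")

The empirical and theoretical quantiles agree within sampling noise, which is the normal-gamma mixture identity the model relies on to produce Student-distributed intraday returns.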
The Large -scale Distribution of Galaxies
NASA Astrophysics Data System (ADS)
Flin, Piotr
A review of the large-scale structure of the Universe is given. A connection is made with the titanic work by Johannes Kepler in many areas of astronomy and cosmology. Special attention is paid to the spatial distribution of galaxies, voids, and walls (the cellular structure of the Universe). Finally, the author concludes that the large-scale structure of the Universe can be observed on a much greater scale than was thought twenty years ago.
Gruwell, Matthew E.; Flarhety, Meghan; Dittmar, Katharina
2012-01-01
It has long been known that armored scale insects harbor endosymbiotic bacteria inside specialized cells called bacteriocytes. Originally, these endosymbionts were thought to be fungal symbionts, but they are now known to be bacterial and have been named Uzinura diaspidicola. Bacteriocyte and endosymbiont distribution patterns within host insects were visualized using in situ hybridization via 16S rRNA-specific probes. Images of scale insect embryos, eggs and adult scale insects show patterns of localized bacteriocytes in embryos and randomly distributed bacteriocytes in adults. The symbiont pocket was not found in the armored scale insect eggs that were tested. The pattern of dispersed bacteriocytes in adult scale insects suggests that Uzinura and Blattabacteria may share some homologous traits that coincide with similar lifestyle requirements, such as dispersal in fat bodies and uric acid recycling. PMID:26467959
Implications of the IRAS data for galactic gamma ray astronomy and EGRET
NASA Technical Reports Server (NTRS)
Stecker, Floyd W.
1990-01-01
Using the results of gamma-ray, millimeter-wave, and far-infrared surveys of the galaxy, a logically consistent picture of the large-scale distribution of galactic gas and cosmic rays was derived, tied to the overall processes of stellar birth and destruction on a galactic scale. Using the results of the IRAS far-infrared survey of the galaxy, the large-scale radial distributions of galactic far-infrared emission were obtained independently for both the Northern and Southern Hemisphere sides of the Galaxy. The dominant feature in these distributions was found to be a broad peak coincident with the 5 kpc molecular gas cloud ring. Evidence was found for spiral arm features. Strong correlations are evident between the large-scale galactic distributions of far-infrared emission, gamma-ray emission, and total CO emission. There is a particularly tight correlation between the distribution of warm molecular clouds and far-infrared emission on a galactic scale. The 5 kpc ring was evident in existing galactic gamma-ray data. The extent to which the more detailed spiral arm features are evident in the more resolved EGRET (Energetic Gamma Ray Experiment Telescope) data will help to determine more precisely the propagation characteristics of cosmic rays.
Mapping spatial patterns of denitrifiers at large scales (Invited)
NASA Astrophysics Data System (ADS)
Philippot, L.; Ramette, A.; Saby, N.; Bru, D.; Dequiedt, S.; Ranjard, L.; Jolivet, C.; Arrouays, D.
2010-12-01
Little information is available regarding the landscape-scale distribution of microbial communities and its environmental determinants. Here we combined molecular approaches and geostatistical modeling to explore spatial patterns of the denitrifying community at large scales. The distribution of the denitrifying community was investigated over 107 sites in Burgundy, a 31 500 km2 region of France, using a 16 × 16 km sampling grid. At each sampling site, the abundances of denitrifiers and 42 soil physico-chemical properties were measured. The relative contributions of land use, spatial distance, climatic conditions, time and soil physico-chemical properties to the denitrifier spatial distribution were analyzed by canonical variation partitioning. Our results indicate that 43% to 85% of the spatial variation in community abundances could be explained by the measured environmental parameters, with soil chemical properties (mostly pH) being the main driver. We found spatial autocorrelation up to 739 km and used geostatistical modelling to generate predictive maps of the distribution of denitrifiers at the landscape scale. Studying the distribution of denitrifiers at large scale can help close the artificial gap between the investigation of microbial processes and microbial community ecology, thereby facilitating our understanding of the relationships between the ecology of denitrifiers and N-fluxes by denitrification.
Sebastian Martinuzzi; Lee A. Vierling; William A. Gould; Kerri T. Vierling; Andrew T. Hudak
2009-01-01
Remote sensing provides critical information for broad scale assessments of wildlife habitat distribution and conservation. However, such efforts have been typically unable to incorporate information about vegetation structure, a variable important for explaining the distribution of many wildlife species. We evaluated the consequences of incorporating remotely sensed...
Marzinelli, Ezequiel M; Williams, Stefan B; Babcock, Russell C; Barrett, Neville S; Johnson, Craig R; Jordan, Alan; Kendrick, Gary A; Pizarro, Oscar R; Smale, Dan A; Steinberg, Peter D
2015-01-01
Despite the significance of marine habitat-forming organisms, little is known about their large-scale distribution and abundance in deeper waters, where they are difficult to access. Such information is necessary to develop sound conservation and management strategies. Kelps are main habitat-formers in temperate reefs worldwide; however, these habitats are highly sensitive to environmental change. The kelp Ecklonia radiata is the major habitat-forming organism on subtidal reefs in temperate Australia. Here, we provide large-scale ecological data encompassing the latitudinal distribution along the continent of these kelp forests, which is a necessary first step towards quantitative inferences about the effects of climatic change and other stressors on these valuable habitats. We used the Autonomous Underwater Vehicle (AUV) facility of Australia's Integrated Marine Observing System (IMOS) to survey 157,000 m2 of seabed, of which ca 13,000 m2 were used to quantify kelp covers at multiple spatial scales (10-100 m to 100-1,000 km) and depths (15-60 m) across several regions ca 2-6° latitude apart along the East and West coast of Australia. We investigated the large-scale geographic variation in distribution and abundance of deep-water kelp (>15 m depth) and their relationships with physical variables. Kelp cover generally increased with latitude despite great variability at smaller spatial scales. Maximum depth of kelp occurrence was 40-50 m. Kelp latitudinal distribution along the continent was most strongly related to water temperature and substratum availability. This extensive survey data, coupled with ongoing AUV missions, will allow for the detection of long-term shifts in the distribution and abundance of habitat-forming kelp and the organisms they support on a continental scale, and provide information necessary for successful implementation and management of conservation reserves.
Differential memory in the earth's magnetotail
NASA Technical Reports Server (NTRS)
Burkhart, G. R.; Chen, J.
1991-01-01
The process of 'differential memory' in the earth's magnetotail is studied in the framework of the modified Harris magnetotail geometry. It is verified that differential memory can generate non-Maxwellian features in the modified Harris field model. The time scales and the potentially observable distribution functions associated with the process of differential memory are investigated, and it is shown that non-Maxwellian distributions can evolve as a test particle response to distribution function boundary conditions in a Harris field magnetotail model. The non-Maxwellian features which arise from distribution function mapping have definite time scales associated with them, which are generally shorter than the earthward convection time scale but longer than the typical Alfven crossing time.
Climate Controls AM Fungal Distributions from Global to Local Scales
NASA Astrophysics Data System (ADS)
Kivlin, S. N.; Hawkes, C.; Muscarella, R.; Treseder, K. K.; Kazenel, M.; Lynn, J.; Rudgers, J.
2016-12-01
Arbuscular mycorrhizal (AM) fungi have key functions in terrestrial biogeochemical processes; thus, determining the relative importance of climate, edaphic factors, and plant community composition on their geographic distributions can improve predictions of their sensitivity to global change. Local adaptation by AM fungi to plant hosts, soil nutrients, and climate suggests that all of these factors may control fungal geographic distributions, but their relative importance is unknown. We created species distribution models for 142 AM fungal taxa at the global scale with data from GenBank. We compared climate variables (BioClim and soil moisture), edaphic variables (phosphorus, carbon, pH, and clay content), and plant variables using model selection on models with (1) all variables, (2) climatic variables only (including soil moisture) and (3) resource-related variables only (all other soil parameters and NPP) using the MaxEnt algorithm evaluated with ENMEval. We also evaluated whether drivers of AM fungal distributions were phylogenetically conserved. To test whether global correlates of AM fungal distributions were reflected at local scales, we then surveyed AM fungi in nine plant hosts along three elevation gradients in the Upper Gunnison Basin, Colorado, USA. At the global scale, the distributions of 55% of AM fungal taxa were affected by both climate and soil resources, whereas 16% were only affected by climate and 29% were only affected by soil resources. Even for AM fungi that were affected by both climate and resources, the effects of climatic variables nearly always outweighed those of resources. Soil moisture and isothermality were the main climatic and NPP and soil carbon the main resource related factors influencing AM fungal distributions. Distributions of closely related AM fungal taxa were similarly affected by climate, but not by resources. Local scale surveys of AM fungi across elevations confirmed that climate was a key driver of AM fungal composition and root colonization, with weaker influences of plant identity and soil nutrients. These two studies across scales suggest prevailing effects of climate on AM fungal distributions. Thus, incorporating climate when forecasting future ranges of AM fungi will enhance predictions of AM fungal abundance and associated ecosystem functions.
NASA Technical Reports Server (NTRS)
Geller, Margaret J.; Huchra, J. P.
1991-01-01
Present-day understanding of the large-scale galaxy distribution is reviewed. The statistics of the CfA redshift survey are briefly discussed. The need for deeper surveys to clarify the issues raised by recent studies of large-scale galactic distribution is addressed.
Steen, P.J.; Zorn, T.G.; Seelbach, P.W.; Schaeffer, J.S.
2008-01-01
Traditionally, fish habitat requirements have been described from local-scale environmental variables. However, recent studies have shown that studying landscape-scale processes improves our understanding of what drives species assemblages and distribution patterns across the landscape. Our goal was to learn more about constraints on the distribution of Michigan stream fish by examining landscape-scale habitat variables. We used classification trees and landscape-scale habitat variables to create and validate presence-absence models and relative abundance models for Michigan stream fishes. We developed 93 presence-absence models that on average were 72% correct in making predictions for an independent data set, and we developed 46 relative abundance models that were 76% correct in making predictions for independent data. The models were used to create statewide predictive distribution and abundance maps that have the potential to be used for a variety of conservation and scientific purposes.
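As a schematic of the classification-tree approach described above, the sketch below trains a decision tree on landscape-scale predictors to output presence/absence and reports validation accuracy. The predictor names and synthetic data are placeholders, not the Michigan data set.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)

# Hypothetical landscape predictors: catchment area (km2), July temperature (C),
# baseflow index, percent agriculture in the catchment
n = 3000
X = np.column_stack([
    rng.lognormal(3, 1, n),
    rng.normal(20, 3, n),
    rng.uniform(0, 1, n),
    rng.uniform(0, 100, n),
])
# Synthetic coldwater species: present in small, cold, groundwater-fed streams
y = (X[:, 1] < 20) & (X[:, 2] > 0.5) & (X[:, 0] < 60)
y = np.where(rng.random(n) < 0.1, ~y, y)        # add 10% label noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=50).fit(X_tr, y_tr)
print(f"validation accuracy: {tree.score(X_te, y_te):.2f}")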
Scaling of size distributions of C60 and C70 fullerene surface islands
NASA Astrophysics Data System (ADS)
Dubrovskii, V. G.; Berdnikov, Y.; Olyanich, D. A.; Mararov, V. V.; Utas, T. V.; Zotov, A. V.; Saranin, A. A.
2017-06-01
We present experimental data and a theoretical analysis for the size distributions of C60 and C70 surface islands deposited onto In-modified Si(111)√3 × √3-Au surface under different conditions. We show that both fullerene islands feature an analytic Vicsek-Family scaling shape where the scaled size distributions are given by a power law times an incomplete beta-function with the required normalization. The power exponent in this distribution corresponds to the fractal shape of two-dimensional islands, confirmed by the experimentally observed morphologies. Quite interestingly, we do not see any significant difference between C60 and C70 fullerenes in terms of either scaling parameters or temperature dependence of the diffusion constants. In particular, we deduce the activation energy for surface diffusion of ED = 140 ± 10 meV for both types of fullerenes.
Scaling and allometry in the building geometries of Greater London
NASA Astrophysics Data System (ADS)
Batty, M.; Carvalho, R.; Hudson-Smith, A.; Milton, R.; Smith, D.; Steadman, P.
2008-06-01
Many aggregate distributions of urban activities such as city sizes reveal scaling but hardly any work exists on the properties of spatial distributions within individual cities, notwithstanding considerable knowledge about their fractal structure. We redress this here by examining scaling relationships in a world city using data on the geometric properties of individual buildings. We first summarise how power laws can be used to approximate the size distributions of buildings, in analogy to city-size distributions which have been widely studied as rank-size and lognormal distributions following Zipf [ Human Behavior and the Principle of Least Effort (Addison-Wesley, Cambridge, 1949)] and Gibrat [ Les Inégalités Économiques (Librarie du Recueil Sirey, Paris, 1931)]. We then extend this analysis to allometric relationships between buildings in terms of their different geometric size properties. We present some preliminary analysis of building heights from the Emporis database which suggests very strong scaling in world cities. The data base for Greater London is then introduced from which we extract 3.6 million buildings whose scaling properties we explore. We examine key allometric relationships between these different properties illustrating how building shape changes according to size, and we extend this analysis to the classification of buildings according to land use types. We conclude with an analysis of two-point correlation functions of building geometries which supports our non-spatial analysis of scaling.
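A standard first pass at the kind of power-law summary this abstract refers to is a rank-size (Zipf) regression: sort sizes in descending order and fit log(size) against log(rank) by least squares. The sketch below does this on synthetic building sizes with NumPy; the Pareto exponent used to generate the data is arbitrary and not drawn from the Emporis or London databases.

import numpy as np

rng = np.random.default_rng(6)
# Synthetic building "sizes" with a Pareto upper tail (illustrative only)
sizes = (rng.pareto(1.2, size=50_000) + 1.0) * 100.0    # e.g. floor area in m^2

ranked = np.sort(sizes)[::-1]                  # largest first
ranks = np.arange(1, ranked.size + 1)

# Fit log(size) = a + b*log(rank) over the upper tail (top 5% of buildings)
k = ranked.size // 20
slope, intercept = np.polyfit(np.log(ranks[:k]), np.log(ranked[:k]), 1)
print(f"rank-size slope ~ {slope:.2f} (a Zipf-like distribution has slope near -1)")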
Soil nutrients influence spatial distributions of tropical tree species.
John, Robert; Dalling, James W; Harms, Kyle E; Yavitt, Joseph B; Stallard, Robert F; Mirabello, Matthew; Hubbell, Stephen P; Valencia, Renato; Navarrete, Hugo; Vallejo, Martha; Foster, Robin B
2007-01-16
The importance of niche vs. neutral assembly mechanisms in structuring tropical tree communities remains an important unsettled question in community ecology [Bell G (2005) Ecology 86:1757-1770]. There is ample evidence that species distributions are determined by soils and habitat factors at landscape (<10^4 km^2) and regional scales. At local scales (<1 km^2), however, habitat factors and species distributions show comparable spatial aggregation, making it difficult to disentangle the importance of niche and dispersal processes. In this article, we test soil resource-based niche assembly at a local scale, using species and soil nutrient distributions obtained at high spatial resolution in three diverse neotropical forest plots in Colombia (La Planada), Ecuador (Yasuni), and Panama (Barro Colorado Island). Using spatial distribution maps of >0.5 million individual trees of 1,400 species and 10 essential plant nutrients, we used Monte Carlo simulations of species distributions to test plant-soil associations against null expectations based on dispersal assembly. We found that the spatial distributions of 36-51% of tree species at these sites show strong associations to soil nutrient distributions. Neutral dispersal assembly cannot account for these plant-soil associations or the observed niche breadths of these species. These results indicate that belowground resource availability plays an important role in the assembly of tropical tree communities at local scales and provide the basis for future investigations on the mechanisms of resource competition among tropical tree species.
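The logic of a Monte Carlo association test can be illustrated with a much simpler randomization than the spatially structured null models such studies actually rely on: compute an observed association between species presence and a mapped soil nutrient, then compare it with the distribution obtained after repeatedly shuffling the species locations. Everything below (grid, nutrient field, statistic) is a generic sketch, not the study's data or null model.

import numpy as np

rng = np.random.default_rng(7)

# Hypothetical 100 x 100 grid: a smooth soil-phosphorus map and a species presence map
x, y = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
phosphorus = np.sin(3 * x) + 0.5 * y + rng.normal(0, 0.1, x.shape)
presence = rng.random(x.shape) < 1 / (1 + np.exp(-(2 * phosphorus - 1)))

def association(p_map, occ):
    """Mean nutrient value at occupied cells minus the grid-wide mean."""
    return p_map[occ].mean() - p_map.mean()

observed = association(phosphorus, presence)
null = np.array([
    association(phosphorus, rng.permutation(presence.ravel()).reshape(presence.shape))
    for _ in range(999)
])
p_value = (np.sum(np.abs(null) >= abs(observed)) + 1) / (len(null) + 1)
print(f"observed association = {observed:.3f}, permutation p-value = {p_value:.3f}")

A complete shuffle ignores the species' own spatial aggregation, which is exactly why studies like this one use dispersal-based null models instead; the sketch only conveys the Monte Carlo comparison itself.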
NASA Astrophysics Data System (ADS)
Nezhadhaghighi, Mohsen Ghasemi
2017-08-01
Here, we present results of numerical simulations and the scaling characteristics of one-dimensional random fluctuations with heavy-tailed probability distribution functions. Assuming that the distribution function of the random fluctuations obeys Lévy statistics with a power-law scaling exponent, we investigate the fractional diffusion equation in the presence of μ -stable Lévy noise. We study the scaling properties of the global width and two-point correlation functions and then compare the analytical and numerical results for the growth exponent β and the roughness exponent α . We also investigate the fractional Fokker-Planck equation for heavy-tailed random fluctuations. We show that the fractional diffusion processes in the presence of μ -stable Lévy noise display special scaling properties in the probability distribution function (PDF). Finally, we numerically study the scaling properties of the heavy-tailed random fluctuations by using the diffusion entropy analysis. This method is based on the evaluation of the Shannon entropy of the PDF generated by the random fluctuations, rather than on the measurement of the global width of the process. We apply the diffusion entropy analysis to extract the growth exponent β and to confirm the validity of our numerical analysis.
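The diffusion entropy analysis mentioned at the end of this abstract can be sketched compactly: generate μ-stable Lévy noise, build diffusion trajectories by summing the noise over windows of length t, and fit the Shannon entropy of the resulting displacement PDF against ln t; the slope estimates the scaling exponent (for α-stable noise the expected slope is 1/α). The version below is a minimal illustration using SciPy's levy_stable sampler; the value α = 1.5, the window lengths, and the binning choices are assumptions, not the paper's settings.

import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
alpha = 1.5
noise = stats.levy_stable.rvs(alpha, 0.0, size=500_000, random_state=rng)

def shannon_entropy(displacements, bins=40):
    """Entropy S = -sum p ln(p/dx) of the displacement PDF over a robust central range."""
    lo, hi = np.quantile(displacements, [0.005, 0.995])
    hist, edges = np.histogram(displacements, bins=bins, range=(lo, hi), density=True)
    dx = np.diff(edges)
    p = hist * dx
    mask = p > 0
    return -np.sum(p[mask] * np.log(hist[mask]))

windows = np.array([10, 30, 100, 300, 1000])
entropies = []
for t in windows:
    x = noise[: noise.size - noise.size % t].reshape(-1, t).sum(axis=1)
    entropies.append(shannon_entropy(x))

delta, _ = np.polyfit(np.log(windows), entropies, 1)
print(f"diffusion entropy scaling exponent ~ {delta:.2f} (1/alpha = {1/alpha:.2f})")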
Warwick-Evans, Victoria C.; Atkinson, Philip W.; Robinson, Leonie A.; Green, Jonathan A.
2016-01-01
During the breeding season, seabirds are constrained to coastal areas and are restricted in their movements, spending much of their time in near-shore waters either loafing or foraging. However, in using these areas they may be threatened by anthropogenic activities such as fishing, watersports and coastal developments, including marine renewable energy installations. Although many studies describe large-scale interactions between seabirds and the environment, the drivers behind near-shore, fine-scale distributions are not well understood. For example, Alderney is an important breeding ground for many species of seabird and has a diversity of human uses of the marine environment, thus providing an ideal location to investigate the near-shore, fine-scale interactions between seabirds and the environment. We used vantage point observations of seabird distribution, collected during the 2013 breeding season, to identify and quantify some of the environmental variables affecting the near-shore, fine-scale distribution of seabirds in Alderney’s coastal waters. We validate the models with observation data collected in 2014 and show that water depth, distance to the intertidal zone, and distance to the nearest seabird nest are key predictors of the distribution of Alderney’s seabirds. AUC values for each species suggest that these models perform well, although the model for shags performed better than those for auks and gulls. While further unexplained underlying localised variation in the environmental conditions will undoubtedly affect the fine-scale distribution of seabirds in near-shore waters, we demonstrate the potential of this approach for marine planning and decision making. PMID:27031616
Domain-area distribution anomaly in segregating multicomponent superfluids
NASA Astrophysics Data System (ADS)
Takeuchi, Hiromitsu
2018-01-01
The domain-area distribution in the phase transition dynamics of Z2 symmetry breaking is studied theoretically and numerically for segregating binary Bose-Einstein condensates in quasi-two-dimensional systems. Due to the dynamic-scaling law of the phase ordering kinetics, the domain-area distribution is described by a universal function of the domain area, rescaled by the mean distance between domain walls. The scaling theory for general coarsening dynamics in two dimensions hypothesizes that the distribution during the coarsening dynamics has a hierarchy with two scaling regimes, the microscopic and macroscopic regimes, with distinct power-law exponents. The power law in the macroscopic regime, where the domain size is larger than the mean distance, is universally characterized by the Fisher exponent of percolation theory in two dimensions. On the other hand, the power-law exponent in the microscopic regime is sensitive to the microscopic dynamics of the system. This conjecture is confirmed by large-scale numerical simulations of the coupled Gross-Pitaevskii equation for binary condensates. In the numerical experiments of the superfluid system, the exponent in the microscopic regime anomalously reaches its theoretical upper limit of the general scaling theory. The anomaly comes from the quantum-fluid effect in the presence of circular vortex sheets, described by the hydrodynamic approximation neglecting the fluid compressibility. It is also found that the distribution of superfluid circulation along vortex sheets obeys a dynamic-scaling law with different power-law exponents in the two regimes. An analogy to quantum turbulence on the hierarchy of vorticity distribution and the applicability to chiral superfluid 3He in a slab are also discussed.
Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi
2011-06-01
For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.
NASA Technical Reports Server (NTRS)
Wong, Sun; Del Genio, Anthony; Wang, Tao; Kahn, Brian; Fetzer, Eric J.; L'Ecuyer, Tristan S.
2015-01-01
Goals: Water budget-related dynamical phase space; Connect large-scale dynamical conditions to atmospheric water budget (including precipitation); Connect atmospheric water budget to cloud type distributions.
Flexible Unicast-Based Group Communication for CoAP-Enabled Devices †
Ishaq, Isam; Hoebeke, Jeroen; Van den Abeele, Floris; Rossey, Jen; Moerman, Ingrid; Demeester, Piet
2014-01-01
Smart embedded objects will become an important part of what is called the Internet of Things. Applications often require concurrent interactions with several of these objects and their resources. Existing solutions have several limitations in terms of reliability, flexibility and manageability of such groups of objects. To overcome these limitations, we propose an intermediate level of intelligence to easily manipulate a group of resources across multiple smart objects, building upon the Constrained Application Protocol (CoAP). We describe the design of our solution to create and manipulate a group of CoAP resources using a single client request. Furthermore, we introduce the concept of profiles for the created groups. The use of profiles allows the client to specify in more detail how the group should behave. We have implemented our solution and demonstrate that it covers the complete group life-cycle, i.e., creation, validation, flexible usage and deletion. Finally, we quantitatively analyze the performance of our solution and compare it against multicast-based CoAP group communication. The results show that our solution improves reliability and flexibility with a trade-off in increased communication overhead. PMID:24901978
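The core idea here, an entity that accepts one client request and expands it into parallel unicast requests to the member resources before aggregating the answers, can be sketched independently of any particular CoAP stack. The coroutine coap_get below is a placeholder for a real unicast CoAP client call (it is not part of any library named in the abstract), so the sketch only shows the fan-out and aggregation pattern with asyncio.

import asyncio
import random

async def coap_get(member_uri: str) -> str:
    """Placeholder for a real unicast CoAP GET; here it just simulates a reply."""
    await asyncio.sleep(random.uniform(0.01, 0.05))   # pretend network latency
    return f"2.05 Content from {member_uri}"

async def group_get(group_members):
    """Expand one client request into parallel unicast requests and aggregate the replies."""
    replies = await asyncio.gather(*(coap_get(m) for m in group_members),
                                   return_exceptions=True)
    return dict(zip(group_members, replies))

members = ["coap://node1/sensors/temp", "coap://node2/sensors/temp",
           "coap://node3/sensors/temp"]
print(asyncio.run(group_get(members)))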
The DAQ system for the AEḡIS experiment
NASA Astrophysics Data System (ADS)
Prelz, F.; Aghion, S.; Amsler, C.; Ariga, T.; Bonomi, G.; Brusa, R. S.; Caccia, M.; Caravita, R.; Castelli, F.; Cerchiari, G.; Comparat, D.; Consolati, G.; Demetrio, A.; Di Noto, L.; Doser, M.; Ereditato, A.; Evans, C.; Ferragut, R.; Fesel, J.; Fontana, A.; Gerber, S.; Giammarchi, M.; Gligorova, A.; Guatieri, F.; Haider, S.; Hinterberger, A.; Holmestad, H.; Kellerbauer, A.; Krasnický, D.; Lagomarsino, V.; Lansonneur, P.; Lebrun, P.; Malbrunot, C.; Mariazzi, S.; Matveev, V.; Mazzotta, Z.; Müller, S. R.; Nebbia, G.; Nedelec, P.; Oberthaler, M.; Pacifico, N.; Pagano, D.; Penasa, L.; Petracek, V.; Prevedelli, M.; Ravelli, L.; Rienaecker, B.; Robert, J.; Røhne, O. M.; Rotondi, A.; Sacerdoti, M.; Sandaker, H.; Santoro, R.; Scampoli, P.; Simon, M.; Smestad, L.; Sorrentino, F.; Testera, G.; Tietje, I. C.; Widmann, E.; Yzombard, P.; Zimmer, C.; Zmeskal, J.; Zurlo, N.
2017-10-01
In the sociology of small- to mid-sized (O(100) collaborators) experiments, the issue of data collection and storage is sometimes felt to be a residual problem for which well-established solutions are known. Still, the DAQ system can be one of the few forces that drive towards the integration of otherwise loosely coupled detector systems. As such, it may be hard to build with off-the-shelf components only. LabVIEW and ROOT are the (only) two software systems that were assumed to be familiar enough to all collaborators of the AEḡIS (AD6) experiment at CERN: starting from the GXML representation of LabVIEW data types, a semantically equivalent representation as ROOT TTrees was developed for permanent storage and analysis. All data in the experiment is cast into this common format and can be produced and consumed on both systems and transferred over TCP and/or multicast over UDP for immediate sharing over the experiment LAN. We describe the setup that has been able to cater to all run data logging and long-term monitoring needs of the AEḡIS experiment so far.
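The LAN sharing mechanism mentioned at the end, multicast over UDP, boils down to the standard socket calls sketched here (the group address, port, and payload are illustrative; this is not the experiment's actual wire format).

import socket
import struct

GROUP, PORT = "239.1.2.3", 5007      # illustrative multicast group and port

def send(payload: bytes):
    """Publish one datagram to the multicast group."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)   # keep it on the LAN
    s.sendto(payload, (GROUP, PORT))
    s.close()

def receive():
    """Join the group and block until one datagram arrives."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, addr = s.recvfrom(65535)
    s.close()
    return data, addr

if __name__ == "__main__":
    send(b"run 42: event header")    # any host on the LAN calling receive() would see this datagram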
Innovative Networking Concepts Tested on the Advanced Communications Technology Satellite
NASA Technical Reports Server (NTRS)
Friedman, Daniel; Gupta, Sonjai; Zhang, Chuanguo; Ephremides, Anthony
1996-01-01
This paper describes a program of experiments conducted over the advanced communications technology satellite (ACTS) and the associated TI-VSAT (very small aperture terminal). The experiments were motivated by the commercial potential of low-cost receive only satellite terminals that can operate in a hybrid network environment, and by the desire to demonstrate frame relay technology over satellite networks. The first experiment tested highly adaptive methods of satellite bandwidth allocation in an integrated voice-data service environment. The second involved comparison of forward error correction (FEC) and automatic repeat request (ARQ) methods of error control for satellite communication with emphasis on the advantage that a hybrid architecture provides, especially in the case of multicasts. Finally, the third experiment demonstrated hybrid access to databases and compared the performance of internetworking protocols for interconnecting local area networks (LANs) via satellite. A custom unit termed frame relay access switch (FRACS) was developed by COMSAT Laboratories for these experiments; the preparation and conduct of these experiments involved a total of 20 people from the University of Maryland, the University of Colorado and COMSAT Laboratories, from late 1992 until 1995.
Next-Generation WDM Network Design and Routing
NASA Astrophysics Data System (ADS)
Tsang, Danny H. K.; Bensaou, Brahim
2003-08-01
Call for Papers The Editors of JON are soliciting papers on WDM Network Design and Routing. The aim in this focus issue is to publish original research on topics including - but not limited to - the following: - WDM network architectures and protocols - GMPLS network architectures - Wavelength converter placement in WDM networks - Routing and wavelength assignment (RWA) in WDM networks - Protection and restoration strategies and algorithms in WDM networks - Traffic grooming in WDM networks - Dynamic routing strategies and algorithms - Optical Burst Switching - Support of Multicast - Protection and restoration in WDM networks - Performance analysis and optimization in WDM networks Manuscript Submission To submit to this special issue, follow the normal procedure for submission to JON, indicating "WDM Network Design" in the "Comments" field of the online submission form. For all other questions relating to this focus issue, please send an e-mail to jon@osa.org, subject line "WDM Network Design." Additional information can be found on the JON website: http://www.osa-jon.org/submission/. Schedule Paper Submission Deadline: November 1, 2003 Notification to Authors: January 15, 2004 Final Manuscripts to Publisher: February 15, 2004 Publication of Focus Issue: February/March 2004
Automation Hooks Architecture Trade Study for Flexible Test Orchestration
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; Maclean, John R.; Graffagnino, Frank J.; McCartney, Patrick A.
2010-01-01
We describe the conclusions of a technology and communities survey supported by concurrent and follow-on proof-of-concept prototyping to evaluate feasibility of defining a durable, versatile, reliable, visible software interface to support strategic modularization of test software development. The objective is that test sets and support software with diverse origins, ages, and abilities can be reliably integrated into test configurations that assemble and tear down and reassemble with scalable complexity in order to conduct both parametric tests and monitored trial runs. The resulting approach is based on integration of three recognized technologies that are currently gaining acceptance within the test industry and when combined provide a simple, open and scalable test orchestration architecture that addresses the objectives of the Automation Hooks task. The technologies are automated discovery using multicast DNS Zero Configuration Networking (zeroconf), commanding and data retrieval using resource-oriented Restful Web Services, and XML data transfer formats based on Automatic Test Markup Language (ATML). This open-source standards-based approach provides direct integration with existing commercial off-the-shelf (COTS) analysis software tools.
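Automated discovery with multicast DNS, the first of the three technologies named above, can be exercised with the third-party python-zeroconf package; the snippet below is a generic illustration of browsing for a service type, and the service name _testset._tcp.local. is a made-up example rather than part of the architecture described in the abstract.

import time
from zeroconf import ServiceBrowser, ServiceListener, Zeroconf

class TestSetListener(ServiceListener):
    def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        info = zc.get_service_info(type_, name)
        print(f"discovered {name}: {info}")

    def remove_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        print(f"removed {name}")

    def update_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        pass

zc = Zeroconf()
browser = ServiceBrowser(zc, "_testset._tcp.local.", TestSetListener())
try:
    time.sleep(5)          # give announcements a moment to arrive
finally:
    zc.close()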
Takizawa, Masaomi; Miyashita, Toyohisa; Murase, Sumio; Kanda, Hirohito; Karaki, Yoshiaki; Yagi, Kazuo; Ohue, Toru
2003-01-01
A real-time telescreening system was developed to detect diseases early in rural residents using two types of mobile vans with a portable satellite station. The system consists of a 1.5 Mbps satellite communication link over the JCSAT-1B satellite, a spiral CT van, an ultrasound imaging van with two video-conference systems, a DICOM server, and a multicast communication unit. Video and examination image data are transmitted from the van to the hospitals and the university simultaneously. A physician in the hospital observes and interprets examination images from the van and watches video of the position of the ultrasound transducer on the screenee in the van. After reviewing the images, the physician explains the results of the examination over the video-conference system. Seventy lung CT screenings and 203 ultrasound screenings were performed from March to June 2002. The trial of this real-time screening suggested that rural residents can receive better healthcare without visiting the hospital, and that it may open the way to reducing medical costs and the medical divide between urban and rural areas.
A Novel Cross-Layer Routing Protocol Based on Network Coding for Underwater Sensor Networks.
Wang, Hao; Wang, Shilian; Bu, Renfei; Zhang, Eryang
2017-08-08
Underwater wireless sensor networks (UWSNs) have attracted increasing attention in recent years because of their numerous applications in ocean monitoring, resource discovery and tactical surveillance. However, the design of reliable and efficient transmission and routing protocols is a challenge due to the low acoustic propagation speed and complex channel environment in UWSNs. In this paper, we propose a novel cross-layer routing protocol based on network coding (NCRP) for UWSNs, which utilizes network coding and cross-layer design to greedily forward data packets to sink nodes efficiently. The proposed NCRP takes full advantage of multicast transmission and decodes packets jointly using encoded packets received from multiple potential nodes across the network. The transmission power is optimized in our design to extend the life cycle of the network. Moreover, we design a real-time routing maintenance protocol to update the route when inefficient relay nodes are detected. Extensive simulations of the underwater environment in Network Simulator 3 (NS-3) show that NCRP significantly improves network performance in terms of energy consumption, end-to-end delay and packet delivery ratio compared with other routing protocols for UWSNs.
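The abstract does not give the coding scheme itself, but the core idea of decoding jointly from coded packets received over multiple paths can be sketched with a minimal random linear network coding example over GF(2) (XOR combinations). This is a generic illustration, not NCRP's actual packet format or field size.

```python
import random

def encode(packets, num_coded, seed=1):
    """Produce coded packets as random XOR combinations of k equal-length
    source packets. Each coded packet is (coefficient_bitmask, payload)."""
    rng = random.Random(seed)
    k = len(packets)
    coded = []
    for _ in range(num_coded):
        mask = 0
        while mask == 0:                       # skip the useless all-zero combination
            mask = rng.getrandbits(k)
        payload = bytes(len(packets[0]))
        for i in range(k):
            if mask >> i & 1:
                payload = bytes(a ^ b for a, b in zip(payload, packets[i]))
        coded.append((mask, payload))
    return coded

def decode(coded, k):
    """Gaussian elimination over GF(2). Returns the k source packets once the
    received combinations reach full rank, otherwise None."""
    basis = {}                                 # pivot bit -> (mask, payload)
    for mask, payload in coded:
        payload = bytearray(payload)
        while mask:
            pivot = mask.bit_length() - 1
            if pivot not in basis:
                basis[pivot] = (mask, payload)
                break
            pm, pp = basis[pivot]
            mask ^= pm
            payload = bytearray(a ^ b for a, b in zip(payload, pp))
    if len(basis) < k:
        return None                            # not yet decodable
    for pivot in sorted(basis, reverse=True):  # back-substitution (Gauss-Jordan)
        pm, pp = basis[pivot]
        for other in basis:
            om, op = basis[other]
            if other != pivot and om >> pivot & 1:
                basis[other] = (om ^ pm, bytearray(a ^ b for a, b in zip(op, pp)))
    return [bytes(basis[i][1]) for i in range(k)]

# src = [b"hello   ", b"acoustic", b"underwtr", b"network "]
# print(decode(encode(src, num_coded=8), k=4))
```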
Feng, Huan; Tappero, Ryan; Zhang, Weiguo; ...
2015-07-26
This study focuses on micro-scale measurement of metal (Ca, Cl, Fe, K, Mn, Cu, Pb, and Zn) distributions in the Spartina alterniflora root system. The root samples were collected in the Yangtze River intertidal zone in July 2013. Synchrotron X-ray fluorescence (XRF), computed microtomography (CMT), and X-ray absorption near-edge structure (XANES) techniques, which provide micrometer-scale analytical resolution, were applied in this study. Although the metals of interest were distributed in both the epidermis and the vascular tissue with varying concentrations, the results showed that Fe plaque was mainly distributed in the root epidermis. Other metals (e.g., Cu, Mn, Pb, and Zn) were correlated with Fe in the epidermis, possibly due to scavenging by Fe plaque. Relatively high metal concentrations were observed at the root hair tip. This micro-scale investigation provides insight into metal uptake and spatial distribution, as well as the role of Fe plaque in governing metal transport in the root system.
Boehm, Alexandria B
2002-10-15
In this study, we extend the established scaling theory for cluster size distributions generated during unsteady coagulation to number-flux distributions that arise during steady-state coagulation and settling in an unmixed water mass. The scaling theory predicts self-similar number-flux distributions and power-law decay of total number flux with depth. The shape of the number-flux distributions and the power-law exponent describing the decay of the total number flux are shown to depend on the homogeneity and small i/j limit of the coagulation kernel and the exponent kappa, which describes the variation in settling velocity with cluster volume. Particle field measurements from Lake Zurich, collected by U. Weilenmann and co-workers (Limnol. Oceanogr. 34, 1 (1989)), are used to illustrate how the scaling predictions can be applied to a natural system. This effort indicates that within the mid-depth region of Lake Zurich, clusters of the same size preferentially interact and large clusters react with one another more quickly than small ones, indicative of clusters coagulating in a reaction-limited regime.
EVALUATION OF A NEW MEAN SCALED AND MOMENT ADJUSTED TEST STATISTIC FOR SEM.
Tong, Xiaoxiao; Bentler, Peter M
2013-01-01
Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ² test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.
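For context, the Satorra-Bentler scaling referenced here usually takes the following form (stated generically; the paper's small-sample modification adjusts how the correction is computed when the sample size is smaller than the degrees of freedom):

```latex
T_{SB} = \frac{T_{ML}}{\hat{c}}, \qquad
\hat{c} = \frac{\operatorname{tr}\!\left(\hat{U}\hat{\Gamma}\right)}{d},
```

where T_ML is the normal-theory likelihood-ratio statistic, d the model degrees of freedom, Γ̂ an estimate of the asymptotic covariance matrix of the sample variances and covariances, and Û the residual weight matrix implied by the fitted model.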
NASA Astrophysics Data System (ADS)
Kelling, S.
2017-12-01
The goal of Biodiversity research is to identify, explain, and predict why a species' distribution and abundance vary through time, space, and with features of the environment. Measuring these patterns and predicting their responses to change are not exercises in curiosity. Today, they are essential tasks for understanding the profound effects that humans have on earth's natural systems, and for developing science-based environmental policies. To gain insight about species' distribution patterns requires studying natural systems at appropriate scales, yet studies of ecological processes continue to be compromised by inadequate attention to scale issues. How spatial and temporal patterns in nature change with scale often reflects fundamental laws of physics, chemistry, or biology, and we can identify such basic, governing laws only by comparing patterns over a wide range of scales. This presentation will provide several examples that integrate bird observations made by volunteers, with NASA Earth Imagery using Big Data analysis techniques to analyze the temporal patterns of bird occurrence across scales—from hemisphere-wide views of bird distributions to the impact of powerful city lights on bird migration.
NASA Astrophysics Data System (ADS)
Arnison, G.; Albajar, C.; Albrow, M. G.; Allkofer, O. C.; Astbury, A.; Aubert, B.; Axon, T.; Bacci, C.; Bacon, T.; Batley, J. R.; Bauer, G.; Bellinger, J.; Bettini, A.; Bézaguet, A.; Bock, R. K.; Bos, K.; Buckley, E.; Busetto, G.; Catz, P.; Cennini, P.; Centro, S.; Ceradini, F.; Ciapetti, G.; Cittolin, S.; Clarke, D.; Cline, D.; Cochet, C.; Colas, J.; Colas, P.; Corden, M.; Coughlan, J. A.; Cox, G.; Dau, D.; Debeer, M.; Debrion, J. P.; Degiorgi, M.; Della Negra, M.; Demoulin, M.; Denby, B.; Denegri, D.; Diciaccio, A.; Dobrzynski, L.; Dorenbosch, J.; Dowell, J. D.; Duchovni, E.; Edgecock, R.; Eggert, K.; Eisenhandler, E.; Ellis, N.; Erhard, P.; Faissner, H.; Keeler, M. Fincke; Flynn, P.; Fontaine, G.; Frey, R.; Frühwirth, R.; Garvey, J.; Gee, D.; Geer, S.; Ghesquière, C.; Ghez, P.; Ghio, F.; Giacomelli, P.; Gibson, W. R.; Giraud-Héraud, Y.; Givernaud, A.; Gonidec, A.; Goodman, M.; Grassmann, H.; Grayer, G.; Guryn, W.; Hansl-Kozanecka, T.; Haynes, W.; Haywood, S. J.; Hoffmann, H.; Holthuizen, D. J.; Homer, R. J.; Honma, A.; Ikeda, M.; Jank, W.; Jimack, M.; Jorat, G.; Kalmus, P. I. P.; Karimäki, V.; Keeler, R.; Kenyon, I.; Kernan, A.; Kienzle, W.; Kinnunen, R.; Kozanecki, W.; Krammer, M.; Kroll, J.; Kryn, D.; Kyberd, P.; Lacava, F.; Laugier, J. P.; Lees, J. P.; Leuchs, R.; Levegrun, S.; Lévêque, A.; Levi, M.; Linglin, D.; Locci, E.; Long, K.; Markiewicz, T.; Markytan, M.; Martin, T.; Maurin, G.; McMahon, T.; Mendiburu, J.-P.; Meneguzzo, A.; Meyer, O.; Meyer, T.; Minard, M.-N.; Mohammad, M.; Morgan, K.; Moricca, M.; Moser, H.; Mours, B.; Muller, Th.; Nandi, A.; Naumann, L.; Norton, A.; Pascoli, D.; Pauss, F.; Perault, C.; Petrolo, E.; Mortari, G. Piano; Pietarinen, E.; Pigot, C.; Pimiä, M.; Pitman, D.; Placci, A.; Porte, J.-P.; Radermacher, E.; Ransdell, J.; Redelberger, T.; Reithler, H.; Revol, J. P.; Richman, J.; Rijssenbeek, M.; Robinson, D.; Rohlf, J.; Rossi, P.; Ruhm, W.; Rubbia, C.; Sajot, G.; Salvini, G.; Sass, J.; Sadoulet, B.; Samyn, D.; Savoy-Navarro, A.; Schinzel, D.; Schwartz, A.; Scott, W.; Shah, T. P.; Sheer, I.; Siotis, I.; Smith, D.; Sobie, R.; Sphicas, P.; Strauss, J.; Streets, J.; Stubenrauch, C.; Summers, D.; Sumorok, K.; Szoncso, F.; Tao, C.; Taurok, A.; Have, I. Ten; Tether, S.; Thompson, G.; Tscheslog, E.; Tuominiemi, J.; Van Eijk, B.; Verecchia, P.; Vialle, J. P.; Villasenor, L.; Virdee, T. S.; Von der Schmitt, H.; Von Schlippe, W.; Vrana, J.; Vuillemin, V.; Wahl, H. D.; Watkins, P.; Wildish, A.; Wilke, R.; Wilson, J.; Wingerter, I.; Wimpenny, S. J.; Wulz, C. E.; Wyatt, T.; Yvert, M.; Zaccardelli, C.; Zacharov, I.; Zaganidis, N.; Zanello, L.; Zotto, P.; UA1 Collaboration
1986-09-01
Angular distributions of high-mass jet pairs (180 < m_2J < 350 GeV) have been measured in the UA1 experiment at the CERN pp̄ Collider (√s = 630 GeV). We show that the angular distributions are independent of the subprocess centre-of-mass (CM) energy over this range, and use the data to put constraints on the definition of the Q² scale. The distribution for the very high mass jet pairs (240 < m_2J < 300 GeV) has also been used to obtain a lower limit on the energy scale Λ_c of compositeness of quarks. We find Λ_c > 415 GeV at 95% confidence level.
On distributed wavefront reconstruction for large-scale adaptive optics systems.
de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel
2016-05-01
The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of a global approach, with a speedup that scales quadratically with the number of partitions. D-SABRE is compared to the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds that of CuRe-D for low levels of decomposition, and D-SABRE proves more robust to variations in the loop gain.
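A toy sketch of the partition-and-solve idea (not the actual B-spline formulation used by D-SABRE): each partition of the sensor grid independently fits a low-order local phase surface to its measured x/y slopes by least squares. In the real method the local solutions are splines and are subsequently stitched and piston-matched across partitions.

```python
import numpy as np

def fit_local_phase(x, y, sx, sy):
    """Least-squares fit of phi(x, y) = a*x + b*y + c*x^2 + d*y^2 + e*x*y
    to measured slope data (sx = dphi/dx, sy = dphi/dy) inside one partition."""
    zeros, ones = np.zeros_like(x), np.ones_like(x)
    # Rows stack the x-slope equations on top of the y-slope equations.
    A = np.column_stack([
        np.concatenate([ones, zeros]),       # a*x    -> gradient (1, 0)
        np.concatenate([zeros, ones]),       # b*y    -> gradient (0, 1)
        np.concatenate([2 * x, zeros]),      # c*x^2  -> gradient (2x, 0)
        np.concatenate([zeros, 2 * y]),      # d*y^2  -> gradient (0, 2y)
        np.concatenate([y, x]),              # e*x*y  -> gradient (y, x)
    ])
    rhs = np.concatenate([sx, sy])
    coeffs, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return coeffs   # [a, b, c, d, e]; piston is unobservable from slopes
```

Because slopes carry no piston information, neighboring partitions must afterwards be offset-matched, which is where the distributed alignment step of methods such as D-SABRE comes in.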
Voltage Impacts of Utility-Scale Distributed Wind
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, A.
2014-09-01
Although most utility-scale wind turbines in the United States are added at the transmission level in large wind power plants, distributed wind power offers an alternative that could increase the overall wind power penetration without the need for additional transmission. This report examines the distribution feeder-level voltage issues that can arise when adding utility-scale wind turbines to the distribution system. Four of the Pacific Northwest National Laboratory taxonomy feeders were examined in detail to study the voltage issues associated with adding wind turbines at different distances from the substation. General rules relating feeder resistance up to the point of turbine interconnection to the expected maximum voltage change levels were developed. Additional analysis examined line and transformer overvoltage conditions.
A Distributed Platform for Global-Scale Agent-Based Models of Disease Transmission
Parker, Jon; Epstein, Joshua M.
2013-01-01
The Global-Scale Agent Model (GSAM) is presented. The GSAM is a high-performance distributed platform for agent-based epidemic modeling capable of simulating a disease outbreak in a population of several billion agents. It is unprecedented in its scale, its speed, and its use of Java. Solutions to multiple challenges inherent in distributing massive agent-based models are presented. Communication, synchronization, and memory usage are among the topics covered in detail. The memory usage discussion is Java specific. However, the communication and synchronization discussions apply broadly. We provide benchmarks illustrating the GSAM’s speed and scalability. PMID:24465120
Ripoche, Marion; Lindsay, Leslie Robbin; Ludwig, Antoinette; Ogden, Nicholas H; Thivierge, Karine; Leighton, Patrick A
2018-03-27
Since its detection in Canada in the early 1990s, Ixodes scapularis , the primary tick vector of Lyme disease in eastern North America, has continued to expand northward. Estimates of the tick's broad-scale distribution are useful for tracking the extent of the Lyme disease risk zone; however, tick distribution may vary widely within this zone. Here, we investigated I. scapularis nymph distribution at three spatial scales across the Lyme disease emergence zone in southern Quebec, Canada. We collected ticks and compared the nymph densities among different woodlands and different plots and transects within the same woodland. Hot spot analysis highlighted significant nymph clustering at each spatial scale. In regression models, nymph abundance was associated with litter depth, humidity, and elevation, which contribute to a suitable habitat for ticks, but also with the distance from the trail and the type of trail, which could be linked to host distribution and human disturbance. Accounting for this heterogeneous nymph distribution at a fine spatial scale could help improve Lyme disease management strategies but also help people to understand the risk variation around them and to adopt appropriate behaviors, such as staying on the trail in infested parks to limit their exposure to the vector and associated pathogens.
Jarnevich, Catherine S.; Young, Nicholas E.; Talbert, Marian; Talbert, Colin
2018-01-01
Understanding invasive species distributions and potential invasions often requires broad‐scale information on the environmental tolerances of the species. Further, resource managers are often faced with knowing these broad‐scale relationships as well as nuanced environmental factors related to their landscape that influence where an invasive species occurs and potentially could occur. Using invasive buffelgrass (Cenchrus ciliaris), we developed global models and local models for Saguaro National Park, Arizona, USA, based on location records and literature on physiological tolerances to environmental factors to investigate whether environmental relationships of a species at a global scale are also important at local scales. In addition to correlative models with five commonly used algorithms, we also developed a model using a priori user‐defined relationships between occurrence and environmental characteristics based on a literature review. All correlative models at both scales performed well based on statistical evaluations. The user‐defined curves closely matched those produced by the correlative models, indicating that the correlative models may be capturing mechanisms driving the distribution of buffelgrass. Given climate projections for the region, both global and local models indicate that conditions at Saguaro National Park may become more suitable for buffelgrass. Combining global and local data with correlative models and physiological information provided a holistic approach to forecasting invasive species distributions.
A. Townsend Peterson; Daniel A. Kluza
2005-01-01
Large-scale assessments of the distribution and diversity of birds have been challenged by the need for a robust methodology for summarizing or predicting species' geographic distributions (e.g. Beard et al. 1999, Manel et al. 1999, Saveraid et al. 2001). Methodologies used in such studies have at times been inappropriate, or even more frequently limited in their...
Numerical Simulations of Vortical Mode Stirring: Effects of Large Scale Shear and Strain
2015-09-30
M.-Pascale Lelong, NorthWest Research Associates. The parameterizations developed in this project can be implemented in larger-scale ocean models and will incorporate the effects of local ambient conditions, including latitude. Results were presented in a talk at the Nonlinear Effects in Internal Waves Conference.
Multiscale Study of Currents Affected by Topography
2015-09-30
This project examines the effects of topography on the ocean general and regional circulation, with a focus on the wide range of scales of interaction: how the small-scale details of the topography and the waves, eddies, drag, and turbulence it generates (at spatial scales ranging from meters to mesoscale) interact.
Species, functional groups, and thresholds in ecological resilience
Sundstrom, Shana M.; Allen, Craig R.; Barichievy, Chris
2012-01-01
The cross-scale resilience model states that ecological resilience is generated in part from the distribution of functions within and across scales in a system. Resilience is a measure of a system's ability to remain organized around a particular set of mutually reinforcing processes and structures, known as a regime. We define scale as the geographic extent over which a process operates and the frequency with which a process occurs. Species can be categorized into functional groups that are a link between ecosystem processes and structures and ecological resilience. We applied the cross-scale resilience model to avian species in a grassland ecosystem. A species’ morphology is shaped in part by its interaction with ecological structure and pattern, so animal body mass reflects the spatial and temporal distribution of resources. We used the log-transformed rank-ordered body masses of breeding birds associated with grasslands to identify aggregations and discontinuities in the distribution of those body masses. We assessed cross-scale resilience on the basis of 3 metrics: overall number of functional groups, number of functional groups within an aggregation, and the redundancy of functional groups across aggregations. We assessed how the loss of threatened species would affect cross-scale resilience by removing threatened species from the data set and recalculating values of the 3 metrics. We also determined whether more function was retained than expected after the loss of threatened species by comparing observed loss with simulated random loss in a Monte Carlo process. The observed distribution of function compared with the random simulated loss of function indicated that more functionality in the observed data set was retained than expected. On the basis of our results, we believe an ecosystem with a full complement of species can sustain considerable species losses without affecting the distribution of functions within and across aggregations, although ecological resilience is reduced. We propose that the mechanisms responsible for shaping discontinuous distributions of body mass and the nonrandom distribution of functions may also shape species losses such that local extinctions will be nonrandom with respect to the retention and distribution of functions and that the distribution of function within and across aggregations will be conserved despite extinctions.
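A minimal sketch of the rank-order/gap idea behind identifying body-mass aggregations (the published analyses use formal discontinuity tests and null models, not this fixed threshold; the gap_factor here is an arbitrary illustrative choice):

```python
import numpy as np

def body_mass_aggregations(body_masses, gap_factor=1.5):
    """Sort log10 body masses and start a new aggregation wherever the gap to
    the next species exceeds gap_factor times the mean gap."""
    logm = np.sort(np.log10(np.asarray(body_masses, dtype=float)))
    gaps = np.diff(logm)
    threshold = gap_factor * gaps.mean()
    groups, current = [], [logm[0]]
    for value, gap in zip(logm[1:], gaps):
        if gap > threshold:
            groups.append(current)
            current = []
        current.append(value)
    groups.append(current)
    return groups   # list of aggregations, each a list of log10 body masses

# Functional-group metrics (counts per aggregation, cross-aggregation redundancy)
# would then be computed on top of this grouping.
```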
[Spatial point patterns of Antarctic krill fishery in the northern Antarctic Peninsula].
Yang, Xiao Ming; Li, Yi Xin; Zhu, Guo Ping
2016-12-01
As a key species in the Antarctic ecosystem, Antarctic krill (hereafter krill) often show an aggregated spatial distribution, which in turn shapes the spatial pattern of krill fishing operations. This study is based on fishing data collected from two Chinese krill fishing vessels: vessel A, a professional krill fishing vessel, and vessel B, a vessel that shifted between the Chilean jack mackerel (Trachurus murphyi) fishing ground and the krill fishing ground. To explore the spatial distribution characteristics, and their ecological implications, of these two distinctly different fishing operations under high and low nominal catch per unit effort (CPUE), the present study analyzed the spatial distribution of the krill fishery in the northern Antarctic Peninsula from the viewpoint of spatial point patterns, in three respects: (1) the two vessels' point pattern characteristics for higher and lower CPUEs at different scales; (2) the correlation of the bivariate point patterns between points of higher CPUE and lower CPUE; and (3) the correlation patterns of CPUE. Based on Ripley's L function and the mark correlation function, the results showed that the point patterns of the higher and lower catches were similar, both showing an aggregated distribution in the study window at all scale levels. The aggregation intensity of krill fishing was near its maximum at the 15 km spatial scale and remained at stably high values at scales of 15-50 km. The aggregation intensity of the krill fishery point patterns could be ordered as higher CPUE of vessel A > lower CPUE of vessel B > higher CPUE of vessel B > higher CPUE of vessel B. The higher and lower CPUEs of vessel A were positively correlated at spatial scales of 0-75 km and showed a stochastic relationship beyond the 75 km scale, whereas vessel B showed positive correlation at all spatial scales. The point events of higher and lower CPUEs were synchronized, showing significant correlations at most spatial scales because of the dynamic nature and complexity of krill aggregation patterns. The distribution of vessel A's CPUEs was positively correlated at scales of 0-44 km but negatively correlated at scales of 44-80 km. The distribution of vessel B's CPUEs was negatively correlated at scales of 50-70 km, with no significant correlations found at other scales. The CPUE mark point patterns showed a negative correlation, indicating that intraspecific competition for space and prey was significant. There were significant differences in the spatial point pattern distribution between vessel A, with higher fishing capacity, and vessel B, with lower fishing capacity. The results suggest that the professional krill fishing vessel is better suited for spatial point pattern analysis and scientific fishery surveys.
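For readers unfamiliar with the tool, a naive Ripley's K/L estimator can be sketched as follows (no edge correction, which the published analysis would normally include); values of L(r) - r above zero indicate aggregation at scale r.

```python
import numpy as np

def ripley_L_minus_r(points, r_values, area):
    """Naive Ripley's K/L estimator for a 2-D point pattern (no edge correction).
    points: (n, 2) array of coordinates; area: area of the study window."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    dists = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1))
    np.fill_diagonal(dists, np.inf)              # exclude self-pairs
    intensity = n / area
    out = []
    for r in r_values:
        k_hat = (dists <= r).sum() / (n * intensity)   # K(r) = (A / n^2) * pair count
        out.append(np.sqrt(k_hat / np.pi) - r)          # L(r) - r
    return np.array(out)
```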
NASA Astrophysics Data System (ADS)
Lee, H.; Seo, D.-J.; Liu, Y.; Koren, V.; McKee, P.; Corby, R.
2012-01-01
State updating of distributed rainfall-runoff models via streamflow assimilation is subject to overfitting because large dimensionality of the state space of the model may render the assimilation problem seriously under-determined. To examine the issue in the context of operational hydrology, we carry out a set of real-world experiments in which streamflow data is assimilated into gridded Sacramento Soil Moisture Accounting (SAC-SMA) and kinematic-wave routing models of the US National Weather Service (NWS) Research Distributed Hydrologic Model (RDHM) with the variational data assimilation technique. Study basins include four basins in Oklahoma and five basins in Texas. To assess the sensitivity of data assimilation performance to dimensionality reduction in the control vector, we used nine different spatiotemporal adjustment scales, where state variables are adjusted in a lumped, semi-distributed, or distributed fashion and biases in precipitation and potential evaporation (PE) are adjusted hourly, 6-hourly, or kept time-invariant. For each adjustment scale, three different streamflow assimilation scenarios are explored, where streamflow observations at basin interior points, at the basin outlet, or at both interior points and the outlet are assimilated. The streamflow assimilation experiments with nine different basins show that the optimum spatiotemporal adjustment scale varies from one basin to another and may be different for streamflow analysis and prediction in all of the three streamflow assimilation scenarios. The most preferred adjustment scale for seven out of nine basins is found to be the distributed, hourly scale, despite the fact that several independent validation results at this adjustment scale indicated the occurrence of overfitting. Basins with highly correlated interior and outlet flows tend to be less sensitive to the adjustment scale and could benefit more from streamflow assimilation. In comparison to outlet flow assimilation, interior flow assimilation at any adjustment scale produces streamflow predictions with a spatial correlation structure more consistent with that of streamflow observations. We also describe diagnosing the complexity of the assimilation problem using the spatial correlation information associated with the streamflow process, and discuss the effect of timing errors in a simulated hydrograph on the performance of the data assimilation procedure.
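For context, the variational technique referenced here minimizes a cost function of the standard form below; the specific control vector (gridded SAC-SMA states plus precipitation and PE bias multipliers at the chosen adjustment scale) and operators used operationally are not spelled out in the abstract, so this is only a schematic statement.

```latex
J(\mathbf{x}) \;=\; \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}^{b})^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}^{b})
\;+\; \tfrac{1}{2}\sum_{k}\bigl(\mathbf{y}_{k}-H_{k}(\mathbf{x})\bigr)^{\mathsf T}\mathbf{R}_{k}^{-1}\bigl(\mathbf{y}_{k}-H_{k}(\mathbf{x})\bigr)
```

Here x is the control vector, x^b its background (prior) value, y_k the streamflow observations in the assimilation window, H_k the model operator mapping the control vector to simulated flow, and B, R_k the background and observation error covariances. Shrinking the dimension of x (lumped or semi-distributed adjustments, coarser bias time steps) is what the paper means by reducing dimensionality to curb overfitting.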
Newland, Jamee; Newman, Christy; Treloar, Carla
2016-08-01
In Australia, sterile needles and syringes are distributed to people who inject drugs (PWID) through formal services for the purposes of preventing blood borne viruses (BBV). Peer distribution involves people acquiring needles from formal services and redistributing them to others. This paper investigates the dynamics of the distribution of sterile injecting equipment among networks of people who inject drugs in four sites in New South Wales (NSW), Australia. Qualitative data exploring the practice of peer distribution were collected through in-depth, semi-structured interviews and participatory social network mapping. These interviews explored injecting equipment demand, access to services, relationship pathways through which peer distribution occurred, an estimate of the size of the different peer distribution roles and participants' understanding of the illegality of peer distribution in NSW. Data were collected from 32 participants, and 31 (98%) reported participating in peer distribution in the months prior to interview. Of those 31 participants, five reported large-scale formal distribution, with an estimated volume of 34,970 needles and syringes annually. Twenty-two participated in reciprocal exchange, where equipment was distributed and received on an informal basis that appeared dependent on context and circumstance and four participants reported recipient peer distribution as their only access to sterile injecting equipment. Most (n=27) were unaware that it was illegal to distribute injecting equipment to their peers. Peer distribution was almost ubiquitous amongst the PWID participating in the study, and although five participants reported taking part in the highly organised, large-scale distribution of injecting equipment for altruistic reasons, peer distribution was more commonly reported to take place in small networks of friends and/or partners for reasons of convenience. The law regarding the illegality of peer distribution needs to change so that NSPs can capitalise on peer distribution to increase the options available to PWID and to acknowledge PWID as essential harm reduction agents in the prevention of BBVs. Copyright © 2016 Elsevier B.V. All rights reserved.
Abanto-Valle, C. A.; Bandyopadhyay, D.; Lachos, V. H.; Enriquez, I.
2009-01-01
A Bayesian analysis of stochastic volatility (SV) models using the class of symmetric scale mixtures of normal (SMN) distributions is considered. In the face of non-normality, this provides an appealing robust alternative to the routine use of the normal distribution. Specific distributions examined include the normal, Student-t, slash and variance gamma distributions. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo (MCMC) algorithm is introduced for parameter estimation. Moreover, the mixing parameters obtained as a by-product of the scale mixture representation can be used to identify outliers. The methods developed are applied to daily stock return data on the S&P500 index. Bayesian model selection criteria as well as out-of-sample forecasting results reveal that SV models based on heavy-tailed SMN distributions provide significant improvement in model fit as well as prediction for the S&P500 index data over the usual normal model. PMID:20730043
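As a reference point, a canonical SV specification with an SMN error written via a positive mixing variable is sketched below; the exact parameterization used in the paper may differ.

```latex
y_{t} = e^{h_{t}/2}\,\varepsilon_{t}, \qquad
\varepsilon_{t} = \lambda_{t}^{-1/2} z_{t}, \quad z_{t}\sim N(0,1), \qquad
h_{t} = \mu + \phi\,(h_{t-1}-\mu) + \sigma_{\eta}\,\eta_{t}, \quad \eta_{t}\sim N(0,1)
```

The distribution of the mixing variable λ_t selects the member of the SMN family (for example λ_t ~ Gamma(ν/2, ν/2) yields Student-t errors, while λ_t = 1 recovers the Gaussian case), and the sampled λ_t values are the by-product used for outlier identification.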
Fractal topography and subsurface water flows from fluvial bedforms to the continental shield
Worman, A.; Packman, A.I.; Marklund, L.; Harvey, J.W.; Stone, S.H.
2007-01-01
Surface-subsurface flow interactions are critical to a wide range of geochemical and ecological processes and to the fate of contaminants in freshwater environments. Fractal scaling relationships have been found in distributions of both land surface topography and solute efflux from watersheds, but the linkage between those observations has not been realized. We show that the fractal nature of the land surface in fluvial and glacial systems produces fractal distributions of recharge, discharge, and associated subsurface flow patterns. Interfacial flux tends to be dominated by small-scale features while the flux through deeper subsurface flow paths tends to be controlled by larger-scale features. This scaling behavior holds at all scales, from small fluvial bedforms (tens of centimeters) to the continental landscape (hundreds of kilometers). The fractal nature of surface-subsurface water fluxes yields a single scale-independent distribution of subsurface water residence times for both near-surface fluvial systems and deeper hydrogeological flows. Copyright 2007 by the American Geophysical Union.
Computer programs for smoothing and scaling airfoil coordinates
NASA Technical Reports Server (NTRS)
Morgan, H. L., Jr.
1983-01-01
Detailed descriptions are given of the theoretical methods and associated computer codes of a program to smooth and a program to scale arbitrary airfoil coordinates. The smoothing program uses both least-squares polynomial and least-squares cubic spline techniques to iteratively smooth the second derivatives of the y-axis airfoil coordinates with respect to a transformed x-axis system that unwraps the airfoil and stretches the nose and trailing-edge regions. The corresponding smooth airfoil coordinates are then determined by solving a tridiagonal matrix of simultaneous cubic-spline equations relating the y-axis coordinates and their corresponding second derivatives. A technique for computing the camber and thickness distribution of the smoothed airfoil is also discussed. The scaling program can then be used to scale the thickness distribution generated by the smoothing program to a specific maximum thickness, which is then combined with the camber distribution to obtain the final scaled airfoil contour. Computer listings of the smoothing and scaling programs are included.
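The tridiagonal cubic-spline system mentioned above is the kind of system typically solved with the Thomas algorithm; a generic sketch follows (an illustration, not the listing from the NASA program).

```python
import numpy as np

def solve_tridiagonal(lower, diag, upper, rhs):
    """Thomas algorithm for a tridiagonal system.
    lower[i] multiplies x[i-1], diag[i] multiplies x[i], upper[i] multiplies x[i+1];
    lower[0] and upper[-1] are ignored."""
    n = len(diag)
    c = np.zeros(n)   # modified upper diagonal
    d = np.zeros(n)   # modified right-hand side
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / denom if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):   # back-substitution
        x[i] = d[i] - c[i] * x[i + 1]
    return x
```

For an n-point spline system this runs in O(n) time, which is why the tridiagonal formulation is attractive for repeated smoothing iterations.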
Multiresolution analysis of characteristic length scales with high-resolution topographic data
NASA Astrophysics Data System (ADS)
Sangireddy, Harish; Stark, Colin P.; Passalacqua, Paola
2017-07-01
Characteristic length scales (CLS) define landscape structure and delimit geomorphic processes. Here we use multiresolution analysis (MRA) to estimate such scales from high-resolution topographic data. MRA employs progressive terrain defocusing, via convolution of the terrain data with Gaussian kernels of increasing standard deviation, and calculation at each smoothing resolution of (i) the probability distributions of curvature and topographic index (defined as the ratio of slope to area in log scale) and (ii) characteristic spatial patterns of divergent and convergent topography identified by analyzing the curvature of the terrain. The MRA is first explored using synthetic 1-D and 2-D signals whose CLS are known. It is then validated against a set of MARSSIM (a landscape evolution model) steady state landscapes whose CLS were tuned by varying hillslope diffusivity and simulated noise amplitude. The known CLS match the scales at which the distributions of topographic index and curvature show scaling breaks, indicating that the MRA can identify CLS in landscapes based on the scaling behavior of topographic attributes. Finally, the MRA is deployed to measure the CLS of five natural landscapes using meter resolution digital terrain model data. CLS are inferred from the scaling breaks of the topographic index and curvature distributions and equated with (i) small-scale roughness features and (ii) the hillslope length scale.
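A pared-down sketch of the defocusing step (Python with NumPy/SciPy): smooth the terrain with Gaussian kernels of increasing standard deviation and collect curvature and slope values at each resolution. The full MRA additionally forms the topographic index (which requires contributing area from flow routing) and locates scaling breaks in the resulting probability distributions, which this fragment does not attempt.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiresolution_attributes(dem, sigmas, cell_size=1.0):
    """Return curvature (Laplacian proxy) and slope samples of a DEM at a
    sequence of Gaussian smoothing scales (sigmas, in pixels)."""
    results = {}
    for sigma in sigmas:
        smoothed = gaussian_filter(dem, sigma=sigma)
        gy, gx = np.gradient(smoothed, cell_size)   # derivatives along rows, columns
        gyy, _ = np.gradient(gy, cell_size)
        _, gxx = np.gradient(gx, cell_size)
        results[sigma] = {
            "curvature": (gxx + gyy).ravel(),       # sign separates convergent/divergent terrain
            "slope": np.hypot(gx, gy).ravel(),
        }
    return results
```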
Workflow management in large distributed systems
NASA Astrophysics Data System (ADS)
Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.
2011-12-01
The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.
Multi-level structure in the large scale distribution of optically luminous galaxies
NASA Astrophysics Data System (ADS)
Deng, Xin-fa; Deng, Zu-gan; Liu, Yong-zhen
1992-04-01
Fractal dimensions in the large-scale distribution of galaxies have been calculated with the method given by Wen et al. [1]. Samples are taken from the CfA redshift survey in the northern and southern galactic hemispheres [2] and analysed separately. Results from these two regions are compared with each other. There are significant differences between the distributions in the two regions; however, our analyses do show some common features. All subsamples show multi-level fractal character distinctly. Combining this with the results from analyses of samples of IRAS galaxies and from redshift surveys in pencil-beam fields [3,4], we suggest that multi-level fractal structure is most likely a general and important character of the large-scale distribution of galaxies. The possible implications of this character are discussed.
Gravitational potential wells and the cosmic bulk flow
NASA Astrophysics Data System (ADS)
Wang, Yuyu; Kumar, Abhinav; Feldman, Hume; Watkins, Richard
2016-03-01
The bulk flow is a volume average of the peculiar velocities and a useful probe of the mass distribution on large scales. The gravitational instability model views the bulk flow as a potential flow whose magnitude obeys a Maxwellian distribution. We use two N-body simulations, the LasDamas Carmen and the Horizon Run, to calculate the bulk flows of variously sized volumes in the simulation boxes. Once we have the bulk flow velocities as a function of scale, we investigate the mass and gravitational potential distribution around each volume. We find that matter densities can be asymmetrical and difficult to detect in real surveys; however, the gravitational potential and its gradient may provide better tools for investigating the underlying matter distribution. This study shows that bulk flows are indeed potential flows and thus provides information on the flow sources. We also show that bulk flow magnitudes follow a Maxwellian distribution on scales > 10 h^-1 Mpc.
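For context, "Maxwellian" here refers to the standard result that if each Cartesian component of the bulk flow is an independent zero-mean Gaussian with variance σ*², as linear theory predicts for a given survey window, then the bulk-flow magnitude V follows

```latex
p(V) = \sqrt{\frac{2}{\pi}}\,\frac{V^{2}}{\sigma_{*}^{3}}\,
       \exp\!\left(-\frac{V^{2}}{2\sigma_{*}^{2}}\right).
```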
Scaling of load in communications networks.
Narayan, Onuttom; Saniee, Iraj
2010-09-01
We show that the load at each node in a preferential attachment network scales as a power of the degree of the node. For a network whose degree distribution is p(k) ∼ k^{-γ}, we show that the load is l(k) ∼ k^{η} with η = γ - 1, implying that the probability distribution for the load is p(l) ∼ 1/l^{2}, independent of γ. The results are obtained through scaling arguments supported by finite size scaling studies. They contradict earlier claims, but are in agreement with the exact solution for the special case of tree graphs. Results are also presented for real communications networks at the IP layer, using the latest available data. Our analysis of the data shows relatively poor power-law degree distributions as compared to the scaling of the load versus degree. This emphasizes the importance of the load in network analysis.
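The γ-independence of the load distribution follows from a one-line change of variables using the two scaling relations quoted above:

```latex
p(l)\,dl = p(k)\,dk
\;\Longrightarrow\;
p(l) \propto k^{-\gamma}\,\frac{dk}{dl}
      \propto k^{-\gamma}\,k^{\,1-\eta}
      = k^{-2(\gamma-1)}
      = l^{-2},
\qquad \text{using } l \propto k^{\eta},\; \eta = \gamma-1 .
```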
The emergence of overlapping scale-free genetic architecture in digital organisms.
Gerlee, P; Lundh, T
2008-01-01
We have studied the evolution of genetic architecture in digital organisms and found that the gene overlap follows a scale-free distribution, which is commonly found in metabolic networks of many organisms. Our results show that the slope of the scale-free distribution depends on the mutation rate and that the gene development is driven by expansion of already existing genes, which is in direct correspondence to the preferential growth algorithm that gives rise to scale-free networks. To further validate our results we have constructed a simple model of gene development, which recapitulates the results from the evolutionary process and shows that the mutation rate affects the tendency of genes to cluster. In addition we could relate the slope of the scale-free distribution to the genetic complexity of the organisms and show that a high mutation rate gives rise to a more complex genetic architecture.
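The "preferential growth algorithm that gives rise to scale-free networks" referred to here is the Barabási-Albert style mechanism; a minimal sketch (not the authors' gene-development model) is below, and a histogram of the resulting degrees approximates the usual scale-free power law.

```python
import random

def preferential_attachment(n_nodes, m=2, seed=1):
    """Grow a network by preferential attachment: each new node links to m
    existing nodes chosen with probability proportional to their degree.
    Returns the final degree of every node."""
    rng = random.Random(seed)
    degrees = [m] * (m + 1)                 # start from a complete graph on m+1 nodes
    targets = [node for node in range(m + 1) for _ in range(m)]  # node repeated once per degree unit
    for new in range(m + 1, n_nodes):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets)) # degree-proportional sampling
        degrees.append(m)
        for t in chosen:
            degrees[t] += 1
            targets.extend([t, new])
    return degrees

# A histogram of preferential_attachment(20000) approximates p(k) ~ k^-3,
# i.e. gamma = 3 for the pure Barabasi-Albert mechanism.
```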
Universal Quake Statistics: From Compressed Nanocrystals to Earthquakes.
Uhl, Jonathan T; Pathak, Shivesh; Schorlemmer, Danijel; Liu, Xin; Swindeman, Ryan; Brinkman, Braden A W; LeBlanc, Michael; Tsekenis, Georgios; Friedman, Nir; Behringer, Robert; Denisov, Dmitry; Schall, Peter; Gu, Xiaojun; Wright, Wendelin J; Hufnagel, Todd; Jennings, Andrew; Greer, Julia R; Liaw, P K; Becker, Thorsten; Dresen, Georg; Dahmen, Karin A
2015-11-17
Slowly-compressed single crystals, bulk metallic glasses (BMGs), rocks, granular materials, and the earth all deform via intermittent slips or "quakes". We find that although these systems span 12 decades in length scale, they all show the same scaling behavior for their slip size distributions and other statistical properties. Remarkably, the size distributions follow the same power law multiplied with the same exponential cutoff. The cutoff grows with applied force for materials spanning length scales from nanometers to kilometers. The tuneability of the cutoff with stress reflects "tuned critical" behavior, rather than self-organized criticality (SOC), which would imply stress-independence. A simple mean field model for avalanches of slipping weak spots explains the agreement across scales. It predicts the observed slip-size distributions and the observed stress-dependent cutoff function. The results enable extrapolations from one scale to another, and from one force to another, across different materials and structures, from nanocrystals to earthquakes.
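Schematically, the common form reported across these systems can be written as below; the exponent and the stress dependence of the cutoff are generic placeholders rather than the fitted values from the paper.

```latex
D(S) \;\propto\; S^{-\tau}\, \exp\!\left(-\,S/S_{\max}(F)\right),
\qquad S_{\max}(F)\ \text{increasing with applied force } F,
```

so the power-law exponent is shared across materials while the exponential cutoff, and only the cutoff, moves with stress ("tuned critical" behavior, as opposed to a stress-independent cutoff under SOC).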
GEOCHEMISTRY OF SULFUR IN IRON CORROSION SCALES FOUND IN DRINKING WATER DISTRIBUTION SYSTEMS
Iron-sulfur geochemistry is important in many natural and engineered environments, including drinking water systems. In the anaerobic environment beneath scales of corroding iron drinking water distribution system pipes, sulfate reducing bacteria (SRB) produce sulfide from natu...
Reliability and validity analysis of modified Nursing Stress Scale for Indian population.
Pathak, Vasundhara; Chakraborty, Tania; Mukhopadhyay, Suman
2013-01-01
The original Nursing Stress Scale (NSS) was structurally modified according to the results of factorial analysis, and the new scale was named the Modified Nursing Stress Scale (MNSS). This is the first study to modify and validate the NSS for the Indian nursing population. Factorial analysis showed different factor loadings for two subscales, and items were shifted according to their loadings to provide a more meaningful structure. After relocation of Items 13, 14, and 15 into the first factor, this factor was renamed "emotional and painful conditions of patients" to give it a more appropriate name. Items 24, 25, 26, 27, 28, and 29 were found to be distributed under two different factors; one of these was renamed "unpredictable changes" and the other retained its original name (i.e., workload). This distribution was also supported by rational analysis. All other items were distributed under the same factors as in the original scale. The rest of the validity assessment was done with the modified scale. Thus, with minor changes in structure, the scale was found to have better content validity.
SMART-DS: Synthetic Models for Advanced, Realistic Testing: Distribution Systems and Scenarios
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodge, Bri-Mathias; Palmintier, Bryan
This presentation provides an overview of full-scale, high-quality, synthetic distribution system data set(s) for testing distribution automation algorithms, distributed control approaches, ADMS capabilities, and other emerging distribution technologies.
Soil nutrients influence spatial distributions of tropical tree species
John, Robert; Dalling, James W.; Harms, Kyle E.; Yavitt, Joseph B.; Stallard, Robert F.; Mirabello, Matthew; Hubbell, Stephen P.; Valencia, Renato; Navarrete, Hugo; Vallejo, Martha; Foster, Robin B.
2007-01-01
The importance of niche vs. neutral assembly mechanisms in structuring tropical tree communities remains an important unsettled question in community ecology [Bell G (2005) Ecology 86:1757–1770]. There is ample evidence that species distributions are determined by soils and habitat factors at landscape (<10^4 km^2) and regional scales. At local scales (<1 km^2), however, habitat factors and species distributions show comparable spatial aggregation, making it difficult to disentangle the importance of niche and dispersal processes. In this article, we test soil resource-based niche assembly at a local scale, using species and soil nutrient distributions obtained at high spatial resolution in three diverse neotropical forest plots in Colombia (La Planada), Ecuador (Yasuni), and Panama (Barro Colorado Island). Using spatial distribution maps of >0.5 million individual trees of 1,400 species and 10 essential plant nutrients, we used Monte Carlo simulations of species distributions to test plant–soil associations against null expectations based on dispersal assembly. We found that the spatial distributions of 36–51% of tree species at these sites show strong associations to soil nutrient distributions. Neutral dispersal assembly cannot account for these plant–soil associations or the observed niche breadths of these species. These results indicate that belowground resource availability plays an important role in the assembly of tropical tree communities at local scales and provide the basis for future investigations on the mechanisms of resource competition among tropical tree species. PMID:17215353
Hovey, Renae K; Van Niel, Kimberly P; Bellchambers, Lynda M; Pember, Matthew B
2012-01-01
The western rock lobster, Panulirus cygnus, is endemic to Western Australia and supports substantial commercial and recreational fisheries. Due to its wide distribution and the commercial and recreational importance of the species, a key component of managing western rock lobster is understanding the ecological processes and interactions that may influence lobster abundance and distribution. Using terrain analyses and distribution models of substrate and benthic biota, we assess the physical drivers that influence the distribution of lobsters at a key fishery site. Using data collected from hydroacoustic and towed video surveys, 20 variables (including geophysical, substrate and biota variables) were developed to predict the distributions of substrate type (three classes of reef, rhodoliths and sand) and dominant biota (kelp, sessile invertebrates and macroalgae) within a 40 km² area about 30 km off the west Australian coast. Lobster presence/absence data were collected within this area using georeferenced pots. These datasets were used to develop a classification tree model for predicting the distribution of the western rock lobster. Interestingly, kelp and reef were not selected as predictors. Instead, the model selected geophysical and geomorphic scalar variables, which emphasise a mix of terrain within limited distances. The model of lobster presence had an adjusted D² of 64 and an 80% correct classification. Species distribution models indicate that juxtaposition in fine-scale terrain is most important to the western rock lobster. While key features like kelp and reef may be important to lobster distribution at a broad scale, it is the fine-scale features in terrain that are likely to define its ecological niche. Determining the most appropriate landscape configuration and scale will be essential to refining niche habitats and will aid in selecting appropriate sites for protecting critical lobster habitats.
2017-01-01
Lycaenid butterflies from the genera Callophrys, Cyanophrys and Thecla have evolved remarkable biophotonic gyroid nanostructures within their wing scales that have only recently been replicated by nanoscale additive manufacturing. These nanostructures selectively reflect parts of the visible spectrum to give their characteristic non-iridescent, matte-green appearance, despite a distinct blue–green–yellow iridescence predicted for individual crystals from theory. It has been hypothesized that the organism must achieve its uniform appearance by growing crystals with some restrictions on the possible distribution of orientations, yet preferential orientation observed in Callophrys rubi confirms that this distribution need not be uniform. By analysing scanning electron microscope and optical images of 912 crystals in three wing scales, we find no preference for their rotational alignment in the plane of the scales. However, crystal orientation normal to the scale was highly correlated to their colour at low (conical) angles of view and illumination. This correlation enabled the use of optical images, each containing up to 10^4–10^5 crystals, for concluding the preferential alignment seen along the at the level of single scales, appears ubiquitous. By contrast, orientations were found to occur at no greater rate than that expected by chance. Above a critical cone angle, all crystals reflected bright green light indicating the dominant light scattering is due to the predicted band gap along the direction, independent of the domain orientation. Together with the natural variation in scale and wing shapes, we can readily understand the detailed mechanism of uniform colour production and iridescence suppression in these butterflies. It appears that the combination of preferential alignment normal to the wing scale, and uniform distribution within the plane is a near optimal solution for homogenizing the angular distribution of the band gap relative to the wings. Finally, the distributions of orientations, shapes, sizes and degree of order of crystals within single scales provide useful insights for understanding the mechanisms at play in the formation of these biophotonic nanostructures. PMID:28630678
NASA Astrophysics Data System (ADS)
Ishibashi, Takuya; Watanabe, Noriaki; Hirano, Nobuo; Okamoto, Atsushi; Tsuchiya, Noriyoshi
2015-01-01
The present study evaluates aperture distributions and fluid flow characteristics for variously sized laboratory-scale granite fractures under confining stress. As a significant result of the laboratory investigation, the contact area in the fracture plane was found to be virtually independent of scale. By combining this characteristic with the self-affine fractal nature of fracture surfaces, a novel method for predicting fracture aperture distributions beyond laboratory scale is developed. Validity of this method is demonstrated through reproduction of the results of the laboratory investigation and of the maximum aperture-fracture length relations reported in the literature for natural fractures. The present study finally predicts conceivable scale dependencies of fluid flows through joints (fractures without shear displacement) and faults (fractures with shear displacement). Both joint and fault aperture distributions are characterized by a scale-independent contact area, a scale-dependent geometric mean, and a scale-independent geometric standard deviation of aperture. The contact areas for joints and faults are approximately 60% and 40%. Changes in the geometric means of joint and fault apertures (µm), e_m,joint and e_m,fault, with fracture length (m), l, are approximated by e_m,joint = 1 × 10^2 l^0.1 and e_m,fault = 1 × 10^3 l^0.7, whereas the geometric standard deviations of both joint and fault apertures are approximately 3. Fluid flows through both joints and faults are characterized by the formation of preferential flow paths (i.e., channeling flows) with scale-independent flow areas of approximately 10%, whereas the joint and fault permeabilities (m^2), k_joint and k_fault, are scale dependent and are approximated as k_joint = 1 × 10^-12 l^0.2 and k_fault = 1 × 10^-8 l^1.1.
NASA Astrophysics Data System (ADS)
Sandrini-Neto, L.; Lana, P. C.
2012-06-01
Heterogeneity in the distribution of organisms occurs at a range of spatial scales, from a few centimeters to hundreds of kilometers. The exclusion of small-scale variability from routine sampling designs may confound comparisons at larger scales and lead to inconsistent interpretation of data. Despite its ecological and socio-economic importance, little is known about the spatial structure of the mangrove crab Ucides cordatus in the southwest Atlantic. Previous studies have commonly compared densities at relatively broad scales, relying on alleged distribution patterns (e.g., mangroves of distinct composition and structure). We assessed variability patterns of U. cordatus in mangroves of Paranaguá Bay at four levels of spatial hierarchy (tens of km, km, tens of m, and m) using a nested ANOVA and variance component measures. The potential role of sediment parameters, pneumatophore density, and organic matter content in regulating the observed patterns was assessed by multiple regression models. Densities of total and non-commercial-size crabs varied mostly at the tens-of-m to km scales. Densities of commercial-size crabs differed at the tens-of-m and tens-of-km scales. Variance components indicated that small-scale variation was the most important, contributing up to 70% of the variability in crab density. Multiple regression models could not explain the observed variations; the processes driving differences in crab abundance were not related to the measured variables. Small-scale patchy distribution has direct implications for current management practices of U. cordatus. Future studies should consider processes operating at smaller scales, which are responsible for a complex mosaic of patches within previously described patterns.
Large Scale Density Estimation of Blue and Fin Whales (LSD)
2014-09-30
A method is developed for estimating blue and fin whale density that is effective over large spatial scales and is designed to cope with spatial variation in animal density.
NASA Astrophysics Data System (ADS)
Huo, Chengyu; Huang, Xiaolin; Zhuang, Jianjun; Hou, Fengzhen; Ni, Huangjing; Ning, Xinbao
2013-09-01
The Poincaré plot is one of the most important approaches in human cardiac rhythm analysis. However, further investigations are still needed to develop techniques that can characterize the dispersion of the points displayed by a Poincaré plot. Based on a modified Poincaré plot, we provide a novel measure named distribution entropy (DE) and propose a quadrantal multi-scale distribution entropy analysis (QMDE) for the quantitative description of the scatter distribution patterns in various regions and temporal scales. We apply this method to heartbeat interval series derived from healthy subjects and congestive heart failure (CHF) sufferers, respectively, and find that the discriminations between them are most significant in the first quadrant, which implies significant impacts on vagal regulation brought about by CHF. We also investigate the day-night differences of young healthy people, and the results present a clear circadian rhythm, especially in the first quadrant. In addition, the multi-scale analysis indicates that the results of healthy subjects and CHF sufferers fluctuate in different trends with variation of the scale factor. The same phenomenon also appears in circadian rhythm investigations of young healthy subjects, which implies that the cardiac dynamic system is affected differently at various temporal scales by physiological or pathological factors.
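A minimal sketch of a quadrant-wise, multi-scale scatter-entropy computation is given below (Python). The paper's exact definition of DE is not reproduced; as a stand-in assumption, the Poincaré points falling in a chosen quadrant (defined relative to the mean RR interval) are binned on a 2-D grid and the Shannon entropy of the bin occupancy is returned, with coarse-graining standing in for the scale factor:

    import numpy as np

    def coarse_grain(rr, scale):
        """Average consecutive samples to obtain the series at a given scale factor."""
        n = len(rr) // scale
        return rr[:n * scale].reshape(n, scale).mean(axis=1)

    def quadrant_scatter_entropy(rr, quadrant=1, scale=1, n_bins=20):
        """Shannon entropy of the Poincare-plot point distribution within one quadrant."""
        s = coarse_grain(np.asarray(rr, dtype=float), scale)
        x, y = s[:-1], s[1:]                       # (RR_n, RR_{n+1}) pairs
        m = s.mean()
        masks = {1: (x >= m) & (y >= m), 2: (x < m) & (y >= m),
                 3: (x < m) & (y < m), 4: (x >= m) & (y < m)}
        xs, ys = x[masks[quadrant]], y[masks[quadrant]]
        if xs.size == 0:
            return 0.0
        hist, _, _ = np.histogram2d(xs, ys, bins=n_bins)
        p = hist.ravel() / hist.sum()
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)))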
Note on a modified return period scale for upper-truncated unbounded flood distributions
NASA Astrophysics Data System (ADS)
Bardsley, Earl
2017-01-01
Probability distributions unbounded to the right often give good fits to annual discharge maxima. However, all hydrological processes are in reality constrained by physical upper limits, though these are not necessarily well defined. A consequence of this contradiction is that, for sufficiently small exceedance probabilities, the unbounded distributions imply flood magnitudes which are impossibly large. This raises the question of whether displayed return period scales should, as is current practice, have some given number of years, such as 500 years, as the terminating rightmost tick-point. This carries the implication that the scale might be extended indefinitely to the right with a corresponding indefinite increase in flood magnitude. An alternative, suggested here, is to introduce a sufficiently high upper truncation point to the flood distribution and modify the return period scale accordingly. The rightmost tick-mark then becomes infinity, corresponding to the upper truncation point discharge. The truncation point is likely to be set above any physical upper bound, and the return period scale will change only slightly over all practical return periods of operational interest. The rightmost infinity tick point is therefore proposed, not as an operational measure, but rather to signal in flood plots that the return period scale does not extend indefinitely to the right.
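One plausible formulation of the proposed modification, with notation that is illustrative rather than the author's: let F(x) be the fitted unbounded CDF of the annual maxima and x_U the chosen upper truncation point. Then

    F_U(x) = \frac{F(x)}{F(x_U)}, \qquad
    T_U(x) = \frac{1}{1 - F_U(x)} = \frac{F(x_U)}{F(x_U) - F(x)}, \qquad x \le x_U .

Under this construction T_U(x) tends to infinity as x approaches x_U, which gives the infinite rightmost tick, while for ordinary return periods F(x_U) is so close to 1 that T_U(x) is practically indistinguishable from the conventional 1/(1 - F(x)).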
DOES A SCALING LAW EXIST BETWEEN SOLAR ENERGETIC PARTICLE EVENTS AND SOLAR FLARES?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kahler, S. W., E-mail: AFRL.RVB.PA@kirtland.af.mil
2013-05-20
Among many other natural processes, the size distributions of solar X-ray flares and solar energetic particle (SEP) events are scale-invariant power laws. The measured distributions of SEP events prove to be distinctly flatter, i.e., have smaller power-law slopes, than those of the flares. This has led to speculation that the two distributions are related through a scaling law, first suggested by Hudson, which implies a direct nonlinear physical connection between the processes producing the flares and those producing the SEP events. We present four arguments against this interpretation. First, a true scaling must relate SEP events to all flare X-ray events, and not to a small subset of the X-ray event population. We also show that the assumed scaling law is not mathematically valid and that although the flare X-ray and SEP event data are correlated, they are highly scattered and not necessarily related through an assumed scaling of the two phenomena. An interpretation of SEP events within the context of a recent model of fractal-diffusive self-organized criticality by Aschwanden provides a physical basis for why the SEP distributions should be flatter than those of solar flares. These arguments provide evidence against a close physical connection of flares with SEP production.
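For readers unfamiliar with the argument, the generic form of such a scaling law can be sketched as follows (an illustration, not the paper's derivation): if SEP event size S were tied to flare size E by a deterministic relation S ∝ E^γ and flares follow dN/dE ∝ E^{-α}, then conservation of event counts gives

    \frac{dN}{dS} = \frac{dN}{dE}\,\frac{dE}{dS} \;\propto\; S^{-\left[1 + (\alpha - 1)/\gamma\right]} ,

so any γ > 1 would flatten the SEP distribution relative to the flare distribution; the paper argues that the observed flattening need not arise from such a one-to-one mapping.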
Large-scale anisotropy of the cosmic microwave background radiation
NASA Technical Reports Server (NTRS)
Silk, J.; Wilson, M. L.
1981-01-01
Inhomogeneities in the large-scale distribution of matter inevitably lead to the generation of large-scale anisotropy in the cosmic background radiation. The dipole, quadrupole, and higher order fluctuations expected in an Einstein-de Sitter cosmological model have been computed. The dipole and quadrupole anisotropies are comparable to the measured values, and impose important constraints on the allowable spectrum of large-scale matter density fluctuations. A significant dipole anisotropy is generated by the matter distribution on scales greater than approximately 100 Mpc. The large-scale anisotropy is insensitive to the ionization history of the universe since decoupling, and cannot easily be reconciled with a galaxy formation theory that is based on primordial adiabatic density fluctuations.
CHLORINE DECAY AND BIOFILM STUDIES IN A PILOT SCALE DRINKING WATER DISTRIBUTION DEAD END PIPE SYSTEM
Chlorine decay experiments using a pilot-scale water distribution dead end pipe system were conducted to define relationships between chlorine decay and environmental factors. These included flow rate, biomass concentration and biofilm density, and initial chlorine concentrations...
Wolf, Tabea; Zimprich, Daniel
2016-10-01
The reminiscence bump phenomenon has frequently been reported for the recall of autobiographical memories. The present study complements previous research by examining individual differences in the distribution of word-cued autobiographical memories. More importantly, we introduce predictor variables that might account for individual differences in the mean (location) and the standard deviation (scale) of individual memory distributions. All variables were derived from different theoretical accounts of the reminiscence bump phenomenon. We used a mixed location-scale logitnormal model to analyse the 4602 autobiographical memories reported by 118 older participants. Results show reliable individual differences in the location and the scale. After controlling for age and gender, individual proportions of first-time experiences and individual proportions of positive memories, as well as the ratings on Openness to new Experiences and Self-Concept Clarity, accounted for 29% of individual differences in location and 42% of individual differences in scale of the autobiographical memory distributions. The results dovetail with a life-story account of the reminiscence bump, which integrates central components of previous accounts.
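A mixed location-scale model of this kind is commonly written as follows (the notation is illustrative and may differ from the authors' exact specification): with p_ij the proportion of the lifespan at which person i encoded memory j,

    \operatorname{logit}(p_{ij}) \sim \mathcal{N}\!\left(\mu_i,\; \sigma_i^2\right), \qquad
    \mu_i = \beta_0 + \mathbf{x}_i^{\top}\boldsymbol{\beta} + u_i, \qquad
    \log \sigma_i = \gamma_0 + \mathbf{x}_i^{\top}\boldsymbol{\gamma} + w_i ,

so that person-level predictors x_i (here, proportions of first-time experiences and of positive memories, Openness, and Self-Concept Clarity) enter both the location and the scale of each individual's memory distribution.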
Intrinsic fluctuations of the proton saturation momentum scale in high multiplicity p+p collisions
McLerran, Larry; Tribedy, Prithwish
2015-11-02
High multiplicity events in p+p collisions are studied using the theory of the Color Glass Condensate. Here, we show that intrinsic fluctuations of the proton saturation momentum scale are needed in addition to the sub-nucleonic color charge fluctuations to explain the very high multiplicity tail of distributions in p+p collisions. It is presumed that the origin of such intrinsic fluctuations is non-perturbative in nature. Classical Yang-Mills simulations using the IP-Glasma model are performed to make quantitative estimations. Furthermore, we find that fluctuations as large as O(1) of the average values of the saturation momentum scale can lead to rare high multiplicity events seen in p+p data at RHIC and LHC energies. Using the available data on multiplicity distributions we try to constrain the distribution of the proton saturation momentum scale and make predictions for the multiplicity distribution in 13 TeV p+p collisions.
Large-Scale Geographic Variation in Distribution and Abundance of Australian Deep-Water Kelp Forests
Marzinelli, Ezequiel M.; Williams, Stefan B.; Babcock, Russell C.; Barrett, Neville S.; Johnson, Craig R.; Jordan, Alan; Kendrick, Gary A.; Pizarro, Oscar R.; Smale, Dan A.; Steinberg, Peter D.
2015-01-01
Despite the significance of marine habitat-forming organisms, little is known about their large-scale distribution and abundance in deeper waters, where they are difficult to access. Such information is necessary to develop sound conservation and management strategies. Kelps are main habitat-formers in temperate reefs worldwide; however, these habitats are highly sensitive to environmental change. The kelp Ecklonia radiata is the major habitat-forming organism on subtidal reefs in temperate Australia. Here, we provide large-scale ecological data encompassing the latitudinal distribution along the continent of these kelp forests, which is a necessary first step towards quantitative inferences about the effects of climatic change and other stressors on these valuable habitats. We used the Autonomous Underwater Vehicle (AUV) facility of Australia’s Integrated Marine Observing System (IMOS) to survey 157,000 m2 of seabed, of which ca 13,000 m2 were used to quantify kelp covers at multiple spatial scales (10–100 m to 100–1,000 km) and depths (15–60 m) across several regions ca 2–6° latitude apart along the East and West coast of Australia. We investigated the large-scale geographic variation in distribution and abundance of deep-water kelp (>15 m depth) and their relationships with physical variables. Kelp cover generally increased with latitude despite great variability at smaller spatial scales. Maximum depth of kelp occurrence was 40–50 m. Kelp latitudinal distribution along the continent was most strongly related to water temperature and substratum availability. These extensive survey data, coupled with ongoing AUV missions, will allow for the detection of long-term shifts in the distribution and abundance of habitat-forming kelp and the organisms they support on a continental scale, and provide information necessary for successful implementation and management of conservation reserves. PMID:25693066
Shifts in Summertime Precipitation Accumulation Distributions over the US
NASA Astrophysics Data System (ADS)
Martinez-Villalobos, C.; Neelin, J. D.
2016-12-01
Precipitation accumulation, i.e., the amount of precipitation integrated over the course of an event, is a variable with both important physical and societal implications. Previous observational studies show that accumulation distributions have a characteristic shape, with an approximately power law decrease at first, followed by a sharp decrease at a characteristic large event cutoff scale. This cutoff scale is important as it limits the biggest accumulation events. Stochastic prototypes show that the resulting distributions, and importantly the large event cutoff scale, can be understood as a result of the interplay between moisture loss by precipitation and changes in moisture sinks/sources due to fluctuations in moisture divergence over the course of a precipitation event. The strength of this fluctuating moisture sink/source term is expected to increase under global warming, with both theory and climate model simulations predicting a concomitant increase in the large event cutoff scale. This cutoff scale increase has important consequences as it implies an approximately exponential increase for the largest accumulation events. Given its importance, in this study we characterize and track changes in the distribution of precipitation event accumulations over the contiguous US. Accumulation distributions are calculated using hourly precipitation data from 1700 stations, covering the 1974-2013 period over May-October. The resulting distributions largely follow the aforementioned shape, with individual cutoff scales depending on the local climate. An increase in the large event cutoff scale over this period is observed over several regions of the US, most notably over the eastern third of the US. In agreement with the increase in the cutoff, almost exponential increases in the highest accumulation percentiles occur over these regions, with increases in the 99.9th percentile in the Northeast of 70%, for example. The relationships to previously noted changes in daily precipitation and to changes in the moisture budget over this period are examined.
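The characteristic shape described above is often summarized by a power law with a large-event cutoff; a schematic form consistent with that description (not a fit quoted from this abstract) is

    p(s) \;\propto\; s^{-\tau}\, e^{-s/s_L} ,

where s is the event accumulation, τ the low-accumulation exponent and s_L the cutoff scale. A shift of the cutoff from s_L to s_L' multiplies the density at large s by roughly exp[s(1/s_L - 1/s_L')], which is why a modest cutoff increase produces the near-exponential growth of the highest accumulation percentiles noted here.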
NASA Astrophysics Data System (ADS)
Kar, Soummya; Moura, José M. F.
2011-08-01
The paper considers gossip distributed estimation of a (static) distributed random field (a.k.a. a large-scale unknown parameter vector) observed by sparsely interconnected sensors, each of which only observes a small fraction of the field. We consider linear distributed estimators whose structure combines the information flow among sensors (the consensus term resulting from the local gossiping exchange among sensors when they are able to communicate) and the information gathering measured by the sensors (the sensing or innovations term). This leads to mixed time-scale algorithms: one time scale is associated with the consensus and the other with the innovations. The paper establishes a distributed observability condition (global observability plus mean connectedness) under which the distributed estimates are consistent and asymptotically normal. We introduce the distributed notion equivalent to the (centralized) Fisher information rate, which is a bound on the mean square error reduction rate of any distributed estimator; we show that under the appropriate modeling and structural network communication conditions (gossip protocol) the distributed gossip estimator attains this distributed Fisher information rate, asymptotically achieving the performance of the optimal centralized estimator. Finally, we study the behavior of the distributed gossip estimator when the measurements fade (noise variance grows) with time; in particular, we determine the maximum rate at which the noise variance can grow while the distributed estimator remains consistent, and show that, as long as the centralized estimator is consistent, the distributed estimator is consistent as well.
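A toy sketch of a mixed time-scale consensus-plus-innovations update of the kind the abstract describes (Python; the ring communication graph, the random observation matrices and the decaying weight sequences are assumptions chosen for illustration, not the paper's construction):

    import numpy as np

    rng = np.random.default_rng(1)
    n_sensors, dim = 20, 5
    theta = rng.normal(size=dim)                    # unknown static field

    # each sensor observes only 2 of the 5 field components (sparse sensing);
    # with this many sensors, global observability is assumed to hold
    H = np.zeros((n_sensors, 2, dim))
    for i in range(n_sensors):
        idx = rng.choice(dim, size=2, replace=False)
        H[i, [0, 1], idx] = 1.0

    # ring gossip graph: sensor i exchanges estimates with i-1 and i+1
    neighbors = [((i - 1) % n_sensors, (i + 1) % n_sensors) for i in range(n_sensors)]

    x = np.zeros((n_sensors, dim))                  # local estimates
    for t in range(1, 5001):
        beta, alpha = 0.5 / t**0.6, 1.0 / t         # consensus weight decays more slowly than innovations
        y = np.einsum('ijk,k->ij', H, theta) + 0.1 * rng.normal(size=(n_sensors, 2))
        x_new = np.empty_like(x)
        for i, (a, b) in enumerate(neighbors):
            consensus = (x[a] - x[i]) + (x[b] - x[i])
            innovation = H[i].T @ (y[i] - H[i] @ x[i])
            x_new[i] = x[i] + beta * consensus + alpha * innovation
        x = x_new

    print(np.max(np.abs(x - theta)))                # all local estimates approach theta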
Rotation And Scale Invariant Object Recognition Using A Distributed Associative Memory
NASA Astrophysics Data System (ADS)
Wechsler, Harry; Zimmerman, George Lee
1988-04-01
This paper describes an approach to 2-dimensional object recognition. Complex-log conformal mapping is combined with a distributed associative memory to create a system which recognizes objects regardless of changes in rotation or scale. Recalled information from the memorized database is used to classify an object, reconstruct the memorized version of the object, and estimate the magnitude of changes in scale or rotation. The system response is resistant to moderate amounts of noise and occlusion. Several experiments, using real, gray scale images, are presented to show the feasibility of our approach.
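A minimal sketch of the complex-log (log-polar) remapping that underlies this kind of rotation- and scale-invariant pipeline (Python with NumPy; nearest-neighbour sampling and the grid sizes are simplifying assumptions, and the distributed associative memory stage is not shown):

    import numpy as np

    def complex_log_map(img, n_rho=64, n_theta=64):
        """Resample a gray-scale image onto a log-polar grid centred on the image."""
        h, w = img.shape
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        r_max = min(cx, cy)
        rho = np.exp(np.linspace(0.0, np.log(r_max), n_rho))          # log-spaced radii
        theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
        rr, tt = np.meshgrid(rho, theta, indexing='ij')
        ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
        xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
        return img[ys, xs]                                            # shape (n_rho, n_theta)

After this remapping a rotation of the input about the centre becomes a circular shift along the theta axis and a change of scale becomes a shift along the log-radius axis, which is what lets a translation-tolerant memory treat rotated or rescaled views as the same stored pattern.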
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voltolini, Marco; Kwon, Tae-Hyuk; Ajo-Franklin, Jonathan
Pore-scale distribution of supercritical CO2 (scCO2) exerts significant control on a variety of key hydrologic as well as geochemical processes, including residual trapping and dissolution. Despite such importance, only a small number of experiments have directly characterized the three-dimensional distribution of scCO2 in geologic materials during the invasion (drainage) process. Here, we present a study which couples dynamic high-resolution synchrotron X-ray micro-computed tomography imaging of a scCO2/brine system at in situ pressure/temperature conditions with quantitative pore-scale modeling to allow direct validation of a pore-scale description of scCO2 distribution. The experiment combines high-speed synchrotron radiography with tomography to characterize the brine-saturated sample, the scCO2 breakthrough process, and the partially saturated state of a sandstone sample from the Domengine Formation, a regionally extensive unit within the Sacramento Basin (California, USA). The availability of a 3D dataset allowed us to examine correlations between grain and pore morphometric parameters and the actual distribution of scCO2 in the sample, including the examination of the role of small-scale sedimentary structure on CO2 distribution. The segmented scCO2/brine volume was also used to validate a simple computational model based on the local thickness concept, able to accurately simulate the distribution of scCO2 after drainage. The same method was also used to simulate Hg capillary pressure curves with satisfactory results when compared to the measured ones. Finally, this predictive approach, requiring only a tomographic scan of the dry sample, proved to be an effective route for studying processes related to CO2 invasion structure in geological samples at the pore scale.
Torné-Noguera, Anna; Rodrigo, Anselm; Arnan, Xavier; Osorio, Sergio; Barril-Graells, Helena; da Rocha-Filho, Léo Correia; Bosch, Jordi
2014-01-01
Understanding biodiversity distribution is a primary goal of community ecology. At a landscape scale, bee communities are affected by habitat composition, anthropogenic land use, and fragmentation. However, little information is available on local-scale spatial distribution of bee communities within habitats that are uniform at the landscape scale. We studied a bee community along with floral and nesting resources over a 32 km2 area of uninterrupted Mediterranean scrubland. Our objectives were (i) to analyze floral and nesting resource composition at the habitat scale. We ask whether these resources follow a geographical pattern across the scrubland at bee-foraging relevant distances; (ii) to analyze the distribution of bee composition across the scrubland. Bees being highly mobile organisms, we ask whether bee composition shows a homogeneous distribution or else varies spatially. If so, we ask whether this variation is irregular or follows a geographical pattern and whether bees respond primarily to flower or to nesting resources; and (iii) to establish whether body size influences the response to local resource availability and ultimately spatial distribution. We obtained 6580 specimens belonging to 98 species. Despite bee mobility and the absence of environmental barriers, our bee community shows a clear geographical pattern. This pattern is mostly attributable to heterogeneous distribution of small (<55 mg) species (with presumed smaller foraging ranges), and is mostly explained by flower resources rather than nesting substrates. Even then, a large proportion (54.8%) of spatial variability remains unexplained by flower or nesting resources. We conclude that bee communities are strongly conditioned by local effects and may exhibit spatial heterogeneity patterns at a scale as low as 500–1000 m in patches of homogeneous habitat. These results have important implications for local pollination dynamics and spatial variation of plant-pollinator networks. PMID:24824445
Milovanovic, Petar; Vukovic, Zorica; Antonijevic, Djordje; Djonic, Danijela; Zivkovic, Vladimir; Nikolic, Slobodan; Djuric, Marija
2017-05-01
Bone is a remarkable biological nanocomposite material showing peculiar hierarchical organization from smaller (nano, micro) to larger (macro) length scales. Increased material porosity is considered as the main feature of fragile bone at larger length scales. However, there is a shortage of quantitative information on bone porosity at smaller length scales, as well as on the distribution of pore sizes in healthy vs. fragile bone. Therefore, here we investigated how healthy and fragile bones differ in pore volume and pore size distribution patterns, considering a wide range of mostly neglected pore sizes from nano to micron length scales (7.5 to 15000 nm). Cortical bone specimens from four young healthy women (age: 35 ± 6 years) and five women with bone fracture (age: 82 ± 5 years) were analyzed by mercury porosimetry. Our findings showed that, surprisingly, fragile bone demonstrated lower pore volume at the measured scales. Furthermore, pore size distribution showed differential patterns between healthy and fragile bones, where healthy bone showed an especially high proportion of pores between 200 and 15000 nm. Therefore, although fragile bones are known for increased porosity at the macroscopic level and at the level of tens or hundreds of microns, as firmly established in the literature, our study with a unique assessment range of nano- to micron-sized pores reveals that osteoporosis does not imply increased porosity at all length scales. Our thorough assessment of bone porosity reveals a specific distribution of porosities at smaller length scales and contributes to a proper understanding of bone structure, which is important for designing new biomimetic bone substitute materials.
Dispersion upscaling from a pore scale characterization of Lagrangian velocities
NASA Astrophysics Data System (ADS)
Turuban, Régis; de Anna, Pietro; Jiménez-Martínez, Joaquín; Tabuteau, Hervé; Méheust, Yves; Le Borgne, Tanguy
2013-04-01
Mixing and reactive transport are primarily controlled by the interplay between diffusion, advection and reaction at the pore scale. Yet, how the distribution and spatial correlation of the velocity field at the pore scale impact these processes is still an open question. Here we present an experimental investigation of the distribution and correlation of pore-scale velocities and its relation with upscaled dispersion. We use a quasi two-dimensional (2D) horizontal setup, consisting of two glass plates filled with cylinders representing the grains of the porous medium: the cell is built by a soft lithography technique, which allows for full control of the system geometry. The local velocity field is quantified from particle tracking velocimetry using microspheres that are advected with the pore-scale flow. Their displacement is purely advective, as the particle size is chosen large enough to avoid diffusion. We thus obtain particle trajectories as well as Lagrangian velocities in the entire system. The measured velocity field shows the existence of a network of preferential flow paths in channels with high velocities, as well as very low velocities in stagnation zones, with a non-Gaussian distribution. Lagrangian velocities are long-range correlated in time, which implies a non-Fickian scaling of the longitudinal variance of particle positions. To upscale this process we develop an effective transport model, based on a correlated continuous time random walk, which is entirely parametrized by the pore-scale velocity distribution and correlation. The model predictions are compared with conservative tracer test data for different Peclet numbers. Furthermore, we investigate the impact of different pore geometries on the distribution and correlation of Lagrangian velocities and we discuss the link between these properties and the effective dispersion behavior.
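A toy version of such a correlated continuous time random walk might look as follows (Python; the AR(1) model for successive log-velocities and the fixed pore-length step are assumptions standing in for the measured velocity distribution and correlation):

    import numpy as np

    rng = np.random.default_rng(0)

    def correlated_ctrw(n_particles=10_000, n_steps=200, corr=0.8, step_length=1.0):
        """Each particle advances one pore length per step; the transit time of a step
        is the step length divided by a speed whose logarithm follows an AR(1) chain."""
        logv = rng.normal(size=n_particles)              # initial log-speeds
        arrival_time = np.zeros(n_particles)
        for _ in range(n_steps):
            arrival_time += step_length / np.exp(logv)
            # AR(1) update keeps successive speeds correlated along each trajectory
            logv = corr * logv + np.sqrt(1.0 - corr**2) * rng.normal(size=n_particles)
        return arrival_time

    t = correlated_ctrw()
    # the spread of arrival times after a fixed travel distance plays the role of a
    # breakthrough curve; stronger correlation (corr closer to 1) gives heavier tails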
Universal scaling law in human behavioral organization.
Nakamura, Toru; Kiyono, Ken; Yoshiuchi, Kazuhiro; Nakahara, Rika; Struzik, Zbigniew R; Yamamoto, Yoshiharu
2007-09-28
We describe the nature of human behavioral organization, specifically how resting and active periods are interwoven throughout daily life. Active period durations with physical activity count successively above a predefined threshold, when rescaled with individual means, follow a universal stretched exponential (gamma-type) cumulative distribution with characteristic time, both in healthy individuals and in patients with major depressive disorder. On the other hand, resting period durations below the threshold for both groups obey a scale-free power-law cumulative distribution over two decades, with significantly lower scaling exponents in the patients. We thus find universal distribution laws governing human behavioral organization, with a parameter altered in depression.
The large-scale distribution of galaxies
NASA Technical Reports Server (NTRS)
Geller, Margaret J.
1989-01-01
The spatial distribution of galaxies in the universe is characterized on the basis of the six completed strips of the Harvard-Smithsonian Center for Astrophysics redshift-survey extension. The design of the survey is briefly reviewed, and the results are presented graphically. Vast low-density voids similar to the void in Bootes are found, almost completely surrounded by thin sheets of galaxies. Also discussed are the implications of the results for the survey sampling problem, the two-point correlation function of the galaxy distribution, the possibility of detecting large-scale coherent flows, theoretical models of large-scale structure, and the identification of groups and clusters of galaxies.
Representation of sub-element scale variability in snow accumulation and ablation is increasingly recognized as important in distributed hydrologic modelling. Representing sub-grid scale variability may be accomplished through numerical integration of a nested grid or through a l...
NASA Astrophysics Data System (ADS)
Ji, Yu; Sheng, Wanxing; Jin, Wei; Wu, Ming; Liu, Haitao; Chen, Feng
2018-02-01
A coordinated optimal control method for the active and reactive power of a distribution network with distributed PV clusters, based on model predictive control, is proposed in this paper. The method divides the control process into long-time-scale optimal control and short-time-scale optimal control with multi-step optimization. Because the optimization models are non-convex and nonlinear and therefore hard to solve, they are transformed into a second-order cone programming problem. An improved IEEE 33-bus distribution network system is used to analyse the feasibility and the effectiveness of the proposed control method.
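A minimal sketch of the second-order cone reformulation on a single line segment, using the branch-flow (DistFlow) relaxation (Python with cvxpy; the two-bus feeder, the numerical parameters and the inverter limit are hypothetical, and the multi-step predictive layer of the proposed method is omitted):

    import cvxpy as cp

    # Hypothetical two-bus feeder: the substation (bus 0) supplies one bus that
    # hosts both a load and a PV inverter; all quantities in per unit.
    r, x_line = 0.05, 0.03        # line resistance and reactance
    p_load, q_load = 0.8, 0.3     # demand at bus 1
    p_pv_max, s_inv = 0.5, 0.6    # available PV power and inverter rating
    v0 = 1.0                      # squared substation voltage, held fixed

    P = cp.Variable()             # active power leaving bus 0
    Q = cp.Variable()             # reactive power leaving bus 0
    l = cp.Variable(nonneg=True)  # squared line current
    v1 = cp.Variable()            # squared voltage at bus 1
    p_pv = cp.Variable(nonneg=True)
    q_pv = cp.Variable()

    constraints = [
        P - r * l == p_load - p_pv,                      # DistFlow active power balance
        Q - x_line * l == q_load - q_pv,                 # DistFlow reactive power balance
        v1 == v0 - 2 * (r * P + x_line * Q) + (r**2 + x_line**2) * l,
        cp.sum_squares(cp.hstack([P, Q])) <= l * v0,     # SOC relaxation of l = (P^2 + Q^2)/v0
        p_pv <= p_pv_max,
        cp.square(p_pv) + cp.square(q_pv) <= s_inv**2,   # inverter capability circle
        v1 >= 0.95**2, v1 <= 1.05**2,                    # voltage limits
    ]
    prob = cp.Problem(cp.Minimize(r * l), constraints)   # minimise line losses
    prob.solve()
    print(prob.status, p_pv.value, q_pv.value)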
Statistics of the relative velocity of particles in turbulent flows: Monodisperse particles
NASA Astrophysics Data System (ADS)
Bhatnagar, Akshay; Gustavsson, K.; Mitra, Dhrubaditya
2018-02-01
We use direct numerical simulations to calculate the joint probability density function of the relative distance R and relative radial velocity component V_R for a pair of heavy inertial particles suspended in homogeneous and isotropic turbulent flows. At small scales the distribution is scale invariant, with a scaling exponent that is related to the particle-particle correlation dimension in phase space, D_2. It was argued [K. Gustavsson and B. Mehlig, Phys. Rev. E 84, 045304 (2011), 10.1103/PhysRevE.84.045304; J. Turbul. 15, 34 (2014), 10.1080/14685248.2013.875188] that the scale invariant part of the distribution has two asymptotic regimes: (1) |V_R| ≪ R, where the distribution depends solely on R, and (2) |V_R| ≫ R, where the distribution is a function of |V_R| alone. The probability distributions in these two regimes are matched along a straight line: |V_R| = z*R. Our simulations confirm that this is indeed correct. We further obtain D_2 and z* as a function of the Stokes number, St. The former depends nonmonotonically on St with a minimum at about St ≈ 0.7 and the latter has only a weak dependence on St.
Large-scale runoff generation - parsimonious parameterisation using high-resolution topography
NASA Astrophysics Data System (ADS)
Gong, L.; Halldin, S.; Xu, C.-Y.
2011-08-01
World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models have generally been shown to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. Recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically-based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics. The TRG algorithm is driven by the HydroSHEDS dataset with a resolution of 3" (around 90 m at the equator). The TRG algorithm was validated against the VIC algorithm in a common model framework in 3 river basins in different climates. The TRG algorithm performed equally well or marginally better than the VIC algorithm with one less parameter to be calibrated. The TRG algorithm also lacked equifinality problems and offered a realistic spatial pattern for runoff generation and evaporation.
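A schematic of the topographic-analysis step that such an algorithm relies on (Python; the linear decrease of free storage with the topographic index and the single scaling parameter are assumptions meant only to illustrate how storage can be distributed over index classes, not the TRG formulation itself):

    import numpy as np

    def storage_over_index_classes(upslope_area, slope, s_scale=250.0, n_classes=20):
        """upslope_area: specific catchment area a (m) per DEM cell; slope: tan(beta).
        Returns class mid-points of the topographic index ln(a/tan(beta)), the areal
        fraction of the grid cell in each class, and an assumed storage capacity that
        decreases with wetness index, scaled by the single parameter s_scale (mm)."""
        ti = np.log(upslope_area / np.maximum(slope, 1e-6))
        edges = np.linspace(ti.min(), ti.max(), n_classes + 1)
        counts, _ = np.histogram(ti, bins=edges)
        area_frac = counts / counts.sum()
        ti_mid = 0.5 * (edges[:-1] + edges[1:])
        capacity = s_scale * (ti.max() - ti_mid) / (ti.max() - ti.min())
        return ti_mid, area_frac, capacity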
NASA Technical Reports Server (NTRS)
Zemba, Michael; Nessel, James; Houts, Jacquelynne; Luini, Lorenzo; Riva, Carlo
2016-01-01
The rain rate data and statistics of a location are often used in conjunction with models to predict rain attenuation. However, the true attenuation is a function not only of rain rate, but also of the drop size distribution (DSD). Generally, models utilize an average drop size distribution (Laws and Parsons or Marshall and Palmer [1]). However, individual rain events may deviate from these models significantly if their DSD is not well approximated by the average. Therefore, characterizing the relationship between the DSD and attenuation is valuable in improving modeled predictions of rain attenuation statistics. The DSD may also be used to derive the instantaneous frequency scaling factor and thus validate frequency scaling models. Since June of 2014, NASA Glenn Research Center (GRC) and the Politecnico di Milano (POLIMI) have jointly conducted a propagation study in Milan, Italy utilizing the 20 and 40 GHz beacon signals of the Alphasat TDP#5 Aldo Paraboni payload. The Ka- and Q-band beacon receivers provide a direct measurement of the signal attenuation while concurrent weather instrumentation provides measurements of the atmospheric conditions at the receiver. Among these instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which yields droplet size distributions (DSD); this DSD information can be used to derive a scaling factor that scales the measured 20 GHz data to expected 40 GHz attenuation. Given the capability to both predict and directly observe 40 GHz attenuation, this site is uniquely situated to assess and characterize such predictions. Previous work using this data has examined the relationship between the measured drop-size distribution and the measured attenuation of the link [2]. The focus of this paper now turns to a deeper analysis of the scaling factor, including the prediction error as a function of attenuation level, correlation between the scaling factor and the rain rate, and the temporal variability of the drop size distribution both within a given rain event and across different varieties of rain events. Index Terms-drop size distribution, frequency scaling, propagation losses, radiowave propagation.
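As an illustration of how disdrometer DSD data enter such analyses, the sketch below integrates an assumed exponential (Marshall-Palmer-like) DSD into a rain rate using the Atlas et al. (1973) fall-speed fit; the bin layout and DSD parameters are hypothetical, and the attenuation and frequency-scaling calculations themselves are not reproduced here:

    import numpy as np

    # hypothetical disdrometer bins: diameters in mm, concentrations in m^-3 mm^-1
    D = np.linspace(0.25, 6.0, 24)
    dD = D[1] - D[0]
    N_D = 8000.0 * np.exp(-2.3 * D)          # exponential DSD (assumed parameters)

    v = 9.65 - 10.3 * np.exp(-0.6 * D)       # Atlas et al. (1973) terminal fall speed, m/s

    # rain rate (mm/h) from the volume flux of drops through a horizontal surface
    R = 6.0 * np.pi * 1e-4 * np.sum(D**3 * v * N_D * dD)
    print(round(R, 1), "mm/h")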
Formation of large-scale structure from cosmic strings and massive neutrinos
NASA Technical Reports Server (NTRS)
Scherrer, Robert J.; Melott, Adrian L.; Bertschinger, Edmund
1989-01-01
Numerical simulations of large-scale structure formation from cosmic strings and massive neutrinos are described. The linear power spectrum in this model resembles the cold-dark-matter power spectrum. Galaxy formation begins early, and the final distribution consists of isolated density peaks embedded in a smooth background, leading to a natural bias in the distribution of luminous matter. The distribution of clustered matter has a filamentary appearance with large voids.
Ultra-Wideband Radar Transient Detection using Time-Frequency and Wavelet Transforms.
1992-12-01
Time-frequency methods, such as the short-time Fourier transform (STFT), the Instantaneous Power Spectrum and the Wigner-Ville distribution [1], and time-scale methods, such as the à trous wavelet transform, are applied to the detection of transients in ultra-wideband radar returns.
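Because the abstract survives only in fragments, a brief sketch of the discrete Wigner-Ville distribution it refers to may be useful (Python; this is a textbook pseudo-WVD of the analytic signal, not the report's wvd.m implementation):

    import numpy as np
    from scipy.signal import hilbert

    def wigner_ville(x):
        """Pseudo Wigner-Ville distribution of a real 1-D signal (rows: time, cols: frequency)."""
        z = hilbert(x)                               # analytic signal to limit aliasing
        N = len(z)
        W = np.zeros((N, N))
        for n in range(N):
            lag_max = min(n, N - 1 - n)              # largest symmetric lag about sample n
            taus = np.arange(-lag_max, lag_max + 1)
            r = z[n + taus] * np.conj(z[n - taus])   # instantaneous autocorrelation
            kernel = np.zeros(N, dtype=complex)
            kernel[taus % N] = r                     # wrap negative lags for the FFT
            W[n, :] = np.real(np.fft.fft(kernel))    # FFT over lag gives the frequency axis
        return W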
Similarity Scaling for the Inner Region of the Turbulent Boundary Layer
2009-11-20
Experimental Study on Scale-Up of Solid-Liquid Stirred Tank with an Intermig Impeller
NASA Astrophysics Data System (ADS)
Zhao, Hongliang; Zhao, Xing; Zhang, Lifeng; Yin, Pan
2017-02-01
The scale-up of a solid-liquid stirred tank with an Intermig impeller was characterized via experiments. Solid concentration, impeller just-off-bottom speed and power consumption were measured in stirred tanks of different scales. The scale-up criteria for achieving the same effect of solid suspension in small-scale and large-scale vessels were evaluated. The solids distribution improves if the operating conditions are held constant as the tank is scaled up. The just-off-bottom impeller speed results gave X = 0.868 in the scale-up relationship N·D^X = constant. Based on this criterion, the stirring power per unit volume decreased noticeably at N = N_js, and the power number (N_P) was approximately equal to 0.3 when the solids are uniformly distributed in the vessels.
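A worked consequence of the reported exponent, assuming geometric similarity and a constant power number (the factor-of-two scale ratio is only an example):

    N_2 = N_1 \left(\frac{D_1}{D_2}\right)^{0.868}, \qquad
    \frac{P}{V} \propto N^3 D^2 \;\Rightarrow\; \frac{P}{V} \propto D^{\,2 - 3 \times 0.868} = D^{-0.60} ,

so doubling the impeller (and tank) diameter lowers the just-off-bottom speed by a factor of 2^0.868 ≈ 1.8 and reduces the specific power at N_js by roughly 2^0.60 ≈ 1.5, consistent with the observed decrease of power per unit volume on scale-up.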
Male group size, female distribution and changes in sexual segregation by Roosevelt elk
Peterson, Leah M.
2017-01-01
Sexual segregation, or the differential use of space by males and females, is hypothesized to be a function of body size dimorphism. Sexual segregation can also manifest at small (social segregation) and large (habitat segregation) spatial scales for a variety of reasons. Furthermore, the connection between small- and large-scale sexual segregation has rarely been addressed. We studied a population of Roosevelt elk (Cervus elaphus roosevelti) across 21 years in north coastal California, USA, to assess small- and large-scale sexual segregation in winter. We hypothesized that male group size would associate with small-scale segregation and that a change in female distribution would associate with large-scale segregation. Variation in forage biomass might also be coupled to small and large-scale sexual segregation. Our findings were consistent with male group size associating with small-scale segregation and a change in female distribution associating with large-scale segregation. Females appeared to avoid large groups comprised of socially dominant males. Males appeared to occupy a habitat vacated by females because of a wider forage niche, greater tolerance to lethal risks, and, perhaps, to reduce encounters with other elk. Sexual segregation at both spatial scales was a poor predictor of forage biomass. Size dimorphism was coupled to change in sexual segregation at small and large spatial scales. Small scale segregation can seemingly manifest when all forage habitat is occupied by females and large scale segregation might happen when some forage habitat is not occupied by females. PMID:29121076
Scaling earthquake ground motions for performance-based assessment of buildings
Huang, Y.-N.; Whittaker, A.S.; Luco, N.; Hamburger, R.O.
2011-01-01
The impact of alternate ground-motion scaling procedures on the distribution of displacement responses in simplified structural systems is investigated. Recommendations are provided for selecting and scaling ground motions for performance-based assessment of buildings. Four scaling methods are studied, namely, (1) geometric-mean scaling of pairs of ground motions, (2) spectrum matching of ground motions, (3) first-mode-period scaling to a target spectral acceleration, and (4) scaling of ground motions per the distribution of spectral demands. Data were developed by nonlinear response-history analysis of a large family of nonlinear single-degree-of-freedom (SDOF) oscillators that could represent fixed-base and base-isolated structures. The advantages and disadvantages of each scaling method are discussed. The relationship between spectral shape and a ground-motion randomness parameter is presented. A scaling procedure that explicitly considers spectral shape is proposed. © 2011 American Society of Civil Engineers.
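A sketch of the simplest of the four procedures in code form (Python; the interpolation, the period range and the log-average fit for the paired-component case are implementation assumptions, not the paper's exact recipe):

    import numpy as np

    def first_mode_scale_factor(periods, sa_record, sa_target, t1):
        """Method (3): amplitude-scale a record so its spectral acceleration at the
        first-mode period T1 matches the target spectrum (periods assumed ascending)."""
        return np.interp(t1, periods, sa_target) / np.interp(t1, periods, sa_record)

    def geometric_mean_scale_factor(periods, sa_h1, sa_h2, sa_target, t_range):
        """Method (1) in spirit: scale a pair of horizontal components so the geometric
        mean of their spectra matches the target over a period range, in a
        least-squares-in-log sense."""
        mask = (periods >= t_range[0]) & (periods <= t_range[1])
        geo_mean = np.sqrt(sa_h1 * sa_h2)
        return np.exp(np.mean(np.log(sa_target[mask]) - np.log(geo_mean[mask])))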
Simulating Bubble Plumes from Breaking Waves with a Forced-Air Venturi
NASA Astrophysics Data System (ADS)
Long, M. S.; Keene, W. C.; Maben, J. R.; Chang, R. Y. W.; Duplessis, P.; Kieber, D. J.; Beaupre, S. R.; Frossard, A. A.; Kinsey, J. D.; Zhu, Y.; Lu, X.; Bisgrove, J.
2017-12-01
It has been hypothesized that the size distribution of bubbles in subsurface seawater is a major factor that modulates the corresponding size distribution of primary marine aerosol (PMA) generated when those bubbles burst at the air-water interface. A primary physical control of the bubble size distribution produced by wave breaking is the associated turbulence that disintegrates larger bubbles into smaller ones. This leads to two characteristic features of bubble size distributions: (1) the Hinze scale which reflects a bubble size above which disintegration is possible based on turbulence intensity and (2) the slopes of log-linear regressions of the size distribution on either side of the Hinze scale that indicate the state of plume evolution or age. A Venturi with tunable seawater and forced air flow rates was designed and deployed in an artificial PMA generator to produce bubble plumes representative of breaking waves. This approach provides direct control of turbulence intensity and, thus, the resulting bubble size distribution characterizable by observations of the Hinze scale and the simulated plume age over a range of known air detrainment rates. Evaluation of performance in different seawater types over the western North Atlantic demonstrated that the Venturi produced bubble plumes with parameter values that bracket the range of those observed in laboratory and field experiments. Specifically, the seawater flow rate modulated the value of the Hinze scale while the forced-air flow rate modulated the plume age parameters. Results indicate that the size distribution of sub-surface bubbles within the generator did not significantly modulate the corresponding number size distribution of PMA produced via bubble bursting.
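For reference, the classic Hinze (1955) estimate of the largest bubble that turbulence with dissipation rate ε can keep from splitting, quoted here as background rather than as a result of this study, is

    d_{\max} \approx 0.725 \left(\frac{\sigma}{\rho}\right)^{3/5} \varepsilon^{-2/5} ,

with σ the surface tension and ρ the water density; raising the turbulence intensity (here, the seawater flow rate through the Venturi) therefore shifts the Hinze scale toward smaller bubbles, while the forced-air rate mainly changes the slopes on either side of it, i.e. the apparent plume age.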
Impervious surface is known to negatively affect catchment hydrology through both its extent and spatial distribution. In this study, we empirically quantify via model simulations the impacts of different configurations of impervious surface on watershed response to rainfall. An ...
Tsallis Entropy and the Transition to Scaling in Fragmentation
NASA Astrophysics Data System (ADS)
Sotolongo-Costa, Oscar; Rodriguez, Arezky H.; Rodgers, G. J.
2000-12-01
By using the maximum entropy principle with Tsallis entropy we obtain a fragment size distribution function which undergoes a transition to scaling. This distribution function reduces to those obtained by other authors using Shannon entropy. The treatment is easily generalisable to any process of fractioning with suitable constraints.
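For context, the entropy being maximized and the resulting q-exponential fragment-size law take the standard Tsallis-statistics forms (quoted as textbook expressions, not transcribed from the paper):

    S_q = \frac{1 - \sum_i p_i^{\,q}}{q - 1}, \qquad
    p(x) \;\propto\; \left[1 - (1 - q)\,\beta x\right]^{\frac{1}{1 - q}} \;\xrightarrow{\;q \to 1\;}\; e^{-\beta x} ,

so the non-extensivity parameter q controls the crossover from an exponential (Shannon-limit) fragment-size distribution to one with a power-law tail, which is the transition to scaling referred to here.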
Growing magma chambers control the distribution of small-scale flood basalts.
Yu, Xun; Chen, Li-Hui; Zeng, Gang
2015-11-19
Small-scale continental flood basalts are a global phenomenon characterized by regular spatio-temporal distributions. However, no genetic mechanism has been proposed to explain the visible but overlooked distribution patterns of this continental basaltic volcanism. Here we present a case study from eastern China, combining major and trace element analyses with Ar-Ar and K-Ar dating to show that the spatio-temporal distribution of small-scale flood basalts is controlled by the growth of long-lived magma chambers. Evolved basalts (SiO2 > 47.5 wt.%) from Xinchang-Shengzhou, a small-scale Cenozoic flood basalt field in Zhejiang province, eastern China, show a northward younging trend over the period 9.4-3.0 Ma. With northward migration, the magmas evolved only slightly ((Na2O + K2O)/MgO = 0.40-0.66; TiO2/MgO = 0.23-0.35) during about 6 Myr (9.4-3.3 Ma). When the flood basalts reached the northern end of the province, the magmas evolved rapidly (3.3-3.0 Ma) through a broad range of compositions ((Na2O + K2O)/MgO = 0.60-1.28; TiO2/MgO = 0.30-0.57). The distribution and two-stage compositional evolution of the migrating flood basalts record continuous magma replenishment that buffered against magmatic evolution and induced magma chamber growth. Our results demonstrate that the magma replenishment-magma chamber growth model explains the spatio-temporal distribution of small-scale flood basalts.
Burns, K C; Zotz, G
2010-02-01
Epiphytes are an important component of many forested ecosystems, yet our understanding of epiphyte communities lags far behind that of terrestrial-based plant communities. This discrepancy is exacerbated by the lack of a theoretical context to assess patterns in epiphyte community structure. We attempt to fill this gap by developing an analytical framework to investigate epiphyte assemblages, which we then apply to a data set on epiphyte distributions in a Panamanian rain forest. On a coarse scale, interactions between epiphyte species and host tree species can be viewed as bipartite networks, similar to pollination and seed dispersal networks. On a finer scale, epiphyte communities on individual host trees can be viewed as meta-communities, or suites of local epiphyte communities connected by dispersal. Similar analytical tools are typically employed to investigate species interaction networks and meta-communities, thus providing a unified analytical framework to investigate coarse-scale (network) and fine-scale (meta-community) patterns in epiphyte distributions. Coarse-scale analysis of the Panamanian data set showed that most epiphyte species interacted with fewer host species than expected by chance. Fine-scale analyses showed that epiphyte species richness on individual trees was lower than null model expectations. Therefore, epiphyte distributions were clumped at both scales, perhaps as a result of dispersal limitations. Scale-dependent patterns in epiphyte species composition were observed. Epiphyte-host networks showed evidence of negative co-occurrence patterns, which could arise from adaptations among epiphyte species to avoid competition for host species, while most epiphyte meta-communities were distributed at random. Application of our "meta-network" analytical framework in other locales may help to identify general patterns in the structure of epiphyte assemblages and their variation in space and time.
NASA Astrophysics Data System (ADS)
Zhang, Daili
Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system level requirements: robustness, flexibility, reusability, and scalability. Corresponding to the four system level requirements, there arise four major challenges. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method as an implementation of distributed intelligent control has been the focus of research since the 1970s, in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the author's best knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent to agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, with the focus on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and module control architecture with logical replication rings. First, it decomposes a complex system hierarchically; second, it combines the components in the same level as a module, and then designs common interfaces for all of the components in the same module; third, replications are made for critical agents and are organized into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs) as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle uncertainties of general large-scale complex systems. MSDBNs decomposes a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure, satisfying the running intersection property and d-sep set requirements, globally consistent inferences are achievable in a distributed way. By using different frequencies for local DBN agent belief updating and global system belief updating, it balances the communication cost with the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system. 
However, for a real system, sub-Bayesian networks as nodes could be lost, and the communication network could be shut down due to partial damage in the system. Therefore, on-line and automatic MSDBNs structure formation is necessary for making robust state estimations and increasing survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms and a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for a general large-scale complex system. As applications of the proposed methodology, the control system designs of a simplified ship chilled water system and a notional ship chilled water system have been demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems in dynamic and uncertain environments, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.
NASA Technical Reports Server (NTRS)
Nessel, James; Zemba, Michael; Luini, Lorenzo; Riva, Carlo
2015-01-01
Rain attenuation is strongly dependent on the rain rate, but also on the rain drop size distribution (DSD). Typically, models utilize an average drop size distribution, such as those developed by Laws and Parsons, or Marshall and Palmer. However, individual rain events may possess drop size distributions which could be significantly different from the average and will impact, for example, fade mitigation techniques which utilize channel performance estimates from a signal at a different frequency. Therefore, a good understanding of the characteristics and variability of the raindrop size distribution is extremely important in predicting rain attenuation and instantaneous frequency scaling parameters on an event-to-event basis. Since June 2014, NASA Glenn Research Center (GRC) and the Politecnico di Milano (POLIMI) have measured the attenuation due to rain in Milan, Italy, on the 20/40 GHz beacon signal broadcast from the Alphasat TDP#5 Aldo Paraboni Q/V-band Payload. Concomitant with these measurements are the measurements of drop size distribution and rain rate utilizing a Thies Clima laser precipitation monitor (disdrometer). In this paper, we discuss the comparison of the predicted rain attenuation at 20 and 40 GHz derived from the drop size distribution data with the measured rain attenuation. The results are compared on statistical and real-time bases. We will investigate the performance of the rain attenuation model, instantaneous frequency scaling, and the distribution of the scaling factor. Further, seasonal rain characteristics will be analysed.
Guo, Yao-xin; Kang, Bing; Li, Gang; Wang, De-xiang; Yang, Gai-he; Wang, Da-wei
2011-10-01
An investigation was conducted on the species composition and population diameter-class structure of a typical secondary Betula albo-sinensis forest in Xiaolongshan of the west Qinling Mountains, and the spatial distribution pattern and interspecific correlations of the main populations were analyzed at multiple scales by the O-ring functions of single and double variables. In the test forest, B. albo-sinensis was clearly dominant, but the analysis of DBH class distribution showed that B. albo-sinensis seedlings were scarce and natural regeneration was very poor. On the contrary, the regeneration of Abies fargesii and Populus davidiana was good. B. albo-sinensis and Salix matsudana had a random distribution at almost all scales, while A. fargesii and P. davidiana were significantly clumped at small scale. B. albo-sinensis had positive correlations with A. fargesii and P. davidiana at medium scale, whereas S. matsudana had negative correlations with B. albo-sinensis, A. fargesii, and P. davidiana at small scale. No significant correlations were observed between the other species. The findings suggested that the spatial distribution patterns of the tree species depended on their biological characteristics at small scale, but on environmental heterogeneity at larger scales. In the near future, B. albo-sinensis would remain dominant, but from a long-term view, artificial measures would be necessary to improve the regeneration of B. albo-sinensis.
Diversity of individual mobility patterns and emergence of aggregated scaling laws
Yan, Xiao-Yong; Han, Xiao-Pu; Wang, Bing-Hong; Zhou, Tao
2013-01-01
Uncovering human mobility patterns is of fundamental importance to the understanding of epidemic spreading, urban transportation and other socioeconomic dynamics embodying spatiality and human travel. According to the direct travel diaries of volunteers, we show the absence of scaling properties in the displacement distribution at the individual level, while the aggregated displacement distribution follows a power law with an exponential cutoff. Given the constraint on total travelling cost, this aggregated scaling law can be analytically predicted by the mixture nature of human travel under the principle of maximum entropy. A direct corollary of such theory is that the displacement distribution of a single mode of transportation should follow an exponential law, which is also supported by empirical data. We thus conclude that the travelling cost shapes the displacement distribution at the aggregated level. PMID:24045416
Data Recovery of Distributed Hash Table with Distributed-to-Distributed Data Copy
NASA Astrophysics Data System (ADS)
Doi, Yusuke; Wakayama, Shirou; Ozaki, Satoshi
To realize huge-scale information services, many Distributed Hash Table (DHT) based systems have been proposed. For example, there are some proposals to manage item-level product traceability information with DHTs. In such an application, each entry of a huge number of item-level IDs needs to be available on a DHT. To ensure data availability, the soft-state approach has been employed in previous works. However, this does not scale well with the number of entries on a DHT. As we expect 10^10 products in the traceability case, the soft-state approach is unacceptable. In this paper, we propose Distributed-to-Distributed Data Copy (D3C). With D3C, users can reconstruct the data as they detect data loss, or even migrate to another DHT system. We show why it scales well with the number of entries on a DHT. We have confirmed our approach with a prototype. Evaluation shows our approach fits well on a DHT with a low rate of failure and a huge number of data entries.
[Effects of sampling plot number on tree species distribution prediction under climate change].
Liang, Yu; He, Hong-Shi; Wu, Zhi-Wei; Li, Xiao-Na; Luo, Xu
2013-05-01
Based on the neutral landscapes under different degrees of landscape fragmentation, this paper studied the effects of sampling plot number on the prediction of tree species distribution at landscape scale under climate change. The tree species distribution was predicted by the coupled modeling approach which linked an ecosystem process model with a forest landscape model, and three contingent scenarios and one reference scenario of sampling plot numbers were assumed. The differences between the three scenarios and the reference scenario under different degrees of landscape fragmentation were tested. The results indicated that the effects of sampling plot number on the prediction of tree species distribution depended on the tree species life history attributes. For the generalist species, the prediction of their distribution at landscape scale needed more plots. Except for the extreme specialist, landscape fragmentation degree also affected the effects of sampling plot number on the prediction. With the increase of simulation period, the effects of sampling plot number on the prediction of tree species distribution at landscape scale could be changed. For generalist species, more plots are needed for the long-term simulation.
Enterprise PACS and image distribution.
Huang, H K
2003-01-01
Around the world, driven by the need to improve operational efficiency and deliver more cost-effective healthcare, many large-scale healthcare enterprises have been formed. Each of these enterprises groups hospitals, medical centers, and clinics together as one enterprise healthcare network. The management of these enterprises recognizes the importance of using PACS and image distribution as a key technology for cost-effective healthcare delivery at the enterprise level. As a result, many large-scale enterprise-level PACS/image distribution pilot studies and full designs and implementations are underway. The purpose of this paper is to provide readers with an overall view of the current status of enterprise PACS and image distribution. It reviews three large-scale enterprise PACS/image distribution systems in the USA, Germany, and South Korea. The concept of enterprise-level PACS/image distribution, its characteristics and ingredients are then discussed. Business models for enterprise-level implementation available from the private medical imaging and system integration industry are highlighted. One system currently under development, an enterprise-level chest tuberculosis (TB) screening service in Hong Kong, is described in detail. Copyright 2002 Elsevier Science Ltd.
Scaling exponent and dispersity of polymers in solution by diffusion NMR.
Williamson, Nathan H; Röding, Magnus; Miklavcic, Stanley J; Nydén, Magnus
2017-05-01
Molecular mass distribution measurements by pulsed gradient spin echo nuclear magnetic resonance (PGSE NMR) spectroscopy currently require prior knowledge of scaling parameters to convert from polymer self-diffusion coefficient to molecular mass. Reversing the problem, we utilize the scaling relation as prior knowledge to uncover the scaling exponent from within the PGSE data. Thus, the scaling exponent (a measure of polymer conformation and solvent quality) and the dispersity (Mw/Mn) are obtainable from one simple PGSE experiment. The method utilizes constraints and parametric distribution models in a two-step fitting routine involving first the mass-weighted signal and second the number-weighted signal. The method is developed using lognormal and gamma distribution models and tested on experimental PGSE attenuation of the terminal methylene signal and on the sum of all methylene signals of polyethylene glycol in D2O. Scaling exponent and dispersity estimates agree with known values in the majority of instances, leading to the potential application of the method to polymers for which characterization is not possible with alternative techniques. Copyright © 2017 Elsevier Inc. All rights reserved.
Landscape-scale processes influence riparian plant composition along a regulated river
Palmquist, Emily C.; Ralston, Barbara; Merritt, David M.; Shafroth, Patrick B.
2018-01-01
Hierarchical frameworks are useful constructs when exploring landscape- and local-scale factors affecting patterns of vegetation in riparian areas. In drylands, which have steep environmental gradients and high habitat heterogeneity, landscape-scale variables, such as climate, can change rapidly along a river's course, affecting the relative influence of environmental variables at different scales. To assess how landscape-scale factors change the structure of riparian vegetation, we measured riparian vegetation composition along the Colorado River through Grand Canyon, determined which factors best explain observed changes, identified how richness and functional diversity vary, and described the implications of our results for river management. Cluster analysis identified three divergent floristic groups that are distributed longitudinally along the river. These groups were distributed along gradients of elevation, temperature and seasonal precipitation, but were not associated with annual precipitation or local-scale factors. Species richness and functional diversity decreased as a function of distance downstream showing that changing landscape-scale factors result in changes to ecosystem characteristics. Species composition and distribution remain closely linked to seasonal precipitation and temperature. These patterns in floristic composition in a semiarid system inform management and provide insights into potential future changes as a result of shifts in climate and changes in flow management.
NASA Astrophysics Data System (ADS)
Zhao, Duo; Fu, Suiyan; Parks, George K.; Sun, Weijie; Zong, Qiugang; Pan, Dongxiao; Wu, Tong
2017-08-01
We present new observations of electron distributions and the accompanying waves during the current sheet activities at ˜60 RE in the geomagnetic tail detected by the ARTEMIS (Acceleration, Reconnection, Turbulence, and Electrodynamics of the Moon's Interaction with the Sun) spacecraft. We find that electron flat-top distribution is a common feature near the neutral sheet of the tailward flowing plasmas, consistent with the electron distributions that are shaped in the reconnection region. Whistler mode waves are generated by the anisotropic electron temperature associated with the electron flat-top distributions. These whistler mode waves are modulated by low frequency ion scale waves that are possibly excited by the high-energy ions injected during the current sheet instability. The magnetic and electric fields of the ion scale waves are in phase with electron density variations, indicating that they are compressional ion cyclotron waves. Our observations present examples of the dynamical processes occurring during the current sheet activities far downstream of the geomagnetic tail.
Universal Quake Statistics: From Compressed Nanocrystals to Earthquakes
Uhl, Jonathan T.; Pathak, Shivesh; Schorlemmer, Danijel; ...
2015-11-17
Slowly compressed single crystals, bulk metallic glasses (BMGs), rocks, granular materials, and the earth all deform via intermittent slips or “quakes”. We find that although these systems span 12 decades in length scale, they all show the same scaling behavior for their slip size distributions and other statistical properties. Remarkably, the size distributions follow the same power law multiplied by the same exponential cutoff. The cutoff grows with applied force for materials spanning length scales from nanometers to kilometers. The tuneability of the cutoff with stress reflects “tuned critical” behavior, rather than self-organized criticality (SOC), which would imply stress-independence. A simple mean field model for avalanches of slipping weak spots explains the agreement across scales. It predicts the observed slip-size distributions and the observed stress-dependent cutoff function. In conclusion, the results enable extrapolations from one scale to another, and from one force to another, across different materials and structures, from nanocrystals to earthquakes.
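As a worked illustration of the kind of size statistics described above, here is a minimal sketch that draws synthetic "slip sizes" from a power law with an exponential cutoff and recovers the exponent and cutoff from log-binned data. The parameter values and the least-squares fit are assumptions for illustration only; they are not the authors' mean-field analysis or data.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic "slip sizes" from p(S) ~ S^-tau * exp(-S/S_star), via rejection sampling.
tau_true, cutoff_true, s_min = 1.5, 200.0, 1.0
samples = []
while len(samples) < 20000:
    s = s_min * (1.0 - rng.random(50000)) ** (-1.0 / (tau_true - 1.0))  # pure power-law proposal
    keep = rng.random(s.size) < np.exp(-(s - s_min) / cutoff_true)      # exponential damping
    samples.extend(s[keep])
samples = np.asarray(samples[:20000])

# Log-binned density and a least-squares fit of the log-density.
bins = np.logspace(0.0, np.log10(samples.max()), 40)
hist, edges = np.histogram(samples, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
mask = hist > 0

def log_model(s, log_amp, tau, s_star):
    # log10 of  amp * s^-tau * exp(-s / s_star)
    return log_amp - tau * np.log10(s) - (s / s_star) * np.log10(np.e)

popt, _ = curve_fit(log_model, centers[mask], np.log10(hist[mask]),
                    p0=(0.0, 1.5, 100.0), maxfev=20000)
print(f"fitted exponent tau ~ {popt[1]:.2f}, cutoff S* ~ {popt[2]:.0f}")
```

The same recipe, applied to slip sizes measured at different applied forces, would show the cutoff S* growing with force while the exponent stays fixed, which is the "tuned critical" signature discussed in the abstract.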
Li, Pu; Weng, Linlu; Niu, Haibo; Robinson, Brian; King, Thomas; Conmy, Robyn; Lee, Kenneth; Liu, Lei
2016-12-15
This study was aimed at testing the applicability of modified Weber number scaling with Alaska North Slope (ANS) crude oil, and developing a Reynolds number scaling approach for oil droplet size prediction for high viscosity oils. Dispersant to oil ratio and empirical coefficients were also quantified. Finally, a two-step Rosin-Rammler scheme was introduced for the determination of droplet size distribution. This new approach appeared more advantageous in avoiding the inconsistency in interfacial tension measurements, and consequently delivered concise droplet size prediction. Calculated and observed data correlated well based on Reynolds number scaling. The relation indicated that chemical dispersant played an important role in reducing the droplet size of ANS under different seasonal conditions. The proposed Reynolds number scaling and two-step Rosin-Rammler approaches provide a concise, reliable way to predict droplet size distribution, supporting decision making in chemical dispersant application during an offshore oil spill. Copyright © 2016 Elsevier Ltd. All rights reserved.
Mehta, Shraddha; Bastero-Caballero, Rowena F; Sun, Yijun; Zhu, Ray; Murphy, Diane K; Hardas, Bhushan; Koch, Gary
2018-04-29
Many published scale validation studies determine inter-rater reliability using the intra-class correlation coefficient (ICC). However, the use of this statistic must consider its advantages, limitations, and applicability. This paper evaluates how interaction of subject distribution, sample size, and levels of rater disagreement affects ICC and provides an approach for obtaining relevant ICC estimates under suboptimal conditions. Simulation results suggest that for a fixed number of subjects, ICC from the convex distribution is smaller than ICC for the uniform distribution, which in turn is smaller than ICC for the concave distribution. The variance component estimates also show that the dissimilarity of ICC among distributions is attributed to the study design (ie, distribution of subjects) component of subject variability and not the scale quality component of rater error variability. The dependency of ICC on the distribution of subjects makes it difficult to compare results across reliability studies. Hence, it is proposed that reliability studies should be designed using a uniform distribution of subjects because of the standardization it provides for representing objective disagreement. In the absence of uniform distribution, a sampling method is proposed to reduce the non-uniformity. In addition, as expected, high levels of disagreement result in low ICC, and when the type of distribution is fixed, any increase in the number of subjects beyond a moderately large specification such as n = 80 does not have a major impact on ICC. Copyright © 2018 John Wiley & Sons, Ltd.
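The dependence of the ICC on the subject distribution described above can be reproduced with a small simulation. The sketch below is an assumption-laden illustration (a one-way random-effects ICC, two raters, made-up subject distributions), not the authors' simulation design.

```python
import numpy as np

rng = np.random.default_rng(1)

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) from an (n_subjects x k_raters) array."""
    n, k = ratings.shape
    grand = ratings.mean()
    subj_means = ratings.mean(axis=1)
    msb = k * np.sum((subj_means - grand) ** 2) / (n - 1)               # between-subject mean square
    msw = np.sum((ratings - subj_means[:, None]) ** 2) / (n * (k - 1))  # within-subject mean square
    return (msb - msw) / (msb + (k - 1) * msw)

def simulate(true_scores, rater_sd=1.0, k=2):
    """Each rating = true subject score + independent rater error."""
    return true_scores[:, None] + rng.normal(0.0, rater_sd, size=(true_scores.size, k))

n = 80   # a moderately large sample, as discussed in the abstract
distributions = {
    "convex (mass in the middle)": rng.normal(0.0, 1.0, n),
    "uniform":                     rng.uniform(-3.4, 3.4, n),
    "concave (mass at extremes)":  np.concatenate([rng.normal(-3.0, 0.5, n // 2),
                                                   rng.normal(3.0, 0.5, n - n // 2)]),
}
for name, scores in distributions.items():
    print(f"{name:30s} ICC ~ {icc_oneway(simulate(scores)):.2f}")
```

With identical rater error, the ICC typically comes out lowest for the convex distribution and highest for the concave one, which matches the ordering reported above and illustrates why the subject distribution should be standardized across reliability studies.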
NASA Astrophysics Data System (ADS)
Lares, M.; Luparello, H. E.; Garcia Lambas, D.; Ruiz, A. N.; Ceccarelli, L.; Paz, D.
2017-10-01
Cosmic voids are of great interest given their relation to the large scale distribution of mass and the way they trace cosmic flows shaping the cosmic web. Here we show that the distribution of voids has, in consonance with the distribution of mass, a characteristic scale at which void pairs are preferentially located. We identify clumps of voids with similar environments and use them to define second order underdensities. We also characterize their properties and analyze their impact on the cosmic microwave background.
Dependence of Snowmelt Simulations on Scaling of the Forcing Processes (Invited)
NASA Astrophysics Data System (ADS)
Winstral, A. H.; Marks, D. G.; Gurney, R. J.
2009-12-01
The spatial organization and scaling relationships of snow distribution in mountain environs are ultimately dependent on the controlling processes. These processes include interactions between weather, topography, vegetation, snow state, and seasonally-dependent radiation inputs. In large scale snow modeling it is vital to know these dependencies to obtain accurate predictions while reducing computational costs. This study examined the scaling characteristics of the forcing processes and the dependency of distributed snowmelt simulations on their scaling. A base model simulation characterized these processes at 10 m resolution over a 14.0 km2 basin with an elevation range of 1474 - 2244 masl. Each of the major processes affecting snow accumulation and melt - precipitation, wind speed, solar radiation, thermal radiation, temperature, and vapor pressure - was independently degraded to 1 km resolution. Seasonal and event-specific results were analyzed. Results indicated that scale effects on melt vary by process and weather conditions. The dependence of melt simulations on the scaling of solar radiation fluxes also had a seasonal component. These process-based scaling characteristics should remain static through time as they are based on physical considerations. As such, these results not only provide guidance for current modeling efforts, but are also well suited to predicting how potential climate changes will affect the heterogeneity of mountain snow distributions.
NASA Astrophysics Data System (ADS)
Krumholz, Mark R.; Ting, Yuan-Sen
2018-04-01
The distributions of a galaxy's gas and stars in chemical space encode a tremendous amount of information about that galaxy's physical properties and assembly history. However, present methods for extracting information from chemical distributions are based either on coarse averages measured over galactic scales (e.g. metallicity gradients) or on searching for clusters in chemical space that can be identified with individual star clusters or gas clouds on ˜1 pc scales. These approaches discard most of the information, because in galaxies gas and young stars are observed to be distributed fractally, with correlations on all scales, and the same is likely to be true of metals. In this paper we introduce a first theoretical model, based on stochastically forced diffusion, capable of predicting the multiscale statistics of metal fields. We derive the variance, correlation function, and power spectrum of the metal distribution from first principles, and determine how these quantities depend on elements' astrophysical origin sites and on the large-scale properties of galaxies. Among other results, we explain for the first time why the typical abundance scatter observed in the interstellar media of nearby galaxies is ≈0.1 dex, and we predict that this scatter will be correlated on spatial scales of ˜0.5-1 kpc, and over time-scales of ˜100-300 Myr. We discuss the implications of our results for future chemical tagging studies.
NASA Astrophysics Data System (ADS)
Lian, Enyang; Ren, Yingyu; Han, Yunfeng; Liu, Weixin; Jin, Ningde; Zhao, Junying
2016-11-01
The multi-scale analysis is an important method for detecting nonlinear systems. In this study, we carry out experiments and measure the fluctuation signals from a rotating electric field conductance sensor with eight electrodes. We first use a recurrence plot to recognise flow patterns in vertical upward gas-liquid two-phase pipe flow from measured signals. Then we apply a multi-scale morphological analysis based on the first-order difference scatter plot to investigate the signals captured from the vertical upward gas-liquid two-phase flow loop test. We find that the invariant scaling exponent extracted from the multi-scale first-order difference scatter plot with the bisector of the second-fourth quadrant as the reference line is sensitive to the inhomogeneous distribution characteristics of the flow structure, and the variation trend of the exponent is helpful to understand the process of breakup and coalescence of the gas phase. In addition, we explore the dynamic mechanism influencing the inhomogeneous distribution of the gas phase in terms of adaptive optimal kernel time-frequency representation. The research indicates that the system energy is a factor influencing the distribution of the gas phase and the multi-scale morphological analysis based on the first-order difference scatter plot is an effective method for indicating the inhomogeneous distribution of the gas phase in gas-liquid two-phase flow.
'Fracking', Induced Seismicity and the Critical Earth
NASA Astrophysics Data System (ADS)
Leary, P.; Malin, P. E.
2012-12-01
Issues of 'fracking' and induced seismicity are reverse-analogous to the equally complex issues of well productivity in hydrocarbon, geothermal and ore reservoirs. In low hazard reservoir economics, poorly producing wells and low grade ore bodies are many while highly producing wells and high grade ores are rare but high pay. With induced seismicity factored in, however, the same distribution physics reverses the high/low pay economics: large fracture-connectivity systems are hazardous hence low pay, while high probability small fracture-connectivity systems are non-hazardous hence high pay. Put differently, an economic risk abatement tactic for well productivity and ore body pay is to encounter large-scale fracture systems, while an economic risk abatement tactic for 'fracking'-induced seismicity is to avoid large-scale fracture systems. Well productivity and ore body grade distributions arise from three empirical rules for fluid flow in crustal rock: (i) power-law scaling of grain-scale fracture density fluctuations; (ii) spatial correlation between spatial fluctuations in well-core porosity and the logarithm of well-core permeability; (iii) frequency distributions of permeability governed by a lognormality skewness parameter. The physical origin of rules (i)-(iii) is the universal existence of a critical-state-percolation grain-scale fracture-density threshold for crustal rock. Crustal fractures are effectively long-range spatially-correlated distributions of grain-scale defects permitting fluid percolation on mm to km scales. The rule is, the larger the fracture system the more intense the percolation throughput. As percolation pathways are spatially erratic and unpredictable on all scales, they are difficult to model with sparsely sampled well data. Phenomena such as well productivity, induced seismicity, and ore body fossil fracture distributions are collectively extremely difficult to predict. Risk associated with unpredictable reservoir well productivity and ore body distributions can be managed by operating in a context which affords many small failures for a few large successes. In reverse view, 'fracking' and induced seismicity could be rationally managed in a context in which many small successes can afford a few large failures. However, just as there is every incentive to acquire information leading to higher rates of productive well drilling and ore body exploration, there are equal incentives for acquiring information leading to lower rates of 'fracking'-induced seismicity. Current industry practice of using an effective medium approach to reservoir rock creates an uncritical sense that property distributions in rock are essentially uniform. Well-log data show that the reverse is true: the larger the length scale the greater the deviation from uniformity. Applying the effective medium approach to large-scale rock formations thus appears to be unnecessarily hazardous. It promotes the notion that large scale fluid pressurization acts against weakly cohesive but essentially uniform rock to produce large-scale quasi-uniform tensile discontinuities. Indiscriminate hydrofracturing appears to be vastly more problematic in reality than as pictured by the effective medium hypothesis. The spatial complexity of rock, especially at large scales, provides ample reason to find more controlled pressurization strategies for enhancing in situ flow.
Resurrecting hot dark matter - Large-scale structure from cosmic strings and massive neutrinos
NASA Technical Reports Server (NTRS)
Scherrer, Robert J.
1988-01-01
These are the results of a numerical simulation of the formation of large-scale structure from cosmic-string loops in a universe dominated by massive neutrinos (hot dark matter). This model has several desirable features. The final matter distribution contains isolated density peaks embedded in a smooth background, producing a natural bias in the distribution of luminous matter. Because baryons can accrete onto the cosmic strings before the neutrinos, the galaxies will have baryon cores and dark neutrino halos. Galaxy formation in this model begins much earlier than in random-phase models. On large scales the distribution of clustered matter visually resembles the CfA survey, with large voids and filaments.
Verifying the Dependence of Fractal Coefficients on Different Spatial Distributions
NASA Astrophysics Data System (ADS)
Gospodinov, Dragomir; Marekova, Elisaveta; Marinov, Alexander
2010-01-01
A fractal distribution requires that the number of objects larger than a specific size r has a power-law dependence on the size, N(r) = C/r^D ∝ r^-D, where D is the fractal dimension. Usually the correlation integral is calculated to estimate the correlation fractal dimension of epicentres. A `box-counting' procedure could also be applied, giving the `capacity' fractal dimension. The fractal dimension can be an integer, in which case it is equivalent to a Euclidean dimension (zero for a point, one for a line segment, two for a square, and three for a cube). In general the fractal dimension is not an integer but a fractional dimension, which is the origin of the term `fractal'. The use of a power law to statistically describe a set of events or phenomena reveals the lack of a characteristic length scale; that is, fractal objects are scale invariant. Scaling invariance and chaotic behavior underlie many natural hazard phenomena. Many studies of earthquakes reveal that their occurrence exhibits scale-invariant properties, so the fractal dimension can characterize them. It has first been confirmed that both aftershock rate decay in time and earthquake size distribution follow a power law. Recently many other earthquake distributions have been found to be scale-invariant. The spatial distribution of both regional seismicity and aftershocks shows some fractal features. Earthquake spatial distributions are considered fractal, but indirectly. There are two possible models which result in fractal earthquake distributions. The first model considers that a fractal distribution of faults leads to a fractal distribution of earthquakes, because each earthquake is characteristic of the fault on which it occurs. The second assumes that each fault has a fractal distribution of earthquakes. Observations strongly favour the first hypothesis. Fractal coefficient analysis provides some important advantages in examining earthquake spatial distribution: it offers a simple way to quantify scale-invariant distributions of complex objects or phenomena by a small number of parameters, and it is becoming evident that the applicability of fractal distributions to geological problems could have a more fundamental basis, since chaotic behaviour could underlie geotectonic processes and the applicable statistics could often be fractal. The application of fractal distribution analysis has, however, some specific aspects. It is usually difficult to present an adequate interpretation of the obtained values of fractal coefficients for earthquake epicentre or hypocentre distributions. That is why in this paper we aimed at another goal: to verify how a fractal coefficient depends on different spatial distributions. We simulated earthquake spatial data by randomly generating points first in a 3D space (a cube), then in a parallelepiped, diminishing one of its sides. We then continued this procedure in 2D and 1D space. For each simulated data set we calculated the points' fractal coefficient (the correlation fractal dimension of epicentres) and then checked for correlation between the coefficient values and the type of spatial distribution. In that way one can obtain a set of standard fractal coefficient values for varying spatial distributions. These can then be used when real earthquake data are analyzed, by comparing the real data coefficient values to the standard fractal coefficients. Such an approach can help in interpreting fractal analysis results through different types of spatial distributions.
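The verification procedure described above can be sketched in a few lines: generate points in a cube, a plane, and a line, and estimate the correlation fractal dimension from the slope of the correlation integral. This is a minimal sketch of the Grassberger-Procaccia estimator with illustrative point counts and radii, not the authors' exact computation.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)

def correlation_dimension(points, radii):
    """Correlation fractal dimension: slope of log C(r) vs log r,
    where C(r) is the fraction of point pairs closer than r."""
    dists = pdist(points)                      # all pairwise distances
    c = np.array([np.mean(dists < r) for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
    return slope

radii = np.logspace(-1.5, -0.5, 10)            # r from ~0.03 to ~0.3 in unit-box coordinates
clouds = {
    "cube (3D)":   rng.random((2000, 3)),
    "square (2D)": np.column_stack([rng.random((2000, 2)), np.zeros(2000)]),
    "line (1D)":   np.column_stack([rng.random(2000), np.zeros(2000), np.zeros(2000)]),
}
for name, pts in clouds.items():
    print(f"{name:12s} correlation dimension ~ {correlation_dimension(pts, radii):.2f}")
```

The slopes should come out close to 3, 2 and 1 respectively (edge effects pull the estimates slightly below the nominal dimensions), which is the kind of reference table of "standard" coefficients for different spatial distributions that the abstract proposes.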
Recommendations for scale-up of community-based misoprostol distribution programs.
Robinson, Nuriya; Kapungu, Chisina; Carnahan, Leslie; Geller, Stacie
2014-06-01
Community-based distribution of misoprostol for prevention of postpartum hemorrhage (PPH) in resource-poor settings has been shown to be safe and effective. However, global recommendations for prenatal distribution and monitoring within a community setting are not yet available. In order to successfully translate misoprostol and PPH research into policy and practice, several critical points must be considered. A focus on engaging the community, emphasizing the safe nature of community-based misoprostol distribution, supply chain management, effective distribution, coverage, and monitoring plans are essential elements to community-based misoprostol program introduction, expansion, or scale-up. Copyright © 2014 International Federation of Gynecology and Obstetrics. Published by Elsevier Ireland Ltd. All rights reserved.
Self-similarity of waiting times in fracture systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niccolini, G.; Bosia, F.; Carpinteri, A.
2009-08-15
Experimental and numerical results are presented for a fracture experiment carried out on a fiber-reinforced element under flexural loading, and a statistical analysis is performed for acoustic emission waiting-time distributions. By an optimization procedure, a recently proposed scaling law describing these distributions for different event magnitude scales is confirmed by both experimental and numerical data, thus reinforcing the idea that fracture of heterogeneous materials has scaling properties similar to those found for earthquakes. Analysis of the different scaling parameters obtained for experimental and numerical data leads us to formulate the hypothesis that the type of scaling function obtained depends on the level of correlation among fracture events in the system.
Gothe, Emma; Sandin, Leonard; Allen, Craig R.; Angeler, David G.
2014-01-01
The distribution of functional traits within and across spatiotemporal scales has been used to quantify and infer the relative resilience across ecosystems. We use explicit spatial modeling to evaluate within- and cross-scale redundancy in headwater streams, an ecosystem type with a hierarchical and dendritic network structure. We assessed the cross-scale distribution of functional feeding groups of benthic invertebrates in Swedish headwater streams during two seasons. We evaluated functional metrics, i.e., Shannon diversity, richness, and evenness, and the degree of redundancy within and across modeled spatial scales for individual feeding groups. We also estimated the correlates of environmental versus spatial factors of both functional composition and the taxonomic composition of functional groups for each spatial scale identified. Measures of functional diversity and within-scale redundancy of functions were similar during both seasons, but both within- and cross-scale redundancy were low. This apparent low redundancy was partly attributable to a few dominant taxa explaining the spatial models. However, rare taxa with stochastic spatial distributions might provide additional information and should therefore be considered explicitly for complementing future resilience assessments. Otherwise, resilience may be underestimated. Finally, both environmental and spatial factors correlated with the scale-specific functional and taxonomic composition. This finding suggests that resilience in stream networks emerges as a function of not only local conditions but also regional factors such as habitat connectivity and invertebrate dispersal.
Architecture and Programming Models for High Performance Intensive Computation
2016-06-29
Soil moisture and biogeochemical factors influence the distribution of annual Bromus species
Jayne Belnap; John M. Stark; Benjamin M. Rau; Edith B. Allen; Susan Phillips
2016-01-01
Abiotic factors have a strong influence on where annual Bromus species are found. At the large regional scale, temperature and precipitation extremes determine the boundaries of Bromus occurrence. At the more local scale, soil characteristics and climate influence distribution, cover, and performance. In hot, dry, summer-rainfall-dominated deserts (Sonoran, Chihuahuan...
John D. Alexander; Jaime L. Stephens; Sam Veloz; Leo Salas; Josée S. Rousseau; C. John Ralph; Daniel A. Sarr
2017-01-01
As data about populations of indicator species become available, proactive strategies that improve representation of biological diversity within protected area networks should consider finer-scaled evaluations, especially in regions identified as important through coarse-scale analyses. We use density distribution models derived from a robust regional bird...
Identification And Distribution Of Vanadinite (Pb5(V5+O4)3Cl) In Lead Pipe Corrosion By-Products
This study presents the first detailed look at vanadium (V) speciation in drinking water pipe corrosion scales. A pool of 34 scale layers from 15 lead or lead-lined pipes representing eight different municipal drinking water distribution systems in the Northeastern and Midwester...
The objective of this work is to compare the properties of lead solids formed during bench-scale precipitation experiments to solids found on lead pipe removed from real drinking water distribution systems and metal coupons used in pilot scale corrosion testing. Specifically, so...
Dissolving Pb from lead service lines and Pb-containing brasses and solders has become a major health issue for many water distribution systems. Knowledge of the mineralogy of scales in these pipes is key to modeling this dissolution. The traditional method of determining their ...
NASA Astrophysics Data System (ADS)
Donkov, Sava; Stefanov, Ivan Z.
2018-03-01
We have set ourselves the task of obtaining the probability distribution function of the mass density of a self-gravitating isothermal compressible turbulent fluid from its physics. We have done this in the context of a new notion: the molecular clouds ensemble. We have applied a new approach that takes into account the fractal nature of the fluid. Using the medium equations, under the assumption of steady state, we show that the total energy per unit mass is an invariant with respect to the fractal scales. As a next step we obtain a non-linear integral equation for the dimensionless scale Q which is the third root of the integral of the probability distribution function. It is solved approximately up to the leading-order term in the series expansion. We obtain two solutions. They are power-law distributions with different slopes: the first one is -1.5 at low densities, corresponding to an equilibrium between all energies at a given scale, and the second one is -2 at high densities, corresponding to a free fall at small scales.
Yang, Fan; Shi, Baoyou; Gu, Junnong; Wang, Dongsheng; Yang, Min
2012-10-15
The corrosion scales on iron pipes can have a great impact on water quality in drinking water distribution systems (DWDS). Unstable and less protective corrosion scale is one of the main factors causing "discolored water" issues when the quality of water entering the distribution system changes significantly. The morphological and physicochemical characteristics of corrosion scales formed under different source water histories over a duration of about two decades were systematically investigated in this work. Thick corrosion scales or densely distributed corrosion tubercles were mostly found in pipes transporting surface water, while thin corrosion scales and hollow tubercles were mostly found in pipes transporting groundwater. Magnetite and goethite were the main constituents of iron corrosion products, but the mass ratio of magnetite/goethite (M/G) differed significantly depending on the corrosion scale structure and water source conditions. Thick corrosion scales and the hard shells of tubercles had much higher M/G ratios (>1.0), while the thin corrosion scales had no detectable magnetite or a much lower M/G ratio. The M/G ratio could be used to identify the characteristics and evaluate the performance of corrosion scales formed under different water conditions. Compared with the pipes transporting groundwater, the pipes transporting surface water were more seriously corroded and could have been in a relatively more active corrosion state all the time, as indicated by the relatively higher siderite, green rust and total iron contents in their corrosion scales. Higher contents of unstable ferric components such as γ-FeOOH, β-FeOOH and amorphous iron oxide existed in the corrosion scales of pipes receiving groundwater, which were less corroded. Corrosion scales on groundwater pipes with low magnetite content had higher surface area and thus possibly higher sorption capacity. The primary trace inorganic elements in corrosion products were Br and heavy metals. Corrosion products obtained from pipes transporting groundwater had higher levels of Br, Ti, Ba, Cu, Sr, V, Cr, La, Pb and As. Copyright © 2012 Elsevier Ltd. All rights reserved.
Arenas-Castro, Salvador; Gonçalves, João; Alves, Paulo; Alcaraz-Segura, Domingo; Honrado, João P
2018-01-01
Global environmental changes are rapidly affecting species' distributions and habitat suitability worldwide, requiring a continuous update of biodiversity status to support effective decisions on conservation policy and management. In this regard, satellite-derived Ecosystem Functional Attributes (EFAs) offer a more integrative and quicker evaluation of ecosystem responses to environmental drivers and changes than climate and structural or compositional landscape attributes. Thus, EFAs may hold advantages as predictors in Species Distribution Models (SDMs) and for implementing multi-scale species monitoring programs. Here we describe a modelling framework to assess the predictive ability of EFAs as Essential Biodiversity Variables (EBVs) against traditional datasets (climate, land-cover) at several scales. We test the framework with a multi-scale assessment of habitat suitability for two plant species of conservation concern, both protected under the EU Habitats Directive, differing in terms of life history, range and distribution pattern (Iris boissieri and Taxus baccata). We fitted four sets of SDMs for the two test species, calibrated with: interpolated climate variables; landscape variables; EFAs; and a combination of climate and landscape variables. EFA-based models performed very well at the several scales (AUCmedian from 0.881±0.072 to 0.983±0.125), and similarly to traditional climate-based models, individually or in combination with land-cover predictors (AUCmedian from 0.882±0.059 to 0.995±0.083). Moreover, EFA-based models identified additional suitable areas and provided valuable information on functional features of habitat suitability for both test species (narrowly vs. widely distributed), for both coarse and fine scales. Our results suggest a relatively small scale-dependence of the predictive ability of satellite-derived EFAs, supporting their use as meaningful EBVs in SDMs from regional and broader scales to more local and finer scales. Since the evaluation of species' conservation status and habitat quality should as far as possible be performed based on scalable indicators linking to meaningful processes, our framework may guide conservation managers in decision-making related to biodiversity monitoring and reporting schemes.
Corkery, Robert W; Tyrode, Eric C
2017-08-06
Lycaenid butterflies from the genera Callophrys, Cyanophrys and Thecla have evolved remarkable biophotonic gyroid nanostructures within their wing scales that have only recently been replicated by nanoscale additive manufacturing. These nanostructures selectively reflect parts of the visible spectrum to give their characteristic non-iridescent, matte-green appearance, despite a distinct blue-green-yellow iridescence predicted for individual crystals from theory. It has been hypothesized that the organism must achieve its uniform appearance by growing crystals with some restrictions on the possible distribution of orientations, yet preferential orientation observed in Callophrys rubi confirms that this distribution need not be uniform. By analysing scanning electron microscope and optical images of 912 crystals in three wing scales, we find no preference for their rotational alignment in the plane of the scales. However, crystal orientation normal to the scale was highly correlated with their colour at low (conical) angles of view and illumination. This correlation enabled the use of optical images, each containing up to 10^4-10^5 crystals, for concluding that the preferential alignment seen along the [Formula: see text] direction at the level of single scales appears ubiquitous. By contrast, [Formula: see text] orientations were found to occur at no greater rate than that expected by chance. Above a critical cone angle, all crystals reflected bright green light, indicating that the dominant light scattering is due to the predicted band gap along the [Formula: see text] direction, independent of the domain orientation. Together with the natural variation in scale and wing shapes, we can readily understand the detailed mechanism of uniform colour production and iridescence suppression in these butterflies. It appears that the combination of preferential alignment normal to the wing scale and uniform distribution within the plane is a near-optimal solution for homogenizing the angular distribution of the [Formula: see text] band gap relative to the wings. Finally, the distributions of orientations, shapes, sizes and degree of order of crystals within single scales provide useful insights for understanding the mechanisms at play in the formation of these biophotonic nanostructures.
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2016-06-01
Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology, environment monitoring, and sensor networks to power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics and interactions, which require increasingly computationally demanding methods for analysis and control design as the network size and node/interaction complexity grow. Therefore, it is a challenging problem to find scalable computational methods for distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with the MATLAB toolbox. The stabilisability of each node dynamic is a sufficient assumption to design a globally stabilising distributed control. The proposed approach improves some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirements in the case of weakly heterogeneous MASs, which is a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is to allow a move from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the networked nodes, thus increasing the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
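For readers unfamiliar with LMI-based design, the following is a minimal, hedged sketch of the kind of Lyapunov-type LMI solved per node, written with the cvxpy modelling library (assumed available with an SDP-capable solver such as SCS) rather than the MATLAB toolbox used in the paper; the node matrices are made up, and the condition shown is a standard state-feedback synthesis LMI, not the paper's specific distributed conditions.

```python
import cvxpy as cp
import numpy as np

# Illustrative (stabilisable) node dynamics for a single agent: dx/dt = A x + B u.
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
n, m = A.shape[0], B.shape[1]

# Classical state-feedback synthesis LMI: find Q = Q' > 0 and Y such that
#   A Q + Q A' + B Y + Y' B' < 0,   then K = Y Q^{-1} renders A + B K Hurwitz.
Q = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
eps = 1e-3
lyap = A @ Q + Q @ A.T + B @ Y + Y.T @ B.T
constraints = [Q >> eps * np.eye(n), lyap << -eps * np.eye(n)]
problem = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
problem.solve()

K = Y.value @ np.linalg.inv(Q.value)
print("feedback gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A + B @ K))  # real parts should be negative
```

In a distributed setting of the kind discussed above, each networked processor would solve only the small LMI associated with its own node, which is the source of the scalability advantage.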
NASA Technical Reports Server (NTRS)
Ramella, Massimo; Geller, Margaret J.; Huchra, John P.
1990-01-01
The large-scale distribution of groups of galaxies selected from complete slices of the CfA redshift survey extension is examined. The survey is used to reexamine the contribution of group members to the galaxy correlation function. The relationship between the correlation function for groups and those calculated for rich clusters is discussed, and the results for groups are examined as an extension of the relation between correlation function amplitude and richness. The group correlation function indicates that groups and individual galaxies are equivalent tracers of the large-scale matter distribution. The distribution of group centers is equivalent to random sampling of the galaxy distribution. The amplitude of the correlation function for groups is consistent with an extrapolation of the amplitude-richness relation for clusters. The amplitude scaled by the mean intersystem separation is also consistent with results for richer clusters.
The link between the baryonic mass distribution and the rotation curve shape
NASA Astrophysics Data System (ADS)
Swaters, R. A.; Sancisi, R.; van der Hulst, J. M.; van Albada, T. S.
2012-09-01
The observed rotation curves of disc galaxies, ranging from late-type dwarf galaxies to early-type spirals, can be fitted remarkably well simply by scaling up the contributions of the stellar and H I discs. This 'baryonic scaling model' can explain the full breadth of observed rotation curves with only two free parameters. For a small fraction of galaxies, in particular early-type spiral galaxies, H I scaling appears to fail in the outer parts, possibly due to observational effects or ionization of H I. The overall success of the baryonic scaling model suggests that the well-known global coupling between the baryonic mass of a galaxy and its rotation velocity (known as the baryonic Tully-Fisher relation) applies at a more local level as well, and it seems to imply a link between the baryonic mass distribution and the distribution of total mass (including dark matter).
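The two-parameter fit described above (scaling the stellar and H I contributions until they reproduce the observed rotation curve) can be illustrated with a few lines of code. The numbers below are invented for illustration; the only assumption carried over from the text is that the observed curve is modelled as V_obs^2 ≈ a V_star^2 + b V_HI^2 with non-negative scalings a and b.

```python
import numpy as np
from scipy.optimize import nnls

# Made-up rotation-curve components (km/s) at a handful of radii, for illustration only.
v_star = np.array([40.0, 60.0, 70.0, 72.0, 70.0, 66.0])    # stellar-disc contribution
v_hi   = np.array([10.0, 18.0, 25.0, 30.0, 33.0, 35.0])    # H I-disc contribution
v_obs  = np.array([55.0, 80.0, 95.0, 100.0, 99.0, 96.0])   # "observed" rotation speeds

# Baryonic scaling model: V_obs^2 ~ a * V_star^2 + b * V_HI^2 with a, b >= 0,
# where a plays the role of a stellar mass-to-light ratio and b of an H I scaling factor.
design = np.column_stack([v_star ** 2, v_hi ** 2])
(a, b), _ = nnls(design, v_obs ** 2)
v_model = np.sqrt(design @ np.array([a, b]))

print(f"stellar scaling a ~ {a:.2f}, H I scaling b ~ {b:.2f}")
print("model curve :", np.round(v_model, 1))
print("observed    :", v_obs)
```

Applied to real decompositions, the quality of such a two-parameter fit across many galaxies is what motivates the local link between the baryonic mass distribution and the total mass distribution described above.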
Simulating the wealth distribution with a Richest-Following strategy on scale-free network
NASA Astrophysics Data System (ADS)
Hu, Mao-Bin; Jiang, Rui; Wu, Qing-Song; Wu, Yong-Hong
2007-07-01
In this paper, we investigate the wealth distribution with agents playing evolutionary games on a scale-free social network adopting the Richest-Following strategy. Pareto's power-law distribution of wealth (1897) is reproduced, with a power-law exponent in agreement with that of the US or Japan. Moreover, an agent's personal wealth is proportional to its number of contacts (connectivity), and this leads to the phenomenon that the rich get richer and the poor get relatively poorer, in agreement with the Matthew Effect.
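The abstract does not spell out the exchange rule, so the sketch below uses a toy wealth-exchange rule on a Barabási-Albert network purely to illustrate the "Richest-Following" idea of always interacting with one's richest neighbour; the stake rule, parameters, and the diagnostics printed at the end are assumptions for illustration, not the paper's model.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)

# Scale-free contact network; everyone starts with the same wealth.
G = nx.barabasi_albert_graph(n=1000, m=3, seed=4)
wealth = np.full(G.number_of_nodes(), 100.0)

# Toy Richest-Following rounds: each agent interacts with its richest neighbour,
# and a small random stake (a fraction of the poorer side's wealth) changes hands.
for _ in range(200):
    for node in G.nodes:
        partner = max(G.neighbors(node), key=lambda k: wealth[k])   # follow the richest neighbour
        stake = 0.05 * rng.random() * min(wealth[node], wealth[partner])
        if rng.random() < 0.5:
            wealth[node] += stake; wealth[partner] -= stake
        else:
            wealth[node] -= stake; wealth[partner] += stake

# Diagnostics: wealth-connectivity correlation and the rank-size slope of the top decile.
degrees = np.array([G.degree(i) for i in G.nodes])
top = np.sort(wealth)[::-1][:100]
ranks = np.arange(1, top.size + 1)
slope, _ = np.polyfit(np.log(ranks), np.log(top), 1)
print(f"wealth-degree correlation ~ {np.corrcoef(degrees, wealth)[0, 1]:.2f}")
print(f"rank-size slope of the top decile ~ {slope:.2f} (a Pareto tail gives roughly -1/alpha)")
```

The two printed diagnostics correspond to the two claims in the abstract: a wealth-connectivity link and a Pareto-like tail; whether a given exchange rule reproduces them is exactly what such a simulation is used to check.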
Traffic-driven epidemic spreading on scale-free networks with tunable degree distribution
NASA Astrophysics Data System (ADS)
Yang, Han-Xin; Wang, Bing-Hong
2016-04-01
We study the traffic-driven epidemic spreading on scale-free networks with tunable degree distribution. The heterogeneity of networks is controlled by the exponent γ of the power-law degree distribution. It is found that the epidemic threshold is minimized at about γ=2.2. Moreover, we find that nodes with larger algorithmic betweenness are more likely to be infected. We expect our work to provide new insights into the effect of network structures on traffic-driven epidemic spreading.
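The "tunable degree distribution" ingredient can be sketched separately from the traffic-driven dynamics. The following is a minimal illustration, assuming a configuration-model construction in networkx with an adjustable power-law exponent γ; the degree bounds and network sizes are arbitrary, and the printed heterogeneity ratio is only the standard quantity that enters mean-field epidemic thresholds, not the paper's traffic-driven threshold.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(6)

def power_law_degree_sequence(n, gamma, k_min=2, k_max=200):
    """Draw n integer degrees from P(k) ~ k^-gamma on [k_min, k_max] (even total)."""
    k = np.arange(k_min, k_max + 1)
    p = k ** (-float(gamma))
    p /= p.sum()
    seq = rng.choice(k, size=n, p=p)
    if seq.sum() % 2:                 # the configuration model needs an even degree total
        seq[0] += 1
    return seq

for gamma in (2.2, 2.5, 3.0):
    seq = power_law_degree_sequence(5000, gamma)
    G = nx.configuration_model(seq, seed=6)
    G = nx.Graph(G)                   # collapse parallel edges
    G.remove_edges_from(nx.selfloop_edges(G))
    k = np.array([d for _, d in G.degree()])
    # <k^2>/<k> is the usual heterogeneity measure controlling mean-field epidemic thresholds.
    print(f"gamma={gamma}: <k>={k.mean():.1f}, <k^2>/<k>={np.mean(k**2) / k.mean():.1f}")
```

Lower γ gives a more heterogeneous network (larger <k^2>/<k>), which is the structural knob whose interplay with traffic flow produces the threshold minimum near γ=2.2 reported above.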
Scaling analysis on Indian foreign exchange market
NASA Astrophysics Data System (ADS)
Sarkar, A.; Barat, P.
2006-05-01
In this paper, we investigate the scaling behavior of the average daily exchange rate returns of the Indian Rupee against four foreign currencies: namely, US Dollar, Euro, Great Britain Pound and Japanese Yen. The average daily exchange rate return of the Indian Rupee against US Dollar is found to exhibit a persistent scaling behavior and follow Levy stable distribution. On the contrary, the average daily exchange rate returns of the other three foreign currencies do not show persistency or antipersistency and follow Gaussian distribution.
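The persistence analysis mentioned above can be sketched with a simple rescaled-range (R/S) estimate of the Hurst exponent; the return series below is synthetic (heavy-tailed noise standing in for daily exchange-rate returns), and the window sizes are illustrative rather than the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(5)

def hurst_rs(returns, window_sizes):
    """Rescaled-range (R/S) estimate of the Hurst exponent of a return series:
    H ~ 0.5 for an uncorrelated walk, H > 0.5 for persistent scaling behaviour."""
    rs_means = []
    for w in window_sizes:
        ratios = []
        for i in range(returns.size // w):
            chunk = returns[i * w:(i + 1) * w]
            dev = np.cumsum(chunk - chunk.mean())       # cumulative deviations from the window mean
            spread = dev.max() - dev.min()              # range R
            sd = chunk.std(ddof=1)                      # scale S
            if sd > 0:
                ratios.append(spread / sd)
        rs_means.append(np.mean(ratios))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_means), 1)
    return slope

returns = rng.standard_t(df=3, size=4000) * 0.005        # stand-in heavy-tailed daily returns
windows = np.array([10, 20, 40, 80, 160, 320])
print(f"Hurst exponent ~ {hurst_rs(returns, windows):.2f}")
```

Run on the actual Rupee/US Dollar return series, an estimate persistently above 0.5 would correspond to the persistent scaling behaviour reported above, while values near 0.5 would indicate neither persistence nor antipersistence, as found for the other three currencies.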
Schock, Michael R; Hyland, Robert N; Welch, Meghan M
2008-06-15
Previously, contaminants such as Al, As, and Ra have been shown to accumulate in drinking-water distribution system solids. Accumulated contaminants could be periodically released back into the water supply, causing elevated levels at consumers' taps that go undetected by most current regulatory monitoring practices and consequently constitute a hidden risk. The objective of this study was to determine the occurrence of over 40 major scale constituents, regulated metals, and other potential metallic inorganic contaminants in drinking-water distribution system Pb (lead) or Pb-lined service lines. The primary method of analysis was inductively coupled plasma-atomic emission spectroscopy, following complete decomposition of scale material. Contaminants and scale constituents were categorized by their average concentrations, and many metals of potential health concern were found to occur at levels sufficient to produce elevated levels at consumers' taps if they were to be mobilized. The data indicate distinctly nonconservative behavior for many inorganic contaminants in drinking-water distribution systems. This finding suggests an imminent need for further research into the transport and fate of contaminants throughout drinking-water distribution system pipes, as well as a re-evaluation of monitoring protocols in order to more accurately determine the scope and levels of potential consumer exposure.
On the theory of intensity distributions of tornadoes and other low pressure systems
NASA Astrophysics Data System (ADS)
Schielicke, Lisa; Névir, Peter
Approaching the problem from a theoretical point of view, this work presents a theory that unifies the intensity distributions of different low pressure systems, based on an energy of displacement. Starting from a generalized Boltzmann distribution, the expression for this energy of displacement is obtained by radial integration over the forces that balance the pressure gradient force in the horizontal equation of motion. A scale analysis helps to determine which balance of forces prevails. Depending on the prevailing balance, the expression for the energy of displacement differs between types of depression. Evaluated at the moment of maximum intensity, the energy of displacement can be interpreted as the work required to generate, and finally to eliminate, the pressure anomaly. With the appropriate balance of forces, the number-intensity (energy of displacement) distributions show exponential behavior with the same decay rate β for tornadoes and for cyclones, provided tropical and extra-tropical cyclones are investigated together. The decay rate is related to a characteristic (universal) scale of the energy of displacement with the approximate value Eu = β⁻¹ ≈ 1000 m²s⁻². Consequently, while the different balances of forces set the velocity scales, the energy-of-displacement scale appears to be universal for all low pressure systems. Additionally, if intensity is expressed as the lifetime minimum pressure, the number-intensity (pressure) distributions should follow a power law. This work also points out that the choice of physical quantity used to represent intensity matters for the behavior of the intensity distributions: velocity, kinetic energy, energy of displacement and pressure are all possible measures, but they lead to different distributional behavior.
Incorporating Nonstationarity into IDF Curves across CONUS from Station Records and Implications
NASA Astrophysics Data System (ADS)
Wang, K.; Lettenmaier, D. P.
2017-12-01
Intensity-duration-frequency (IDF) curves are widely used in the engineering design of storm-affected structures. Current practice is to base IDF curves on observed precipitation extremes fit to a stationary probability distribution (e.g., the extreme value family). However, there is increasing evidence of nonstationarity in station records. We apply the Mann-Kendall trend test to over 1000 stations across the CONUS at a 0.05 significance level and find that about 30% of the stations tested show significant nonstationarity for at least one duration (1-, 2-, 3-, 6-, 12-, 24-, and 48-hours). We fit the station records to a GEV distribution with time-varying location and scale parameters using a Bayesian methodology and compare the fit of stationary versus nonstationary GEV distributions to observed precipitation extremes. Within our fitted nonstationary GEV distributions, we compare distributions with a time-varying location parameter against distributions with both time-varying location and scale parameters. For distributions with two time-varying parameters, we pay particular attention to cases where the location and scale trends have opposing directions. Finally, we use the mathematical framework based on the work of Koutsoyiannis to generate IDF curves from the fitted GEV distributions and discuss the implications that time-varying parameters may have for simple scaling relationships. We apply the above methods to evaluate how frequency statistics based on a stationary assumption compare to those that incorporate nonstationarity, for both short- and long-term projects. Overall, we find that neglecting nonstationarity can lead to under- or over-estimates (depending on the trend for the given duration and region) of important statistics such as the design storm.
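A minimal maximum-likelihood sketch of a GEV fit with a linearly time-varying location parameter, as a simplified stand-in for the Bayesian fit described above; the covariate scaling, starting values, and restriction to a time-varying location only are assumptions for illustration:

    # Nonstationary GEV by maximum likelihood: location mu(t) = mu0 + mu1 * t,
    # constant scale and shape. scipy's genextreme shape c corresponds to -xi
    # in the usual GEV convention.
    import numpy as np
    from scipy.stats import genextreme
    from scipy.optimize import minimize

    def fit_nonstationary_gev(maxima, years):
        t = (np.asarray(years) - np.mean(years)) / np.std(years)   # standardised covariate
        x = np.asarray(maxima, dtype=float)

        def nll(params):
            mu0, mu1, log_sigma, shape = params
            mu = mu0 + mu1 * t
            sigma = np.exp(log_sigma)                # keep scale positive
            ll = genextreme.logpdf(x, c=shape, loc=mu, scale=sigma)
            return np.inf if not np.all(np.isfinite(ll)) else -ll.sum()

        start = np.array([x.mean(), 0.0, np.log(x.std()), 0.1])
        res = minimize(nll, start, method="Nelder-Mead")
        return res.x  # mu0, mu1 (trend), log sigma, scipy shape c

    # A positive/negative mu1 indicates an upward/downward trend in the location
    # of the extreme-value distribution over the record.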
Krasny-Pacini, A; Pauly, F; Hiebel, J; Godon, S; Isner-Horobeti, M-E; Chevignard, M
2017-07-01
Goal Attainment Scaling (GAS) is a method for writing personalized evaluation scales to quantify progress toward defined goals. It is useful in rehabilitation but is hampered by the experience required to adequately "predict" the possible outcomes relating to a particular goal before treatment and by the time needed to describe all 5 levels of the scale. Here we aimed to investigate the feasibility of using GAS in the clinical setting of a pediatric spasticity clinic with a shorter method, the "3-milestones" GAS (goal setting with 3 levels and goal rating with the classical 5 levels). Secondary aims were to (1) analyze the types of goals children's therapists set for botulinum toxin treatment and (2) compare the score distribution (and therefore the ability to predict outcome) by goal type. Therapists were trained in GAS writing and prepared GAS scales in the regional spasticity-management clinic they attended with their patients and families. The study included all GAS scales written during a 2-year period. The GAS score distribution across the 5 GAS levels was examined to assess whether therapists could reliably predict outcome and whether the 3-milestones GAS yielded distributions similar to those of the original GAS method. In total, 541 GAS scales were written and showed the expected score distribution. Most scales (55%) referred to movement-quality goals and fewer (29%) to family goals and activity domains. The 3-milestones GAS method was feasible within the time constraints of the spasticity clinic and could be used by local therapists in cooperation with the hospital team.
NASA Astrophysics Data System (ADS)
Abdelsalam, A.; Kamel, S.; Hafiz, M. E.
2015-10-01
The behavior and properties of medium-energy protons with kinetic energies in the range 26-400 MeV are derived from measurements of the particle yields and spectra in the final state of relativistic heavy-ion collisions (16O-AgBr interactions at 60 A and 200 A GeV and 32S-AgBr interactions at 3.7 A and 200 A GeV) and from their interpretation in terms of higher-order moments. The multiplicity distributions are well fitted by a Gaussian distribution function. The data are also compared with the predictions of the modified FRITIOF model, which reproduces neither the trend nor the magnitude of the data. Measurements of the ratio of the variance to the mean show that the production of target fragments at high energies cannot be considered a statistically independent process; rather, the deviation of each multiplicity distribution from a Poisson law provides evidence for correlations. The scaling behavior of the multiplicity distribution is investigated in terms of two scaling functions: Koba-Nielsen-Olesen (KNO) scaling and Hegyi scaling. A simplified universal function is used in each case to display the experimental data. The relationship between the entropy, the average multiplicity, and the KNO function is examined. Entropy production and its subsequent scaling in nucleus-nucleus collisions are studied by analyzing the experimental data over a wide energy range (Dubna and SPS). Interestingly, the data points corresponding to the various energies overlap and fall on a single curve, indicating the presence of a kind of entropy scaling.
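For reference, the KNO-scaling test amounts to checking whether the rescaled multiplicity distributions measured at different energies collapse onto one universal curve; a small illustrative helper (input format assumed):

    # KNO rescaling: psi(z) = <n> P(n) plotted against z = n / <n>. If KNO
    # scaling holds, curves for different beam energies collapse onto a single
    # universal function.
    import numpy as np

    def kno_rescale(multiplicities):
        """multiplicities: 1-D array of event multiplicities at one energy.
        Returns (z, psi) points of the rescaled distribution."""
        n = np.asarray(multiplicities)
        mean_n = n.mean()
        values, counts = np.unique(n, return_counts=True)
        p_n = counts / counts.sum()
        return values / mean_n, mean_n * p_n

    # Overlaying kno_rescale() output for data sets taken at different energies
    # (e.g. Dubna and SPS) is the collapse test described above.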
Growing magma chambers control the distribution of small-scale flood basalts
Yu, Xun; Chen, Li-Hui; Zeng, Gang
2015-01-01
Small-scale continental flood basalts are a global phenomenon characterized by regular spatio-temporal distributions. However, no genetic mechanism has been proposed to explain the visible but overlooked distribution patterns of this continental basaltic volcanism. Here we present a case study from eastern China, combining major and trace element analyses with Ar–Ar and K–Ar dating, to show that the spatio-temporal distribution of small-scale flood basalts is controlled by the growth of long-lived magma chambers. Evolved basalts (SiO2 > 47.5 wt.%) from Xinchang–Shengzhou, a small-scale Cenozoic flood basalt field in Zhejiang province, eastern China, show a northward younging trend over the period 9.4–3.0 Ma. During this northward migration, the magmas evolved only slightly ((Na2O + K2O)/MgO = 0.40–0.66; TiO2/MgO = 0.23–0.35) over about 6 Myr (9.4–3.3 Ma). When the flood basalts reached the northern end of the province, the magmas evolved rapidly (3.3–3.0 Ma) through a broad range of compositions ((Na2O + K2O)/MgO = 0.60–1.28; TiO2/MgO = 0.30–0.57). The distribution and two-stage compositional evolution of the migrating flood basalts record continuous magma replenishment that buffered against magmatic evolution and induced magma chamber growth. Our results demonstrate that this magma replenishment–magma chamber growth model explains the spatio-temporal distribution of small-scale flood basalts. PMID:26581905
Chen, T M; Chen, Q P; Liu, R C; Szot, A; Chen, S L; Zhao, J; Zhou, S S
2017-02-01
Hundreds of small-scale influenza outbreaks in schools are reported in mainland China every year, leading to a heavy disease burden that seriously impacts the operation of affected schools. Estimating the transmissibility of each outbreak at an early stage has therefore become a major concern for public health policy-makers and primary healthcare providers. In this study, we collected all the small-scale outbreaks in Changsha (a large city in south central China with a population of ~7·04 million) from January 2005 to December 2013. Four simple and widely used models were employed to calculate the reproduction number (R) of these outbreaks. Given a generation interval of duration Tc = 2·7 and standard deviation (s.d.) σ = 1·1, the mean R values estimated by an epidemic model, a normal distribution and a delta distribution were 2·51 (s.d. = 0·73), 4·11 (s.d. = 2·20) and 5·88 (s.d. = 5·00), respectively. For Tc = 2·9 and σ = 1·4, the corresponding mean R values were 2·62 (s.d. = 0·78), 4·72 (s.d. = 2·82) and 6·86 (s.d. = 6·34). The mean R estimated using a gamma distribution was 4·32 (s.d. = 2·47). We found that the values of R in small-scale outbreaks in schools were higher than in large-scale outbreaks in a neighbourhood, city or province. The normal-distribution, delta-distribution and gamma-distribution models appear more prone to overestimating the R of influenza outbreaks than the epidemic model.
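For context, the generation-interval-based estimators mentioned above relate the early exponential growth rate r of an outbreak to R through the assumed generation-interval distribution (the standard Wallinga-Lipsitch relations); the sketch below uses an illustrative growth rate and does not reproduce the paper's epidemic-model estimator:

    # R from the exponential growth rate r under three generation-interval
    # assumptions: delta (fixed Tc), normal (mean Tc, s.d. sigma), and gamma
    # (mean Tc, s.d. sigma).
    import math

    def r_delta(r, tc):
        return math.exp(r * tc)

    def r_normal(r, tc, sigma):
        return math.exp(r * tc - 0.5 * (r * sigma) ** 2)

    def r_gamma(r, tc, sigma):
        return (1.0 + r * sigma**2 / tc) ** (tc**2 / sigma**2)

    r = 0.35                       # per day, purely illustrative growth rate
    for tc, sigma in [(2.7, 1.1), (2.9, 1.4)]:
        print(tc, sigma,
              round(r_delta(r, tc), 2),
              round(r_normal(r, tc, sigma), 2),
              round(r_gamma(r, tc, sigma), 2))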
Human dynamics scaling characteristics for aerial inbound logistics operation
NASA Astrophysics Data System (ADS)
Wang, Qing; Guo, Jin-Li
2010-05-01
In recent years, the power-law scaling characteristics of real-life human dynamics have attracted much interest from scholars, since such behavior deviates from a Poisson process. In this paper, we take the whole aerial inbound operation process of a logistics company as the empirical object. The main aim of this work is to study the statistical scaling characteristics of task-restricted work patterns. We find that the statistical variables in five empirical distributions exhibit a unimodal shape with a power-law tail: each distribution has a clear peak, the left part is close to a Poisson distribution, and the right part shows heavy-tailed scaling statistics. Furthermore, in only one of the distributions can the right part be approximated by a power-law form with exponent α=1.50; the others have larger exponents (three are about 2.50 and one is about 3.00). We draw two inferences from these empirical results. First, human behavior can be close to both Poisson statistics and power-law distributions at different levels, and human-computer interaction behavior may be the most common case in logistics operations, and perhaps in task-restricted work patterns generally. Second, the hypothesis of Vázquez et al. (2006) [A. Vázquez, J. G. Oliveira, Z. Dezsö, K.-I. Goh, I. Kondor, A.-L. Barabási, Modeling bursts and heavy tails in human dynamics, Phys. Rev. E 73 (2006) 036127] is probably not sufficient; it claimed that human dynamics fall into two discrete universality classes, whereas there may be a new human dynamics mechanism different from the classical Barabási models.
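A compact way to estimate the tail exponent α reported above is the continuous maximum-likelihood (Hill/Clauset-type) estimator applied to the heavy-tailed right part of a distribution; the cut-off x_min and the input data are placeholders:

    # Maximum-likelihood estimate of a power-law tail exponent:
    #   alpha = 1 + n / sum(ln(x_i / x_min)) over all x_i >= x_min.
    import numpy as np

    def tail_exponent_mle(x, x_min):
        tail = np.asarray(x, dtype=float)
        tail = tail[tail >= x_min]
        n = tail.size
        alpha = 1.0 + n / np.log(tail / x_min).sum()
        return alpha, n

    # alpha near 1.50 versus near 2.50-3.00 is the distinction the abstract
    # draws between the different statistical variables of the operation.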
Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Yutaka, Ono; Furukawa, Toshiaki A.
2017-01-01
Background Several recent studies have shown that total scores on depressive symptom measures in a general population approximate an exponential pattern except at the lower end of the distribution. Furthermore, we previously confirmed that this exponential pattern is present in the individual item responses of the Center for Epidemiologic Studies Depression Scale (CES-D). To confirm the reproducibility of these findings, we investigated the total score distribution and item responses of the Kessler Screening Scale for Psychological Distress (K6) in a nationally representative study. Methods Data were drawn from the National Survey of Midlife Development in the United States (MIDUS), which comprises four subsamples: (1) a national random digit dialing (RDD) sample, (2) oversamples from five metropolitan areas, (3) siblings of individuals from the RDD sample, and (4) a national RDD sample of twin pairs. K6 items are scored on a 5-point scale: “none of the time,” “a little of the time,” “some of the time,” “most of the time,” and “all of the time.” The patterns of the total score distribution and item responses were analyzed using graphical analysis and an exponential regression model. Results The total score distributions of the four subsamples exhibited an exponential pattern with similar rate parameters. The item responses of the K6 approximated a linear pattern from “a little of the time” to “all of the time” on log-normal scales, while the “none of the time” response was not related to this exponential pattern. Discussion The total score distribution and item responses of the K6 showed exponential patterns, consistent with other depressive symptom scales. PMID:28289560
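The exponential-pattern check described above reduces to a log-linear regression of score frequencies; a minimal sketch, with the cut-off excluding the lower end of the distribution chosen purely for illustration:

    # If the right-hand side of the total-score distribution is exponential,
    # log(frequency) is linear in the score and the negative slope is the rate
    # parameter.
    import numpy as np

    def exponential_rate(scores, min_score=5):
        s = np.asarray(scores)
        values, counts = np.unique(s[s >= min_score], return_counts=True)
        slope, intercept = np.polyfit(values, np.log(counts), 1)
        return -slope      # estimated rate parameter of the exponential pattern

    # Comparing the rate parameter across the four MIDUS subsamples is the kind
    # of reproducibility check reported in the abstract.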
Scale-dependent coupling of hysteretic capillary pressure, trapping, and fluid mobilities
NASA Astrophysics Data System (ADS)
Doster, F.; Celia, M. A.; Nordbotten, J. M.
2012-12-01
Many applications of multiphase flow in porous media, including CO2 storage and enhanced oil recovery, require mathematical models that span a large range of length scales. In the context of numerical simulations, practical grid sizes are often on the order of tens of meters, thereby de facto defining a coarse model scale. Under particular conditions, it is possible to approximate the sub-grid-scale distribution of the fluid saturation within a grid cell; that reconstructed saturation can then be used to compute effective properties at the coarse scale. If the density difference between the fluids and the vertical extent of the grid cell are both large, and buoyant segregation within the cell occurs on a sufficiently short time scale, then the phase pressure distributions are essentially hydrostatic and the saturation profile can be reconstructed from the inferred capillary pressures. However, the saturation reconstruction may not be unique, because the parameters and parameter functions of classical formulations of two-phase flow in porous media - the relative permeability functions, the capillary pressure-saturation relationship, and the residual saturations - show path dependence; i.e., their values depend not only on the state variables but also on their drainage and imbibition histories. In this study we focus on capillary pressure hysteresis and trapping and show that the contribution of hysteresis to effective quantities depends on the vertical length scale. By studying the transition between the two extreme cases - the homogeneous saturation distribution for small vertical extents and the completely segregated distribution for large extents - we identify how hysteretic capillary pressure at the local scale induces hysteresis in all coarse-scale quantities for intermediate vertical extents and finally vanishes for large vertical extents. Our results allow for more accurate vertically integrated modeling while improving our understanding of the coupling of capillary pressure and relative permeabilities over larger length scales.
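A minimal sketch of the vertical-equilibrium reconstruction described above, assuming a Brooks-Corey drainage curve with illustrative parameters; with hysteresis, the same inversion would additionally require the local drainage/imbibition history, which is precisely the complication the study addresses:

    # Under vertical equilibrium the local capillary pressure varies linearly
    # with height; inverting an (assumed) Brooks-Corey drainage curve
    #   pc(Sw) = pd * Se^(-1/lam),  Se = (Sw - s_wr) / (1 - s_wr)
    # gives the saturation profile, whose vertical average is the coarse-scale
    # saturation of the grid cell.
    import numpy as np

    def reconstruct_saturation(pc_bottom, height, delta_rho, g=9.81,
                               pd=2.0e3, lam=2.0, s_wr=0.1, n_points=100):
        z = np.linspace(0.0, height, n_points)
        pc = pc_bottom + delta_rho * g * z            # hydrostatic capillary pressure
        se = np.clip((pc / pd) ** (-lam), 0.0, 1.0)   # invert Brooks-Corey where pc >= pd
        se = np.where(pc < pd, 1.0, se)               # below entry pressure: fully wet
        sw = s_wr + (1.0 - s_wr) * se
        return z, sw, sw.mean()                       # uniform spacing: mean = vertical average

    z, sw, sw_coarse = reconstruct_saturation(pc_bottom=1.5e3, height=20.0,
                                              delta_rho=300.0)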
NASA Astrophysics Data System (ADS)
Karhula, Sakari S.; Finnilä, Mikko A.; Freedman, Jonathan D.; Kauppinen, Sami; Valkealahti, Maarit; Lehenkari, Petri; Pritzker, Kenneth P. H.; Nieminen, Heikki J.; Snyder, Brian D.; Grinstaff, Mark W.; Saarakkala, Simo
2017-08-01
Contrast-enhanced micro-computed tomography (CEµCT) with cationic and anionic contrast agents reveals the glycosaminoglycan (GAG) content and distribution in articular cartilage (AC). The advantage of cationic stains (e.g. CA4+) over anionic stains (e.g. Hexabrix®) is that they distribute in proportion to the GAG content, whereas anionic stain distribution in AC is inversely proportional to it. To date, studies using cationic stains have had sufficient resolution to examine their distribution at the macro scale, but not at the micro scale. It is therefore not known whether cationic contrast agents accumulate in the extracellular/pericellular matrix or whether they interact with chondrocytes. The limited resolution has also left open the question of whether CA4+ accumulation in chondrons could lead to erroneous quantification of GAG distribution with low-resolution µCT setups. In this study, we use high-resolution µCT to investigate whether CA4+ accumulates in chondrocytes and, further, to determine whether this affects low-resolution ex vivo µCT studies of CA4+-stained human AC with varying degrees of osteoarthritis. Human osteochondral samples were immersed in three different concentrations of CA4+ (3 mgI/ml, 6 mgI/ml, and 24 mgI/ml) and imaged with high-resolution µCT at several time points. Different CA4+ uptake profiles were observed between the segmented chondrons and the rest of the tissue. While the X-ray-detected CA4+ concentration in chondrons was greater than in the rest of the AC, its contribution to uptake in the whole tissue was negligible and in line with the macro-scale GAG content detected by histology. The efficient uptake of CA4+ into chondrons and the surrounding territorial matrix can be explained by the micro-scale distribution of GAG content. CA4+ uptake in chondrons occurred regardless of the osteoarthritis stage of the samples, and the relative difference between the interterritorial matrix and the segmented chondron area was less than 4%. In conclusion, our results suggest that GAG quantification with CEµCT is not affected by chondron uptake of CA4+. This further supports the use of CA4+ for macro-scale assessment of GAG throughout the AC and highlights the capability of studying chondron properties in 3D at the micro scale.
Can power-law scaling and neuronal avalanches arise from stochastic dynamics?
Touboul, Jonathan; Destexhe, Alain
2010-02-11
The presence of self-organized criticality in biology is often evidenced by a power-law scaling of event size distributions, which can be measured by linear regression on logarithmic axes. We show here that such a procedure does not necessarily mean that the system exhibits self-organized criticality. We first provide an analysis of multisite local field potential (LFP) recordings of brain activity and show that event size distributions defined from negative LFP peaks can be close to power-law distributions. However, this result is not robust to changes in the detection threshold, nor does it survive more rigorous statistical analyses such as the Kolmogorov-Smirnov test. Similar power-law scaling is observed for surrogate signals, suggesting that power-law scaling may be a generic property of thresholded stochastic processes. We next investigate this problem analytically and show that, indeed, stochastic processes can produce spurious power-law scaling without any underlying self-organized criticality. However, this power law is apparent only in logarithmic representations and does not survive more rigorous analysis such as the Kolmogorov-Smirnov test. The same analysis was also performed on an artificial network known to display self-organized criticality. In this case, both the graphical representations and the rigorous statistical analysis unambiguously reveal that the avalanche sizes are power-law distributed. We conclude that logarithmic representations can lead to spurious power-law scaling induced by the stochastic nature of the phenomenon. This apparent power-law scaling does not constitute proof of self-organized criticality, which should be demonstrated by more stringent statistical tests.
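The kind of rigorous test advocated above can be run with the third-party powerlaw package (Alstott et al.), which implements the Clauset-Shalizi-Newman maximum-likelihood fit and Kolmogorov-Smirnov-based model comparison; the surrogate event sizes below are placeholders:

    # Maximum-likelihood power-law fit with model comparison, instead of a
    # straight-line fit on log-log axes.
    import numpy as np
    import powerlaw

    event_sizes = np.abs(np.random.default_rng(0).standard_normal(5000)) + 1.0  # surrogate data

    fit = powerlaw.Fit(event_sizes)                 # ML fit; xmin chosen by KS distance
    print("alpha =", fit.power_law.alpha, "xmin =", fit.power_law.xmin)
    ratio, p_value = fit.distribution_compare("power_law", "lognormal")
    print("log-likelihood ratio vs lognormal:", ratio, "p =", p_value)
    # A 'power law' seen only in log-log regression but rejected here is the
    # spurious scaling the article warns about.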