Enabling end-user network monitoring via the multicast consolidated proxy monitor
NASA Astrophysics Data System (ADS)
Kanwar, Anshuman; Almeroth, Kevin C.; Bhattacharyya, Supratik; Davy, Matthew
2001-07-01
The debugging of problems in IP multicast networks relies heavily on an eclectic set of stand-alone tools. These tools traditionally neither provide a consistent interface nor generate readily interpretable results. We propose the "Multicast Consolidated Proxy Monitor" (MCPM), an integrated system for collecting, analyzing, and presenting multicast monitoring results to both the end user and the network operator at the user's Internet Service Provider (ISP). The MCPM accesses network state information not normally visible to end users and acts as a proxy for disseminating this information. Functionally, through this architecture, we aim to a) provide a view of the multicast network at varying levels of granularity, b) provide end users with a limited ability to query the multicast infrastructure in real time, and c) protect the infrastructure from an overwhelming monitoring load through load control. Operationally, our scheme allows scaling to the ISP's dimensions, adaptability to new protocols (introduced as multicast evolves), threshold detection for crucial parameters, and an access-controlled, customizable interface design. Although the multicast scenario is used to illustrate the benefits of consolidated monitoring, the ultimate aim is to scale the scheme to unicast IP networks.
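The load-control goal in c) can be illustrated with a small sketch: a proxy that caches answers and enforces a per-window query budget before probing the infrastructure. This is an editorial illustration of the idea, not the MCPM implementation; all names and parameters here are assumptions.

```python
import time
from collections import deque

class MonitorProxy:
    """Hypothetical sketch of MCPM-style load control: cache recent
    answers and refuse queries once a per-window budget is spent, so
    end-user monitoring cannot overwhelm the multicast infrastructure."""

    def __init__(self, max_queries, window_s, clock=time.monotonic):
        self.max_queries = max_queries   # probe budget per window
        self.window_s = window_s         # sliding-window length, seconds
        self.clock = clock
        self.stamps = deque()            # timestamps of real probes
        self.cache = {}                  # query key -> cached network state

    def query(self, key, fetch):
        if key in self.cache:            # cached answers cost nothing
            return self.cache[key]
        now = self.clock()
        while self.stamps and now - self.stamps[0] > self.window_s:
            self.stamps.popleft()        # expire stamps outside the window
        if len(self.stamps) >= self.max_queries:
            return None                  # budget exhausted: refuse to probe
        self.stamps.append(now)
        self.cache[key] = fetch(key)     # one real probe of the network
        return self.cache[key]
```

A real deployment would also expire cache entries and distinguish users, but the budget check above is the essence of shielding the infrastructure from monitoring load.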
High-Performance, Reliable Multicasting: Foundations for Future Internet Groupware Applications
NASA Technical Reports Server (NTRS)
Callahan, John; Montgomery, Todd; Whetten, Brian
1997-01-01
Network protocols that provide efficient, reliable, and totally-ordered message delivery to large numbers of users will be needed to support many future Internet applications. The Reliable Multicast Protocol (RMP) is implemented on top of IP multicast to facilitate reliable transfer of data for replicated databases and groupware applications that will emerge on the Internet over the next decade. This paper explores some of the basic questions and applications of reliable multicasting in the context of the development and analysis of RMP.
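Totally-ordered delivery, the property RMP provides, can be sketched with the simplest ordering mechanism: a sequencer stamps each message, and every member delivers strictly in stamp order, buffering any gaps. RMP itself rotates a token among members rather than using a fixed sequencer; this sketch only illustrates the ordering guarantee.

```python
class Sequencer:
    """Assigns a global sequence number to each multicast message.
    (Illustrative stand-in for RMP's rotating-token ordering.)"""
    def __init__(self):
        self.next_seq = 0

    def order(self, msg):
        seq = self.next_seq
        self.next_seq += 1
        return (seq, msg)

class Member:
    """Delivers messages strictly in sequence order, buffering gaps
    so every member sees the identical total order."""
    def __init__(self):
        self.expected = 0
        self.buffer = {}
        self.delivered = []

    def receive(self, seq, msg):
        self.buffer[seq] = msg
        while self.expected in self.buffer:   # drain any in-order run
            self.delivered.append(self.buffer.pop(self.expected))
            self.expected += 1
```

Even if the network reorders packets, every member's `delivered` list ends up identical, which is what replicated databases and groupware need.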
Efficient Group Coordination in Multicast Trees
2001-01-01
We describe a novel protocol to coordinate multipoint groupwork within the IP-multicast framework. The protocol supports Internet-wide coordination for large and highly-interactive groupwork, relying on the dissemination of coordination directives among group members across a shared end-to-end multicast tree.
Multicasting in Wireless Communications (Ad-Hoc Networks): Comparison against a Tree-Based Approach
NASA Astrophysics Data System (ADS)
Rizos, G. E.; Vasiliadis, D. C.
2007-12-01
We examine on-demand multicasting in ad hoc networks. The Core Assisted Mesh Protocol (CAMP) is a well-known protocol for multicast routing in ad-hoc networks, generalizing the notion of core-based trees employed for internet multicasting into multicast meshes that have much richer connectivity than trees. On the other hand, wireless tree-based multicast routing protocols use much simpler structures for determining route paths, using only parent-child relationships. In this work, we compare the performance of the CAMP protocol against the performance of wireless tree-based multicast routing protocols, in terms of two important factors, namely packet delay and ratio of dropped packets.
NASA Astrophysics Data System (ADS)
Allani, Mouna; Garbinato, Benoît; Pedone, Fernando
An increasing number of Peer-to-Peer (P2P) Internet applications rely today on data dissemination as their cornerstone, e.g., audio or video streaming, multi-party games. These applications typically depend on some support for multicast communication, where peers interested in a given data stream can join a corresponding multicast group. As a consequence, the efficiency, scalability, and reliability guarantees of these applications are tightly coupled with that of the underlying multicast mechanism.
Tera-node Network Technology (Task 3) Scalable Personal Telecommunications
2000-03-14
Simulation results of this work may be found at http://north.east.isi.edu/spt/audio.html. Topics include: 1. End-to-end Architecture for Quality-adaptive Streaming Applications over the Internet; ... Adaptation; 4. Multimedia Proxy Caching; 5. Experiments with the Rate Adaptation Protocol (RAP); 6. Providing leadership and innovation to the Internet Research Task Force (IRTF) Reliable Multicast Research Group (RMRG).
Multisites Coordination in Shared Multicast Trees
1999-01-01
Applications include conferencing, distributed interactive simulations, and collaborative systems. We describe a novel protocol to coordinate multipoint groupwork in the IP-multicast framework. The protocol supports Internet-wide coordination for large and highly-interactive groupwork, relying on the transmission of coordination directives among group members.
Mobile Multicast in Hierarchical Proxy Mobile IPV6
NASA Astrophysics Data System (ADS)
Hafizah Mohd Aman, Azana; Hashim, Aisha Hassan A.; Mustafa, Amin; Abdullah, Khaizuran
2013-12-01
Mobile Internet Protocol Version 6 (MIPv6) environments have been developing very rapidly. Many challenges arise with the fast progress of MIPv6 technologies and its environment. Therefore the importance of improving the existing architecture and operations increases. One of the many challenges which need to be addressed is the need for performance improvement to support mobile multicast. Numerous approaches have been proposed to improve mobile multicast performance. This includes Context Transfer Protocol (CXTP), Hierarchical Mobile IPv6 (HMIPv6), Fast Mobile IPv6 (FMIPv6) and Proxy Mobile IPv6 (PMIPv6). This document describes multicast context transfer in hierarchical proxy mobile IPv6 (H-PMIPv6) to provide better multicasting performance in PMIPv6 domain.
Inertial Motion Tracking for Inserting Humans into a Networked Synthetic Environment
2007-08-31
tracking methods. One method requires markers on the tracked human body, and the other method does not use markers. OPTOTRAK from Northern Digital Inc. is a... of using multicasting protocols. Unfortunately, most routers on the Internet are not configured for multicasting. A technique called tunneling is used to overcome this problem. Tunneling is a software solution that runs on the end-point routers/computers and allows multicast packets to traverse
Internet technologies and requirements for telemedicine
NASA Technical Reports Server (NTRS)
Lamaster, H.; Meylor, J.; Meylor, F.
1997-01-01
Internet technologies are briefly introduced and those applicable for telemedicine are reviewed. Multicast internet technologies are described. The National Aeronautics and Space Administration (NASA) 'Telemedicine Space-bridge to Russia' project is described and used to derive requirements for internet telemedicine. Telemedicine privacy and Quality of Service (QoS) requirements are described.
WDM Network and Multicasting Protocol Strategies
Zaim, Abdul Halim
2014-01-01
Optical technology is attracting extensive attention and ever-increasing improvement because of the huge amount of network traffic caused by the growing number of Internet users and their rising demands. With wavelength division multiplexing (WDM) it is easier to take advantage of optical networks, and together with optical burst switching (OBS) these technologies are the best choices for constructing WDM networks with low delay rates and better data transparency. Furthermore, multicasting in WDM is an urgent solution for bandwidth-intensive applications. In this paper, a new multicasting protocol with OBS is proposed. The protocol depends on a leaf-initiated structure. The network is composed of source, ingress switches, intermediate switches, edge switches, and client nodes. The performance of the protocol is examined with the Just Enough Time (JET) and Just In Time (JIT) reservation protocols. The paper also surveys most of the recent advances in WDM multicasting in optical networks under three common headings: broadcast-and-select networks, wavelength-routed networks, and OBS networks. In addition, multicast routing protocols are briefly summarized and optical burst-switched WDM networks are investigated with the proposed multicast schemes. PMID:24744683
Lightweight and Statistical Techniques for Petascale Debugging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Barton
2014-06-30
This project investigated novel techniques for debugging scientific applications on petascale architectures. In particular, we developed lightweight tools that narrow the problem space when bugs are encountered. We also developed techniques that either limit the number of tasks and the code regions to which a developer must apply a traditional debugger or that apply statistical techniques to provide direct suggestions of the location and type of error. We extended previous work on the Stack Trace Analysis Tool (STAT), which has already demonstrated scalability to over one hundred thousand MPI tasks. We also extended statistical techniques developed to isolate programming errors in widely used sequential or threaded applications in the Cooperative Bug Isolation (CBI) project to large-scale parallel applications. Overall, our research substantially improved productivity on petascale platforms through a tool set for debugging that complements existing commercial tools. Previously, Office of Science application developers relied either on primitive manual debugging techniques based on printf or they used tools, such as TotalView, that do not scale beyond a few thousand processors. However, bugs often arise at scale, and substantial effort and computation cycles are wasted in either reproducing the problem in a smaller run that can be analyzed with the traditional tools or in repeated runs at scale that use the primitive techniques. New techniques that work at scale and automate the process of identifying the root cause of errors were needed. These techniques significantly reduced the time spent debugging petascale applications, thus leaving a greater overall amount of time for application scientists to pursue the scientific objectives for which the systems are purchased.
We developed a new paradigm for debugging at scale: techniques that reduce the debugging scenario to a scale suitable for traditional debuggers, e.g., by narrowing the search for the root-cause analysis to a small set of nodes or by identifying equivalence classes of nodes and sampling our debug targets from them. We implemented these techniques as lightweight tools that efficiently work on the full scale of the target machine. We explored four lightweight debugging refinements: generic classification parameters, such as stack traces; application-specific classification parameters, such as global variables; statistical data acquisition techniques; and machine-learning-based approaches to perform root cause analysis. Work done under this project can be divided into two categories: new algorithms and techniques for scalable debugging, and foundation infrastructure work on our MRNet multicast-reduction framework for scalability and on the Dyninst binary analysis and instrumentation toolkits.
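The equivalence-class idea above can be sketched in a few lines: group tasks whose stack traces are identical, then hand a traditional debugger only one representative per class. This is an editorial illustration of the reduction idea, not STAT's actual code.

```python
from collections import defaultdict

def equivalence_classes(task_traces):
    """Group MPI tasks whose stack traces are identical. `task_traces`
    maps a task rank to its stack trace (a list of frame names)."""
    classes = defaultdict(list)
    for task, trace in task_traces.items():
        classes[tuple(trace)].append(task)
    return classes

def debug_targets(classes):
    """Sample one representative task per class: the (usually tiny) set
    a developer must attach a traditional debugger to."""
    return sorted(members[0] for members in classes.values())
```

On a hang where 99,998 tasks sit in `MPI_Wait` and two sit in an I/O routine, this reduces one hundred thousand debug targets to two.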
Dynamic multicast routing scheme in WDM optical network
NASA Astrophysics Data System (ADS)
Zhu, Yonghua; Dong, Zhiling; Yao, Hong; Yang, Jianyong; Liu, Yibin
2007-11-01
During the information era, the Internet and the World Wide Web have developed rapidly. Ever wider bandwidth is therefore required at ever lower cost, and service demands have diversified: data, images, video, and other special transmission demands present both a challenge and an opportunity to service providers. Simultaneously, electronic equipment has approached its limits. Optical communication based on wavelength division multiplexing (WDM) and optical cross-connects (OXCs) thus shows great potential for building an optical network on its unique technical advantages and multi-wavelength characteristics. In this paper, we propose a multi-layered graph model with inter-layer paths to solve the multicast routing and wavelength assignment (RWA) problem by employing an efficient graph-theoretic formulation. We also propose an efficient dynamic multicast algorithm, the Distributed Message Copying Multicast (DMCM) mechanism. Under the proposed scheme, a multicast tree with minimum hop count can be constructed dynamically.
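The minimum-hop multicast tree mentioned above can be illustrated centrally with breadth-first search: BFS parents give shortest-hop paths, and the union of the paths from the source to each group member forms the tree. The paper's DMCM mechanism is distributed; this is only a sketch of the minimum-hop objective.

```python
from collections import deque

def min_hop_multicast_tree(adj, source, members):
    """Build a multicast tree whose path from `source` to every member
    has minimum hop count, by uniting BFS shortest-path branches.
    `adj` is an adjacency dict: node -> list of neighbor nodes."""
    parent = {source: None}
    q = deque([source])
    while q:                          # standard BFS from the source
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    edges = set()
    for m in members:                 # walk each member back to the source
        node = m
        while parent[node] is not None:
            edges.add((parent[node], node))
            node = parent[node]
    return edges
```

Shared prefixes of the member paths appear only once in the edge set, which is exactly why a tree uses fewer links than per-member unicast paths.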
Protocol Architecture Model Report
NASA Technical Reports Server (NTRS)
Dhas, Chris
2000-01-01
NASA's Glenn Research Center (GRC) defines and develops advanced technology for high priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to examine protocols and architectures for an In-Space Internet Node. CNS has developed a methodology for network reference models to support NASA's four mission areas: Earth Science, Space Science, Human Exploration and Development of Space (HEDS), and Aerospace Technology. This report applies the methodology to three space Internet-based communications scenarios for future missions. CNS has conceptualized, designed, and developed space Internet-based communications protocols and architectures for each of the independent scenarios. The scenarios are: Scenario 1: Unicast communications between a Low-Earth-Orbit (LEO) spacecraft in-space Internet node and a ground terminal Internet node via a Tracking and Data Relay Satellite (TDRS) transfer; Scenario 2: Unicast communications between a Low-Earth-Orbit (LEO) International Space Station and a ground terminal Internet node via a TDRS transfer; Scenario 3: Multicast Communications (or "Multicasting"), 1 Spacecraft to N Ground Receivers, N Ground Transmitters to 1 Ground Receiver via a Spacecraft.
Lopez, Esteban Alejandro; Costa, Orozimbo Alves; Ferrari, Deborah Viviane
2016-10-01
The purpose of this research note is to describe the development and technical validation of the Mobile Based Assistive Listening System (MoBALS), a free-of-charge smartphone-based remote microphone application. MoBALS Version 1.0 was developed for Android (Version 2.1 or higher) and was coded with Java using Eclipse Indigo with the Android Software Development Kit. A Wi-Fi router with background traffic and 2 affordable smartphones were used for debugging and technical validation comprising, among other things, multicasting capability, data packet loss, and battery consumption. MoBALS requires at least 2 smartphones connected to the same Wi-Fi router for signal transmission and reception. Subscriber identity module cards or Internet connections are not needed. MoBALS can be used alone or connected to a hearing aid or cochlear implant via direct audio input. Maximum data packet loss was 99.28%, and minimum battery life was 5 hr. Other relevant design specifications and their implementation are described. MoBALS performed as a remote microphone with enhanced accessibility features and avoids overhead expenses by using already-available and affordable technology. The further development and technical revalidation of MoBALS will be followed by clinical evaluation with persons with hearing impairment.
Minimum Interference Channel Assignment Algorithm for Multicast in a Wireless Mesh Network.
Choi, Sangil; Park, Jong Hyuk
2016-12-02
Wireless mesh networks (WMNs) have been considered as one of the key technologies for the configuration of wireless machines since they emerged. In a WMN, wireless routers provide multi-hop wireless connectivity between hosts in the network and also allow them to access the Internet via gateway devices. Wireless routers are typically equipped with multiple radios operating on different channels to increase network throughput. Multicast is a form of communication that delivers data from a source to a set of destinations simultaneously. It is used in a number of applications, such as distributed games, distance education, and video conferencing. In this study, we address a channel assignment problem for multicast in multi-radio multi-channel WMNs. In a multi-radio multi-channel WMN, two nearby nodes will interfere with each other and cause a throughput decrease when they transmit on the same channel. Thus, an important goal for multicast channel assignment is to reduce the interference among networked devices. We have developed a minimum interference channel assignment (MICA) algorithm for multicast that accurately models the interference relationship between pairs of multicast tree nodes using the concept of the interference factor and assigns channels to tree nodes to minimize interference within the multicast tree. Simulation results show that MICA achieves higher throughput and lower end-to-end packet delay compared with an existing channel assignment algorithm named multi-channel multicast (MCM). In addition, MICA achieves much lower throughput variation among the destination nodes than MCM.
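The channel-assignment objective can be sketched with a greedy pass: visit tree nodes in order and give each node the channel that minimizes its summed interference factor with already-assigned nodes on that channel. This is an illustration in the spirit of MICA, not the published algorithm; the interference factors here are assumed inputs.

```python
def assign_channels(nodes, interference, channels):
    """Greedy channel assignment sketch. `interference[(u, v)]` is an
    assumed symmetric interference factor between tree nodes u < v;
    missing pairs interfere with factor 0. Returns node -> channel."""
    assigned = {}
    for n in nodes:
        def cost(ch):
            # Total interference n would suffer on channel ch, summed
            # over already-assigned nodes sharing that channel.
            return sum(interference.get(tuple(sorted((n, m))), 0)
                       for m, c in assigned.items() if c == ch)
        assigned[n] = min(channels, key=cost)   # ties -> first channel
    return assigned
```

With more radios than interfering neighbors, strongly-interfering pairs land on different channels, which is the throughput mechanism the abstract describes.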
Secure Hierarchical Multicast Routing and Multicast Internet Anonymity
1998-06-01
Multimedia, Summer 94, pages 76-79, 1994. [15] David Chaum. Blind signatures for untraceable payments. In Proc. Crypto, pages 199-203, 1982. [16] David L... use of digital signatures, which consist of a cryptographic hash of the message encrypted with the private key of the signer. Digitally-signed messages... signature on the request and on the certificate it contains. Notice that the location service need not retrieve the initiator's public key, as it is contained
Experimental Evaluation of Unicast and Multicast CoAP Group Communication
Ishaq, Isam; Hoebeke, Jeroen; Moerman, Ingrid; Demeester, Piet
2016-01-01
The Internet of Things (IoT) is expanding rapidly to new domains in which embedded devices play a key role and gradually outnumber traditionally-connected devices. These devices are often constrained in their resources and are thus unable to run standard Internet protocols. The Constrained Application Protocol (CoAP) is a new alternative standard protocol that implements the same principles as the Hypertext Transfer Protocol (HTTP), but is tailored towards constrained devices. In many IoT application domains, devices need to be addressed in groups in addition to being addressable individually. Two main approaches are currently being proposed in the IoT community for CoAP-based group communication. The main difference between the two approaches lies in the underlying communication type: multicast versus unicast. In this article, we experimentally evaluate those two approaches using two wireless sensor testbeds and under different test conditions. We highlight the pros and cons of each of them and propose combining these approaches in a hybrid solution to better suit certain use case requirements. Additionally, we provide a solution for multicast-based group membership management using CoAP. PMID:27455262
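The multicast-versus-unicast trade-off the article evaluates can be shown with a toy cost model: serial unicast sends one request per group member, multicast sends one request total, and a hybrid rule picks between them per group. The threshold rule below is an illustrative assumption, not the article's hybrid design.

```python
def transmissions(group_size, mode):
    """Requests a CoAP client must send to reach a whole group:
    one per member for serial unicast, one total for multicast."""
    if mode == "multicast":
        return 1
    if mode == "unicast":
        return group_size
    raise ValueError(mode)

def hybrid(group_sizes, threshold):
    """Assumed hybrid rule: multicast for groups of `threshold` or more
    members, unicast otherwise. Returns total transmissions."""
    return sum(transmissions(g, "multicast" if g >= threshold else "unicast")
               for g in group_sizes)
```

This model ignores multicast's downsides (no per-destination reliability, congestion-control limits), which is precisely why the article argues for a hybrid rather than multicast everywhere.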
2014-05-01
developed techniques for building better IP geolocation systems. Geolocation has many applications, such as presenting advertisements for local business establishments on web pages, to debugging network performance issues, to attributing attack traffic to... "Pennsylvania."
Digital Video and the Internet: A Powerful Combination.
ERIC Educational Resources Information Center
Barron, Ann E.; Orwig, Gary W.
1995-01-01
Provides an overview of digital video and outlines hardware and software necessary for interactive training on the World Wide Web and for videoconferences via the Internet. Lists sites providing additional information on digital video, on CU-SeeMe software, and on MBONE (Multicast BackBONE), a technology that permits real-time transmission of…
A decentralized software bus based on IP multicasting
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd
1995-01-01
We describe a decentralized, reconfigurable implementation of a conference management system based on the low-level Internet Protocol (IP) multicasting protocol. IP multicasting allows low-cost, world-wide, two-way transmission of data between large numbers of conferencing participants through the Multicasting Backbone (MBone). Each conference is structured as a software bus: a messaging system that provides a run-time interconnection model that acts as a separate agent (i.e., the bus) for routing, queuing, and delivering messages between distributed programs. Unlike the client-server interconnection model, the software bus model provides a level of indirection that enhances the flexibility and reconfigurability of a distributed system. Current software bus implementations like POLYLITH, however, rely on a centralized bus process and point-to-point protocols (i.e., TCP/IP) to route, queue, and deliver messages. We implement a software bus called the MULTIBUS that relies on a separate process only for routing and uses a reliable IP multicasting protocol for delivery of messages. The use of multicasting means that interconnections are independent of IP machine addresses. This approach allows reconfiguration of bus participants during system execution without notifying other participants of new IP addresses. The use of IP multicasting also permits an economy of scale in the number of participants. We describe the MULTIBUS protocol elements and show how our implementation performs better than centralized bus implementations.
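The software-bus indirection can be sketched in-process: programs address messages to topics on the bus, never to peer addresses, so a participant can be replaced or moved without the others noticing. The real MULTIBUS delivers over reliable IP multicast; this sketch only models the interconnection pattern.

```python
class SoftwareBus:
    """Minimal topic-based software bus sketch. Publishers and
    subscribers know only topic names, never each other's addresses;
    that level of indirection is what makes reconfiguration cheap."""

    def __init__(self):
        self.subscribers = {}   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver to every current subscriber of the topic.
        for cb in self.subscribers.get(topic, []):
            cb(message)
```

Swapping a subscriber out and a replacement in changes only the bus's callback list, mirroring how MULTIBUS participants can rejoin under new IP addresses unnoticed.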
A Secure Multicast Framework in Large and High-Mobility Network Groups
NASA Astrophysics Data System (ADS)
Lee, Jung-San; Chang, Chin-Chen
With the widespread use of Internet applications such as teleconferencing, Pay-TV, collaborative tasks, and message services, how to construct and distribute the group session key to all group members securely is becoming more and more important. Instead of adopting point-to-point packet delivery, these emerging applications are based upon the mechanism of multicast communication, which allows group members to communicate with multiple parties efficiently. There are two main issues in the mechanism of multicast communication: key distribution and scalability. The first issue is how to distribute the group session key to all group members securely. The second is how to maintain high performance in large network groups. Group members in conventional multicast systems have to keep numerous secret keys in databases, which makes it very inconvenient for them. Furthermore, in case a member joins or leaves the communication group, many involved participants have to change their own secret keys to preserve the forward secrecy and the backward secrecy. We consequently propose a novel scheme for providing secure multicast communication in large network groups. Our proposed framework not only preserves the forward secrecy and the backward secrecy but also possesses better performance than existing alternatives. Specifically, simulation results demonstrate that our scheme is suitable for high-mobility environments.
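The forward/backward-secrecy requirement can be made concrete with a naive flat-rekeying sketch: a controller generates a fresh session key on every join and leave. This per-event full redistribution is exactly the cost that schemes like the one above try to avoid; the sketch only illustrates the secrecy property itself, not the proposed framework.

```python
import os

class GroupKeyManager:
    """Naive flat rekeying: a fresh random session key on every
    membership change, so a departed member cannot read future traffic
    (forward secrecy) and a new member cannot read past traffic
    (backward secrecy)."""

    def __init__(self):
        self.members = set()
        self.session_key = os.urandom(16)
        self.history = []          # keys retired by rekey events

    def _rekey(self):
        self.history.append(self.session_key)
        self.session_key = os.urandom(16)

    def join(self, member):
        self._rekey()              # joiner must not read earlier traffic
        self.members.add(member)

    def leave(self, member):
        self.members.discard(member)
        self._rekey()              # leaver must not read later traffic
```

Because every remaining member must receive each new key, this approach scales linearly per event; hierarchical key trees (as in LKH-style schemes) cut that to logarithmic cost.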
Multicast for savings in cache-based video distribution
NASA Astrophysics Data System (ADS)
Griwodz, Carsten; Zink, Michael; Liepert, Michael; On, Giwon; Steinmetz, Ralf
1999-12-01
Internet video-on-demand (VoD) today streams videos directly from server to clients, because re-distribution infrastructure is not yet established. Intranet solutions exist but are typically managed centrally. Caching may remove these management needs; however, existing web caching strategies are not applicable because they operate under different conditions. We propose movie distribution by means of caching, and study its feasibility from the service provider's point of view. We introduce the combination of our reliable multicast protocol LCRTP for caching hierarchies with our enhancement of the patching technique for bandwidth-friendly true VoD, which does not depend on network resource guarantees.
NASA Technical Reports Server (NTRS)
Deardorff, Glenn; Djomehri, M. Jahed; Freeman, Ken; Gambrel, Dave; Green, Bryan; Henze, Chris; Hinke, Thomas; Hood, Robert; Kiris, Cetin; Moran, Patrick;
2001-01-01
A series of NASA presentations for the Supercomputing 2001 conference are summarized. The topics include: (1) Mars Surveyor Landing Sites "Collaboratory"; (2) Parallel and Distributed CFD for Unsteady Flows with Moving Overset Grids; (3) IP Multicast for Seamless Support of Remote Science; (4) Consolidated Supercomputing Management Office; (5) Growler: A Component-Based Framework for Distributed/Collaborative Scientific Visualization and Computational Steering; (6) Data Mining on the Information Power Grid (IPG); (7) Debugging on the IPG; (8) DeBakey Heart Assist Device; (9) Unsteady Turbopump for Reusable Launch Vehicle; (10) Exploratory Computing Environments Component Framework; (11) OVERSET Computational Fluid Dynamics Tools; (12) Control and Observation in Distributed Environments; (13) Multi-Level Parallelism Scaling on NASA's Origin 1024 CPU System; (14) Computing, Information, & Communications Technology; (15) NAS Grid Benchmarks; (16) IPG: A Large-Scale Distributed Computing and Data Management System; and (17) ILab: Parameter Study Creation and Submission on the IPG.
QoS Adaptation in Multimedia Multicast Conference Applications for E-Learning Services
ERIC Educational Resources Information Center
Deusdado, Sérgio; Carvalho, Paulo
2006-01-01
The evolution of the World Wide Web service has incorporated new distributed multimedia conference applications, powering a new generation of e-learning development and allowing improved interactivity and pro-human relations. Groupware applications are increasingly representative in the Internet home applications market, however, the Quality of…
Combined Wavelet Video Coding and Error Control for Internet Streaming and Multicast
NASA Astrophysics Data System (ADS)
Chu, Tianli; Xiong, Zixiang
2003-12-01
This paper proposes an integrated approach to Internet video streaming and multicast (e.g., receiver-driven layered multicast (RLM) by McCanne) based on combined wavelet video coding and error control. We design a packetized wavelet video (PWV) coder to facilitate its integration with error control. The PWV coder produces packetized layered bitstreams that are independent among layers while being embedded within each layer. Thus, a lost packet only renders the following packets in the same layer useless. Based on the PWV coder, we search for a multilayered error-control strategy that optimally trades off source and channel coding for each layer under a given transmission rate to mitigate the effects of packet loss. While both the PWV coder and the error-control strategy are new—the former incorporates embedded wavelet video coding and packetization and the latter extends the single-layered approach for RLM by Chou et al.—the main distinction of this paper lies in the seamless integration of the two parts. Theoretical analysis shows a gain of up to 1 dB on a channel with 20% packet loss using our combined approach over separate designs of the source coder and the error-control mechanism. This is also substantiated by our simulations with a gain of up to 0.6 dB. In addition, our simulations show a gain of up to 2.2 dB over previous results reported by Chou et al.
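The loss model that motivates the PWV packetization can be sketched directly: because each layer's bitstream is embedded but the layers are mutually independent, a lost packet truncates only its own layer from that point on. This is an editorial illustration of the stated property, not the coder itself.

```python
def usable_packets(layers, lost):
    """PWV-style loss semantics. `layers` maps layer id -> ordered list
    of packets; `lost` is a set of (layer, packet_index) pairs. Within
    a layer the bitstream is embedded, so the first lost packet makes
    all later packets in that layer useless; other layers are
    unaffected."""
    usable = {}
    for layer, packets in layers.items():
        kept = []
        for i, pkt in enumerate(packets):
            if (layer, i) in lost:
                break              # truncate this layer at the first loss
            kept.append(pkt)
        usable[layer] = kept
    return usable
```

This containment of loss to one layer is what lets the multilayered error-control strategy trade off source and channel coding per layer rather than over the whole stream.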
Design and Evaluation for the End-to-End Detection of TCP/IP Header Manipulation
2014-06-01
Cooperative Association for Internet Data Analysis (CAIDA); CDN, content delivery network; CE, congestion encountered; CRC, cyclic redundancy check; CWR, congestion window reduced... Switzerland was primarily developed as a network neutrality analysis tool to detect when Internet service providers (ISPs) were interfering with... Understanding, measuring, and debugging IP networks, particularly across administrative domains, is challenging. One aspect of the
MDP: Reliable File Transfer for Space Missions
NASA Technical Reports Server (NTRS)
Rash, James; Criscuolo, Ed; Hogie, Keith; Parise, Ron; Hennessy, Joseph F. (Technical Monitor)
2002-01-01
This paper presents work being done at NASA/GSFC by the Operating Missions as Nodes on the Internet (OMNI) project to demonstrate the application of the Multicast Dissemination Protocol (MDP) to space missions to reliably transfer files. This work builds on previous work by the OMNI project to apply Internet communication technologies to space communication. The goal of this effort is to provide an inexpensive, reliable, standard, and interoperable mechanism for transferring files in the space communication environment. Limited bandwidth, noise, delay, intermittent connectivity, link asymmetry, and one-way links are all possible issues for space missions. Although these are link-layer issues, they can have a profound effect on the performance of transport and application level protocols. MDP, a UDP-based reliable file transfer protocol, was designed for multicast environments which have to address these same issues, and it has done so successfully. Developed by the Naval Research Lab in the mid-1990s, MDP is now in daily use by both the US Post Office and the DoD. This paper describes the use of MDP to provide automated end-to-end data flow for space missions. It examines the results of a parametric study of MDP in a simulated space link environment and discusses the results in terms of their implications for space missions. Lessons learned are addressed, which suggest minor enhancements to the MDP user interface to add specific features for space mission requirements, such as dynamic control of data rate, and a checkpoint/resume capability. These are features that are provided for in the protocol, but are not implemented in the sample MDP application that was provided. A brief look is also taken at the status of standardization. A version of MDP known as NORM (NACK-Oriented Reliable Multicast) is in the process of becoming an IETF standard.
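The NACK-oriented repair loop at the heart of MDP/NORM can be sketched simply: the sender pushes all blocks, the receiver reports only the missing sequence numbers, and the sender repairs those until the file is complete. The lossy-channel callback is an assumed stand-in; real MDP/NORM adds FEC, timers, and NACK suppression.

```python
def nack_transfer(blocks, channel):
    """Sketch of NACK-oriented reliable delivery. `blocks` is a list of
    byte blocks; `channel(seq, data)` models a lossy link and returns
    True on successful delivery. Returns (reassembled bytes, rounds)."""
    received = {}
    pending = set(range(len(blocks)))   # the receiver's NACK list
    rounds = 0
    while pending:
        rounds += 1
        for seq in sorted(pending):     # (re)send everything still missing
            if channel(seq, blocks[seq]):
                received[seq] = blocks[seq]
        pending = set(range(len(blocks))) - set(received)
    return b"".join(received[i] for i in range(len(blocks))), rounds
```

Because receivers stay silent when nothing is missing, this pattern suits the asymmetric and one-way-leaning links the paper describes far better than per-packet ACKs.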
Evaluation of multicast schemes in optical burst-switched networks: the case with dynamic sessions
NASA Astrophysics Data System (ADS)
Jeong, Myoungki; Qiao, Chunming; Xiong, Yijun; Vandenhoute, Marc
2000-10-01
In this paper, we evaluate the performance of several multicast schemes in optical burst-switched WDM networks, taking into account the overheads due to control packets and guard bands (GBs) of bursts on separate channels (wavelengths). A straightforward scheme, called Separate Multicasting (S-MCAST), has each source node construct separate bursts for its multicast traffic (one per multicast session) and its unicast traffic. To reduce the overhead due to GBs (and control packets), one may piggyback the multicast traffic in bursts containing unicast traffic, using a scheme called Multiple Unicasting (M-UCAST). The third scheme, called Tree-Shared Multicasting (TS-MCAST), allows multicast traffic belonging to multiple multicast sessions to be mixed together in a burst, which is delivered via a shared multicast tree. In [1], we evaluated several multicast schemes with static sessions at the flow level. In this paper, we perform a simple analysis of the multicast schemes and evaluate the performance of the three schemes, focusing on the case with dynamic sessions, in terms of link utilization, bandwidth consumption, blocking (loss) probability, goodput, and processing load.
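The guard-band trade-off among the three schemes reduces to counting bursts. The sketch below uses illustrative per-burst costs, not values from the paper:

```python
# Back-of-the-envelope overhead comparison for the three schemes.
# GB and CTRL are assumed per-burst costs, purely illustrative.
GB = 1.0          # guard-band time per burst (us), assumed
CTRL = 0.5        # control-packet processing cost per burst, assumed

def overhead(num_bursts):
    return num_bursts * (GB + CTRL)

sessions, unicast_bursts = 4, 10
s_mcast = overhead(unicast_bursts + sessions)   # separate burst per session
m_ucast = overhead(unicast_bursts)              # multicast piggybacked on unicast bursts
ts_mcast = overhead(unicast_bursts + 1)         # sessions share one tree-shared burst
print(s_mcast, m_ucast, ts_mcast)               # → 21.0 15.0 16.5
```

The counting makes the ordering plain: M-UCAST adds no bursts at all, TS-MCAST adds one shared burst, and S-MCAST pays one guard band and one control packet per session.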
2003-09-01
This restriction limits the deployment to small and medium-sized enterprises. The Internet cannot universally use DVMRP for this reason. In addition... Master's Thesis, September 2003; submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science.
Traffic Generator (TrafficGen) Version 1.4.2: Users Guide
2016-06-01
events, the user has to enter them manually. We will research and implement a way to better define and organize the multicast addresses so they can be... the network with Transmission Control Protocol and User Datagram Protocol Internet Protocol traffic. Each node generating network traffic in an... Contents: TrafficGen Graphical User Interface (GUI); Anatomy of the User Interface; Scenario Configuration and MGEN Files; Working with...
NASA Astrophysics Data System (ADS)
Wei, Chengying; Xiong, Cuilian; Liu, Huanlin
2017-12-01
The maximal multicast stream algorithm based on network coding (NC) can improve throughput in wavelength-division multiplexing (WDM) networks, but the result remains far below the network's theoretical maximum. Moreover, existing multicast stream algorithms do not provide the information distribution pattern and the routing at the same time. In this paper, an improved genetic algorithm is proposed to maximize optical multicast throughput via NC and to determine the multicast stream distribution through hybrid chromosome construction, for multicast with a single source and multiple destinations. The proposed hybrid chromosomes combine binary chromosomes, which represent the optical multicast routing, with integer chromosomes, which indicate the multicast stream distribution. A fitness function is designed to guarantee that each destination receives the maximum number of decodable multicast streams. Simulation results show that the proposed method is far superior to typical NC-based maximal multicast stream algorithms in terms of network throughput in WDM networks.
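A toy sketch of the hybrid chromosome encoding described above. The route set, capacity cap, and fitness details are illustrative assumptions, not the paper's exact formulation:

```python
import random
random.seed(1)

# Hypothetical hybrid chromosome: a binary part selecting candidate routes,
# and an integer part giving the number of coded streams on each route.
def make_chromosome(n_routes, max_streams):
    binary = [random.randint(0, 1) for _ in range(n_routes)]
    integer = [random.randint(1, max_streams) for _ in range(n_routes)]
    return binary, integer

def fitness(chrom, capacity):
    binary, integer = chrom
    # Streams delivered = sum over the selected routes, capped by capacity;
    # the GA maximizes this (more decodable streams at each destination).
    delivered = sum(s for sel, s in zip(binary, integer) if sel)
    return min(delivered, capacity)

pop = [make_chromosome(5, 3) for _ in range(10)]
best = max(pop, key=lambda c: fitness(c, capacity=8))
print(fitness(best, 8))
```

A full GA would add crossover and mutation applied separately to the binary and integer parts; the point here is only how one chromosome can carry both the routing and the stream-distribution decision.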
NASA Astrophysics Data System (ADS)
Emmerson, S. R.; Veeraraghavan, M.; Chen, S.; Ji, X.
2015-12-01
Results of a pilot deployment of a major new version of the Unidata Local Data Manager (LDM-7) are presented. The Unidata LDM was developed by the University Corporation for Atmospheric Research (UCAR) and comprises a suite of software for the distribution and local processing of data in near real-time. It is widely used in the geoscience community to distribute observational data and model output, most notably as the foundation of the Unidata Internet Data Distribution (IDD) system run by UCAR, but also in private networks operated by NOAA, NASA, USGS, etc. The current version, LDM-6, uses at least one unicast TCP connection per receiving host. With over 900 connections, the bit-rate of total outgoing IDD traffic from UCAR averages approximately 3.0 Gb/s, with peak data rates exceeding 6.6 Gb/s. Expected increases in data volume suggest that a more efficient distribution mechanism will be required in the near future. LDM-7 greatly reduces the outgoing bandwidth requirement by incorporating a recently-developed "semi-reliable" IP multicast protocol while retaining the unicast TCP mechanism for reliability. During the summer of 2015, UCAR and the University of Virginia conducted a pilot deployment of the Unidata LDM-7 among U.S. university participants with access to the Internet2 network. Results of this pilot program, along with comparisons to the existing Unidata LDM-6 system, are presented.
VMCast: A VM-Assisted Stability Enhancing Solution for Tree-Based Overlay Multicast
Gu, Weidong; Zhang, Xinchang; Gong, Bin; Zhang, Wei; Wang, Lu
2015-01-01
Tree-based overlay multicast is an effective group communication method for media streaming applications. However, a group member's departure causes all of its descendants to be disconnected from the multicast tree for some time, which results in poor performance. This problem is difficult to address because the overlay multicast tree is intrinsically unstable. In this paper, we propose a novel stability enhancing solution, VMCast, for tree-based overlay multicast. This solution uses two types of on-demand cloud virtual machines (VMs), i.e., multicast VMs (MVMs) and compensation VMs (CVMs). MVMs are used to disseminate the multicast data, whereas CVMs are used to offer streaming compensation. The VMs in the same cloud datacenter constitute a VM cluster. Each VM cluster is responsible for a service domain (VMSD), and each group member belongs to a specific VMSD. The data source delivers the multicast data to MVMs through a reliable path, and MVMs further disseminate the data to group members along domain overlay multicast trees. This approach structurally improves the stability of the overlay multicast tree. We further utilize CVM-based streaming compensation to enhance the stability of the data distribution in the VMSDs. VMCast can be used as an extension to existing tree-based overlay multicast solutions, to provide better services for media streaming applications. We applied VMCast to two application instances (i.e., HMTP and HCcast). The results show that it significantly enhances the stability of the data distribution. PMID:26562152
Remote Observing and Automatic FTP on Kitt Peak
NASA Astrophysics Data System (ADS)
Seaman, Rob; Bohannan, Bruce
As part of KPNO's Internet-based observing services, we experimented with the publicly available audio, video and whiteboard MBONE clients (vat, nv, wb and others) in both point-to-point and multicast modes. While bandwidth is always a constraint on the Internet, it is less of a constraint to operations than many might think. These experiments were part of two new Internet-based observing services offered to KPNO observers beginning with the Fall 1995 semester: a remote observing station and an automatic FTP data queue. The remote observing station seeks to duplicate the KPNO IRAF/ICE observing environment on a workstation at the observer's home institution. The automatic FTP queue is intended to support those observing programs that require quick transport of data back to the home institution, for instance for near-real-time reductions to aid in observing tactics. We also discuss the early operational results of these services.
NASA Astrophysics Data System (ADS)
Li, Ze; Zhang, Min; Wang, Danshi; Cui, Yue
2017-09-01
We propose a flexible and reconfigurable wavelength-division multiplexing (WDM) multicast scheme supporting downstream emergency multicast communication for WDM optical access network (WDM-OAN) via a multicast module (MM) based on four-wave mixing (FWM) in a semiconductor optical amplifier. It serves as an emergency measure to dispose of the burst, large bandwidth, and real-time multicast service with fast service provisioning and high resource efficiency. It also plays the role of physical backup in cases of big data migration or network disaster caused by invalid lasers or modulator failures. It provides convenient and reliable multicast service and emergency protection for WDM-OAN without modifying WDM-OAN structure. The strategies of an MM setting at the optical line terminal and remote node are discussed to apply this scheme to passive optical networks and active optical networks, respectively. Utilizing the proposed scheme, we demonstrate a proof-of-concept experiment in which one-to-six/eight 10-Gbps nonreturn-to-zero-differential phase-shift keying WDM multicasts in both strategies are successfully transmitted over single-mode fiber of 20.2 km. One-to-many reconfigurable WDM multicasts dealing with higher data rate and other modulation formats of multicast service are possible through the proposed scheme. It can be applied to different WDM access technologies, e.g., time-wavelength-division multiplexing-OAN and coherent WDM-OAN, and upgraded smoothly.
Degree-constrained multicast routing for multimedia communications
NASA Astrophysics Data System (ADS)
Wang, Yanlin; Sun, Yugeng; Li, Guidan
2005-02-01
Multicast services have been increasingly used by many multimedia applications. As one of the key techniques supporting multimedia applications, well-designed multicast routing algorithms are critical to network performance. When switch nodes in a network have different multicast capabilities, the multicast routing problem is modeled as the degree-constrained Steiner problem. We present two heuristic algorithms, named BMSTA and BSPTA, for the degree-constrained case in multimedia communications. Both algorithms generate degree-constrained multicast trees with bandwidth and end-to-end delay bounds. Simulations over random networks were carried out to compare the performance of the two proposed algorithms. Experimental results show that the proposed algorithms have advantages in traffic load balancing, which can avoid link blocking and enhance network performance efficiently. BMSTA is better than BSPTA at finding unsaturated links and/or unsaturated nodes when generating multicast trees. The performance of BMSTA is affected by the variation of the degree constraints.
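A minimal greedy sketch of degree-constrained tree construction. This is not BMSTA or BSPTA, whose internals the abstract does not specify; it only illustrates the core constraint: attachments that would exceed a node's degree cap are skipped.

```python
# Greedy degree-constrained multicast tree sketch (illustrative heuristic).
def degree_constrained_tree(edges, source, members, max_degree):
    """edges: list of (u, v, cost); grows a tree from `source` to all `members`."""
    cost = {(u, v): c for u, v, c in edges}
    cost.update({(v, u): c for u, v, c in edges})
    tree, degree = [], {source: 0}
    while not all(m in degree for m in members):
        # cheapest edge from the tree to an uncovered node, degree permitting
        best = min(((u, v) for (u, v) in cost
                    if u in degree and v not in degree
                    and degree[u] < max_degree),
                   key=lambda e: cost[e])
        u, v = best
        tree.append((u, v, cost[best]))
        degree[u] += 1          # attaching a child consumes one degree slot
        degree[v] = 1
    return tree

edges = [("s", "a", 1), ("s", "b", 2), ("a", "c", 1), ("b", "d", 1)]
print(degree_constrained_tree(edges, "s", ["c", "d"], max_degree=2))
# → [('s', 'a', 1), ('a', 'c', 1), ('s', 'b', 2), ('b', 'd', 1)]
```

The degree check is what a switch's limited multicast fan-out imposes in practice; a real heuristic would additionally enforce the bandwidth and end-to-end delay bounds the paper mentions.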
A Stateful Multicast Access Control Mechanism for Future Metro-Area-Networks.
ERIC Educational Resources Information Center
Sun, Wei-qiang; Li, Jin-sheng; Hong, Pei-lin
2003-01-01
Multicasting is a necessity for a broadband metro-area-network; however security problems exist with current multicast protocols. A stateful multicast access control mechanism, based on MAPE, is proposed. The architecture of MAPE is discussed, as well as the states maintained and messages exchanged. The scheme is flexible and scalable. (Author/AEF)
Digital Multicasting of Multiple Audio Streams
NASA Technical Reports Server (NTRS)
Macha, Mitchell; Bullock, John
2007-01-01
The Mission Control Center Voice Over Internet Protocol (MCC VOIP) system (see figure) comprises hardware and software that effect simultaneous, nearly real-time transmission of as many as 14 different audio streams to authorized listeners via the MCC intranet and/or the Internet. The original version of the MCC VOIP system was conceived to enable flight-support personnel located in offices outside a spacecraft mission control center to monitor audio loops within the mission control center. Different versions of the MCC VOIP system could be used for a variety of public and commercial purposes - for example, to enable members of the general public to monitor one or more NASA audio streams through their home computers, to enable air-traffic supervisors to monitor communication between airline pilots and air-traffic controllers in training, and to monitor conferences among brokers in a stock exchange. At the transmitting end, the audio-distribution process begins with feeding the audio signals to analog-to-digital converters. The resulting digital streams are sent through the MCC intranet, using a user datagram protocol (UDP), to a server that converts them to encrypted data packets. The encrypted data packets are then routed to the personal computers of authorized users by use of multicasting techniques. The total data-processing load on the portion of the system upstream of and including the encryption server is the total load imposed by all of the audio streams being encoded, regardless of the number of listeners or the number of streams being monitored concurrently by the listeners. The personal computer of a user authorized to listen is equipped with special-purpose MCC audio-player software. When the user launches the program, the user is prompted to provide identification and a password.
In one of two access-control provisions, the program is hard-coded to validate the user's identity and password against a list maintained on a domain-controller computer at the MCC. In the other access-control provision, the program verifies that the user is authorized to have access to the audio streams. Once both access-control checks are completed, the audio software presents a graphical display that includes audio-stream-selection buttons and volume-control sliders. The user can select all or any subset of the available audio streams and can adjust the volume of each stream independently of that of the other streams. The audio-player program spawns a "read" process for the selected stream(s). The spawned process sends, to the router(s), a "multicast-join" request for the selected streams. The router(s) respond to the request by sending the encrypted multicast packets to the spawned process. The spawned process receives the encrypted multicast packets and sends a decryption packet to audio-driver software. As the volume or muting features are changed by the user, interrupts are sent to the spawned process to change the corresponding attributes sent to the audio-driver software. The total latency of this system - that is, the total time from the origination of the audio signals to the generation of sound at a listener's computer - lies between four and six seconds.
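The "multicast-join" step corresponds to the standard `IP_ADD_MEMBERSHIP` socket option, which triggers an IGMP join toward the routers. The group address and port below are illustrative, not those of the MCC system:

```python
import socket
import struct

# Sketch of a receiver joining a multicast group; 239.1.2.3 is an
# illustrative administratively-scoped group, not an MCC address.
GROUP, PORT = "239.1.2.3", 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# mreq = 4-byte group address + 4-byte local interface (INADDR_ANY)
mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                   socket.inet_aton("0.0.0.0"))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    print("joined", GROUP)        # sock.recvfrom(2048) would now see packets
except OSError as exc:
    # Joining can fail on hosts with no multicast-capable interface.
    print("join failed:", exc)
finally:
    sock.close()
```

After the join, the kernel delivers the group's UDP packets to the bound socket, which matches the abstract's description of the spawned "read" process receiving the encrypted multicast packets.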
A proposed group management scheme for XTP multicast
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Weaver, Alfred C.
1990-01-01
The purpose of a group management scheme is to enable its associated transfer layer protocol to be responsive to user determined reliability requirements for multicasting. Group management (GM) must assist the client process in coordinating multicast group membership, allow the user to express the subset of the multicast group that a particular multicast distribution must reach in order to be successful (reliable), and provide the transfer layer protocol with the group membership information necessary to guarantee delivery to this subset. GM provides services and mechanisms that respond to the need of the client process or process level management protocols to coordinate, modify, and determine attributes of the multicast group, especially membership. XTP GM provides a link between process groups and their multicast groups by maintaining a group membership database that identifies members in a name space understood by the underlying transfer layer protocol. Other attributes of the multicast group useful to both the client process and the data transfer protocol may be stored in the database. Examples include the relative dispersion, most recent update, and default delivery parameters of a group.
Optical multicast system for data center networks.
Samadi, Payman; Gupta, Varun; Xu, Junjie; Wang, Howard; Zussman, Gil; Bergman, Keren
2015-08-24
We present the design and experimental evaluation of an Optical Multicast System for Data Center Networks, a hardware-software system architecture that uniquely integrates passive optical splitters in a hybrid network architecture for faster and simpler delivery of multicast traffic flows. An application-driven control plane manages the integrated optical and electronic switched traffic routing in the data plane layer. The control plane includes a resource allocation algorithm to optimally assign optical splitters to the flows. The hardware architecture is built on a hybrid network with both Electronic Packet Switching (EPS) and Optical Circuit Switching (OCS) networks to aggregate Top-of-Rack switches. The OCS also serves as the substrate connecting the splitters to the optical network. The optical multicast system implementation requires only commodity optical components. We built a prototype and developed a simulation environment to evaluate the performance of the system for bulk multicasting. Experimental and numerical results show simultaneous delivery of multicast flows to all receivers with steady throughput. Compared to IP multicast, its electronic counterpart, optical multicast operates with less protocol complexity and reduced energy consumption. Compared to peer-to-peer multicast methods, it achieves at minimum an order of magnitude higher throughput for flows under 250 MB, with significantly less connection overhead. Furthermore, for delivering 20 TB of data containing only 15% multicast flows, it reduces the total delivery energy consumption by 50% and improves latency by 55% compared to a data center with a sole non-blocking EPS network.
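A hypothetical greedy version of the splitter-to-flow assignment. The paper formulates this as an optimization problem; the sketch below only illustrates the underlying matching of splitter fan-out to flow size:

```python
# Greedy splitter assignment sketch (illustrative; not the paper's algorithm):
# give each optical splitter to the largest pending multicast flow it can fit.
def assign_splitters(flows, splitters):
    """flows: {name: fanout (number of receivers)}; splitters: port counts."""
    assignment = {}
    for ports in sorted(splitters, reverse=True):
        candidates = [(f, n) for f, n in flows.items()
                      if f not in assignment and n <= ports]
        if candidates:
            flow = max(candidates, key=lambda x: x[1])[0]
            assignment[flow] = ports          # flow served optically
    return assignment                         # unassigned flows fall back to EPS

flows = {"A": 12, "B": 4, "C": 7}
print(assign_splitters(flows, splitters=[16, 8]))  # → {'A': 16, 'C': 8}
```

Flows left unassigned (here "B") would be carried by the electronic packet-switched network, which is the fallback a hybrid EPS/OCS design provides by construction.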
A novel WDM passive optical network architecture supporting two independent multicast data streams
NASA Astrophysics Data System (ADS)
Qiu, Yang; Chan, Chun-Kit
2012-01-01
We propose a novel scheme to perform optical multicast overlay of two independent multicast data streams on a wavelength-division-multiplexed (WDM) passive optical network. By controlling a sinusoidal clock signal and shifting the wavelength at the optical line terminal (OLT), the delivery of the two multicast data streams, carried on the generated optical tones, can be independently and flexibly controlled. Simultaneous transmission of 10-Gb/s unicast downstream and upstream data as well as two independent 10-Gb/s multicast data streams was successfully demonstrated.
Issues in designing transport layer multicast facilities
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Weaver, Alfred C.
1990-01-01
Multicasting denotes a facility in a communications system for providing efficient delivery from a message's source to some well-defined set of locations using a single logical address. While modern network hardware supports multidestination delivery, first generation Transport Layer protocols (e.g., the DoD Transmission Control Protocol (TCP) (15) and ISO TP-4 (41)) did not anticipate the changes over the past decade in underlying network hardware, transmission speeds, and communication patterns that have enabled and driven the interest in reliable multicast. Much recent research has focused on integrating the underlying hardware multicast capability with the reliable services of Transport Layer protocols. Here, we explore the communication issues surrounding the design of such a reliable multicast mechanism. Approaches and solutions from the literature are discussed, and four experimental Transport Layer protocols that incorporate reliable multicast are examined.
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Weaver, Alfred C.
1990-01-01
Multicast services needed for current distributed applications on LAN's fall generally into one of three categories: datagram, semi-reliable, and reliable. Transport layer multicast datagrams represent unreliable service in which the transmitting context 'fires and forgets'. XTP executes these semantics when the MULTI and NOERR mode bits are both set. Distributing sensor data and other applications in which application-level error recovery strategies are appropriate benefit from the efficiency in multidestination delivery offered by datagram service. Semi-reliable service refers to multicasting in which the control algorithms of the transport layer--error, flow, and rate control--are used in transferring the multicast distribution to the set of receiving contexts, the multicast group. The multicast defined in XTP provides semi-reliable service. Since, under a semi-reliable service, joining a multicast group means listening on the group address and entails no coordination with other members, a semi-reliable facility can be used for communication between a client and a server group as well as true peer-to-peer group communication. Resource location in a LAN is an important application domain. The term 'semi-reliable' refers to the fact that group membership changes go undetected. No attempt is made to assess the current membership of the group at any time--before, during, or after--the data transfer.
Issues in providing a reliable multicast facility
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Strayer, W. Timothy; Weaver, Alfred C.
1990-01-01
Issues involved in point-to-multipoint communication are presented and the literature for proposed solutions and approaches surveyed. Particular attention is focused on the ideas and implementations that align with the requirements of the environment of interest. The attributes of multicast receiver groups that might lead to useful classifications, what the functionality of a management scheme should be, and how the group management module can be implemented are examined. The services that multicasting facilities can offer are presented, followed by mechanisms within the communications protocol that implements these services. The metrics of interest when evaluating a reliable multicast facility are identified and applied to four transport layer protocols that incorporate reliable multicast.
Lee, Jong-Ho; Sohn, Illsoo; Kim, Yong-Hwa
2017-05-16
In this paper, we investigate simultaneous wireless power transfer and secure multicasting via cooperative decode-and-forward (DF) relays in the presence of multiple energy receivers and eavesdroppers. Two scenarios are considered under a total power budget: maximizing the minimum harvested energy among the energy receivers under a multicast secrecy rate constraint; and maximizing the multicast secrecy rate under a minimum harvested energy constraint. For both scenarios, we solve the transmit power allocation and relay beamformer design problems by using semidefinite relaxation and bisection technique. We present numerical results to analyze the energy harvesting and secure multicasting performances in cooperative DF relay networks.
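The bisection step that both scenarios rely on can be sketched generically. The toy feasibility test below stands in for the actual semidefinite-relaxation check, which decides whether a candidate secrecy rate (or harvested-energy target) is achievable under the power budget:

```python
# Generic bisection over a monotone feasibility test, the search pattern
# used for the secrecy-rate maximization. The inner SDP feasibility check
# is replaced here by a toy threshold; 1.375 is an arbitrary stand-in.
def bisect_max(feasible, lo, hi, tol=1e-6):
    """Largest x in [lo, hi] with feasible(x) True, assuming monotonicity."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feasible(mid):
            lo = mid        # mid achievable: search higher
        else:
            hi = mid        # mid infeasible: search lower
    return lo

best = bisect_max(lambda r: r <= 1.375, 0.0, 4.0)
print(round(best, 3))       # → 1.375
```

Each bisection iteration halves the search interval, so the number of (expensive) feasibility checks grows only logarithmically with the required rate resolution.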
Multicast routing for wavelength-routed WDM networks with dynamic membership
NASA Astrophysics Data System (ADS)
Huang, Nen-Fu; Liu, Te-Lung; Wang, Yao-Tzung; Li, Bo
2000-09-01
Future broadband networks must support integrated services and offer flexible bandwidth usage. In our previous work, we explored an optical link control layer on top of the optical layer that enables bandwidth-on-demand service directly over wavelength division multiplexed (WDM) networks. Today, more and more applications and services, such as video-conferencing software and Virtual LAN service, require multicast support from the underlying networks. Currently, it is difficult to provide wavelength multicast over optical switches without optical/electronic conversions, although the conversion incurs extra cost. In this paper, based on the proposed wavelength router architecture (equipped with ATM switches to offer O/E and E/O conversions when necessary), a dynamic multicast routing algorithm is proposed to furnish multicast services over WDM networks. The goal is to join a new group member to the multicast tree so that the cost, including the link cost and the optical/electronic conversion cost, is kept as low as possible. The effectiveness of the proposed wavelength router architecture as well as the dynamic multicast algorithm is evaluated by simulation.
Point-to-Point Multicast Communications Protocol
NASA Technical Reports Server (NTRS)
Byrd, Gregory T.; Nakano, Russell; Delagi, Bruce A.
1987-01-01
This paper describes a protocol to support point-to-point interprocessor communications with multicast. Dynamic, cut-through routing with local flow control is used to provide a high-throughput, low-latency communications path between processors. In addition, multicast transmissions are available, in which copies of a packet are sent to multiple destinations using common resources as much as possible. Special packet terminators and selective buffering are introduced to avoid deadlock during multicasts. A simulated implementation of the protocol is also described.
Internet-based computer technology on radiotherapy.
Chow, James C L
2017-01-01
Recent rapid development of Internet-based computer technologies has made possible many novel applications in radiation dose delivery. However, the pace of applying these new technologies in radiotherapy has hardly kept up, owing to the complex commissioning process and quality assurance protocols. Implementing novel Internet-based technology in radiotherapy requires corresponding design of the application's algorithms and infrastructure, setup of related clinical policies, purchase and development of software and hardware, computer programming and debugging, and national to international collaboration. Although such implementation processes are time consuming, some recent computer advancements in radiation dose delivery are still noticeable. In this review, we present the background and concepts of some recent Internet-based computer technologies such as cloud computing, big data processing and machine learning, followed by their potential applications in radiotherapy, such as treatment planning and dose delivery. We also discuss the current progress of these applications and their impact on radiotherapy, and explore and evaluate the expected benefits and challenges in implementation.
Qin, Jun; Lu, Guo-Wei; Sakamoto, Takahide; Akahane, Kouichi; Yamamoto, Naokatsu; Wang, Danshi; Wang, Cheng; Wang, Hongxiang; Zhang, Min; Kawanishi, Tetsuya; Ji, Yuefeng
2014-12-01
In this paper, we experimentally demonstrate simultaneous multichannel wavelength multicasting (MWM) and exclusive-OR logic gate multicasting (XOR-LGM) for three 10-Gbps non-return-to-zero differential phase-shift-keying (NRZ-DPSK) signals in a quantum-dot semiconductor optical amplifier (QD-SOA) by exploiting the four-wave mixing (FWM) process. No additional pump is needed in the scheme. Through the interaction of the three input 10-Gbps DPSK signal lights in the QD-SOA, each channel is successfully multicast to three wavelengths (1-to-3 for each channel, 3-to-9 MWM in total), and at the same time, three-output XOR-LGM is obtained at three different wavelengths. All newly generated channels exhibit a power penalty of less than 1.2 dB at a BER of 10^-9. Degenerate and non-degenerate FWM components are fully used in the experiment for data and logic multicasting.
MTP: An atomic multicast transport protocol
NASA Technical Reports Server (NTRS)
Freier, Alan O.; Marzullo, Keith
1990-01-01
The Multicast Transport Protocol (MTP), a reliable transport protocol that utilizes the multicast strategy of applicable lower-layer network architectures, is described. In addition to transporting data reliably and efficiently, MTP provides the client synchronization necessary for agreement on the receipt of data and on the joining of the group of communicants.
Debugging Fortran on a shared memory machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, T.R.; Padua, D.A.
1987-01-01
Debugging on a parallel processor is more difficult than debugging on a serial machine because errors in a parallel program may introduce nondeterminism. The approach to parallel debugging presented here attempts to reduce the problem of debugging on a parallel machine to that of debugging on a serial machine by automatically detecting nondeterminism. 20 refs., 6 figs.
NASA Astrophysics Data System (ADS)
Li, Xin; Zhang, Lu; Tang, Ying; Huang, Shanguo
2018-03-01
The light-tree-based optical multicasting (LT-OM) scheme provides a spectrum- and energy-efficient method to accommodate emerging multicast services. Some studies focus on survivability technologies for LTs against a fixed number of link failures, such as single-link failure. However, few studies involve failure probability constraints when building LTs. It is worth noting that each link of an LT plays a role of different importance under failure scenarios. When calculating the failure probability of an LT, the importance of every link should be considered. We design a link importance incorporated failure probability measuring solution (LIFPMS) for multicast LTs under the independent failure model and the shared risk link group failure model. Based on the LIFPMS, we put forward the minimum failure probability (MFP) problem for the LT-OM scheme. Heuristic approaches are developed to address the MFP problem in elastic optical networks. Numerical results show that the LIFPMS provides an accurate metric for calculating the failure probability of multicast LTs and enhances the reliability of the LT-OM scheme while accommodating multicast services.
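Under the independent-failure model, a destination is served only if every link on its source-destination path survives, so per-destination delivery probability is a product over path links. Link importance appears naturally: a link shared by more destination paths degrades more of the tree. The numbers below are illustrative, not from the paper:

```python
from math import prod

# Per-destination delivery probability of a light-tree under independent
# link failures. A link on many destination paths (here e1) matters most.
def delivery_prob(paths, p_fail):
    """paths: {dest: [links on its path]}; p_fail: {link: failure prob}."""
    return {d: prod(1 - p_fail[l] for l in links)
            for d, links in paths.items()}

paths = {"d1": ["e1", "e2"], "d2": ["e1", "e3"]}   # e1 shared by both paths
p = {"e1": 0.01, "e2": 0.02, "e3": 0.02}
probs = delivery_prob(paths, p)
print({d: round(v, 4) for d, v in probs.items()})  # → {'d1': 0.9702, 'd2': 0.9702}
```

Halving e1's failure probability would improve both destinations at once, while improving e2 helps only d1; that asymmetry is the intuition behind weighting links by importance when measuring the tree's overall failure probability.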
Mobility based key management technique for multicast security in mobile ad hoc networks.
Madhusudhanan, B; Chitra, S; Rajan, C
2015-01-01
In MANET multicasting, maintaining forward and backward secrecy results in an increased packet drop rate owing to mobility. Frequent rekeying causes large message overhead, which increases energy consumption and end-to-end delay. In particular, the prevailing group key management techniques cope poorly with frequent mobility and disconnections, so there is a need to design a multicast key management technique that overcomes these problems. In this paper, we propose a mobility-based key management technique for multicast security in MANETs. Initially, the nodes are categorized according to their stability index, which is estimated from link availability and mobility. A multicast tree is constructed such that every weak node has a strong parent node. A session-key-based encryption technique is utilized to transmit multicast data. The rekeying process is performed periodically by the initiator node, and the rekeying interval is fixed depending on the node category, so the technique greatly minimizes rekeying overhead. Simulation results show that our proposed approach reduces the packet drop rate and improves data confidentiality.
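The category-dependent rekeying interval can be sketched as follows; the stability formula, threshold, and doubling rule are illustrative assumptions (the abstract does not give the exact functions):

```python
def stability_index(link_availability, mobility):
    """Score a node by link availability (0..1) and mobility (0..1);
    the formula is an illustrative assumption, not the paper's."""
    return link_availability * (1.0 - mobility)

def classify(index, threshold=0.5):
    return 'strong' if index >= threshold else 'weak'

def rekey_interval(base_interval, category):
    # stable nodes tolerate longer intervals, cutting rekeying overhead;
    # weak (mobile) nodes rekey at the base rate
    return base_interval * 2 if category == 'strong' else base_interval
```

Tree construction would then attach each weak node under a strong parent, so session keys flow through stable links.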
Many-to-Many Multicast Routing Schemes under a Fixed Topology
Ding, Wei; Wang, Hongfa; Wei, Xuerui
2013-01-01
Many-to-many multicast routing can be extensively applied in computer and communication networks supporting various continuous multimedia applications. This paper focuses on the case where all users share a common communication channel while each user is both a sender and a receiver of messages in multicasting, as well as an end user. In this case, the multicast tree appears as a terminal Steiner tree (TeST). The problem of finding a TeST with a quality-of-service (QoS) optimization is frequently NP-hard. However, we find that a many-to-many multicast tree with QoS optimization can be computed exactly under a fixed topology. In this paper, we are concerned with three QoS optimization objectives for the multicast tree: minimum cost, minimum diameter, and maximum reliability. Each of the three optimization problems is considered in two versions, centralized and decentralized. This paper uses dynamic programming to devise exact algorithms for the centralized and decentralized versions of each optimization problem. PMID:23589706
NASA Astrophysics Data System (ADS)
Liao, Luhua; Li, Lemin; Wang, Sheng
2006-12-01
We investigate the protection approach for dynamic multicast traffic under shared risk link group (SRLG) constraints in meshed wavelength-division-multiplexing optical networks. We present a shared protection algorithm called dynamic segment shared protection for multicast traffic (DSSPM), which can dynamically adjust the link cost according to the current network state and can establish a primary light-tree as well as corresponding SRLG-disjoint backup segments for a dependable multicast connection. A backup segment can efficiently share the wavelength capacity of its working tree and the common resources of other backup segments based on SRLG-disjoint constraints. The simulation results show that DSSPM not only can protect the multicast sessions against a single-SRLG breakdown, but can make better use of the wavelength resources and also lower the network blocking probability.
NASA Astrophysics Data System (ADS)
Wu, Fei; Shao, Shihai; Tang, Youxi
2016-10-01
We consider simultaneous multicast downlink transmit and receive operations on the same frequency band, also known as full-duplex links, between an access point and mobile users. The problem of minimizing the total power of multicast transmit beamforming is considered from the viewpoint of ensuring the required suppression of near-field line-of-sight self-interference and guaranteeing a prescribed minimum signal-to-interference-plus-noise ratio (SINR) at each receiver of the multicast groups. Based on earlier results for multicast group beamforming, the joint problem is easily shown to be NP-hard. A semidefinite relaxation (SDR) technique with a linear-programming power adjustment method is proposed to solve the NP-hard problem. Simulation shows that the proposed method is feasible even when the local receive antenna in the near field and the mobile user in the far field are in the same direction.
An efficient group multicast routing for multimedia communication
NASA Astrophysics Data System (ADS)
Wang, Yanlin; Sun, Yugen; Yan, Xinfang
2004-04-01
Group multicasting is a communication mechanism whereby each member of a group sends messages to all the other members of the same group. Group multicast routing algorithms capable of satisfying the quality of service (QoS) requirements of multimedia applications are essential for high-speed networks. We present a heuristic algorithm for group multicast routing with an end-to-end delay constraint. Our algorithm generates source-specific routing trees for each member, which satisfy members' bandwidth and end-to-end delay requirements. Simulations over random networks were carried out to compare the proposed algorithm's performance with that of Low and Song's. The experimental results show that our algorithm performs better in terms of network cost and ability to construct feasible multicast trees for group members. Moreover, our algorithm achieves good performance in balancing traffic, which can avoid link blocking and enhance network behavior efficiently.
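The delay-constrained least-cost path search that such a heuristic needs can be sketched with non-dominated (cost, delay) labels. This is an illustrative label-setting sketch under assumed edge attributes, not the algorithm from the paper:

```python
import heapq

def constrained_path(graph, src, dst, max_delay):
    """Least-cost path from src to dst whose total delay stays within
    max_delay. graph maps a node to a list of (neighbor, cost, delay)
    edges. A small label-setting search over (cost, delay) labels; an
    illustrative sketch, not the algorithm from the paper."""
    frontier = [(0, 0, src, [src])]  # (cost, delay, node, path)
    labels = {}                      # node -> non-dominated (cost, delay) labels
    while frontier:
        cost, delay, node, path = heapq.heappop(frontier)
        if delay > max_delay:
            continue                 # violates the end-to-end delay bound
        if node == dst:
            return cost, delay, path
        seen = labels.setdefault(node, [])
        if any(c <= cost and d <= delay for c, d in seen):
            continue                 # dominated by an earlier label
        seen.append((cost, delay))
        for nxt, c, d in graph.get(node, []):
            heapq.heappush(frontier, (cost + c, delay + d, nxt, path + [nxt]))
    return None
```

Building a source-specific tree then amounts to merging such constrained paths from the source to each group member.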
Tracking Students' Cognitive Processes during Program Debugging--An Eye-Movement Approach
ERIC Educational Resources Information Center
Lin, Yu-Tzu; Wu, Cheng-Chih; Hou, Ting-Yun; Lin, Yu-Chih; Yang, Fang-Ying; Chang, Chia-Hu
2016-01-01
This study explores students' cognitive processes while debugging programs by using an eye tracker. Students' eye movements during debugging were recorded by an eye tracker to investigate whether and how high- and low-performance students act differently during debugging. Thirty-eight computer science undergraduates were asked to debug two C…
Debugging: Finding, Fixing and Flailing, a Multi-Institutional Study of Novice Debuggers
ERIC Educational Resources Information Center
Fitzgerald, Sue; Lewandowski, Gary; McCauley, Renee; Murphy, Laurie; Simon, Beth; Thomas, Lynda; Zander, Carol
2008-01-01
Debugging is often difficult and frustrating for novices. Yet because students typically debug outside the classroom and often in isolation, instructors rarely have the opportunity to closely observe students while they debug. This paper describes the details of an exploratory study of the debugging skills and behaviors of contemporary novice Java…
Group-multicast capable optical virtual private ring with contention avoidance
NASA Astrophysics Data System (ADS)
Peng, Yunfeng; Du, Shu; Long, Keping
2008-11-01
A ring based optical virtual private network (OVPN) employing contention sensing and avoidance is proposed to deliver multiple-to-multiple group-multicast traffic. The network architecture is presented and its operation principles as well as performance are investigated. The main contribution of this article is the presentation of an innovative group-multicast capable OVPN architecture with technologies available today.
A high performance totally ordered multicast protocol
NASA Technical Reports Server (NTRS)
Montgomery, Todd; Whetten, Brian; Kaplan, Simon
1995-01-01
This paper presents the Reliable Multicast Protocol (RMP). RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service such as IP Multicasting. RMP is fully and symmetrically distributed so that no site bears an undue portion of the communication load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority-resilient, and totally resilient atomic delivery. These QoS guarantees are selectable on a per-packet basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, an implicit naming service, mutually exclusive handlers for messages, and mutually exclusive locks. It has commonly been held that a large performance penalty must be paid in order to implement total ordering; RMP refutes this. On SparcStation 10's on a 1250 KB/sec Ethernet, RMP provides totally ordered packet delivery to one destination at 842 KB/sec throughput and with 3.1 ms packet latency. The performance stays roughly constant independent of the number of destinations. For two or more destinations on a LAN, RMP provides higher throughput than any protocol that does not use multicast or broadcast.
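The essence of total ordering, every message receives one global sequence number and receivers hold back gaps, can be sketched with a fixed sequencer. This is a deliberate simplification: RMP distributes the ordering role by rotating it among sites rather than fixing a single sequencer:

```python
class Sequencer:
    """Stamps every multicast message with a global sequence number so all
    receivers deliver in one agreed order. A minimal sketch; RMP rotates
    this role among sites instead of centralizing it."""
    def __init__(self):
        self.next_seq = 0

    def order(self, msg):
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        return seq, msg

class OrderedReceiver:
    """Buffers out-of-order arrivals and delivers strictly by sequence number."""
    def __init__(self):
        self.expected = 0
        self.buffer = {}
        self.delivered = []

    def on_message(self, seq, msg):
        self.buffer[seq] = msg
        while self.expected in self.buffer:   # deliver, holding back gaps
            self.delivered.append(self.buffer.pop(self.expected))
            self.expected += 1
```

Because every receiver applies the same sequence numbers, all sites end up with identical delivery orders even when the network reorders packets.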
NASA Astrophysics Data System (ADS)
Woradit, Kampol; Guyot, Matthieu; Vanichchanunt, Pisit; Saengudomlert, Poompat; Wuttisittikulkij, Lunchakorn
While the problem of multicast routing and wavelength assignment (MC-RWA) in optical wavelength division multiplexing (WDM) networks has been investigated, relatively few researchers have considered network survivability for multicasting. This paper provides an optimization framework to solve the MC-RWA problem in a multi-fiber WDM network that can recover from a single-link failure with shared protection. Using the light-tree (LT) concept to support multicast sessions, we consider two protection strategies that try to reduce service disruptions after a link failure. The first strategy, called light-tree reconfiguration (LTR) protection, computes a new multicast LT for each session affected by the failure. The second strategy, called optical branch reconfiguration (OBR) protection, tries to restore a logical connection between two adjacent multicast members disconnected by the failure. To solve the MC-RWA problem optimally, we propose an integer linear programming (ILP) formulation that minimizes the total number of fibers required for both working and backup traffic. The ILP formulation takes into account joint routing of working and backup traffic, the wavelength continuity constraint, and the limited splitting degree of multicast-capable optical cross-connects (MC-OXCs). After showing some numerical results for optimal solutions, we propose heuristic algorithms that reduce the computational complexity and make the problem solvable for large networks. Numerical results suggest that the proposed heuristic yields efficient solutions compared to optimal solutions obtained from exact optimization.
Understanding Problem Solving Behavior of 6-8 Graders in a Debugging Game
ERIC Educational Resources Information Center
Liu, Zhongxiu; Zhi, Rui; Hicks, Andrew; Barnes, Tiffany
2017-01-01
Debugging is an overlooked component in K-12 computational thinking education. Few K-12 programming environments are designed to teach debugging, and most debugging research was conducted on college-aged students. In this paper, we presented debugging exercises to 6th-8th grade students and analyzed their problem solving behaviors in a…
Debugging classification and anti-debugging strategies
NASA Astrophysics Data System (ADS)
Gao, Shang; Lin, Qian; Xia, Mingyuan; Yu, Miao; Qi, Zhengwei; Guan, Haibing
2011-12-01
Debugging, albeit useful for software development, is also a double-edged sword, since it can be exploited by malicious attackers. This paper analyzes the prevailing debuggers and classifies them into 4 categories based on their debugging mechanisms. Furthermore, on the opposing side, we list 13 typical anti-debugging strategies adopted in Windows. These methods intercept specific execution points that expose the diagnostic behavior of debuggers.
NASA Technical Reports Server (NTRS)
Brooks, David E.; Gassman, Holly; Beering, Dave R.; Welch, Arun; Hoder, Douglas J.; Ivancic, William D.
1999-01-01
Transmission Control Protocol (TCP) is the underlying protocol used within the Internet for reliable information transfer. As such, there is great interest in having all implementations of TCP interoperate efficiently. This is particularly important for links exhibiting long bandwidth-delay products. The tools exist to perform TCP analysis at low rates and low delays. However, for extremely high-rate and long-delay links such as 622 Mbps over geosynchronous satellites, new tools and testing techniques are required. This paper describes the tools and techniques used to analyze and debug various TCP implementations over high-speed, long-delay links.
Performance investigation of optical multicast overlay system using orthogonal modulation format
NASA Astrophysics Data System (ADS)
Singh, Simranjit; Singh, Sukhbir; Kaur, Ramandeep; Kaler, R. S.
2015-03-01
We propose a bandwidth-efficient wavelength division multiplexed passive optical network (WDM-PON) to simultaneously transmit 60 Gb/s unicast and 10 Gb/s multicast services with a 10 Gb/s upstream. The differential phase shift keying (DPSK) multicast signal is superimposed onto multiplexed non-return-to-zero/polarization shift keying (NRZ/PolSK) orthogonally modulated data signals. Upstream amplitude shift keying (ASK) signals are formed without the use of any additional light source and superimposed onto the received unicast NRZ/PolSK signal before being transmitted back to the optical line terminal (OLT). We also investigated the proposed WDM-PON system for variable optical input power and single-mode fiber transmission distance in multicast-enabled and multicast-disabled modes. The measured quality factor for all unicast and multicast signals is in the acceptable range (>6). The original contribution of this paper is to propose a bandwidth-efficient WDM-PON system that could be deployed even in high-speed scenarios at reduced channel spacing and is expected to be more technically viable due to the use of optical orthogonal modulation formats.
Scalable Multicast Protocols for Overlapped Groups in Broker-Based Sensor Networks
NASA Astrophysics Data System (ADS)
Kim, Chayoung; Ahn, Jinho
In sensor networks, there are many overlapped multicast groups, because many subscribers, each with potentially varying specific interests, query every event to sensors/publishers. Gossip-based communication protocols are promising as a potential solution providing scalability in the Publish/Subscribe (P/S) paradigm in sensor networks. Moreover, despite the importance of both guaranteeing message delivery order and supporting overlapped multicast groups in sensor or P2P networks, little research exists on gossip-based protocols satisfying all these requirements. In this paper, we present two versions of protocols guaranteeing causally ordered delivery for overlapped multicast groups. One is based on sensor-brokers as delegates, and the other is based on local views and delegates representing subscriber subgroups. In the sensor-broker-based protocol, sensor-brokers organize overlapped multicast networks according to subscribers' interests. Message delivery order is guaranteed consistently, and all multicast messages are delivered to overlapped subscribers using gossip-based protocols via the sensor-broker. These features make the sensor-broker-based protocol significantly more scalable than protocols based on hierarchical membership lists of dedicated groups, such as traditional committee protocols. The subscriber-delegate-based protocol is stronger than fully decentralized protocols that guarantee causally ordered delivery based only on local views, because message delivery order is guaranteed consistently by all corresponding members of the groups, including delegates. This makes the subscriber-delegate protocol a hybrid approach that improves the inherent scalability of multicast through gossip-based techniques in all communications.
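The causal-ordering rule at the heart of such protocols can be sketched with vector clocks: a message is deliverable once it is the next expected message from its sender and all of its causal predecessors have been delivered locally. This is a minimal sketch of the ordering rule only; the paper's protocols layer gossip dissemination and broker/delegate roles on top:

```python
class CausalReceiver:
    """Delivers multicast messages in causal order using vector clocks.
    A minimal sketch of the ordering rule only; the protocols in the
    paper add gossip dissemination and broker/delegate roles on top."""

    def __init__(self, n_senders):
        self.clock = [0] * n_senders   # messages delivered per sender
        self.pending = []              # messages held back for predecessors
        self.delivered = []

    def _deliverable(self, sender, vc):
        # next in sequence from its sender, and every causal predecessor
        # from other senders has already been delivered here
        if vc[sender] != self.clock[sender] + 1:
            return False
        return all(vc[k] <= self.clock[k]
                   for k in range(len(vc)) if k != sender)

    def receive(self, sender, vc, msg):
        self.pending.append((sender, vc, msg))
        progress = True
        while progress:
            progress = False
            for item in list(self.pending):
                s, v, m = item
                if self._deliverable(s, v):
                    self.pending.remove(item)
                    self.delivered.append(m)
                    self.clock[s] += 1
                    progress = True
```

A reply that arrives before the post it answers is simply buffered until the post is delivered, so no subscriber ever sees effects before causes.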
Multimedia And Internetworking Architecture Infrastructure On Interactive E-Learning System
NASA Astrophysics Data System (ADS)
Indah, K. A. T.; Sukarata, G.
2018-01-01
Interactive e-learning is a distance learning method that uses information technology, electronic systems, or computers as a means of learning, so that teaching and learning take place without direct face-to-face contact between teacher and student. A strong dependence on emerging technologies greatly influences the way the architecture is designed to produce a powerful interactive e-learning network. In this paper we analyze an architecture model in which learning can be done interactively, involving many participants (N-way synchronized distance learning), using video conferencing technology. A broadband Internet network is used, with multicast techniques as a method for keeping bandwidth usage efficient.
Debugging and Performance Analysis Software Tools for Peregrine System |
High-Performance Computing | NREL. Learn about debugging and performance analysis software tools, such as Allinea, available to use with the Peregrine system.
Performance Evaluation of Peer-to-Peer Progressive Download in Broadband Access Networks
NASA Astrophysics Data System (ADS)
Shibuya, Megumi; Ogishi, Tomohiko; Yamamoto, Shu
P2P (Peer-to-Peer) file sharing architectures have scalable and cost-effective features. Hence, the application of P2P architectures to media streaming is attractive and expected to be an alternative to current video streaming using IP multicast or content delivery systems, because the current systems require expensive network infrastructure and large-scale centralized cache storage systems. In this paper, we investigate P2P progressive download enabling Internet video streaming services. We demonstrated the capability of P2P progressive download both in a laboratory test network and in the Internet. Through the experiments, we clarified the contribution of FTTH links to P2P progressive download in heterogeneous access networks consisting of FTTH and ADSL links. We analyzed the cause of some download performance degradation that occurred in the experiment and discussed effective methods to provide video streaming service using P2P progressive download in current heterogeneous networks.
Authenticated IGMP for Controlling Access to Multicast Distribution Tree
NASA Astrophysics Data System (ADS)
Park, Chang-Seop; Kang, Hyun-Sun
A receiver access control scheme is proposed to protect the multicast distribution tree from DoS attack induced by unauthorized use of IGMP, by extending the security-related functionality of IGMP. Based on a specific network and business model adopted for commercial deployment of IP multicast applications, a key management scheme is also presented for bootstrapping the proposed access control as well as accounting and billing for CP (Content Provider), NSP (Network Service Provider), and group members.
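One minimal way to realize receiver access control of this kind is to require each membership report to carry a message authentication code under a key obtained during bootstrapping, so the router can drop joins from hosts that never received the key. The field layout and key handling below are illustrative assumptions, not the paper's extended IGMP message format:

```python
import hashlib
import hmac

def sign_report(group_key, receiver_id, group_addr, nonce):
    """Attach a MAC to a membership report so the router can reject joins
    from hosts that never obtained the group key. The field layout and
    key handling are illustrative assumptions, not the paper's format."""
    payload = f"{receiver_id}|{group_addr}|{nonce}".encode()
    mac = hmac.new(group_key, payload, hashlib.sha256).hexdigest()
    return payload, mac

def verify_report(group_key, payload, mac):
    expected = hmac.new(group_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mac)  # constant-time comparison
```

The nonce guards against replaying a captured report; unauthorized joins fail verification before they can graft a branch onto the distribution tree.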
Mobility based multicast routing in wireless mesh networks
NASA Astrophysics Data System (ADS)
Jain, Sanjeev; Tripathi, Vijay S.; Tiwari, Sudarshan
2013-01-01
There exist two fundamental approaches to multicast routing, namely minimum cost trees (MCTs) and shortest path trees (SPTs). The minimum cost tree is one which connects receivers and sources with a minimum number of transmissions (MNTs); the MNT approach is generally used for energy-constrained sensor and mobile ad hoc networks. In this paper we consider node mobility and present a simulation-based comparison of shortest path trees (SPTs), minimum Steiner trees (MSTs), and minimum-number-of-transmission trees in wireless mesh networks, using performance metrics such as end-to-end delay, average jitter, throughput, packet delivery ratio, and average unicast packet delivery ratio. We also evaluate multicast performance in small and large wireless mesh networks. For small networks, we find that when the traffic load is moderate or high, SPTs outperform MSTs and MNTs in all cases; SPTs have the lowest end-to-end delay and average jitter in almost all cases. For large networks, MSTs provide the minimum total edge cost and minimum number of transmissions. We also find one drawback of SPTs: when the group size is large and the multicast sending rate is high, SPTs cause more packet losses to other flows than MCTs.
Internet Tomography in Support of Internet and Network Simulation and Emulation Modelling
NASA Astrophysics Data System (ADS)
Moloisane, A.; Ganchev, I.; O'Droma, M.
This paper addresses Internet performance measurement data extracted through Internet tomography techniques and metrics, and how it may be used to enhance the capacity of network simulation and emulation modelling. The advantages of network simulation and emulation as a means to aid design and develop the component networks, which make up the Internet and are fundamental to its ongoing evolution, are highlighted. The Internet's rapid growth has spurred development of new protocols and algorithms to meet changing operational requirements such as security, multicast delivery, mobile networking, policy management, and quality of service (QoS) support. Both the development and evaluation of these operational tools require answering many design and operational questions. Creating the technical support required by network engineers and managers in their efforts to seek answers to these questions is in itself a major challenge. Within the Internet, the number and range of services supported continues to grow exponentially, from legacy and client/server applications to VoIP, multimedia streaming services and interactive multimedia services. Services have their own distinctive requirements and idiosyncrasies. They respond differently to bandwidth limitations, latency and jitter problems. They generate different types of “conversations” between end-user terminals, back-end resources and middle-tier servers. To add to the complexity, each new or enhanced service introduced onto the network contends for available bandwidth with every other service. To ensure that networking products and resources being designed and developed can handle the diverse conditions encountered in real Internet environments, network simulation and emulation modelling is a valuable tool, and is becoming a critical element, in networking product and application design and development.
The better these laboratory tools reflect real-world environment and conditions the more helpful to designers they will be.
The Verification-based Analysis of Reliable Multicast Protocol
NASA Technical Reports Server (NTRS)
Wu, Yunqing
1996-01-01
Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP Multicasting. In this paper, we develop formal models for RMP using existing automatic verification systems, and perform verification-based analysis on the formal RMP specifications. We also use the formal models of the RMP specifications to generate a test suite for conformance testing of the RMP implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress between the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding of the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.
Combining Static Model Checking with Dynamic Enforcement Using the Statecall Policy Language
NASA Astrophysics Data System (ADS)
Madhavapeddy, Anil
Internet protocols encapsulate a significant amount of state, making implementing the host software complex. In this paper, we define the Statecall Policy Language (SPL) which provides a usable middle ground between ad-hoc coding and formal reasoning. It enables programmers to embed automata in their code which can be statically model-checked using SPIN and dynamically enforced. The performance overheads are minimal, and the automata also provide higher-level debugging capabilities. We also describe some practical uses of SPL by describing the automata used in an SSH server written entirely in OCaml/SPL.
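The dynamic-enforcement half of this approach amounts to checking each protocol event against an allowed-transition table at runtime and raising an error on any illegal sequence. Below is a minimal sketch in Python rather than OCaml/SPL; the state and event names are invented for illustration, not taken from the SSH server:

```python
class ProtocolAutomaton:
    """Runtime enforcement of an allowed-transition table over protocol
    events, in the spirit of SPL's dynamic checks. State and event names
    below are invented for illustration, not taken from the OCaml SSH
    server described in the paper."""

    def __init__(self, transitions, start):
        self.transitions = transitions   # {(state, event): next_state}
        self.state = start

    def fire(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise RuntimeError(
                f"illegal event {event!r} in state {self.state!r}")
        self.state = self.transitions[key]
```

The same transition table can be fed to a model checker offline, which is what lets static verification and runtime enforcement share one automaton definition.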
The reliable multicast protocol application programming interface
NASA Technical Reports Server (NTRS)
Montgomery, Todd; Whetten, Brian
1995-01-01
The Application Programming Interface for the Berkeley/WVU implementation of the Reliable Multicast Protocol is described. This transport layer protocol is implemented as a user library that applications and software buses link against.
Multicast backup reprovisioning problem for Hamiltonian cycle-based protection on WDM networks
NASA Astrophysics Data System (ADS)
Din, Der-Rong; Huang, Jen-Shen
2014-03-01
As networks grow in size and complexity, the chance and the impact of failures increase dramatically. Pre-allocated backup resources cannot provide a 100% protection guarantee when continuous failures occur in a network. In this paper, the multicast backup re-provisioning problem (MBRP) for Hamiltonian cycle (HC)-based protection on WDM networks in the link-failure case is studied. We focus on how to recover the protecting capabilities of the Hamiltonian cycle against subsequent link failures on WDM networks for multicast transmissions, after recovering the multicast trees affected by the previous link failure. Since this problem is hard, an algorithm that combines several heuristics with a genetic algorithm (GA) is proposed to solve it. Simulation results for the proposed method are also given. Experimental results indicate that the proposed algorithm can solve this problem efficiently.
The specification-based validation of reliable multicast protocol: Problem Report. M.S. Thesis
NASA Technical Reports Server (NTRS)
Wu, Yunqing
1995-01-01
Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP multicasting. In this report, we develop formal models for RMP using existing automated verification systems, and perform validation on the formal RMP specifications. The validation analysis helped identify some minor specification and design problems. We also use the formal models of RMP to generate a test suite for conformance testing of the implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress of the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding of the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.
The multidriver: A reliable multicast service using the Xpress Transfer Protocol
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Fenton, John C.; Weaver, Alfred C.
1990-01-01
A reliable multicast facility extends traditional point-to-point virtual circuit reliability to one-to-many communication. Such services can provide more efficient use of network resources, a powerful distributed name binding capability, and reduced latency in multidestination message delivery. These benefits will be especially valuable in real-time environments where reliable multicast can enable new applications and increase the availability and the reliability of data and services. We present a unique multicast service that exploits features in the next-generation, real-time transfer layer protocol, the Xpress Transfer Protocol (XTP). In its reliable mode, the service offers error, flow, and rate-controlled multidestination delivery of arbitrary-sized messages, with provision for the coordination of reliable reverse channels. Performance measurements on a single-segment Proteon ProNET-4 4 Mbps 802.5 token ring with heterogeneous nodes are discussed.
Internet Voice Distribution System (IVoDS) Utilization in Remote Payload Operations
NASA Technical Reports Server (NTRS)
Best, Susan; Bradford, Bob; Chamberlain, Jim; Nichols, Kelvin; Bailey, Darrell (Technical Monitor)
2002-01-01
Due to limited crew availability to support science and the large number of experiments to be operated simultaneously, telescience is key to a successful International Space Station (ISS) science program. Crew, operations personnel at NASA centers, and researchers at universities and companies around the world must work closely together to perform scientific experiments on board the ISS. NASA has initiated use of Voice over Internet Protocol (VoIP) to supplement the existing HVoDS mission voice communications system used by researchers. The Internet Voice Distribution System (IVoDS) connects researchers to mission support "loops" or conferences via Internet Protocol networks such as the high-speed Internet2. Researchers use IVoDS software on personal computers to talk with operations personnel at NASA centers. IVoDS also has the capability, if authorized, to allow researchers to communicate with the ISS crew during experiment operations. IVoDS was developed by Marshall Space Flight Center with contractors A2 Technology, Inc., FVC, Lockheed-Martin, and VoIP Group. IVoDS is currently undergoing field testing, with full deployment for up to 50 simultaneous users expected in 2002. Research is currently being performed to take full advantage of the digital world, the personal computer and Internet Protocol networks, to qualitatively enhance communications among ISS operations personnel. In addition to the current voice capability, video and data-sharing capabilities are being investigated. Major obstacles being addressed include network bandwidth capacity and strict security requirements. Techniques being investigated to reduce and overcome these obstacles include emerging audio-video protocols and network technology, including multicast and quality of service.
Lu, Guo-Wei; Qin, Jun; Wang, Hongxiang; Ji, XuYuefeng; Sharif, Gazi Mohammad; Yamaguchi, Shigeru
2016-02-08
Optical logic gates, especially the exclusive-or (XOR) gate, play an important role in accomplishing photonic computing and various network functionalities in future optical networks. On the other hand, optical multicast is another indispensable functionality for efficiently delivering information in optical networks. In this paper, for the first time, we propose and experimentally demonstrate a flexible optical three-input XOR gate scheme for multiple input phase-modulated signals with a 1-to-2 multicast functionality for each XOR operation, using the four-wave mixing (FWM) effect in a single piece of highly nonlinear fiber (HNLF). Through FWM in the HNLF, all of the possible XOR operations among input signals can be realized simultaneously by sharing a single piece of HNLF. By selecting the obtained XOR components using a subsequent wavelength-selective component, the number of XOR gates and the participating light in XOR operations can be flexibly configured. The reconfigurability of the proposed XOR gate and the integration of the optical logic gate and multicast functions in a single device offer flexibility in network design and improve network efficiency. We experimentally demonstrate a flexible 3-input XOR gate for four 10-Gbaud binary phase-shift keying signals with a multicast scale of 2. Error-free operation for the obtained XOR results is achieved. Potential application of the integrated XOR and multicast function in network coding is also discussed.
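The network-coding application the authors mention rests on XOR's self-inverse property: a node that multicasts the XOR of several packets lets any receiver already holding all but one of them recover the missing packet. A byte-wise software model of the operation the optical gate performs on symbols:

```python
def xor_packets(*packets):
    """Byte-wise XOR of equal-length packets; a software model of the
    operation the optical gate performs on phase-modulated symbols."""
    assert len({len(p) for p in packets}) == 1, "packets must match in length"
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, byte in enumerate(p):
            out[i] ^= byte
    return bytes(out)
```

Because XOR is its own inverse, a receiver holding `b` and `c` recovers `a` as `xor_packets(coded, b, c)`, which is what makes a single multicast coded packet useful to receivers missing different pieces.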
Demonstration of flexible multicasting and aggregation functionality for TWDM-PON
NASA Astrophysics Data System (ADS)
Chen, Yuanxiang; Li, Juhao; Zhu, Paikun; Zhu, Jinglong; Tian, Yu; Wu, Zhongying; Peng, Huangfa; Xu, Yongchi; Chen, Jingbiao; He, Yongqi; Chen, Zhangyuan
2017-06-01
The time- and wavelength-division multiplexed passive optical network (TWDM-PON) has been recognized as an attractive solution for providing broadband access in next-generation networks. In this paper, we propose flexible service multicasting and aggregation functionality for TWDM-PON utilizing multiple-pump four-wave mixing (FWM) and a cyclic arrayed waveguide grating (AWG). With the proposed scheme, multiple TWDM-PON links share a single optical line terminal (OLT), which can greatly reduce network deployment expense and achieve efficient network resource utilization through load balancing among different optical distribution networks (ODNs). The proposed scheme is compatible with existing TDM-PON infrastructure with a fixed-wavelength OLT transmitter, so smooth service upgrade can be achieved. Utilizing the proposed scheme, we demonstrate a proof-of-concept experiment with 10-Gb/s OOK and 10-Gb/s QPSK orthogonal frequency division multiplexing (OFDM) signals multicast and aggregated to seven PON links. Compared with the back-to-back (BTB) channel, the newly generated multicast OOK and OFDM signals have power penalties of 1.6 dB and 2 dB at a BER of 10⁻³, respectively. For the aggregation of multiple channels, no obvious power penalty is observed. Moreover, to verify the flexibility of the proposed scheme, we reconfigure the wavelength selective switch (WSS) and adjust the number of pumps to realize flexible multicasting functionality. One-to-three, one-to-seven, one-to-thirteen, and one-to-twenty-one multicasting are achieved without modifying the OLT structure.
Debugging embedded computer programs. [tactical missile computers
NASA Technical Reports Server (NTRS)
Kemp, G. H.
1980-01-01
Every embedded computer program must complete its debugging cycle using some system that allows real-time debugging. Many of the common items addressed during debugging are listed. Seven approaches to debugging are analyzed to evaluate how well they treat those items. Cost evaluations are also included in the comparison. The results indicate that the best collection of capabilities for covering the common items in the debugging task occurs in the approach where a minicomputer handles the environment simulation with an emulation of some kind representing the embedded computer. This approach can be taken at a reasonable cost. The case study chosen is an embedded computer in a tactical missile. Several choices of computer for the environment simulation are discussed, as well as different approaches to the embedded emulator.
Digital multi-channel stabilization of four-mode phase-sensitive parametric multicasting.
Liu, Lan; Tong, Zhi; Wiberg, Andreas O J; Kuo, Bill P P; Myslivets, Evgeny; Alic, Nikola; Radic, Stojan
2014-07-28
A stable four-mode phase-sensitive (4MPS) process was investigated as a means to enhance the conversion efficiency (CE) and signal-to-noise ratio (SNR) of two-pump-driven parametric multicasting. The instability of a multi-beam phase-sensitive (PS) device, which inherently behaves as an interferometer whose output is subject to ambient-induced fluctuations, was addressed theoretically and experimentally. A new stabilization technique that controls the phases of the three input waves of the 4MPS multicaster and maximizes CE was developed and described. The stabilization relies on a digital phase-locked loop (DPLL) specifically developed to control the pump phases and guarantee stable 4MPS operation independent of environmental fluctuations. The technique also controls a single (signal) input phase to optimize the PS-induced improvement of the CE and SNR. The new, continuous-operation DPLL has allowed fully stabilized PS parametric broadband multicasting, demonstrating a CE improvement in excess of 10 dB over 20 signal copies.
Multicast Routing of Hierarchical Data
NASA Technical Reports Server (NTRS)
Shacham, Nachum
1992-01-01
The issue of multicast of broadband, real-time data in a heterogeneous environment, in which the data recipients differ in their reception abilities, is considered. Traditional multicast schemes, which are designed to deliver all the source data to all recipients, offer limited performance in such an environment, since they must either force the source to overcompress its signal or restrict the destination population to those who can receive the full signal. We present an approach for resolving this issue by combining hierarchical source coding techniques, which allow recipients to trade off reception bandwidth for signal quality, and sophisticated routing algorithms that deliver to each destination the maximum possible signal quality. The field of hierarchical coding is briefly surveyed and new multicast routing algorithms are presented. The algorithms are compared in terms of network utilization efficiency, lengths of paths, and the required mechanisms for forwarding packets on the resulting paths.
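The routing goal described above, delivering to each destination the maximum signal quality its path can support, can be illustrated with a toy bottleneck computation. This is a hypothetical sketch: the function name, data structures, and per-layer bandwidth model are illustrative assumptions, not the paper's actual algorithms.

```python
# Sketch: with hierarchical (layered) coding, a receiver can decode as many
# cumulative layers as the narrowest link on its path from the source carries.

def layers_reachable(paths, link_capacity, layer_rate):
    """For each destination, count how many layers fit on the bottleneck
    link of its path. paths maps destination -> list of link names."""
    result = {}
    for dest, links in paths.items():
        bottleneck = min(link_capacity[l] for l in links)
        result[dest] = int(bottleneck // layer_rate)
    return result

# Example: base layer plus enhancements, each needing 1 Mb/s.
paths = {"A": ["s-r1", "r1-A"], "B": ["s-r1", "r1-B"]}
caps = {"s-r1": 3.0, "r1-A": 3.0, "r1-B": 1.5}
print(layers_reachable(paths, caps, 1.0))  # A can take 3 layers, B only 1
```

A multicast route built this way forwards all three layers toward A but prunes the enhancement layers on the narrower branch toward B, which is the trade-off the abstract describes.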
Wang, Danshi; Zhang, Min; Qin, Jun; Lu, Guo-Wei; Wang, Hongxiang; Huang, Shanguo
2014-09-08
We propose a multifunctional optical switching unit based on a bidirectional liquid crystal on silicon (LCoS) and semiconductor optical amplifier (SOA) architecture. Add/drop, wavelength conversion, format conversion, and WDM multicast are experimentally demonstrated. Due to its bidirectional characteristic, the LCoS device can not only multiplex the input signals but also de-multiplex the converted signals. Dual-channel wavelength conversion and format conversion from 2 × 25 Gbps differential quadrature phase-shift keying (DQPSK) to 2 × 12.5 Gbps differential phase-shift keying (DPSK), based on four-wave mixing (FWM) in the SOA, are obtained with only one pump. One-to-six WDM multicast of 25 Gbps DQPSK signals with two pumps is also achieved. All multicast channels exhibit a power penalty of less than 1.1 dB at the FEC threshold of 3.8 × 10⁻³.
Representation and Integration of Scientific Information
NASA Technical Reports Server (NTRS)
1998-01-01
The objective of this Joint Research Interchange with NASA-Ames was to investigate how the Tsimmis technology could be used to represent and integrate scientific information. The main goal of the Tsimmis project is to allow a decision maker to find information of interest from such sources, fuse it, and process it (e.g., summarize it, visualize it, discover trends). Another important goal is the easy incorporation of new sources, as well as the ability to deal with sources whose structure or services evolve. During the Interchange we had research meetings approximately every month or two. The funds provided by NASA supported work that led to the following two papers: Fusion Queries over Internet Databases; Efficient Query Subscription Processing in a Multicast Environment.
Debugging Techniques Used by Experienced Programmers to Debug Their Own Code.
1990-09-01
…Davis, and Schultz (1987) also compared experts and novices, but focused on the way a computer program is represented cognitively and how that… of theories in the emerging computer programming domain (Fisher, 1987). In protocol analysis, subjects are asked to talk/think aloud as they solve…
Optimization of multicast optical networks with genetic algorithm
NASA Astrophysics Data System (ADS)
Lv, Bo; Mao, Xiangqiao; Zhang, Feng; Qin, Xi; Lu, Dan; Chen, Ming; Chen, Yong; Cao, Jihong; Jian, Shuisheng
2007-11-01
In this letter, aiming to obtain the best multicast performance of an optical network in which video conference information is carried on a specified wavelength, we extend the solutions of matrix games with network coding theory and devise a new method to solve the complex problem of multicast network switching. In addition, an experimental optical network has been tested with the best switching strategies by employing a novel numerical solution based on an efficient genetic algorithm. The results show that the optimal solutions obtained with the genetic algorithm are in accordance with those obtained with the traditional fictitious play method.
NASA Technical Reports Server (NTRS)
Birman, Kenneth; Schiper, Andre; Stephenson, Pat
1990-01-01
A new protocol is presented that efficiently implements a reliable, causally ordered multicast primitive and is easily extended into a totally ordered one. Intended for use in the ISIS toolkit, it offers a way to bypass the most costly aspects of ISIS while benefiting from virtual synchrony. The facility scales with bounded overhead. Measured speedups of more than an order of magnitude were obtained when the protocol was implemented within ISIS. One conclusion is that systems such as ISIS can achieve performance competitive with the best existing multicast facilities--a finding contradicting the widespread concern that fault-tolerance may be unacceptably costly.
Support for Debugging Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Hood, Robert; Jost, Gabriele; Biegel, Bryan (Technical Monitor)
2001-01-01
This viewgraph presentation provides information on the technical aspects of debugging computer code that has been automatically converted for use in a parallel computing system. Shared memory parallelization and distributed memory parallelization entail separate and distinct challenges for a debugging program. A prototype system has been developed which integrates various tools for the debugging of automatically parallelized programs including the CAPTools Database which provides variable definition information across subroutines as well as array distribution information.
XML Tactical Chat (XTC): The Way Ahead for Navy Chat
2007-09-01
…multicast transmissions via sophisticated pruning algorithms, while allowing multicast packets to “tunnel” through IP routers [Macedonia, Brutzman 1994]… conference was Jabber Inc., who added some great insight into the power of Jabber. • Great features including BlackBerry handheld connectivity and…
An FEC Adaptive Multicast MAC Protocol for Providing Reliability in WLANs
NASA Astrophysics Data System (ADS)
Basalamah, Anas; Sato, Takuro
For wireless multicast applications like multimedia conferencing, voice over IP, and video/audio streaming, reliable transmission of packets within a short delivery delay is needed. Moreover, reliability is crucial to the performance of error-intolerant applications like file transfer, distributed computing, chat, and whiteboard sharing. Forward Error Correction (FEC) is frequently used in wireless multicast to enhance Packet Error Rate (PER) performance, but it cannot assure full reliability unless coupled with Automatic Repeat Request (ARQ), forming what is known as Hybrid-ARQ. While reliable FEC can be deployed at different levels of the protocol stack, it cannot be deployed at the MAC layer of the unreliable IEEE 802.11 WLAN due to the MAC layer's inability to exchange ACKs with multiple recipients. In this paper, we propose a multicast MAC protocol that enhances WLAN reliability by using adaptive FEC, and we study its performance through mathematical analysis and simulation. Our results show that our protocol can deliver high reliability and throughput performance.
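The FEC side of the Hybrid-ARQ idea mentioned above can be sketched in its simplest form: one XOR parity packet per block lets any receiver repair a single lost packet locally, so the multicast sender avoids per-receiver retransmissions for isolated losses. This is a generic illustration, not the paper's MAC protocol or its adaptive code-rate mechanism.

```python
# Toy single-erasure FEC: parity = XOR of all data packets in a block.

def xor_parity(packets):
    """Compute the XOR parity packet over equal-length data packets."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

def recover(received, parity):
    """received holds the block with exactly one entry None (the loss);
    XORing parity with the surviving packets reconstructs the lost one."""
    repaired = parity
    for p in received:
        if p is not None:
            repaired = bytes(a ^ b for a, b in zip(repaired, p))
    return repaired

block = [b"pkt1", b"pkt2", b"pkt3"]
par = xor_parity(block)
print(recover([b"pkt1", None, b"pkt3"], par))  # → b'pkt2'
```

An adaptive scheme would vary how many such parity packets accompany each block as the observed loss rate changes; losses beyond the code's strength still fall back to ARQ.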
Reliable WDM multicast in optical burst-switched networks
NASA Astrophysics Data System (ADS)
Jeong, Myoungki; Qiao, Chunming; Xiong, Yijun
2000-09-01
In this paper, we present a reliable WDM (Wavelength-Division Multiplexing) multicast protocol for optical burst-switched (OBS) networks. Since the burst dropping (loss) probability may be potentially high in a heavily loaded OBS backbone network, reliable multicast protocols that have been developed for IP networks at the transport (or application) layer may incur heavy overheads, such as a large number of duplicate retransmissions. In addition, it may take a long time for an end host to detect and then recover from burst dropping (loss) that occurred at the WDM layer. For efficiency reasons, we propose burst loss recovery within the OBS backbone (i.e., at the WDM link layer). The proposed protocol requires two additional functions to be performed by the WDM switch controller when the WDM switch has more than one downstream branch on the WDM multicast tree: subcasting and maintaining burst states. We show that these additional functions are simple to implement and that the overhead associated with them is manageable.
Analysis on Multicast Routing Protocols for Mobile Ad Hoc Networks
NASA Astrophysics Data System (ADS)
Xiang, Ma
Because mobile ad hoc network technologies face a series of challenges, such as dynamic changes in topology, the existence of unidirectional channels, limited wireless transmission bandwidth, and the capability limitations of mobile terminals, research on mobile ad hoc network routing inevitably undertakes a more important task than routing research for other networks. Multicast is a mode of communication transmission oriented to group computing, which sends data to a group of hosts using a single source address. In a typical mobile ad hoc network environment, multicast is highly significant. On the one hand, the users of a mobile ad hoc network usually need to form collaborative working groups; on the other hand, multicast is an important means of fully exploiting the broadcast nature of wireless communication and effectively using the limited wireless channel resources. This paper summarizes and comparatively analyzes the routing mechanisms of various existing multicast routing protocols according to the characteristics of mobile ad hoc networks.
NASA Astrophysics Data System (ADS)
Bock, Carlos; Prat, Josep
2005-04-01
A hybrid WDM/TDM PON architecture implemented by means of two cascaded Arrayed Waveguide Gratings (AWGs) is presented. Using the Free Spectral Range (FSR) periodicity of the AWGs, we transmit unicast and multicast traffic on different wavelengths to each Optical Network Unit (ONU). The OLT is equipped with two laser stacks, a tunable one for unicast transmission and a fixed one for multicast transmission. We propose a reflective ONU in order to avoid any light source at the Customer Premises Equipment (CPE). Optical transmission tests demonstrate correct transmission at 2.5 Gbps over up to 30 km.
Debugging from the Student Perspective
ERIC Educational Resources Information Center
Fitzgerald, S.; McCauley, R.; Hanks, B.; Murphy, L.; Simon, B.; Zander, C.
2010-01-01
Learning to debug is a difficult, yet essential, aspect of learning to program. Students in this multi-institutional study report that finding bugs is harder than fixing them. They use a wide variety of debugging strategies, some of them unexpected. Time spent on understanding the problem can be effective. Pattern matching, particularly at the…
Debugging expert systems using a dynamically created hypertext network
NASA Technical Reports Server (NTRS)
Boyle, Craig D. B.; Schuette, John F.
1991-01-01
The labor-intensive nature of expert system writing and debugging motivated this study. The hypothesis is that a hypertext-based debugging tool is easier and faster to use than one traditional tool, the graphical execution trace. HESDE (Hypertext Expert System Debugging Environment) uses hypertext nodes and links to represent the objects and relationships created during the execution of a rule-based expert system. HESDE operates transparently on top of the CLIPS (C Language Integrated Production System) rule-based system environment and is used during the knowledge-base debugging process. During execution, HESDE builds an execution trace: uses of facts, rules, and their values are automatically stored in a hypertext network for each execution cycle. After execution, the knowledge engineer may access and browse the network created, viewing it in terms of rules, facts, and values. An experiment was conducted to compare HESDE with a graphical debugging environment, with subjects given representative tasks. For speed and accuracy, in eight of the eleven tasks given to subjects, HESDE was significantly better.
MIRO: A debugging tool for CLIPS incorporating historical Rete networks
NASA Technical Reports Server (NTRS)
Tuttle, Sharon M.; Eick, Christoph F.
1994-01-01
At the last CLIPS conference, we discussed our ideas for adding a temporal dimension to the Rete network used to implement CLIPS. The resulting historical Rete network could then be used to store 'historical' information about a run of a CLIPS program, to aid in debugging. MIRO, a debugging tool for CLIPS built on top of CLIPS, incorporates such a historical Rete network and uses it to support its prototype question-answering capability. By enabling CLIPS users to directly ask debugging-related questions about the history of a program run, we hope to reduce the amount of single-stepping and program tracing required to debug a CLIPS program. In this paper, we briefly describe MIRO's architecture and implementation, and the current question-types that MIRO supports. These question-types are further illustrated using an example, and the benefits of the debugging tool are discussed. We also present empirical results that measure the run-time and partial storage overhead of MIRO, and discuss how MIRO may also be used to study various efficiency aspects of CLIPS programs.
Performance Evaluation of Reliable Multicast Protocol for Checkout and Launch Control Systems
NASA Technical Reports Server (NTRS)
Shu, Wei Wennie; Porter, John
2000-01-01
The overall objective of this project is to study reliability and performance of Real Time Critical Network (RTCN) for checkout and launch control systems (CLCS). The major tasks include reliability and performance evaluation of Reliable Multicast (RM) package and fault tolerance analysis and design of dual redundant network architecture.
Design, Implementation, and Verification of the Reliable Multicast Protocol. Thesis
NASA Technical Reports Server (NTRS)
Montgomery, Todd L.
1995-01-01
This document describes the Reliable Multicast Protocol (RMP) design, first implementation, and formal verification. RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service. RMP is fully and symmetrically distributed, so that no site bears an undue portion of the communications load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority-resilient, and totally resilient atomic delivery. These guarantees are selectable on a per-message basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, a client/server model of delivery, mutually exclusive handlers for messages, and mutually exclusive locks. It has been commonly believed that total ordering of messages can only be achieved at great performance expense; RMP discounts this belief. The first implementation of RMP has been shown to provide high throughput performance on Local Area Networks (LANs). For two or more destinations on a single LAN, RMP provides higher throughput than any other protocol that does not use multicast or broadcast technology. The design, implementation, and verification activities of RMP have occurred concurrently, which has allowed the verification to maintain high fidelity between the design model, the implementation model, and the verification model. The constraints of implementation have influenced the design earlier than in conventional sequential approaches, and the protocol as a whole has matured more smoothly through the inclusion of several different perspectives in product development.
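The per-message selectable guarantees described above can be pictured as a QoS level attached to each send call. The level names below paraphrase the guarantees listed in the abstract; the function, packet layout, and enum are illustrative assumptions, not RMP's actual API or wire format.

```python
# Sketch: a sender chooses a delivery guarantee per message, and the
# protocol engine enforces it from the packet header.
from enum import IntEnum

class Qos(IntEnum):
    UNRELIABLE = 0          # best-effort datagram
    RELIABLE = 1            # delivered to all reachable members
    TOTALLY_ORDERED = 2     # all members see one global order
    K_RESILIENT = 3         # atomic despite up to K failures
    MAJORITY_RESILIENT = 4  # atomic if a majority survives
    TOTALLY_RESILIENT = 5   # atomic despite any failures

def send(data, qos=Qos.TOTALLY_ORDERED):
    """Build an outgoing message; a real stack would serialize the qos
    field into the header so receivers know what to enforce."""
    return {"data": data, "qos": qos}

msg = send(b"commit", Qos.MAJORITY_RESILIENT)
print(msg["qos"].name)  # → MAJORITY_RESILIENT
```

Cheap messages (e.g., periodic status) can ride at `UNRELIABLE` while a database commit on the same group uses a resilient level, which is the flexibility the abstract emphasizes.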
Next Generation Integrated Environment for Collaborative Work Across Internets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harvey B. Newman
2009-02-24
We are now well advanced in our development, prototyping, and deployment of a high-performance, next-generation Integrated Environment for Collaborative Work. The system, aimed at using the capability of ESnet and Internet2 for rapid data exchange, is based on the Virtual Room Videoconferencing System (VRVS) developed by Caltech. The VRVS system has been chosen by the Internet2 Digital Video (I2-DV) Initiative as a preferred foundation for the development of advanced video, audio, and multimedia collaborative applications by the Internet2 community. Today, the system supports high-end, broadcast-quality interactivity while enabling a wide variety of clients (Mbone, H.323) to participate in the same conference by running different standard protocols in different contexts with different bandwidth connection limitations. It has a fully Web-integrated user interface, developer and administrative APIs, a widely scalable video network topology based on both multicast domains and unicast tunnels, and demonstrated multiplatform support. This has led to its rapidly expanding production use for national and international scientific collaborations in more than 60 countries. We are also in the process of creating a 'testbed video network' and developing the necessary middleware to support a set of new and essential requirements for rapid data exchange and a high level of interactivity in large-scale scientific collaborations. These include a set of tunable, scalable, differentiated network services adapted to each of the data streams associated with a large number of collaborative sessions; policy-based and network-state-based resource scheduling; authentication; and optional encryption to maintain the confidentiality of interpersonal communications. High-performance testbed video networks will be established in ESnet and Internet2 to test and tune the implementation, using a few target application sets.
Representing and Teaching Knowledge for Troubleshooting/Debugging. Technical Report No. 292.
ERIC Educational Resources Information Center
Wescourt, Keith T.; Hemphill, Linda
The goal of the present project was to identify the types of knowledge necessary and useful for competent troubleshooting/debugging and to examine how new approaches to formal instruction might influence the attainment of competence by students. The research focused on the role of general strategies in troubleshooting/debugging, and how they might…
A Low-Complexity Subgroup Formation with QoS-Aware for Enhancing Multicast Services in LTE Networks
NASA Astrophysics Data System (ADS)
Algharem, M.; Omar, M. H.; Rahmat, R. F.; Budiarto, R.
2018-03-01
The high demand for multimedia services in Long Term Evolution (LTE) and beyond networks forces network operators to find a solution that can handle the huge traffic. To this end, subgroup formation techniques have been introduced to overcome the limitations of the Conventional Multicast Scheme (CMS) by splitting the multicast users into several subgroups based on the users' channel quality signals. However, finding the best subgroup configuration with low complexity needs more investigation. In this paper, efficient and simple subgroup formation mechanisms are proposed. The proposed mechanisms take the transmitter MAC queue into account. The effectiveness of the proposed mechanisms is evaluated and compared with CMS in terms of throughput, fairness, delay, and Block Error Rate (BLER).
A mechanism for efficient debugging of parallel programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, B.P.; Choi, J.D.
1988-01-01
This paper addresses the design and implementation of an integrated debugging system for parallel programs running on shared-memory multiprocessors (SMMP). The authors describe the use of flowback analysis to provide information on causal relationships between events in a program's execution without re-executing the program for debugging. The authors introduce a mechanism called incremental tracing that, by using semantic analyses of the debugged program, makes flowback analysis practical with only a small amount of trace generated during execution. They extend flowback analysis to apply to parallel programs and describe a method to detect race conditions in the interactions of the cooperating processes.
Proxy-assisted multicasting of video streams over mobile wireless networks
NASA Astrophysics Data System (ADS)
Nguyen, Maggie; Pezeshkmehr, Layla; Moh, Melody
2005-03-01
This work addresses the challenge of providing seamless multimedia services to mobile users by proposing a proxy-assisted multicast architecture for the delivery of video streams. We propose a hybrid system of streaming proxies, interconnected by an application-layer multicast tree, where each proxy acts as a cluster head to stream content to its stationary and mobile users. The architecture is based on our previously proposed Enhanced-NICE protocol, which uses an application-layer multicast tree to deliver layered video streams to multiple heterogeneous receivers. We focused the study on the placement of streaming proxies to enable efficient delivery of live and on-demand video, supporting both stationary and mobile users. The simulation results are evaluated and compared with two baseline scenarios: one with a centralized proxy system serving the entire population, and one with mini-proxies each serving its local users. The simulations are implemented using the J-Sim simulator. The results show that even though proxies in the hybrid scenario experienced a slightly longer delay, they had the lowest drop rate of video content. This finding illustrates the significance of task sharing among multiple proxies. The resulting load balancing among the proxies provided better video quality to a larger audience.
NASA Technical Reports Server (NTRS)
Svalbonas, V.; Ogilvie, P.
1973-01-01
The user and programming information necessary for the application of the SATELLITE programs for the STARS system are presented. The individual program functions are: (1) data debugging for the STARS-2S program, (2) Fourier series conversion program, (3) data debugging for the STARS-2B program, and (4) data debugging for the STARS-2V program.
Multicasting based optical inverse multiplexing in elastic optical network.
Guo, Bingli; Xu, Yingying; Zhu, Paikun; Zhong, Yucheng; Chen, Yuanxiang; Li, Juhao; Chen, Zhangyuan; He, Yongqi
2014-06-16
Optical-multicasting-based inverse multiplexing (IM) is introduced into the spectrum allocation of elastic optical networks to resolve the spectrum fragmentation problem, whereby superchannels can be split and fitted into several discrete spectrum blocks at the intermediate node. We experimentally demonstrate it with a 1-to-7 optical superchannel multicasting module and selecting/coupling components. Simulation results also show that, compared with several emerging spectrum defragmentation solutions (e.g., spectrum conversion, split spectrum), IM can reduce blocking significantly without adding as much system complexity as split spectrum. In addition, the service fairness of these schemes for traffic of different granularities is investigated for the first time, and the results show that IM performs better than spectrum conversion and almost as well as split spectrum, especially for smaller-size traffic under light traffic intensity.
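The allocation idea behind inverse multiplexing, fitting one superchannel into several discrete free spectrum blocks instead of demanding one contiguous block, can be sketched with a greedy fill. The data structures and first-fit policy here are illustrative assumptions, not the authors' simulation model.

```python
# Sketch: split a demand of `demand` frequency slots across free
# (start, length) spectrum blocks; contiguity is no longer required.

def inverse_multiplex(free_blocks, demand):
    """Greedily assign slots from free blocks until the demand is met.
    Returns the list of (start, slots_taken) fragments, or None if the
    total free capacity is insufficient (the request blocks)."""
    assigned, remaining = [], demand
    for start, length in free_blocks:
        if remaining == 0:
            break
        take = min(length, remaining)
        assigned.append((start, take))
        remaining -= take
    return assigned if remaining == 0 else None

# A 6-slot superchannel fits into fragments of 3, 2, and 1 slots.
print(inverse_multiplex([(0, 3), (10, 2), (20, 4)], 6))
# → [(0, 3), (10, 2), (20, 1)]
```

A contiguous-only allocator would block this request (no single block holds 6 slots), which is exactly the fragmentation loss that IM recovers; physically, the optical multicast produces the copies that are then filtered into the chosen fragments.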
Lightweight causal and atomic group multicast
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Schiper, Andre; Stephenson, Pat
1991-01-01
The ISIS toolkit is a distributed programming environment based on support for virtually synchronous process groups and group communication. A suite of protocols is presented to support this model. The approach revolves around a multicast primitive, called CBCAST, which implements a fault-tolerant, causally ordered message delivery. This primitive can be used directly or extended into a totally ordered multicast primitive, called ABCAST. It normally delivers messages immediately upon reception, and imposes a space overhead proportional to the size of the groups to which the sender belongs, usually a small number. It is concluded that process groups and group communication can achieve performance and scaling comparable to that of a raw message transport layer. This finding contradicts the widespread concern that this style of distributed computing may be unacceptably costly.
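Causally ordered delivery of the kind CBCAST provides is commonly implemented with vector timestamps. The rule below is the standard textbook delivery condition, shown here as a generic illustration; ISIS's actual CBCAST encoding and compression differ.

```python
# Sketch: a message from `sender` stamped with vector clock `vt` may be
# delivered at a process whose clock is `local` iff it is the next message
# from that sender and all messages it causally depends on have arrived.

def deliverable(vt, local, sender):
    return vt[sender] == local[sender] + 1 and all(
        vt[k] <= local[k] for k in range(len(vt)) if k != sender
    )

local = [2, 1, 0]  # messages delivered so far from senders 0, 1, 2

# Next message from sender 0, no unmet dependencies: deliver now.
print(deliverable([3, 1, 0], local, 0))  # → True
# Depends on a second message from sender 1 we have not seen: buffer it.
print(deliverable([3, 2, 0], local, 0))  # → False
```

A message failing the check is buffered and re-tested as earlier messages arrive; the space overhead proportional to group size that the abstract mentions is the vector itself.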
A debugging method of the Quadrotor UAV based on infrared thermal imaging
NASA Astrophysics Data System (ADS)
Cui, Guangjie; Hao, Qian; Yang, Jianguo; Chen, Lizhi; Hu, Hongkang; Zhang, Lijun
2018-01-01
High-performance UAVs have been popular and in great demand in recent years. This paper introduces a new method for debugging quadrotor UAVs. Based on infrared thermal technology and heat transfer theory, a UAV under debugging hovers above a hot-wire grid composed of 14 heated nichrome wires, and the air flow propelled by the rotating rotors influences the temperature distribution of the grid. An infrared thermal imager below observes the distribution and captures thermal images of the hot-wire grid. With the assistance of a mathematical model and experiments, the paper discusses the relationship between the thermal images and the rotor speeds. By putting already-debugged UAVs through the test, reference information and thermal images can be acquired. The paper demonstrates that, by comparison with the reference thermal images, a UAV being debugged in the same test can yield critical data directly or after interpolation. The results are shown in the paper and the advantages are discussed.
Reliable multicast protocol specifications protocol operations
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd; Whetten, Brian
1995-01-01
This appendix contains the complete state tables for Reliable Multicast Protocol (RMP) Normal Operation, Multi-RPC Extensions, Membership Change Extensions, and Reformation Extensions. First the event types are presented. Afterwards, each RMP operation state, normal and extended, is presented individually and its events shown. Events in the RMP specification are one of several things: (1) arriving packets, (2) expired alarms, (3) user events, (4) exceptional conditions.
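State tables of the kind this appendix specifies map a (state, event) pair to a handler and a next state, and a protocol engine simply dispatches on that table. The sketch below is a generic table-driven engine with placeholder states and events; it does not reproduce the actual RMP tables.

```python
# Sketch: table-driven protocol engine. The table maps
# (current_state, event) -> (handler, next_state).

def make_fsm(table, start):
    state = [start]
    def dispatch(event):
        handler, nxt = table[(state[0], event)]
        handler(event)       # perform the action for this transition
        state[0] = nxt
        return state[0]
    return dispatch

log = []
table = {
    # illustrative transitions: packets, acks, and an expired alarm
    ("idle", "packet"):  (log.append, "ordering"),
    ("ordering", "ack"): (log.append, "idle"),
    ("idle", "alarm"):   (log.append, "idle"),
}
step = make_fsm(table, "idle")
print(step("packet"))  # → ordering
print(step("ack"))     # → idle
```

The four RMP event classes listed above (arriving packets, expired alarms, user events, exceptional conditions) would all enter through the same dispatch point, which is what makes the specification's per-state event tables directly executable.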
Bulk data transfer distributer: a high performance multicast model in ALMA ACS
NASA Astrophysics Data System (ADS)
Cirami, R.; Di Marcantonio, P.; Chiozzi, G.; Jeram, B.
2006-06-01
A high-performance multicast model for the bulk data transfer mechanism in the ALMA (Atacama Large Millimeter Array) Common Software (ACS) is presented. The ALMA astronomical interferometer will consist of at least 50 12-m antennas operating at millimeter wavelengths. The whole software infrastructure for ALMA is based on ACS, which is a set of application frameworks built on top of CORBA. To cope with the very strong requirements on the amount of data that must be transported by the software communication channels of the ALMA subsystems (a typical output data rate expected from the Correlator is of the order of 64 MB per second), and with the potential CORBA bottleneck due to parameter marshalling/de-marshalling, use of the IIOP protocol, etc., a transfer mechanism based on the ACE/TAO CORBA Audio/Video (A/V) Streaming Service has been developed. The ACS Bulk Data Transfer architecture bypasses the CORBA protocol with an out-of-band connection for the data streams (transmitting data directly in TCP or UDP format), using CORBA at the same time for handshaking and leveraging the benefits of the ACS middleware. This mechanism has proven capable of high performance, of the order of 800 Mbit/s on a 1 Gbit Ethernet network. Besides a point-to-point communication model, the ACS Bulk Data Transfer provides a multicast model. Since the TCP protocol does not support multicasting and all the data must be correctly delivered to all ALMA subsystems, a distributer mechanism has been developed. This paper focuses on the ACS Bulk Data Distributer, which mimics multicast behaviour by managing data dispatching to all receivers willing to get data from the same sender.
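The distributer pattern described above (TCP has no multicast, so a middle tier receives each buffer once and re-sends a copy to every registered receiver) can be sketched in a few lines. Queues stand in for TCP sockets in this illustration; the class and method names are assumptions, not the ACS API.

```python
# Sketch: application-level "multicast" over point-to-point channels.
from collections import deque

class Distributer:
    """Receives each buffer once from the sender and fans it out to
    every registered receiver, emulating multicast over TCP."""
    def __init__(self):
        self.receivers = []

    def register(self):
        q = deque()                # stands in for a per-receiver socket
        self.receivers.append(q)
        return q

    def send(self, buf):
        for q in self.receivers:   # one copy per receiver
            q.append(buf)

d = Distributer()
r1, r2 = d.register(), d.register()
d.send(b"correlator-frame")
print(r1[0] == r2[0] == b"correlator-frame")  # → True
```

The cost is that the distributer's uplink carries the data once but its downlinks carry N copies, which is why such a tier is placed close to the receivers.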
Lu, Guo-Wei; Bo, Tianwai; Sakamoto, Takahide; Yamamoto, Naokatsu; Chan, Calvin Chun-Kit
2016-10-03
Recently, the ever-growing demand for dynamic and high-capacity services in optical networks has resulted in new challenges that require improved network agility and flexibility, so that network resources become more "consumable" and dynamic, or elastic, in response to requests from higher network layers. Flexible and scalable wavelength conversion or multicast is one of the most important technologies needed for developing agility in the physical layer. This paper investigates how, using a reconfigurable coherent multi-carrier as a pump, the multicast scalability and the flexibility in wavelength allocation of the converted signals can be effectively improved. Moreover, the coherence of the multiple carriers prevents phase-noise transfer from the local pump to the converted signals, which is imperative for phase-noise-sensitive multi-level single- or multi-carrier modulated signals. To verify the feasibility of the proposed scheme, we experimentally demonstrate the wavelength multicast of coherent optical orthogonal frequency division multiplexing (CO-OFDM) signals using a reconfigurable coherent multi-carrier pump, showing flexibility in wavelength allocation, scalability in multicast, and tolerance against pump phase noise. Power penalties of less than 0.5 dB and 1.8 dB at a bit-error rate (BER) of 10^-3 are obtained for the converted CO-OFDM-quadrature phase-shift keying (QPSK) and CO-OFDM-16-ary quadrature amplitude modulation (16QAM) signals, respectively, even when using a distributed feedback (DFB) laser as the pump source. In contrast, with a free-running pumping scheme, the phase noise from the DFB pumps severely deteriorates the CO-OFDM signals, resulting in a visible error floor at a BER of 10^-2 in the converted CO-OFDM-16QAM signals.
Overview of AMS (CCSDS Asynchronous Message Service)
NASA Technical Reports Server (NTRS)
Burleigh, Scott
2006-01-01
This viewgraph presentation gives an overview of the Consultative Committee for Space Data Systems (CCSDS) Asynchronous Message Service (AMS). The topics include: 1) Key Features; 2) A single AMS continuum; 3) The AMS Protocol Suite; 4) A multi-continuum venture; 5) Constraining transmissions; 6) Security; 7) Fault Tolerance; 8) Performance of Reference Implementation; 9) AMS vs Multicast (1); 10) AMS vs Multicast (2); 11) RAMS testing exercise; and 12) Results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ishihara, T
Currently, the problem at hand is distributing identical copies of OEP and filter software to a large number of farm nodes. One common method of transferring this software is unicast. A unicast protocol faces the problem of repeatedly sending the same data over the network; since the sending rate is limited, this process becomes a bottleneck. One possible solution therefore lies in creating a reliable multicast protocol. A specific type of multicast protocol is the Bulk Multicast Protocol [4]. This system consists of one sender distributing data to many receivers. The sender delivers data at a given rate of data packets. In response, each receiver replies to the sender with a status packet containing information about packet loss in the form of Negative Acknowledgments. The probability that a receiver sends a status packet back to the sender is 1/N, where N is the number of receivers, so the protocol is designed to produce approximately 1 status packet for each data packet sent. In this project, we were able to show that the complete transfer of a file to multiple receivers was about 12 times faster with multicast than with unicast. The implementation of this experimental protocol shows remarkable improvement in mass data transfer to a large number of farm machines.
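The 1/N feedback rule can be checked with a small simulation (a sketch of the idea only; the real protocol piggybacks loss information in the status packets):

```python
import random

def avg_status_packets(n_receivers, n_data_packets, rng):
    """For each data packet, every receiver independently sends a status
    packet with probability 1/N, so the sender sees about one status
    packet per data packet regardless of group size."""
    total = 0
    for _ in range(n_data_packets):
        for _ in range(n_receivers):
            if rng.random() < 1.0 / n_receivers:
                total += 1
    return total / n_data_packets
```

With 50 receivers the average converges to roughly one status packet per data packet, which is what keeps the feedback load independent of the group size.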
Robust Group Sparse Beamforming for Multicast Green Cloud-RAN With Imperfect CSI
NASA Astrophysics Data System (ADS)
Shi, Yuanming; Zhang, Jun; Letaief, Khaled B.
2015-09-01
In this paper, we investigate the network power minimization problem for the multicast cloud radio access network (Cloud-RAN) with imperfect channel state information (CSI). The key observation is that network power minimization can be achieved by adaptively selecting active remote radio heads (RRHs) via controlling the group-sparsity structure of the beamforming vector. However, this yields a non-convex combinatorial optimization problem, for which we propose a three-stage robust group sparse beamforming algorithm. In the first stage, a quadratic variational formulation of the weighted mixed l1/l2-norm is proposed to induce the group-sparsity structure in the aggregated beamforming vector, which indicates those RRHs that can be switched off. A perturbed alternating optimization algorithm is then proposed to solve the resultant non-convex group-sparsity inducing optimization problem by exploiting its convex substructures. In the second stage, we propose a PhaseLift technique based algorithm to solve the feasibility problem with a given active RRH set, which helps determine the active RRHs. Finally, the semidefinite relaxation (SDR) technique is adopted to determine the robust multicast beamformers. Simulation results demonstrate the convergence of the perturbed alternating optimization algorithm, as well as the effectiveness of the proposed algorithm in minimizing the network power consumption of multicast Cloud-RAN.
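The group-sparsity inducing stage can be summarized by the weighted mixed l1/l2-norm objective (a sketch consistent with the abstract; the symbols are assumptions, not the paper's exact notation):

```latex
\Omega(\mathbf{v}) \;=\; \sum_{l=1}^{L} \omega_l \,\bigl\| \mathbf{v}_l \bigr\|_2 ,
```

where \(\mathbf{v}_l\) stacks the beamforming coefficients of RRH \(l\) in the aggregated vector \(\mathbf{v}\) and \(\omega_l\) is its weight: minimizing \(\Omega\) drives whole blocks \(\mathbf{v}_l\) to zero, which is precisely the signal that RRH \(l\) can be switched off.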
NASA Astrophysics Data System (ADS)
Liu, Yu; Lin, Xiaocheng; Fan, Nianfei; Zhang, Lin
2016-01-01
Wireless video multicast has become one of the key technologies in wireless applications. But the main challenge of conventional wireless video multicast, i.e., the cliff effect, remains unsolved. To overcome the cliff effect, a hybrid digital-analog (HDA) video transmission framework based on SoftCast, which transmits the digital bitstream with the quantization residuals, is proposed. With an effective power allocation algorithm and appropriate parameter settings, the residual gains can be maximized; meanwhile, the digital bitstream can assure transmission of a basic video to the multicast receiver group. In the multiple-input multiple-output (MIMO) system, since nonuniform noise interference on different antennas can be regarded as the cliff effect problem, ParCast, which is a variation of SoftCast, is also applied to video transmission to solve it. The HDA scheme with corresponding power allocation algorithms is also applied to improve video performance. Simulations show that the proposed HDA scheme can overcome the cliff effect completely with the transmission of residuals. What is more, it outperforms the compared WSVC scheme by more than 2 dB when transmitting under the same bandwidth, and it can further improve performance by nearly 8 dB in MIMO when compared with the ParCast scheme.
Apply network coding for H.264/SVC multicasting
NASA Astrophysics Data System (ADS)
Wang, Hui; Kuo, C.-C. Jay
2008-08-01
In a packet erasure network environment, video streaming benefits from error control in two ways to achieve graceful degradation. The first approach is application-level (or link-level) forward error correction (FEC) to provide erasure protection. The second is error concealment at the decoder end to compensate for lost packets. A large amount of research has been done in these two areas. More recently, network coding (NC) techniques have been proposed for efficient data multicast over networks. It was shown in our previous work that multicast video streaming benefits from NC through improved throughput. An algebraic model is given to analyze the performance in this work. By exploiting the linear combination of video packets along nodes in a network and the SVC video format, the system achieves path diversity automatically and enables efficient video delivery to heterogeneous receivers over packet erasure channels. The application of network coding can protect video packets against the erasure network environment. However, the rank deficiency problem of random linear network coding makes error concealment inefficient. It is shown by computer simulation that the proposed NC video multicast scheme enables heterogeneous receivers to receive according to their capacity constraints, but special design is needed to improve video transmission performance when applying network coding.
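To make the "linear combination of video packets" and the rank-deficiency issue concrete, here is a minimal random linear network coding sketch over GF(2) (XOR of a random packet subset); practical systems usually work over GF(2^8), so this is an illustration of the mechanism, not the paper's scheme:

```python
import random

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets, rng):
    """One coded packet: a random nonzero GF(2) combination (XOR of a
    random subset) of the source packets, plus its coefficient vector."""
    k = len(packets)
    coeffs = [rng.randint(0, 1) for _ in range(k)]
    if not any(coeffs):
        coeffs[rng.randrange(k)] = 1
    payload = bytes(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            payload = xor_bytes(payload, p)
    return coeffs, payload

class Decoder:
    """Collects innovative coded packets; rank deficiency shows up as
    received packets that add nothing and are discarded."""
    def __init__(self, k):
        self.k = k
        self.rows = []  # reduced (coeffs, payload) pairs, distinct pivots

    def add(self, coeffs, payload):
        coeffs = coeffs[:]
        for rc, rp in self.rows:     # forward-eliminate against stored rows
            if coeffs[rc.index(1)]:
                coeffs = [a ^ b for a, b in zip(coeffs, rc)]
                payload = xor_bytes(payload, rp)
        if any(coeffs):              # innovative: increases the rank
            self.rows.append((coeffs, payload))
        return len(self.rows) == self.k

    def recover(self):
        """Back-substitute to recover the original packets, in order."""
        rows = sorted(self.rows, key=lambda r: r[0].index(1))
        for i in reversed(range(len(rows))):
            ci, pi = rows[i]
            pivot = ci.index(1)
            for j in range(i):
                cj, pj = rows[j]
                if cj[pivot]:
                    rows[j] = ([a ^ b for a, b in zip(cj, ci)],
                               xor_bytes(pj, pi))
        return [p for _, p in rows]
```

Until the decoder reaches full rank k, none of the original packets is available — which is why rank deficiency interacts badly with per-packet error concealment, as the abstract notes.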
BigDebug: Debugging Primitives for Interactive Big Data Processing in Spark.
Gulzar, Muhammad Ali; Interlandi, Matteo; Yoo, Seunghyun; Tetali, Sai Deep; Condie, Tyson; Millstein, Todd; Kim, Miryung
2016-05-01
Developers use cloud computing platforms to process a large quantity of data in parallel when developing big data analytics. Debugging the massive parallel computations that run in today's data-centers is time consuming and error-prone. To address this challenge, we design a set of interactive, real-time debugging primitives for big data processing in Apache Spark, the next generation data-intensive scalable cloud computing platform. This requires re-thinking the notion of step-through debugging in a traditional debugger such as gdb, because pausing the entire computation across distributed worker nodes causes significant delay and naively inspecting millions of records using a watchpoint is too time consuming for an end user. First, BIGDEBUG's simulated breakpoints and on-demand watchpoints allow users to selectively examine distributed, intermediate data on the cloud with little overhead. Second, a user can also pinpoint a crash-inducing record and selectively resume relevant sub-computations after a quick fix. Third, a user can determine the root causes of errors (or delays) at the level of individual records through a fine-grained data provenance capability. Our evaluation shows that BIGDEBUG scales to terabytes and its record-level tracing incurs less than 25% overhead on average. It determines crash culprits orders of magnitude more accurately and provides up to 100% time saving compared to the baseline replay debugger. The results show that BIGDEBUG supports debugging at interactive speeds with minimal performance impact.
Support for Debugging Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)
2001-01-01
We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.
Relative Debugging of Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Hood, Robert; Biegel, Bryan (Technical Monitor)
2002-01-01
We describe a system that simplifies the process of debugging programs produced by computer-aided parallelization tools. The system uses relative debugging techniques to compare serial and parallel executions in order to show where the computations begin to differ. If the original serial code is correct, errors due to parallelization will be isolated by the comparison. One of the primary goals of the system is to minimize the effort required of the user. To that end, the debugging system uses information produced by the parallelization tool to drive the comparison process. In particular, the debugging system relies on the parallelization tool to provide information about where variables may have been modified and how arrays are distributed across multiple processes. User effort is also reduced through the use of dynamic instrumentation. This allows us to modify the program execution without changing the way the user builds the executable. The use of dynamic instrumentation also permits us to compare the executions in a fine-grained fashion and only involve the debugger when a difference has been detected. This reduces the overhead of executing instrumentation.
Multicast Parametric Synchronous Sampling
2011-09-01
…enhancement in a parametric mixer device. Fig. 4 shows the principle of generating uniform, high-quality replicas extending over previously unattainable… critical part of the MPASS architecture and is responsible for the direct and continuous acquisition of data across all of the multicast signal copies… (ii) ability to copy THz signals with impunity to tens of replicas; (iii) all-optical delays > 1.9 us; (iv) tens of THz-fast all-optical sampling of…
Fault recovery in the reliable multicast protocol
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.; Whetten, Brian
1995-01-01
The Reliable Multicast Protocol (RMP) provides a unique, group-based model for distributed programs that need to handle reconfiguration events at the application layer. This model, called membership views, provides an abstraction in which events such as site failures, network partitions, and normal join-leave events are viewed as group reformations. RMP provides access to this model through an application programming interface (API) that notifies an application when a group is reformed as the result of some event. RMP provides applications with reliable delivery of messages to other group members in a distributed environment, even in the case of reformations, using an underlying IP Multicast (12, 5) medium. A distributed application can use various Quality of Service (QoS) levels provided by RMP to tolerate group reformations. This paper explores the implementation details of the mechanisms in RMP that provide distributed applications with membership view information and fault recovery capabilities.
Specification and Design of a Fault Recovery Model for the Reliable Multicast Protocol
NASA Technical Reports Server (NTRS)
Montgomery, Todd; Callahan, John R.; Whetten, Brian
1996-01-01
The Reliable Multicast Protocol (RMP) provides a unique, group-based model for distributed programs that need to handle reconfiguration events at the application layer. This model, called membership views, provides an abstraction in which events such as site failures, network partitions, and normal join-leave events are viewed as group reformations. RMP provides access to this model through an application programming interface (API) that notifies an application when a group is reformed as the result of some event. RMP provides applications with reliable delivery of messages to other group members in a distributed environment, even in the case of reformations, using an underlying IP Multicast medium. A distributed application can use various Quality of Service (QoS) levels provided by RMP to tolerate group reformations. This paper explores the implementation details of the mechanisms in RMP that provide distributed applications with membership view information and fault recovery capabilities.
Reliable multicast protocol specifications flow control and NACK policy
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.; Whetten, Brian
1995-01-01
This appendix presents the flow and congestion control schemes recommended for RMP and a NACK policy based on the whiteboard tool. Because RMP uses a primarily NACK based error detection scheme, there is no direct feedback path through which receivers can signal losses caused by low buffer space or congestion. Reliable multicast protocols also suffer from the fact that throughput for a multicast group must be divided among the members of the group. This division is usually very dynamic in nature and therefore does not lend itself well to a priori determination. These facts have led the flow and congestion control schemes of RMP to be made completely orthogonal to the protocol specification. This allows several differing schemes to be used in different environments to produce the best results. As a default, a modified sliding window scheme based on previous algorithms is suggested and described below.
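The default mechanism can be sketched as a sender-side sliding window with NACK-driven retransmission (a minimal sketch in the spirit of, not identical to, the RMP scheme; names are illustrative):

```python
class SlidingWindowSender:
    """Sender keeps up to `window_size` unacknowledged packets buffered.
    NACKs trigger retransmission from the buffer; cumulative ACKs slide
    the window forward and free buffer space."""
    def __init__(self, window_size):
        self.window_size = window_size
        self.next_seq = 0
        self.lowest_unacked = 0
        self.buffer = {}          # seq -> payload kept for retransmission

    def can_send(self):
        return self.next_seq - self.lowest_unacked < self.window_size

    def send(self, payload):
        assert self.can_send()
        seq = self.next_seq
        self.buffer[seq] = payload
        self.next_seq += 1
        return seq

    def on_nack(self, seq):
        return self.buffer[seq]   # retransmit the requested packet

    def on_ack(self, up_to_seq):
        for s in range(self.lowest_unacked, up_to_seq + 1):
            self.buffer.pop(s, None)
        self.lowest_unacked = max(self.lowest_unacked, up_to_seq + 1)
```

Because the scheme is orthogonal to the protocol, this window logic could be swapped for a rate-based or hybrid controller without touching the RMP specification itself.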
Fixed-rate layered multicast congestion control
NASA Astrophysics Data System (ADS)
Bing, Zhang; Bing, Yuan; Zengji, Liu
2006-10-01
A new fixed-rate layered multicast congestion control algorithm called FLMCC is proposed. The sender of a multicast session transmits data packets at a fixed rate on each layer, while receivers each obtain different throughput by cumulatively subscribing to different numbers of layers based on their expected rates. In order to provide TCP-friendliness and estimate the expected rate accurately, a window-based mechanism implemented at the receivers is presented. To achieve this, each receiver maintains a congestion window, adjusts it based on the GAIMD algorithm, and calculates an expected rate from the congestion window. To measure RTT, a new method is presented which combines accurate measurement with rough estimation. A feedback-suppression scheme based on a random timer mechanism is used to avoid feedback implosion during the accurate measurement. The protocol is simple to implement. Simulations indicate that FLMCC shows good TCP-friendliness, responsiveness, and intra-protocol fairness, and provides high link utilization.
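The receiver-side window logic can be sketched as follows. The alpha/beta pairing below is the common TCP-friendly GAIMD choice; FLMCC's actual constants are not given in the abstract, so these values are assumptions:

```python
class GaimdWindow:
    """GAIMD-style congestion window maintained at a receiver: add alpha
    per loss-free RTT, multiply by beta on loss."""
    def __init__(self, alpha=0.31, beta=0.875, rtt=0.1):
        self.w = 1.0
        self.alpha, self.beta, self.rtt = alpha, beta, rtt

    def on_rtt_no_loss(self):
        self.w += self.alpha                    # additive increase

    def on_loss(self):
        self.w = max(1.0, self.w * self.beta)   # multiplicative decrease

    def expected_rate(self):
        return self.w / self.rtt                # packets per second

def layers_to_join(layer_rates, expected_rate):
    """Cumulatively subscribe to fixed-rate layers while the running
    total stays within the receiver's expected rate."""
    total, n = 0.0, 0
    for r in layer_rates:
        if total + r > expected_rate:
            break
        total, n = total + r, n + 1
    return n
```

A receiver with an expected rate of 35 packets/s and layers of 10, 20, and 40 packets/s would join the first two layers only.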
NASA Technical Reports Server (NTRS)
Shyy, Dong-Jye; Redman, Wayne
1993-01-01
For the next-generation packet-switched communications satellite system with onboard processing and spot-beam operation, a reliable onboard fast packet switch is essential to route packets from different uplink beams to different downlink beams. The rapid emergence of point-to-multipoint services such as video distribution, and the large demand for video conferencing, distributed data processing, and network management, make the multicast function essential to a fast packet switch (FPS). The satellite's inherent broadcast features give the satellite network an advantage over the terrestrial network in providing multicast services. This report evaluates alternative multicast FPS architectures for onboard baseband switching applications and selects a candidate for subsequent breadboard development. Architecture evaluation and selection are based on the study performed in phase 1, 'Onboard B-ISDN Fast Packet Switching Architectures', and on other switch architectures that have become commercially available as large-scale integration (LSI) devices.
Ahlawat, Meenu; Bostani, Ameneh; Tehranchi, Amirhossein; Kashyap, Raman
2013-08-01
We experimentally demonstrate the possibility of agile multicasting for wavelength division multiplexing (WDM) networks, of a single-channel to two and seven channels over the C band, also extendable to S and L bands. This is based on cascaded χ(2) nonlinear mixing processes, namely, second-harmonic generation (SHG)-sum-frequency generation (SFG) and difference-frequency generation (DFG) in a 20-mm-long step-chirped periodically poled lithium niobate crystal, specially designed and fabricated for a 28-nm-wide SH-SF bandwidth centered at around 1.55 μm. The multiple idlers are simultaneously tuned by detuning the pump wavelengths within the broad SH-SF bandwidth. By selectively tuning the pump wavelengths over less than 10 and 6 nm, respectively, multicasting into two and seven idlers is successfully achieved across ~70 WDM channels within the 50 GHz International Telecommunication Union grid spacing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaines, Sherry
Intentionally simple buggy code created for use in a debugging demonstration as part of recruiting tech talks. Code exemplifies a buffer overflow, leading to return address corruption. Code also demonstrates unused return value.
BigDebug: Debugging Primitives for Interactive Big Data Processing in Spark
Gulzar, Muhammad Ali; Interlandi, Matteo; Yoo, Seunghyun; Tetali, Sai Deep; Condie, Tyson; Millstein, Todd; Kim, Miryung
2016-01-01
Developers use cloud computing platforms to process a large quantity of data in parallel when developing big data analytics. Debugging the massive parallel computations that run in today’s data-centers is time consuming and error-prone. To address this challenge, we design a set of interactive, real-time debugging primitives for big data processing in Apache Spark, the next generation data-intensive scalable cloud computing platform. This requires re-thinking the notion of step-through debugging in a traditional debugger such as gdb, because pausing the entire computation across distributed worker nodes causes significant delay and naively inspecting millions of records using a watchpoint is too time consuming for an end user. First, BIGDEBUG’s simulated breakpoints and on-demand watchpoints allow users to selectively examine distributed, intermediate data on the cloud with little overhead. Second, a user can also pinpoint a crash-inducing record and selectively resume relevant sub-computations after a quick fix. Third, a user can determine the root causes of errors (or delays) at the level of individual records through a fine-grained data provenance capability. Our evaluation shows that BIGDEBUG scales to terabytes and its record-level tracing incurs less than 25% overhead on average. It determines crash culprits orders of magnitude more accurately and provides up to 100% time saving compared to the baseline replay debugger. The results show that BIGDEBUG supports debugging at interactive speeds with minimal performance impact. PMID:27390389
AF-TRUST, Air Force Team for Research in Ubiquitous Secure Technology
2010-07-26
Charles Sutton, J. D. Tygar, and Kai Xia. Book chapter in Jeffrey J. P. Tsai and Philip S. Yu (eds.), Machine Learning in Cyber Trust: Security, Privacy… enterprise, tactical, embedded systems and command and control levels. From these studies, commissioned by Dr. Sekar Chandersekaran of the Secretary of the… Data centers avoid IP Multicast because of a series of problems with the technology. • Dr. Multicast (the MCMD), a system that maps traditional IPMC
Trace-Driven Debugging of Message Passing Programs
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Hood, Robert; Lopez, Louis; Bailey, David (Technical Monitor)
1998-01-01
In this paper we report on features added to a parallel debugger to simplify the debugging of parallel message passing programs. These features include replay, setting consistent breakpoints based on interprocess event causality, a parallel undo operation, and communication supervision. These features all use trace information collected during the execution of the program being debugged. We used a number of different instrumentation techniques to collect traces. We also implemented trace displays using two different trace visualization systems. The implementation was tested on an SGI Power Challenge cluster and a network of SGI workstations.
Remote software upload techniques in future vehicles and their performance analysis
NASA Astrophysics Data System (ADS)
Hossain, Irina
Updating software in vehicle Electronic Control Units (ECUs) will become a mandatory requirement for a variety of reasons, for example, to update or fix the functionality of an existing system, add new functionality, remove software bugs, and to cope with ITS infrastructure. Software modules of advanced vehicles can be updated using the Remote Software Upload (RSU) technique. RSU employs an infrastructure-based wireless communication technique in which the software supplier sends the software to the targeted vehicle via a roadside Base Station (BS). However, security is critically important in RSU to avoid any disasters due to malfunctions of the vehicle and to protect proprietary algorithms from hackers, competitors, or people with malicious intent. In this thesis, a mechanism for secure software upload in advanced vehicles is presented which employs mutual authentication of the software provider and the vehicle using a pre-shared authentication key before sending the software. The software packets are sent encrypted with a secret key along with a Message Digest (MD). To increase the security level, it is proposed that the vehicle receive more than one copy of the software, each accompanied by its MD. The vehicle installs the new software only when it receives more than one identical copy of the software. To validate the proposition, analytical expressions for the average number of packet transmissions required for a successful software update are derived. Different cases are investigated depending on the vehicle's buffer size and verification methods. The analytical and simulation results show that it is sufficient to send two copies of the software to the vehicle to thwart any security attack while uploading the software. The above-mentioned unicast method for RSU is suitable when software needs to be uploaded to a single vehicle.
Since multicasting is the most efficient method of group communication, updating software in the ECUs of a large number of vehicles could benefit from it. However, as with unicast RSU, the security requirements of multicast communication, i.e., authenticity, confidentiality, and integrity of the transmitted software and access control of the group members, are challenging. In this thesis, an infrastructure-based mobile multicasting scheme for RSU in vehicle ECUs is proposed in which an ECU receives the software from a remote software distribution center using the roadside BSs as gateways. The Vehicular Software Distribution Network (VSDN) is divided into small regions, each administered by a Regional Group Manager (RGM). Two multicast Group Key Management (GKM) techniques are proposed based on the degree of trust in the BSs, named the Fully-trusted (FT) and Semi-trusted (ST) systems. Analytical models are developed to find the multicast session establishment latency and handover latency for these two protocols. The average latency to perform mutual authentication of the software vendor and a vehicle, the latency to send the multicast session key during multicast session initialization, and the handoff latency during a multicast session are calculated. Analytical and simulation results show that the link establishment latency per vehicle of our proposed schemes is in the range of a few seconds, with the ST system requiring a few milliseconds more than the FT system. The handoff latency is also in the range of a few seconds, and in some cases the ST system requires less handoff time than the FT system. Thus, it is possible to build an efficient GKM protocol without putting too much trust in the BSs.
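The install-only-on-matching-copies rule from the unicast scheme can be sketched as follows. The thesis leaves the digest algorithm generic ("Message Digest"), so SHA-256 here is an assumption, and the function name is illustrative:

```python
import hashlib

def verify_copies(copies):
    """Install only if at least two received copies are identical and each
    copy's payload matches its accompanying digest.
    `copies` is a list of (payload_bytes, hex_digest) pairs."""
    valid = []
    for payload, digest in copies:
        # digest check: discard copies corrupted or tampered in transit
        if hashlib.sha256(payload).hexdigest() == digest:
            valid.append(payload)
    # identical-copy check: require agreement between two valid copies
    for i in range(len(valid)):
        for j in range(i + 1, len(valid)):
            if valid[i] == valid[j]:
                return valid[i]
    return None   # do not install
```

An attacker would have to forge two independently delivered, digest-consistent copies to slip a modified image past this check, which is the intuition behind the "two copies suffice" result.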
[General-purpose microcomputer for medical laboratory instruments].
Vil'ner, G A; Dudareva, I E; Kurochkin, V E; Opalev, A A; Polek, A M
1984-01-01
Presented in the paper is the microcomputer based on the KP580 microprocessor set. Debugging of the hardware and the software by using the unique debugging stand developed on the basis of microcomputer "Electronica-60" is discussed.
An Economic Case for End System Multicast
NASA Astrophysics Data System (ADS)
Analoui, Morteza; Rezvani, Mohammad Hossein
This paper presents a non-strategic model for end-system multicast networks based on the concept of a replica exchange economy. We believe that microeconomics is a good candidate for investigating the problem of selfishness of end-users (peers) in order to maximize aggregate throughput. In this solution concept, the decisions that a peer makes do not affect the actions of the other peers at all. The proposed mechanism tunes the price of the service in such a way that general equilibrium holds.
Simplifying CEA through Excel, VBA, and Subeq
NASA Technical Reports Server (NTRS)
Foster, Ryan
2004-01-01
Many people use compound equilibrium programs for very different reasons, varying from refrigerators to light bulbs to rockets. A commonly used equilibrium program is CEA. CEA can take various inputs such as pressure, temperature, and volume along with numerous reactants and run them through equilibrium equations to obtain valuable output information, including products formed and their relative amounts. A little over a year ago, Bonnie McBride created the program subeq with the goal of simplifying the calling of CEA. Subeq was also designed to be called by other programs, including Excel, through the use of Visual Basic for Applications (VBA). The largest advantage of using Excel is that it allows the user to input the information in a colorful and user-friendly environment while allowing VBA to run subeq, which is in the form of a FORTRAN DLL (Dynamic Link Library). Calling subeq in this form makes it much faster than if it were converted to VBA. Since subeq requires such large lists of reactant and product names, which can't all be passed in as an array, subeq had to be changed to accept very long strings of reactants and products. To pass this string and adjust the transfer of input and output parameters, the subeq DLL had to be changed. One program that does this is Compaq Visual FORTRAN, which allows DLLs to be edited, debugged, and compiled. Compaq Visual FORTRAN uses FORTRAN 90/95, which has additional features beyond those of FORTRAN 77. My goals this summer include finishing the Excel spreadsheet of subeq, which I started last summer, and putting it on the Internet so that others can use it without having to download my spreadsheet. To finish the spreadsheet I will need to work on debugging current options and problems. I will also work on making it as robust as possible, so that all errors that may arise will be clearly communicated to the user. New features will be added and old ones changed as I receive comments from people using the spreadsheet.
To implement this onto the Internet, I will need to develop an XML input/output format and learn how to write HTML.
Multipoint to multipoint routing and wavelength assignment in multi-domain optical networks
NASA Astrophysics Data System (ADS)
Qin, Panke; Wu, Jingru; Li, Xudong; Tang, Yongli
2018-01-01
In multi-point to multi-point (MP2MP) routing and wavelength assignment (RWA) problems, researchers usually assume the optical network to be a single domain. In practice, however, optical networks are developing toward multi-domain architectures and larger scales. In this context, multi-core shared tree (MST)-based MP2MP RWA introduces new problems, including optimal multicast domain sequence selection, determining which domains the core nodes belong to, and so on. In this letter, we focus on MST-based MP2MP RWA problems in multi-domain optical networks; mixed integer linear programming (MILP) formulations to optimally construct MP2MP multicast trees are presented. A heuristic algorithm based on network virtualization and a weighted clustering algorithm (NV-WCA) is proposed. Simulation results show that, under different traffic patterns, the proposed algorithm achieves significant improvement in network resource occupation and multicast tree setup latency compared with conventional algorithms proposed for a single-domain network environment.
Preventing Run-Time Bugs at Compile-Time Using Advanced C++
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neswold, Richard
When writing software, we develop algorithms that tell the computer what to do at run-time. Our solutions are easier to understand and debug when they are properly modeled using class hierarchies, enumerations, and a well-factored API. Unfortunately, even with these design tools, we end up having to debug our programs at run-time. Worse still, debugging an embedded system changes its dynamics, making it tough to find and fix concurrency issues. This paper describes techniques using C++ to detect run-time bugs *at compile time*. A concurrency library, developed at Fermilab, is used for examples in illustrating these techniques.
NASA Astrophysics Data System (ADS)
Tian, Changbin; Chang, Jun; Wang, Qiang; Wei, Wei; Zhu, Cunguang
2015-03-01
An optical fiber gas sensor mainly consists of two parts: an optical part and a detection circuit. When debugging the detection circuit, the optical part usually serves as the signal source. However, under debugging conditions the optical part is easily influenced by many factors: fluctuations of ambient temperature or driving current destabilize the laser's wavelength and intensity; for a dual-beam sensor, different bends and stresses of the optical fiber cause fluctuations of intensity and phase; and intensity noise from the collimator, coupler, and other optical devices in the system further degrades the purity of the optical signal source. To improve the debugging efficiency of the detection circuit and shorten the research and development cycle, this paper describes an analog signal source consisting of a single-chip microcomputer (SCM), an amplifier circuit, and a voltage-to-current conversion circuit. It enables rapid debugging of the detection circuit of the optical fiber gas sensor in place of the optical signal source. The analog signal source also offers simple operation, small size, and light weight.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ennis, G.; Lala, T.K.
This document presents the results of a study undertaken by First Pacific Networks as part of EPRI Project RP-3567-01 regarding the support of broadcast services within the EPRI Utility Communications Architecture (UCA) protocols and the use of such services by UCA applications. This report has focused on the requirements and architectural implications of broadcast within UCA. A subsequent phase of this project is to develop specific recommendations for extending UCA so as to support broadcast. The conclusions of this report are presented in Section 5. The authors summarize the major conclusions as follows: broadcast and multicast support would be very useful within UCA, not only for utility-specific applications but also simply to support the network engineering of a large-scale communications system; in this regard, UCA is no different from other large network systems which have found broadcast and multicast to be of substantial benefit for a variety of system management purposes; the primary architectural impact of broadcast and multicast falls on the UCA network level (which would need to be enhanced) and the UCA application level (which would be the user of broadcast); there is a useful subset of MMS services which could take advantage of broadcast; the UCA network level would need to be enhanced both in the areas of addressing and routing so as to properly support broadcast. A subsequent analysis will be required to define the specific enhancements to UCA required to support broadcast and multicast.
Programming with process groups: Group and multicast semantics
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Cooper, Robert; Gleeson, Barry
1991-01-01
Process groups are a natural tool for distributed programming and are increasingly important in distributed computing environments. Discussed here is a new architecture that arose from an effort to simplify Isis process group semantics. The findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. A system based on this architecture is now being implemented in collaboration with the Chorus and Mach projects.
NASA Technical Reports Server (NTRS)
Kole, R. E.; Helmers, P. H.; Hotz, R. L.
1974-01-01
This is a reference document to be used in the process of getting HAL/S programs compiled and debugged on the IBM 360 computer. Topics ranging from operating system communication to the interpretation of debugging aids are discussed. Features of the HAL programming system that have specific System/360 dependencies are presented.
NASA Technical Reports Server (NTRS)
Hoppa, Mary Ann; Wilson, Larry W.
1994-01-01
There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.
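The debugging graph described above can be made concrete with a small sketch: each node is the set of faults repaired so far, and each fault-recovery order traces one path from the empty set to the full fault set. This is an illustrative Python rendering, not the authors' implementation.

```python
from itertools import permutations

def debugging_paths(faults):
    """Enumerate debugging paths through the debugging graph.

    Each path is a sequence of states, where a state is the (frozen)
    set of faults repaired so far.  Every permutation of the
    fault-recovery order yields one path from the empty set to the
    full fault set.
    """
    paths = []
    for order in permutations(faults):
        state = frozenset()
        path = [state]
        for f in order:
            state = state | {f}
            path.append(state)
        paths.append(path)
    return paths

paths = debugging_paths(["A", "B", "C"])
assert len(paths) == 6              # 3! distinct recovery orders
assert all(p[0] == frozenset() for p in paths)
assert all(p[-1] == frozenset({"A", "B", "C"}) for p in paths)
```

Labeling each edge with the observed interfailure times would give exactly the structure on which per-path predictive accuracy can be compared.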
Debugging a high performance computing program
Gooding, Thomas M.
2014-08-19
Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
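A minimal sketch of the grouping step the patent describes, in Python: threads are bucketed by the address of their current calling instruction, and a small outlier group is a candidate set of defective threads. The sample addresses are invented for illustration.

```python
from collections import defaultdict

def group_threads_by_address(stacks):
    """Group thread ids by the address of their current calling
    instruction; a small outlier group often flags defective threads."""
    groups = defaultdict(list)
    for tid, addr in stacks.items():
        groups[addr].append(tid)
    return dict(groups)

# Hypothetical snapshot: thread id -> calling-instruction address.
stacks = {0: 0x4008F0, 1: 0x4008F0, 2: 0x4008F0, 3: 0x400A10}
groups = group_threads_by_address(stacks)
assert groups[0x4008F0] == [0, 1, 2]
assert groups[0x400A10] == [3]   # lone thread: candidate defect
```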
Debugging a high performance computing program
Gooding, Thomas M.
2013-08-20
Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
Multicasting for all-optical multifiber networks
NASA Astrophysics Data System (ADS)
Köksal, Fatih; Ersoy, Cem
2007-02-01
All-optical wavelength-routed WDM WANs can support the high-bandwidth and long-session-duration requirements of application scenarios such as interactive distance learning or simultaneous on-line diagnosis of patients in different hospitals. However, multifiber switches with limited, sparse light-splitting and wavelength-conversion capabilities result in a difficult optimization problem. We attack this problem using a layered graph model. The problem is defined as a k-edge-disjoint degree-constrained Steiner tree problem for routing and fiber and wavelength assignment of k multicasts. A mixed integer linear programming formulation for the problem is given, and a solution using CPLEX is provided. However, the complexity of the problem grows quickly with respect to the number of edges in the layered graph, which depends on the number of nodes, fibers, wavelengths, and multicast sessions. Hence, we propose two heuristics [layered all-optical multicast algorithm (LAMA) and conservative fiber and wavelength assignment (C-FWA)] to compare with CPLEX, existing work, and unicasting. Extensive computational experiments show that LAMA's performance is very close to CPLEX and significantly better than existing work and C-FWA for nearly all metrics, since LAMA jointly optimizes the routing and fiber-wavelength assignment phases, whereas the other candidates attack the problem by decomposing the two phases. Experiments also show that important metrics (e.g., session and group blocking probability, transmitter wavelength, and fiber conversion resources) are adversely affected by the separation of the two phases. Finally, the fiber-wavelength assignment strategy of C-FWA (Ex-Fit) uses wavelength and fiber conversion resources more effectively than First Fit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Supinski, B R; Miller, B P; Liblit, B
2011-09-13
Petascale platforms with O(10^5) and O(10^6) processing cores are driving advancements in a wide range of scientific disciplines. These large systems create unprecedented application development challenges. Scalable correctness tools are critical to shortening the time-to-solution on these systems. Currently, many DOE application developers use primitive manual debugging based on printf or traditional debuggers such as TotalView or DDT. This paradigm breaks down beyond a few thousand cores, yet bugs often arise above that scale. Programmers must reproduce problems in smaller runs to analyze them with traditional tools, or else perform repeated runs at scale using only primitive techniques. Even when traditional tools run at scale, the approach wastes substantial effort and computation cycles. Continued scientific progress demands new paradigms for debugging large-scale applications. The Correctness on Petascale Systems (CoPS) project is developing a revolutionary debugging scheme that will reduce the debugging problem to a scale that human developers can comprehend. The scheme can provide precise diagnoses of the root causes of failure, including suggestions of the location and type of errors down to the level of code regions or even a single execution point. Our fundamentally new strategy combines and expands three relatively new, complementary debugging approaches. The Stack Trace Analysis Tool (STAT), a 2011 R&D 100 Award winner, identifies behavior equivalence classes in MPI jobs and highlights when elements of a class demonstrate divergent behavior, often the first indicator of an error. The Cooperative Bug Isolation (CBI) project has developed statistical techniques for isolating programming errors in widely deployed code that we will adapt to large-scale parallel applications. Finally, we are developing a new approach to parallelizing expensive correctness analyses, such as analysis of memory usage in the Memgrind tool.
In the first two years of the project, we have successfully extended STAT to determine the relative progress of different MPI processes. We have shown that STAT, which is now included in the debugging tools distributed by Cray with their large-scale systems, substantially reduces the scale at which traditional debugging techniques must be applied. We have extended CBI to large-scale systems and developed new compiler-based analyses that reduce its instrumentation overhead. Our results demonstrate that CBI can identify the source of errors in large-scale applications. Finally, we have developed MPIecho, a new technique that will reduce the time required to perform key correctness analyses, such as the detection of writes to unallocated memory. Overall, our research results are the foundations for new debugging paradigms that will improve application-scientist productivity by reducing the time to determine which package or module contains the root cause of a problem, at all scales of our high-end systems. While we have made substantial progress in the first two years of CoPS research, significant work remains. While STAT provides scalable debugging assistance for incorrect application runs, we could apply its techniques to assertions in order to observe deviations from expected behavior. Further, we must continue to refine STAT's techniques to represent behavioral equivalence classes efficiently, as we expect systems with millions of threads in the next year. We are exploring new CBI techniques that can assess the likelihood that deviations from past behavior are the source of erroneous execution. Finally, we must develop usable correctness analyses that apply the MPIecho parallelization strategy in order to locate coding errors. We expect to make substantial progress in these directions in the next year but anticipate that significant work will remain to provide usable, scalable debugging paradigms.
Multicast Delayed Authentication For Streaming Synchrophasor Data in the Smart Grid
Câmara, Sérgio; Anand, Dhananjay; Pillitteri, Victoria; Carmo, Luiz
2017-01-01
Multicast authentication of synchrophasor data is challenging due to the design requirements of Smart Grid monitoring systems, such as low security overhead, tolerance of lossy networks, time-criticality, and high data rates. In this work, we propose inf-TESLA, Infinite Timed Efficient Stream Loss-tolerant Authentication, a multicast delayed authentication protocol for communication links used to stream synchrophasor data for wide-area control of electric power networks. Our approach is based on the authentication protocol TESLA but is augmented to accommodate high-frequency transmissions of unbounded length. The inf-TESLA protocol utilizes the Dual Offset Key Chains mechanism to reduce authentication delay and the computational cost associated with key chain commitment. We provide a description of the mechanism using two different modes for disclosing keys and demonstrate its security against a man-in-the-middle attack attempt. We compare our approach against the TESLA protocol in a 2-day simulation scenario, showing reductions of 15.82% and 47.29% in computational cost for the sender and receiver, respectively, and a cumulative reduction in the communication overhead. PMID:28736582
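TESLA-style protocols rest on one-way hash key chains: keys are generated by repeated hashing and disclosed in reverse order, so a receiver can verify any disclosed key against an earlier commitment. The Python sketch below shows that basic mechanism only; SHA-256 is an assumed hash choice, and the paper's Dual Offset Key Chains add a second, offset chain not modeled here.

```python
import hashlib

def make_key_chain(seed: bytes, length: int):
    """Build a one-way key chain with K[i] = H(K[i+1]).

    Keys are disclosed in the order chain[1], chain[2], ..., and each
    disclosed key is verified by hashing back to the public
    commitment chain[0]."""
    chain = [seed]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    chain.reverse()          # chain[0] is the public commitment
    return chain

def verify_key(commitment: bytes, key: bytes, index: int) -> bool:
    """Check that hashing `key` `index` times yields the commitment."""
    for _ in range(index):
        key = hashlib.sha256(key).digest()
    return key == commitment

chain = make_key_chain(b"secret-seed", 5)
assert verify_key(chain[0], chain[3], 3)
assert not verify_key(chain[0], b"forged-key" + b"\x00" * 22, 3)
```

The one-way property means an attacker who sees disclosed keys still cannot forge the next, not-yet-disclosed key in the chain.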
Multicast Delayed Authentication For Streaming Synchrophasor Data in the Smart Grid.
Câmara, Sérgio; Anand, Dhananjay; Pillitteri, Victoria; Carmo, Luiz
2016-01-01
Multicast authentication of synchrophasor data is challenging due to the design requirements of Smart Grid monitoring systems, such as low security overhead, tolerance of lossy networks, time-criticality, and high data rates. In this work, we propose inf-TESLA, Infinite Timed Efficient Stream Loss-tolerant Authentication, a multicast delayed authentication protocol for communication links used to stream synchrophasor data for wide-area control of electric power networks. Our approach is based on the authentication protocol TESLA but is augmented to accommodate high-frequency transmissions of unbounded length. The inf-TESLA protocol utilizes the Dual Offset Key Chains mechanism to reduce authentication delay and the computational cost associated with key chain commitment. We provide a description of the mechanism using two different modes for disclosing keys and demonstrate its security against a man-in-the-middle attack attempt. We compare our approach against the TESLA protocol in a 2-day simulation scenario, showing reductions of 15.82% and 47.29% in computational cost for the sender and receiver, respectively, and a cumulative reduction in the communication overhead.
NASA Astrophysics Data System (ADS)
Singh, Sukhbir; Singh, Surinder
2017-11-01
This paper investigates the effect of FWM and its suppression using optical phase conjugation (OPC) modules in a dispersion-managed hybrid WDM-OTDM multicast overlay system. Interaction between propagating wavelength signals at higher power levels generates new FWM components that can significantly limit system performance. The OPC module consists of a pump signal and 0.6 km of HNLF implemented midway along the optical link to generate destructively phased FWM components. The investigation revealed that using the OPC module in the optical link reduces the FWM power and mitigates the interaction between wavelength signals across varying input signal power, dispersion parameter (β2), and transmission distance. System performance is also compared without the DM-OPC module, with DM only, and with the DM-OPC module in terms of FWM tolerance. The BER performance of the hybrid WDM-OTDM multicast system using the OPC module is improved by a factor of 2 compared with dispersion management alone, and the coverage distance is doubled relative to Singh and Singh (2016).
Software reliability perspectives
NASA Technical Reports Server (NTRS)
Wilson, Larry; Shen, Wenhui
1987-01-01
Software which is used in life-critical functions must be known to be highly reliable before installation. This requires a strong testing program to estimate the reliability, since neither formal methods, software engineering, nor fault-tolerant methods can guarantee perfection. Prior to final testing, software goes through a debugging period, and many models have been developed to try to estimate reliability from the debugging data. However, the existing models are poorly validated and often give poor performance. This paper emphasizes the fact that part of their failure can be attributed to the random nature of the debugging data given to these models as input, and it poses the problem of correcting this defect as an area of future research.
Web server for priority ordered multimedia services
NASA Astrophysics Data System (ADS)
Celenk, Mehmet; Godavari, Rakesh K.; Vetnes, Vermund
2001-10-01
In this work, our aim is to provide finer priority levels in the design of a general-purpose Web multimedia server with provisions for CM services. The types of services provided include reading/writing a web page, downloading/uploading an audio/video stream, navigating the Web through browsing, and interactive video teleconferencing. The selected priority encoding levels for such operations follow the order: admin read/write, hot-page CM and Web multicasting, CM read, Web read, CM write, and Web write. Hot pages are the most requested CM streams (e.g., the newest movies, video clips, and HDTV channels) and Web pages (e.g., portal pages of the commercial Internet search engines). Maintaining a list of these hot Web pages and CM streams in a content-addressable buffer enables a server to multicast hot streams with lower latency and higher system throughput. Cold Web pages and CM streams are treated as regular Web and CM requests. Interactive CM operations such as pause (P), resume (R), fast-forward (FF), and rewind (RW) have to be executed without allocation of extra resources. The proposed multimedia server model is part of a distributed network with load-balancing schedulers. The SM is connected to an integrated disk scheduler (IDS), which supervises an allocated disk manager. The IDS follows the same priority handling as the SM and implements a SCAN disk-scheduling method for improved disk access and higher throughput. Different disks are used for the Web and CM services in order to meet the QoS requirements of CM services. The IDS output is forwarded to an Integrated Transmission Scheduler (ITS). The ITS creates a priority-ordered buffering of the retrieved Web pages and CM data streams, which are fed into an autoregressive moving average (ARMA) based traffic-shaping circuit before being transmitted through the network.
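The priority ordering described above can be sketched with a heap-based scheduler. The class and level names below are illustrative, not from the paper; the numeric levels simply mirror the stated order from admin read/write down to Web write.

```python
import heapq
from itertools import count

# Smaller number = higher priority, following the order in the text.
PRIORITY = {
    "admin_rw": 0, "hot_multicast": 1, "cm_read": 2,
    "web_read": 3, "cm_write": 4, "web_write": 5,
}

class PriorityServer:
    """Serve requests strictly in priority order; FIFO within a level."""
    def __init__(self):
        self._heap = []
        self._seq = count()          # tie-breaker preserves FIFO order
    def submit(self, kind, request):
        heapq.heappush(self._heap, (PRIORITY[kind], next(self._seq), request))
    def next_request(self):
        return heapq.heappop(self._heap)[2]

s = PriorityServer()
s.submit("web_write", "upload page")
s.submit("hot_multicast", "stream movie")
s.submit("cm_read", "play clip")
assert s.next_request() == "stream movie"
assert s.next_request() == "play clip"
assert s.next_request() == "upload page"
```

The monotone sequence counter is the standard idiom for making a binary heap stable within one priority level.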
Hybrid ARQ Scheme with Autonomous Retransmission for Multicasting in Wireless Sensor Networks.
Jung, Young-Ho; Choi, Jihoon
2017-02-25
A new hybrid automatic repeat request (HARQ) scheme for multicast services in wireless sensor networks is proposed in this study. In the proposed algorithm, the HARQ operation is combined with an autonomous retransmission method that ensures a data packet is transmitted irrespective of whether or not the packet is successfully decoded at the receivers. The optimal number of autonomous retransmissions is determined to maximize spectral efficiency, and a practical method that adjusts the number of autonomous retransmissions under realistic conditions is developed. Simulation results show that the proposed method achieves higher spectral efficiency than existing HARQ techniques.
A Loss Tolerant Rate Controller for Reliable Multicast
NASA Technical Reports Server (NTRS)
Montgomery, Todd
1997-01-01
This paper describes the design, specification, and performance of a Loss Tolerant Rate Controller (LTRC) for use in controlling reliable multicast senders. The purpose of this rate controller is not to adapt to congestion (or loss) on a per loss report basis (such as per received negative acknowledgment), but instead to use loss report information and perceived state to decide more prudent courses of action for both the short and long term. The goal of this controller is to be responsive to congestion, but not overly reactive to spurious independent loss. Performance of the controller is verified through simulation results.
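As a rough illustration of the stated goal, responding to sustained congestion while not overreacting to spurious independent loss, here is a toy rate controller in Python. The thresholds and adjustment factors are invented; LTRC's actual rules are specified in the paper itself.

```python
class LossTolerantRate:
    """Cut the sending rate only after several consecutive reporting
    intervals show loss; isolated (spurious) loss reports are ignored.
    The patience threshold and rate factors are illustrative only."""
    def __init__(self, rate=100.0, patience=3):
        self.rate = rate
        self.patience = patience       # lossy intervals before backing off
        self.lossy_intervals = 0
    def report(self, loss_fraction):
        if loss_fraction > 0.0:
            self.lossy_intervals += 1
            if self.lossy_intervals >= self.patience:
                self.rate *= 0.5       # multiplicative decrease
                self.lossy_intervals = 0
        else:
            self.lossy_intervals = 0
            self.rate *= 1.05          # gentle probe upward
        return self.rate

rc = LossTolerantRate()
rc.report(0.1); rc.report(0.0)         # isolated loss: no back-off
assert rc.rate > 100.0 * 0.9
rc.report(0.1); rc.report(0.2); rc.report(0.1)   # sustained loss
assert rc.rate < 100.0
```

The key behavior, shared with the controller described above, is that short-term state accumulates across loss reports instead of triggering a reaction per report.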
Aspects of a Theory of Simplification, Debugging, and Coaching.
ERIC Educational Resources Information Center
Fischer, Gerhard; And Others
This paper analyses new methods of teaching skiing in terms of a computational paradigm for learning called increasingly complex microworlds (ICM). Examining the factors that underlie the dramatic enhancement of the learning of skiing led to the focus on the processes of simplification, debugging, and coaching. These three processes are studied in…
Support for Debugging Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Hood, Robert; Jost, Gabriele
2001-01-01
This viewgraph presentation provides information on support tools available for the automatic parallelization of computer programs. CAPTools, a support tool developed at the University of Greenwich, transforms, with user guidance, existing sequential Fortran code into parallel message-passing code. Comparison routines are then run for debugging purposes, in essence ensuring that the code transformation was accurate.
A debugger-interpreter with setup facilities for assembly programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolinskii, I.S.; Zisel'man, I.M.; Belotskii, S.L.
1995-11-01
In this paper, a software program is considered that allows one to introduce and debug descriptions of von Neumann architecture processors and their assemblers, efficiently debug assembly programs, and investigate the instruction sets of the described processors. For describing processor semantics and assembler syntax, a meta-assembly language is suggested.
Automatic Debugging Support for UML Designs
NASA Technical Reports Server (NTRS)
Schumann, Johann; Swanson, Keith (Technical Monitor)
2001-01-01
Design of large software systems requires rigorous application of software engineering methods covering all phases of the software process. Debugging during the early design phases is extremely important, because late bug-fixes are expensive. In this paper, we describe an approach which facilitates debugging of UML requirements and designs. The Unified Modeling Language (UML) is a set of notations for object-oriented design of a software system. We have developed an algorithm which translates requirement specifications in the form of annotated sequence diagrams into structured statecharts. This algorithm detects conflicts between sequence diagrams and inconsistencies in the domain knowledge. After synthesizing statecharts from sequence diagrams, these statecharts usually are subject to manual modification and refinement. By using the "backward" direction of our synthesis algorithm, we are able to map modifications made to the statechart back into the requirements (sequence diagrams) and check for conflicts there. Conflicts detected by our algorithm are fed back to the user and form the basis for deduction-based debugging of requirements and domain theory in very early development stages. Our approach allows us to generate explanations of why there is a conflict and which parts of the specifications are affected.
Surrogate oracles, generalized dependency and simpler models
NASA Technical Reports Server (NTRS)
Wilson, Larry
1990-01-01
Software reliability models require the sequence of interfailure times from the debugging process as input. It was previously illustrated that using data from replicated debugging could greatly improve reliability predictions. However, inexpensive replication of the debugging process requires the existence of a cheap, fast error detector. Laboratory experiments can be designed around a gold version which is used as an oracle or around an n-version error detector. Unfortunately, software developers cannot be expected to have an oracle or to bear the expense of n versions. A generic technique is being investigated for approximating replicated data by using the partially debugged software as a difference detector. It is believed that the failure rate of each fault has significant dependence on the presence or absence of other faults. Thus, in order to discuss a failure rate for a known fault, the presence or absence of each of the other known faults needs to be specified. Also of interest are simpler models which use shorter input sequences without sacrificing accuracy; in fact, a possible gain in performance is conjectured. To investigate these propositions, NASA computers running LIC (RTI) versions are used to generate data. This data will be used to label the debugging graph associated with each version. These labeled graphs will be used to test the utility of a surrogate oracle, to analyze the dependent nature of fault failure rates, and to explore the feasibility of reliability models which use the data of only the most recent failures.
Parsing Protocols Using Problem Solving Grammars. AI Memo 385.
ERIC Educational Resources Information Center
Miller, Mark L.; Goldstein, Ira P.
A theory of the planning and debugging of computer programs is formalized as a context free grammar, which is used to reveal the constituent structure of problem solving episodes by parsing protocols in which programs are written, tested, and debugged. This is illustrated by the detailed analysis of an actual session with a beginning student…
SABRINA: an interactive solid geometry modeling program for Monte Carlo
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T.
SABRINA is a fully interactive three-dimensional geometry modeling program for MCNP. In SABRINA, a user interactively constructs either body geometry, or surface geometry models, and interactively debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces the effort in constructing and debugging complicated three-dimensional geometry models for Monte Carlo Analysis.
Allinea Parallel Profiling and Debugging Tools on the Peregrine System
Remote clients are available for Mac, Windows, and Linux. To profile or debug directly on Peregrine, connect with X11 forwarding enabled and type 'map' to open the Allinea GUI.
An Object-Oriented Network-Centric Software Architecture for Physical Computing
NASA Astrophysics Data System (ADS)
Palmer, Richard
1997-08-01
Recent developments in object-oriented computer languages and infrastructure such as the Internet, Web browsers, and the like provide an opportunity to define a more productive computational environment for scientific programming that is based more closely on the underlying mathematics describing physics than traditional programming languages such as FORTRAN or C++. In this talk I describe an object-oriented software architecture for representing physical problems that includes classes for such common mathematical objects as geometry, boundary conditions, partial differential and integral equations, discretization and numerical solution methods, etc. In practice, a scientific program written using this architecture looks remarkably like the mathematics used to understand the problem, is typically an order of magnitude smaller than traditional FORTRAN or C++ codes, and hence easier to understand, debug, describe, etc. All objects in this architecture are ``network-enabled,'' which means that components of a software solution to a physical problem can be transparently loaded from anywhere on the Internet or other global network. The architecture is expressed as an ``API,'' or application programmers interface specification, with reference embeddings in Java, Python, and C++. A C++ class library for an early version of this API has been implemented for machines ranging from PCs to the IBM SP2, meaning that identical codes run on all architectures.
Hybrid monitoring scheme for end-to-end performance enhancement of multicast-based real-time media
NASA Astrophysics Data System (ADS)
Park, Ju-Won; Kim, JongWon
2004-10-01
As real-time media applications based on IP multicast networks spread widely, end-to-end QoS (quality of service) provisioning for these applications has become very important. To guarantee the end-to-end QoS of multi-party media applications, it is essential to monitor the time-varying status of both network metrics (i.e., delay, jitter, and loss) and system metrics (i.e., CPU and memory utilization). In this paper, targeting the multicast-enabled AG (Access Grid), a next-generation group collaboration tool based on multi-party media services, the applicability of a hybrid monitoring scheme that combines active and passive monitoring is investigated. The active monitoring measures network-layer metrics (i.e., network condition) with probe packets, while the passive monitoring checks both application-layer metrics (i.e., user traffic condition, by analyzing RTCP packets) and system metrics. By comparing these hybrid results, we attempt to pinpoint the causes of performance degradation and explore corresponding reactions to improve the end-to-end performance. The experimental results show that the proposed hybrid monitoring can provide useful information to coordinate the performance improvement of multi-party real-time media applications.
A heuristic for efficient data distribution management in distributed simulation
NASA Astrophysics Data System (ADS)
Gupta, Pankaj; Guha, Ratan K.
2005-05-01
In this paper, we propose an algorithm for reducing the complexity of region matching and for efficient multicasting in the data distribution management component of the High Level Architecture (HLA) Run Time Infrastructure (RTI). Current data distribution management (DDM) techniques rely on computing the intersection between subscription and update regions. When a subscription region and an update region of different federates overlap, the RTI establishes communication between the publisher and the subscriber and subsequently routes updates from the publisher to the subscriber. The proposed algorithm computes the update/subscription region matching for dynamic allocation of multicast groups. It provides new multicast routines that exploit the connectivity of the federation by communicating updates regarding interactions and routing information only to those federates that require them. The region-matching problem in DDM reduces to the clique-covering problem under a connection-graph abstraction in which federates represent the vertices and update/subscribe relations represent the edges. We develop an abstract model based on the connection graph for data distribution management. Using this abstract model, we propose a heuristic for solving the region-matching problem of DDM. We also provide a complexity analysis of the proposed heuristic.
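The core region-matching step can be sketched as a pairwise overlap test between update and subscription regions; each matched pair becomes an edge in the connection graph described above. The 2-D rectangular region encoding and the federate names below are assumptions for illustration.

```python
def overlaps(a, b):
    """Axis-aligned overlap test between two regions, each given as
    (x_min, x_max, y_min, y_max) in routing-space coordinates."""
    ax1, ax2, ay1, ay2 = a
    bx1, bx2, by1, by2 = b
    return ax1 <= bx2 and bx1 <= ax2 and ay1 <= by2 and by1 <= ay2

def match_regions(updates, subscriptions):
    """Return (publisher, subscriber) pairs whose regions intersect;
    each pair is a candidate edge in the DDM connection graph."""
    return [(p, s)
            for p, ur in updates.items()
            for s, sr in subscriptions.items()
            if overlaps(ur, sr)]

updates = {"fed1": (0, 10, 0, 10)}
subscriptions = {"fed2": (5, 15, 5, 15), "fed3": (20, 30, 20, 30)}
assert match_regions(updates, subscriptions) == [("fed1", "fed2")]
```

This brute-force matching is quadratic in the number of regions, which is exactly the cost the paper's heuristic aims to reduce.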
Efficient Network Coding-Based Loss Recovery for Reliable Multicast in Wireless Networks
NASA Astrophysics Data System (ADS)
Chi, Kaikai; Jiang, Xiaohong; Ye, Baoliu; Horiguchi, Susumu
Recently, network coding has been applied to the loss recovery of reliable multicast in wireless networks [19], where multiple lost packets are XOR-ed together into one packet and forwarded via a single retransmission, resulting in a significant reduction of bandwidth consumption. In this paper, we first prove that maximizing the number of lost packets for XOR-ing, which is the key step of the available network coding-based reliable multicast schemes, is NP-complete. To address this complexity, we then propose an efficient heuristic algorithm for finding an approximately optimal solution to this optimization problem. Furthermore, we show that the coding principle of maximizing the number of lost packets for XOR-ing sometimes cannot fully exploit the potential coding opportunities, and we therefore propose new heuristic-based schemes with a new coding principle. Simulation results demonstrate that the heuristic-based schemes have very low computational complexity and achieve almost the same transmission efficiency as the current coding-based high-complexity schemes. Furthermore, the heuristic-based schemes with the new coding principle not only have very low complexity, but also slightly outperform the current high-complexity ones.
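The coding idea can be sketched as a greedy heuristic (an illustrative sketch, not the authors' algorithm): XOR together lost packets whose sets of affected receivers are pairwise disjoint, so every affected receiver is missing exactly one packet in the batch and can recover it by XOR-ing the retransmission with the packets it already holds:

```python
def greedy_xor_batch(loss_map):
    """loss_map: {packet_id: set(receivers that lost it)}.
    Greedily pick a maximal set of lost packets whose loser-sets are pairwise
    disjoint; XOR-ing those packets into one retransmission lets every
    affected receiver recover its single missing packet."""
    batch, covered = [], set()
    # Heuristic order: packets lost by many receivers first.
    for pid, losers in sorted(loss_map.items(), key=lambda kv: -len(kv[1])):
        if losers.isdisjoint(covered):
            batch.append(pid)
            covered |= losers
    return batch

losses = {"p1": {"r1", "r2"}, "p2": {"r2"}, "p3": {"r3"}, "p4": {"r1"}}
print(greedy_xor_batch(losses))  # p2 and p4 conflict with p1 and wait for the next round
```

Packets left out of a batch simply wait for a later retransmission round; the paper's point is that choosing these batches optimally is NP-complete, so a heuristic like the above is the practical choice.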
Fingerprint multicast in secure video streaming.
Zhao, H Vicky; Liu, K J Ray
2006-01-01
Digital fingerprinting is an emerging technology to protect multimedia content from illegal redistribution, where each distributed copy is labeled with unique identification information. In video streaming, huge amounts of data must be transmitted to a large number of users under stringent latency constraints, so bandwidth-efficient distribution of uniquely fingerprinted copies is crucial. This paper investigates the secure multicast of anticollusion fingerprinted video in streaming applications and analyzes the performance of the proposed schemes. We first propose a general fingerprint multicast scheme that can be used with most spread spectrum embedding-based multimedia fingerprinting systems. To further improve the bandwidth efficiency, we explore the special structure of the fingerprint design and propose a joint fingerprint design and distribution scheme. From our simulations, the two proposed schemes can reduce the bandwidth requirement by 48% to 87%, depending on the number of users, the characteristics of the video sequences, and the network and computation constraints. We also show that under the constraint that all colluders have the same probability of detection, the embedded fingerprints in the two schemes have approximately the same collusion resistance. Finally, we propose a fingerprint drift compensation scheme to improve the quality of the reconstructed sequences at the decoder's side without introducing extra communication overhead.
Research on Collaborative Technology in Distributed Virtual Reality System
NASA Astrophysics Data System (ADS)
Lei, ZhenJiang; Huang, JiJie; Li, Zhao; Wang, Lei; Cui, JiSheng; Tang, Zhi
2018-01-01
Distributed virtual reality technology applied to joint training simulation requires CSCW (Computer Supported Cooperative Work) terminal multicast technology for display and HLA (High Level Architecture) technology to ensure the temporal and spatial consistency of the simulation, so as to achieve collaborative display and collaborative computing. In this paper, CSCW terminal multicast technology is used to modify and extend the implementation framework of HLA. During simulation initialization, the HLA declaration and object management service interfaces are used to establish and manage the CSCW network topology, and the HLA data filtering mechanism is used to build a corresponding Mesh tree for each federate. During simulation execution, a new thread incorporating CSCW real-time multicast interaction is added to the RTI, so that the RTI can use the window message mechanism to notify applications to update their displays. Repeated applications to immersive substation simulation training under large power grid operation show that the collaborative technology proposed here for distributed virtual reality simulation achieves a satisfactory training effect.
IPTV multicast with peer-assisted lossy error control
NASA Astrophysics Data System (ADS)
Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd
2010-07-01
Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over the error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate the impulse noises in DSL links. In existing systems, the retransmission function is provided by Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution where the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how packet repairs can be delivered in a timely, reliable and decentralized manner using a combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves resistance to impulse noise.
Verification and validation of a reliable multicast protocol
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1995-01-01
This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.
Algorithm for protecting light-trees in survivable mesh wavelength-division-multiplexing networks
NASA Astrophysics Data System (ADS)
Luo, Hongbin; Li, Lemin; Yu, Hongfang
2006-12-01
Wavelength-division-multiplexing (WDM) technology is expected to facilitate bandwidth-intensive multicast applications such as high-definition television. A single fiber cut in a WDM mesh network, however, can disrupt the dissemination of information to several destinations on a light-tree based multicast session. Thus it is imperative to protect multicast sessions by reserving redundant resources. We propose a novel and efficient algorithm for protecting light-trees in survivable WDM mesh networks. The algorithm is called segment-based protection with sister node first (SSNF), whose basic idea is to protect a light-tree using a set of backup segments with a higher priority to protect the segments from a branch point to its children (sister nodes). The SSNF algorithm differs from the segment protection scheme proposed in the literature in how the segments are identified and protected. Our objective is to minimize the network resources used for protecting each primary light-tree such that the blocking probability can be minimized. To verify the effectiveness of the SSNF algorithm, we conduct extensive simulation experiments. The simulation results demonstrate that the SSNF algorithm outperforms existing algorithms for the same problem.
Making statistical inferences about software reliability
NASA Technical Reports Server (NTRS)
Miller, Douglas R.
1988-01-01
Failure times of software undergoing random debugging can be modelled as order statistics of independent but nonidentically distributed exponential random variables. Using this model inferences can be made about current reliability and, if debugging continues, future reliability. This model also shows the difficulty inherent in statistical verification of very highly reliable software such as that used by digital avionics in commercial aircraft.
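A minimal simulation of this model (a sketch under the simplifying Jelinski-Moranda-style assumption that every fault has the same detection rate phi; all parameter values are invented):

```python
import random

def simulate_debugging(n_faults=50, phi=0.01, seed=1):
    """Failure times as order statistics of independent exponentials:
    with n_faults faults remaining, the program's failure rate is
    (remaining faults) * phi, so inter-failure times are exponential with
    a rate that shrinks as debugging removes faults."""
    rng = random.Random(seed)
    times, t = [], 0.0
    for remaining in range(n_faults, 0, -1):
        t += rng.expovariate(remaining * phi)
        times.append(t)
    return times

ts = simulate_debugging()
gaps = [b - a for a, b in zip([0.0] + ts, ts)]
# Later gaps are longer on average: observed reliability growth.
print(f"first-half time: {sum(gaps[:25]):.0f}, second-half time: {sum(gaps[25:]):.0f}")
```

The abstract's caution about ultra-reliable software shows up here directly: the expected time to expose the final fault grows like 1/phi, so statistically demonstrating very high reliability requires impractically long failure-free observation.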
Server-Side JavaScript Debugging: Viewing the Contents of an Object
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hampton, J.; Simons, R.
1999-04-21
JavaScript allows the definition and use of large, complex objects. Unlike some other object-oriented languages, it also allows run-time modifications not only of the values of object components, but also of the very structure of the object itself. This feature is powerful and sometimes very convenient, but it can be difficult to keep track of the object's structure and values throughout program execution. What's needed is a simple way to view the current state of an object at any point during execution. There is a debug function that is included in the Netscape server-side JavaScript environment. The function outputs the value(s) of the expression given as the argument to the function in the JavaScript Application Manager's debug window [SSJS].
Flexible Unicast-Based Group Communication for CoAP-Enabled Devices †
Ishaq, Isam; Hoebeke, Jeroen; Van den Abeele, Floris; Rossey, Jen; Moerman, Ingrid; Demeester, Piet
2014-01-01
Smart embedded objects will become an important part of what is called the Internet of Things. Applications often require concurrent interactions with several of these objects and their resources. Existing solutions have several limitations in terms of reliability, flexibility and manageability of such groups of objects. To overcome these limitations we propose an intermediate level of intelligence to easily manipulate a group of resources across multiple smart objects, building upon the Constrained Application Protocol (CoAP). We describe the design of our solution to create and manipulate a group of CoAP resources using a single client request. Furthermore, we introduce the concept of profiles for the created groups. The use of profiles allows the client to specify in more detail how the group should behave. We have implemented our solution and demonstrate that it covers the complete group life-cycle, i.e., creation, validation, flexible usage and deletion. Finally, we quantitatively analyze the performance of our solution and compare it against multicast-based CoAP group communication. The results show that our solution improves reliability and flexibility with a trade-off in increased communication overhead. PMID:24901978
Secure distribution for high resolution remote sensing images
NASA Astrophysics Data System (ADS)
Liu, Jin; Sun, Jing; Xu, Zheng Q.
2010-09-01
The use of remote sensing images collected by space platforms is becoming more and more widespread. The increasing value of space data and its use in critical scenarios call for the adoption of proper security measures to protect these data against unauthorized access and fraudulent use. In this paper, based on the characteristics of remote sensing image data and the application requirements for secure distribution, a secure distribution method is proposed, comprising user and region classification, hierarchical control and key generation, and region-based multi-level encryption. The combination of these three parts ensures that the same multi-level-encrypted remote sensing image can be distributed to users with different permissions through a single multicast, while each user recovers only the degree of information permitted by his own decryption keys. This well meets the access control and security needs of high resolution remote sensing image distribution. The experimental results prove the effectiveness of the proposed method, which is suitable for practical use in the secure transmission over the Internet of remote sensing images containing confidential information.
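One common way to realize hierarchical control and key generation of this kind is a one-way hash chain; this is an assumption-laden sketch, not necessarily the paper's construction. Each clearance level's key is derived by hashing the key of the level above, so a high-permission user can locally derive every lower key but never a higher one:

```python
import hashlib

def derive_level_keys(master_key: bytes, levels: int):
    """Hash-chain key hierarchy: keys[0] is the top clearance level; each
    lower level's key is SHA-256 of the key above it. Holding keys[i] lets a
    user derive keys[i+1:], but the one-way hash prevents recovering keys[:i]."""
    keys = [master_key]
    for _ in range(levels - 1):
        keys.append(hashlib.sha256(keys[-1]).digest())
    return keys

keys = derive_level_keys(b"illustrative-master-key", 3)
# A level-1 user re-derives the level-2 key without contacting the server:
assert hashlib.sha256(keys[1]).digest() == keys[2]
```

Under this scheme the same multicast stream can carry all encrypted layers, and each user's single stored key determines how many layers decrypt successfully.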
Parallel program debugging with flowback analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Jongdeok.
1989-01-01
This thesis describes the design and implementation of an integrated debugging system for parallel programs running on shared memory multi-processors. The goal of the debugging system is to present to the programmer a graphical view of the dynamic program dependences while keeping the execution-time overhead low. The author first describes the use of flowback analysis to provide information on causal relationships between events in a program's execution without re-executing the program for debugging. Execution-time overhead is kept low by recording only a small amount of trace during a program's execution. He uses semantic analysis and a technique called incremental tracing to keep the time and space overhead low. As part of the semantic analysis, he uses a static program dependence graph structure that reduces the amount of work done at compile time and takes advantage of the dynamic information produced during execution time. The cornerstone of the incremental tracing concept is to generate a coarse trace during execution and to fill incrementally, during the interactive portion of the debugging session, the gap between the information gathered in the coarse trace and the information needed to do the flowback analysis using the coarse trace. He then describes how to extend the flowback analysis to parallel programs. The flowback analysis can span process boundaries; i.e., the most recent modification to a shared variable might be traced to a different process than the one that contains the current reference. The static and dynamic program dependence graphs of the individual processes are tied together with synchronization and data dependence information to form complete graphs that represent the entire program.
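The backward walk at the heart of flowback analysis can be illustrated with a toy trace. The event format below is invented for illustration; the thesis records far richer dependence information (and fills it in incrementally rather than storing it all up front):

```python
def last_write_before(trace, var, step):
    """trace: list of (step, 'write'/'read', var) events recorded at run time.
    Return the step of the most recent write to `var` before `step`: the
    event a flowback debugger jumps to when the programmer asks
    'where did this value come from?'. Returns None if never written."""
    writes = [s for s, op, v in trace if op == "write" and v == var and s < step]
    return max(writes) if writes else None

trace = [(1, "write", "x"), (2, "write", "y"),
         (3, "write", "x"), (4, "read", "x")]
print(last_write_before(trace, "x", 4))  # 3: the read at step 4 flows from step 3
```

In the parallel case the matching write may live in another process's trace, which is why the per-process dependence graphs must be stitched together with synchronization information.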
A memetic optimization algorithm for multi-constrained multicast routing in ad hoc networks.
Ramadan, Rahab M; Gasser, Safa M; El-Mahallawy, Mohamed S; Hammad, Karim; El Bakly, Ahmed M
2018-01-01
A mobile ad hoc network is a conventional self-configuring network where the routing optimization problem, subject to various Quality-of-Service (QoS) constraints, represents a major challenge. Unlike previously proposed solutions, in this paper, we propose a memetic algorithm (MA) employing an adaptive mutation parameter, to solve the multicast routing problem with higher search ability and computational efficiency. The proposed algorithm utilizes an updated scheme, based on statistical analysis, to estimate the best values for all MA parameters and enhance MA performance. The numerical results show that the proposed MA improved the delay and jitter of the network, while reducing computational complexity as compared to existing algorithms.
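A toy memetic algorithm illustrating the GA-plus-local-search structure and an adaptively updated mutation parameter. Everything here is an invented stand-in: the objective is a separable toy function rather than a QoS-constrained multicast tree, and the adaptation rule merely mimics the idea of tuning parameters from population statistics:

```python
import random

def memetic_minimize(cost, n_genes, pop_size=20, gens=15, seed=7):
    """Toy memetic algorithm: tournament selection + one-point crossover,
    a mutation rate adapted from the population's fitness spread, and a
    greedy single-gene local search on each child (the 'memetic' step)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 9) for _ in range(n_genes)] for _ in range(pop_size)]
    mut = 0.2
    for _ in range(gens):
        costs = [cost(ind) for ind in pop]
        # Adaptive mutation: keep a moderate rate while the population is
        # diverse, raise it once the population has converged, to escape
        # local optima (illustrative rule, not the paper's estimator).
        spread = max(costs) - min(costs)
        mut = max(0.02, min(0.5, 0.2 if spread > 0 else mut * 1.5))

        def pick():
            a, b = rng.sample(range(pop_size), 2)
            return pop[a] if costs[a] <= costs[b] else pop[b]

        nxt = []
        for _ in range(pop_size):
            p, q = pick(), pick()
            cut = rng.randrange(1, n_genes)
            child = p[:cut] + q[cut:]
            if rng.random() < mut:
                child[rng.randrange(n_genes)] = rng.randint(0, 9)
            # Local search: set each gene to its best value given the others.
            for i in range(n_genes):
                child[i] = min(range(10),
                               key=lambda v: cost(child[:i] + [v] + child[i + 1:]))
            nxt.append(child)
        pop = nxt
    return min(pop, key=cost)

# Toy separable objective: hit a target vector (e.g. a per-hop delay budget).
target = [3, 1, 4, 1, 5]
best = memetic_minimize(lambda g: sum(abs(a - b) for a, b in zip(g, target)), 5)
print(best)  # the local-search step drives the population onto the target
```

On this separable toy objective the local search alone solves each gene exactly; on a real multi-constrained routing problem the local search would be a bounded tree-repair move and the GA would supply global exploration.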
NASA Astrophysics Data System (ADS)
Rezvani, Mohammad Hossein; Analoui, Morteza
2010-11-01
We have designed a competitive economic mechanism for application-level multicast in which a number of independent services are provided to end-users by a number of origin servers. Each offered service can be thought of as a commodity, and the origin servers and the users who relay the service to their downstream nodes can thus be thought of as the producers of the economy. Likewise, the end-users can be viewed as its consumers. The proposed mechanism regulates the price of each service in such a way that general equilibrium holds, so all allocations are Pareto optimal in the sense that the social welfare of the users is maximized.
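The price-regulation idea can be illustrated with classic Walrasian tatonnement on a single commodity. This is a hedged sketch of the textbook adjustment rule, not the paper's mechanism, which covers many services and self-interested relaying users; the demand and supply curves below are invented:

```python
def tatonnement(demand, supply, price=1.0, gamma=0.05, tol=1e-6):
    """Walrasian price adjustment for one service (commodity): raise the
    price while demand exceeds supply, lower it otherwise, until excess
    demand vanishes and the market clears (general equilibrium for this
    one-good economy)."""
    for _ in range(10000):
        excess = demand(price) - supply(price)
        if abs(excess) < tol:
            return price
        price += gamma * excess
    return price

# Toy linear market: demand falls with price, supply rises with price.
p = tatonnement(lambda p: 10 - 2 * p, lambda p: 3 * p)
print(round(p, 4))  # clears where 10 - 2p = 3p, i.e. p = 2
```

At the clearing price no service is over- or under-provisioned, which is the single-commodity analogue of the Pareto-optimal allocations claimed in the abstract.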
Wang, Ke; Nirmalathas, Ampalavanapillai; Lim, Christina; Skafidas, Efstratios; Alameh, Kamal
2013-07-01
In this paper, we propose and experimentally demonstrate a free-space based high-speed reconfigurable card-to-card optical interconnect architecture with broadcast capability, which is required for control functionalities and efficient parallel computing applications. Experimental results show that 10 Gb/s data can be broadcast to all receiving channels for up to 30 cm with a worst-case receiver sensitivity better than -12.20 dBm. In addition, arbitrary multicasting with the same architecture is also investigated. 10 Gb/s reconfigurable point-to-point link and multicast channels are simultaneously demonstrated with a measured receiver sensitivity power penalty of ~1.3 dB due to crosstalk.
Titian: Data Provenance Support in Spark
Interlandi, Matteo; Shah, Kshitij; Tetali, Sai Deep; Gulzar, Muhammad Ali; Yoo, Seunghyun; Kim, Miryung; Millstein, Todd; Condie, Tyson
2015-01-01
Debugging data processing logic in Data-Intensive Scalable Computing (DISC) systems is a difficult and time consuming effort. Today’s DISC systems offer very little tooling for debugging programs, and as a result programmers spend countless hours collecting evidence (e.g., from log files) and performing trial and error debugging. To aid this effort, we built Titian, a library that enables data provenance—tracking data through transformations—in Apache Spark. Data scientists using the Titian Spark extension will be able to quickly identify the input data at the root cause of a potential bug or outlier result. Titian is built directly into the Spark platform and offers data provenance support at interactive speeds—orders-of-magnitude faster than alternative solutions—while minimally impacting Spark job performance; observed overheads for capturing data lineage rarely exceed 30% above the baseline job execution time. PMID:26726305
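The core idea of carrying provenance through transformations can be sketched in plain Python (an illustrative model only, not Titian's Spark-based implementation; the record format is invented):

```python
def traced_map(f, records):
    """records: list of (lineage_ids, value). Apply f and carry each input's
    lineage forward: every output inherits the ids of the inputs it came from."""
    return [(ids, f(v)) for ids, v in records]

def traced_filter(pred, records):
    """Keep records whose value passes pred; lineage is unchanged."""
    return [(ids, v) for ids, v in records if pred(v)]

# Tag raw inputs with their own id, run a tiny pipeline, then trace an
# outlier result back to the raw record that produced it.
raw = [({0}, 10), ({1}, -3), ({2}, 7)]
out = traced_filter(lambda v: v < 0, traced_map(lambda v: v * 2, raw))
print(out)  # [({1}, -6)]: the negative outlier traces back to input record 1
```

Titian's contribution is doing this kind of bookkeeping inside Spark's distributed execution at interactive speeds; the sketch only shows the lineage-propagation contract.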
SABRINA: an interactive three-dimensional geometry-modeling program for MCNP
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, J.T. III
SABRINA is a fully interactive three-dimensional geometry-modeling program for MCNP, a Los Alamos Monte Carlo code for neutron and photon transport. In SABRINA, a user constructs either body geometry or surface geometry models and debugs spatial descriptions for the resulting objects. This enhanced capability significantly reduces effort in constructing and debugging complicated three-dimensional geometry models for Monte Carlo analysis. 2 refs., 33 figs.
Interactive debug program for evaluation and modification of assembly-language software
NASA Technical Reports Server (NTRS)
Arpasi, D. J.
1979-01-01
An assembly-language debug program written for the Honeywell HDC-601 and DDP-516/316 computers is described. It uses names and relative addressing to improve operator-machine interaction. Features include versatile display, on-line assembly, and improved program execution and analysis. The program is discussed from both a programmer's and an operator's standpoint. Functional diagrams are included to describe the program, and each command is illustrated.
Simple debugging techniques for embedded subsystems
NASA Astrophysics Data System (ADS)
MacPherson, Matthew S.; Martin, Kevin S.
1990-08-01
This paper describes some of the tools and methods used for developing and debugging embedded subsystems at Fermilab. Specifically, these tools have been used for the Flying Wire project and are currently being employed for the New TECAR upgrade. The Flying Wire is a subsystem that swings a wire through the beam in order to measure luminosity and beam density distribution, and TECAR (Tevatron excitation controller and regulator) controls the power-supply ramp generation for the superconducting Tevatron accelerator at Fermilab. In both instances the subsystem hardware consists of a VME crate with one or more processors, shared memory and a network connection to the accelerator control system. Two real-time operating systems are currently in use: VRTX for the Flying Wire system, and MTOS for New TECAR. The code which runs in these subsystems is a combination of C and assembler and is developed using the Microtec cross-development tools on a VAX 8650 running VMS. This paper explains how multiple debuggers are used to give the greatest possible flexibility from assembly-level to high-level debugging. Also discussed is how network debugging and network downloading can provide a very effective and efficient means of finding bugs in the subsystem environment. The debuggers used are PROBE1, TRACER and the MTOS debugger.
Bahşi, Hayretdin; Levi, Albert
2010-01-01
Wireless sensor networks (WSNs) generally have a many-to-one structure, with event information flowing from sensors to a unique sink. In recent WSN applications, many-to-many structures have evolved due to the need to convey collected event information to multiple sinks. Privacy-preserving data collection models in the literature do not solve the problems of WSN applications in which the network has multiple untrusted sinks with different levels of privacy requirements. This study proposes a data collection framework based on k-anonymity for preventing record disclosure of collected event information in WSNs. The proposed method takes the anonymity requirements of multiple sinks into consideration by providing a different level of privacy for each destination sink. Attributes that may identify an event owner are generalized or encrypted in order to meet the different anonymity requirements of the sinks in the same anonymized output. If the same output can be formed for all sinks, it can be multicast to them; the trivial alternative is to produce a separate anonymized output for each sink and unicast it. Multicasting is an energy-efficient alternative for some sensor nodes, and since minimizing energy consumption is an important design criterion for WSNs, multicasting the same event information to multiple sinks reduces the energy consumption of the overall network.
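The generalization step can be illustrated with a toy k-anonymizer over a single quasi-identifier (invented data; real WSN event records carry several attributes, and the framework additionally encrypts attributes for sinks with stricter requirements):

```python
from collections import Counter

def generalize_ages(records, k):
    """Coarsen the quasi-identifier (here, an age) into ever-wider bins
    until every bin holds at least k records, so no event owner can be
    singled out among fewer than k candidates."""
    width = 1
    while True:
        bins = [a - a % width for a in records]
        if all(c >= k for c in Counter(bins).values()):
            return [f"{b}-{b + width - 1}" for b in bins]
        width *= 2

ages = [21, 22, 23, 35, 36, 37]
# A sink demanding k=3 receives coarse bins; a sink content with k=1
# would receive the single-year bins unchanged.
print(generalize_ages(ages, 3))  # ['16-23', '16-23', '16-23', '32-39', ...]
```

When several sinks' requirements can be met by the same (most restrictive) generalization, that single output is multicast; otherwise per-sink outputs must be produced, trading anonymity overlap against transmission energy.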
Debugging and Logging Services for Defence Service Oriented Architectures
2012-02-01
Service: a software component and callable end point that provides a logically related set of operations, each of which performs a logical step in a ... It is important to note that in some cases, when the fault is identified to lie in uneditable code such as program libraries or outsourced software services, debugging is limited to characterisation of the fault, reporting it to the software or service provider, and development of work-arounds and management
Admission and Preventive Load Control for Delivery of Multicast and Broadcast Services via S-UMTS
NASA Astrophysics Data System (ADS)
Angelou, E.; Koutsokeras, N.; Andrikopoulos, I.; Mertzanis, I.; Karaliopoulos, M.; Henrio, P.
2003-07-01
An Admission Control strategy is proposed for unidirectional satellite systems delivering multicast and broadcast services to mobile users. In such systems, both the radio interface and the targeted services impose particular requirements on the RRM task. We briefly discuss the RRM requirements that stem from the services point of view and from the features of the SATIN access scheme that differentiate it from the conventional T-UMTS radio interface. The main functional entities of RRM and the alternative modes of operation are outlined and the proposed Admission Control algorithm is described in detail. The results from the simulation study that demonstrate its performance for a number of different scenarios are finally presented and conclusions derived.
Flow-Centric, Back-in-Time Debugging
NASA Astrophysics Data System (ADS)
Lienhard, Adrian; Fierz, Julien; Nierstrasz, Oscar
Conventional debugging tools present developers with means to explore the run-time context in which an error has occurred. In many cases this is enough to help the developer discover the faulty source code and correct it. However, rather often errors occur due to code that has executed in the past, leaving certain objects in an inconsistent state. The actual run-time error only occurs when these inconsistent objects are used later in the program. So-called back-in-time debuggers help developers step back through earlier states of the program and explore execution contexts not available to conventional debuggers. Nevertheless, even back-in-time debuggers do not help answer the question, “Where did this object come from?” The Object-Flow Virtual Machine, which we have proposed in previous work, tracks the flow of objects to answer precisely such questions, but this VM does not provide dedicated debugging support to explore faulty programs. In this paper we present a novel debugger, called Compass, to navigate between conventional run-time stack-oriented control flow views and object flows. Compass enables a developer to effectively navigate from an object contributing to an error back-in-time through all the code that has touched the object. We present the design and implementation of Compass, and we demonstrate how flow-centric, back-in-time debugging can be used to effectively locate the source of hard-to-find bugs.
Advances Made in the Next Generation of Satellite Networks
NASA Technical Reports Server (NTRS)
Bhasin, Kul B.
1999-01-01
Because of the unique networking characteristics of communications satellites, global satellite networks are moving to the forefront in enhancing national and global information infrastructures. Simultaneously, broadband data services, which are emerging as the major market driver for future satellite and terrestrial networks, are being widely acknowledged as the foundation for an efficient global information infrastructure. In the past 2 years, various task forces and working groups around the globe have identified pivotal topics and key issues to address if we are to realize such networks in a timely fashion. In response, industry, government, and academia undertook efforts to address these topics and issues. A workshop was organized to provide a forum to assess the current state-of-the-art, identify key issues, and highlight the emerging trends in the next-generation architectures, data protocol development, communication interoperability, and applications. The Satellite Networks: Architectures, Applications, and Technologies Workshop was hosted by the Space Communication Program at the NASA Lewis Research Center in Cleveland, Ohio. Nearly 300 executives and technical experts from academia, industry, and government, representing the United States and eight other countries, attended the event (June 2 to 4, 1998). The program included seven panels and invited sessions and nine breakout sessions in which 42 speakers presented on technical topics. The proceedings covers a wide range of topics: access technology and protocols, architectures and network simulations, asynchronous transfer mode (ATM) over satellite networks, Internet over satellite networks, interoperability experiments and applications, multicasting, NASA interoperability experiment programs, NASA mission applications, and Transmission Control Protocol/Internet Protocol (TCP/IP) over satellite: issues, relevance, and experience.
NASA Astrophysics Data System (ADS)
Cheng, Yuh-Jiuh; Yeh, Tzuoh-Chyau; Cheng, Shyr-Yuan
2011-09-01
In this paper, a non-blocking multicast optical packet switch with optical output buffers, based on fiber Bragg grating technology, is proposed. Only the headers of optical packets are converted to electronic signals, to control the fiber Bragg grating array at the input ports; the packet payloads are transparently routed to their output ports, so the proposed switch reduces both the number of electronic interfaces and the electronic processing bit rate. The modulation and format of packet payloads may be non-standard, and payloads may even span multiple wavelengths to increase traffic volume. The advantage is obvious: the proposed switch can transport various types of traffic. An easily implemented architecture that provides multicast services is also presented. An optical output buffer is designed to queue packets whenever more than one packet, whether newly arriving or already waiting in the buffer, is destined for the same output port in a given time slot. To preserve service-packet sequencing and fairness of the routing sequence, a priority scheme and a round-robin algorithm are adopted at the optical output buffer. The fiber Bragg grating arrays at both the input and output ports are designed to route incoming packets using optical code division multiple access technology.
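The buffer's round-robin fairness discipline can be sketched as follows. This is an illustrative software model of the scheduling decision only (one departure per output port per time slot), not of the optical hardware or the priority scheme:

```python
from collections import deque

def drain_round_robin(queues):
    """Output-buffer sketch: one packet leaves the output port per time
    slot; a round-robin pointer over the per-input queues gives each input
    a fair turn while FIFO queues preserve per-flow packet order."""
    order, slot_log = deque(queues.keys()), []
    while any(queues.values()):
        for _ in range(len(order)):
            q = order[0]
            order.rotate(-1)          # advance the round-robin pointer
            if queues[q]:
                slot_log.append(queues[q].popleft())
                break                 # one departure per time slot
    return slot_log

qs = {"in0": deque(["a1", "a2"]), "in1": deque(["b1"]), "in2": deque(["c1"])}
print(drain_round_robin(qs))  # ['a1', 'b1', 'c1', 'a2']
```

Note that a1 and a2 leave in order (sequencing) while in1 and in2 each get a slot before in0's second packet (fairness), which is exactly the property the abstract claims for the optical buffer.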
Space Flight Middleware: Remote AMS over DTN for Delay-Tolerant Messaging
NASA Technical Reports Server (NTRS)
Burleigh, Scott
2011-01-01
This paper describes a technique for implementing scalable, reliable, multi-source multipoint data distribution in space flight communications -- Delay-Tolerant Reliable Multicast (DTRM) -- that is fully supported by the "Remote AMS" (RAMS) protocol of the Asynchronous Message Service (AMS) proposed for standardization within the Consultative Committee for Space Data Systems (CCSDS). The DTRM architecture enables applications to easily "publish" messages that will be reliably and efficiently delivered to an arbitrary number of "subscribing" applications residing anywhere in the space network, whether in the same subnet or in a subnet on a remote planet or vehicle separated by many light minutes of interplanetary space. The architecture comprises multiple levels of protocol, each included for a specific purpose and allocated specific responsibilities: "application AMS" traffic performs end-system data introduction and delivery subject to access control; underlying "remote AMS" directs this application traffic to populations of recipients at remote locations in a multicast distribution tree, enabling the architecture to scale up to large networks; further underlying Delay-Tolerant Networking (DTN) Bundle Protocol (BP) advances RAMS protocol data units through the distribution tree using delay-tolerant store-and-forward methods; and further underlying reliable "convergence-layer" protocols ensure successful data transfer over each segment of the end-to-end route. The result is scalable, reliable, delay-tolerant multi-source multicast that is largely self-configuring.
Flexible Decision Support in Device-Saturated Environments
2003-10-01
also output tuples to a remote MySQL or Postgres database. 3.3 GUI The GUI allows the user to pose queries using SQL and to display query...DatabaseConnection.java – handles connections to an external database (such as MySQL or Postgres ). • Debug.java – contains the code for printing out Debug messages...also provided. It is possible to output the results of queries to a MySQL or Postgres database for archival and the GUI can query those results
Dynamic Network Selection for Multicast Services in Wireless Cooperative Networks
NASA Astrophysics Data System (ADS)
Chen, Liang; Jin, Le; He, Feng; Cheng, Hanwen; Wu, Lenan
In next-generation mobile multimedia communications, different wireless access networks are expected to cooperate. However, choosing an optimal transmission path in this scenario is a challenging task. This paper focuses on the problem of selecting the optimal access network for multicast services in cooperative mobile and broadcasting networks. An algorithm is proposed that considers multiple decision factors and multiple optimization objectives. An analytic hierarchy process (AHP) method is applied to schedule the service queue, and an artificial neural network (ANN) is used to improve the flexibility of the algorithm. Simulation results show that by applying the AHP method, a group of weight ratios can be obtained that improves the performance on multiple objectives, and that the ANN method is effective in adaptively adjusting the weight ratios when users' new waiting thresholds are generated.
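The AHP step the abstract mentions amounts to deriving priority weights for the decision factors from a pairwise-comparison matrix. A minimal sketch, with illustrative factor names and judgment values (the paper's actual factors and matrices are not given here):

```python
# AHP priority weights as the normalized principal eigenvector of a
# pairwise-comparison matrix, computed by plain power iteration.

def ahp_weights(matrix, iters=100):
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w_new = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w_new)
        w = [x / s for x in w_new]  # renormalize so weights sum to 1
    return w

# Hypothetical judgments: bandwidth 3x as important as delay, 5x as cost.
pairwise = [[1,   3,   5],
            [1/3, 1,   2],
            [1/5, 1/2, 1]]
weights = ahp_weights(pairwise)
print([round(w, 3) for w in weights])  # largest weight goes to bandwidth
```

The resulting weight vector can then score each candidate access network, and an ANN would adjust these weights online as user context (e.g. waiting thresholds) changes.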
Polarization-insensitive PAM-4-carrying free-space orbital angular momentum (OAM) communications.
Liu, Jun; Wang, Jian
2016-02-22
We present a simple configuration incorporating a single polarization-sensitive phase-only liquid crystal spatial light modulator (SLM) to facilitate polarization-insensitive free-space optical communications employing orbital angular momentum (OAM) modes. We experimentally demonstrate several polarization-insensitive optical communication subsystems by propagating a single OAM mode, multicasting 4 and 10 OAM modes, and multiplexing 8 OAM modes, respectively. Free-space polarization-insensitive optical communication links using OAM modes that carry a four-level pulse-amplitude modulation (PAM-4) signal are demonstrated in the experiment. The observed optical signal-to-noise ratio (OSNR) penalties are less than 1 dB for both polarization-insensitive N-fold OAM-mode multicasting and multiple-OAM-mode multiplexing at a bit-error rate (BER) of 2e-3 (the enhanced forward-error correction (EFEC) threshold).
A Lightweight Protocol for Secure Video Streaming
Morkevicius, Nerijus; Bagdonas, Kazimieras
2018-01-01
The Internet of Things (IoT) introduces many new challenges which cannot be solved using traditional cloud and host computing models. A new architecture known as fog computing is emerging to address these technological and security gaps. Traditional security paradigms focused on providing perimeter-based protections and client/server point to point protocols (e.g., Transport Layer Security (TLS)) are no longer the best choices for addressing new security challenges in fog computing end devices, where energy and computational resources are limited. In this paper, we present a lightweight secure streaming protocol for the fog computing “Fog Node-End Device” layer. This protocol is lightweight, connectionless, supports broadcast and multicast operations, and is able to provide data source authentication, data integrity, and confidentiality. The protocol is based on simple and energy efficient cryptographic methods, such as Hash Message Authentication Codes (HMAC) and symmetrical ciphers, and uses modified User Datagram Protocol (UDP) packets to embed authentication data into streaming data. Data redundancy could be added to improve reliability in lossy networks. The experimental results summarized in this paper confirm that the proposed method efficiently uses energy and computational resources and at the same time provides security properties on par with the Datagram TLS (DTLS) standard. PMID:29757988
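The core mechanism the abstract describes, embedding an HMAC over each datagram so receivers can verify source authenticity and integrity without a TLS handshake, can be sketched as follows. This is a simplified illustration, not the paper's exact wire format: the header layout, tag truncation length, and key are assumptions.

```python
# Illustrative HMAC-authenticated datagram: a sequence-number header and
# a truncated HMAC-SHA256 tag are prepended to each UDP payload.

import hmac, hashlib, struct

KEY = b"pre-shared-demo-key"  # would be provisioned out of band
TAG_LEN = 8                   # truncated tag keeps per-packet overhead low

def seal(seq: int, payload: bytes) -> bytes:
    header = struct.pack("!I", seq)
    tag = hmac.new(KEY, header + payload, hashlib.sha256).digest()[:TAG_LEN]
    return header + tag + payload

def open_packet(packet: bytes):
    header, tag, payload = packet[:4], packet[4:4 + TAG_LEN], packet[4 + TAG_LEN:]
    expected = hmac.new(KEY, header + payload, hashlib.sha256).digest()[:TAG_LEN]
    if not hmac.compare_digest(tag, expected):
        return None  # drop forged or corrupted datagram
    return struct.unpack("!I", header)[0], payload

pkt = seal(7, b"frame-data")
print(open_packet(pkt))  # (7, b'frame-data')
```

Because verification is per-datagram and connectionless, the scheme works unchanged for broadcast and multicast, which point-to-point DTLS sessions cannot do directly; confidentiality would be added by encrypting the payload with a symmetric cipher under a derived key.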
A Lightweight Protocol for Secure Video Streaming.
Venčkauskas, Algimantas; Morkevicius, Nerijus; Bagdonas, Kazimieras; Damaševičius, Robertas; Maskeliūnas, Rytis
2018-05-14
Iterative Authoring Using Story Generation Feedback: Debugging or Co-creation?
NASA Astrophysics Data System (ADS)
Swartjes, Ivo; Theune, Mariët
We explore the role that story generation feedback may play within the creative process of interactive story authoring. While such feedback is often used as 'debugging' information, we explore here a 'co-creation' view, in which the outcome of the story generator influences authorial intent. We illustrate an iterative authoring approach in which each iteration consists of idea generation, implementation and simulation. We find that the tension between authorial intent and the partially uncontrollable story generation outcome may be relieved by taking such a co-creation approach.
Insertion of coherence requests for debugging a multiprocessor
Blumrich, Matthias A.; Salapura, Valentina
2010-02-23
A method and system are disclosed to insert coherence events in a multiprocessor computer system, and to present those coherence events to the processors of the multiprocessor computer system for analysis and debugging purposes. The coherence events are inserted in the computer system by adding one or more special insert registers. By writing into the insert registers, coherence events are inserted in the multiprocessor system as if they were generated by the normal coherence protocol. Once these coherence events are processed, the processing of coherence events can continue in the normal operation mode.
Backtracking and Re-execution in the Automatic Debugging of Parallelized Programs
NASA Technical Reports Server (NTRS)
Matthews, Gregory; Hood, Robert; Johnson, Stephen; Leggett, Peter; Biegel, Bryan (Technical Monitor)
2002-01-01
In this work we describe a new approach using relative debugging to find differences in computation between a serial program and a parallel version of that program. We use a combination of re-execution and backtracking in order to find the first difference in computation that may ultimately lead to an incorrect value that the user has indicated. In our prototype implementation we use static analysis information from a parallelization tool in order to perform the backtracking, as well as the mapping required between serial and parallel computations.
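The heart of relative debugging is locating the first point where the two runs disagree. A minimal sketch of that comparison, with invented traces standing in for the instrumented variable values (the real tool maps serial to parallel program points via static analysis):

```python
# Find the earliest index at which a serial and a parallel trace of the
# same variable diverge beyond a floating-point tolerance.

def first_divergence(serial, parallel, tol=1e-9):
    for i, (s, p) in enumerate(zip(serial, parallel)):
        if abs(s - p) > tol:
            return i
    return None  # traces agree everywhere they overlap

serial_trace   = [0.0, 1.5, 3.0, 4.5, 6.0]
parallel_trace = [0.0, 1.5, 3.0, 4.7, 6.2]  # e.g. reduction order changed
print(first_divergence(serial_trace, parallel_trace))  # 3
```

From that first divergence, the tool's backtracking walks to the assignments that fed the divergent value and re-executes them, narrowing the search to the originating statement.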
Exact and heuristic algorithms for Space Information Flow.
Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing; Li, Zongpeng
2018-01-01
Space Information Flow (SIF) is a promising new research area that studies network coding in geometric space, such as Euclidean space. The design of algorithms that compute optimal SIF solutions remains one of the key open problems in SIF. This work proposes the first exact SIF algorithm and a heuristic SIF algorithm that compute min-cost multicast network coding for N (N ≥ 3) given terminal nodes in 2-D Euclidean space. Furthermore, we find that the Butterfly network in Euclidean space is the second example, after the Pentagram network, where SIF is strictly better than the Euclidean Steiner minimal tree. The exact algorithm design is based on two key techniques: Delaunay triangulation and linear programming. The Delaunay triangulation technique helps to find practically good candidate relay nodes, after which a min-cost multicast linear programming model is solved over the terminal nodes and the candidate relay nodes to compute the optimal multicast network topology, including the optimal relay nodes selected by the linear program from among all candidates and the flow rates on the connection links. The heuristic algorithm design is also based on the Delaunay triangulation and linear programming techniques. The exact algorithm achieves the optimal SIF solution with exponential computational complexity, while the heuristic algorithm achieves a sub-optimal SIF solution with polynomial computational complexity. We prove the correctness of the exact SIF algorithm. Simulation results show the effectiveness of the heuristic SIF algorithm.
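The min-cost multicast linear program solved over the terminals and candidate relays is, in a standard network-coding formulation (notation assumed here, not taken from the paper; s denotes a designated source among the terminals, T the remaining terminals, and c_e the Euclidean length of edge e):

```latex
\begin{align}
\min \quad & \sum_{e \in E} c_e \, z_e \\
\text{s.t.} \quad
& \sum_{e \in \delta^{+}(v)} f_e^{t} - \sum_{e \in \delta^{-}(v)} f_e^{t}
  = \begin{cases} 1 & v = s \\ -1 & v = t \\ 0 & \text{otherwise} \end{cases}
  && \forall t \in T,\; \forall v \in V \\
& 0 \le f_e^{t} \le z_e && \forall t \in T,\; \forall e \in E
\end{align}
```

Here f^t is a unit flow from s to terminal t and z_e is the coded rate purchased on edge e; because coded flows to different terminals may share capacity, z_e bounds each flow individually rather than their sum, which is what makes network coding cheaper than a Steiner tree in cases like the Pentagram and Butterfly networks.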
Tethered Forth system for FPGA applications
NASA Astrophysics Data System (ADS)
Goździkowski, Paweł; Zabołotny, Wojciech M.
2013-10-01
This paper presents a tethered Forth system dedicated to testing and debugging FPGA-based electronic systems. Use of the Forth language makes it possible to interactively develop and run complex testing or debugging routines. The solution is based on a small, 16-bit soft-core CPU used to implement the Forth Virtual Machine. Thanks to the tethered Forth model, usage of the internal RAM memory in the FPGA is minimized. The function of the intelligent terminal, an essential part of a tethered Forth system, may be fulfilled by a standard PC or a smartphone. The system is implemented in Python (the software for the intelligent terminal) and in VHDL (the IP core for the FPGA), so it can be easily ported to different hardware platforms. The connection between the terminal and the FPGA may be established and disconnected many times without disturbing the state of the FPGA-based system. The presented system has been verified in hardware, and may be used as a tool for debugging, testing, and even implementing control algorithms for FPGA-based systems.
Monitoring Data-Structure Evolution in Distributed Message-Passing Programs
NASA Technical Reports Server (NTRS)
Sarukkai, Sekhar R.; Beers, Andrew; Woodrow, Thomas S. (Technical Monitor)
1996-01-01
Monitoring the evolution of data structures in parallel and distributed programs is critical for debugging their semantics and performance. However, the current state of the art in tracking and presenting data-structure information in parallel and distributed environments is cumbersome and does not scale. In this paper we present a methodology that automatically tracks memory bindings (not the actual contents) of static and dynamic data structures of message-passing C programs using PVM. With the help of a number of examples we show that, in addition to determining the impact of memory allocation overheads on program performance, graphical views can help in debugging the semantics of program execution. Scalable animations of virtual-address bindings of source-level data structures are used for debugging the semantics of parallel programs across all processors. In conjunction with light-weight core files, this technique can complement traditional debuggers on single processors. Detailed information (such as data-structure contents) on specific nodes can be determined using traditional debuggers after the data-structure evolution leading to the semantic error is observed graphically.
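Tracking bindings rather than contents means recording only (name, address, size) events at allocation and deallocation. A toy sketch of such a tracker, with invented structure names (the real methodology instruments PVM C programs automatically):

```python
# Record memory-binding events of source-level data structures for later
# timeline animation; contents are never captured, only address ranges.

class BindingTracker:
    def __init__(self):
        self.live = {}      # address -> (name, size) for live bindings
        self.events = []    # full event timeline for animation

    def on_alloc(self, name, addr, size):
        self.live[addr] = (name, size)
        self.events.append(("alloc", name, addr, size))

    def on_free(self, addr):
        name, size = self.live.pop(addr)
        self.events.append(("free", name, addr, size))

    def footprint(self):
        return sum(size for _, size in self.live.values())

t = BindingTracker()
t.on_alloc("grid", 0x1000, 4096)
t.on_alloc("halo", 0x2000, 512)
t.on_free(0x2000)
print(t.footprint())  # 4096
```

Because each event is a few machine words regardless of structure size, this kind of trace stays small enough to collect from every processor, which is what makes the graphical views scale.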
A study of fault prediction and reliability assessment in the SEL environment
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Patnaik, Debabrata
1986-01-01
An empirical study on the estimation and prediction of faults, the prediction of fault detection and correction effort, and reliability assessment in the Software Engineering Laboratory (SEL) environment is presented. Fault estimation using empirical relationships and fault prediction using a curve-fitting method are investigated. Relationships between debugging efforts (fault detection and correction effort) in different test phases are provided in order to make an early estimate of future debugging effort. The study concludes with a fault analysis, the application of a reliability model, and the analysis of a normalized metric for reliability assessment and reliability monitoring during software development.
An Approach to Verification and Validation of a Reliable Multicasting Protocol
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1994-01-01
This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test was different between the model and implementation, then the differences helped identify inconsistencies between the model and implementation. The dialogue between both teams drove the co-evolution of the model and implementation. Testing served as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP.
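The model-based testing loop described above can be reduced to a simple pattern: drive the formal state model and the implementation with the same input sequence and flag the first transition where their observable states differ. The sketch below uses invented token-ring-style states purely for illustration; RMP's actual state machine is far larger.

```python
# Run an input sequence through two transition tables and locate the
# first state where the model and the implementation disagree.

def run(machine, state, inputs):
    states = [state]
    for sym in inputs:
        state = machine[(state, sym)]
        states.append(state)
    return states

model = {("idle", "join"): "member", ("member", "token"): "leader"}
impl  = {("idle", "join"): "member", ("member", "token"): "member"}  # seeded bug

test_case = ["join", "token"]
m_trace = run(model, "idle", test_case)
i_trace = run(impl, "idle", test_case)
diff = next((i for i, (a, b) in enumerate(zip(m_trace, i_trace)) if a != b), None)
print(diff, m_trace[diff], i_trace[diff])  # 2 leader member
```

In the paper's process, a divergence like this could indict either side: sometimes the implementation was wrong, sometimes the model was out of fidelity, and resolving which drove the co-evolution of both.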
An approach to verification and validation of a reliable multicasting protocol
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1995-01-01
INFORM: An interactive data collection and display program with debugging capability
NASA Technical Reports Server (NTRS)
Cwynar, D. S.
1980-01-01
A computer program was developed to aid assembly-language programmers of mini- and microcomputers in solving the man-machine communication problems that arise when scaled integers are involved. In addition to producing displays of quasi-steady-state values, INFORM provides an interactive mode for debugging programs, making program patches, and modifying the displays. Auxiliary routines SAMPLE and DATAO add dynamic data acquisition and high-speed dynamic display capability to the program. Programming information and flow charts to aid in implementing INFORM on various machines, together with descriptions of all supportive software, are provided. Program modifications to satisfy individual users' needs are considered.
A practice course to cultivate students' comprehensive ability of photoelectricity
NASA Astrophysics Data System (ADS)
Lv, Yong; Liu, Yang; Niu, Chunhui; Liu, Lishuang
2017-08-01
After studying many theoretical courses, it is important and urgent for students of optoelectronic information science and engineering to cultivate their comprehensive ability in photoelectricity. We set up a comprehensive practice course named "Integrated Design of Optoelectronic Information System" (IDOIS) so that students can integrate their knowledge of optics, electronics and computer programming to design, install and debug an optoelectronic system with independent functions. Eight years of practice show that this course can train students' abilities in the analysis, design/development and debugging of photoelectric systems, and improve their abilities in document retrieval, design-proposal and summary-report writing, teamwork, innovation consciousness and skill.
Purple L1 Milestone Review Panel TotalView Debugger Functionality and Performance for ASC Purple
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, M
2006-12-12
ASC code teams require a robust software debugging tool to help developers quickly find bugs in their codes and get their codes running. Development debugging commonly runs up to 512 processes. Production jobs run up to full ASC Purple scale, and at times require introspection while running. Developers want a debugger that runs on all their development and production platforms and that works with all compilers and runtimes used with ASC codes. The TotalView Multiprocess Debugger made by Etnus was specified for ASC Purple to address this needed capability. The ASC Purple environment builds on the environment seen by TotalView on ASCI White. The debugger must now operate with the Power5 CPU, Federation switch, AIX 5.3 operating system including large pages, IBM compilers 7 and 9, POE 4.2 parallel environment, and rs6000 SLURM resource manager. Users require robust, basic debugger functionality with acceptable performance at development debugging scale. A TotalView installation must be provided at the beginning of the early user access period that meets these requirements. A functional enhancement, fast conditional data watchpoints, and a scalability enhancement, capability up to 8192 processes, are to be demonstrated.
A unified approach for debugging is-a structure and mappings in networked taxonomies
2013-01-01
Background With the increased use of ontologies and ontology mappings in semantically-enabled applications such as ontology-based search and data integration, the issue of detecting and repairing defects in ontologies and ontology mappings has become increasingly important. These defects can lead to wrong or incomplete results for the applications. Results We propose a unified framework for debugging the is-a structure of and mappings between taxonomies, the most used kind of ontologies. We present theory and algorithms as well as an implemented system RepOSE, that supports a domain expert in detecting and repairing missing and wrong is-a relations and mappings. We also discuss two experiments performed by domain experts: an experiment on the Anatomy ontologies from the Ontology Alignment Evaluation Initiative, and a debugging session for the Swedish National Food Agency. Conclusions Semantically-enabled applications need high quality ontologies and ontology mappings. One key aspect is the detection and removal of defects in the ontologies and ontology mappings. Our system RepOSE provides an environment that supports domain experts to deal with this issue. We have shown the usefulness of the approach in two experiments by detecting and repairing circa 200 and 30 defects, respectively. PMID:23548155
NASA Technical Reports Server (NTRS)
Wilson, Larry
1991-01-01
There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Unfortunately, the models appear to be unable to account for the random nature of the data. If the same code is debugged multiple times and one of the models is used to make predictions, intolerable variance is observed in the resulting reliability predictions. It is believed that data replication can remove this variance in lab-type situations and that it is less than scientific to talk about validating a software reliability model without considering replication. It is also believed that data replication may prove to be cost-effective in the real world; thus the research centered on verifying the need for replication and on methodologies for generating replicated data in a cost-effective manner. The concept of the debugging graph was pursued through simulation and experimentation. Simulation was done for the Basic model and the Log-Poisson model. Reasonable values of the parameters were assigned and used to generate simulated data, which was then processed by the models in order to determine limitations on their accuracy. These experiments exploit the existing software and program specimens in AIR-LAB to measure the performance of reliability models.
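Replicated debugging data can be generated cheaply by simulation. The sketch below assumes a Jelinski-Moranda-style basic model (one common form of "Basic" execution-time model; whether it matches the exact model simulated in the work above is an assumption): with N initial faults and per-fault hazard phi, the i-th inter-failure time is exponential with rate phi * (N - i).

```python
# Simulate replicated failure-time data under an assumed basic model,
# so a reliability model's prediction variance can be measured across
# replicates of the "same" debugging process.

import random

def simulate_failures(n_faults=20, phi=0.05, seed=1):
    rng = random.Random(seed)
    times, t = [], 0.0
    for i in range(n_faults):
        rate = phi * (n_faults - i)   # fewer remaining faults -> lower rate
        t += rng.expovariate(rate)
        times.append(t)
    return times

replicates = [simulate_failures(seed=s) for s in range(5)]
gaps = [b - a for a, b in zip(replicates[0], replicates[0][1:])]
print(len(replicates), len(gaps))  # 5 19
```

Fitting a reliability model to each replicate separately and comparing the resulting predictions exposes exactly the variance the passage above describes: identical generating parameters, widely differing fitted forecasts.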
vMon-mobile provides wireless connection to the electronic patient record
NASA Astrophysics Data System (ADS)
Oliveira, Pedro P., Jr.; Rebelo, Marina; Pilon, Paulo E.; Gutierrez, Marco A.; Tachinardi, Umberto
2002-05-01
This work presents the development of a set of tools to help doctors to continuously monitor critical patients. Real-time monitoring signals are displayed via a Web Based Electronic Patient Record (Web-EPR) developed at the Heart Institute. Any computer on the Hospital's Intranet can access the Web-EPR that will open a browser plug-in called vMon. Recently vMon was adapted to wireless mobile devices providing the same real-time visualization of vital signals of its desktop counterpart. The monitoring network communicates with the hospital network through a gateway using HL7 messages and has the ability to export waveforms in real time using the multicast protocol through an API library. A dedicated ActiveX component was built that establishes the streaming of the biomedical signals under monitoring and displays them on an Internet Explorer 5.x browser. The mobile version - called vMon-mobile - will parse the browser window and deliver it to a PDA device connected to a local area network. The result is a virtual monitor presenting real-time data on a mobile device. All parameters and signals acquired from the moment the patient is connected to the monitors are stored for a few days. The most clinically relevant information is added to patient's EPR.
NASA Astrophysics Data System (ADS)
Huang, Feng; Sun, Lifeng; Zhong, Yuzhuo
2006-01-01
Robust transmission of live video over ad hoc wireless networks presents new challenges: high bandwidth requirements are coupled with delay constraints; even a single packet loss causes error propagation until a complete video frame is coded in the intra-mode; and ad hoc wireless networks suffer from bursty packet losses that drastically degrade the viewing experience. Accordingly, we propose a novel UMD coder capable of quickly recovering from losses and ensuring continuous playout. It uses 'peg' frames to prevent error propagation in the High-Resolution (HR) description and improve the robustness of key frames. The Low-Resolution (LR) coder works independently of the HR one, but each can also help the other recover from losses. Like many UMD coders, our UMD coder is drift-free, disruption-tolerant and able to make good use of the asymmetric available bandwidths of multiple paths. Simulation results under different conditions show that the proposed UMD coder has the highest decoded quality and lowest probability of pause when compared with concurrent UMDC techniques. The coder also has comparable decoded quality, lower startup delay and lower probability of pause than a state-of-the-art FEC-based scheme. To provide robustness for video multicast applications, we propose non-end-to-end UMDC-based video distribution over a multi-tree multicast network. The multiplicity of parents decorrelates losses, and the non-end-to-end feature increases the throughput of UMDC video data. We deploy an application-level service of LR-description reconstruction in some intermediate nodes of the LR multicast tree. The principle behind this is to reconstruct the disrupted LR frames from the correctly received HR frames. As a result, the viewing experience at the downstream nodes benefits from the protection reconstruction at the upstream nodes.
Scalable Active Optical Access Network Using Variable High-Speed PLZT Optical Switch/Splitter
NASA Astrophysics Data System (ADS)
Ashizawa, Kunitaka; Sato, Takehiro; Tokuhashi, Kazumasa; Ishii, Daisuke; Okamoto, Satoru; Yamanaka, Naoaki; Oki, Eiji
This paper proposes a scalable active optical access network using a high-speed Plumbum Lanthanum Zirconate Titanate (PLZT) optical switch/splitter. The Active Optical Network, called ActiON, using PLZT switching technology has been presented to increase the number of subscribers and the maximum transmission distance, compared to the Passive Optical Network (PON). ActiON supports multicast slot allocation by running the PLZT switch elements in splitter mode, which forces a switch element to behave as an optical splitter. However, the previous ActiON creates a tradeoff between network scalability and the power loss experienced by the optical signal to each user. It does not use optical power efficiently, because the optical power is simply split 0.5/0.5 without considering the transmission distance from the OLT to each ONU. The proposed network adopts PLZT switch elements in a variable splitter mode, which controls the split ratio of the optical power according to the transmission distance from the OLT to each ONU, in addition to the two existing modes, switching mode and splitter mode. The proposed network introduces flexible multicast slot allocation according to the transmission distance from the OLT to each user and the number of required users using the three modes, while keeping the advantages of ActiON, namely support for scalable and secure access services. Numerical results show that the proposed network dramatically reduces the required number of slots, supports high-bandwidth-efficiency services, and extends the coverage of the access network compared to the previous ActiON, and that the required computation time for selecting multicast users is less than 30 msec, which is acceptable for on-demand broadcast services.
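The benefit of a variable split ratio can be illustrated with a back-of-envelope power-budget calculation. The fiber-loss model, the attenuation coefficient, and the equal-received-power criterion below are assumptions for illustration, not the paper's actual design rule:

```python
# Choose a two-way split ratio r : (1 - r) so that two ONUs at different
# distances receive equal power after attenuation of alpha dB/km.

def equalizing_ratio(d1_km, d2_km, alpha_db_per_km=0.3):
    # Power reaching ONU_i is proportional to r_i * 10**(-alpha * d_i / 10).
    l1 = 10 ** (-alpha_db_per_km * d1_km / 10)
    l2 = 10 ** (-alpha_db_per_km * d2_km / 10)
    r1 = l2 / (l1 + l2)   # the farther ONU gets the larger share
    return r1, 1 - r1

r_near, r_far = equalizing_ratio(d1_km=5, d2_km=20)
print(round(r_near, 3), round(r_far, 3))
```

A fixed 0.5/0.5 split instead over-provisions the near ONU and starves the far one, which is precisely the inefficiency the variable splitter mode removes.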
47 CFR 73.1201 - Station identification.
Code of Federal Regulations, 2010 CFR
2010-10-01
... offerings. Television and Class A television broadcast stations may make these announcements visually or... multicast audio programming streams, in a manner that appropriately alerts its audience to the fact that it is listening to a digital audio broadcast. No other insertion between the station's call letters and...
Back pressure based multicast scheduling for fair bandwidth allocation.
Sarkar, Saswati; Tassiulas, Leandros
2005-09-01
We study the fair allocation of bandwidth in multicast networks with multirate capabilities. In multirate transmission, each source encodes its signal in layers. The lowest layer contains the most important information and all receivers of a session should receive it. If a receiver's data path has additional bandwidth, it receives higher layers which leads to a better quality of reception. The bandwidth allocation objective is to distribute the layers fairly. We present a computationally simple, decentralized scheduling policy that attains the maxmin fair rates without using any knowledge of traffic statistics and layer bandwidths. This policy learns the congestion level from the queue lengths at the nodes, and adapts the packet transmissions accordingly. When the network is congested, packets are dropped from the higher layers; therefore, the more important lower layers suffer negligible packet loss. We present analytical and simulation results that guarantee the maxmin fairness of the resulting rate allocation, and upper bound the packet loss rates for different layers.
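The queue-length-driven behavior described above, where congestion sheds higher layers first so the base layer sees negligible loss, can be sketched with a small buffer policy. Capacity, layer counts, and the eviction rule below are invented for illustration; the paper's actual policy also uses back-pressure between nodes:

```python
# A bounded queue of (layer, packet) items; layer 0 is the base layer.
# Under overflow, the least important (highest-layer) packet is shed.

from collections import deque

class LayeredQueue:
    def __init__(self, capacity=6):
        self.capacity = capacity
        self.q = deque()

    def enqueue(self, layer, packet):
        if len(self.q) >= self.capacity:
            # congestion: locate the packet of the highest enqueued layer
            victim = max(range(len(self.q)), key=lambda i: self.q[i][0])
            if self.q[victim][0] >= layer:
                del self.q[victim]   # evict it in favor of the new packet
            else:
                return               # arriving packet is least important: drop
        self.q.append((layer, packet))

q = LayeredQueue(capacity=3)
for layer in [0, 2, 1, 0, 2]:   # overload: two packets must be shed
    q.enqueue(layer, "pkt")
print(sorted(l for l, _ in q.q))  # [0, 0, 1] -- the base layer survives
```

Run at every node, a rule of this shape lets congestion information propagate implicitly through queue lengths, which is how the policy attains max-min fair layer allocation without knowing traffic statistics.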
Secure Multicast Tree Structure Generation Method for Directed Diffusion Using A* Algorithms
NASA Astrophysics Data System (ADS)
Kim, Jin Myoung; Lee, Hae Young; Cho, Tae Ho
The application of wireless sensor networks to areas such as combat field surveillance, terrorist tracking, and highway traffic monitoring requires secure communication among the sensor nodes within the networks. Logical key hierarchy (LKH) is a tree-based key management model which provides secure group communication. When a sensor node is added to or evicted from the communication group, LKH updates the group key in order to ensure the security of the communications. In order to efficiently update the group key in directed diffusion, we propose a method for secure multicast tree structure generation, an extension to LKH that reduces the number of re-keying messages by considering the addition and eviction ratios in the history data. To generate the proposed key-tree structure, the A* algorithm is applied, in which the branching factor at each level can take on different values. The experimental results demonstrate the efficiency of the proposed key-tree structure against existing key-tree structures with fixed branching factors.
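The reason a key tree matters can be seen from a quick count. In a balanced binary LKH tree over n members, evicting one member requires replacing only the keys on its root path, O(log2 n) re-key messages, versus re-sending a new group key to every remaining member in a flat scheme. A minimal sketch of that comparison (the paper's contribution, varying the branching factor per level via A*, is not modeled here):

```python
# Compare re-keying message counts: balanced binary LKH tree vs. a flat
# (no-hierarchy) group key scheme.

import math

def rekey_messages_lkh(n_members):
    # one new key per level on the evicted member's root path
    return math.ceil(math.log2(n_members))

def rekey_messages_flat(n_members):
    # a fresh group key unicast to every remaining member
    return n_members - 1

for n in [8, 1024]:
    print(n, rekey_messages_lkh(n), rekey_messages_flat(n))
# 8 3 7
# 1024 10 1023
```

Allowing different branching factors per level, chosen by A* from historical addition/eviction ratios, tunes exactly this path-length cost to the group's actual churn pattern.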
Context-based user grouping for multi-casting in heterogeneous radio networks
NASA Astrophysics Data System (ADS)
Mannweiler, C.; Klein, A.; Schneider, J.; Schotten, H. D.
2011-08-01
Along with the rise of sophisticated smartphones and smart spaces, the availability of both static and dynamic context information has steadily been increasing in recent years. Due to the popularity of social networks, these data are complemented by profile information about individual users. Making use of this information by classifying users in wireless networks enables targeted content and advertisement delivery as well as optimizing network resources, in particular bandwidth utilization, by facilitating group-based multi-casting. In this paper, we present the design and implementation of a web service for advanced user classification based on user, network, and environmental context information. The service employs simple and advanced clustering algorithms for forming classes of users. Available service functionalities include group formation, context-aware adaptation, and deletion as well as the exposure of group characteristics. Moreover, the results of a performance evaluation, where the service has been integrated in a simulator modeling user behavior in heterogeneous wireless systems, are presented.
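The "simple clustering algorithms for forming classes of users" can be illustrated with a plain k-means over normalized context vectors. The feature choices below (2-D vectors standing in for attributes such as position and requested bitrate) are illustrative assumptions, not the service's actual feature set:

```python
# Group users by context similarity with a minimal k-means; each user is
# a normalized context vector and each resulting group becomes one
# multicast class.

import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[j].append(p)
        centers = [
            [sum(col) / len(g) for col in zip(*g)] if g else centers[j]
            for j, g in enumerate(groups)
        ]
    return groups

users = [[0.1, 0.2], [0.15, 0.25], [0.9, 0.8], [0.85, 0.9]]
groups = kmeans(users, k=2)
print(sorted(len(g) for g in groups))  # [2, 2]
```

Each resulting cluster can then be served by one multicast stream, which is where the bandwidth saving over per-user unicast comes from; the "advanced" algorithms the service offers would refine this with dynamic context updates.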
A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process
NASA Technical Reports Server (NTRS)
Wang, Yi; Tamai, Tetsuo
2009-01-01
As the complexity of software systems continues to grow, engineers face two serious problems: state space explosion and the difficulty of debugging systems. In this paper, we propose a game-theoretic approach to full branching-time model checking on three-valued semantics. Three-valued models and logics provide an abstraction that overcomes the state space explosion problem. Game-style model checking generates counter-examples that can guide refinement or identify validated formulas, which addresses the debugging problem. Furthermore, the output of our game-style method gives engineers significant information for detecting where errors have occurred and what their causes are.
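The three-valued semantics referred to here is typically Kleene's strong three-valued logic, where an "unknown" value absorbs behaviour lost by abstraction and a verdict is definite only when it is true or false. A minimal sketch of the connectives (the standard logic, not the paper's full game construction):

```python
# Kleene strong three-valued logic: U ("unknown") models behaviour
# lost by abstraction; a verdict is definite only when it is T or F.
T, F, U = "T", "F", "U"

def t_and(a, b):
    if F in (a, b):
        return F          # a definite falsifier wins
    if U in (a, b):
        return U          # otherwise any unknown taints the result
    return T

def t_or(a, b):
    if T in (a, b):
        return T          # a definite witness wins
    if U in (a, b):
        return U
    return F

def t_not(a):
    return {T: F, F: T, U: U}[a]
```

When a formula evaluates to U on the abstract model, the abstraction is too coarse and must be refined, which is exactly the abstract-check-refine loop the title names.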
The Development of Interactive Distance Learning in Taiwan: Challenges and Prospects.
ERIC Educational Resources Information Center
Chu, Clarence T.
1999-01-01
Describes three types of interactive distance-education systems under development in Taiwan: real-time multicast systems; virtual-classroom systems; and curriculum-on-demand systems. Discusses the use of telecommunications and computer technology in higher education, problems and challenges, and future prospects. (Author/LRW)
Australian DefenceScience. Volume 16, Number 1, Autumn
2008-01-01
are carried via VoIP technology, and multicast IP traffic for audio-visual communications is also supported. The SSATIN system overall is seen to…
The Little Thompson Observatory
NASA Astrophysics Data System (ADS)
Schweitzer, A.; VanLew, K.; Melsheimer, T.; Sackett, C.
2000-12-01
The Little Thompson Observatory is the second member of the Telescopes in Education (TIE) project. The observatory is located on the grounds of Berthoud High School in northern Colorado. The telescope is operational over the Internet, and we are now debugging the software to enable schools to control the telescope from classroom computers and take images. Local schools and youth organizations have prioritized access to the telescope, and there are monthly opportunities for public viewing. In the future, the telescope will be open after midnight to world-wide use by schools following the model of the first TIE observatory, the 24" telescope on Mt. Wilson. The observatory grew out of grassroots support from the local community surrounding Berthoud, Colorado, a town of 3,500 residents. TIE has provided the observatory with a Tinsley 18" Cassegrain telescope on a 10-year loan. The facility has been built with tremendous support from volunteers and the local school district. With funding from an IDEAS grant, we have completed the first teacher training workshops to allow K-12 schools in northern Colorado to make use of the Little Thompson Observatory, including remote observing from classrooms. The workshops were accredited by the school district, and received very favorable reviews.
Intelligent control system based on ARM for lithography tool
NASA Astrophysics Data System (ADS)
Chen, Changlong; Tang, Xiaoping; Hu, Song; Wang, Nan
2014-08-01
The control system of a traditional lithography tool is based on a PC and an MCU. The PC handles complex algorithms and human-computer interaction and communicates with the MCU via a serial port; the MCU controls motors, electromagnetic valves, etc. This mode has shortcomings such as large volume, high power consumption, and wasted PC resources. In this paper, an embedded intelligent control system for a lithography tool, based on ARM, is presented. The control system uses the S5PV210 as its processor, taking over the functions of the PC in a traditional lithography tool, and provides good human-computer interaction through an LCD and a capacitive touch screen. Using Android 4.0.3 as the operating system, the equipment provides a friendly and easy UI which makes control more user-friendly, and implements remote control and debugging, pushing video information about the product via network programming. As a result, it is convenient for the equipment vendor to provide technical support to users. Finally, compared with a traditional lithography tool, this design eliminates the PC, making hardware resources efficiently used and reducing cost and volume. Introducing an embedded OS and concepts from the Internet of Things into the design of lithography tools may become a development trend.
ScaMo: Realisation of an OO-functional DSL for cross platform mobile applications development
NASA Astrophysics Data System (ADS)
Macos, Dragan; Solymosi, Andreas
2013-10-01
The software market is changing dynamically: the Internet is going mobile, and software applications are shifting from desktop hardware onto mobile devices. The largest markets are mobile applications for iOS, Android, and Windows Phone, whose typical programming languages are Objective-C, Java, and C#. Realizing native applications implies integrating the developed software into the environments of the mobile operating systems mentioned, to enable access to different hardware components of the devices: GPS module, display, GSM module, etc. This paper deals with the definition and possible implementation of an environment for automatically generating applications for multiple mobile platforms. It is based on a DSL for mobile application development comprising the programming language Scala and a DSL defined in Scala. As part of a multi-stage cross-compiling algorithm, this language is translated into the language of the target mobile platform. The advantage of our method lies in the expressiveness of the defined language and the transparent source code translation between different languages, which brings, for example, advantages in debugging and developing the generated code.
47 CFR 76.66 - Satellite broadcast signal carriage.
Code of Federal Regulations, 2010 CFR
2010-10-01
... free over-the-air signal, including multicast and high definition digital signals. (c) Election cycle... first retransmission consent-mandatory carriage election cycle shall be for a four-year period... carriage election cycle, and all cycles thereafter, shall be for a period of three years (e.g. the second...
47 CFR 73.1201 - Station identification.
Code of Federal Regulations, 2014 CFR
2014-10-01
...; Provided, That the name of the licensee, the station's frequency, the station's channel number, as stated... number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station with major channel number 26 may use 26.1 to identify...
47 CFR 73.1201 - Station identification.
Code of Federal Regulations, 2013 CFR
2013-10-01
...; Provided, That the name of the licensee, the station's frequency, the station's channel number, as stated... number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station with major channel number 26 may use 26.1 to identify...
47 CFR 73.1201 - Station identification.
Code of Federal Regulations, 2011 CFR
2011-10-01
...; Provided, That the name of the licensee, the station's frequency, the station's channel number, as stated... number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station with major channel number 26 may use 26.1 to identify...
47 CFR 73.1201 - Station identification.
Code of Federal Regulations, 2012 CFR
2012-10-01
...; Provided, That the name of the licensee, the station's frequency, the station's channel number, as stated... number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station with major channel number 26 may use 26.1 to identify...
A design of camera simulator for photoelectric image acquisition system
NASA Astrophysics Data System (ADS)
Cai, Guanghui; Liu, Wen; Zhang, Xin
2015-02-01
In the process of developing photoelectric image acquisition equipment, its function and performance need to be verified. In order to let the photoelectric device replay previously acquired image data during debugging and testing, a design scheme for a camera simulator is presented. In this system, with an FPGA as the control core, image data is saved to NAND flash through the USB 2.0 bus. Because the access rate of the NAND flash is too slow to meet the requirements of the system, pipelining and high-bandwidth-bus techniques are applied in the design to improve the storage rate. The FPGA control logic reads image data out of flash and outputs it separately over three different interfaces, Camera Link, LVDS, and PAL, which can provide image data for debugging photoelectric image acquisition equipment and validating algorithms. However, because standard PAL resolution is 720x576, which differs from the input image resolution, the image is output after resolution conversion. The experimental results demonstrate that the camera simulator outputs all three image-sequence formats correctly, and they can be captured and displayed by a frame grabber. The three image formats can meet the test requirements of most equipment, shortening debugging time and improving test efficiency.
The Development of CyberLearning in Dual-Mode: Higher Education Institutions in Taiwan.
ERIC Educational Resources Information Center
Chen, Yau Jane
2002-01-01
Open and distance education in Taiwan has evolved into cyberlearning. Over half (56 percent) of the conventional universities and colleges have been upgraded to dual-mode institutions offering real-time multicast instructional systems using videoconferencing, cable television, virtual classrooms, and curriculum-on-demand systems. The Ministry of…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-30
...'s channel number, as stated on the station's license, and/or the station's network affiliation may... Stations, choosing to include the station's channel number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station...
Multimedia C for Remote Language Teaching over SuperJANET.
ERIC Educational Resources Information Center
Matthews, E.; And Others
1996-01-01
Describes work carried out as part of a remote language teaching research investigation, which is looking into the use of multicast, multimedia conferencing over SuperJANET. The fundamental idea is to investigate the feasibility of sharing language teaching resources among universities within the United Kingdom by using the broadband SuperJANET…
37 CFR 386.2 - Royalty fee for secondary transmission by satellite carriers.
Code of Federal Regulations, 2011 CFR
2011-07-01
... BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES ADJUSTMENT OF ROYALTY FEES FOR... a given month. (2) In the case of a station engaged in digital multicasting, the rates set forth in paragraph (b) of this section shall apply to each digital stream that a satellite carrier or distributor...
37 CFR 386.2 - Royalty fee for secondary transmission by satellite carriers.
Code of Federal Regulations, 2013 CFR
2013-07-01
... BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES ADJUSTMENT OF ROYALTY FEES FOR... a given month. (2) In the case of a station engaged in digital multicasting, the rates set forth in paragraph (b) of this section shall apply to each digital stream that a satellite carrier or distributor...
75 FR 53198 - Rate Adjustment for the Satellite Carrier Compulsory License
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-31
... LIBRARY OF CONGRESS Copyright Royalty Board 37 CFR Part 386 [Docket No. 2010-4 CRB Satellite Rate] Rate Adjustment for the Satellite Carrier Compulsory License AGENCY: Copyright Royalty Board, Library... last day of a given month. (2) In the case of a station engaged in digital multicasting, the rates set...
37 CFR 386.2 - Royalty fee for secondary transmission by satellite carriers.
Code of Federal Regulations, 2014 CFR
2014-07-01
... BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES ADJUSTMENT OF ROYALTY FEES FOR... a given month. (2) In the case of a station engaged in digital multicasting, the rates set forth in paragraph (b) of this section shall apply to each digital stream that a satellite carrier or distributor...
37 CFR 386.2 - Royalty fee for secondary transmission by satellite carriers.
Code of Federal Regulations, 2012 CFR
2012-07-01
... BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES ADJUSTMENT OF ROYALTY FEES FOR... a given month. (2) In the case of a station engaged in digital multicasting, the rates set forth in paragraph (b) of this section shall apply to each digital stream that a satellite carrier or distributor...
Multipoint Multimedia Conferencing System with Group Awareness Support and Remote Management
ERIC Educational Resources Information Center
Osawa, Noritaka; Asai, Kikuo
2008-01-01
A multipoint, multimedia conferencing system called FocusShare is described that uses IPv6/IPv4 multicasting for real-time collaboration, enabling video, audio, and group awareness information to be shared. Multiple telepointers provide group awareness information and make it easy to share attention and intention. In addition to pointing with the…
Using Interactive Broadband Multicasting in a Museum Lifelong Learning Program.
ERIC Educational Resources Information Center
Steinbach, Leonard
The Cleveland Museum of Art has embarked on an innovative approach for delivering high quality video-on-demand and live interactive cultural programming, along with Web-based complementary material, to seniors in assisted living residence facilities, community-based centers, and disabled persons in their homes. The project is made possible in part…
Cooperation and information replication in wireless networks.
Poularakis, Konstantinos; Tassiulas, Leandros
2016-03-06
A significant portion of today's network traffic is due to recurring downloads of a few popular contents. It has been observed that replicating the latter in caches installed at network edges-close to users-can drastically reduce network bandwidth usage and improve content access delay. Such caching architectures are gaining increasing interest in recent years as a way of dealing with the explosive traffic growth, fuelled further by the downward slope in storage space price. In this work, we provide an overview of caching with a particular emphasis on emerging network architectures that enable caching at the radio access network. In this context, novel challenges arise due to the broadcast nature of the wireless medium, which allows simultaneously serving multiple users tuned into a multicast stream, and the mobility of the users who may be frequently handed off from one cell tower to another. Existing results indicate that caching at the wireless edge has a great potential in removing bottlenecks on the wired backbone networks. Taking into consideration the schedule of multicast service and mobility profiles is crucial to extract maximum benefit in network performance. © 2016 The Author(s).
Design alternatives for process group membership and multicast
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Cooper, Robert; Gleeson, Barry
1991-01-01
Process groups are a natural tool for distributed programming, and are increasingly important in distributed computing environments. However, there is little agreement on the most appropriate semantics for process group membership and group communication. These issues are of special importance in the Isis system, a toolkit for distributed programming. Isis supports several styles of process group, and a collection of group communication protocols spanning a range of atomicity and ordering properties. This flexibility makes Isis adaptable to a variety of applications, but is also a source of complexity that limits performance. This paper reports on a new architecture that arose from an effort to simplify Isis process group semantics. Our findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. As an illustration, we apply the architecture to the problem of converting processes into fault-tolerant process groups in a manner that is 'transparent' to other processes in the system.
Lee, Chaewoo
2014-01-01
The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances overall system throughput compared to an existing algorithm. PMID:25276862
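The allocation problem can be made concrete with a toy instance: choose an MCS per SVC layer so that the transmissions fit in a frame and the number of decoded layers across users is maximised, remembering that an enhancement layer is useless to a user who cannot decode the layers below it. The rates, user channel qualities, and frame budget below are invented for illustration, and brute force stands in for the paper's ILP solver.

```python
import itertools

# Invented example data: MCS index -> bits per time unit, the highest
# MCS each user's channel can decode, bits per SVC layer, frame budget.
RATES = {1: 1.0, 2: 2.0, 3: 4.0}
USERS_BEST = [3, 2, 1, 2]
LAYER_BITS = [4.0, 4.0]      # base layer, one enhancement layer
FRAME_TIME = 6.0

def utility(assignment):
    """Total decoded layers over all users, or None if the airtime
    needed exceeds the frame (toy stand-in for the ILP constraints)."""
    airtime = sum(bits / RATES[m] for bits, m in zip(LAYER_BITS, assignment))
    if airtime > FRAME_TIME:
        return None
    total = 0
    for best in USERS_BEST:
        for m in assignment:          # SVC layers decode bottom-up:
            if m <= best:             # stop at the first undecodable one
                total += 1
            else:
                break
    return total

def solve():
    # Brute-force the per-layer MCS choice (the paper uses ILP instead).
    return max(
        (u, a)
        for a in itertools.product(RATES, repeat=len(LAYER_BITS))
        if (u := utility(a)) is not None
    )
```

In this instance the base layer goes out at the most robust MCS so all four users get basic quality, while the enhancement layer uses a faster MCS that still reaches the three better-placed users within the frame budget.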
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Edwin S.
Under the CRADA, NREL will provide assistance to NRGsim to debug and convert the EnergyPlus Hysteresis Phase Change Material ('PCM') model to C++ for adoption into the main code package of the EnergyPlus simulation engine.
2008-03-01
in all parts of the program except the predicates. B. PRELIMINARY EXPERIMENTATION: working with the hand-written program initially to get a feel… Contents: I. Problem Statement and Motivation; II. Related Work; III. Preliminary Work.
CLON: Overlay Networks and Gossip Protocols for Cloud Environments
NASA Astrophysics Data System (ADS)
Matos, Miguel; Sousa, António; Pereira, José; Oliveira, Rui; Deliot, Eric; Murray, Paul
Although epidemic or gossip-based multicast is a robust and scalable approach to reliable data dissemination, its inherent redundancy results in high resource consumption on both links and nodes. This problem is aggravated in settings with costlier or resource-constrained links, as happens in Cloud Computing infrastructures composed of several interconnected data centers across the globe.
A Security Architecture for Fault-Tolerant Systems
1993-06-03
aspect of our effort to achieve better performance is integrating the system into microkernel-based operating systems. 4. Summary and discussion. In…
Fast Whole-Engine Stirling Analysis
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Wilson, Scott D.; Tew, Roy C.; Demko, Rikako
2006-01-01
This presentation discusses the whole-engine simulation approach for physical consistency; REV regenerator modeling; grid layering for smoothness and quality; conjugate heat transfer method adjustment; a high-speed, low-cost parallel cluster; and debugging.
A Reference Model for Monitoring IoT WSN-Based Applications.
Capella, Juan Vicente; Campelo, José Carlos; Bonastre, Alberto; Ors, Rafael
2016-10-30
The Internet of Things (IoT) is, at this moment, one of the most promising technologies to have arisen in decades. Wireless Sensor Networks (WSNs) are one of the main pillars of many IoT applications, insofar as these require context-awareness information. The literature reports many difficulties in their real implementation that have prevented their massive deployment. Additionally, in IoT environments where data producers and data consumers are not directly related, compatibility and certification issues become fundamental. Both problems would profit from accurate knowledge of the internal behavior of WSNs, which must be obtained with appropriate tools. There are many ad hoc proposals, each with no common structure or methodology and intended to monitor a particular WSN. To overcome this problem, this paper proposes a structured three-layer reference model for WSN Monitoring Platforms (WSN-MP), which offers a standard environment for the design of new monitoring platforms to debug, verify, and certify a WSN's behavior and performance, applicable to every WSN. This model also allows comparative analysis of current proposals for monitoring the operation of WSNs. Following this methodology, it is possible to achieve a standardization of WSN-MP, promoting new research areas in order to solve the problems of each layer.
Wei, Jyh-Da; Tsai, Ming-Hung; Lee, Gen-Cher; Huang, Jeng-Hung; Lee, Der-Tsai
2009-01-01
Algorithm visualization is a unique research topic that integrates engineering skills such as computer graphics, system programming, database management, computer networks, etc., to facilitate algorithmic researchers in testing their ideas, demonstrating new findings, and teaching algorithm design in the classroom. Within the broad applications of algorithm visualization, there still remain performance issues that deserve further research, e.g., system portability, collaboration capability, and animation effect in 3D environments. Using modern technologies of Java programming, we develop an algorithm visualization and debugging system, dubbed GeoBuilder, for geometric computing. The GeoBuilder system features Java's promising portability, engagement of collaboration in algorithm development, and automatic camera positioning for tracking 3D geometric objects. In this paper, we describe the design of the GeoBuilder system and demonstrate its applications.
Mahmoudi, Morteza
2018-03-17
Despite considerable efforts in the field of nanomedicine that have been made by researchers, funding agencies, entrepreneurs, and the media, fewer nanoparticle (NP) technologies than expected have made it to clinical trials. The wide gap between the efforts and effective clinical translation is, at least in part, due to multiple overlooked factors in both in vitro and in vivo environments, a poor understanding of the nano-bio interface, and misinterpretation of the data collected in vitro, all of which reduce the accuracy of predictions regarding the NPs' fate and safety in humans. To minimize this bench-to-clinic gap, which may accelerate successful clinical translation of NPs, this opinion paper aims to introduce strategies for systematic debugging of nano-bio interfaces in the current literature. Copyright © 2018 Elsevier Ltd. All rights reserved.
Risk management technique for liquefied natural gas facilities
NASA Technical Reports Server (NTRS)
Fedor, O. H.; Parsons, W. N.
1975-01-01
Checklists have been compiled for planning, design, construction, startup and debugging, and operation of liquefied natural gas facilities. Lists include references to pertinent safety regulations. Methods described are applicable to handling of other hazardous materials.
75 FR 52267 - Waiver of Statement of Account Filing Deadline for the 2010/1 Period
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-25
... available in a print format, a PDF format, and a software ``fill-in'' format created by Gralin Associates... retransmission of multicast streams. The paper and PDF versions of the form have been available to cable... recognize that the paper and PDF versions of the SOA have been available since July, many large and small...
High Performance Computing Multicast
2012-02-01
responsiveness, first-tier applications often implement replicated in-memory key-value stores, using them to store state or to cache data from services… an alternative that replicates data, combines agreement on update ordering with amnesia freedom, and supports both good scalability and fast response.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-29
... to offer remote multi-cast ITCH Wave Ports for clients co-located at other third party data centers... delivery of third party market data to market center clients via a wireless network using millimeter wave... Multi-cast ITCH Wave Ports for clients co-located at other third-party data centers, through which...
Design and Implementation of Replicated Object Layer
NASA Technical Reports Server (NTRS)
Koka, Sudhir
1996-01-01
One of the widely used techniques for constructing fault-tolerant applications is replication of resources, so that if one copy fails, sufficient copies may still remain operational to allow the application to continue to function. This thesis involves the design and implementation of an object-oriented framework for replicating data on multiple sites and across different platforms. Our approach, called the Replicated Object Layer (ROL), provides a mechanism for consistent replication of data over dynamic networks. ROL uses the Reliable Multicast Protocol (RMP) as a communication protocol that provides reliable delivery, serialization, and fault tolerance. Besides providing type registration, this layer facilitates distributed atomic transactions on replicated data. A novel algorithm called the RMP Commit Protocol, which commits transactions efficiently in a reliable multicast environment, is presented. ROL provides recovery procedures to ensure that site and communication failures do not corrupt persistent data, and makes the system fault-tolerant to network partitions. ROL will facilitate building distributed fault-tolerant applications by handling the burdensome details of replica consistency operations and making them completely transparent to the application. Replicated databases are a major class of applications that could be built on top of ROL.
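The key idea, committing on top of a totally ordered reliable multicast, can be sketched in a few lines: because every replica delivers the same transactions in the same order, each can decide commit or abort deterministically and all replicas agree without extra voting rounds. The optimistic version check below is an illustrative assumption, not the actual RMP Commit Protocol.

```python
class Replica:
    """Toy replica: applies transactions in the order the (totally
    ordered, reliable) multicast delivers them, so all replicas reach
    the same state; the version check is an assumed optimistic rule,
    not the real RMP Commit Protocol."""
    def __init__(self):
        self.data = {}  # key -> (version, value)

    def deliver(self, txn):
        # txn: list of (key, expected_version, new_value) writes.
        if all(self.data.get(k, (0, None))[0] == ver for k, ver, _ in txn):
            for k, ver, val in txn:
                self.data[k] = (ver + 1, val)
            return True   # commit
        return False      # abort: a conflicting txn was ordered first

def multicast(replicas, txn):
    # Total order means every replica decides identically, no voting.
    return [r.deliver(txn) for r in replicas]
```

A conflicting transaction ordered second aborts on every replica, so the copies never diverge even though no replica ever consults another.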
Dhamodharan, Udaya Suriya Raj Kumar; Vayanaperumal, Rajamani
2015-01-01
Wireless sensor networks urgently require protection, and many kinds of highly critical attacks on them have been documented by researchers. The Sybil attack is a massively destructive attack against the sensor network in which numerous forged identities, alongside genuine ones, are used to gain illegal entry into a network. Detecting Sybil, sinkhole, and wormhole attacks while multicasting is a formidable job in a wireless sensor network. Basically, a Sybil attack means a node that fakes its identity to other nodes. Communication with an illegal node results in data loss and becomes dangerous to the network. The existing Random Password Comparison method merely verifies node identities by analyzing their neighbors. A survey of the Sybil attack was done with the objective of resolving this problem. Building on it, we propose a combined CAM-PVM (compare-and-match position verification method) with MAP (message authentication and passing) for detecting, eliminating, and eventually preventing the entry of Sybil nodes into the network. We propose a scheme for assuring the security of wireless sensor networks against attacks of these kinds in unicasting and multicasting.
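The position-verification intuition behind CAM-PVM can be illustrated simply: many Sybil identities are bound to one physical radio, so their claimed (or measured) positions coincide. Below is a toy detector flagging coincident position claims; the threshold and input format are assumptions for illustration, not the paper's actual method.

```python
def detect_sybil(claims, tol=1.0):
    """Flag identities whose claimed positions coincide within `tol`:
    Sybil identities share one physical radio, so their positions
    collide. `claims` maps identity -> (x, y). Simplified sketch of
    position verification, not the full CAM-PVM/MAP scheme."""
    flagged = set()
    ids = list(claims)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            (ax, ay), (bx, by) = claims[a], claims[b]
            if (ax - bx) ** 2 + (ay - by) ** 2 <= tol ** 2:
                flagged |= {a, b}   # two 'nodes' at one spot: suspect
    return flagged
```

In the full scheme, flagged identities would then face the message-authentication step before being eliminated from the multicast tree.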
PMID:26236773
Toward fidelity between specification and implementation
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.; Morrison, Jeff; Wu, Yunqing
1994-01-01
This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.
NASA Technical Reports Server (NTRS)
OFarrell, Zachary L.
2013-01-01
The goal of this project is to create a website that displays video, a countdown clock, and event times to customers during launches, without needing to be connected to the internal operations network. A further requirement is to keep the delay in the clock and events under two seconds. The project has two parts: the webpage, which displays the data and videos to the user, and a server that sends clock and event data to the webpage. The webpage is written in HTML with CSS and JavaScript. The JavaScript is responsible for connecting to the server, receiving new clock data, and updating the webpage. JavaScript is used because it can send custom HTTP requests from the webpage and can update parts of the webpage without refreshing the entire page. The server application acts as a relay between the operations network and the open Internet. On the operations-network side, the application receives multicast packets that contain countdown clock and event data, then parses them into current countdown times and events and builds a packet with that information that can be sent to webpages. The other side accepts HTTP requests from the webpage and responds with current data. The server is written in C#, with some C++ files used to define the structure of the data packets. The videos for the webpage are shown in an embedded player from UStream.
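The relay's shape can be sketched in Python (the actual server is C#): one side joins the multicast group and keeps the latest parsed clock state, the other answers HTTP polls from the webpage's JavaScript. The 4-byte-countdown-plus-label packet layout, the group-join details, and the JSON response are assumptions for illustration, not the project's real formats.

```python
import json
import socket
import struct
from http.server import BaseHTTPRequestHandler

# Assumed packet layout (not the real ops format): a big-endian 32-bit
# countdown in seconds, followed by a UTF-8 event label.
def parse_packet(data):
    seconds, = struct.unpack_from(">I", data, 0)
    return {"t_minus": seconds, "event": data[4:].decode("utf-8")}

LATEST = {"t_minus": None, "event": None}  # most recently parsed packet

class ClockHandler(BaseHTTPRequestHandler):
    # The webpage's JavaScript polls this and patches the DOM in place.
    def do_GET(self):
        body = json.dumps(LATEST).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def multicast_listener(group, port):
    """Ops-network side: join the multicast group and keep LATEST
    fresh (run in a background thread; shown for shape only)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    mreq = struct.pack("4sl", socket.inet_aton(group), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        LATEST.update(parse_packet(sock.recv(4096)))
```

Because only the relay joins the operations network's multicast group, web clients never touch that network, and polling a small JSON document keeps the end-to-end clock delay well under the two-second target.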
A Novel Group Coordination Protocol for Collaborative Multimedia Systems
1998-01-01
…technology have advanced considerably, efficient group coordination support for applications characterized by synchronous and wide-area groupwork is… As a component within a general coordination architecture for many-to-many groupwork, floor control coexists with protocols for reliable ordered multicast and media synchronization at a sub-application level. Orchestration of multiparty groupwork with fine-grained and fair floor control is…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-28
... that is in test mode in excess of one. (c)-(f) No change. (g) Other Port Fees Remote Multi-cast ITCH... environment to test upcoming NASDAQ releases and product enhancements, as well as test software prior to... public in accordance with the provisions of 5 U.S.C. 552, will be available for Web site viewing and...
Multiuser Transmit Beamforming for Maximum Sum Capacity in Tactical Wireless Multicast Networks
2006-08-01
…commonly used extended Kalman filter. See [2, 5, 6] for recent tutorial overviews. In particle filtering, continuous distributions are approximated by… signals (using and developing associated particle filtering tools). Our work on these topics has been reported in seven (IEEE, SIAM) journal papers and… Keywords: multidimensional scaling, tracking, intercept, particle filters.
Sketchcode: A Documentation Technique for Computer Hobbyists and Programmers
ERIC Educational Resources Information Center
Voros, Todd, L.
1978-01-01
Sketchcode is a metaprograming pseudo-language documentation technique intended to simplify the process of program writing and debugging for both high and low-level users. Helpful hints and examples for the use of the technique are included. (CMV)
Scientific computation systems quality branch manual
NASA Technical Reports Server (NTRS)
1972-01-01
A manual is presented which is designed to familiarize the GE 635 user with the configuration and operation of the overall system. Work submission, programming standards, restrictions, testing and debugging, and related general information are provided for the GE 635 programmer.
Mission and data operations IBM 360 user's guide
NASA Technical Reports Server (NTRS)
Balakirsky, J.
1973-01-01
The M and DO computer systems are introduced and supplemented. The hardware and software status is discussed, along with standard processors and user libraries. Data management techniques are presented, as well as machine independence, debugging facilities, and overlay considerations.
Alverson, Dale C; Saiki, Stanley M; Jacobs, Joshua; Saland, Linda; Keep, Marcus F; Norenberg, Jeffrey; Baker, Rex; Nakatsu, Curtis; Kalishman, Summers; Lindberg, Marlene; Wax, Diane; Mowafi, Moad; Summers, Kenneth L; Holten, James R; Greenfield, John A; Aalseth, Edward; Nickles, David; Sherstyuk, Andrei; Haines, Karen; Caudell, Thomas P
2004-01-01
Medical knowledge and skills essential for tomorrow's healthcare professionals continue to change faster than ever before, creating new demands in medical education. Project TOUCH (Telehealth Outreach for Unified Community Health) has been developing methods to enhance learning by coupling innovations in medical education with advanced technology in high performance computing and next generation Internet2, embedded in virtual reality environments (VRE), artificial intelligence and experiential active learning. Simulations have been used in education and training to allow learners to make mistakes safely in lieu of real-life situations, learn from those mistakes and ultimately improve performance by subsequent avoidance of those mistakes. Distributed virtual interactive environments are used over distance to enable learning and participation in dynamic, problem-based, clinical, artificial intelligence rules-based, virtual simulations. The virtual reality patient is programmed to dynamically change over time and respond to the manipulations by the learner. Participants are fully immersed within the VRE platform using a head-mounted display and tracker system. Navigation, locomotion and handling of objects are accomplished using a joy-wand. Distribution is managed via the Internet2 Access Grid using point-to-point or multi-casting connectivity through which the participants can interact. Medical students in Hawaii and New Mexico (NM) participated collaboratively in problem solving and managing of a simulated patient with a closed head injury in the VRE, dividing tasks, handing off objects, and functioning as a team. Students stated that opportunities to make mistakes and repeat actions in the VRE were extremely helpful in learning specific principles. The VRE created higher performance expectations and some anxiety among users. VRE orientation was adequate, but students needed time to adapt and practice in order to improve efficiency.
This was also demonstrated successfully between Western Australia and UNM. We successfully demonstrated the ability to fully immerse participants in a distributed virtual environment independent of distance for collaborative team interaction in medical simulation designed for education and training. The ability to make mistakes in a safe environment is well received by students and has a positive impact on their understanding, as well as their memory of the principles involved in correcting those mistakes. Bringing people together as virtual teams for interactive experiential learning and collaborative training, independent of distance, provides a platform for distributed "just-in-time" training, performance assessment and credentialing. Further validation is necessary to determine the potential value of the distributed VRE in knowledge transfer and improved future performance, and should entail training participants to competence in using these tools.
Second CLIPS Conference Proceedings, volume 1
NASA Technical Reports Server (NTRS)
Giarratano, Joseph (Editor); Culbert, Christopher J. (Editor)
1991-01-01
Topics covered at the 2nd CLIPS Conference held at the Johnson Space Center, September 23-25, 1991 are given. Topics include rule groupings, fault detection using expert systems, decision making using expert systems, knowledge representation, computer aided design and debugging expert systems.
Enhanced Performance & Functionality of Tunable Delay Lines
2012-08-01
Figure 6. Experimental setup: the transmitter is capable of generating 80-Gb/s RZ-DQPSK, 40-Gb/s RZ-DPSK, and 40-Gb/s RZ-OOK modulation formats. Phase… Power penalty with respect to B2B of each channel for 2-, 4-, and 8-fold multicasting. (c) Pulsewidth as a function of DGD along with eye diagrams of 2… Figure 99. Concept: (a) a distributed optical network; (b) NOLMs for…
Scalable Technology for a New Generation of Collaborative Applications
2007-04-01
…of the International Symposium on Distributed Computing (DISC), Cracow, Poland, September 2005. Classic Paxos vs. Fast Paxos: Caveat Emptor, Flavio… a …able and fast multicast primitive to layer under high-level abstractions such as… latency across dimensions as varied as group size [10, 17]… servers, networked via fast, dedicated interconnects… to subscribe to a fraction of the equities on… the software stack running on a single…
Saguaro: A Distributed Operating System Based on Pools of Servers.
1988-03-25
…asynchronous message passing, multicast, and semaphores are supported. We have found this flexibility to be very useful for distributed programming. The… variety of communication primitives provided by SR has facilitated the research of Stella Atkins, who was a visiting professor at Arizona during Spring… data bits in a raw communication channel to help keep the source and destination synchronized… Psync explicitly embeds timing information drawn from the…
Extensible Interest Management for Scalable Persistent Distributed Virtual Environments
1999-12-01
…(Calvin, Cebula et al. 1995; Morse, Bic et al. 2000) uses a two… grid, with each grid cell having two multicast addresses. An entity expresses interest… Entity distribution for experimental runs… Multiple Users and Shared Applications with VRML. VRML 97, Monterey, CA, pp. 33-40. Calvin, J. O., D. P. Cebula, et al. (1995). Data Subscription in…
Improved Lower Bounds on the Price of Stability of Undirected Network Design Games
NASA Astrophysics Data System (ADS)
Bilò, Vittorio; Caragiannis, Ioannis; Fanelli, Angelo; Monaco, Gianpiero
Bounding the price of stability of undirected network design games with fair cost allocation is a challenging open problem in the Algorithmic Game Theory research agenda. Even though the generalization of such games in directed networks is well understood in terms of the price of stability (it is exactly H_n, the n-th harmonic number, for games with n players), far less is known for network design games in undirected networks. The upper bound carries over to this case as well, while the best known lower bound is 42/23 ≈ 1.826. For more restricted but interesting variants of such games, such as broadcast and multicast games, sublogarithmic upper bounds are known, while the best known lower bound is 12/7 ≈ 1.714. In the current paper, we improve the lower bounds as follows. We break the psychological barrier of 2 by showing that the price of stability of undirected network design games is at least 348/155 ≈ 2.245. Our proof uses a recursive construction of a network design game with a simple gadget as the main building block. For broadcast and multicast games, we present new lower bounds of 20/11 ≈ 1.818 and 1.862, respectively.
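The directed H_n behavior mentioned above can be checked by brute force on a toy instance: each player i chooses between a private edge of cost 1/i and a shared edge of cost 1 + ε, with fair (equal-split) cost sharing. This is an illustrative sketch of the classic directed construction, not the paper's recursive undirected gadget.

```python
from itertools import product

def profile_cost(profile, private_cost, shared_cost):
    """Social cost: every chosen private edge, plus the shared edge if used."""
    total = sum(private_cost[i] for i, s in enumerate(profile) if s == "private")
    if "shared" in profile:
        total += shared_cost
    return total

def player_cost(i, profile, private_cost, shared_cost):
    """Fair cost sharing: the shared edge is split equally among its users."""
    if profile[i] == "private":
        return private_cost[i]
    return shared_cost / profile.count("shared")

def price_of_stability(n, eps=0.01):
    """Ratio of the cheapest pure Nash equilibrium to the social optimum."""
    private = [1.0 / (i + 1) for i in range(n)]
    shared = 1.0 + eps
    profiles = list(product(["private", "shared"], repeat=n))

    def is_nash(p):
        for i in range(n):
            for alt in ("private", "shared"):
                q = p[:i] + (alt,) + p[i + 1:]
                if (player_cost(i, q, private, shared)
                        < player_cost(i, p, private, shared) - 1e-12):
                    return False
        return True

    opt = min(profile_cost(p, private, shared) for p in profiles)
    best_ne = min(profile_cost(p, private, shared) for p in profiles if is_nash(p))
    return best_ne / opt
```

For n = 3 the only equilibrium is all players on their private edges (cost H_3 ≈ 1.833), while the optimum routes everyone on the shared edge (cost 1.01), so the ratio approaches H_n as ε shrinks.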
A framework using cluster-based hybrid network architecture for collaborative virtual surgery.
Qin, Jing; Choi, Kup-Sze; Poon, Wai-Sang; Heng, Pheng-Ann
2009-12-01
Research on collaborative virtual environments (CVEs) opens the opportunity for simulating the cooperative work in surgical operations. It is however a challenging task to implement a high performance collaborative surgical simulation system because of the difficulty in maintaining state consistency with minimum network latencies, especially when sophisticated deformable models and haptics are involved. In this paper, an integrated framework using cluster-based hybrid network architecture is proposed to support collaborative virtual surgery. Multicast transmission is employed to transmit updated information among participants in order to reduce network latencies, while system consistency is maintained by an administrative server. Reliable multicast is implemented using distributed message acknowledgment based on cluster cooperation and sliding window technique. The robustness of the framework is guaranteed by the failure detection chain which enables smooth transition when participants join and leave the collaboration, including normal and involuntary leaving. Communication overhead is further reduced by implementing a number of management approaches such as computational policies and collaborative mechanisms. The feasibility of the proposed framework is demonstrated by successfully extending an existing standalone orthopedic surgery trainer into a collaborative simulation system. A series of experiments have been conducted to evaluate the system performance. The results demonstrate that the proposed framework is capable of supporting collaborative surgical simulation.
PlanWorks: A Debugging Environment for Constraint Based Planning Systems
NASA Technical Reports Server (NTRS)
Daley, Patrick; Frank, Jeremy; Iatauro, Michael; McGann, Conor; Taylor, Will
2005-01-01
Numerous planning and scheduling systems employ underlying constraint reasoning systems. Debugging such systems involves the search for errors in model rules, constraint reasoning algorithms, search heuristics, and the problem instance (initial state and goals). In order to effectively find such problems, users must see why each state or action is in a plan by tracking causal chains back to part of the initial problem instance. They must be able to visualize complex relationships among many different entities and distinguish between those entities easily. For example, a variable can be in the scope of several constraints, as well as part of a state or activity in a plan; the activity can arise as a consequence of another activity and a model rule. Finally, they must be able to track each logical inference made during planning. We have developed PlanWorks, a comprehensive system for debugging constraint-based planning and scheduling systems. PlanWorks assumes a strong transaction model of the entire planning process, including adding and removing parts of the constraint network, variable assignment, and constraint propagation. A planner logs all transactions to a relational database that is tailored to support the queries of specialized views that display different forms of data (e.g., constraints, activities, resources, and causal links). PlanWorks was specifically developed for the Extensible Universal Remote Operations Planning Architecture (EUROPA(sub 2)) developed at NASA, but the underlying principles behind PlanWorks make it useful for many constraint-based planning systems. The paper is organized as follows. We first describe some fundamentals of EUROPA(sub 2). We then describe PlanWorks' principal components, discuss each component in detail, and describe inter-component navigation features. We close with a discussion of how PlanWorks is used to find model flaws.
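The transaction-logging idea can be sketched with a minimal relational schema: each planner step records what it touched and what caused it, and a causal chain is recovered by walking the cause links back to the initial problem instance. The schema and entity names below are hypothetical and far simpler than the EUROPA(sub 2)/PlanWorks schema.

```python
import sqlite3

# Assumed minimal schema for a planner transaction log.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE transactions (
    step   INTEGER PRIMARY KEY,
    kind   TEXT,   -- e.g. ADD_ACTIVITY, ADD_CONSTRAINT, ASSIGN, PROPAGATE
    entity TEXT,   -- variable/constraint/activity the step touches
    cause  TEXT)   -- entity that triggered this step, NULL for initial state
""")

log = [
    (1, "ADD_ACTIVITY",   "nav_to_rock", None),
    (2, "ADD_CONSTRAINT", "battery>=20", "nav_to_rock"),
    (3, "ASSIGN",         "start_time",  "nav_to_rock"),
    (4, "PROPAGATE",      "battery>=20", "start_time"),
]
conn.executemany("INSERT INTO transactions VALUES (?,?,?,?)", log)

def causal_chain(entity):
    """Walk cause links from an entity back to the initial problem instance."""
    chain = []
    row = conn.execute(
        "SELECT entity, cause FROM transactions WHERE entity=? "
        "ORDER BY step DESC LIMIT 1", (entity,)).fetchone()
    while row:
        chain.append(row[0])
        if row[1] is None:
            break
        row = conn.execute(
            "SELECT entity, cause FROM transactions WHERE entity=? "
            "ORDER BY step DESC LIMIT 1", (row[1],)).fetchone()
    return chain
```

A causal-link view in the spirit of PlanWorks is then just a query: asking why `battery>=20` was propagated traces back through `start_time` to the `nav_to_rock` activity in the initial instance.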
Microsupercomputers: Design and Implementation
1991-03-01
…been ported to the DASH hardware. Hardware problems and software problems with DPV itself prevented its use as a debugging tool until recently. Both the… (MP3D) [21], an LU-decomposition program (LU), and a digital logic simulation program (PTHOR) [28]. The applications are typical of those…
Data Acquisition Unit for SATCOM Signal Analyzer
1980-01-01
APSIM simulator program; APDEBUG debugging program; APTEST diagnostic and test program; MATH Library; IOP-16 16-bit I/O port. … 3. SYNTEST Corporation, Frequency Synthesizer Module, Data Sheet, The Syntest SM-101 Frequency Synthesizer Module, not dated. 4. DATEL Systems Inc…
Performance Metrics for Monitoring Parallel Program Executions
NASA Technical Reports Server (NTRS)
Sarukkai, Sekkar R.; Gotwais, Jacob K.; Yan, Jerry; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
Existing tools for debugging the performance of parallel programs provide either graphical representations of program execution or profiles of program executions. However, for performance debugging tools to be useful, such information has to be augmented with information that highlights the cause of poor program performance. Identifying the cause of poor performance requires not only determining the significance of various performance problems on the execution time of the program, but also considering the effect of interprocessor communication of individual source-level data structures. In this paper, we present a suite of normalized indices which provide a convenient mechanism for focusing on a region of code with poor performance and which highlight the cause of the problem in terms of processors, procedures and data structure interactions. All the indices are generated from trace files augmented with data structure information. Further, we show with the help of examples from the NAS benchmark suite that the indices help in detecting the potential cause of poor performance, based on augmented execution traces obtained by monitoring the program.
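A normalized index of the kind described can be sketched as the fraction of each processor's time attributable to communicating one data structure. The trace record layout and the numbers below are assumptions for illustration, not the paper's actual index definitions.

```python
# Assumed trace records: (processor, procedure, data_structure, comm_seconds)
trace = [
    (0, "solver", "grid", 1.2),
    (0, "solver", "halo", 0.3),
    (1, "solver", "grid", 2.0),
    (1, "io",     "grid", 0.5),
]
total_time = {0: 10.0, 1: 10.0}  # wall-clock seconds per processor (assumed)

def normalized_comm_index(trace, total_time, key):
    """Fraction of each processor's time spent communicating a given data
    structure: one example of a normalized index that points from a slow
    region of code to the structure responsible."""
    spent = {}
    for proc, _, ds, secs in trace:
        if ds == key:
            spent[proc] = spent.get(proc, 0.0) + secs
    return {p: spent.get(p, 0.0) / total_time[p] for p in total_time}
```

Comparing the index across processors immediately shows, for instance, that processor 1 spends roughly twice the fraction of its time moving `grid` as processor 0 does, which is the kind of data-structure-level attribution the abstract argues profiles alone cannot give.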
Automated knowledge-base refinement
NASA Technical Reports Server (NTRS)
Mooney, Raymond J.
1994-01-01
Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.
In-situ FPGA debug driven by on-board microcontroller
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Zachary Kent
2009-01-01
Often we are faced with the situation that the behavior of a circuit changes in an unpredictable way when the chassis cover is attached or the system is not easily accessible. For instance, in a deployed environment, such as space, hardware can malfunction in unpredictable ways. What can a designer do to ascertain the cause of the problem? Register interrogations only go so far, and sometimes the problem being debugged is the register transactions themselves, or the problem lies in the FPGA programming. This work provides a solution: namely, the ability to drive a JTAG chain via an on-board microcontroller and use a simple clone of the Xilinx Chipscope core without a Xilinx JTAG cable or any external interfaces. We have demonstrated the functionality of the prototype system using a Xilinx Spartan 3E FPGA and a Microchip PIC18j2550 microcontroller. This paper discusses the implementation details and presents case studies describing how the tools have aided satellite hardware development.
DI: An interactive debugging interpreter for applicative languages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skedzielewski, S.K.; Yates, R.K.; Oldehoeft, R.R.
1987-03-12
The DI interpreter is both a debugger and an interpreter of SISAL programs. Its use as a program interpreter is only a small part of its role; it is designed to be a tool for studying compilation techniques for applicative languages. DI interprets dataflow graphs expressed in the IF1 and IF2 languages, and is heavily instrumented to report dynamic storage activity, reference counting, and the copying and updating of structured data values. It also aids SISAL language evaluation by providing an interim execution vehicle for SISAL programs. DI provides determinate, sequential interpretation of graph nodes for sequential and parallel operations in a canonical order. As a debugging aid, DI allows tracing, breakpointing, and interactive display of program data values. DI handles the creation of SISAL and IF1 error values for each data type and propagates them according to a well-defined algebra. We have begun to implement IF1 optimizers and have measured the improvements with DI.
Using PAFEC as a preprocessor for COSMIC/NASTRAN
NASA Technical Reports Server (NTRS)
Gray, W. H.; Baudry, T. V.
1983-01-01
Programs for Automatic Finite Element Calculations (PAFEC) is a general purpose, three dimensional linear and nonlinear finite element program (ref. 1). PAFEC's features include free format input utilizing engineering keywords, powerful mesh generating facilities, sophisticated data base management procedures, and extensive data validation checks. Presented here is a description of a software interface that permits PAFEC to be used as a preprocessor for COSMIC/NASTRAN. This user friendly software, called PAFCOS, frees the stress analyst from the laborious and error prone procedure of creating and debugging a rigid format COSMIC/NASTRAN bulk data deck. By interactively creating and debugging a finite element model with PAFEC, thus taking full advantage of the free format engineering keyword oriented data structure of PAFEC, the amount of time spent during model generation can be drastically reduced. The PAFCOS software will automatically convert a PAFEC data structure into a COSMIC/NASTRAN bulk data deck. The capabilities and limitations of the PAFCOS software are fully discussed in the following report.
Dynamic visualization techniques for high consequence software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pollock, G.M.
1998-02-01
This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification. The prototype tool is described, along with the requirements constraint language, after a brief literature review is presented. Examples of how the tool can be used are also presented. In conclusion, the most significant advantage of this tool is to provide a first step in evaluating specification completeness, and to provide a more productive method for program comprehension and debugging. The expected payoff is increased software surety confidence, increased program comprehension, and reduced development and debugging time.
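The monitoring idea of checking an executing program against prespecified requirements constraints can be sketched with plain predicates standing in for SAGE constraints (SAGE itself is a specification language; the callables and constraint names below are assumed stand-ins for illustration).

```python
class Monitor:
    """Check each instrumented program step against requirement predicates."""
    def __init__(self, constraints):
        self.constraints = constraints   # name -> predicate over program state
        self.violations = []             # (step, constraint name) pairs

    def check(self, step, state):
        for name, pred in self.constraints.items():
            if not pred(state):
                self.violations.append((step, name))

mon = Monitor({
    "queue_bounded": lambda s: len(s["queue"]) <= 4,
    "temp_in_range": lambda s: 0 <= s["temp"] <= 100,
})

# A toy instrumented execution: the monitor runs alongside each step.
state = {"queue": [], "temp": 20}
for step in range(6):
    state["queue"].append(step)   # program grows a queue each step
    state["temp"] += 30           # and heats up past its allowed range
    mon.check(step, state)
```

The recorded `(step, constraint)` pairs are exactly what a visualization front end like SAVAnT would render, letting a developer see where in the execution each requirement first failed.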
GUMP: Adapting Client/Server Messaging Protocols into Peer-to-Peer Serverless Environments
2010-06-11
…and other related metadata, such as message receiver ID (for supporting multiple connections) and so forth. The Proxy consumes the message and uses… the underlying discovery subsystem and multicast to process the message and translate the request into behaviour suitable for the underlying… communication, i.e. a chat. Jingle (XEP-0166) [26] is a related specification that defines an extension to the XMPP protocol for initiating and…
Tactical Mobile Communications (Communications tactiques mobiles)
1999-11-01
[13]. …randomly at the network nodes. Each multicast group consists of the source node plus at least… Our studies do, in fact, support this conjecture. … investigate the MMR concept in some more detail. The study was contracted to a group which… Multi-role denotes the capability to support a… Capability… through the HW- and SW-resources of the frontends can be incorporated in a task-dedicated GPU. Functions can be grouped into four categories: MMR…
Multimedia Data Capture with Multicast Dissemination for Online Distance Learning
2001-12-01
…Juan Gril and Dr. Don Brutzman to wrap the multiple videos in a user-friendly environment. The web pages also contain the original PowerPoint… this CD. Juan Gril, a volunteer for the Siggraph 2001 Online Committee, created web pages that match the style and functionality desired by the… leader. The Committee for 2001 consisted of Don Brutzman, Stephen Matsuba, Mike Collins, Allen Dutton, Juan Gril, Mike Hunsberger, Jerry Isdale…
2002-09-01
Secure Multicast… i. Message Digests and Message Authentication Codes (MACs)… that is, the needs of the VE will determine what the design will look like (e.g., reliable vs. unreliable data communications). In general, there… [Molva00] and [Abdalla00]. i. Message Digests and Message Authentication Codes (MACs): message digests and MACs are used for data integrity verification…
Robust Airborne Networking Extensions (RANGE)
2008-02-01
…the IMUNES [13] project, which provides an entire network stack virtualization and topology control inside a single FreeBSD machine. The emulated topology… "Multicast versus broadcast in a MANET," in ADHOC-NOW, 2004, pp. 14–27. [9] J. Mukherjee and R. Atwood, "Rendezvous point relocation in protocol independent… …computer with an Ethernet connection, or a Linux virtual machine on some other (e.g., Windows) operating system, should work. 2.1 Patching the source code…
MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.
Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño
2013-01-01
In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as a header has been the most important challenge of RNC. Moreover, the Gauss-Jordan elimination method imposes considerable computational complexity on peers when decoding the encoded blocks and checking linear dependency among the coefficients vectors. To address these challenges, this study introduces MATIN, a random-network-coding-based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method that guarantees no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n entries into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations, so peers incur very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC based on Gauss-Jordan elimination, providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay, and initial startup delay.
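The baseline that MATIN improves on can be sketched briefly: classic RNC encodes each packet as a random linear combination of the source blocks, ships the full coefficients vector as a header, and decodes by Gaussian elimination once enough independent packets arrive. The sketch below works over the prime field GF(257) for readability; deployed RNC typically uses GF(2^8), and MATIN's one-entry header compression is not reproduced here.

```python
import random

P = 257  # prime modulus; a stand-in for the usual GF(2^8) arithmetic

def encode(blocks, k):
    """One coded packet: a random coefficients vector plus the matching
    linear combination of the k source blocks (element-wise, mod P)."""
    coeffs = [random.randrange(P) for _ in range(k)]
    payload = [sum(c * b for c, b in zip(coeffs, col)) % P
               for col in zip(*blocks)]
    return coeffs, payload

def decode(packets, k):
    """Recover the k source blocks by Gauss-Jordan elimination mod P on the
    augmented matrix [coefficients | payload]."""
    m = [list(c) + list(p) for c, p in packets]
    for col in range(k):
        piv = next(r for r in range(col, len(m)) if m[r][col] % P)
        m[col], m[piv] = m[piv], m[col]
        inv = pow(m[col][col], P - 2, P)          # modular inverse (Fermat)
        m[col] = [x * inv % P for x in m[col]]
        for r in range(len(m)):
            if r != col and m[r][col]:
                f = m[r][col]
                m[r] = [(x - f * y) % P for x, y in zip(m[r], m[col])]
    return [row[k:] for row in m[:k]]
```

The per-packet header of k coefficients and the elimination loop are precisely the overhead and complexity the abstract criticizes: MATIN's contribution is constructing the coefficients so that one header entry suffices and the matrix inverts cheaply.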
Synthetic biology projects in vitro.
Forster, Anthony C; Church, George M
2007-01-01
Advances in the in vitro synthesis and evolution of DNA, RNA, and polypeptides are accelerating the construction of biopolymers, pathways, and organisms with novel functions. Known functions are being integrated and debugged with the aim of synthesizing life-like systems. The goals are knowledge, tools, smart materials, and therapies.
Epistemic Gameplay and Discovery in Computational Model-Based Inquiry Activities
ERIC Educational Resources Information Center
Wilkerson, Michelle Hoda; Shareff, Rebecca; Laina, Vasiliki; Gravel, Brian
2018-01-01
In computational modeling activities, learners are expected to discover the inner workings of scientific and mathematical systems: First elaborating their understandings of a given system through constructing a computer model, then "debugging" that knowledge by testing and refining the model. While such activities have been shown to…
Towards an Intelligent Planning Knowledge Base Development Environment
NASA Technical Reports Server (NTRS)
Chien, S.
1994-01-01
This abstract describes work in developing knowledge base editing and debugging tools for the Multimission VICAR Planner (MVP) system. MVP uses artificial intelligence planning techniques to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory.
Debugging Geographers: Teaching Programming to Non-Computer Scientists
ERIC Educational Resources Information Center
Muller, Catherine L.; Kidd, Chris
2014-01-01
The steep learning curve associated with computer programming can be a daunting prospect, particularly for those not well aligned with this way of logical thinking. However, programming is a skill that is becoming increasingly important. Geography graduates entering careers in atmospheric science are one example of a particularly diverse group who…
Sight Application Analysis Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronevetsky, G.
2014-09-17
The scale and complexity of scientific applications makes it very difficult to optimize, debug and extend them to support new capabilities. We have developed a tool that supports developers’ efforts to understand the logical flow of their applications and interactions between application components and hardware in a way that scales with application complexity and parallelism.
Knowledge Acquisition, Knowledge Programming, and Knowledge Refinement.
ERIC Educational Resources Information Center
Hayes-Roth, Frederick; And Others
This report describes the principal findings and recommendations of a 2-year Rand research project on machine-aided knowledge acquisition and discusses the transfer of expertise from humans to machines, as well as the functions of planning, debugging, knowledge refinement, and autonomous machine learning. The relative advantages of humans and…
ERIC Educational Resources Information Center
Gerhold, George; And Others
This paper describes an effective microprocessor-based CAI system which has been repeatedly tested by a large number of students and edited accordingly. Tasks not suitable for microprocessor based systems (authoring, testing, and debugging) were handled on larger multi-terminal systems. This approach requires that the CAI language used on the…
When "Less is More": The Optimal Design of Language Laboratory Hardware.
ERIC Educational Resources Information Center
Kershaw, Gary; Boyd, Gary
1980-01-01
The results of a process of designing, building, and "de-bugging" two replacement language laboratory hardware systems at Concordia University (Montreal) are described. Because commercially available systems did not meet specifications within budgetary constraints, the systems were built by the university technical department. The systems replaced…
Where Is Logo Taking Our Kids?
ERIC Educational Resources Information Center
Mace, Scott
1984-01-01
Discusses various aspects, features, and uses of the Logo programing language. A comparison (in chart format) of several Logo languages is also included, providing comments on the language as well as producer, current price, number of sprites and turtles, computer needed, and whether debugging aids and list operations are included. (JN)
Overview of a Linguistic Theory of Design. AI Memo 383A.
ERIC Educational Resources Information Center
Miller, Mark L.; Goldstein, Ira P.
The SPADE theory, which uses linguistic formalisms to model the planning and debugging processes of computer programming, was simultaneously developed and tested in three separate contexts--computer uses in education, automatic programming (a traditional artificial intelligence arena), and protocol analysis (the domain of information processing…
ERIC Educational Resources Information Center
Gandolfi, Enrico
2018-01-01
This article investigates the phenomenon of open and participative development (e.g. beta testing, Kickstarter projects)--i.e. extended prototyping--in digital entertainment as a potential source of insights for instructional interventions. Despite the increasing popularity of this practice and the potential implications for educators and…
Describing the What and Why of Students' Difficulties in Boolean Logic
ERIC Educational Resources Information Center
Herman, Geoffrey L.; Loui, Michael C.; Kaczmarczyk, Lisa; Zilles, Craig
2012-01-01
The ability to reason with formal logic is a foundational skill for computer scientists and computer engineers that scaffolds the abilities to design, debug, and optimize. By interviewing students about their understanding of propositional logic and their ability to translate from English specifications to Boolean expressions, we characterized…
Collaboration Services: Enabling Chat in Disadvantaged Grids
2014-06-01
grids in the tactical domain" [2]. The main focus of this group is to identify what we call tactical SOA foundation services. By this we mean which... Here, only IPv4 is supported, as differences relating to IPv4 and IPv6 addressing meant that this functionality was not easily extended to use IPv6... multicast groups. Our IPv4 implementation is fully compliant with the specification, whereas the IPv6 implementation uses our own interpretation of
Design and Implementation of the MARG Human Body Motion Tracking System
2004-10-01
OPTOTRAK from Northern Digital Inc. is a typical example of a marker-based system [10]. Another is the... A technique called tunneling is used to overcome this problem. Tunneling is a software solution that runs on the end-point routers/computers and allows... multicast packets to traverse the network by putting them into unicast packets. MUTUP overcomes the tunneling problem using shared memory in the
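The tunneling idea in the excerpt above, carrying multicast datagrams inside unicast packets between tunnel endpoints, can be sketched in a few lines. This is a toy header layout for illustration only, not MUTUP's actual wire format.

```python
import struct

def encapsulate(mcast_dst, payload, tunnel_dst):
    # Wrap a multicast datagram in a unicast one: the outer address names
    # the tunnel endpoint, while the original multicast address rides along
    # so the far-end router can re-emit the packet onto its local network.
    # Hypothetical 8-byte header: two IPv4 addresses in network byte order.
    return struct.pack("!4s4s", tunnel_dst, mcast_dst) + payload

def decapsulate(packet):
    # Strip the outer header and recover the multicast address and payload.
    tunnel_dst, mcast_dst = struct.unpack("!4s4s", packet[:8])
    return mcast_dst, packet[8:]
```

A far-end router would call `decapsulate` and re-send the payload to the recovered multicast group address.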
A Secure Group Communication Architecture for a Swarm of Autonomous Unmanned Aerial Vehicles
2008-03-01
members to use the same decryption key. This shared decryption key is called the Session Encryption Key (SEK) or Traffic Encryption Key (TEK)... Since everyone shares the SEK, members need to hold additional Key Encryption Keys (KEK) that are used to securely distribute the SEK to each valid... managing this process. To preserve the secrecy of the multicast data, the SEK needs to be updated upon certain events such as a member joining and
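The SEK/KEK rekeying cycle described above can be sketched as follows. This is a minimal illustration with a hypothetical `GroupKeyManager` API; a real system would use a proper cipher (and typically a key tree) rather than the hash-derived XOR pad used here.

```python
import os
from hashlib import sha256

class GroupKeyManager:
    """Sketch of SEK/KEK group rekeying (hypothetical API, not the thesis's protocol)."""

    def __init__(self):
        self.keks = {}             # member id -> Key Encryption Key
        self.sek = os.urandom(16)  # current Session Encryption Key

    def join(self, member_id):
        # Give the new member its own KEK, then rekey so it cannot
        # read traffic sent before it joined (backward secrecy).
        self.keks[member_id] = os.urandom(16)
        return self.rekey()

    def leave(self, member_id):
        # Discard the member's KEK, then rekey so it cannot read
        # future traffic (forward secrecy).
        del self.keks[member_id]
        return self.rekey()

    def rekey(self):
        # Pick a fresh SEK and "encrypt" it under each remaining member's
        # KEK; XOR with a KEK-derived pad stands in for a real cipher.
        self.sek = os.urandom(16)
        return {m: bytes(a ^ b for a, b in zip(self.sek, sha256(kek).digest()))
                for m, kek in self.keks.items()}

    @staticmethod
    def open_rekey(msg, kek):
        # Member side: recover the new SEK from its rekey message.
        return bytes(a ^ b for a, b in zip(msg, sha256(kek).digest()))
```

Each rekey message is addressed to one member and decryptable only with that member's KEK, which is the distribution pattern the abstract describes.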
NASA Technical Reports Server (NTRS)
Stoenescu, Tudor M.; Woo, Simon S.
2009-01-01
In this work, we consider information dissemination and sharing in a distributed peer-to-peer (P2P) highly dynamic communication network. In particular, we explore a network coding technique for transmission and a rank-based peer selection method for network formation. The combined approach has been shown to improve information sharing and delivery to all users when considering the challenges imposed by space network environments.
Network connectivity enhancement by exploiting all optical multicast in semiconductor ring laser
NASA Astrophysics Data System (ADS)
Siraj, M.; Memon, M. I.; Shoaib, M.; Alshebeili, S.
2015-03-01
Smart phone and tablet applications will equip troops to execute, control and analyze sophisticated operations, with commanders delivering crucial documents directly to troops wherever and whenever needed. Wireless mesh networks (WMNs) are a cutting-edge networking technology capable of supporting the Joint Tactical Radio System (JTRS). WMNs can provide the bandwidth needed by applications such as hand-held radios and communication for airborne and ground vehicles, and routing management tasks can be handled efficiently through a central command and control center. As the spectrum space is congested, cognitive radios are a welcome technology that provides much-needed bandwidth: they can self-configure, adapt to user requirements, provide dynamic spectrum access to minimize interference, and deliver optimal power output. Indoor environments, however, sometimes suffer from poor signal quality and reduced coverage. In this paper, a solution utilizing cognitive radio WMNs (CR-WMNs) over an optical network is presented by creating nanocells (PCs) inside the indoor environment. The phenomenon of four-wave mixing (FWM) is exploited to generate all-optical multicast using a semiconductor ring laser (SRL), so that the same signal is transmitted at different wavelengths and every PC is assigned a unique wavelength. Using CR technology in conjunction with PCs will not only solve the network coverage issue but also provide good bandwidth to secondary users.
NASA Astrophysics Data System (ADS)
Duan, Haoran
1997-12-01
This dissertation presents the concepts, principles, performance, and implementation of input queuing and cell-scheduling modules for the Illinois Pulsar-based Optical INTerconnect (iPOINT) input-buffered Asynchronous Transfer Mode (ATM) testbed. Input queuing (IQ) ATM switches are well suited to meet the requirements of current and future ultra-broadband ATM networks. The IQ structure imposes minimum memory bandwidth requirements for cell buffering, tolerates bursty traffic, and utilizes memory efficiently for multicast traffic. The lack of efficient cell queuing and scheduling solutions has been a major barrier to building high-performance, scalable IQ-based ATM switches. This dissertation proposes a new Three-Dimensional Queue (3DQ) and a novel Matrix Unit Cell Scheduler (MUCS) to remove this barrier. 3DQ uses a linked-list architecture based on Synchronous Random Access Memory (SRAM) to combine the individual advantages of per-virtual-circuit (per-VC) queuing, priority queuing, and N-destination queuing. It avoids Head of Line (HOL) blocking and provides per-VC Quality of Service (QoS) enforcement mechanisms. Computer simulation results verify the QoS capabilities of 3DQ. For multicast traffic, 3DQ provides efficient usage of cell buffering memory by storing multicast cells only once. Further, the multicast mechanism of 3DQ prevents a congested destination port from blocking other less-loaded ports. The 3DQ principle has been prototyped in the Illinois Input Queue (iiQueue) module. Built from Field Programmable Gate Array (FPGA) devices and SRAM modules integrated on a Printed Circuit Board (PCB), iiQueue can process incoming traffic at 800 Mb/s. Using faster circuit technology, the same design is expected to operate at the OC-48 rate (2.5 Gb/s). MUCS resolves output contention by evaluating the weight index of each candidate and selecting the heaviest. It achieves near-optimal scheduling and has a very short response time.
The algorithm originates from a heuristic strategy that leads to 'socially optimal' solutions, yielding the maximum number of contention-free cells scheduled. A novel mixed digital-analog circuit has been designed to implement the MUCS core functionality. The MUCS circuit maps the cell scheduling computation to capacitor charging and discharging procedures that are conducted fully in parallel. The design has a uniform circuit structure, low interconnect counts, and low chip I/O counts. Using 2 μm CMOS technology, the design operates on a 100 MHz clock and finds a near-optimal solution within linear processing time. The circuit has been verified at the transistor level by HSPICE simulation. During this research, a five-port IQ-based optoelectronic iPOINT ATM switch was developed and demonstrated; it has been fully functional with an aggregate throughput of 800 Mb/s. The second-generation IQ-based switch is currently under development. Equipped with iiQueue modules and the MUCS module, the new switch system will deliver a multi-gigabit aggregate throughput, eliminate HOL blocking, provide per-VC QoS, and achieve near-100% link bandwidth utilization. Complete documentation of the input modules and trunk module for the existing testbed, and of 3DQ, iiQueue, and MUCS for the second-generation testbed, is given in this dissertation.
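In software terms, the heaviest-first contention resolution that MUCS performs in analog hardware resembles a greedy weight-ordered matching between input and output ports. The sketch below is an illustrative approximation with a hypothetical weight matrix, not the dissertation's circuit-level algorithm.

```python
def mucs_style_schedule(weights):
    """Greedy heaviest-first matching for an input-queued crossbar switch.

    weights[i][j] > 0 means input port i has cells queued for output port j;
    the weight index reflects how urgently that pairing should be served.
    """
    # Consider candidate (input, output) pairs from heaviest to lightest.
    cells = sorted(((w, i, j)
                    for i, row in enumerate(weights)
                    for j, w in enumerate(row) if w > 0),
                   reverse=True)
    used_in, used_out, match = set(), set(), {}
    for w, i, j in cells:
        # Each input sends at most one cell and each output receives at
        # most one cell per slot; always pick the heaviest remaining pair.
        if i not in used_in and j not in used_out:
            match[i] = j
            used_in.add(i)
            used_out.add(j)
    return match
```

Like MUCS, the greedy pass yields a maximal (contention-free) schedule in which no unmatched input still has cells for an unmatched output.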
A Computer Approach to Mathematics Curriculum Developments Debugging
ERIC Educational Resources Information Center
Martínez-Zarzuelo, Angélica; Roanes-Lozano, Eugenio; Fernández-Díaz, José
2016-01-01
Sequencing contents is of great importance for instructional design within the teaching planning processes. We believe that it is key for meaningful learning. Therefore, we propose to formally establish a partial order relation among the contents. We have chosen the binary relation "to be a prerequisite" for that purpose. We have…
NASA Technical Reports Server (NTRS)
Svalbonas, V.; Ogilvie, P.
1975-01-01
A special data debugging package called SAT-1P created for the STARS-2P computer program is described. The program was written exclusively in FORTRAN 4 for the IBM 370-165 computer, and then converted to the UNIVAC 1108.
01010000 01001100 01000001 01011001: Play Elements in Computer Programming
ERIC Educational Resources Information Center
Breslin, Samantha
2013-01-01
This article explores the role of play in human interaction with computers in the context of computer programming. The author considers many facets of programming including the literary practice of coding, the abstract design of programs, and more mundane activities such as testing, debugging, and hacking. She discusses how these incorporate the…
ERIC Educational Resources Information Center
Deek, Fadi; Espinosa, Idania
2005-01-01
Traditionally, novice programmers have had difficulties in three distinct areas: breaking down a given problem, designing a workable solution, and debugging the resulting program. Many programming environments, software applications, and teaching tools have been developed to address the difficulties faced by these novices. Along with advancements…
An Introduction to Fortran Programming: An IPI Approach.
ERIC Educational Resources Information Center
Fisher, D. D.; And Others
This text is designed to give individually paced instruction in Fortran Programing. The text contains fifteen units. Unit titles include: Flowcharts, Input and Output, Loops, and Debugging. Also included is an extensive set of appendices. These were designed to contain a great deal of practical information necessary to the course. These appendices…
Debugging and Analysis of Large-Scale Parallel Programs
1989-09-01
Przybylski, T. Riordan, C. Rowen, and D. Van't Hof, "A CMOS RISC Processor with Integrated System Functions," in Proc. of the 1986 COMPCON, IEEE, March 1986... Sequencers," Communications of the ACM, 22(2):115-123, 1979. [Richardson, 1988] Rick Richardson, "Dhrystone 2.1 Benchmark," Usenet Distribution
Inquiry-Based Learning Case Studies for Computing and Computing Forensic Students
ERIC Educational Resources Information Center
Campbell, Jackie
2012-01-01
Purpose: The purpose of this paper is to describe and discuss the use of specifically-developed, inquiry-based learning materials for Computing and Forensic Computing students. Small applications have been developed which require investigation in order to de-bug code, analyse data issues and discover "illegal" behaviour. The applications…
Predicting the Readability of FORTRAN Programs.
ERIC Educational Resources Information Center
Domangue, J. C.; Karbowski, S. A.
This paper reports the results of two studies of the readability of FORTRAN programs, i.e., the ease with which a programmer can read and analyze programs already written, particularly in the processes of maintenance and debugging. In the first study, low-level characteristics of 202 FORTRAN programs stored on the general-use UNIX systems at Bell…
Don't Gamble with Y2K Compliance.
ERIC Educational Resources Information Center
Sturgeon, Julie
1999-01-01
Examines one school district's (Clark County, Nevada) response to the Y2K computer problem and provides tips on time-saving Y2K preventive measures other school districts can use. Explains how the district de-bugged its computer system including mainframe considerations and client-server applications. Highlights office equipment and teaching…
A Support System for Error Correction Questions in Programming Education
ERIC Educational Resources Information Center
Hachisu, Yoshinari; Yoshida, Atsushi
2014-01-01
For supporting the education of debugging skills, we propose a system for generating error correction questions of programs and checking the correctness. The system generates HTML files for answering questions and CGI programs for checking answers. Learners read and answer questions on Web browsers. For management of error injection, we have…
Development of a Web-Based Distributed Interactive Simulation (DIS) Environment Using JavaScript
2014-09-01
scripting that lets users change or interact with web content depending on user input, in contrast with server-side scripts such as PHP, Java and... transfer, DIS usually broadcasts or multicasts its PDUs over UDP sockets. 3. JavaScript. JavaScript is the scripting language of the web, and all... IDE) for developing desktop, mobile and web applications with Java, C++, HTML5, JavaScript and more. b. Framework. The DIS implementation of
The Use of End-to-End Multicast Measurements for Characterizing Internal Network Behavior
2002-08-01
dropping on the basis of Random Early Detection (RED) [17] is another mechanism by which packet loss may become decorrelated. It remains to be seen whether... this mechanism will be widely deployed in communications networks. On the other hand, the use of RED to merely mark packets will not break correlations... Tail and Random Early Detection (RED) buffer discard methods [17]. We compared the inferred loss and delay with actual probe loss and delay. We found
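For reference, the RED mechanism named in this excerpt drops (or marks) packets with a probability that rises linearly between two thresholds of a low-pass-filtered average queue length. A sketch of the base calculation follows; the threshold and weight values here are arbitrary illustrative choices, not recommended settings.

```python
def red_drop_probability(avg_queue, min_th=5.0, max_th=15.0, max_p=0.1):
    # Below min_th nothing is dropped; above max_th everything is;
    # in between, the drop probability grows linearly up to max_p.
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def ewma(avg, sample, weight=0.002):
    # RED tracks an exponentially weighted moving average of the queue
    # length so that short bursts do not trigger early drops.
    return (1 - weight) * avg + weight * sample
```

Because each drop decision is an independent random draw against this probability, RED tends to decorrelate losses across flows, which is exactly the property the excerpt discusses.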
FPGA Flash Memory High Speed Data Acquisition
NASA Technical Reports Server (NTRS)
Gonzalez, April
2013-01-01
The purpose of this research is to design and implement a VHDL ONFI controller module for a Modular Instrumentation System. The goal of the Modular Instrumentation System is to have a low-power device that will store data and send the data at a low speed to a processor. Such a system has an advantage over purchased binary IP because it allows NASA to re-use and modify the memory controller module. To meet the low-power performance criteria, an in-house auxiliary board (Flash/ADC board), an FPGA development kit, a debug board, and a modular instrumentation board are used jointly for data acquisition. The Flash/ADC board contains four 1 MSPS input channel signals and an Open NAND Flash Interface (ONFI) memory module with an analog-to-digital converter. The ADC, data bits, and control line signals from the board are sent to a Microsemi/Actel FPGA development kit for VHDL programming of the flash memory WRITE, READ, READ STATUS, ERASE, and RESET operation waveforms using Libero software. The debug board is used for verification of the analog input signal and can communicate via a serial interface with the modular instrumentation. The scope of the new controller module was to find and develop an ONFI controller, with the debug board layout designed and completed for manufacture. Successful flash memory operation waveform test routines were completed, simulated, and tested to work on the FPGA board. Through connection of the Flash/ADC board with the FPGA, it was found that the device specifications were not being met, with Vdd reaching only half of its voltage. Further testing showed that the manufactured Flash/ADC board contained a misalignment with the ONFI memory module traces. The errors proved too great to fix within the time limit set for the project.
Global Interoperability of High Definition Video Streams Via ACTS and Intelsat
NASA Technical Reports Server (NTRS)
Hsu, Eddie; Wang, Charles; Bergman, Larry; Pearman, James; Bhasin, Kul; Clark, Gilbert; Shopbell, Patrick; Gill, Mike; Tatsumi, Haruyuki; Kadowaki, Naoto
2000-01-01
In 1993, a proposal at the Japan-U.S. Cooperation in Space Program Workshop led to a subsequent series of satellite communications experiments and demonstrations under the title of Trans-Pacific High Data Rate Satellite Communications Experiments. The first of these was a joint collaboration between government and industry teams in the United States and Japan that successfully demonstrated distributed high definition video (HDV) post-production on a global scale using a combination of high data rate satellites and terrestrial fiber optic asynchronous transfer mode (ATM) networks. The HDV experiment was the first GIBN experiment to establish a dual-hop broadband satellite link for the transmission of digital HDV over ATM. This paper describes the team's effort in using the NASA Advanced Communications Technology Satellite (ACTS) at rates up to OC-3 (155 Mbps) between Los Angeles and Honolulu, and Intelsat at rates up to DS-3 (45 Mbps) between Kapolei and Tokyo, over which HDV source material was transmitted between Sony Pictures High Definition Center (SPHDC) in Los Angeles and Sony Visual Communication Center (VCC) in Shinagawa, Tokyo. The global-scale connection also used terrestrial networks in Japan and the states of Hawaii and California. The 1.2 Gbps digital HDV stream was compressed down to 22.5 Mbps using a proprietary Mitsubishi MPEG-2 codec that was ATM AAL-5 compatible. The codec employed four-way parallel processing; improved versions are now commercially available. The successful post-production activity performed in Tokyo with an HDV clip transmitted from Los Angeles was predicated on the seamless interoperation of all the equipment between the sites, and was an exciting example of deploying a global-scale information infrastructure involving a combination of broadband satellites and terrestrial fiber optic networks. Correlations of atmospheric effects with cell loss, codec drop-out, and picture quality were made.
Current efforts in the Trans-Pacific series plan to examine the use of Internet Protocol (IP)-related technologies over such an infrastructure. The use of IP allows the general public to be an integral part of these activities, helps to examine issues in constructing the solar-system internet, and affords an opportunity to tap the research results of the (reliable) multicast and distributed systems communities. The current Trans-Pacific projects, including remote astronomy and digital library (visible human), are briefly described.
MATIN: A Random Network Coding Based Framework for High Quality Peer-to-Peer Live Video Streaming
Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño
2013-01-01
In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, probably because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as a header has been the most important challenge of RNC. Moreover, because the Gauss-Jordan elimination method is employed, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method such that there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one instead of n coefficients entries into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations, so peers sustain very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNeT++ show that it substantially outperforms RNC based on the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay, and initial startup delay. PMID:23940530
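The encode/decode cycle the abstract contrasts, linear combinations of source blocks recovered by Gauss-Jordan elimination, can be sketched over a small prime field. The dependency-free coefficient matrix below uses a Vandermonde construction purely for illustration of the "no linear dependency" property; it is not MATIN's actual generator, and real codecs typically work in GF(2^8) rather than GF(257).

```python
P = 257  # arithmetic in the prime field GF(257)

def coeff_matrix(n_packets, n_blocks):
    # Vandermonde rows over distinct points: any n_blocks of these rows
    # are linearly independent by construction, echoing MATIN's goal of a
    # dependency-free coefficients matrix (MATIN's generator differs).
    return [[pow(x, j, P) for j in range(n_blocks)]
            for x in range(1, n_packets + 1)]

def encode(blocks, coeffs):
    # Each coded packet is a linear combination of the source blocks.
    return [[sum(c * b[i] for c, b in zip(row, blocks)) % P
             for i in range(len(blocks[0]))] for row in coeffs]

def decode(coeffs, payloads, n_blocks):
    # Gauss-Jordan elimination over GF(P): the costly baseline step whose
    # cost MATIN is designed to avoid.
    rows = [list(c) + list(p) for c, p in zip(coeffs, payloads)]
    for col in range(n_blocks):
        piv = next(r for r in range(col, len(rows)) if rows[r][col])
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = pow(rows[col][col], P - 2, P)  # modular inverse via Fermat
        rows[col] = [x * inv % P for x in rows[col]]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [row[n_blocks:] for row in rows[:n_blocks]]
```

Because the coefficient rows are independent by construction, any n_blocks received packets suffice to decode, which is the property a guaranteed-dependency-free generator buys.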
A Unified Algebraic and Logic-Based Framework Towards Safe Routing Implementations
2015-08-13
Software-defined Networks (SDN). We developed a declarative platform for implementing SDN protocols using declarative... and debugging several SDN applications. Example-based SDN synthesis. The recent emergence of software-defined networks offers an opportunity to design... domain of Software-defined Networks (SDN). We developed a declarative platform for implementing SDN protocols using declarative networking
Data-Driven Hint Generation from Peer Debugging Solutions
ERIC Educational Resources Information Center
Liu, Zhongxiu
2015-01-01
Data-driven methods have been a successful approach to generating hints for programming problems. However, the majority of previous studies are focused on procedural hints that aim at moving students to the next closest state to the solution. In this paper, I propose a data-driven method to generate remedy hints for BOTS, a game that teaches…
ERIC Educational Resources Information Center
Taylor, Karen A.
This review of the literature and annotated bibliography summarizes the available research relating to teaching programming to high school students. It is noted that, while the process of programming a computer could be broken down into five steps--problem definition, algorithm design, code writing, debugging, and documentation--current research…
Engineering High Assurance Distributed Cyber Physical Systems
2015-01-15
decisions: number of interacting agents and co-dependent decisions made in real-time without causing interference. To engineer a high assurance DART... environment specification, architecture definition, domain-specific languages, design patterns, code-generation, analysis, test-generation, and simulation... include synchronization between the models and source code, debugging at the model level, expression of the design intent, and quality of service
Young Children and Turtle Graphics Programming: Generating and Debugging Simple Turtle Programs.
ERIC Educational Resources Information Center
Cuneo, Diane O.
Turtle graphics is a popular vehicle for introducing children to computer programming. Children combine simple graphic commands to get a display screen cursor (called a turtle) to draw designs on the screen. The purpose of this study was to examine young children's abilities to function in a simple computer programming environment. Four- and…
Visual Debugging of Object-Oriented Systems With the Unified Modeling Language
2004-03-01
to be "the systematic and imaginative use of the technology of interactive computer graphics and the disciplines of graphic design, typography... Graphics, volume 23, no. 6, pp. 893-901, 1999. [SHN98] Shneiderman, B. Designing the User Interface: Strategies for Effective Human-Computer Interaction... System Design Objectives ... 44; 3.3 System Architecture
Teaching Conversations with the XDS Sigma 7. System Users Manual.
ERIC Educational Resources Information Center
Mosmann, Charles; Bork, Alfred M.
This manual is intended as a reference handbook for use in writing instructional dialogs on the Sigma-7 computer. The concern is to give concise information which one would need to write and debug dialogs on this system. Metasymbol, the macro-assembly program for the Sigma-7, is described. Definitions of terminology, legal forms, descriptions of…
SSL - THE SIMPLE SOCKETS LIBRARY
NASA Technical Reports Server (NTRS)
Campbell, C. E.
1994-01-01
The Simple Sockets Library (SSL) allows C programmers to develop systems of cooperating programs using Berkeley streaming Sockets running under the TCP/IP protocol over Ethernet. The SSL provides a simple way to move information between programs running on the same or different machines and does so with little overhead. The SSL can create three types of Sockets: namely a server, a client, and an accept Socket. The SSL's Sockets are designed to be used in a fashion reminiscent of the use of FILE pointers so that a C programmer who is familiar with reading and writing files will immediately feel comfortable with reading and writing with Sockets. The SSL consists of three parts: the library, PortMaster, and utilities. The user of the SSL accesses it by linking programs to the SSL library. The PortMaster initializes connections between clients and servers. The PortMaster also supports a "firewall" facility to keep out socket requests from unapproved machines. The "firewall" is a file which contains Internet addresses for all approved machines. There are three utilities provided with the SSL. SKTDBG can be used to debug programs that make use of the SSL. SPMTABLE lists the servers and port numbers on requested machine(s). SRMSRVR tells the PortMaster to forcibly remove a server name from its list. The package also includes two example programs: multiskt.c, which makes multiple accepts on one server, and sktpoll.c, which repeatedly attempts to connect a client to some server at one second intervals. SSL is a machine independent library written in the C-language for computers connected via Ethernet using the TCP/IP protocol. 
It has been successfully compiled and implemented on a variety of platforms, including Sun series computers running SunOS, DEC VAX series computers running VMS, SGI computers running IRIX, DECstations running ULTRIX, DEC alpha AXPs running OSF/1, IBM RS/6000 computers running AIX, IBM PC and compatibles running BSD/386 UNIX and HP Apollo 3000/4000/9000/400T computers running HP-UX. SSL requires 45K of RAM to run under SunOS and 80K of RAM to run under VMS. For use on IBM PC series computers and compatibles running DOS, SSL requires Microsoft C 6.0 and the Wollongong TCP/IP package. Source code for sample programs and debugging tools are provided. The documentation is available on the distribution medium in TeX and PostScript formats. The standard distribution medium for SSL is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. It is also available on a 3.5 inch diskette in UNIX tar format and a 5.25 inch 360K MS-DOS format diskette. The SSL was developed in 1992 and was updated in 1993.
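The FILE-pointer-style socket usage the SSL abstract describes has a close analogue in standard Berkeley sockets; the sketch below uses Python's stdlib `socket.makefile` wrappers rather than the SSL library itself, whose exact API is not shown in the abstract, to illustrate reading and writing a TCP stream like a file.

```python
import socket
import threading

def echo_server(sock):
    # Accept one client and echo a line back, reading and writing through
    # file-like wrappers, in the FILE-pointer style the abstract describes.
    conn, _ = sock.accept()
    with conn, conn.makefile("r") as r, conn.makefile("w") as w:
        line = r.readline()
        w.write("echo: " + line)
        w.flush()

# Server socket bound to an ephemeral port on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
t = threading.Thread(target=echo_server, args=(server,))
t.start()

# Client side: connect, write a line, then read the reply like a file.
client = socket.create_connection(server.getsockname())
with client, client.makefile("r") as r, client.makefile("w") as w:
    w.write("hello\n")
    w.flush()
    reply = r.readline()
t.join()
server.close()
```

This mirrors the server/client/accept socket trio in the abstract: `server` plays the server socket, `conn` the accept socket, and `client` the client socket.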
Space Fabrication Demonstration System
NASA Technical Reports Server (NTRS)
1978-01-01
The completion of assembly of the beam builder and its first automatic production of truss is discussed. A four-bay, hand-assembled truss of roll-formed members was built and tested to ultimate load. Detail design of the fabrication facility (beam builder) was completed, and designs for subsystem debugging are discussed. Many one-bay truss specimens were produced to demonstrate subsystem operation and to detect problem areas.
Experiences Building an Object-Oriented System in C++
NASA Technical Reports Server (NTRS)
Madany, Peter W.; Campbell, Roy H.; Kougiouris, Panagiotis
1991-01-01
This paper describes tools that we built to support the construction of an object-oriented operating system in C++. The tools provide the automatic deletion of unwanted objects, first-class classes, dynamically loadable classes, and class-oriented debugging. As a consequence of our experience building Choices, we advocate these features as useful, simplifying and unifying many aspects of system programming.
C Language Integrated Production System, Ada Version
NASA Technical Reports Server (NTRS)
Culbert, Chris; Riley, Gary; Savely, Robert T.; Melebeck, Clovis J.; White, Wesley A.; Mcgregor, Terry L.; Ferguson, Melisa; Razavipour, Reza
1992-01-01
CLIPS/Ada provides capabilities of CLIPS v4.3 but uses Ada as source language for CLIPS executable code. Implements forward-chaining rule-based language. Program contains inference engine and language syntax providing framework for construction of expert-system program. Also includes features for debugging application program. Based on Rete algorithm which provides efficient method for performing repeated matching of patterns. Written in Ada.
Apple (LCSI) LOGO vs. MIT (Terrapin/Krell) LOGO: A Comparison for Grades 2 thru 4.
ERIC Educational Resources Information Center
Wappler, Reinhold D.
Two LOGO dialects are compared for appropriateness for use with second, third, and fourth grade students on the basis of 18 months of experience with teaching the LOGO programming language at this level in a four-machine laboratory setting. Benefits and drawbacks of the dialects are evaluated in the areas of editing, screen modes, debugging,…
Information Processing Approaches to Cognitive Development
1989-08-04
O'Connor (Eds.), Intelligence and learning. New York: Plenum Press. DeLoache, J.S. (1988). The development of representation in young children. In H.W... Klahr, D., & Carver, S.M. (1988). Cognitive objectives in a LOGO debugging curriculum: Instruction, learning, and transfer. Cognitive Psychology, 20... Production system models of learning and development. Cambridge, MA: MIT Press. TWO KINDS OF INFORMATION PROCESSING APPROACHES TO COGNITIVE DEVELOPMENT
A Framework for Debugging Geoscience Projects in a High Performance Computing Environment
NASA Astrophysics Data System (ADS)
Baxter, C.; Matott, L.
2012-12-01
High performance computing (HPC) infrastructure has become ubiquitous in today's world with the emergence of commercial cloud computing and academic supercomputing centers. Teams of geoscientists, hydrologists and engineers can take advantage of this infrastructure to undertake large research projects - for example, linking one or more site-specific environmental models with soft computing algorithms, such as heuristic global search procedures, to perform parameter estimation and predictive uncertainty analysis, and/or design least-cost remediation systems. However, the size, complexity and distributed nature of these projects can make identifying failures in the associated numerical experiments using conventional ad-hoc approaches both time-consuming and ineffective. To address these problems a multi-tiered debugging framework has been developed. The framework allows for quickly isolating and remedying a number of potential experimental failures, including: failures in the HPC scheduler; bugs in the soft computing code; bugs in the modeling code; and permissions and access control errors. The utility of the framework is demonstrated via application to a series of over 200,000 numerical experiments involving a suite of 5 heuristic global search algorithms and 15 mathematical test functions serving as cheap analogues for the simulation-based optimization of pump-and-treat subsurface remediation systems.
Simulation Testing of Embedded Flight Software
NASA Technical Reports Server (NTRS)
Shahabuddin, Mohammad; Reinholtz, William
2004-01-01
Virtual Real Time (VRT) is a computer program for testing embedded flight software by computational simulation in a workstation, in contradistinction to testing it in its target central processing unit (CPU). The disadvantages of testing in the target CPU include the need for an expensive test bed, the necessity for testers and programmers to take turns using the test bed, and the lack of software tools for debugging in a real-time environment. By virtue of its architecture, most of the flight software of the type in question is amenable to development and testing on workstations, for which there is an abundance of commercially available debugging and analysis software tools. Unfortunately, the timing of a workstation differs from that of a target CPU in a test bed. VRT, in conjunction with closed-loop simulation software, provides a capability for executing embedded flight software on a workstation in a close-to-real-time environment. A scale factor is used to convert between execution time in VRT on a workstation and execution on a target CPU. VRT includes high-resolution operating-system timers that enable the synchronization of flight software with simulation software and ground software, all running on different workstations.
Finding idle machines in a workstation-based distributed system
NASA Technical Reports Server (NTRS)
Theimer, Marvin M.; Lantz, Keith A.
1989-01-01
The authors describe the design and performance of scheduling facilities for finding idle hosts in a workstation-based distributed system. They focus on the tradeoffs between centralized and decentralized architectures with respect to scalability, fault tolerance, and simplicity of design, as well as several implementation issues of interest when multicast communication is used. They conclude that the principal tradeoff between the two approaches is that a centralized architecture can be scaled to a significantly greater degree and can more easily monitor global system statistics, whereas a decentralized architecture is simpler to implement.
A Real-Time Executive for Multiple-Computer Clusters.
1984-12-01
in a real-time environment is tantamount to speed and efficiency. By effectively co-locating real-time sensors and related processing modules, real...of which there are two kinds: multicast group address - virtually any number of node groups can be assigned a group address so they are all able...interface loopback, internal loopback, clear loopback, go offline, go online, and onboard diagnostic modes are selected by writing the corresponding control codes
The Virtual Brain: a simulator of primate brain network dynamics.
Sanz Leon, Paula; Knock, Stuart A; Woodman, M Marmaduke; Domide, Lia; Mersmann, Jochen; McIntosh, Anthony R; Jirsa, Viktor
2013-01-01
We present The Virtual Brain (TVB), a neuroinformatics platform for full brain network simulations using biologically realistic connectivity. This simulation environment enables the model-based inference of neurophysiological mechanisms across different brain scales that underlie the generation of macroscopic neuroimaging signals, including functional MRI (fMRI), EEG, and MEG. Researchers from different backgrounds can benefit from an integrative software platform including a supporting framework for data management (generation, organization, storage, integration, and sharing) and a simulation core written in Python. TVB allows the reproduction and evaluation of personalized configurations of the brain by using individual subject data. This personalization facilitates an exploration of the consequences of pathological changes in the system, permitting investigation of potential ways to counteract such unfavorable processes. The architecture of TVB supports interaction with MATLAB packages, for example, the well-known Brain Connectivity Toolbox. TVB can be used in a client-server configuration, such that it can be remotely accessed through the Internet thanks to its web-based HTML5, JS, and WebGL graphical user interface. TVB is also accessible as a standalone cross-platform Python library and application, and users can interact with the scientific core through the scripting interface IDLE, enabling easy modeling, development, and debugging of the scientific kernel. This second interface makes TVB extensible by combining it with other libraries and modules developed by the Python scientific community. In this article, we describe the theoretical background and foundations that led to the development of TVB, the architecture and features of its major software components, as well as potential neuroscience applications.
PMID: 23781198
The LHEA PDP 11/70 graphics processing facility users guide
NASA Technical Reports Server (NTRS)
1978-01-01
A compilation of all information needed to allow the inexperienced user to program on the PDP 11/70. Information regarding the use of editing and file-manipulation utilities, as well as operational procedures, is included. The inexperienced user is taken through the process of creating, editing, compiling, task building, and debugging his/her FORTRAN program. Documentation on additional software is also included.
Hierarchical Task Network Prototyping in Unity3D
2016-06-01
visually debug. Here we present a solution for prototyping HTNs by extending an existing commercial implementation of Behavior Trees within the Unity3D game ...HTN, dynamic behaviors, behavior prototyping, agent-based simulation, entity-level combat model, game engine, discrete event simulation, virtual...commercial implementation of Behavior Trees within the Unity3D game engine prior to building the HTN in COMBATXXI. Existing HTNs were emulated within
Approaches to Debugging at Scale on the Peregrine System | High-Performance
nodes=100 walltime=1:00:00:00 -A CSC001 - this asks for 100 nodes for 1 day. When the nodes are available, start an interactive debugger such as TotalView. When you are done working, exit the queue. When you want to disconnect from the session, type control-A then d. The interactive job
Embracing Statistical Challenges in the Information Technology Age
2006-01-01
computation and feature selection. Moreover, two research projects on network tomography and arctic cloud detection are used throughout the paper to bring...prominent Network Tomography problem, origin-destination (OD) traffic estimation. It demonstrates well how the two modes of data collection interact...software debugging (Liblit et al., 2005 [2]), and network tomography for computer network management. Computer system problems exist long before the IT
Techniques for the Detection of Faulty Packet Header Modifications
2014-03-12
layer approaches to check if packets are being altered by middleboxes and were primarily developed as network neutrality analysis tools. Switzerland works...local and metropolitan area networks - Specific requirements Part 11: Wireless LAN medium access control (MAC) and physical layer (PHY) specifications...policy or position of the Department of Defense or the U.S. Government. Understanding, measuring, and debugging IP networks, particularly across
The QCDSP project —a status report
NASA Astrophysics Data System (ADS)
Chen, Dong; Chen, Ping; Christ, Norman; Edwards, Robert; Fleming, George; Gara, Alan; Hansen, Sten; Jung, Chulwoo; Kaehler, Adrian; Kasow, Steven; Kennedy, Anthony; Kilcup, Gregory; Luo, Yubin; Malureanu, Catalin; Mawhinney, Robert; Parsons, John; Sexton, James; Sui, Chengzhong; Vranas, Pavlos
1998-01-01
We give a brief overview of the massively parallel computer project that has been underway for nearly four years, centered at Columbia University. A 6 Gflops and a 50 Gflops machine are presently being debugged for installation at OSU and SCRI, respectively, while a 0.4 Tflops machine is under construction for Columbia and a 0.6 Tflops machine is planned for the new RIKEN Brookhaven Research Center.
Data Provenance as a Tool for Debugging Hydrological Models based on Python
NASA Astrophysics Data System (ADS)
Wombacher, A.; Huq, M.; Wada, Y.; Van Beek, R.
2012-12-01
There is an increase in data volume used in hydrological modeling. The increasing data volume requires additional efforts in debugging models, since a single output value is influenced by a multitude of input values. Thus, it is difficult to keep an overview of the data dependencies. Further, even knowing these dependencies, it is a tedious job to infer all the relevant data values. The aforementioned data dependencies are also known as data provenance, i.e. the determination of how a particular value has been created and processed. The proposed tool infers the data provenance automatically from a Python script and visualizes the dependencies as a graph, without executing the script. To debug the model, the user specifies the value of interest in space and time. The tool infers all related data values and displays them in the graph. The tool has been evaluated by hydrologists developing a model for estimating the global water demand [1]. The model uses multiple different data sources. The script we analysed has 120 lines of code and used more than 3000 individual files, each of them representing a raster map of 360*720 cells. After importing the data of the files into a SQLite database, the data consumes around 40 GB of storage. Using the proposed tool, a modeler is able to select individual values and infer which values have been used to calculate them. Especially in cases of outliers or missing values, it is a beneficial tool that provides the modeler with efficient information to investigate the unexpected behavior of the model. The proposed tool can be applied to many Python scripts and has been tested with other scripts in different contexts. In case a Python script contains an unknown function or class, the tool requests additional information about the used function or class to enable the inference. This information has to be entered only once and can be shared with colleagues or in the community. Reference [1] Y. Wada, L. P. H. van Beek, D. Viviroli, H. H. Dürr, R. Weingartner, and M. F. P. Bierkens, "Global monthly water stress: 2. Water demand and severity of water stress," Water Resources Research, vol. 47, 2011.
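The kind of static provenance inference described above can be sketched with Python's standard ast module: build a variable-level dependency graph from the script without executing it, then walk the graph backwards from a value of interest. The helper names and the toy script are illustrative assumptions, not the authors' actual tool.

```python
import ast

def dependency_graph(source):
    """Build a variable-level dependency graph from a script without
    executing it: for each assignment, record which names the
    right-hand side reads (including called function names)."""
    deps = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign):
            read = {n.id for n in ast.walk(node.value)
                    if isinstance(n, ast.Name)}
            for target in node.targets:
                if isinstance(target, ast.Name):
                    deps.setdefault(target.id, set()).update(read)
    return deps

def provenance(deps, name):
    """Transitively collect every name that influences `name`."""
    seen, stack = set(), [name]
    while stack:
        for parent in deps.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# Hypothetical three-line model script, standing in for a real one.
script = """
rainfall = load("rain.map")
demand = population * use_per_capita
stress = demand / rainfall
"""
deps = dependency_graph(script)
print(sorted(provenance(deps, "stress")))
# ['demand', 'load', 'population', 'rainfall', 'use_per_capita']
```

A real tool would also track array indices so a single cell's provenance can be isolated in space and time, but the graph-walk principle is the same.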
Automation Hooks Architecture for Flexible Test Orchestration - Concept Development and Validation
NASA Technical Reports Server (NTRS)
Lansdowne, C. A.; Maclean, John R.; Winton, Chris; McCartney, Pat
2011-01-01
The Automation Hooks Architecture Trade Study for Flexible Test Orchestration sought a standardized, data-driven alternative to conventional automated test programming interfaces. The study recommended composing the interface using multicast DNS service discovery (mDNS/DNS-SD), Representational State Transfer (RESTful) Web services, and Automatic Test Markup Language (ATML). We describe additional efforts to rapidly mature the Automation Hooks Architecture candidate interface definition by validating it in a broad spectrum of applications. These activities have allowed us to further refine our concepts and provide observations directed toward objectives of economy, scalability, versatility, performance, severability, maintainability, scriptability, and others.
Integrating security in a group oriented distributed system
NASA Technical Reports Server (NTRS)
Reiter, Michael; Birman, Kenneth; Gong, Li
1992-01-01
A distributed security architecture is proposed for incorporation into group oriented distributed systems, and in particular, into the Isis distributed programming toolkit. The primary goal of the architecture is to make common group oriented abstractions robust in hostile settings, in order to facilitate the construction of high performance distributed applications that can tolerate both component failures and malicious attacks. These abstractions include process groups and causal group multicast. Moreover, a delegation and access control scheme is proposed for use in group oriented systems. The focus is the security architecture; particular cryptosystems and key exchange protocols are not emphasized.
An approach to verification and validation of a reliable multicasting protocol: Extended Abstract
NASA Technical Reports Server (NTRS)
Callahan, John R.; Montgomery, Todd L.
1995-01-01
This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. This initial version did not handle off-nominal cases such as network partitions or site failures. Meanwhile, the V&V team concurrently developed a formal model of the requirements using a variant of SCR-based state tables. Based on these requirements tables, the V&V team developed test cases to exercise the implementation. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test in the model and implementation agreed, then the test either found a potential problem or verified a required behavior. However, if the execution of a test was different in the model and implementation, then the differences helped identify inconsistencies between the model and implementation. In either case, the dialogue between both teams drove the co-evolution of the model and implementation. 
We have found that this interactive, iterative approach to development allows software designers to focus on delivery of nominal functionality while the V&V team can focus on analysis of off-nominal cases. Testing serves as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP. Although RMP has provided our research effort with a rich set of test cases, it also has practical applications within NASA. For example, RMP is being considered for use in the NASA EOSDIS project due to its significant performance benefits in applications that need to replicate large amounts of data to many network sites.
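The model-based testing loop described above can be sketched as running the same test case through a formal state model and the implementation, then comparing observable behavior. The toy ordered-delivery logic below is a hypothetical stand-in for RMP's protocol machine, not its actual specification.

```python
def model_deliver(events):
    """Reference state model: received messages become deliverable,
    in order, only after an explicit 'commit' event."""
    delivered, pending = [], []
    for kind, msg in events:
        if kind == "recv":
            pending.append(msg)
        elif kind == "commit":
            delivered.extend(sorted(pending))
            pending = []
    return delivered

def impl_deliver(events):
    """The 'implementation' under test; here it happens to agree
    with the model, so the check below passes."""
    out, buf = [], []
    for kind, msg in events:
        if kind == "recv":
            buf.append(msg)
        elif kind == "commit":
            out += sorted(buf)
            buf.clear()
    return out

def check(test_case):
    """Model-based test: any divergence flags either a bug in the
    implementation or an infidelity in the model."""
    return model_deliver(test_case) == impl_deliver(test_case)

print(check([("recv", 2), ("recv", 1), ("commit", None)]))  # True
```

In the paper's process, a failed check drives the dialogue between the design and V&V teams; either the implementation is fixed or the model is brought back into fidelity.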
Consistent detection of global predicates
NASA Technical Reports Server (NTRS)
Cooper, Robert; Marzullo, Keith
1991-01-01
A fundamental problem in debugging and monitoring is detecting whether the state of a system satisfies some predicate. If the system is distributed, then the resulting uncertainty in the state of the system makes such detection, in general, ill-defined. Three algorithms are presented for detecting global predicates in a well-defined way. These algorithms do so by interpreting predicates with respect to the communication that has occurred in the system.
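One well-defined interpretation of global-predicate detection relies on vector clocks: the predicate is evaluated only over consistent global states, i.e. cuts that could actually have occurred given the communication observed. A minimal sketch follows; the function names are ours, not the paper's.

```python
def happened_before(a, b):
    """Vector-clock partial order: event a causally precedes b iff
    a's clock is componentwise <= b's and the clocks differ."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def is_consistent_cut(clocks):
    """clocks[i] is the vector clock of process i's chosen local state.
    The cut is consistent iff no process has 'seen' more of process i
    than process i's own chosen state records: clocks[i][i] must
    dominate clocks[j][i] for every j."""
    n = len(clocks)
    return all(clocks[i][i] >= clocks[j][i]
               for i in range(n) for j in range(n))

# p0 sends a message (clock [1,0]); p1 receives it (clock [1,1]).
print(is_consistent_cut([[1, 0], [1, 1]]))  # True: the send is included
print(is_consistent_cut([[0, 0], [1, 1]]))  # False: receive without send
print(happened_before([1, 0], [1, 1]))      # True
```

A detector then evaluates the predicate only on cuts passing this test, which is one way to make "the system satisfied P" well-defined despite the uncertainty the abstract mentions.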
Viewer: a User Interface for Failure Region Analysis
1990-12-01
another possible area of continued research. The program could detect whether the user is a beginner, intermediate, or expert and provide different...interfaces for each level. The beginner level would provide detailed help functions, and prompt the user with detailed explanations of what the program...June 1990. Brooke, J.B. and Duncan, K.D., "Experimental Studies of Flowchart Use at Different Stages of Program Debugging" (Ergonomics, Vol 23, No
Assessing GPS Constellation Resiliency in an Urban Canyon Environment
2015-03-26
Taipei, Taiwan as his area of interest. His GPS constellation is modeled in the Satellite Toolkit (STK) where augmentation satellites can be added and...interaction. SEAS also provides a visual display of the simulation which is useful for verification and debugging portions of the analysis. Furthermore...entire system. Interpreting the model is aided by the visual display of the agents moving in the region of interest. Furthermore, SEAS collects
Electronic and software subsystems for an autonomous roving vehicle. M.S. Thesis
NASA Technical Reports Server (NTRS)
Doig, G. A.
1980-01-01
The complete electronics packaging which controls the Mars roving vehicle is described in order to provide a broad overview of the systems that are part of that package. Some software debugging tools are also discussed. Particular emphasis is given to those systems that are controlled by the microprocessor. These include the laser mast, the telemetry system, the command link prime interface board, and the prime software.
Characterizing and Implementing Efficient Primitives for Privacy-Preserving Computation
2015-07-01
the mobile device. From this, the mobile will detect any tampering from the malicious party by a discrepancy in these returned values, eliminating...the need for an output MAC. If no tampering is detected, the mobile device then decrypts the output of computation. APPROVED FOR PUBLIC RELEASE...useful error messages when the compiler detects a problem with an application, making debugging the application significantly easier than with other
Observations and Results Gained from the Jade Project
2002-05-04
project, different dependency-based models have been created that vary in their levels of...the Java programming language. Currently, exception handling and...Column T denotes the number of tests of the respective test series...presentation in the debugging of software to reduce the problem of structural faults in...error diagnosis in logic programs. In Proceedings 13th
NASA Technical Reports Server (NTRS)
Jaworski, Allan; Lavallee, David; Zoch, David
1987-01-01
The prototype demonstrates the feasibility of using Ada for expert systems and the implementation of an expert-friendly interface which supports knowledge entry. In the Ford LISP-Ada Connection (FLAC) system LISP and Ada are used in ways which complement their respective capabilities. Future investigation will concentrate on the enhancement of the expert knowledge entry/debugging interface and on the issues associated with multitasking and real-time expert systems implementation in Ada.
6 DOF Nonlinear AUV Simulation Toolbox
1997-01-01
is to supply a flexible 3D simulation platform for motion visualization, in-lab debugging, and testing of mission-specific strategies as well as those...Explorer are modularly designed [Smith] in order to cut time and cost for vehicle reconfiguration. A flexible 3D simulation platform is desired to...3D models. Current implemented modules include a nonlinear dynamic model for the OEX, shared memory and semaphore manager tools, shared memory monitor
A distributed data acquisition software scheme for the Laboratory Telerobotic Manipulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, P.L.; Glassell, R.L.; Rowe, J.C.
1990-01-01
A custom software architecture was developed for use in the Laboratory Telerobotic Manipulator (LTM) to provide support for the distributed data acquisition electronics. This architecture was designed to provide a comprehensive development environment that proved to be useful for both hardware and software debugging. This paper describes the development environment and the operational characteristics of the real-time data acquisition software. 8 refs., 5 figs.
Plan Debugging Using Approximate Domain Theories.
1995-03-01
compelling suggestion that generative planning systems solving large problems will need to exploit the control information implicit in uncertain...control information implicit in uncertain information may well lead the planner to expand one portion of a plan at one point, and a separate portion of...solutions that have been proposed are to abandon declarativism (as suggested in the work on situated automata theory and its variants [1, 16, 56, 72
Instrumentation, performance visualization, and debugging tools for multiprocessors
NASA Technical Reports Server (NTRS)
Yan, Jerry C.; Fineman, Charles E.; Hontalas, Philip J.
1991-01-01
The need for computing power has forced a migration from serial computation on a single processor to parallel processing on multiprocessor architectures. However, without effective means to monitor (and visualize) program execution, debugging and tuning parallel programs become intractably difficult as program complexity increases with the number of processors. Research on performance evaluation tools for multiprocessors is being carried out at ARC. Besides investigating new techniques for instrumenting, monitoring, and presenting the state of parallel program execution in a coherent and user-friendly manner, prototypes of software tools are being incorporated into the run-time environments of various hardware testbeds to evaluate their impact on user productivity. Our current tool set, the Ames Instrumentation Systems (AIMS), incorporates features from various software systems developed in academia and industry. The execution of FORTRAN programs on the Intel iPSC/860 can be automatically instrumented and monitored. Performance data collected in this manner can be displayed graphically on workstations supporting X-Windows. We have successfully compared various parallel algorithms for computational fluid dynamics (CFD) applications in collaboration with scientists from the Numerical Aerodynamic Simulation Systems Division. By performing these comparisons, we show that performance monitors and debuggers such as AIMS are practical and can illuminate the complex dynamics that occur within parallel programs.
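Automatic instrumentation of the kind AIMS performs can be illustrated, much simplified, with a decorator that records entry and exit events for later display. AIMS itself instruments FORTRAN on the iPSC/860, so everything below is only an analogy in Python, with names of our own choosing.

```python
import functools
import time

trace_log = []  # (event, function, timestamp) records, in program order

def instrument(fn):
    """Wrap a function so entry/exit events are recorded, the way an
    automatic instrumenter rewrites routines before a monitored run."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        trace_log.append(("enter", fn.__name__, time.perf_counter()))
        try:
            return fn(*args, **kwargs)
        finally:
            trace_log.append(("exit", fn.__name__, time.perf_counter()))
    return wrapper

@instrument
def solve(n):
    # Stand-in for a numeric kernel worth profiling.
    return sum(i * i for i in range(n))

solve(1000)
print([(ev, name) for ev, name, _ in trace_log])
# [('enter', 'solve'), ('exit', 'solve')]
```

A visualization front end would consume such a log, per processor, to draw timelines; the timestamps are what make cross-processor dynamics visible.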
Symphony: A Framework for Accurate and Holistic WSN Simulation
Riliskis, Laurynas; Osipov, Evgeny
2015-01-01
Research on wireless sensor networks has progressed rapidly over the last decade, and these technologies have been widely adopted for both industrial and domestic uses. Several operating systems have been developed, along with a multitude of network protocols for all layers of the communication stack. Industrial Wireless Sensor Network (WSN) systems must satisfy strict criteria and are typically more complex and larger in scale than domestic systems. Together with the non-deterministic behavior of network hardware in real settings, this greatly complicates the debugging and testing of WSN functionality. To facilitate the testing, validation, and debugging of large-scale WSN systems, we have developed a simulation framework that accurately reproduces the processes that occur inside real equipment, including both hardware- and software-induced delays. The core of the framework consists of a virtualized operating system and an emulated hardware platform that is integrated with the general purpose network simulator ns-3. Our framework enables the user to adjust the real code base as would be done in real deployments and also to test the boundary effects of different hardware components on the performance of distributed applications and protocols. Additionally we have developed a clock emulator with several different skew models and a component that handles sensory data feeds. The new framework should substantially shorten WSN application development cycles. PMID:25723144
Solutions and debugging for data consistency in multiprocessors with noncoherent caches
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernstein, D.; Mendelson, B.; Breternitz, M. Jr.
1995-02-01
We analyze two important problems that arise in shared-memory multiprocessor systems. The stale data problem involves ensuring that data items in local memory of individual processors are current, independent of writes done by other processors. False sharing occurs when two processors have copies of the same shared data block but update different portions of the block. The false sharing problem involves guaranteeing that subsequent writes are properly combined. In modern architectures these problems are usually solved in hardware, by exploiting mechanisms for hardware controlled cache consistency. This leads to more expensive and nonscalable designs. Therefore, we are concentrating on software methods for ensuring cache consistency that would allow for affordable and scalable multiprocessing systems. Unfortunately, providing software control is nontrivial, both for the compiler writer and for the application programmer. For this reason we are developing a debugging environment that will facilitate the development of compiler-based techniques and will help the programmer to tune his or her application using explicit cache management mechanisms. We extend the notion of a race condition for IBM Shared Memory System POWER/4, taking into consideration its noncoherent caches, and propose techniques for detection of false sharing problems. Identification of the stale data problem is discussed as well, and solutions are suggested.
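The false sharing problem and its software remedy can be illustrated with a toy block cache that tracks per-byte dirty bits, so that two processors flushing the same block merge their updates rather than overwrite each other's. This is a schematic model of write-combining, not the POWER/4 mechanism itself.

```python
class CachedBlock:
    """Sketch of software write-combining: each processor keeps a
    private copy of a shared block plus a dirty mask of the bytes it
    wrote, so a flush merges only locally modified bytes instead of
    writing back the whole block."""

    def __init__(self, shared, size):
        self.shared = shared
        self.local = bytearray(shared)   # private copy at fetch time
        self.dirty = [False] * size

    def write(self, i, value):
        self.local[i] = value
        self.dirty[i] = True

    def flush(self):
        # Merge only the bytes this processor actually wrote.
        for i, d in enumerate(self.dirty):
            if d:
                self.shared[i] = self.local[i]

shared = bytearray(4)
p0 = CachedBlock(shared, 4)
p1 = CachedBlock(shared, 4)
p0.write(0, 11)   # p0 updates one portion of the block
p1.write(3, 22)   # p1 updates a different portion
p0.flush()
p1.flush()        # a naive whole-block writeback here would lose p0's byte
print(list(shared))   # [11, 0, 0, 22]
```

With whole-block writeback, p1's flush would copy its stale byte 0 over p0's update; the dirty mask is the software analogue of hardware byte-merge on writeback.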
Rock Deformation at High Confining Pressure and Temperature.
debugged, delivered, and installed to the contracting agency. Clay specimens of illite, kaolinite, and montmorillonite were deformed in tri-axial compression...at 25 and 300°C at a constant confining pressure of 2 kb and a constant strain rate of 0.0001/sec. The illite and kaolinite are stronger under these...conditions than montmorillonite. Cores from dolomite single crystals were deformed at a confining pressure of 7 kb and temperatures of 300 and 500°C
REDIR: Automated Static Detection of Obfuscated Anti-Debugging Techniques
2014-03-27
analyzing code samples that resist other forms of analysis. 2.5.6 RODS and HASTI: Software Engineering Cognitive Support Software Engineering (SE) is another...and (c) this method is resistant to common obfuscation techniques. To achieve this goal, the Data/Frame sensemaking theory guides the process of...No Starch Press, 2012. [46] C.-W. Hsu, S. W. Shieh et al., “Divergence Detector: A Fine-Grained Approach to Detecting VM-Awareness Malware,” in
Project UNITY: Cross Domain Visualization Collaboration
2015-10-18
location is at the Space Operations Coordination Center (UK-SPOCC) in High Wycombe, UK. Identical AFRL-developed ErgoWorkstations (see Figure 2) were...installed in both locations. The AFRL ErgoWorkstation is made up of a high performance Windows-based PC with three displays: two 30” Dell Cinema ...system can be seen in Figure 1. The intent of using identical hardware is to minimize complexity, to simplify debugging, and to provide an opportunity
Monitoring and tracing of critical software systems: State of the work and project definition
2008-12-01
analysis, troubleshooting and debugging. Some of these subsystems already come with ad hoc tracers for events like wireless connections or SCSI disk...SQLite). Additional synthetic events (e.g. states) are added to the database. The database thus consists of contexts (process, CPU, state), event...capability on a [operating] system-by-system basis. Additionally, the mechanics of querying the data in an ad-hoc manner outside the boundaries of the
Performance management system enhancement and maintenance
NASA Technical Reports Server (NTRS)
Cleaver, T. G.; Ahour, R.; Johnson, B. R.
1984-01-01
The research described in this report concludes a two-year effort to develop a Performance Management System (PMS) for the NCC computers. PMS provides semi-automated monthly reports to NASA and contractor management on the status and performance of the NCC computers in the TDRSS program. Throughout 1984, PMS was tested, debugged, extended, and enhanced. Regular PMS monthly reports were produced and distributed. PMS continues to operate at the NCC under control of Bendix Corp. personnel.
Sample Batch Scripts for Running Jobs on the Peregrine System
A sample script for a serial job in the debug queue:
#!/bin/bash
#PBS -l nodes=1:ppn=1,walltime=500
#PBS -N test1
#PBS -l nodes=1          # one node
#PBS -N test1            # Name of job
#PBS -A CSC001           # project handle
#PBS -q short            # short queue
#PBS -l nodes=4:ppn=24   # Number of nodes, put 24 processes on each
The Design and Implementation of a Data Flow Multiprocessor.
1981-12-01
to thank Captain Charles Papp who taught me how to use the logic analyzer and the storage oscilloscope. Without these tools, I could never have...debugged and repaired the microprocessors. Finally, I wish to thank my thesis readers, Major Charles Lillie and Major Walt Seward, for taking valuable time...Neumann/Babbage architecture with a data flow architecture. The next section describes the benefits of data flow computing. The following section
1980-01-15
Code B364078464 V99QAXNH30303 H2590D. KEY WORDS (Continue on reverse side if necessary and identify by block number): Strategic Targeting, Copper Industry, INDATAK...develop, debug and test an industrial simulation model (INDATAK) using the LOGATAK model as a point of departure. The copper processing industry is...significant processes in the copper industry, including the transportation network connecting the processing elements, have been formatted for use in
Programming Environments Based on Structured Editors: The MENTOR Experience,
1980-07-01
ambitious plan has been actually implemented in MENTOR-PASCAL. There are mostly two reasons for this, which are actually complementary aspects of...languages. As might be expected, these design criteria are closely related to those based on semantic considerations. We have good hope that the...d) it has reasonably good user interaction facilities: there are various debugging aids such as a trace package, an interrupt facility, and the user
Generating a 2D Representation of a Complex Data Structure
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
A computer program, designed to assist in the development and debugging of other software, generates a two-dimensional (2D) representation of a possibly complex n-dimensional (where n is an integer >2) data structure or abstract rank-n object in that other software. The nature of the 2D representation is such that it can be displayed on a non-graphical output device and distributed by non-graphical means.
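One way such a 2D rendering can work, sketched here as an assumption rather than the program's actual output format, is to emit one plain-text line per leaf, labeled with its full coordinate path, so any rank-n object prints on a non-graphical device.

```python
def render_2d(obj, path=()):
    """Render an arbitrarily nested (rank-n) structure as plain 2D
    text: one line per leaf, indexed by its coordinate path, so it can
    be printed on a non-graphical output device."""
    if isinstance(obj, (list, tuple)):
        lines = []
        for i, item in enumerate(obj):
            lines.extend(render_2d(item, path + (i,)))
        return lines
    # Leaf: show its coordinates and value.
    return ["[%s] = %r" % (", ".join(map(str, path)), obj)]

cube = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]   # a rank-3 object
print("\n".join(render_2d(cube)))
# first line: [0, 0, 0] = 1
```

The flattened listing is trivially distributable by non-graphical means (mail, logs, terminals), which is the property the abstract emphasizes.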
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holcomb, F.; Kroes, J.; Jessen, T.
1973-10-18
EZQUERY is a generalized information retrieval and reporting system developed by the Data Processing Services Department to provide a method of accessing and displaying information from common types of data-base files. By eliminating the costs and delays associated with coding and debugging special purpose programs, it produces simple reports. It was designed with the user in mind, and may be used by programmers and nonprogrammers to access data base files and obtain reports in a reasonably brief period of time. (auth)
IPACS Electronics: Comments on the Original Design and Current Efforts at Langley Research Center
NASA Technical Reports Server (NTRS)
Gowdey, J. C.
1983-01-01
The development of the integrated power and attitude control system (IPACS) is described. The power bridge was fabricated, and all major parts are in hand. The bridge was tested with a 1/4 HP motor for another program. The PWM, control logic, and upper bridge driver power supply are breadboarded and debugged prior to the start of testing on a passive load. The Hall sensor circuit for detecting rotor position is in design.
Causality-Preserving Timestamps in Distributed Programs
1993-06-01
monitoring, debugging, tachyon, causality. Abstract: A tachyon is an improperly ordered event in a distributed program. Tachyons are most often...that tachyons do in fact occur commonly in distributed programs on our Ethernet at Carnegie Mellon University, and we discuss some ways of...before it is sent) is called a tachyon. Clearly it is very disconcerting to try to debug a parallel program that contains tachyons. Of course, in "real
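Detecting tachyons in an observed event stream can be sketched as follows: with messages tagged by id, a message whose receive appears before its send in the monitor's observed order is improperly ordered. The event encoding is a hypothetical simplification of what a real monitor would record.

```python
def find_tachyons(events):
    """A tachyon is a message observed as received before it was sent:
    scan events in the monitor's observed order and flag any message
    id whose 'recv' precedes its 'send'."""
    send_seen = set()
    tachyons = []
    for kind, msg_id in events:
        if kind == "send":
            send_seen.add(msg_id)
        elif kind == "recv" and msg_id not in send_seen:
            tachyons.append(msg_id)
    return tachyons

observed = [("send", "m1"), ("recv", "m1"),
            ("recv", "m2"), ("send", "m2")]   # m2 observed out of order
print(find_tachyons(observed))   # ['m2']
```

Causality-preserving timestamps aim to make such violations impossible to observe, by ordering the monitor's event stream consistently with the happened-before relation rather than with raw arrival times.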
Color graphics, interactive processing, and the supercomputer
NASA Technical Reports Server (NTRS)
Smith-Taylor, Rudeen
1987-01-01
The development of a common graphics environment for the NASA Langley Research Center user community and the integration of a supercomputer into this environment is examined. The initial computer hardware, the software graphics packages, and their configurations are described. The addition of improved computer graphics capability to the supercomputer, and the utilization of the graphic software and hardware are discussed. Consideration is given to the interactive processing system which supports the computer in an interactive debugging, processing, and graphics environment.
NASA Technical Reports Server (NTRS)
Butler, C.; Kindle, E. C.
1984-01-01
The capabilities of the DIAL data acquisition system (DAS) for the remote measurement of atmospheric trace gas concentrations from ground and aircraft platforms were extended through the purchase and integration of other hardware and the implementation of improved software. An operational manual for the current system is presented. Hardware and peripheral device registers are outlined only as an aid in debugging any DAS problems which may arise.
1993-03-01
II. NON-COHERENT REFLECTOMETRY. The design of sources of steady-state intensive noise signals of the mm wave band with sufficiently wide and homogeneous...structures exhibit non-reciprocity effects, as well as magnetically controlled resonances, which are observable in reflection, absorption, and...performance of the oscillator. Accordingly, we designed a 3 mm electronically tuned harmonic oscillator in which it is easy to debug and control
2009-11-01
interest of scientific and technical information exchange. This work is sponsored by the U.S. Department of Defense. The Software Engineering Institute is a...an interesting continuum between how many different requirements a program must satisfy: the more complex and diverse the requirements, the more...Gender differences in approaches to end-user software development have also been reported in debugging feature usage [1] and in end-user web programming
Parallel-Processing Test Bed For Simulation Software
NASA Technical Reports Server (NTRS)
Blech, Richard; Cole, Gary; Townsend, Scott
1996-01-01
Second-generation Hypercluster computing system is multiprocessor test bed for research on parallel algorithms for simulation in fluid dynamics, electromagnetics, chemistry, and other fields with large computational requirements but relatively low input/output requirements. Built from standard, off-the-shelf hardware readily upgraded as improved technology becomes available. System used for experiments with such parallel-processing concepts as message-passing algorithms, debugging software tools, and computational steering. First-generation Hypercluster system described in "Hypercluster Parallel Processor" (LEW-15283).
Volume Sensor Canadian Demonstrator Prototype User’s Guide
2011-03-23
The “VSCS” checkbox controls whether or not all network communications traffic is logged locally for debugging purposes. All of the shown settings...given in the VSCS.Bridge application. On the “Clusters” tab, as shown in Figure 10-4, the information to form the VSCS string ID (SID) is shown...for VSCS communication that is specified on the “Destinations” tab. Changes are committed by pressing the [+] button. On the “Destinations” tab, as
An Evaluation of a Management Wargame and the Factors Affecting Game Performance.
1987-09-01
in residence. This is not a criticism of the author, but rather a systematic flaw in game development in general. Therefore, TEMPO-AI is an excellent...establish the test procedure used in this thesis. This stage of game development is absolutely vital if the game is intended for serious academic use...Unfortunately, this important step is sadly neglected in nearly all military game development. While TEMPO-AI was extensively debugged as a computer
The remote controlling technique based on the serial port for SR-620 universal counter
NASA Astrophysics Data System (ADS)
Su, Jian-Feng; Chen, Shu-Fang; Li, Xiao-Hui; Wu, Hai-Tao; Bian, Yu-Jing
2004-12-01
The function of the SR-620 universal counter and its remote work mode are introduced, and the remote controlling technique for the counter is analysed. A method of realizing remote control of the counter via the serial port, using an ActiveX control, is demonstrated. In addition, some practical points to note during debugging are discussed on the basis of experience, and an example program run for measuring time delay is presented.
United States Air Force College Science and Engineering Program. Volume 1
1988-12-01
with debugging and testing Potfit and AtmBis and for explaining the chemical concepts necessary to understand these two programs. Dr. Phil Christiansen...work interesting, and in general, making the summer an extremely informative experience. Mr. Russ Leighton gave me invaluable assistance in programming...help and guidance in all phases of my work. My gratitude also extends to Russ Leighton for his technical advice; to Les Tepe for his support; to my
NASA Technical Reports Server (NTRS)
Feller, A.
1978-01-01
The entire complement of standard cells and components, except for the set-reset flip-flop, was completed. Two levels of checking were performed on each device. Logic cells and topological layout are described. All the related computer programs were coded and one level of debugging was completed. The logic for the test chip was modified and updated. This test chip served as the first test vehicle to exercise the standard cell complementary MOS (C-MOS) automatic artwork generation capability.
The Modular Design and Production of an Intelligent Robot Based on a Closed-Loop Control Strategy.
Zhang, Libo; Zhu, Junjie; Ren, Hao; Liu, Dongdong; Meng, Dan; Wu, Yanjun; Luo, Tiejian
2017-10-14
Intelligent robots are part of a new generation of robots that are able to sense the surrounding environment, plan their own actions and eventually reach their targets. In recent years, reliance upon robots in both daily life and industry has increased. The protocol proposed in this paper describes the design and production of a handling robot with an intelligent search algorithm and an autonomous identification function. First, the various working modules are mechanically assembled to complete the construction of the work platform and the installation of the robotic manipulator. Then, we design a closed-loop control system and a four-quadrant motor control strategy and, with the aid of debugging software, set the steering gear identity (ID), baud rate and other working parameters to ensure that the robot achieves the desired dynamic performance and low energy consumption. Next, we debug the sensors to achieve multi-sensor fusion and accurately acquire environmental information. Finally, we implement the relevant algorithm, which can recognize the success of the robot's function for a given application. The advantage of this approach is its reliability and flexibility, as users can develop a variety of hardware construction programs and utilize the comprehensive debugger to implement an intelligent control strategy. This allows users to set personalized requirements based on their needs with high efficiency and robustness.
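The closed-loop control strategy described above can be sketched as a discrete PID speed loop. All gains, the first-order plant model and the setpoint below are illustrative assumptions, not parameters from the protocol:

```python
# Minimal discrete PID speed-control loop driving a crude first-order
# motor model. Gains, motor dynamics and setpoint are all hypothetical.

def pid_step(state, error, kp, ki, kd, dt):
    """One PID update; state carries the integral and the previous error."""
    integral, prev_err = state
    integral += error * dt
    derivative = (error - prev_err) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

def simulate(setpoint=100.0, steps=1000, dt=0.01):
    """Run the closed loop and return the final motor speed."""
    speed = 0.0            # motor speed, arbitrary units
    state = (0.0, 0.0)
    for _ in range(steps):
        u, state = pid_step(state, setpoint - speed,
                            kp=0.8, ki=2.0, kd=0.01, dt=dt)
        # first-order plant: speed responds to drive u with internal damping
        speed += (u - 0.5 * speed) * dt
    return speed
```

With the assumed gains the loop settles near the setpoint within a few simulated seconds; the integral term supplies the steady-state drive that the damping would otherwise eat.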
The Ruggedized STD Bus Microcomputer - A low cost computer suitable for Space Shuttle experiments
NASA Technical Reports Server (NTRS)
Budney, T. J.; Stone, R. W.
1982-01-01
Previous space flight computers have been costly in terms of both hardware and software. The Ruggedized STD Bus Microcomputer is based on the commercial Mostek/Pro-Log STD Bus. Ruggedized PC cards can be based on commercial cards from more than 60 manufacturers, reducing hardware cost and design time. Software costs are minimized by using standard 8-bit microprocessors and by debugging code using commercial versions of the ruggedized flight boards while the flight hardware is being fabricated.
Estimation in a discrete tail rate family of recapture sampling models
NASA Technical Reports Server (NTRS)
Gupta, Rajan; Lee, Larry D.
1990-01-01
In the context of recapture sampling design for debugging experiments, the problem of estimating the error or hitting rate of the faults remaining in a system is considered. Moment estimators are derived for a family of models in which the rate parameters are assumed proportional to the tail probabilities of a discrete distribution on the positive integers. The estimators are shown to be asymptotically normal and fully efficient. Their fixed-sample properties are compared, through simulation, with those of the conditional maximum likelihood estimators.
Automated Weather Observing System (AWOS) Demonstration Program.
1984-09-01
month "burn-in" or "debugging" period and a 10-month "useful life" period. The burn-in period was used to establish the Data Acquisition System...Histograms. Histograms provide a graphical means of showing how well the probability distribution of residuals approaches a normal or Gaussian distribution...Author(s): Paul J. O'Brien et al. Report No. DOT/FAA/CT-84/20
VLSI (Very Large Scale Integrated Circuits) Design with the MacPitts Silicon Compiler.
1985-09-01
the background. If the algorithm is not fully debugged, then issue instead "macpitts basename herald" so MacPitts diagnostics and Liszt diagnostics both...command interpreter. Upon compilation, however, the following LISP compiler (Liszt) diagnostic results: Error: Non-number to minus nil, where the first...language used in the MacPitts source code. The more instructive solution is to write the Franz LISP code to decide if a jumper wire is needed, and if so, to
The Design of the Digital Multiplexer based on Power Carrier Communication on Sports Venues
NASA Astrophysics Data System (ADS)
Lu, Ming-jing; Liang, Li; Yu, Xiao-yan
In this paper, a dual-CPU, low-power, low-cost digital multiplexer is designed on the basis of thorough research into this communication method, satisfying the needs of the electric power communication transmission system, especially in sports venues. The hardware and software design principles of the digital multiplexer are elaborated in detail, simulation is carried out using a single-chip microcomputer simulator, and satisfactory results have been achieved through debugging.
Comprehensive analysis of helicopters with bearingless rotors
NASA Technical Reports Server (NTRS)
Murthy, V. R.
1988-01-01
A modified Galerkin method is developed to analyze the dynamic problems of multiple-load-path bearingless rotor blades. The development and selection of functions closely parallel CAMRAD procedures, greatly facilitating the implementation of the method into the CAMRAD program. Software is developed implementing the modified Galerkin method to determine the free vibration characteristics of multiple-load-path rotor blades undergoing coupled flapwise bending, chordwise bending, twisting, and extensional motions. Results are being obtained as the software is debugged.
How to avoid the ten most frequent EMS pitfalls
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, W.
1982-04-19
It pays to do your homework before investing in an energy management system if you want to avoid the 10 most common pitfalls listed by users, consultants, and manufacturers as: oversimplification, improper maintenance, failure to involve operating personnel, inaccurate savings estimates, failure to include monitoring capability, incompetent or fraudulent firms, improper load control, not allowing for a de-bugging period, failure to include manual override, and software problems. The article describes how each of these pitfalls can lead to poor decisions and poor results. (DCK)
Development and application of structural dynamics analysis capabilities
NASA Technical Reports Server (NTRS)
Heinemann, Klaus W.; Hozaki, Shig
1994-01-01
Extensive research activities were performed in the area of multidisciplinary modeling and simulation of aerospace vehicles that are relevant to NASA Dryden Flight Research Facility. The efforts involved theoretical development, computer coding, and debugging of the STARS code. New solution procedures were developed in such areas as structures, CFD, and graphics, among others. Furthermore, systems-oriented codes were developed for rendering the code truly multidisciplinary and rather automated in nature. Also, work was performed in pre- and post-processing of engineering analysis data.
1994-01-20
Category 2 - Investigation/Debug Required; Table 3-1, Field Test Report Status/Corrective Action...in Table 3-1 in section 3.1. The Field Test Reports and SP/CR's are listed below for the two categories: Table 3.0-1, Category 1 - LADS PMO Direction...symbology, consisting of the laser code A - H plus the four-digit data field, shall be displayed for 10 seconds, after which time only
Survey and Recommendations for the Use of Microcomputers in the Naval Audit Service.
1987-03-01
capital investment * Higher maintenance costs * Longer design-time * Troublesome de-bugging during the start-up period * Serious compounding of downtime...traditional reviews have often failed to see the "total picture." This problem has been further compounded by the fact that conventional reviews are frequently...the auditor
NASA Technical Reports Server (NTRS)
Friend, J.
1971-01-01
A manual designed both as an instructional manual for beginning coders and as a reference manual for the coding language INSTRUCT is presented. The manual includes the major programs necessary to implement the teaching system and lists the limitations of the current implementation. A detailed description is given of how to code a lesson, what buttons to push, and what utility programs to use. Suggestions for debugging coded lessons and the error messages that may be received during assembly or while running the lesson are given.
A Process Elaboration Formalism for Writing and Analyzing Programs
1975-10-01
program is to be proved, a description of its (i) See [MANN 73] for a survey of these debugging tools, (ii) See [ELSPAS 72] for a complete review of this...by the instructions which might be found on a shampoo bottle: 1) Wet hair 2) Lather 3) Rinse 4) Repeat. Statement 4, the source of the problem,...for this simple algorithm is shown in Figure 52: SHAMPOO - WET-HAIR -> LATHER -> RINSE -> REPEAT
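The shampoo-bottle instructions quoted above are a classic nontermination example: an unqualified "Repeat" never lets the process stop. A small sketch (function names are illustrative) contrasts the literal reading with a bounded repair:

```python
# The shampoo-bottle "algorithm": an unqualified REPEAT makes the process
# nonterminating. We cap the naive version so it can be demonstrated.

def shampoo_naive(max_steps=10):
    """Literal reading: repeat forever (the cap stands in for 'forever')."""
    steps = []
    while len(steps) < max_steps:
        steps += ["wet-hair", "lather", "rinse"]
    return steps  # only the artificial cap ever stops this loop

def shampoo_fixed(times=2):
    """Repaired version: 'Repeat' is given an explicit bound."""
    steps = []
    for _ in range(times):
        steps += ["wet-hair", "lather", "rinse"]
    return steps

print(len(shampoo_fixed()))  # 6 steps, then the process terminates
```

This is exactly the kind of defect a process-elaboration formalism is meant to surface before the "program" runs.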
NASA Technical Reports Server (NTRS)
Birman, Kenneth; Cooper, Robert; Marzullo, Keith
1990-01-01
The ISIS project has developed a new methodology, virtual synchrony, for writing robust distributed software. High-performance multicast, large-scale applications, and wide-area networks are the focus of interest. Several interesting applications that exploit the strengths of ISIS, including an NFS-compatible replicated file system, are being developed. The META project addresses distributed control in a soft real-time environment incorporating feedback. This domain encompasses examples as diverse as monitoring inventory and consumption on a factory floor, and performing load-balancing on a distributed computing system. One of the first uses of META is for distributed application management: the tasks of configuring a distributed program, dynamically adapting to failures, and monitoring its performance. Recent progress and current plans are reported.
Wang, Zhong-Xu; Qin, Ru-Li; Li, Yu-Zhen; Zhang, Xue-Yan; Jia, Ning; Zhang, Qiu-Ling; Li, Gang; Zhao, Jie; Li, Huan-Huan; Jiang, Hai-Qiang
2011-08-01
To investigate work-related musculoskeletal disorders (WMSDs) among automobile assembly workers and to discuss the related risk factors and their relationships, 1508 automobile assembly workers selected from a northern car manufacturing company were taken as the study subjects. A hazard-zone jobs checklist, the Nordic musculoskeletal symptom questionnaire (NMQ) and a pain questionnaire were used in an epidemiological cross-sectional and retrospective survey of the workers' general status, awkward ergonomic factors and related influencing factors, and musculoskeletal disorders. The body sites predominantly affected by WMSDs among automobile assembly workers were the low back, wrist, neck and shoulders; the workshop sections where WMSDs predominantly occurred were the engine compartment, interior ornament, door cover, chassis and debugging sections. The predominant site among engine compartment and chassis section workers was the low back; among interior ornament workers, the low back and wrist; among door cover workers, the wrist; and among debugging workers, the neck and low back. Neck musculoskeletal disorders tended to increase with body height, and smoking may increase the occurrence of musculoskeletal disorders. WMSDs appear to be a serious ergonomic problem among automobile assembly workers; the predominant site varies with workshop section in a quite obvious pattern, probably related to the awkward work postures or activities involved. Worker height and smoking habits may be important factors affecting the occurrence of musculoskeletal disorders.
Li, Qiuying; Pham, Hoang
2017-01-01
In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many software reliability growth models (SRGMs) based on NHPP have been proposed to estimate software reliability measures, most of which share the following assumptions: 1) it is a common phenomenon that the fault detection rate changes during the testing phase; 2) as a result of imperfect debugging, fault removal is associated with a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might remain, and new faults might be introduced in the process, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to model fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs on three sets of real failure data using five criteria. The results show that the model gives better fitting and predictive performance.
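The paper's exact mean-value function is not reproduced in the abstract. The following generic sketch in the Goel-Okumoto family only illustrates how a fault-removal-efficiency parameter `p` and a fault-introduction rate `alpha` (both hypothetical names, not the paper's notation) can enter an NHPP model:

```python
import math

# Generic NHPP software-reliability sketch in the Goel-Okumoto family,
# extended with fault removal efficiency and fault introduction. This is
# an illustration of the ingredients, not the model proposed in the paper.

def mean_failures(t, a=100.0, b=0.1, p=0.9, alpha=0.05):
    """Expected cumulative failures by test time t.

    a     -- initial fault content
    b     -- fault detection rate
    p     -- fraction of detected faults actually removed (efficiency)
    alpha -- new faults introduced per fault removed
    """
    # Net removal per detection: p removed, alpha*p re-introduced
    c = b * p * (1.0 - alpha)
    return a * (1.0 - math.exp(-c * t))

def reliability(t, dt, **kw):
    """P(no failure in (t, t+dt]) under the NHPP assumption."""
    return math.exp(-(mean_failures(t + dt, **kw) - mean_failures(t, **kw)))
```

Imperfect debugging (`p < 1`, `alpha > 0`) slows the effective exponent, so the same detection rate yields fewer net removals and lower predicted reliability.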
NASA Astrophysics Data System (ADS)
Letts, J.; Magini, N.
2011-12-01
Tier-2 to Tier-2 data transfers have been identified as a necessary extension of the CMS computing model. The Debugging Data Transfers (DDT) Task Force in CMS was charged with commissioning Tier-2 to Tier-2 PhEDEx transfer links beginning in late 2009, originally to serve the needs of physics analysis groups for the transfer of their results between the storage elements of the Tier-2 sites associated with the groups. PhEDEx is the data transfer middleware of the CMS experiment. For analysis jobs using CRAB, the CMS Remote Analysis Builder, the challenges of remote stage out of job output at the end of the analysis jobs led to the introduction of a local fallback stage out, and will eventually require the asynchronous transfer of user data over essentially all of the Tier-2 to Tier-2 network using the same PhEDEx infrastructure. In addition, direct file sharing of physics and Monte Carlo simulated data between Tier-2 sites can relieve the operational load of the Tier-1 sites in the original CMS Computing Model, and already represents an important component of CMS PhEDEx data transfer volume. The experience, challenges and methods used to debug and commission the thousands of data transfers links between CMS Tier-2 sites world-wide are explained and summarized. The resulting operational experience with Tier-2 to Tier-2 transfers is also presented.
Research and design of portable photoelectric rotary table data-acquisition and analysis system
NASA Astrophysics Data System (ADS)
Yang, Dawei; Yang, Xiufang; Han, Junfeng; Yan, Xiaoxu
2015-02-01
Photoelectric rotary tables serve as the main tracking-measurement platforms and are widely used in shooting ranges and the aerospace field. To meet the demands of photoelectric test instruments and equipment in both laboratory and field applications within photoelectric tracking measurement systems, a portable data acquisition and analysis system for photoelectric rotary tables was researched and designed. The system hardware design, based on a Xilinx Virtex-4 series FPGA and its peripheral modules, is introduced, along with the host computer software design on the VC++ 6.0 programming platform using MFC class libraries. The system integrates data acquisition, display and storage, commissioning control, analysis, laboratory waveform playback, transmission and fault diagnosis, and offers the advantages of small volume, embeddability, high speed, portability and simple operation. Taking a photoelectric tracking turntable as the experimental object, the system hardware and software were tested; the results show that the system can acquire, analyze and process data from photoelectric tracking equipment, control turntable debugging well, and produce accurate and reliable measurements with good maintainability and extensibility. This research and design is of great significance for advancing the debugging, diagnosis, condition monitoring and fault analysis of photoelectric tracking measurement equipment, as well as the standardization and normalization of interfaces and the improvement of equipment maintainability, and it has innovative and practical value.
DDS as middleware of the Southern African Large Telescope control system
NASA Astrophysics Data System (ADS)
Maartens, Deneys S.; Brink, Janus D.
2016-07-01
The Southern African Large Telescope (SALT) software control system is realised as a distributed control system, implemented predominantly in National Instruments' LabVIEW. The telescope control subsystems communicate using cyclic, state-based messages. Currently, transmitting a message is accomplished by performing an HTTP PUT request to a WebDAV directory on a centralised Apache web server, while receiving is based on polling the web server for new messages. While the method works, it presents a number of drawbacks; a scalable distributed communication solution with minimal overhead is a better fit for control systems. This paper describes our exploration of the Data Distribution Service (DDS). DDS is a formal standard specification, defined by the Object Management Group (OMG), that presents a data-centric publish-subscribe model for distributed application communication and integration. It provides an infrastructure for platform-independent many-to-many communication. A number of vendors provide implementations of the DDS standard; RTI, in particular, provides a DDS toolkit for LabVIEW. This toolkit has been evaluated against the needs of SALT, and a few deficiencies have been identified. We have developed our own implementation that interfaces LabVIEW to DDS in order to address our specific needs. Our LabVIEW DDS interface implementation is built against the RTI DDS Core component, provided by RTI under their Open Community Source licence. Our needs dictate that the interface implementation be platform independent. Since we have access to the RTI DDS Core source code, we are able to build the RTI DDS libraries for any of the platforms on which we require support. The communications functionality is based on UDP multicasting. Multicasting is an efficient communications mechanism with low overheads which avoids duplicated point-to-point transmission of data on a network where there are multiple recipients of the data.
In this paper we present a performance evaluation of DDS against the current HTTP-based implementation as well as the historical DataSocket implementation. We conclude with a summary and describe future work.
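The UDP multicast transport the paper builds on can be sketched with the Python standard library alone. The group address 239.255.0.1 and port 5007 are arbitrary illustrative choices; DDS itself layers discovery, topics and QoS on top of a transport like this:

```python
import socket
import struct

# Minimal UDP multicast send/receive using only the standard library.
# Group address and port are arbitrary illustrative choices.

GROUP, PORT = "239.255.0.1", 5007

def make_receiver():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Join the multicast group on all interfaces
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    sock.settimeout(2.0)
    return sock

def send(message: bytes):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    # TTL of 1 keeps the datagram on the local network segment
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(message, (GROUP, PORT))
    sock.close()
```

One `send` reaches every joined receiver with a single datagram, which is the "avoids duplicated point-to-point transmission" property the abstract cites.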
The Priority Inversion Problem and Real-Time Symbolic Model Checking
1993-04-23
real time systems unpredictable in subtle ways. This makes it more difficult to implement and debug such systems. Our work discusses this problem and presents one possible solution. The solution is formalized and verified using temporal logic model checking techniques. In order to perform the verification, the BDD-based symbolic model checking algorithm given in previous works was extended to handle real-time properties using the bounded until operator. We believe that this algorithm, which is based on discrete time, is able to handle many real-time properties.
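The bounded-until operator mentioned above can be illustrated on an explicit state graph. The paper's algorithm operates symbolically on BDDs; this explicit-set sketch only mirrors the discrete-time fixed-point iteration, and all names are illustrative:

```python
# Sketch: checking the existential bounded-until property E[f U<=k g] on
# an explicit state graph by backward iteration -- the explicit-set
# analogue of the BDD-based symbolic computation.

def bounded_until(succ, f_states, g_states, k):
    """States satisfying 'f until g within k steps' (existential).

    succ     -- dict mapping each state to its list of successors
    f_states -- states satisfying f
    g_states -- states satisfying g
    k        -- discrete-time step bound
    """
    sat = set(g_states)  # base case: E[f U<=0 g] = g
    for _ in range(k):
        # a state qualifies if it satisfies g outright, or satisfies f and
        # has some successor that qualifies within one fewer step
        sat = set(g_states) | {s for s in f_states
                               if any(t in sat for t in succ.get(s, ()))}
    return sat
```

With BDDs the same iteration runs over symbolically encoded state sets, which is what makes it tractable for real systems.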
1982-11-01
Avionic Systems Integration Facilities, Mark van den Broek and Paul M. Vicen, AFLC/LOE. Planning of Operational Software Implementation Tool...classified as software tools, including: o Operating Systems o Language Processors (compilers, assemblers, link editors) o Source Editors o Debug Systems...o Data Base Systems o Utilities o Etc. This talk addresses itself to the current set of tools provided to JOVIAL J73 1750A application programmers by
A language comparison for scientific computing on MIMD architectures
NASA Technical Reports Server (NTRS)
Jones, Mark T.; Patrick, Merrell L.; Voigt, Robert G.
1989-01-01
Choleski's method for solving banded symmetric, positive definite systems is implemented on a multiprocessor computer using three FORTRAN-based parallel programming languages: the Force, PISCES, and Concurrent FORTRAN. The capabilities of the languages for expressing parallelism and their user-friendliness are discussed, including readability of the code, debugging assistance offered, and expressiveness of the languages. The performance of the different implementations is compared. It is argued that PISCES, using the Force for medium-grained parallelism, is the appropriate choice for programming Choleski's method on the multiprocessor computer, Flex/32.
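Choleski's (Cholesky's) method itself can be stated compactly. The following is a serial Python reference for the dense case only; the paper's banded storage and parallel decomposition are not reproduced:

```python
import math

# Serial reference: Cholesky factorization A = L L^T for a symmetric
# positive definite matrix, plus forward/back substitution to solve Ax=b.

def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L

def solve(A, b):
    """Solve A x = b via L y = b (forward), then L^T x = y (backward)."""
    n = len(b)
    L = cholesky(A)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x
```

For a banded matrix the inner sums shrink to the bandwidth, and it is the resulting column-by-column dependency structure that the paper's parallel implementations exploit.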
User's guide to the Octopus computer network (the SHOC manual)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, C.; Thompson, D.; Whitten, G.
1977-07-18
This guide explains how to enter, run, and debug programs on the Octopus network. It briefly describes the network's operation, and directs the reader to other documents for further information. It stresses those service programs that will be most useful in the long run; ''quick'' methods that have little flexibility are not discussed. The Octopus timesharing network gives the user access to four CDC 7600 computers, two CDC STAR computers, and a broad array of peripheral equipment, from any of 800 or so remote terminals. 16 figures, 7 tables.
User's guide to the Octopus computer network (the SHOC manual)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, C.; Thompson, D.; Whitten, G.
1976-10-07
This guide explains how to enter, run, and debug programs on the Octopus network. It briefly describes the network's operation, and directs the reader to other documents for further information. It stresses those service programs that will be most useful in the long run; ''quick'' methods that have little flexibility are not discussed. The Octopus timesharing network gives the user access to four CDC 7600 computers, two CDC STAR computers, and a broad array of peripheral equipment, from any of 800 or so remote terminals. 8 figures, 4 tables.
User's guide to the Octopus computer network (the SHOC manual)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, C.; Thompson, D.; Whitten, G.
1975-06-02
This guide explains how to enter, run, and debug programs on the Octopus network. It briefly describes the network's operation, and directs the reader to other documents for further information. It stresses those service programs that will be most useful in the long run; ''quick'' methods that have little flexibility are not discussed. The Octopus timesharing network gives the user access to four CDC 7600 computers and a broad array of peripheral equipment, from any of 800 remote terminals. Octopus will soon include the Laboratory's STAR-100 computers. 9 figures, 5 tables. (auth)
NASA Technical Reports Server (NTRS)
Smith, W. W.
1973-01-01
A Langley Research Center version of NASTRAN Level 15.1.0 designed to provide the analyst with an added tool for debugging massive NASTRAN input data is described. The program checks all NASTRAN input data cards and displays on a CRT the graphic representation of the undeformed structure. In addition, the program permits the display and alteration of input data and allows reexecution without physically resubmitting the job. Core requirements on the CDC 6000 computer are approximately 77,000 octal words of central memory.
Automated solar panel assembly line
NASA Technical Reports Server (NTRS)
Somberg, H.
1981-01-01
The initial stage of the automated solar panel assembly line program was devoted to concept development and proof of approach through simple experimental verification. In this phase, laboratory bench models were built to demonstrate and verify concepts. Following this phase was machine design and integration of the various machine elements. The third phase was machine assembly and debugging. In this phase, the various elements were operated as a unit and modifications were made as required. The final stage of development was the demonstration of the equipment in a pilot production operation.
Jdpd: an open java simulation kernel for molecular fragment dissipative particle dynamics.
van den Broek, Karina; Kuhn, Hubert; Zielesny, Achim
2018-05-21
Jdpd is an open Java simulation kernel for Molecular Fragment Dissipative Particle Dynamics with parallelizable force calculation, efficient caching options and fast property calculations. It is characterized by an interface and factory-pattern driven design for simple code changes and may help to avoid problems of polyglot programming. Detailed input/output communication, parallelization and process control as well as internal logging capabilities for debugging purposes are supported. The new kernel may be utilized in different simulation environments ranging from flexible scripting solutions up to fully integrated "all-in-one" simulation systems.
A monitoring system based on electric vehicle three-stage wireless charging
NASA Astrophysics Data System (ADS)
Hei, T.; Liu, Z. Z.; Yang, Y.; Hongxing, CHEN; Zhou, B.; Zeng, H.
2016-08-01
A monitoring system for three-stage wireless charging of electric vehicles was designed. The vehicle terminal contained a core board for battery information collection and charging control, while a power-measurement and charging-control core board at the transmitting terminal communicated with the receiver via Bluetooth. A touch-screen display unit based on MCGS (Monitor and Control Generated System) was designed to simulate charging behavior and to debug the system conveniently. Practical application showed that the system is stable and reliable and has favorable application prospects.
ACCELERATORS: Preliminary application of turn-by-turn data analysis to the SSRF storage ring
NASA Astrophysics Data System (ADS)
Chen, Jian-Hui; Zhao, Zhen-Tang
2009-07-01
There is growing interest in utilizing the beam position monitor turn-by-turn (TBT) data to debug accelerators. TBT data can be used to determine the linear optics, coupled optics and nonlinear behaviors of the storage ring lattice. This is not only a useful complement to other methods of determining the linear optics such as LOCO, but also provides a possibility to uncover more hidden phenomena. In this paper, a preliminary application of a β function measurement to the SSRF storage ring is presented.
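One standard TBT technique can be sketched as follows: the betatron oscillation amplitude recorded at a beam position monitor scales as the square root of the local beta function, so the ratio of spectral-line amplitudes between two BPMs yields the beta-function ratio. This is an illustrative sketch on synthetic data, not the SSRF analysis itself; the tune, beta values and action below are made up.

```python
# Hedged sketch: recover the beta-function ratio of two BPMs from the
# amplitudes of their betatron spectral lines in turn-by-turn data.
# All beam parameters here are invented for the demonstration.
import numpy as np

def beta_ratio(x1, x2):
    """Ratio beta_1/beta_2 from the betatron FFT peak amplitudes."""
    a1 = np.abs(np.fft.rfft(x1)[1:]).max()   # skip the DC bin
    a2 = np.abs(np.fft.rfft(x2)[1:]).max()
    return (a1 / a2) ** 2

turns = np.arange(1024)
nu = 0.22                                     # fractional betatron tune
beta1, beta2, action = 10.0, 2.5, 1e-6        # metres, metre-radians
# Model: x_j(n) = sqrt(2 * J * beta_j) * cos(2*pi*nu*n + phi_j)
x1 = np.sqrt(2 * action * beta1) * np.cos(2 * np.pi * nu * turns)
x2 = np.sqrt(2 * action * beta2) * np.cos(2 * np.pi * nu * turns + 1.0)
print(beta_ratio(x1, x2))  # ≈ beta1 / beta2 = 4.0
```

In a real measurement the absolute beta at one reference point (e.g. from a quadrupole scan or LOCO) calibrates the ratios into absolute values.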
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nielsen, Erik; Blume-Kohout, Robin; Rudinger, Kenneth
PyGSTi is an implementation of Gate Set Tomography in the Python programming language. Gate Set Tomography (GST) is a theory and protocol for simultaneously estimating the state preparation, gate operations, and measurement effects of a physical system of one or many quantum bits (qubits). These estimates are based entirely on the statistics of experimental measurements, and their interpretation and analysis can provide a detailed understanding of the types of errors/imperfections in the physical system. In this way, GST provides not only a means of certifying the "goodness" of qubits but also a means of debugging (i.e. improving) them.
The RAVE/VERTIGO vertex reconstruction toolkit and framework
NASA Astrophysics Data System (ADS)
Waltenberger, W.; Mitaroff, W.; Moser, F.; Pflugfelder, B.; Riedel, H. V.
2008-07-01
A detector-independent toolkit for vertex reconstruction (RAVE) is being developed, along with a standalone framework (VERTIGO) for testing, analyzing and debugging. The core algorithms represent the state of the art for geometric vertex finding and fitting by both linear (Kalman filter) and robust estimation methods. Main design goals are ease of use, flexibility for embedding into existing software frameworks, extensibility, and openness. The implementation is based on modern object-oriented techniques, is coded in C++ with interfaces for Java and Python, and follows an open-source approach. A beta release is available.
Off-line robot programming and graphical verification of path planning
NASA Technical Reports Server (NTRS)
Tonkay, Gregory L.
1989-01-01
The objective of this project was to develop or specify an integrated environment for off-line programming, graphical path verification, and debugging for robotic systems. Two alternatives were compared. The first was the integration of the ASEA Off-line Programming package with ROBSIM, a robotic simulation program. The second alternative was the purchase of the commercial product IGRIP. The needs of the RADL (Robotics Applications Development Laboratory) were explored and the alternatives were evaluated based on these needs. As a result, IGRIP was proposed as the best solution to the problem.
NASA Technical Reports Server (NTRS)
Oliger, Joseph
1997-01-01
Topics considered include: high-performance computing; cognitive and perceptual prostheses (computational aids designed to leverage human abilities); autonomous systems. Also included: development of a 3D unstructured grid code based on a finite volume formulation and applied to the Navier-Stokes equations; Cartesian grid methods for complex geometry; multigrid methods for solving elliptic problems on unstructured grids; algebraic non-overlapping domain decomposition methods for compressible fluid flow problems on unstructured meshes; numerical methods for the compressible Navier-Stokes equations with application to aerodynamic flows; research in aerodynamic shape optimization; S-HARP: a parallel dynamic spectral partitioner; numerical schemes for the Hamilton-Jacobi and level set equations on triangulated domains; application of high-order shock capturing schemes to direct simulation of turbulence; multicast technology; network testbeds; supercomputer consolidation project.
Security Enhancement Using Cache Based Reauthentication in WiMAX Based E-Learning System
Rajagopal, Chithra; Bhuvaneshwaran, Kalaavathi
2015-01-01
WiMAX networks are well suited to E-Learning in rural areas through their Broadcast and Multicast Services. Authentication of users in WiMAX is carried out by an AAA server. In E-Learning systems, users must be forced to reauthenticate to prevent session hijacking, but reauthentication introduces frequent delays in data access, which is critical for delay-sensitive applications such as E-Learning. To speed up reauthentication, a caching mechanism known as the Key Caching Based Authentication scheme is introduced in this paper. Although the cache mechanism requires extra storage to keep user credentials, it reduces the delay occurring during reauthentication by 50%. PMID:26351658
Robertson, Brian; Zhang, Zichen; Yang, Haining; Redmond, Maura M; Collings, Neil; Liu, Jinsong; Lin, Ruisheng; Jeziorska-Chapman, Anna M; Moore, John R; Crossland, William A; Chu, D P
2012-04-20
It is shown that reflective liquid crystal on silicon (LCOS) spatial light modulator (SLM) based interconnects or fiber switches that use defocus to reduce crosstalk can be evaluated and optimized using a fractional Fourier transform if certain optical symmetry conditions are met. Theoretically, the maximum allowable linear hologram phase error compared to a Fourier switch is increased by a factor of six before the target crosstalk for telecom applications of -40 dB is exceeded. A Gerchberg-Saxton algorithm incorporating a fractional Fourier transform modified for use with a reflective LCOS SLM is used to optimize multicasting holograms in a prototype telecom switch. Experiments are in close agreement with predicted performance.
Modelling and temporal performances evaluation of networked control systems using (max, +) algebra
NASA Astrophysics Data System (ADS)
Ammour, R.; Amari, S.
2015-01-01
In this paper, we address the problem of temporal performance evaluation of producer/consumer networked control systems. The aim is to develop a formal method for evaluating the response time of this type of control system. Our approach consists of modelling, using classes of Petri nets, the behaviour of the whole architecture, including the switches that support the multicast communications used by this protocol. The (max, +) algebra formalism is then exploited to obtain analytical formulas for the response time and its maximal and minimal bounds. The main novelty is that our approach takes into account all delays experienced at the different stages of networked automation systems. Finally, we show how to apply the obtained results through an example of a networked control system.
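The (max, +) idea can be sketched in a few lines: "addition" becomes max (synchronization on the latest arriving event) and "multiplication" becomes + (accumulation of delays), so event dates evolve by a max-plus matrix recursion. This is a minimal illustration, not the paper's actual model; the two-stage matrix and delay values below are made up.

```python
# Minimal (max, +) algebra sketch. In this dioid, "plus" is max and
# "times" is +; the additive neutral element is -infinity.
EPS = float("-inf")  # the (max, +) zero element

def mp_mul(A, B):
    """(max, +) matrix product: (A ⊗ B)[i][j] = max_k (A[i][k] + B[k][j])."""
    n, m, p = len(A), len(B), len(B[0])
    return [[max(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# Hypothetical two-stage producer/consumer path: x(k+1) = A ⊗ x(k),
# where entries of A are processing + network delays in ms.
A = [[5.0, EPS],
     [3.0, 4.0]]
x = [[0.0], [0.0]]          # both stages start at date 0
for _ in range(3):          # dates of the first three event firings
    x = mp_mul(A, x)
print(x)  # completion dates after 3 events: [[15.0], [13.0]]
```

Response-time bounds then fall out of the eigenvalue (cycle time) of such a matrix, which is what the analytical formulas capture in closed form.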
Distributed Ship Navigation Control System Based on Dual Network
NASA Astrophysics Data System (ADS)
Yao, Ying; Lv, Wu
2017-10-01
The navigation system is vital to a ship's normal operation, and it contains many devices and sensors to guarantee the ship's regular work. In the past, these devices and sensors were usually connected via a CAN bus for high performance and reliability. However, as the related devices and sensors have developed, the navigation system now also needs high information throughput and remote data sharing. To meet these new requirements, we propose a communication method based on a dual network that combines the CAN bus with industrial Ethernet. We also introduce multiple distributed control terminals with a cooperative strategy that synchronizes status by multicasting UDP messages containing an operation timestamp, making the system more efficient and reliable.
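The timestamped-status idea can be sketched as follows: each terminal multicasts a small UDP datagram carrying a device status and an operation timestamp, and receivers keep whichever status carries the newest timestamp (last-writer-wins). This is an illustrative sketch, not the paper's protocol; the message layout, group address and field widths are assumptions.

```python
# Sketch of timestamped status synchronization over multicast UDP.
# The wire format and addresses below are hypothetical.
import struct

MCAST_GRP, MCAST_PORT = "239.1.1.1", 5007   # assumed multicast group/port

def pack_status(device_id: int, status: int, ts_ms: int) -> bytes:
    """device_id/status as unsigned shorts, timestamp as unsigned 64-bit ms."""
    return struct.pack("!HHQ", device_id, status, ts_ms)

def unpack_status(data: bytes):
    return struct.unpack("!HHQ", data)

def merge(current: dict, msg: bytes) -> dict:
    """Last-writer-wins: apply msg only if its timestamp is newer."""
    dev, status, ts = unpack_status(msg)
    if dev not in current or ts > current[dev][1]:
        current[dev] = (status, ts)
    return current

# A stale message (older timestamp) must not overwrite newer state:
state = {}
merge(state, pack_status(7, 1, 1000))
merge(state, pack_status(7, 0, 900))    # stale update, ignored
print(state[7])  # (1, 1000)
```

Actual transmission would add a UDP socket joined to the group via `IP_ADD_MEMBERSHIP`; that plumbing is omitted here to keep the merge logic in focus.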
Li, Qiuying; Pham, Hoang
2017-01-01
In this paper, we propose a software reliability model that considers not only error generation but also fault removal efficiency combined with testing coverage information, based on a nonhomogeneous Poisson process (NHPP). During the past four decades, many NHPP-based software reliability growth models (SRGMs) have been proposed to estimate software reliability measures, and most share the following assumptions: 1) during the testing phase, the fault detection rate commonly changes; 2) as a result of imperfect debugging, fault removal is associated with a fault re-introduction rate. However, few SRGMs in the literature differentiate between fault detection and fault removal, i.e. they seldom consider imperfect fault removal efficiency. In practical software development, fault removal efficiency cannot always be perfect: detected failures might not be removed completely, the original faults might remain, and new faults might be introduced meanwhile, which is referred to as the imperfect debugging phenomenon. In this study, a model incorporating the fault introduction rate, fault removal efficiency and testing coverage into software reliability evaluation is developed, using testing coverage to express the fault detection rate and fault removal efficiency to model fault repair. We compare the performance of the proposed model with several existing NHPP SRGMs on three sets of real failure data using five criteria. The results show that the model gives better fitting and predictive performance. PMID:28750091
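The effect of imperfect debugging on an NHPP model can be illustrated numerically. The sketch below uses the classic Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) rather than the paper's proposed model, and bolts on a removal-efficiency fraction p and an introduction rate alpha as simple multipliers; all parameter values are invented.

```python
# Illustrative NHPP-style calculation (NOT the paper's exact model):
# Goel-Okumoto detection, adjusted by an imperfect fault-removal
# efficiency p and a per-removal fault-introduction rate alpha.
import math

def detected(a: float, b: float, t: float) -> float:
    """Expected cumulative faults detected by time t (Goel-Okumoto)."""
    return a * (1.0 - math.exp(-b * t))

def remaining(a: float, b: float, t: float, p: float, alpha: float) -> float:
    """Expected faults remaining when only a fraction p of detected faults
    is removed and each removal introduces alpha new faults on average."""
    d = detected(a, b, t)
    removed = p * d
    introduced = alpha * removed
    return a - removed + introduced

# With perfect debugging (p=1, alpha=0) the remainder is a - m(t);
# imperfect debugging leaves strictly more faults behind.
print(remaining(100, 0.1, 10, 1.0, 0.0))
print(remaining(100, 0.1, 10, 0.9, 0.05))
```

The gap between the two printed values is exactly what models that ignore removal efficiency fail to capture.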
Design and development of 24 times high-power laser beam expander
NASA Astrophysics Data System (ADS)
Lin, Zhao-heng; Gong, Xiu-ming; Wu, Shi-bin; Tan, Yi; Jing, Hong-wei; Wei, Zhong-wei
2013-09-01
Laser calibration, laser radar, laser ranging and related fields have raised the demand for high-magnification laser beam expanders. This article introduces the research and design of a high-energy laser beam expander whose main features are a large diameter, wide band, high magnification and small obscuration ratio. Using a Cassegrain reflective optical system, the beam expander achieves a 24x beam expansion, with an effective outgoing limiting aperture of Φ600 mm, a band scope from 0.45 μm to 5 μm, a single-pulse laser damage threshold greater than 1 J/cm2, a continuous-wave laser damage threshold greater than 200 W/cm2 and an obscuration ratio of 1:10. The primary mirror underside support uses 9-point float supporting; lateral support depends mainly on a mercury belt and is assisted by mandrel ball-head positioning support. Based on the finite element analysis software ANSYS, primary mirror deformation was analyzed in debug mode and operative mode for four angles (170°, 180°, 210° and 240°), with mercury-belt load-bearing of 65%, 75%, 85% and 100% under each angle, for a total of 16 working-condition analysis results. From these, the best way to support the primary mirror was finalized. The secondary mirror design achieves five-dimensional precision fine-tuning. After assembly and debugging of the laser beam expander, Zygo interferometer detection showed an image quality (RMS) of 0.043λ (λ=632.8 nm), a stability (RMS) of 0.007λ, and an effective transmittance of 94%, fully meeting the requirements of practical application.
Automated Instrumentation, Monitoring and Visualization of PVM Programs Using AIMS
NASA Technical Reports Server (NTRS)
Mehra, Pankaj; VanVoorst, Brian; Yan, Jerry; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
We present views and analysis of the execution of several PVM (Parallel Virtual Machine) codes for Computational Fluid Dynamics on a network of Sparcstations, including: (1) NAS Parallel Benchmarks CG and MG; (2) a multi-partitioning algorithm for NAS Parallel Benchmark SP; and (3) an overset grid flowsolver. These views and analysis were obtained using our Automated Instrumentation and Monitoring System (AIMS) version 3.0, a toolkit for debugging the performance of PVM programs. We will describe the architecture, operation and application of AIMS. The AIMS toolkit contains: (1) Xinstrument, which can automatically instrument various computational and communication constructs in message-passing parallel programs; (2) Monitor, a library of runtime trace-collection routines; (3) VK (Visual Kernel), an execution-animation tool with source-code clickback; and (4) Tally, a tool for statistical analysis of execution profiles. Currently, Xinstrument can handle C and Fortran 77 programs using PVM 3.2.x; Monitor has been implemented and tested on Sun 4 systems running SunOS 4.1.2; and VK uses X11R5 and Motif 1.2. Data and views obtained using AIMS clearly illustrate several characteristic features of executing parallel programs on networked workstations: (1) the impact of long message latencies; (2) the impact of multiprogramming overheads and associated load imbalance; (3) cache and virtual-memory effects; and (4) significant skews between workstation clocks. Interestingly, AIMS can compensate for constant skew (zero drift) by calibrating the skew between a parent and its spawned children. In addition, AIMS' skew-compensation algorithm can adjust timestamps in a way that eliminates physically impossible communications (e.g., messages going backwards in time). Our current efforts are directed toward creating new views to explain the observed performance of PVM programs.
Some of the features planned for the near future include: (1) ConfigView, showing the physical topology of the virtual machine, inferred using specially formatted IP (Internet Protocol) packets; and (2) LoadView, synchronous animation of PVM-program execution and resource-utilization patterns.
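The constant-skew compensation described above can be sketched in a few lines: with zero drift, every receive timestamp equals send time plus (positive) latency plus a fixed clock offset, so the minimum observed receive-minus-send difference bounds the offset, and subtracting it removes messages that appear to travel backwards in time. This is an illustrative reconstruction of the idea, not AIMS code; the trace values are made up.

```python
# Hedged sketch of constant-skew (zero-drift) timestamp compensation.

def estimate_skew(events):
    """events: list of (send_ts_on_A, recv_ts_on_B) pairs for messages A -> B.
    With zero drift, recv = send + latency + skew and latency > 0, so
    min(recv - send) upper-bounds the skew; subtracting it restores causality."""
    return min(recv - send for send, recv in events)

def compensate(events):
    skew = estimate_skew(events)
    return [(send, recv - skew) for send, recv in events]

# A message seemingly going backwards in time (recv < send) disappears:
trace = [(100, 90), (200, 195), (300, 340)]
fixed = compensate(trace)
print(fixed)  # [(100, 100), (200, 205), (300, 350)]
assert all(recv >= send for send, recv in fixed)
```

Handling drift (non-constant skew) would require fitting a linear clock model instead of a single offset, which is why the abstract restricts the claim to zero drift.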
The Research on Linux Memory Forensics
NASA Astrophysics Data System (ADS)
Zhang, Jun; Che, ShengBing
2018-03-01
Memory forensics is a branch of computer forensics that does not depend on operating system APIs, but instead analyzes operating system information from binary memory data. Based on the 64-bit Linux operating system, this work analyzes system process and thread information from physical memory data. Using ELF file debugging information, we propose a method for locating kernel structure member variables that can be applied to different versions of the Linux operating system. The experimental results show that the method can successfully obtain the system process information from physical memory data and is compatible with multiple versions of the Linux kernel.
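The core mechanism, reading structure members out of a raw dump at offsets learned from debug information, can be sketched as follows. The offsets and record layout below are invented for the demonstration; in practice they come from the kernel's ELF/DWARF debug information and vary by kernel version, which is exactly why the paper derives them dynamically.

```python
# Illustrative offset-based extraction from a raw memory image.
# The "task record" layout here is hypothetical, not a real task_struct.
import struct

PID_OFFSET, COMM_OFFSET, COMM_LEN = 0, 8, 16   # assumed member offsets

def read_task(dump: bytes, base: int):
    """Extract (pid, name) of one task record at `base` in the raw dump."""
    pid = struct.unpack_from("<i", dump, base + PID_OFFSET)[0]
    raw = dump[base + COMM_OFFSET: base + COMM_OFFSET + COMM_LEN]
    name = raw.split(b"\x00", 1)[0].decode("ascii", "replace")
    return pid, name

# Build a fake 24-byte task record and parse it back:
dump = struct.pack("<i4x16s", 1234, b"bash")
print(read_task(dump, 0))  # (1234, 'bash')
```

A real tool would additionally translate kernel virtual addresses to physical offsets via the page tables before applying such reads.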
The Role of Program Structure in Software Maintenance.
1986-05-29
We have entered an era in which it has become increasingly important to develop human engineering principles which will significantly ... Programmers use slices when debugging. Communications of the ACM, 25, 446-452.
TRICCS: A proposed teleoperator/robot integrated command and control system for space applications
NASA Technical Reports Server (NTRS)
Will, R. W.
1985-01-01
Robotic systems will play an increasingly important role in space operations. An integrated command and control system based on the requirements of space-related applications and incorporating features necessary for the evolution of advanced goal-directed robotic systems is described. These features include: interaction with a world model or domain knowledge base, sensor feedback, multiple-arm capability and concurrent operations. The system makes maximum use of manual interaction at all levels for debugging, monitoring, and operational reliability. It is shown that the robotic command and control system may most advantageously be implemented as packages and tasks in Ada.
1980-01-31
Monitoring agency: Nuclear Agency, Washington, D.C. 20305. Contents include: Initialization; Storage Management; Event Control; Diagnostics (Debug Routines, Data Structure Display Routines); Software Structures (Player Lists, C2 Trees, Passive Target List); RED Structures (Red Theater Commander, Potential Target List).
Multitasking kernel for the C and Fortran programming languages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brooks, E.D. III
1984-09-01
A multitasking kernel for the C and Fortran programming languages which runs on the Unix operating system is presented. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the coding, debugging and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessors. The performance evaluation features require no changes in the source code of the application and are implemented as a set of compile and run time options in the kernel.
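The idea of a portable user-level multitasking layer can be illustrated with a toy cooperative scheduler. This sketch uses Python generators purely for illustration (the actual kernel targets C and Fortran on Unix); the task API and the logging hook, standing in for the kernel's performance-evaluation options, are assumptions.

```python
# Toy cooperative multitasking kernel: tasks are generators that yield
# control back to the scheduler; a log stands in for trace collection.
from collections import deque

class Kernel:
    def __init__(self):
        self.ready = deque()
        self.log = []                 # cheap hook for performance evaluation

    def spawn(self, task):
        self.ready.append(task)

    def run(self):
        while self.ready:
            task = self.ready.popleft()
            try:
                label = next(task)    # run the task until its next yield
                self.log.append(label)
                self.ready.append(task)
            except StopIteration:
                pass                  # task finished; drop it

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"          # yield control back to the kernel

k = Kernel()
k.spawn(worker("A", 2))
k.spawn(worker("B", 2))
k.run()
print(k.log)  # ['A:0', 'B:0', 'A:1', 'B:1']
```

The round-robin interleaving visible in the log is what makes such a layer useful both for running production multiprocessor codes and for modelling how they would schedule on hypothetical machines.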
Influence of important factors in industrial debugging of desilication of diaspore by flotation
NASA Astrophysics Data System (ADS)
Ma, Chiyu; Chen, Chen; Hu, Yuehua; Zhu, Hailing; Deng, Haibo
2018-02-01
The beneficiation of diaspore with a low ratio of A/S can be realized by desilication using gradient flotation. The concentrate water can be reused in the flotation process after treating with BXF flocculant. Industrial tests of six months of continuous operation were conducted, and the results show that the concentrate with Al2O3 recovery of 83.04% is obtained, and the ratio of A/S increases from 4.46% in the raw ore to 9.57% in the concentrate. Moreover, the comprehensive reuse rate of the waste-water is 75%.
Experiences on developing digital down conversion algorithms using Xilinx system generator
NASA Astrophysics Data System (ADS)
Xu, Chengfa; Yuan, Yuan; Zhao, Lizhi
2013-07-01
The Digital Down Conversion (DDC) algorithm is a classical signal processing method which is widely used in radar and communication systems. In this paper, the DDC function is implemented by Xilinx System Generator tool on FPGA. System Generator is an FPGA design tool provided by Xilinx Inc and MathWorks Inc. It is very convenient for programmers to manipulate the design and debug the function, especially for the complex algorithm. Through the developing process of DDC function based on System Generator, the results show that System Generator is a very fast and efficient tool for FPGA design.
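The classical DDC signal chain that such block designs implement, mix the input against a numerically controlled oscillator to baseband, low-pass filter, then decimate, can be sketched in NumPy. This is a minimal floating-point illustration, not the paper's FPGA design; the sample rates, filter (a crude moving average) and decimation factor are made up.

```python
# Minimal DDC sketch: NCO mix -> low-pass filter -> decimate.
# Parameters are illustrative, not those of any particular design.
import numpy as np

def ddc(x, fs, f_if, decim, taps=64):
    n = np.arange(len(x))
    nco = np.exp(-2j * np.pi * f_if / fs * n)   # numerically controlled oscillator
    baseband = x * nco                          # mix the IF signal down to 0 Hz
    h = np.ones(taps) / taps                    # crude moving-average low-pass
    filtered = np.convolve(baseband, h, mode="same")
    return filtered[::decim]                    # reduce the sample rate

# A pure tone at the IF frequency should come out as (nearly) DC:
fs, f_if = 1_000_000.0, 100_000.0
t = np.arange(4096) / fs
x = np.cos(2 * np.pi * f_if * t)
y = ddc(x, fs, f_if, decim=8)
```

An FPGA implementation replaces the complex exponential with a sine/cosine lookup table and the moving average with CIC or FIR decimation stages, which is what the System Generator block set provides.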
NASA Technical Reports Server (NTRS)
Tennille, Geoffrey M.; Howser, Lona M.
1993-01-01
This document briefly describes the use of the CRAY supercomputers that are an integral part of the Supercomputing Network Subsystem of the Central Scientific Computing Complex at LaRC. Features of the CRAY supercomputers are covered, including: FORTRAN, C, PASCAL, architectures of the CRAY-2 and CRAY Y-MP, the CRAY UNICOS environment, batch job submittal, debugging, performance analysis, parallel processing, utilities unique to CRAY, and documentation. The document is intended for all CRAY users as a ready reference to frequently asked questions and to more detailed information contained in the vendor manuals. It is appropriate for both the novice and the experienced user.
NASA Astrophysics Data System (ADS)
Wang, Qiang
2017-09-01
As an important part of software engineering, the software process decides the success or failure of a software product. The design and development features of a security software process are discussed, as are the necessity and present significance of using such a process. In coordination with the functional software, the process for security software and its testing are discussed in depth. The process includes requirement analysis, design, coding, debugging and testing, submission and maintenance. For each stage, the paper proposes subprocesses to support software security. As an example, the paper applies the above process to a power information platform.
NASA Technical Reports Server (NTRS)
Adams, M. L.; Padovan, J.; Fertis, D. G.
1980-01-01
A general purpose squeeze-film damper interactive force element was developed, coded into a software package (module) and debugged. This software package was applied to nonlinear dynamic analyses of some simple rotor systems. Results for pressure distributions show that the long bearing (end sealed) is a stronger bearing than the short bearing, as expected. Results of the nonlinear dynamic analysis, using a four degree of freedom simulation model, showed that the orbit of the rotating shaft grows nonlinearly to fill the bearing clearance as the unbalanced weight increases.
NASA Technical Reports Server (NTRS)
Trosin, J.
1985-01-01
Use of the Display AButments (DAB) program, which plots PAN AIR geometries, is presented. The DAB program creates hidden line displays of PAN AIR geometries and labels specified geometry components, such as abutments, networks, and network edges. It is used to alleviate the very time consuming and error prone abutment list checking phase of developing a valid PAN AIR geometry, and therefore represents a valuable tool for debugging complex PAN AIR geometry definitions. DAB is written in FORTRAN 77 and runs on a Digital Equipment Corporation VAX 11/780 under VMS. It utilizes a special color version of the SKETCH hidden line analysis routine.
Electrical production testing of the D0 Silicon microstrip tracker detector modules
DOE Office of Scientific and Technical Information (OSTI.GOV)
D0, SMT Production Testing Group; /Fermilab
The D0 Silicon Microstrip Tracker (SMT) is the innermost system of the D0 detector in Run 2. It consists of 912 detector units, corresponding to 5 different types of assemblies, which add up to a system with 792,576 readout channels. The task entrusted to the Production Testing group was to thoroughly debug, test and grade each detector module before its installation in the tracker. This note describes the production testing sequence and the procedures by which the detector modules were electrically tested and characterized at the various stages of their assembly.
NASA Technical Reports Server (NTRS)
Hopcroft, J.
1987-01-01
The potential benefits of automation in space are significant. The science base needed to support this automation not only will help control costs and reduce lead-time in the earth-based design and construction of space stations, but also will advance the nation's capability for computer design, simulation, testing, and debugging of sophisticated objects electronically. Progress in automation will require the ability to electronically represent, reason about, and manipulate objects. Discussed here is the development of representations, languages, editors, and model-driven simulation systems to support electronic prototyping. In particular, it identifies areas where basic research is needed before further progress can be made.
1981-06-01
Sonar types to be included in the range prediction are specified individually; the prompt repeats for each type. This is a complete revision, Revision A. Symbols are not used in this revision to identify changes with respect to the previous issue, due to the extensiveness of the changes. ... maintenance such as source editing, compiling, and debugging. In addition, it provides the user with a simple and uniform interface for transferring files.
Development of flying qualities criteria for single pilot instrument flight operations
NASA Technical Reports Server (NTRS)
Bar-Gill, A.; Nixon, W. B.; Miller, G. E.
1982-01-01
Flying qualities criteria for Single Pilot Instrument Flight Rule (SPIFR) operations were investigated. The ARA aircraft was modified and adapted for SPIFR operations. Aircraft configurations to be flight-tested were chosen and matched on the ARA in-flight simulator, implementing modern control theory algorithms. Mission planning and experimental matrix design were completed. Microprocessor software for the onboard data acquisition system was debugged and flight-tested. Flight-path reconstruction procedure and the associated FORTRAN program were developed. Algorithms associated with the statistical analysis of flight test results and the SPIFR flying qualities criteria deduction are discussed.
RAVE—a Detector-independent vertex reconstruction toolkit
NASA Astrophysics Data System (ADS)
Waltenberger, Wolfgang; Mitaroff, Winfried; Moser, Fabian
2007-10-01
A detector-independent toolkit for vertex reconstruction (RAVE) is being developed, along with a standalone framework (VERTIGO) for testing, analyzing and debugging. The core algorithms represent the state of the art for geometric vertex finding and fitting by both linear (Kalman filter) and robust estimation methods. Main design goals are ease of use, flexibility for embedding into existing software frameworks, extensibility, and openness. The implementation is based on modern object-oriented techniques, is coded in C++ with interfaces for Java and Python, and follows an open-source approach. A beta release is available. VERTIGO = "vertex reconstruction toolkit and interface to generic objects".
Towards Scalable Cost-Effective Service and Survivability Provisioning in Ultra High Speed Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bin Wang
Optical transport networks based on wavelength division multiplexing (WDM) are considered to be the most appropriate choice for the future Internet backbone. On the other hand, future DOE networks are expected to have the ability to dynamically provision on-demand survivable services to suit the needs of various high performance scientific applications and remote collaboration. Since a failure in a WDM network such as a cable cut may result in a tremendous amount of data loss, efficient protection of data transport in WDM networks is therefore essential. As the backbone network is moving towards GMPLS/WDM optical networks, the unique requirement to support DOE's science mission results in challenging issues that are not directly addressed by existing networking techniques and methodologies. The objectives of this project were to develop cost effective protection and restoration mechanisms based on dedicated path, shared path, preconfigured cycle (p-cycle), and so on, to deal with single failure, dual failure, and shared risk link group (SRLG) failure, under different traffic and resource requirement models; to devise efficient service provisioning algorithms that deal with application specific network resource requirements for both unicast and multicast; to study various aspects of traffic grooming in WDM ring and mesh networks to derive cost effective solutions while meeting application resource and QoS requirements; to design various diverse routing and multi-constrained routing algorithms, considering different traffic models and failure models, for protection and restoration, as well as for service provisioning; to propose and study new optical burst switched architectures and mechanisms for effectively supporting dynamic services; and to integrate research with graduate and undergraduate education. All objectives have been successfully met. This report summarizes the major accomplishments of this project.
The impact of the project manifests in many aspects: First, the project addressed many essential problems that arise in current and future WDM optical networks, and provided a host of innovative solutions, though there was no invention or patent filing. This project resulted in more than two dozen publications in major journals and conferences (including papers in IEEE Transactions and journals, as well as a book chapter). Our publications have been cited by many peer researchers. In particular, one of our conference papers was nominated for the best paper award of IEEE/Create-Net Broadnets (International Conference on Broadband Communications, Networks, and Systems) 2006. Second, the results and solutions of this project were well received by DOE Labs where presentations were given by the PI. We hope to continue the collaboration with DOE Labs in the future. Third, the project was the first to propose and extensively study multicast traffic grooming and new traffic models such as the sliding scheduled traffic model and the scheduled traffic model. Our research has sparked a flurry of recent studies and publications by the research community in these areas. Fourth, the project has benefited a diverse population of students by motivating and engaging them and enhancing their learning and skills. The project has been conducted in a manner conducive to the training of students both at graduate and undergraduate levels. As a result, one Ph.D., Dr. Abdur Billah, was graduated. Another Ph.D. student, Tianjian Li, will graduate in January 2007. In addition, four MS students were graduated. One undergraduate student, Jeffrey Alan Shininger, completed his university honors project. Fifth, thanks to the support of this ECPI project, the PI has obtained additional funding from the National Science Foundation, the Air Force Research Lab, and other sources. A few other proposals are pending.
Finally, this project has also significantly impacted the curricula and resulted in the enhancement of courses at the graduate and undergraduate levels, therefore strengthening the bond between research and education.
A Protocol for Scalable Loop-Free Multicast Routing
1997-01-01
NASA Astrophysics Data System (ADS)
Deng, Ning
In recent years, optical phase modulation has attracted much research attention in the field of fiber optic communications. Compared with the traditional optical intensity-modulated signal, one of the main merits of the optical phase-modulated signal is its better transmission performance. For optical phase modulation, despite the comprehensive study of its transmission performance, only a little research has been carried out on its functions, applications and signal processing for future optical networks. These issues are systematically investigated in this thesis. The research findings suggest that optical phase modulation and its signal processing can greatly facilitate flexible network functions and high bandwidth that can be enjoyed by end users. In the thesis, the most important physical-layer technologies, signal processing and multiplexing, are investigated with optical phase-modulated signals. Novel and advantageous signal processing and multiplexing approaches are proposed and studied. Experimental investigations are also reported and discussed in the thesis. Optical time-division multiplexing and demultiplexing. With the ever-increasing demand for communication bandwidth, optical time division multiplexing (OTDM) is an effective approach to upgrading the capacity of each wavelength channel in current optical systems. OTDM multiplexing can be simply realized; the demultiplexing, however, requires relatively complicated signal processing and stringent timing control, which hinders its practicability. To tackle this problem, this thesis proposes a new OTDM scheme with hybrid DPSK and OOK signals. Experimental investigation shows that this scheme can greatly enhance the tolerance to demultiplexing timing misalignment and improve the demultiplexing performance, thus making OTDM more practical and cost effective. All-optical signal processing.
In current and future optical communication systems and networks, the data rate per wavelength has been approaching the speed limits of electronics. Thus, all-optical signal processing techniques are highly desirable to support the necessary optical switching functionalities in future ultrahigh-speed optical packet-switching networks. To cope with the wide use of optical phase-modulated signals, an all-optical logic gate for DPSK or PSK input signals is developed in the thesis for the first time. Based on four-wave mixing in a semiconductor optical amplifier, the structure of the logic gate is simple, compact, and capable of supporting ultrafast operation. In addition to general logic processing, a simple label recognition scheme, as a specific signal processing function, is proposed for phase-modulated label signals. The proposed scheme can recognize any incoming label pattern according to the local pattern, and is potentially capable of handling variable-length label patterns. Optical access network with multicast overlay and centralized light sources. In the arena of optical access networks, wavelength-division-multiplexing passive optical network (WDM-PON) is a promising technology to deliver high-speed data traffic. However, most proposed WDM-PONs support only conventional point-to-point service and cannot meet the increasing demand for broadcast and multicast services. In this thesis, a simple network upgrade based on the traditional PON architecture is proposed to support both point-to-point and multicast services. In addition, the two service signals are modulated on the same lightwave carrier. The upstream signal is also remodulated on the same carrier at the optical network unit, which significantly relaxes the requirement on wavelength management at the network unit.
Bridging the Gap from Networking Technologies to Applications: Workshop Report
NASA Technical Reports Server (NTRS)
Johnson, Marjory J.; desJardins, Richard
2000-01-01
The objective of the Next Generation Internet (NGI) Federal program is threefold, encompassing development of networking technologies, high-performance network testbeds, and revolutionary applications. There have been notable advances in emerging network technologies and several nationwide testbeds have been established, but the integration of emerging technologies into applications is lagging. To help bridge this gap between developers of NGI networking technologies and developers of NGI applications, the NASA Research and Education Network (NREN) project hosted a two-day workshop at NASA Ames Research Center in August 1999. This paper presents a summary of the results of this workshop and also describes some of the challenges NREN is facing while incorporating new technologies into HPCC and other NASA applications. The workshop focused on three technologies - Quality of Service (QoS), advanced multicast, and security - and five major NGI application areas - telemedicine, digital earth, digital video, distributed data-intensive applications, and computational infrastructure applications. Network technology experts, application developers, and NGI testbed representatives came together at the workshop to promote cross-fertilization between the groups. Presentations on the first day, including an overview of the three technologies, application case studies and testbed status reports, laid the foundation for discussions on the second day. The objective of these latter discussions, held within smaller breakout groups, was to establish a coherent picture of the current status of the various pieces of each of the three technologies, to create a roadmap outlining future technology development, and to offer technological guidance to application developers. 
In this paper we first present a brief overview of the NGI applications that were represented at the workshop, focusing on the identification of technological advances that have successfully been incorporated in each application and technological challenges that remain. Next we present the technology roadmaps that were created at the workshop, summarizing the status of various mechanisms that are currently under development and forecasting when various advances are likely to occur within the next one-to-three-year time span. Then we identify issues that were raised at the workshop that might hinder technology development or that might impede integration into NGI applications. We also report some specific guidelines that were offered at the workshop to enable application developers to integrate and effectively use emerging NGI technology building blocks. Finally, we describe NREN activities to incorporate emerging technologies into NASA applications. These activities include support for other NASA High-Performance Computing and Communications Program areas such as IPG (Information Power Grid), support for NASA science enterprises such as Earth science and Mars program prototyping activities, support for satellite/terrestrial networking applications such as the TransAtlantic and TransPacific demonstrations and the Interplanetary Internet, support for NASA telemedicine applications such as the Virtual Collaborative Clinic, and participation in NGI advanced technology testbed initiatives such as the QBone and the NTON/Supernet. For each activity we highlight the primary technological challenge that is associated with it.
MARTe: A Multiplatform Real-Time Framework
NASA Astrophysics Data System (ADS)
Neto, André C.; Sartori, Filippo; Piccolo, Fabio; Vitelli, Riccardo; De Tommasi, Gianmaria; Zabeo, Luca; Barbalace, Antonio; Fernandes, Horacio; Valcarcel, Daniel F.; Batista, Antonio J. N.
2010-04-01
Development of real-time applications is usually associated with nonportable code targeted at specific real-time operating systems. The boundary between hardware drivers, system services, and user code is commonly not well defined, making development on the target host significantly difficult. The Multithreaded Application Real-Time executor (MARTe) is a framework built over a multiplatform library that allows the execution of the same code in different operating systems. The framework provides the high-level interfaces with hardware, external configuration programs, and user interfaces, while assuring hard real-time performance. End users of the framework are required to define and implement algorithms inside a well-defined block of software, named Generic Application Module (GAM), that is executed by the real-time scheduler. Each GAM is reconfigurable with a set of predefined configuration meta-parameters and interchanges information using a set of data pipes that are provided as inputs and required as outputs. Using these connections, different GAMs can be chained either in series or in parallel. GAMs can be developed and debugged in a non-real-time system and, only once the robustness of the code and the correctness of the algorithm are verified, deployed to the real-time system. The software also supplies a large set of utilities that greatly ease the interaction with and debugging of a running system. Among the most useful are a highly efficient real-time logger, HTTP introspection of real-time objects, and HTTP remote configuration. MARTe is currently being used to successfully drive the plasma vertical stabilization controller on the largest magnetic confinement fusion device in the world, with a control loop cycle of 50 μs and a jitter under 1 μs. In this particular project, MARTe is used with the Real-Time Application Interface (RTAI)/Linux operating system, exploiting the new x86 multicore processor technology.
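The GAM chaining model described above can be sketched as follows. This is a minimal illustration with hypothetical module names, not the actual MARTe C++ API: each module reads named input pipes and writes named output pipes, and a scheduler executes the chain once per control cycle.

```python
# Minimal sketch (not the MARTe API): GAM-like modules exchanging
# data through named pipes, chained in series by a toy scheduler.

class GAM:
    """A Generic-Application-Module-like block: reads named inputs,
    writes named outputs; parameters come from configuration."""
    def __init__(self, inputs, outputs, params=None):
        self.inputs, self.outputs, self.params = inputs, outputs, params or {}

    def execute(self, pipes):
        raise NotImplementedError

class Gain(GAM):
    def execute(self, pipes):
        pipes[self.outputs[0]] = self.params["k"] * pipes[self.inputs[0]]

class Offset(GAM):
    def execute(self, pipes):
        pipes[self.outputs[0]] = pipes[self.inputs[0]] + self.params["c"]

def run_cycle(gams, pipes):
    # The real-time scheduler would execute each GAM once per control cycle.
    for gam in gams:
        gam.execute(pipes)
    return pipes

# Series chain: raw -> scaled -> corrected
chain = [Gain(["raw"], ["scaled"], {"k": 2.0}),
         Offset(["scaled"], ["corrected"], {"c": 1.0})]
print(run_cycle(chain, {"raw": 10.0})["corrected"])  # 21.0
```

Because each module declares its pipes by name, the same chain can be reconfigured (or the modules reordered) without touching the module code, which is the property that lets GAMs be debugged off-line before deployment.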
On-Die Sensors for Transient Events
NASA Astrophysics Data System (ADS)
Suchak, Mihir Vimal
Failures caused by transient electromagnetic events like electrostatic discharge (ESD) are a major concern for embedded systems. The component that fails is often an integrated circuit (IC), and determining which IC is affected in a multi-device system is a challenging task. Debugging such errors often requires sophisticated lab setups and intentionally disturbing and probing various parts of the system, which might not be easily accessible. Opening the system and adding probes may change its response to the transient event, which further compounds the problem. On-die transient event sensors were developed that require relatively little die area (making them inexpensive), consume negligible static current, and do not interfere with normal operation of the IC. When a transient event affects the IC, these circuits can be used to determine the pin involved and the level of the event, allowing the user to debug system-level transient events without modifying the system. The circuit and detection scheme design has been completed and verified in simulations with the Cadence Virtuoso environment. Simulations accounted for the impact of the ESD protection circuits and the parasitics from the I/O pin, package, and I/O ring, and included a model of an ESD gun to test the circuit's response to an ESD pulse as specified in IEC 61000-4-2. Multiple detection schemes are proposed. The final detection scheme consists of an event detector and a level sensor. The event detector latches on the presence of an event at a pad, to determine on which pin an event occurred. The level sensor generates a current proportional to the level of the event; this current is converted to a voltage and digitized at the A/D converter to be read by the microprocessor. The detection scheme shows good performance in simulations when checked against process variations and different kinds of events.
ASC-AD penetration modeling FY05 status report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kistler, Bruce L.; Ostien, Jakob T.; Chiesa, Michael L.
2006-04-01
Sandia currently lacks a high fidelity method for predicting loads on and subsequent structural response of earth penetrating weapons. This project seeks to test, debug, improve and validate methodologies for modeling earth penetration. Results of this project will allow us to optimize and certify designs for the B61-11, Robust Nuclear Earth Penetrator (RNEP), PEN-X and future nuclear and conventional penetrator systems. Since this is an ASC Advanced Deployment project, the primary goal of the work is to test, debug, verify and validate new Sierra (and Nevada) tools. Also, since this project is part of the V&V program within ASC, uncertainty quantification (UQ), optimization using DAKOTA [1] and sensitivity analysis are an integral part of the work. This project evaluates, verifies and validates new constitutive models, penetration methodologies and Sierra/Nevada codes. In FY05 the project focused mostly on PRESTO [2] using the Spherical Cavity Expansion (SCE) [3,4] and PRESTO Lagrangian analysis with a preformed hole (Pen-X) methodologies. Modeling penetration tests using PRESTO with a pilot hole was also attempted to evaluate constitutive models. Future years' work would include the Alegra/SHISM [5] and Alegra/EP (Earth Penetration) methodologies when they are ready for validation testing. Constitutive models such as Soil-and-Foam, the Sandia Geomodel [6], and the K&C Concrete model [7] were also tested and evaluated. This report is submitted to satisfy annual documentation requirements for the ASC Advanced Deployment program. It summarizes FY05 work performed in the Penetration Mechanical Response (ASC-APPS) and Penetration Mechanics (ASC-V&V) projects. A single report is written to document the two projects because of the significant amount of technical overlap.
Data Integration Tool: Permafrost Data Debugging
NASA Astrophysics Data System (ADS)
Wilcox, H.; Schaefer, K. M.; Jafarov, E. E.; Pulsifer, P. L.; Strawhacker, C.; Yarmey, L.; Basak, R.
2017-12-01
We developed a Data Integration Tool (DIT) to significantly reduce the manual processing time needed to translate inconsistent, scattered historical permafrost data into files ready to ingest directly into the Global Terrestrial Network-Permafrost (GTN-P). The United States National Science Foundation funded this project through the National Snow and Ice Data Center (NSIDC) with the GTN-P to improve permafrost data access and discovery. We leverage these data to support science research and policy decisions. DIT is a workflow manager that divides data preparation and analysis into a series of steps or operations called widgets (https://github.com/PermaData/DIT). Each widget performs a specific operation, such as read, multiply by a constant, sort, plot, or write data. DIT allows the user to select and order the widgets as desired to meet their specific needs, incrementally interact with and evolve the widget workflows, and save those workflows for reproducibility. Taking ideas from visual programming found in the art and design domain, debugging and iterative design principles from software engineering, and the scientific data processing and analysis power of Fortran and Python, it was written for interactive, iterative data manipulation, quality control, processing, and analysis of inconsistent data in an easily installable application. DIT was used to completely translate one dataset (133 sites) that was successfully added to GTN-P, nearly translate three datasets (270 sites), and is scheduled to translate 10 more datasets (~1000 sites) from the legacy inactive site data holdings of the Frozen Ground Data Center (FGDC). Iterative development has provided the permafrost and wider scientific community with an extendable tool designed specifically for the iterative process of translating unruly data.
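The widget-workflow idea can be sketched as follows (an illustrative toy, with hypothetical widget names rather than the actual DIT API): each widget is one operation, and the user chains them in any order and saves the chain for reproducibility.

```python
# Illustrative sketch of a widget-style workflow; the widget names
# here are hypothetical, not the actual DIT/PermaData API.

def multiply_by(constant):
    """Widget factory: returns a widget that scales every value."""
    def widget(data):
        return [x * constant for x in data]
    return widget

def sort_values(data):
    """Widget: sort the data values."""
    return sorted(data)

def run_workflow(widgets, data):
    # Apply each widget in the user-selected order; saving the widget
    # list is what makes the workflow reproducible.
    for widget in widgets:
        data = widget(data)
    return data

workflow = [multiply_by(10), sort_values]       # e.g. unit conversion, then sort
print(run_workflow(workflow, [3, 1, 2]))        # [10, 20, 30]
```

Because a workflow is just an ordered list of small, testable operations, it can be rerun unchanged on the next unruly dataset, which mirrors the iterative translation process the abstract describes.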
Comparison of neuronal spike exchange methods on a Blue Gene/P supercomputer.
Hines, Michael; Kumar, Sameer; Schürmann, Felix
2011-01-01
For neural network simulations on parallel machines, interprocessor spike communication can be a significant portion of the total simulation time. The performance of several spike exchange methods using a Blue Gene/P (BG/P) supercomputer has been tested with 8-128 K cores using randomly connected networks of up to 32 M cells with 1 k connections per cell and 4 M cells with 10 k connections per cell, i.e., on the order of 4·10^10 connections (K is 1024, M is 1024^2, and k is 1000). The spike exchange methods used are the standard Message Passing Interface (MPI) collective, MPI_Allgather, and several variants of the non-blocking Multisend method either implemented via non-blocking MPI_Isend, or exploiting the possibility of very low overhead direct memory access (DMA) communication available on the BG/P. In all cases, the worst performing method was that using MPI_Isend due to the high overhead of initiating a spike communication. The two best performing methods - the persistent Multisend method using the Record-Replay feature of the Deep Computing Messaging Framework DCMF_Multicast, and a two-phase multisend in which a DCMF_Multicast is used to first send to a subset of phase-one destination cores, which then pass it on to their subset of phase-two destination cores - had similar performance with very low overhead for the initiation of spike communication. Departure from ideal scaling for the Multisend methods is almost completely due to load imbalance caused by the large variation in number of cells that fire on each processor in the interval between synchronization. Spike exchange time itself is negligible since transmission overlaps with computation and is handled by a DMA controller. We conclude that ideal performance scaling will be ultimately limited by imbalance between incoming processor spikes between synchronization intervals. 
Thus, counterintuitively, maximizing load balance requires that the distribution of cells on processors not reflect the neural net architecture: cells should be randomly distributed so that sets of cells that burst fire together land on different processors, with their targets spread over as large a set of processors as possible.
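The load-balancing heuristic in that conclusion can be sketched in a few lines (an illustration of the idea, not the simulator's actual partitioner): shuffle the cells so placement is independent of network structure, then deal them round-robin so per-rank counts stay even.

```python
# Sketch of structure-blind cell placement: shuffle, then round-robin.
import random

def distribute_cells(cell_ids, n_ranks, seed=0):
    """Shuffle cells, then deal them round-robin across ranks, so
    cells that burst together are unlikely to share a rank."""
    rng = random.Random(seed)
    shuffled = cell_ids[:]
    rng.shuffle(shuffled)
    ranks = [[] for _ in range(n_ranks)]
    for i, cell in enumerate(shuffled):
        ranks[i % n_ranks].append(cell)
    return ranks

ranks = distribute_cells(list(range(100)), 8)
# Round-robin dealing keeps per-rank counts within one cell of each other.
print(sorted(len(r) for r in ranks))  # [12, 12, 12, 12, 13, 13, 13, 13]
```

The shuffle is what breaks the correlation between anatomical clustering and processor assignment; the round-robin deal only guarantees even counts, not even *firing*, which is why the paper still observes imbalance from variable firing rates.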
NASA Astrophysics Data System (ADS)
Cannon, Brice M.
This thesis investigates the all-optical combination of amplitude and phase modulated signals into one unified multi-level phase modulated signal, utilizing the Kerr nonlinearity of cross-phase modulation (XPM). Predominantly, the first experimental demonstration of simultaneous polarization-insensitive phase-transmultiplexing and multicasting (PI-PTMM) will be discussed. The PI-PTMM operation combines the data of a single 10-Gbaud carrier-suppressed return-to-zero (CSRZ) on-off keyed (OOK) pump signal and 4x10-Gbaud return-to-zero (RZ) binary phase-shift keyed (BPSK) probe signals to generate 4x10-Gbaud RZ quadrature phase-shift keyed (QPSK) signals utilizing a highly nonlinear, birefringent photonic crystal fiber (PCF). Since XPM is a highly polarization dependent nonlinearity, a polarization sensitivity reduction technique was used to alleviate the fluctuations due to the remotely generated signals' unpredictable states of polarization (SOP). The measured amplified spontaneous emission (ASE) limited receiver sensitivity optical signal-to-noise ratio (OSNR) penalty of the PI-PTMM signal relative to the field-programmable gate array (FPGA) pre-coded RZ-DQPSK baseline at a forward-error correction (FEC) limit of 10^-3 BER was ≈ 0.3 dB. In addition, the OSNR of the remotely generated CSRZ-OOK signal could be degraded to ≈ 29 dB/0.1 nm, before the bit error rate (BER) performance of the PI-PTMM operation began to exponentially degrade. A 138-km dispersion-managed recirculating loop system with a 100-GHz, 13-channel mixed-format dense-wavelength-division multiplexed (DWDM) transmitter was constructed to investigate the effect of metro/long-haul transmission impairments. The PI-PTMM DQPSK and the FPGA pre-coded RZ-DQPSK baseline signals were transmitted 1,900 km and 2,400 km in the nonlinearity-limited transmission regime before reaching the 10^-3 BER FEC limit. 
The relative reduction in transmission distance for the PI-PTMM signal was due to the additional transmitter impairments in the PCF that interact negatively with the transmission fiber.
NASTRAN analysis of Tokamak vacuum vessel using interactive graphics
NASA Technical Reports Server (NTRS)
Miller, A.; Badrian, M.
1978-01-01
Isoparametric quadrilateral and triangular elements were used to represent the vacuum vessel shell structure. For toroidally symmetric loadings, MPCs were employed across model boundaries and rigid format 24 was invoked. Nonsymmetric loadings required the use of the cyclic symmetry analysis available with rigid format 49. NASTRAN served as an important analysis tool in the Tokamak design effort by providing a reliable means for assessing structural integrity. Interactive graphics were employed in the finite element model generation and in the post-processing of results. It was felt that model generation and checkout with interactive graphics reduced the modelling effort and debugging man-hours significantly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gharibyan, N.
In order to fully characterize the NIF neutron spectrum, the SAND-II-SNL software was requested and received from the Radiation Safety Information Computational Center. The software is designed to determine the neutron energy spectrum through analysis of experimental activation data. However, given that the source code was developed on a Sparcstation 10, it is not compatible with current versions of FORTRAN. Accounts have been established through Lawrence Livermore National Laboratory's High Performance Computing facility in order to access different compilers for FORTRAN (e.g. pgf77, pgf90). Additionally, several of the subroutines included in the SAND-II-SNL package have required debugging efforts to allow for proper compiling of the code.
Electrical failure debug using interlayer profiling method
NASA Astrophysics Data System (ADS)
Yang, Thomas; Shen, Yang; Zhang, Yifan; Sweis, Jason; Lai, Ya-Chieh
2017-03-01
It is very well known that as technology nodes shrink, the number of design rules increases, design structures become more regular, and the number of process manufacturing steps increases as well. Normal inspection tools can only monitor hard failures on a single layer; electrical failures that arise from interlayer misalignments can only be detected through testing. This paper presents a working flow that uses pattern-analysis interlayer profiling techniques to convert multi-layer physical information into grouped, linked parameter values. Using this data analysis flow combined with an electrical model allows us to find critical regions on a layout for yield learning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandwisch, D W
1995-11-01
This report describes work performed by Solar Cells, Inc. (SCI), under a 3-year subcontract to advance SCI's PV manufacturing technologies, reduce module production costs, increase module performance, and provide the groundwork for SCI to expand its commercial production capacities. SCI will meet these objectives in three phases by designing, debugging, and operating a 20-MW/year, automated, continuous PV manufacturing line that produces 60-cm × 120-cm thin-film CdTe PV modules. This report describes tasks completed under Phase 1 of the US Department of Energy's PV Manufacturing Technology program.
Solar heating and cooling system design and development
NASA Technical Reports Server (NTRS)
1978-01-01
The progress of the program during the sixth program quarter is reported. The program calls for the development and delivery of eight prototype solar heating and cooling systems for installation and operational test. The William O'Brien single-family heating system was installed and is operational. The New Castle single-family heating residence is under construction. The Kansas University (KU) system is in the final design stages. The 25-ton cooling subsystem for KU is in the debugging stage; pressure drops greater than anticipated were encountered. The 3-ton simulation work is being finalized, and the design parameters for the Rankine system were determined from simulation output.
Functional programming interpreter. M. S. thesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robison, A.D.
1987-03-01
Functional Programming (FP) [BAC87] is an alternative to conventional imperative programming languages. This thesis describes an FP interpreter implementation. Superficially, FP appears to be a simple but very inefficient language. Its simplicity, however, allows it to be interpreted quickly, and much of the inefficiency can be removed by simple interpreter techniques. This thesis describes the Illinois Functional Programming (IFP) interpreter, an interactive functional programming implementation which runs under both MS-DOS and UNIX. The IFP interpreter allows functions to be created, executed, and debugged in an environment very similar to UNIX. IFP's speed is competitive with other interpreted languages such as BASIC.
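The flavor of Backus-style FP, where whole programs are built by combining forms rather than by naming variables, can be sketched as follows. This is an illustration of the paradigm in Python, not the IFP interpreter's actual syntax.

```python
# Toy sketch of FP-style combining forms: programs are functions
# assembled from other functions, with no named variables.

def compose(*fns):
    """FP composition: apply functions right-to-left."""
    def program(x):
        for fn in reversed(fns):
            x = fn(x)
        return x
    return program

def apply_to_all(fn):
    """FP's 'alpha' form: map a function over a sequence."""
    return lambda xs: [fn(x) for x in xs]

def construction(*fns):
    """FP's '[f, g, ...]' form: apply each function to the same input."""
    return lambda x: [fn(x) for fn in fns]

double = lambda n: 2 * n
inc = lambda n: n + 1

# Increment every element, then double every element.
program = compose(apply_to_all(double), apply_to_all(inc))
print(program([1, 2, 3]))  # [4, 6, 8]
```

The point-free structure is what makes such programs easy to interpret directly: the interpreter walks a tree of combining forms with no environment of variable bindings to manage.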
NASA Technical Reports Server (NTRS)
Svalbonas, V.; Levine, H.; Ogilvie, P.
1975-01-01
Engineering programming information is presented for the STARS-2P (shell theory automated for rotational structures-2P (plasticity)) digital computer program, and FORTRAN 4 was used in writing the various subroutines. The execution of this program requires the use of thirteen temporary storage units. The program was initially written and debugged on the IBM 370-165 computer and converted to the UNIVAC 1108 computer, where it utilizes approximately 60,000 words of core. Only basic FORTRAN library routines are required by the program: sine, cosine, absolute value, and square root.
Construct Validation of the Physics Metacognition Inventory
NASA Astrophysics Data System (ADS)
Taasoobshirazi, Gita; Farley, John
2013-02-01
The 24-item Physics Metacognition Inventory was developed to measure physics students' metacognition for problem solving. Items were classified into eight subcomponents subsumed under two broader components: knowledge of cognition and regulation of cognition. The students' scores on the inventory were found to be reliable and related to students' physics motivation and physics grade. An exploratory factor analysis provided evidence of construct validity, revealing six components of students' metacognition when solving physics problems: knowledge of cognition, planning, monitoring, evaluation, debugging, and information management. Although women and men differed on the components, they had equivalent overall metacognition for problem solving. The implications of these findings for future research are discussed.
Design of two wheel self balancing car
NASA Astrophysics Data System (ADS)
He, Chun-hong; Ren, Bin
2018-02-01
This paper proposes a design scheme for a two-wheel self-balancing car in which an integrated gyroscope and accelerometer (MPU6050) constitutes the car's position detection device. The system uses a 32-bit STMicroelectronics MCU as the control core to process the sensor signals and implement the filtering algorithm, motion control, and human-computer interaction. Once assembly and debugging of the whole system are completed, the car can balance independently without intervention. When a moderate disturbance is introduced, the car adjusts quickly to recover its steady state. Through a Bluetooth remote-control module, the car can perform basic actions such as moving forward, moving backward, and turning left.
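A common way to fuse MPU6050 gyroscope and accelerometer readings into a tilt estimate is a first-order complementary filter; the sketch below illustrates that standard technique, not necessarily the authors' exact filtering algorithm, with the gain value chosen for illustration.

```python
# Complementary filter sketch: trust the gyro at short time scales
# and the accelerometer's gravity reference at long time scales.
import math

def complementary_filter(angle, gyro_rate, ax, az, dt, alpha=0.98):
    """angle, gyro_rate in degrees / deg per s; ax, az in g units."""
    accel_angle = math.degrees(math.atan2(ax, az))  # tilt from gravity
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Stationary, upright car: the accelerometer reads 0 degrees of tilt,
# so accumulated gyro drift is slowly pulled back toward zero.
angle = 5.0                     # start with 5 degrees of drift error
for _ in range(200):            # 2 s of samples at 100 Hz
    angle = complementary_filter(angle, gyro_rate=0.0, ax=0.0, az=1.0, dt=0.01)
print(round(angle, 2))          # drift has decayed to well under 0.1 degree
```

The blend constant alpha sets the crossover: larger alpha keeps the responsive gyro signal dominant while still letting the accelerometer remove slow drift, which is exactly the correction a balancing controller needs.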
ProjectQ: Compiling quantum programs for various backends
NASA Astrophysics Data System (ADS)
Haener, Thomas; Steiger, Damian S.; Troyer, Matthias
In order to control quantum computers beyond the current generation, a high level quantum programming language and optimizing compilers will be essential. Therefore, we have developed ProjectQ - an open source software framework to facilitate implementing and running quantum algorithms both in software and on actual quantum hardware. Here, we introduce the backends available in ProjectQ. This includes a high-performance simulator and emulator to test and debug quantum algorithms, tools for resource estimation, and interfaces to several small-scale quantum devices. We demonstrate the workings of the framework and show how easily it can be further extended to control upcoming quantum hardware.
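To illustrate what a simulator backend buys the algorithm developer, here is a tiny one-qubit state-vector simulator. This is a self-contained sketch of the concept, not ProjectQ's API: it "runs" a gate sequence classically so algebraic identities can be checked before touching hardware.

```python
# Minimal one-qubit state-vector simulator (conceptual sketch, not
# the ProjectQ interface): a program is a list of 2x2 gate matrices.
import math

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]
X = [[0, 1], [1, 0]]

def apply(gate, state):
    """Multiply a 2x2 gate into the 2-amplitude state vector."""
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

def run(program, state=(1.0, 0.0)):
    state = list(state)
    for gate in program:
        state = apply(gate, state)
    return state

# H followed by H is the identity; verifying such identities on a
# simulator is how algorithms get debugged cheaply in software.
state = run([H, H])
print([round(a, 6) for a in state])  # [1.0, 0.0]
```

A real backend like ProjectQ's simulator does the same thing at scale (many qubits, optimized kernels) and can be swapped for a hardware backend without changing the algorithm code.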
MHD Instability and Turbulence in the Tachocline
NASA Technical Reports Server (NTRS)
Werne, Joseph
2001-01-01
In this quarter we have begun simulations on the Cray T3E at PSC and we are debugging our code on the TSC. The PSC simulations are examining stratified shear turbulence with a flow-aligned magnetic field and passive tracer particles. We have conducted analysis of neutral simulations to establish a firm basis of comparison. Second-order structure functions have been computed, fit, and compared to theoretical expressions relating the dissipation fields and the structure-function-fit parameters. Agreement with high-Reynolds number observations is excellent, giving us confidence that the lower-Re simulations are relevant to higher-Re flows. We have also evaluated the neutral layer anisotropy.
Application of real-time engine simulations to the development of propulsion system controls
NASA Technical Reports Server (NTRS)
Szuch, J. R.
1975-01-01
The development of digital controls for turbojet and turbofan engines through the use of real-time computer simulations of the engines is presented. The engine simulation provides a test bed for evaluating new control laws and for checking and debugging control software and hardware prior to engine testing. The development and use of real-time, hybrid computer simulations of the Pratt and Whitney TF30-P-3 and F100-PW-100 augmented turbofans are described in support of a number of controls research programs at the Lewis Research Center. The role of engine simulations in solving the propulsion systems integration problem is also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Gregory L.; Arnold, Dorian; LeGendre, Matthew
STAT is a lightweight debugging tool that gathers and merges stack traces from all of the processes in a parallel application. STAT uses the MRNet tree-based overlay network to broadcast commands from the tool front-end to the STAT daemons and to gather the traces from the STAT daemons back at the front-end. As the traces propagate through the MRNet network tree, they are merged across all tasks to identify similar function call patterns and to delineate a small set of equivalence classes. A representative task from each of these classes can then be fed into a full-featured debugger like TotalView for root cause analysis.
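The core merging step can be sketched in a few lines (an illustration of the idea, not STAT's implementation): tasks with identical call paths collapse into one equivalence class, so a million-task job reduces to a handful of distinct behaviors.

```python
# Sketch of stack-trace merging into equivalence classes.
from collections import defaultdict

def merge_traces(traces):
    """traces: {task_id: (frame, frame, ...)} -> {call path: [task ids]}"""
    classes = defaultdict(list)
    for task, path in traces.items():
        classes[path].append(task)
    return dict(classes)

traces = {
    0: ("main", "solve", "mpi_wait"),
    1: ("main", "solve", "mpi_wait"),
    2: ("main", "io_write"),           # the odd one out: a hung writer?
    3: ("main", "solve", "mpi_wait"),
}
for path, tasks in merge_traces(traces).items():
    print(" > ".join(path), "<- tasks", tasks)
```

In STAT the merge happens hierarchically inside the MRNet tree rather than at a single point, but the outcome is the same: the front-end sees a few classes with representative tasks instead of one trace per process.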
The TOTEM T1 read out card motherboard
NASA Astrophysics Data System (ADS)
Minutoli, S.; Lo Vetere, M.; Robutti, E.
2010-12-01
This article describes the Read Out Card (ROC) motherboard, which is the main component of the T1 forward telescope front-end electronic system. The ROC's main objectives are to acquire tracking data and trigger information from the detector. It performs data conversion from electrical to optical format and transfers the data streams to the next level of the system, and it implements Slow Control modules that are able to receive, decode, and distribute the LHC machine's low-jitter clock and fast commands. The ROC also provides a spy mezzanine connection based on a programmable FPGA and USB 2.0 for laboratory and portable DAQ debugging systems.
Optical network scaling: roles of spectral and spatial aggregation.
Arık, Sercan Ö; Ho, Keang-Po; Kahn, Joseph M
2014-12-01
As the bit rates of routed data streams exceed the throughput of single wavelength-division multiplexing channels, spectral and spatial traffic aggregation become essential for optical network scaling. These aggregation techniques reduce network routing complexity by increasing spectral efficiency to decrease the number of fibers, and by increasing switching granularity to decrease the number of switching components. Spectral aggregation yields a modest decrease in the number of fibers but a substantial decrease in the number of switching components. Spatial aggregation yields a substantial decrease in both the number of fibers and the number of switching components. To quantify routing complexity reduction, we analyze the number of multicast and wavelength-selective switches required in a colorless, directionless and contentionless reconfigurable optical add-drop multiplexer architecture. Traffic aggregation has two potential drawbacks: reduced routing power and increased switching component size.
High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering
NASA Technical Reports Server (NTRS)
Maly, K.
1998-01-01
Monitoring is an essential process to observe and improve the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application's environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, which is a large-scale distributed system for collaborative distance learning. 
The filtering mechanism represents an intrinsic component integrated with the monitoring architecture to reduce the volume of event traffic flow in the system, and thereby reduce the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor a collaborative distance learning application for obtaining debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work represents a major contribution by (1) surveying and evaluating existing event filtering mechanisms in support of monitoring LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain the key characteristics of each technique. In addition, we discuss limitations of existing event filtering mechanisms and outline how our architecture will improve key aspects of event filtering.
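The subscription-based filtering idea described above can be sketched in miniature; the names (`Event`, `Monitor`) and the predicate-based subscription API are hypothetical illustrations, not the paper's design:

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str       # component that generated the event
    kind: str         # e.g. "error", "latency", "join"
    value: float = 0.0

class Monitor:
    def __init__(self):
        self.subscriptions = []   # one predicate per subscriber interest
        self.delivered = []       # events actually forwarded to end points

    def subscribe(self, predicate):
        self.subscriptions.append(predicate)

    def publish(self, event):
        # Forward only events some subscriber cares about; drop the rest,
        # reducing the event traffic that reaches end-point tools.
        if any(p(event) for p in self.subscriptions):
            self.delivered.append(event)

m = Monitor()
m.subscribe(lambda e: e.kind == "error")                      # debugger interest
m.subscribe(lambda e: e.kind == "latency" and e.value > 100)  # threshold filter

for e in [Event("nodeA", "error"), Event("nodeB", "latency", 42.0),
          Event("nodeC", "latency", 250.0), Event("nodeD", "join")]:
    m.publish(e)
```

Dropping non-matching events at the monitor, rather than at each end point, is what reduces the traffic flow and hence the intrusiveness of monitoring.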
A translator and simulator for the Burroughs D machine
NASA Technical Reports Server (NTRS)
Roberts, J.
1972-01-01
The D Machine is described as a small user microprogrammable computer designed to be a versatile building block for such diverse functions as: disk file controllers, I/O controllers, and emulators. TRANSLANG is an ALGOL-like language, which allows D Machine users to write microprograms in an English-like format as opposed to creating binary bit pattern maps. The TRANSLANG translator parses TRANSLANG programs into D Machine microinstruction bit patterns which can be executed on the D Machine simulator. In addition to simulation and translation, the two programs also offer several debugging tools, such as: a full set of diagnostic error messages, register dumps, simulated memory dumps, traces on instructions and groups of instructions, and breakpoints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, S.H.; Oxoby, G.J.; Trang, Q.H.
The advent of the personal microcomputer provides a new tool for the debugging, calibration and monitoring of small scale physics apparatus; e.g., a single detector being developed for a larger physics apparatus. With an appropriate interface these microcomputer systems provide a low cost (1/3 the cost of a comparable minicomputer system), convenient, dedicated, portable system which can be used in a fashion similar to that of portable oscilloscopes. Here we describe an interface between the Apple computer and CAMAC which is now being used to study the detector for a Cerenkov ring-imaging device. The Apple is particularly well-suited to this application because of its ease of use, hi-resolution graphics, peripheral bus and documentation support.
An ECG Signal Gathering and Displaying System Based on AVR
NASA Astrophysics Data System (ADS)
Ning, Li; Ruilan, Zhang; Jian, Liu; Xiaochen, Wang; Shuying, Chen; Zhuolin, Lang
2017-12-01
This article introduces a system based on the AVR to acquire ECG data. The system uses the A/D function of the ATmega8 chip and a dot-matrix graphic LCD for ECG acquisition and display, satisfying the demands above. The design presents the hardware composition and software programming of the system in detail, realizing real-time gathering, amplification, filtering, A/D conversion and LCD display. Since the AVR includes an A/D conversion function and supports embedded C language programming, it reduces the peripheral circuitry and, furthermore, decreases the time needed to design and debug the system.
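The acquisition chain described above (sample, filter, display) can be illustrated in a few lines. The real firmware runs in embedded C on the ATmega8, so this Python sketch and its sample values are purely hypothetical:

```python
def moving_average(samples, window=4):
    """Simple low-pass smoothing, standing in for the filtering stage."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        chunk = samples[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical 10-bit ADC codes (0..1023) such as the ATmega8's A/D
# converter might produce for a raw ECG trace.
raw = [512, 530, 700, 512, 500, 505, 490, 512]
smoothed = moving_average(raw)   # values that would be drawn on the LCD
```

The same structure (acquire, filter, display) maps onto the interrupt-driven C code on the microcontroller.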
Software Development Technologies for Reactive, Real-Time, and Hybrid Systems
NASA Technical Reports Server (NTRS)
Manna, Zohar
1996-01-01
The research is directed towards the design and implementation of a comprehensive deductive environment for the development of high-assurance systems, especially reactive (concurrent, real-time, and hybrid) systems. Reactive systems maintain an ongoing interaction with their environment, and are among the most difficult to design and verify. The project aims to provide engineers with a wide variety of tools within a single, general, formal framework in which the tools will be most effective. The entire development process is considered, including the construction, transformation, validation, verification, debugging, and maintenance of computer systems. The goal is to automate the process as much as possible and reduce the errors that pervade hardware and software development.
Designing and implementing transparency for real time inspection of autonomous robots
NASA Astrophysics Data System (ADS)
Theodorou, Andreas; Wortham, Robert H.; Bryson, Joanna J.
2017-07-01
The EPSRC's Principles of Robotics advises the implementation of transparency in robotic systems; however, research related to AI transparency is in its infancy. This paper introduces the reader to the importance of transparent inspection of intelligent agents and provides guidance for good practice when developing such agents. By considering and expanding upon other prominent definitions found in the literature, we provide a robust definition of transparency as a mechanism to expose the decision-making of a robot. The paper continues by addressing potential design decisions developers need to consider when designing and developing transparent systems. Finally, we describe our new interactive intelligence editor, designed to visualise, develop and debug real-time intelligence.
NASA Technical Reports Server (NTRS)
Tennille, Geoffrey M.; Howser, Lona M.
1993-01-01
The use of the CONVEX computers that are an integral part of the Supercomputing Network Subsystems (SNS) of the Central Scientific Computing Complex of LaRC is briefly described. Features of the CONVEX computers that are significantly different from the CRAY supercomputers are covered, including: FORTRAN, C, architecture of the CONVEX computers, the CONVEX environment, batch job submittal, debugging, performance analysis, utilities unique to CONVEX, and documentation. This revision reflects the addition of the Applications Compiler and the X-based debugger, CXdb. The document is intended for all CONVEX users as a ready reference to frequently asked questions and to more detailed information contained within the vendor manuals. It is appropriate for both the novice and the experienced user.
A multiarchitecture parallel-processing development environment
NASA Technical Reports Server (NTRS)
Townsend, Scott; Blech, Richard; Cole, Gary
1993-01-01
A description is given of the hardware and software of a multiprocessor test bed - the second generation Hypercluster system. The Hypercluster architecture consists of a standard hypercube distributed-memory topology, with multiprocessor shared-memory nodes. By using standard, off-the-shelf hardware, the system can be upgraded to use rapidly improving computer technology. The Hypercluster's multiarchitecture nature makes it suitable for researching parallel algorithms in computational field simulation applications (e.g., computational fluid dynamics). The dedicated test-bed environment of the Hypercluster and its custom-built software allows experiments with various parallel-processing concepts such as message passing algorithms, debugging tools, and computational 'steering'. Such research would be difficult, if not impossible, to achieve on shared, commercial systems.
NASA Technical Reports Server (NTRS)
Drake, R. L.; Duvoisin, P. F.; Asthana, A.; Mather, T. W.
1971-01-01
High-speed automated identification and design of dynamic systems, both linear and nonlinear, are discussed. Special emphasis is placed on developing hardware and techniques which are applicable to practical problems. The basic modeling experiment and new results are described. Using the improvements developed, successful identification of several systems, including a physical example as well as simulated systems, was obtained. The advantages of parameter signature analysis over signal signature analysis in go/no-go testing of operational systems were demonstrated. The feasibility of using these ideas for failure mode prediction in operating systems was also investigated. An improved digitally controlled nonlinear function generator was developed, debugged, and completely documented.
USMC Ground Surveillance Robot (GSR): Lessons Learned
NASA Astrophysics Data System (ADS)
Harmon, S. Y.
1987-02-01
This paper describes the design of an autonomous vehicle and the lessons learned during the implementation of that complex robot. The major problems encountered to which solutions were found include sensor processing bandwidth limitations, coordination of the interactions between major subsystems, sensor data fusion and system knowledge representation. Those problems remaining unresolved include system complexity management, the lack of powerful system monitoring and debugging tools, exploratory implementation of a complex system and safety and testing issues. Many of these problems arose from working with underdeveloped and continuously evolving technology and will probably be resolved as the technological resources mature and stabilize. Unfortunately, other problems will continue to plague developers throughout the evolution of autonomous system technology.
Design on the x-ray oral digital image display card
NASA Astrophysics Data System (ADS)
Wang, Liping; Gu, Guohua; Chen, Qian
2009-10-01
According to the main characteristics of X-ray imaging, the X-ray display card is successfully designed and debugged using the basic principle of correlated double sampling (CDS) combined with embedded computer technology. The CCD sensor drive circuit and the corresponding procedures have been designed, along with the filtering and sample-and-hold circuits, and data exchange with the PC104 bus has been implemented. Using a complex programmable logic device to provide gating and timing logic, the functions of counting, reading CPU control instructions, triggering the corresponding exposure, and controlling the sample-and-hold have been completed. Based on analysis of the image quality and noise, the circuit components have been adjusted, and high-quality images have been obtained.
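Correlated double sampling itself reduces to a per-pixel subtraction: the reference (reset) sample is taken from the signal sample, cancelling noise components common to both. A hedged sketch with made-up values:

```python
def cds(reset_samples, signal_samples):
    # Output is signal minus reference, so any noise present in both
    # samples (e.g. reset/offset noise) cancels pixel by pixel.
    return [s - r for r, s in zip(reset_samples, signal_samples)]

reset  = [100, 102,  99, 101]   # per-pixel reference level (shared offset noise)
signal = [160, 182, 149, 231]   # same offset plus the photo-generated signal

pixels = cds(reset, signal)
```

In the card described above, this subtraction happens in the analog sample-and-hold circuitry before A/D conversion; the arithmetic is identical.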
Virtually-synchronous communication based on a weak failure suspector
NASA Technical Reports Server (NTRS)
Schiper, Andre; Ricciardi, Aleta
1993-01-01
Failure detectors (or, more accurately, Failure Suspectors (FS)) appear to be a fundamental service upon which to build fault-tolerant, distributed applications. This paper shows that a FS with very weak semantics (i.e., one that delivers failure and recovery information in no specific order) suffices to implement virtually-synchronous communication (VSC) in an asynchronous system subject to process crash failures and network partitions. The VSC paradigm is particularly useful in asynchronous systems and greatly simplifies building fault-tolerant applications that mask failures by replicating processes. We suggest a three-component architecture to implement virtually-synchronous communication: (1) at the lowest level, the FS component; (2) on top of it, a component that defines new views; and (3) a component that reliably multicasts messages within a view. The issues covered in this paper also lead to a better understanding of the various membership service semantics proposed in recent literature.
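The layering can be sketched as a toy: a weak failure suspector emits suspect/recover notices in arbitrary order, and the view-definition component turns them into a numbered sequence of views within which messages may be multicast. All names here are illustrative, not from the paper:

```python
class ViewManager:
    """Toy view-definition component sitting on top of a weak FS."""

    def __init__(self, members):
        self.view_id = 0
        self.members = set(members)

    def on_notice(self, kind, proc):
        # Notices may arrive in any order; each one simply triggers
        # installation of the next numbered view.
        if kind == "suspect":
            self.members.discard(proc)
        elif kind == "recover":
            self.members.add(proc)
        self.view_id += 1
        return self.view_id, frozenset(self.members)

vm = ViewManager({"p", "q", "r"})
v1 = vm.on_notice("suspect", "q")   # new view without q
v2 = vm.on_notice("recover", "q")   # next view with q back
```

The point of the architecture is that this view sequence, not the unordered FS output, is what the reliable multicast layer sees.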
Load balancing for massively-parallel soft-real-time systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hailperin, M.
1988-09-01
Global load balancing, if practical, would allow the effective use of massively-parallel ensemble architectures for large soft-real-time problems. The challenge is to replace quick global communication, which is impractical in a massively-parallel system, with statistical techniques. In this vein, the author proposes a novel approach to decentralized load balancing based on statistical time-series analysis. Each site estimates the system-wide average load using information about past loads of individual sites and attempts to match that average. This estimation process is practical because the soft-real-time systems of interest naturally exhibit loads that are periodic, in a statistical sense akin to seasonality in econometrics. It is shown how this load-characterization technique can be the foundation for a load-balancing system in an architecture employing cut-through routing and an efficient multicast protocol.
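The seasonality-based estimation can be sketched as follows; the same-phase average and the numbers are illustrative stand-ins for the paper's statistical time-series analysis:

```python
def seasonal_estimate(history, period, phase):
    """Predict the load at a given phase of the cycle as the average of
    past observations at that same phase (a crude seasonal model)."""
    same_phase = [load for t, load in enumerate(history) if t % period == phase]
    return sum(same_phase) / len(same_phase)

# Two full cycles of a period-4 load pattern observed by one site.
history = [10, 40, 20, 5,
           12, 38, 22, 7]
estimate = seasonal_estimate(history, period=4, phase=1)
```

Because every site can compute such estimates from locally gossiped history, no quick global communication is needed to converge toward the system-wide average.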
Efficient Assignment of Multiple E-MBMS Sessions towards LTE
NASA Astrophysics Data System (ADS)
Alexiou, Antonios; Bouras, Christos; Kokkinos, Vasileios
One of the major prerequisites for Long Term Evolution (LTE) networks is the mass provision of multimedia services to mobile users. To this end, Evolved Multimedia Broadcast/Multicast Service (E-MBMS) is envisaged to play an instrumental role in the LTE standardization process and to ensure LTE's proliferation in the mobile market. E-MBMS targets the economical delivery, in terms of power and spectral efficiency, of multimedia data from a single source entity to multiple destinations. This paper proposes a novel mechanism for efficient radio bearer selection during E-MBMS transmissions in LTE networks. The proposed mechanism is based on the concept of combining transport channels in any cell of the network. Most significantly, the mechanism manages to efficiently deliver multiple E-MBMS sessions. The performance of the proposed mechanism is evaluated and compared with several radio bearer selection mechanisms in order to highlight the enhancements that it provides.
Enabling Optical Network Test Bed for 5G Tests
NASA Astrophysics Data System (ADS)
Giuntini, Marco; Grazioso, Paolo; Matera, Francesco; Valenti, Alessandro; Attanasio, Vincenzo; Di Bartolo, Silvia; Nastri, Emanuele
2017-03-01
In this work, we show some experimental approaches concerning optical network design dedicated to 5G infrastructures. In particular, we show some implementations of network slicing based on Carrier Ethernet forwarding, which will be very suitable in the context of 5G heterogeneous networks, especially looking at services for vertical enterprises. We also show how to adopt a central unit (orchestrator) to automatically manage such logical paths according to quality-of-service requirements, which can be monitored at the user location. We also illustrate how novel all-optical processes, such as the ones based on all-optical wavelength conversion, can be used for multicasting, enabling development of TV broadcasting based on 4G-5G terminals. These managing and forwarding techniques, operating on optical links, are tested in a wireless environment on Wi-Fi cells and emulating LTE and WiMAX systems by means of the NS-3 code.
Bhanot, Gyan [Princeton, NJ; Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY
2009-09-08
Class network routing is implemented in a network such as a computer network comprising a plurality of parallel compute processors at nodes thereof. Class network routing allows a compute processor to broadcast a message to a range (one or more) of other compute processors in the computer network, such as processors in a column or a row. Normally this type of operation requires a separate message to be sent to each processor. With class network routing pursuant to the invention, a single message is sufficient, which generally reduces the total number of messages in the network as well as the latency to do a broadcast. Class network routing is also applied to dense matrix inversion algorithms on distributed memory parallel supercomputers with hardware class function (multicast) capability. This is achieved by exploiting the fact that the communication patterns of dense matrix inversion can be served by hardware class functions, which results in faster execution times.
Automated Instrumentation, Monitoring and Visualization of PVM Programs Using AIMS
NASA Technical Reports Server (NTRS)
Mehra, Pankaj; VanVoorst, Brian; Yan, Jerry; Tucker, Deanne (Technical Monitor)
1994-01-01
We present views and analysis of the execution of several PVM codes for Computational Fluid Dynamics on a network of Sparcstations, including (a) NAS Parallel Benchmarks CG and MG (White, Alund and Sunderam 1993); (b) a multi-partitioning algorithm for NAS Parallel Benchmark SP (Wijngaart 1993); and (c) an overset grid flowsolver (Smith 1993). These views and analysis were obtained using our Automated Instrumentation and Monitoring System (AIMS) version 3.0, a toolkit for debugging the performance of PVM programs. We describe the architecture, operation and application of AIMS. The AIMS toolkit contains (a) Xinstrument, which can automatically instrument various computational and communication constructs in message-passing parallel programs; (b) Monitor, a library of run-time trace-collection routines; (c) VK (Visual Kernel), an execution-animation tool with source-code clickback; and (d) Tally, a tool for statistical analysis of execution profiles. Currently, Xinstrument can handle C and Fortran 77 programs using PVM 3.2.x; Monitor has been implemented and tested on Sun 4 systems running SunOS 4.1.2; and VK uses X11R5 and Motif 1.2. Data and views obtained using AIMS clearly illustrate several characteristic features of executing parallel programs on networked workstations: (a) the impact of long message latencies; (b) the impact of multiprogramming overheads and associated load imbalance; (c) cache and virtual-memory effects; and (d) significant skews between workstation clocks. Interestingly, AIMS can compensate for constant skew (zero drift) by calibrating the skew between a parent and its spawned children. In addition, AIMS' skew-compensation algorithm can adjust timestamps in a way that eliminates physically impossible communications (e.g., messages going backwards in time). Our current efforts are directed toward creating new views to explain the observed performance of PVM programs.
Some of the features planned for the near future include: (a) ConfigView, showing the physical topology of the virtual machine, inferred using specially formatted IP (Internet Protocol) packets; and (b) LoadView, synchronous animation of PVM-program execution and resource-utilization patterns.
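The constant-skew calibration described above can be illustrated with a small sketch: given (send, receive) timestamp pairs between a parent and a child, shift the child's clock just enough that no message appears to arrive before it was sent. This is a simplification of AIMS' actual algorithm, with invented numbers:

```python
def minimal_offset(records):
    """records: (send_ts_on_parent, recv_ts_on_child) pairs for one child.
    Returns the smallest non-negative shift of the child's clock under
    which no message is received before it was sent."""
    return max(0.0, max(send - recv for send, recv in records))

# One message (10.0 -> 9.2) apparently travels backwards in time,
# so the child's clock must lag the parent's by at least 0.8.
records = [(10.0, 9.2), (20.0, 19.5), (30.0, 30.4)]
offset = minimal_offset(records)
corrected = [(send, recv + offset) for send, recv in records]
```

After correction every receive timestamp is at or after its send, eliminating the physically impossible communications from the trace.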
Polling-Based High-Bit-Rate Packet Transfer in a Microcellular Network to Allow Fast Terminals
NASA Astrophysics Data System (ADS)
Hoa, Phan Thanh; Lambertsen, Gaute; Yamada, Takahiko
A microcellular network will be a good candidate for the future broadband mobile network. It is expected to support high-bit-rate connections for many fast mobile users if handover is processed fast enough to lessen its impact on QoS requirements. One promising technique for the wireless interface in such a microcellular network is WLAN (Wireless LAN), due to its very high wireless channel rate. However, this technique's limited mobility support must be improved before it can be extended to the microcellular environment. The reason for its limited mobility support is the large handover latency caused by contention-based handover to the new BS (base station) and the delay of re-forwarding data from the old BS to the new one. This paper proposes multi-polling and a dynamic LMC (Logical Macro Cell) to reduce the delays mentioned above. A polling frame for an MT (Mobile Terminal) is sent from every BS belonging to the same LMC, a virtual single macro cell that is a multicast group of several adjacent micro-cells in which an MT is communicating. Instead of contending for the medium of a new BS during handover, the MT responds to the polling sent from that new BS to enable the transition. Because only one BS of the LMC receives the polling ACK (acknowledgement) directly from the MT, this ACK frame has to be multicast to all BSs of the same LMC through the terrestrial network to continue the next polling cycle at each BS. Moreover, when an MT hands over to a new cell, its current LMC is switched over to a new corresponding LMC to avoid future contention for a new LMC. In this way, an MT can hand over between micro-cells of an LMC smoothly, because redundant resources are reserved for it at neighboring cells and it need not contend with others.
Our simulation results using the OMNeT++ simulator illustrate the performance achievements of the multi-polling and dynamic LMC scheme in eliminating handover latency and packet loss and in keeping mobile users' throughput stable under high traffic load, though it incurs some overhead on the neighboring cells.
Dynamic Analyses of Result Quality in Energy-Aware Approximate Programs
NASA Astrophysics Data System (ADS)
Ringenburg, Michael F.
Energy efficiency is a key concern in the design of modern computer systems. One promising approach to energy-efficient computation, approximate computing, trades off output precision for energy efficiency. However, this tradeoff can have unexpected effects on computation quality. This thesis presents dynamic analysis tools to study, debug, and monitor the quality and energy efficiency of approximate computations. We propose three styles of tools: prototyping tools that allow developers to experiment with approximation in their applications, online tools that instrument code to determine the key sources of error, and online tools that monitor the quality of deployed applications in real time. Our prototyping tool is based on an extension to the functional language OCaml. We add approximation constructs to the language, an approximation simulator to the runtime, and profiling and auto-tuning tools for studying and experimenting with energy-quality tradeoffs. We also present two online debugging tools and three online monitoring tools. The first online tool identifies correlations between output quality and the total number of executions of, and errors in, individual approximate operations. The second tracks the number of approximate operations that flow into a particular value. Our online tools comprise three low-cost approaches to dynamic quality monitoring. They are designed to monitor quality in deployed applications without spending more energy than is saved by approximation. Online monitors can be used to perform real time adjustments to energy usage in order to meet specific quality goals. We present prototype implementations of all of these tools and describe their usage with several applications. 
Our prototyping, profiling, and autotuning tools allow us to experiment with approximation strategies and identify new strategies, our online tools succeed in providing new insights into the effects of approximation on output quality, and our monitors succeed in controlling output quality while still maintaining significant energy efficiency gains.
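One of the low-cost monitoring styles described, sampled quality checks against a precise reference, might look roughly like this; the approximate operation, the sampling rate, and the threshold are invented for illustration:

```python
import random

def precise(x):
    return x * x

def approximate(x):
    # Stand-in for an energy-saving approximate operation with bounded error.
    return x * x + random.uniform(-0.5, 0.5)

def quality_ok(xs, check_rate=0.25, max_mean_err=1.0):
    """Sampled online check: occasionally re-run an input through the
    precise version and track the observed error, so monitoring costs
    far less energy than checking every operation would."""
    errors = []
    for x in xs:
        y = approximate(x)
        if random.random() < check_rate:
            errors.append(abs(y - precise(x)))
    mean_err = sum(errors) / len(errors) if errors else 0.0
    return mean_err <= max_mean_err

ok = quality_ok(range(50))
```

A deployed monitor would feed the same signal back into a controller that dials approximation up or down to hold a quality goal.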
A Change Impact Analysis to Characterize Evolving Program Behaviors
NASA Technical Reports Server (NTRS)
Rungta, Neha Shyam; Person, Suzette; Branchaud, Joshua
2012-01-01
Change impact analysis techniques estimate the potential effects of changes made to software. Directed Incremental Symbolic Execution (DiSE) is an intraprocedural technique for characterizing the impact of software changes on program behaviors. DiSE first estimates the impact of the changes on the source code using program slicing techniques, and then uses the impact sets to guide symbolic execution to generate path conditions that characterize impacted program behaviors. DiSE, however, cannot reason about the flow of impact between methods and will fail to generate path conditions for certain impacted program behaviors. In this work, we present iDiSE, an extension to DiSE that performs an interprocedural analysis. iDiSE combines static and dynamic calling context information to efficiently generate impacted program behaviors across calling contexts. Information about impacted program behaviors is useful for testing, verification, and debugging of evolving programs. We present a case study of our implementation of the iDiSE algorithm to demonstrate its efficiency at computing impacted program behaviors. Traditional notions of coverage are insufficient for characterizing the testing effort used to validate evolving program behaviors because they do not take into account the impact of changes to the code. In this work we present novel definitions of impacted coverage metrics that are useful for evaluating the testing effort required to test evolving programs. We then describe how the notions of impacted coverage can be used to configure techniques such as DiSE and iDiSE in order to support regression testing related tasks. We also discuss how DiSE and iDiSE can be configured for debugging, i.e., finding the root cause of errors introduced by changes made to the code. In our empirical evaluation we demonstrate that the configurations of DiSE and iDiSE can be used to support various software maintenance tasks.
Permanent magnet synchronous motor servo system control based on μC/OS
NASA Astrophysics Data System (ADS)
Shi, Chongyang; Chen, Kele; Chen, Xinglong
2015-10-01
When an Opto-Electronic Tracking system operates in complex environments, every subsystem must operate efficiently and stably. As an important part of the Opto-Electronic Tracking system, the performance of the PMSM (Permanent Magnet Synchronous Motor) servo system greatly affects the tracking system's accuracy and speed [1][2]. This paper applies the embedded real-time operating system μC/OS to the control of the PMSM servo system, implements the SVPWM (Space Vector Pulse Width Modulation) algorithm in the PMSM servo system, and optimizes the servo system's stability. Addressing the characteristics of the Opto-Electronic Tracking system, the paper extends μC/OS with software redundancy processes, remote debugging and upgrading. As a result, the Opto-Electronic Tracking system performs efficiently and stably.
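A standard SVPWM step, sector selection plus active-vector dwell times, can be sketched as follows. The formulas follow common SVPWM derivations, not necessarily the paper's implementation, and the example values are illustrative:

```python
import math

def svpwm_times(v_ref, angle_deg, v_dc, t_s):
    """One SVPWM computation: pick the 60-degree sector of the reference
    voltage vector and split the switching period t_s between the two
    adjacent active vectors (t1, t2) and the zero vectors (t0)."""
    sector = int(angle_deg // 60) % 6                 # sector index 0..5
    theta = math.radians(angle_deg - 60 * sector)     # angle within the sector
    m = math.sqrt(3) * v_ref / v_dc                   # modulation index
    t1 = t_s * m * math.sin(math.pi / 3 - theta)      # first active vector
    t2 = t_s * m * math.sin(theta)                    # second active vector
    t0 = t_s - t1 - t2                                # remainder on zero vectors
    return sector, t1, t2, t0

# Reference vector at 30 degrees, at the peak of the linear modulation range.
sector, t1, t2, t0 = svpwm_times(v_ref=1.0 / math.sqrt(3), angle_deg=30.0,
                                 v_dc=1.0, t_s=1.0)
```

On the real controller this computation runs inside a periodic μC/OS task and its results program the inverter's PWM compare registers.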
NASA Technical Reports Server (NTRS)
Church, Victor E.; Long, D.; Hartenstein, Ray; Perez-Davila, Alfredo
1992-01-01
This report is one of a series discussing configuration management (CM) topics for Space Station ground systems software development. It provides a description of the Software Support Environment (SSE)-developed Software Test Management (STM) capability, and discusses the possible use of this capability for management of developed software during testing performed on target platforms. This is intended to supplement the formal documentation of STM provided by the SSE Project. How STM can be used to integrate contractor CM and formal CM for software before delivery to operations is described. STM provides a level of control that is flexible enough to support integration and debugging, but sufficiently rigorous to ensure the integrity of the testing process.
Flowfield computation of entry vehicles
NASA Technical Reports Server (NTRS)
Prabhu, Dinesh K.
1990-01-01
The equations governing the multidimensional flow of a reacting mixture of thermally perfect gases were derived. The modeling procedures for the various terms of the conservation laws are discussed. A numerical algorithm, based on the finite-volume approach, to solve these conservation equations was developed. The advantages and disadvantages of the present numerical scheme are discussed from the point of view of accuracy, computer time, and memory requirements. A simple one-dimensional model problem was solved to prove the feasibility and accuracy of the algorithm. A computer code implementing the above algorithm was developed and is presently being applied to simple geometries and conditions. Once the code is completely debugged and validated, it will be used to compute the complete unsteady flow field around the Aeroassist Flight Experiment (AFE) body.
Efficient monitoring of CRAB jobs at CMS
NASA Astrophysics Data System (ADS)
Silva, J. M. D.; Balcas, J.; Belforte, S.; Ciangottini, D.; Mascheroni, M.; Rupeika, E. A.; Ivanov, T. T.; Hernandez, J. M.; Vaandering, E.
2017-10-01
CRAB is a tool used for distributed analysis of CMS data. Users can submit sets of jobs with similar requirements (tasks) with a single request. CRAB uses a client-server architecture, where a lightweight client, a server, and ancillary services work together and are maintained by CMS operators at CERN. As with most complex software, good monitoring tools are crucial for efficient use and long-term maintainability. This work gives an overview of the monitoring tools developed to ensure the CRAB server and infrastructure are functional, help operators debug user problems, and minimize overhead and operating cost. This work also illustrates the design choices and gives a report on our experience with the tools we developed and the external ones we used.
Research of subdivision driving technology for brushless DC motors in optical fiber positioning
NASA Astrophysics Data System (ADS)
Kan, Yi; Gu, Yonggang; Zhu, Ye; Zhai, Chao
2016-07-01
In fiber spectroscopic telescopes, optical fiber positioning units are used to position thousands of fibers on the focal plane quickly and precisely. Stepper motors are used in existing units; however, they have some inherent deficiencies, such as serious heating and low efficiency. In this work, the widely adopted subdivision driving technology for stepper motors is transplanted to brushless DC motors. It keeps the advantages of stepper motors, such as high positioning accuracy and resolution, while overcoming the disadvantages mentioned above. This research therefore focuses on developing a novel subdivision driving technology for brushless DC motors. Experiments on online debugging and on subdivision speed and position control verify that the proposed brushless DC motor subdivision technology achieves the expected functions.
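The subdivision idea, splitting each full step into sinusoidally weighted current micro-steps, can be sketched as follows. This is illustrative only; the paper's driver applies the technique to BLDC phase currents in hardware:

```python
import math

def microstep_currents(step, n_sub, i_max=1.0):
    """Command currents for micro-step `step` of `n_sub` subdivisions:
    sweep the current vector through 90 electrical degrees so the rotor
    advances in n_sub fine increments per full step."""
    theta = (step / n_sub) * (math.pi / 2)   # position within one full step
    return i_max * math.cos(theta), i_max * math.sin(theta)

# Four-way subdivision of a single full step: current smoothly
# transfers from one winding to the other across the micro-steps.
table = [microstep_currents(k, 4) for k in range(5)]
```

Finer subdivision (larger `n_sub`) yields higher positioning resolution and smoother torque, which is the property carried over to the BLDC driver.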