Final Report for Project DE-FC02-06ER25755 [Pmodels2]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panda, Dhabaleswar; Sadayappan, P.
2014-03-12
In this report, we describe the research accomplished by the OSU team under the Pmodels2 project. The team has worked along several directions: designing high performance MPI implementations on modern networking technologies (Mellanox InfiniBand, including the new ConnectX2 architecture and Quad Data Rate; QLogic InfiniPath; the emerging 10GigE/iWARP and RDMA over Converged Enhanced Ethernet (RoCE); and Obsidian IB-WAN), studying MPI scalability issues for multi-thousand-node clusters using the XRC transport, scalable job start-up, dynamic process management support, efficient one-sided communication, protocol offloading, and designing scalable collective communication libraries for emerging multi-core architectures. New designs conforming to Argonne’s Nemesis interface have also been carried out. All of the above solutions have been integrated into the open-source MVAPICH/MVAPICH2 software. This software is currently being used by more than 2,100 organizations worldwide (in 71 countries). As of January ’14, more than 200,000 downloads have taken place from the OSU Web site. In addition, many InfiniBand vendors, server vendors, system integrators and Linux distributors have been incorporating MVAPICH/MVAPICH2 into their software stacks and distributing it. Several InfiniBand systems using MVAPICH/MVAPICH2 have obtained positions in the TOP500 ranking of supercomputers in the world. The latest November ’13 ranking includes the following systems: the 7th-ranked Stampede system at TACC with 462,462 cores; the 11th-ranked Tsubame 2.5 system at Tokyo Institute of Technology with 74,358 cores; and the 16th-ranked Pleiades system at NASA with 81,920 cores. Work on PGAS models has proceeded in multiple directions. The Scioto framework, which supports task parallelism in one-sided and global-view parallel programming, has been extended to allow multi-processor tasks that are executed by processor groups. A quantum Monte Carlo application is being ported onto the extended Scioto framework. A public release of Global Trees (GT) has been made, along with the Global Chunks (GC) framework on which GT is built. The Global Chunks (GC) layer is also being used as the basis for the development of a higher-level Global Graphs (GG) layer. The Global Graphs (GG) system will provide a global address space view of distributed graph data structures on distributed memory systems.
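The abstract above lists efficient one-sided communication among the MVAPICH2 features. As a hedged illustration of what one-sided (RMA) communication looks like at the MPI level, the C sketch below exposes a window on every rank and lets rank 0 write into rank 1's window with MPI_Put; this is generic MPI-3 code, not anything specific to MVAPICH2 internals.

    /* Minimal MPI one-sided (RMA) sketch: rank 0 puts a value into rank 1's window.
     * Generic MPI-3 code illustrating the style of communication MVAPICH2 optimizes,
     * not an MVAPICH2-internal API. Build: mpicc rma.c -o rma ; run with 2+ ranks. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value = -1;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Every rank exposes one int through an RMA window. */
        MPI_Win_create(&value, sizeof(int), sizeof(int), MPI_INFO_NULL,
                       MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);              /* open access epoch */
        if (rank == 0) {
            int payload = 42;
            MPI_Put(&payload, 1, MPI_INT, 1 /* target rank */, 0 /* displacement */,
                    1, MPI_INT, win);       /* one-sided write into rank 1 */
        }
        MPI_Win_fence(0, win);              /* close epoch: the put is now visible */

        if (rank == 1)
            printf("rank 1 received %d via MPI_Put\n", value);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }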
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grant, Ryan E.; Barrett, Brian W.; Pedretti, Kevin
The Portals reference implementation is based on the Portals 4.X API, published by Sandia National Laboratories as a freely available public document. It is designed to be an implementation of the Portals Networking Application Programming Interface and is used by several other upper layer protocols such as SHMEM, GASNet and MPI. It is implemented over existing networks, specifically Ethernet and InfiniBand networks. This implementation provides Portals network functionality and serves as a software emulation of Portals-compliant networking hardware. It can be used to develop software using the Portals API prior to the debut of Portals networking hardware, such as Bull’s BXI interconnect, as well as a substitute for Portals hardware on development platforms that do not have Portals-compliant hardware. The reference implementation provides new capabilities beyond those of a typical network, namely the ability to have messages matched in hardware in a way compatible with upper layer software such as MPI or SHMEM. It also offers methods of offloading network operations via triggered operations, which can be used to create offloaded collective operations. Specific details on the Portals API can be found at http://portals4.org.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pais Pitta de Lacerda Ruivo, Tiago; Bernabeu Altayo, Gerard; Garzoglio, Gabriele
2014-11-11
It has been widely accepted that software virtualization has a big negative impact on high-performance computing (HPC) application performance. This work explores the potential use of InfiniBand hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an InfiniBand network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SRIOV). Our solution spanned modifications to the Linux hypervisor as well as the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR InfiniBand network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).
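The latency micro-benchmark mentioned above is typically a simple ping-pong measurement. The following C sketch shows that style of measurement; the iteration count and message size are illustrative choices, not the settings used in the study.

    /* Ping-pong latency micro-benchmark sketch between ranks 0 and 1.
     * Illustrates the kind of latency measurement cited above; the iteration
     * count and 1-byte message size are arbitrary, not the paper's settings. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const int iters = 10000, msg_size = 1;   /* 1-byte latency test */
        char buf[1] = {0};
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; ++i) {
            if (rank == 0) {
                MPI_Send(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)   /* one-way latency = round-trip time / 2 */
            printf("avg one-way latency: %.2f us\n", (t1 - t0) / iters / 2.0 * 1e6);

        MPI_Finalize();
        return 0;
    }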
Cloud services on an astronomy data center
NASA Astrophysics Data System (ADS)
Solar, Mauricio; Araya, Mauricio; Farias, Humberto; Mardones, Diego; Wang, Zhong
2016-08-01
The research on computational methods for astronomy performed during the first phase of the Chilean Virtual Observatory (ChiVO) led to the development of functional prototypes, implementing state-of-the-art computational methods and proposing new algorithms and techniques. The ChiVO software architecture is based on the IVOA protocols and standards. These protocols and standards are grouped in layers, with emphasis on the application and data layers, because their basic standards define the minimum operation that a VO should support. As a preliminary verification, the current implementation works with a 1 TB dataset that comes from the reduction of ALMA cycle 0 data. This research was mainly focused on spectroscopic data cubes from ALMA's cycle 0 public data. As the dataset size increases, with ALMA's cycle 1 public data growing every month, data processing is becoming a major bottleneck for scientific research in astronomy. When designing the ChiVO, we focused on improving both computation and I/O costs, and this led us to configure a data center with 424 high speed cores of 2.6 GHz, 1 PB of storage (distributed across hard disk drives (HDD) and solid state drives (SSD)) and a high speed InfiniBand interconnect. We are developing a cloud-based e-infrastructure for ChiVO services, in order to have a coherent framework for developing novel web services for on-line data processing in the ChiVO. We are currently parallelizing these new algorithms and techniques using HPC tools to speed up big data processing, and we will report our results in terms of data size, data distribution, number of cores and response time, in order to compare different processing and storage configurations.
Peregrine System Configuration | High-Performance Computing | NREL
Nodes and storage are connected by a high speed InfiniBand network. Compute nodes are diskless; directories are mounted on all nodes, along with a file system dedicated to shared projects. Nodes have processors with 64 GB of memory and are connected to the high speed InfiniBand network.
Comparing the Performance of Blue Gene/Q with Leading Cray XE6 and InfiniBand Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerbyson, Darren J.; Barker, Kevin J.; Vishnu, Abhinav
2013-01-21
Three types of systems dominate the current High Performance Computing landscape: the Cray XE6, the IBM Blue Gene, and commodity clusters using InfiniBand. These systems have quite different characteristics, making the choice for a particular deployment difficult. The XE6 uses Cray’s proprietary Gemini 3-D torus interconnect with two nodes at each network endpoint. The latest IBM Blue Gene/Q uses a single socket integrating processor and communication in a 5-D torus network. InfiniBand provides the flexibility of using nodes from many vendors connected in many possible topologies. The performance characteristics of each vary vastly, along with their utilization models. In this work we compare the performance of these three systems using a combination of micro-benchmarks and a set of production applications. In particular we discuss the causes of variability in performance across the systems and also quantify where performance is lost using a combination of measurements and models. Our results show that significant performance can be lost in normal production operation of the Cray XE6 and InfiniBand clusters in comparison to Blue Gene/Q.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andre, J.M.; et al.
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of approximately 100 GB/s to the high-level trigger farm. The DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbit/s Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbit/s Infiniband FDR Clos network has been chosen for the event builder. This paper presents the implementation and performance of the event-building system.
A New Event Builder for CMS Run II
NASA Astrophysics Data System (ADS)
Albertsson, K.; Andre, J.-M.; Andronidis, A.; Behrens, U.; Branson, J.; Chaze, O.; Cittolin, S.; Darlea, G.-L.; Deldicque, C.; Dobson, M.; Dupont, A.; Erhan, S.; Gigi, D.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; Nunez-Barranco-Fernandez, C.; O'Dell, V.; Orsini, L.; Paus, C.; Petrucci, A.; Pieri, M.; Racz, A.; Roberts, P.; Sakulin, H.; Schwick, C.; Stieger, B.; Sumorok, K.; Veverka, J.; Zaza, S.; Zejdl, P.
2015-12-01
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100GB/s to the high-level trigger (HLT) farm. The DAQ system has been redesigned during the LHC shutdown in 2013/14. The new DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbps Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbps Infiniband FDR CLOS network has been chosen for the event builder. This paper discusses the software design, protocols, and optimizations for exploiting the hardware capabilities. We present performance measurements from small-scale prototypes and from the full-scale production system.
High-Throughput and Low-Latency Network Communication with NetIO
NASA Astrophysics Data System (ADS)
Schumacher, Jörn; Plessl, Christian; Vandelli, Wainer
2017-10-01
HPC network technologies like Infiniband, TrueScale or OmniPath provide low-latency and high-throughput communication between hosts, which makes them attractive options for data-acquisition systems in large-scale high-energy physics experiments. Like HPC networks, DAQ networks are local and include a well-specified number of systems. Unfortunately, traditional network communication APIs for HPC clusters like MPI or PGAS exclusively target the HPC community and are not well suited for DAQ applications. It is possible to build distributed DAQ applications using low-level system APIs like Infiniband Verbs, but doing so requires a non-negligible effort and expert knowledge. At the same time, message services like ZeroMQ have gained popularity in the HEP community. They make it possible to build distributed applications with a high-level approach and provide good performance. Unfortunately, their usage usually limits developers to TCP/IP-based networks. While it is possible to operate a TCP/IP stack on top of Infiniband and OmniPath, this approach may not be very efficient compared to a direct use of native APIs. NetIO is a simple, novel asynchronous message service that can operate on Ethernet, Infiniband and similar network fabrics. In this paper the design and implementation of NetIO is presented, and its use is evaluated in comparison to other approaches. NetIO supports different high-level programming models and typical workloads of HEP applications. The ATLAS FELIX project [1] successfully uses NetIO as its central communication platform. The architecture of NetIO is described in this paper, including the user-level API and the internal data-flow design. The paper includes a performance evaluation of NetIO including throughput and latency measurements. The performance is compared against the state-of-the-art ZeroMQ message service. Performance measurements are performed in a lab environment with Ethernet and FDR Infiniband networks.
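NetIO's own API is not shown in the abstract. As a point of reference, the ZeroMQ baseline it is compared against can be illustrated with a minimal libzmq PUSH/PULL pair in C; the endpoint address and payload below are placeholders, and this is plain ZeroMQ, not NetIO.

    /* Minimal ZeroMQ PUSH/PULL sketch, illustrating the kind of message-service
     * baseline NetIO is compared against (plain libzmq, not NetIO's API).
     * Endpoint and payload are placeholders. Build: cc zpipe.c -lzmq
     * Run "./zpipe pull" in one terminal, then "./zpipe push" in another. */
    #include <zmq.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        void *ctx = zmq_ctx_new();
        if (argc > 1 && strcmp(argv[1], "push") == 0) {
            void *s = zmq_socket(ctx, ZMQ_PUSH);
            zmq_connect(s, "tcp://127.0.0.1:5555");
            zmq_send(s, "event-fragment", 14, 0);      /* fire-and-forget message */
            zmq_close(s);
        } else {
            void *s = zmq_socket(ctx, ZMQ_PULL);
            zmq_bind(s, "tcp://127.0.0.1:5555");
            char buf[64];
            int n = zmq_recv(s, buf, sizeof(buf) - 1, 0);
            if (n >= 0) { buf[n] = '\0'; printf("received: %s\n", buf); }
            zmq_close(s);
        }
        zmq_ctx_destroy(ctx);
        return 0;
    }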
Performance of the CMS Event Builder
NASA Astrophysics Data System (ADS)
Andre, J.-M.; Behrens, U.; Branson, J.; Brummer, P.; Chaze, O.; Cittolin, S.; Contescu, C.; Craigs, B. G.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Doualot, N.; Erhan, S.; Fulcher, J. F.; Gigi, D.; Gładki, M.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Janulis, M.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; O'Dell, V.; Orsini, L.; Paus, C.; Petrova, P.; Pieri, M.; Racz, A.; Reis, T.; Sakulin, H.; Schwick, C.; Simelevicius, D.; Zejdl, P.
2017-10-01
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of O(100 GB/s) to the high-level trigger farm. The DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbit/s Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbit/s Infiniband FDR Clos network has been chosen for the event builder. This paper presents the implementation and performance of the event-building system.
Comparison of High Performance Network Options: EDR InfiniBand vs. 100Gb RDMA Capable Ethernet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kachelmeier, Luke Anthony; Van Wig, Faith Virginia; Erickson, Kari Natania
These are the slides for a presentation at the HPC Mini Showcase, comparing two high performance network options: EDR InfiniBand and 100Gb RDMA-capable Ethernet. The conclusions of the comparison are as follows: the direct results show good potential; 100Gb technology is new and not yet standardized, so deployment effort is complex for both options; equipment from different companies is not necessarily compatible; and to reach 100Gb/s, all components must come from a single vendor.
A performance comparison of current HPC systems: Blue Gene/Q, Cray XE6 and InfiniBand systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerbyson, Darren J.; Barker, Kevin J.; Vishnu, Abhinav
2014-01-01
We present a performance analysis of three current architectures that have become commonplace in the High Performance Computing world. Blue Gene/Q is the third generation of systems from IBM that use modestly performing cores, but at large scale, in order to achieve high performance. The XE6 is the latest in a long line of Cray systems that use a 3-D topology, but the first to use the Gemini interconnection network. InfiniBand provides the flexibility of using compute nodes from many vendors that can be connected in many possible topologies. The performance characteristics of each vary vastly, and the way in which nodes are allocated in each type of system can significantly impact achieved performance. In this work we compare these three systems using a combination of micro-benchmarks and a set of production applications. In addition we also examine the differences in performance variability observed on each system and quantify the lost performance using a combination of empirical measurements and performance models. Our results show that significant performance can be lost in normal production operation of the Cray XE6 and InfiniBand clusters in comparison to Blue Gene/Q.
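The performance models referred to above are not specified in the abstract. A common starting point for this kind of analysis is a Hockney-style point-to-point communication model; the LaTeX below is an illustrative formulation under that assumption, not the authors' actual model.

    % Illustrative Hockney-style communication model (an assumption, not the
    % model used in the paper): time to send an m-byte message, and a simple
    % parallel-efficiency estimate built on it.
    \begin{align}
      T_{\mathrm{msg}}(m) &= \alpha + \frac{m}{\beta},
        && \alpha:\ \text{latency},\ \beta:\ \text{asymptotic bandwidth} \\
      E(P) &= \frac{T_{\mathrm{comp}}}{T_{\mathrm{comp}} + T_{\mathrm{comm}}(P)},
        && \text{efficiency on } P \text{ nodes}
    \end{align}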
An Application-Based Performance Characterization of the Columbia Supercluster
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Djomehri, Jahed M.; Hood, Robert; Jin, Hoaqiang; Kiris, Cetin; Saini, Subhash
2005-01-01
Columbia is a 10,240-processor supercluster consisting of 20 Altix nodes with 512 processors each, and currently ranked as the second-fastest computer in the world. In this paper, we present the performance characteristics of Columbia obtained on up to four computing nodes interconnected via the InfiniBand and/or NUMAlink4 communication fabrics. We evaluate floating-point performance, memory bandwidth, message passing communication speeds, and compilers using a subset of the HPC Challenge benchmarks, and some of the NAS Parallel Benchmarks including the multi-zone versions. We present detailed performance results for three scientific applications of interest to NASA, one from molecular dynamics, and two from computational fluid dynamics. Our results show that both the NUMAlink4 and the InfiniBand hold promise for application scaling to a large number of processors.
Scalable Algorithms for Parallel Discrete Event Simulation Systems in Multicore Environments
2013-05-01
consolidated at the sender side. At the receiver side, the messages are deconsolidated and delivered to the appropriate thread. This approach bears some... Jiang, S. Kini, W. Yu, D. Buntinas, P. Wyckoff, and D. Panda. Performance comparison of MPI implementations over InfiniBand, Myrinet and Quadrics.
High Resolution Aerospace Applications using the NASA Columbia Supercomputer
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.; Aftosmis, Michael J.; Berger, Marsha
2005-01-01
This paper focuses on the parallel performance of two high-performance aerodynamic simulation packages on the newly installed NASA Columbia supercomputer. These packages include both a high-fidelity, unstructured, Reynolds-averaged Navier-Stokes solver, and a fully-automated inviscid flow package for cut-cell Cartesian grids. The complementary combination of these two simulation codes enables high-fidelity characterization of aerospace vehicle design performance over the entire flight envelope through extensive parametric analysis and detailed simulation of critical regions of the flight envelope. Both packages are industrial-level codes designed for complex geometry and incorporate customized multigrid solution algorithms. The performance of these codes on Columbia is examined using both MPI and OpenMP and using both the NUMAlink and InfiniBand interconnect fabrics. Numerical results demonstrate good scalability on up to 2016 CPUs using the NUMAlink4 interconnect, with measured computational rates in the vicinity of 3 TFLOP/s, while InfiniBand showed some performance degradation at high CPU counts, particularly with multigrid. Nonetheless, the results are encouraging enough to indicate that larger test cases using combined MPI/OpenMP communication should scale well on even more processors.
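Since the abstract notes that both codes were run with combined MPI and OpenMP over the NUMAlink4 and InfiniBand fabrics, a minimal hybrid MPI+OpenMP skeleton in C may be useful as orientation; it is a generic sketch and not code from either solver.

    /* Minimal hybrid MPI+OpenMP skeleton: MPI ranks across nodes, OpenMP threads
     * within a node. Generic illustration, not code from either simulation package.
     * Build: mpicc -fopenmp hybrid.c -o hybrid */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;

        /* Request thread support so OpenMP threads may coexist with MPI. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        double local_sum = 0.0;
        #pragma omp parallel for reduction(+:local_sum)
        for (int i = 0; i < 1000000; ++i)
            local_sum += 1.0 / (1.0 + i);          /* stand-in for per-node work */

        double global_sum = 0.0;
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("%d ranks x %d threads, sum = %f\n",
                   nranks, omp_get_max_threads(), global_sum);

        MPI_Finalize();
        return 0;
    }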
NASA Astrophysics Data System (ADS)
Niwase, Hiroaki; Takada, Naoki; Araki, Hiromitsu; Maeda, Yuki; Fujiwara, Masato; Nakayama, Hirotaka; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi
2016-09-01
Parallel calculations of large-pixel-count computer-generated holograms (CGHs) are suitable for multiple-graphics processing unit (multi-GPU) cluster systems. However, it is not easy for a multi-GPU cluster system to accomplish fast CGH calculations when CGH transfers between PCs are required. In these cases, the CGH transfer between the PCs becomes a bottleneck. Usually, this problem occurs only in multi-GPU cluster systems with a single spatial light modulator. To overcome this problem, we propose a simple method using the InfiniBand network. The computational speed of the proposed method using 13 GPUs (NVIDIA GeForce GTX TITAN X) was more than 3000 times faster than that of a CPU (Intel Core i7 4770) when the number of three-dimensional (3-D) object points exceeded 20,480. In practice, we achieved ˜40 tera floating point operations per second (TFLOPS) when the number of 3-D object points exceeded 40,960. Our proposed method was able to reconstruct a real-time movie of a 3-D object comprising 95,949 points.
NASA Technical Reports Server (NTRS)
Iannicca, Dennis; Hylton, Alan; Ishac, Joseph
2012-01-01
Delay-Tolerant Networking (DTN) is an active area of research in the space communications community. DTN uses a standard layered approach with the Bundle Protocol operating on top of transport layer protocols known as convergence layers that actually transmit the data between nodes. Several different common transport layer protocols have been implemented as convergence layers in DTN implementations including User Datagram Protocol (UDP), Transmission Control Protocol (TCP), and Licklider Transmission Protocol (LTP). The purpose of this paper is to evaluate several stand-alone implementations of negative-acknowledgment based transport layer protocols to determine how they perform in a variety of different link conditions. The transport protocols chosen for this evaluation include Consultative Committee for Space Data Systems (CCSDS) File Delivery Protocol (CFDP), Licklider Transmission Protocol (LTP), NACK-Oriented Reliable Multicast (NORM), and Saratoga. The test parameters that the protocols were subjected to are characteristic of common communications links ranging from terrestrial to cis-lunar and apply different levels of delay, line rate, and error.
A Protocol Layer Trust-Based Intrusion Detection Scheme for Wireless Sensor Networks
Wang, Jian; Jiang, Shuai; Fapojuwo, Abraham O.
2017-01-01
This article proposes a protocol layer trust-based intrusion detection scheme for wireless sensor networks. Unlike existing work, the trust value of a sensor node is evaluated according to the deviations of key parameters at each protocol layer considering the attacks initiated at different protocol layers will inevitably have impacts on the parameters of the corresponding protocol layers. For simplicity, the paper mainly considers three aspects of trustworthiness, namely physical layer trust, media access control layer trust and network layer trust. The per-layer trust metrics are then combined to determine the overall trust metric of a sensor node. The performance of the proposed intrusion detection mechanism is then analyzed using the t-distribution to derive analytical results of false positive and false negative probabilities. Numerical analytical results, validated by simulation results, are presented in different attack scenarios. It is shown that the proposed protocol layer trust-based intrusion detection scheme outperforms a state-of-the-art scheme in terms of detection probability and false probability, demonstrating its usefulness for detecting cross-layer attacks. PMID:28555023
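The abstract describes combining physical-, MAC- and network-layer trust values into an overall node trust metric, but the combination rule itself is not given there. The following C fragment is a hypothetical sketch using a simple weighted average and an assumed threshold, purely to illustrate the structure of the per-layer combination step.

    /* Hypothetical sketch of combining per-layer trust values into an overall
     * node trust metric. The equal weights and the 0.70 threshold are
     * illustrative assumptions; the paper's actual combination rule is not
     * reproduced in the abstract. */
    #include <stdio.h>

    struct layer_trust {
        double phy;   /* physical layer trust,  0.0 .. 1.0 */
        double mac;   /* MAC layer trust,       0.0 .. 1.0 */
        double net;   /* network layer trust,   0.0 .. 1.0 */
    };

    static double overall_trust(const struct layer_trust *t)
    {
        /* Assumed equal weights; a real scheme would derive weights from the
         * deviation statistics of each layer's key parameters. */
        const double w_phy = 1.0 / 3, w_mac = 1.0 / 3, w_net = 1.0 / 3;
        return w_phy * t->phy + w_mac * t->mac + w_net * t->net;
    }

    int main(void)
    {
        struct layer_trust node = { 0.95, 0.40, 0.90 };  /* MAC-layer attack suspected */
        double trust = overall_trust(&node);
        printf("overall trust = %.2f -> %s\n", trust,
               trust < 0.70 ? "flag as suspicious" : "normal");  /* threshold assumed */
        return 0;
    }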
Single Sided Messaging v. 0.6.6
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curry, Matthew Leon; Farmer, Matthew Shane; Hassani, Amin
Single-Sided Messaging (SSM) is a portable, multitransport networking library that enables applications to leverage potential one-sided capabilities of underlying network transports. It also provides desirable semantics that services for high-performance, massively parallel computers can leverage, such as an explicit cancel operation for pending transmissions, as well as enhanced matching semantics favoring large numbers of buffers attached to a single match entry. This release supports TCP/IP, shared memory, and Infiniband.
Accelerating k-NN Algorithm with Hybrid MPI and OpenSHMEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Jian; Hamidouche, Khaled; Zheng, Jie
2015-08-05
Machine Learning algorithms are benefiting from the continuous improvement of programming models, including MPI, MapReduce and PGAS. The k-Nearest Neighbors (k-NN) algorithm is a widely used machine learning algorithm, applied to supervised learning tasks such as classification. Several parallel implementations of k-NN have been proposed in the literature and practice. However, on high-performance computing systems with high-speed interconnects, it is important to further accelerate existing designs of the k-NN algorithm by taking advantage of scalable programming models. To improve the performance of k-NN in a large-scale environment with an InfiniBand network, this paper proposes several alternative hybrid MPI+OpenSHMEM designs and performs a systematic evaluation and analysis on typical workloads. The hybrid designs leverage one-sided memory access to better overlap communication with computation than the existing pure MPI design, and propose better schemes for efficient buffer management. The implementation based on the k-NN program from MaTEx with MVAPICH2-X (Unified MPI+PGAS Communication Runtime over InfiniBand) shows up to 9.0% time reduction for training the KDD Cup 2010 workload over 512 cores, and 27.6% time reduction for a small workload with balanced communication and computation. Experiments with varied numbers of cores show that our design can maintain good scalability.
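The hybrid MPI+OpenSHMEM designs themselves are not spelled out in the abstract. As a rough structural sketch, a distributed k-NN query can partition the training points across ranks, compute local candidates, and then reduce to a global answer; the pure-MPI C sketch below (k = 1 for brevity) is an assumption about that general structure, not the MaTEx/MVAPICH2-X implementation.

    /* Rough pure-MPI sketch of a distributed nearest-neighbour query: training
     * points are partitioned across ranks, each rank finds its local nearest
     * neighbour, and a MINLOC reduction picks the global one (k = 1 here).
     * Structural illustration only, not the MaTEx/MVAPICH2-X design. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    int main(int argc, char **argv)
    {
        int rank, nranks;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        /* Each rank holds a synthetic local shard of 1-D training points. */
        const int local_n = 1000;
        double query = 0.42, best = INFINITY;
        srand(rank + 1);
        for (int i = 0; i < local_n; ++i) {
            double x = (double)rand() / RAND_MAX;
            double d = fabs(x - query);
            if (d < best) best = d;              /* local nearest-neighbour distance */
        }

        /* Global selection: smallest distance (and owning rank) wins. */
        struct { double dist; int rank; } in = { best, rank }, out;
        MPI_Reduce(&in, &out, 1, MPI_DOUBLE_INT, MPI_MINLOC, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("nearest neighbour distance %.4f found on rank %d\n",
                   out.dist, out.rank);

        MPI_Finalize();
        return 0;
    }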
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadid, John Nicolas; Lin, Paul Tinphone
2009-01-01
This preliminary study considers the scaling and performance of a finite element (FE) semiconductor device simulator on a capacity cluster with 272 compute nodes based on a homogeneous multicore node architecture utilizing 16 cores. The inter-node communication backbone for this Tri-Lab Linux Capacity Cluster (TLCC) machine is comprised of an InfiniBand interconnect. The nonuniform memory access (NUMA) nodes consist of 2.2 GHz quad socket/quad core AMD Opteron processors. The performance results for this study are obtained with a FE semiconductor device simulation code (Charon) that is based on a fully-coupled Newton-Krylov solver with domain decomposition and multilevel preconditioners. Scaling and multicore performance results are presented for large-scale problems of 100+ million unknowns on up to 4096 cores. A parallel scaling comparison is also presented with the Cray XT3/4 Red Storm capability platform. The results indicate that an MPI-only programming model for utilizing the multicore nodes is reasonably efficient on all 16 cores per compute node. However, the results also indicate that the multilevel preconditioner, which is critical for large-scale capability-type simulations, scales better on the Red Storm machine than on the TLCC machine.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-02
... Protection regulations, the science of ozone layer depletion, and related topics. SUPPLEMENTARY INFORMATION... compliance with the Montreal Protocol on Substances that Deplete the Ozone Layer (Protocol) and the CAA.... obligations under Article 2H of the Montreal Protocol on Substances that Deplete the Ozone Layer (Protocol...
Modelling the protocol stack in NCS with deterministic and stochastic petri net
NASA Astrophysics Data System (ADS)
Hui, Chen; Chunjie, Zhou; Weifeng, Zhu
2011-06-01
The protocol stack is the basis of networked control systems (NCS). Full or partial reconfiguration of the protocol stack offers both optimised communication service and improved system performance. Nowadays, field testing is unrealistic for determining the performance of a reconfigurable protocol stack, and the Petri net formal description technique offers the best combination of intuitive representation, tool support and analytical capabilities. Traditionally, separation between the different layers of the OSI model has been common practice. Nevertheless, such a layered modelling and analysis framework for the protocol stack leads to a lack of global optimisation for protocol reconfiguration. In this article, we propose a general modelling and analysis framework for NCS based on the cross-layer concept, which is to establish an efficient system scheduling model by abstracting the time constraints, the task interrelations, the processor and the bus sub-models from the upper and lower layers (application, data link and physical layers). Cross-layer design can help to overcome the inadequacy of global optimisation based on information sharing between protocol layers. To illustrate the framework, we take the controller area network (CAN) as a case study. The simulation results of the deterministic and stochastic Petri net (DSPN) model can help us adjust the message scheduling scheme and obtain better system performance.
The Xpress Transfer Protocol (XTP): A tutorial (expanded version)
NASA Technical Reports Server (NTRS)
Sanders, Robert M.; Weaver, Alfred C.
1990-01-01
The Xpress Transfer Protocol (XTP) is a reliable, real-time, lightweight transfer layer protocol. Current transport layer protocols such as DoD's Transmission Control Protocol (TCP) and ISO's Transport Protocol (TP) were not designed for the next generation of high speed, interconnected reliable networks such as the fiber distributed data interface (FDDI) and gigabit/second wide area networks. Unlike all previous transport layer protocols, XTP is being designed to be implemented in hardware as a VLSI chip set. By streamlining the protocol, combining the transport and network layers and utilizing the increased speed and parallelization possible with a VLSI implementation, XTP will be able to provide the end-to-end data transmission rates demanded in high speed networks without compromising reliability and functionality. This paper describes the operation of the XTP protocol and in particular its error, flow and rate control; inter-networking addressing mechanisms; and multicast support features, as defined in the XTP Protocol Definition Revision 3.4.
NASA Astrophysics Data System (ADS)
Phister, P. W., Jr.
1983-12-01
Development of the Air Force Institute of Technology's Digital Engineering Laboratory Network (DELNET) was continued with the development of an initial draft of a protocol standard for all seven layers as specified by the International Standards Organization's (ISO) Reference Model for Open Systems Interconnections. This effort centered on the restructuring of the Network Layer to perform Datagram routing and to conform to the developed protocol standards and actual software module development of the upper four protocol layers residing within the DELNET Monitor (Zilog MCZ 1/25 Computer System). Within the guidelines of the ISO Reference Model the Transport Layer was developed utilizing the Internet Header Format (IHF) combined with the Transport Control Protocol (TCP) to create a 128-byte Datagram. Also a limited Application Layer was created to pass the Gettysburg Address through the DELNET. This study formulated a first draft for the DELNET Protocol Standard and designed, implemented, and tested the Network, Transport, and Application Layers to conform to these protocol standards.
OSI Upper Layers Support for Applications.
ERIC Educational Resources Information Center
Davison, Wayne
1990-01-01
Discusses how various Open Systems Interconnection (OSI) application layer protocols can be used together, along with the Presentation and Session protocols, to support the interconnection requirements of applications. Application layer protocol standards that are currently available or under development are reviewed, and the File, Transfer,…
Compositional Verification of a Communication Protocol for a Remotely Operated Vehicle
NASA Technical Reports Server (NTRS)
Goodloe, Alwyn E.; Munoz, Cesar A.
2009-01-01
This paper presents the specification and verification in the Prototype Verification System (PVS) of a protocol intended to facilitate communication in an experimental remotely operated vehicle used by NASA researchers. The protocol is defined as a stack-layered composition of simpler protocols. It can be seen as the vertical composition of protocol layers, where each layer performs input and output message processing, and the horizontal composition of different processes concurrently inhabiting the same layer, where each process satisfies a distinct requirement. It is formally proven that the protocol components satisfy certain delivery guarantees. Compositional techniques are used to prove these guarantees also hold in the composed system. Although the protocol itself is not novel, the methodology employed in its verification extends existing techniques by automating the tedious and usually cumbersome part of the proof, thereby making the iterative design process of protocols feasible.
Aho-Corasick String Matching on Shared and Distributed Memory Parallel Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tumeo, Antonino; Villa, Oreste; Chavarría-Miranda, Daniel
String matching is at the core of many critical applications, including network intrusion detection systems, search engines, virus scanners, spam filters, DNA and protein sequencing, and data mining. For all of these applications string matching requires a combination of (sometimes all) the following characteristics: high and/or predictable performance, support for large data sets and flexibility of integration and customization. Many software based implementations targeting conventional cache-based microprocessors fail to achieve high and predictable performance requirements, while Field-Programmable Gate Array (FPGA) implementations and dedicated hardware solutions fail to support large data sets (dictionary sizes) and are difficult to integrate and customize. The advent of multicore, multithreaded, and GPU-based systems is opening the possibility for software based solutions to reach very high performance at a sustained rate. This paper compares several software-based implementations of the Aho-Corasick string searching algorithm for high performance systems. We discuss the implementation of the algorithm on several types of shared-memory high-performance architectures (Niagara 2, large x86 SMPs and Cray XMT), distributed memory with homogeneous processing elements (InfiniBand cluster of x86 multicores) and heterogeneous processing elements (InfiniBand cluster of x86 multicores with NVIDIA Tesla C10 GPUs). We describe in detail how each solution achieves the objectives of supporting large dictionaries, sustaining high performance, and enabling customization and flexibility using various data sets.
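Since the abstract centers on the Aho-Corasick algorithm, a compact single-threaded C sketch of the classical automaton (trie plus failure links over a small static node pool) may help orient the reader; it illustrates the textbook algorithm only, not the paper's multithreaded or GPU implementations, and the node-pool size is an arbitrary choice.

    /* Compact sketch of the classical Aho-Corasick automaton: build a trie,
     * add failure links by BFS (completing the goto table into a DFA), then
     * scan the text once. Illustrative only; node pool size is arbitrary and
     * unchecked, and pattern count must stay below the out[] bitmask width. */
    #include <stdio.h>

    #define ALPHA 256
    #define MAXNODES 1024

    static int go[MAXNODES][ALPHA];
    static int fail[MAXNODES];
    static int out[MAXNODES];     /* bitmask of patterns ending at this state */
    static int nnodes = 1;        /* node 0 is the root */

    static void add_pattern(const char *p, int idx) {
        int s = 0;
        for (; *p; ++p) {
            unsigned char c = (unsigned char)*p;
            if (!go[s][c]) go[s][c] = nnodes++;
            s = go[s][c];
        }
        out[s] |= 1 << idx;
    }

    static void build_failure_links(void) {
        int queue[MAXNODES], head = 0, tail = 0;
        for (int c = 0; c < ALPHA; ++c)
            if (go[0][c]) { fail[go[0][c]] = 0; queue[tail++] = go[0][c]; }
        while (head < tail) {
            int s = queue[head++];
            for (int c = 0; c < ALPHA; ++c) {
                int t = go[s][c];
                if (!t) { go[s][c] = go[fail[s]][c]; continue; }
                fail[t] = go[fail[s]][c];
                out[t] |= out[fail[t]];       /* inherit matches via failure link */
                queue[tail++] = t;
            }
        }
    }

    static void search(const char *text, const char **pats, int npat) {
        int s = 0;
        for (long i = 0; text[i]; ++i) {
            s = go[s][(unsigned char)text[i]];
            for (int k = 0; k < npat; ++k)
                if (out[s] & (1 << k))
                    printf("pattern \"%s\" ends at offset %ld\n", pats[k], i);
        }
    }

    int main(void) {
        const char *pats[] = { "he", "she", "his", "hers" };
        for (int k = 0; k < 4; ++k) add_pattern(pats[k], k);
        build_failure_links();
        search("ushers", pats, 4);
        return 0;
    }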
Modeling Techniques for High Dependability Protocols and Architecture
NASA Technical Reports Server (NTRS)
LaValley, Brian; Ellis, Peter; Walter, Chris J.
2012-01-01
This report documents an investigation into modeling high dependability protocols and some specific challenges that were identified as a result of the experiments. The need for an approach was established and foundational concepts proposed for modeling different layers of a complex protocol and capturing the compositional properties that provide high dependability services for a system architecture. The approach centers around the definition of an architecture layer, its interfaces for composability with other layers and its bindings to a platform specific architecture model that implements the protocols required for the layer.
NASA Astrophysics Data System (ADS)
Raju, Kota Solomon; Merugu, Naresh Babu; Neetu, Babu, E. Ram
2016-03-01
ZigBee is a well-accepted industrial standard for wireless sensor networks based on the IEEE 802.15.4 standard. Wireless sensor networks are a major focus of communication research these days; they investigate the properties of networks of small battery-powered sensors with wireless communication. The communication between any two nodes of a wireless sensor network is carried out through a protocol stack. This protocol stack has been designed by different vendors in various ways. Every vendor has its own protocol stack and algorithms, especially at the MAC layer. However, many applications require modifications to the algorithms at various layers as per their requirements, especially energy-efficient protocols at the MAC layer; such protocols are simulated in wireless sensor network simulators but are not tested in real-time systems, because vendors do not allow the programmability of each layer in their protocol stacks. This problem can be described as vendor interoperability. The solution is to develop a programmable protocol stack in which we can design our own applications as required. As a part of this task we first implemented the physical layer and the transmission of data using the physical layer. This paper describes the transmission of the total number of bytes of a frame according to the IEEE 802.15.4 standard using the physical layer.
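As a hedged illustration of the physical-layer framing the paper works with, the C sketch below assembles an IEEE 802.15.4 PHY frame (PPDU) assuming the common 2.4 GHz O-QPSK PHY framing: a 4-octet zero preamble, the 0xA7 start-of-frame delimiter, and a one-octet PHY header carrying the 7-bit PSDU length. It is a generic illustration of the standard's framing, not the stack implementation described in the paper.

    /* Sketch of assembling an IEEE 802.15.4 PHY-layer frame (PPDU):
     * 4-octet preamble + 1-octet SFD (0xA7) + 1-octet PHR carrying the 7-bit
     * PSDU length, followed by the PSDU itself. Illustration of the standard's
     * framing, not the programmable stack discussed in the paper. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define PHY_MAX_PSDU 127   /* aMaxPHYPacketSize */

    /* Builds the PPDU into 'out' (must hold 6 + psdu_len bytes); returns its
     * length, or -1 if the payload does not fit in the 7-bit length field. */
    static int build_ppdu(const uint8_t *psdu, uint8_t psdu_len, uint8_t *out)
    {
        if (psdu_len > PHY_MAX_PSDU)
            return -1;
        memset(out, 0x00, 4);            /* preamble: four zero octets   */
        out[4] = 0xA7;                   /* start-of-frame delimiter     */
        out[5] = psdu_len & 0x7F;        /* PHR: 7-bit frame length      */
        memcpy(out + 6, psdu, psdu_len); /* PSDU (the MAC frame)         */
        return 6 + psdu_len;
    }

    int main(void)
    {
        uint8_t psdu[] = { 0x01, 0x02, 0x03 };       /* placeholder MAC payload */
        uint8_t ppdu[6 + PHY_MAX_PSDU];
        int len = build_ppdu(psdu, sizeof(psdu), ppdu);

        printf("PPDU (%d bytes):", len);
        for (int i = 0; i < len; ++i) printf(" %02X", ppdu[i]);
        printf("\n");
        return 0;
    }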
A software defined RTU multi-protocol automatic adaptation data transmission method
NASA Astrophysics Data System (ADS)
Jin, Huiying; Xu, Xingwu; Wang, Zhanfeng; Ma, Weijun; Li, Sheng; Su, Yong; Pan, Yunpeng
2018-02-01
The remote terminal unit (RTU) is the core device of monitoring systems in hydrology and water resources. Different devices often use different communication protocols at the application layer, which makes information analysis and communication networking difficult. Therefore, we introduced the idea of software-defined hardware, abstracted the common features of the mainstream communication protocols of the RTU application layer, and proposed a unified common protocol model. The various application-layer communication protocol algorithms are then modularized according to the model. The executable codes of these algorithms are labeled by virtual functions and stored in the flash chips of the embedded CPU to form the protocol stack. According to the configuration commands used to initialize the RTU communication system, it is possible to achieve dynamic assembly and loading of the various application-layer communication protocols of the RTU and complete the efficient transport of sensor data from the RTU to the central station, while the data acquisition protocol of the sensors and the various external communication terminals remain unchanged.
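The dynamic assembly and loading of protocol modules is not detailed in the abstract. A common way to realize such a "software defined" stack in C is a table of function pointers selected by a configuration command, as in this hypothetical sketch; the module names, signatures, and the two toy encoders are illustrative assumptions, not the paper's design.

    /* Hypothetical sketch of a pluggable application-layer protocol table for an
     * RTU: each protocol module exposes an encode entry point, and a
     * configuration command selects which module is used. Names, signatures and
     * the two toy protocols are illustrative assumptions. */
    #include <stdio.h>
    #include <string.h>
    #include <stddef.h>

    typedef struct {
        const char *name;
        /* Encode one sensor sample into 'buf'; return number of bytes written. */
        size_t (*encode)(double sample, unsigned char *buf, size_t buflen);
    } protocol_module;

    static size_t encode_ascii(double sample, unsigned char *buf, size_t buflen)
    {
        return (size_t)snprintf((char *)buf, buflen, "V=%.3f\r\n", sample);
    }

    static size_t encode_binary(double sample, unsigned char *buf, size_t buflen)
    {
        if (buflen < 1 + sizeof(double)) return 0;
        buf[0] = 0x10;                               /* assumed message type byte */
        memcpy(buf + 1, &sample, sizeof(double));
        return 1 + sizeof(double);
    }

    static const protocol_module registry[] = {
        { "ascii",  encode_ascii  },
        { "binary", encode_binary },
    };

    /* "Configuration command": pick a module by name, much as flash-resident
     * modules might be selected at start-up. */
    static const protocol_module *select_protocol(const char *name)
    {
        for (size_t i = 0; i < sizeof(registry) / sizeof(registry[0]); ++i)
            if (strcmp(registry[i].name, name) == 0)
                return &registry[i];
        return NULL;
    }

    int main(void)
    {
        unsigned char buf[64];
        const protocol_module *p = select_protocol("ascii");
        if (p) {
            size_t n = p->encode(12.345, buf, sizeof(buf));
            printf("encoded %zu bytes with protocol '%s'\n", n, p->name);
        }
        return 0;
    }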
Haystack Observatory VLBI Correlator
NASA Technical Reports Server (NTRS)
Titus, Mike; Cappallo, Roger; Corey, Brian; Dudevoir, Kevin; Niell, Arthur; Whitney, Alan
2013-01-01
This report summarizes the activities of the Haystack Correlator during 2012. Highlights include finding a solution to the DiFX InfiniBand timeout problem and other DiFX software development, conducting a DBE comparison test following the First International VLBI Technology Workshop, conducting a Mark IV and DiFX correlator comparison, more broadband delay experiments, more u-VLBI Galactic Center observations, and conversion of RDV session processing to the Mark IV/HOPS path. Non-real-time e-VLBI transfers and engineering support of other correlators continued.
2014-09-18
radios in a cognitive radio network using a radio frequency fingerprinting based method. In IEEE International Conference on Communications (ICC)... Improved Wireless Security through Physical Layer Protocol Manipulation and Radio Frequency Fingerprinting. Dissertation, Benjamin W. Ramsey, Captain. Presented to the Faculty, Graduate School of Engineering and...
NASA Astrophysics Data System (ADS)
Wang, Xiao-Jun; An, Long-Xi; Yu, Xu-Tao; Zhang, Zai-Chen
2017-10-01
A multilayer quantum secret sharing protocol based on the GHZ state is proposed. Alice holds the secret, carried by a quantum state, and wants to distribute this secret to multiple agent nodes in the network. In this protocol, the secret is transmitted and shared layer by layer from the root Alice to layered agents. The number of agents in each layer is a geometric sequence with a specific common ratio. By sharing GHZ maximally entangled states and making generalized Bell basis measurements, a one-qubit state can be distributed to multiparty agents and the secret is shared. Only when all agents at the last layer cooperate together can the secret be recovered. Compared with other protocols based on entangled states, this protocol adopts a layered construction so that the secret can be distributed to more agents with fewer GHZ-state particles. This quantum secret sharing protocol can be used in wireless networks to ensure the security of information delivery.
Cross-layer protocol design for QoS optimization in real-time wireless sensor networks
NASA Astrophysics Data System (ADS)
Hortos, William S.
2010-04-01
The metrics of quality of service (QoS) for each sensor type in a wireless sensor network can be associated with metrics for multimedia that describe the quality of fused information, e.g., throughput, delay, jitter, packet error rate, information correlation, etc. These QoS metrics are typically set at the highest, or application, layer of the protocol stack to ensure that performance requirements for each type of sensor data are satisfied. Application-layer metrics, in turn, depend on the support of the lower protocol layers: session, transport, network, data link (MAC), and physical. The dependencies of the QoS metrics on the performance of the higher layers of the Open System Interconnection (OSI) reference model of the WSN protocol, together with that of the lower three layers, are the basis for a comprehensive approach to QoS optimization for multiple sensor types in a general WSN model. The cross-layer design accounts for the distributed power consumption along energy-constrained routes and their constituent nodes. Following the author's previous work, the cross-layer interactions in the WSN protocol are represented by a set of concatenated protocol parameters and enabling resource levels. The "best" cross-layer designs to achieve optimal QoS are established by applying the general theory of martingale representations to the parameterized multivariate point processes (MVPPs) for discrete random events occurring in the WSN. Adaptive control of network behavior through the cross-layer design is realized through the parametric factorization of the stochastic conditional rates of the MVPPs. The cross-layer protocol parameters for optimal QoS are determined in terms of solutions to stochastic dynamic programming conditions derived from models of transient flows for heterogeneous sensor data and aggregate information over a finite time horizon. Markov state processes, embedded within the complex combinatorial history of WSN events, are more computationally tractable and lead to simplifications for any simulated or analytical performance evaluations of the cross-layer designs.
L2-LBMT: A Layered Load Balance Routing Protocol for underwater multimedia data transmission
NASA Astrophysics Data System (ADS)
Lv, Ze; Tang, Ruichun; Tao, Ye; Sun, Xin; Xu, Xiaowei
2017-12-01
Providing highly efficient underwater transmission of mass multimedia data is challenging due to the particularities of the underwater environment. Although there are many schemes proposed to optimize the underwater acoustic network communication protocols, from physical layer, data link layer, network layer to transport layer, the existing routing protocols for underwater wireless sensor network (UWSN) still cannot well deal with the problems in transmitting multimedia data because of the difficulties involved in high energy consumption, low transmission reliability or high transmission delay. It prevents us from applying underwater multimedia data to real-time monitoring of marine environment in practical application, especially in emergency search, rescue operation and military field. Therefore, the inefficient transmission of marine multimedia data has become a serious problem that needs to be solved urgently. In this paper, A Layered Load Balance Routing Protocol (L2-LBMT) is proposed for underwater multimedia data transmission. In L2-LBMT, we use layered and load-balance Ad Hoc Network to transmit data, and adopt segmented data reliable transfer (SDRT) protocol to improve the data transport reliability. And a 3-node variant of tornado (3-VT) code is also combined with the Ad Hoc Network to transmit little emergency data more quickly. The simulation results show that the proposed protocol can balance energy consumption of each node, effectively prolong the network lifetime and reduce transmission delay of marine multimedia data.
Space Wire Upper Layer Protocols
NASA Technical Reports Server (NTRS)
Rakow, Glenn; Schnurr, Richard; Gilley, Daniel; Parkes, Steve
2004-01-01
This viewgraph presentation addresses efforts to provide a streamlined approach for developing SpaceWire Upper layer protocols which allows industry to drive standardized communication solutions for real projects. The presentation proposes a simple packet header that will allow flexibility in implementing a diverse range of protocols.
A review on transport layer protocol performance for delivering video on an adhoc network
NASA Astrophysics Data System (ADS)
Suherman; Suwendri; Al-Akaidi, Marwan
2017-09-01
The transport layer protocol is responsible for end-to-end data transmission. The transmission control protocol (TCP) provides a reliable connection, while the user datagram protocol (UDP) offers fast but unguaranteed data transfer. Meanwhile, 802.11 (wireless fidelity/WiFi) networks have been widely used as internet hotspots. This paper evaluates TCP, TCP variants and UDP performance for video transmission on an ad hoc network. A transport-protocol/medium-access cross-layer is proposed that prioritizes TCP acknowledgements to reduce delay. The NS-2 evaluations show that the average delays increase linearly for all the evaluated protocols and the average packet losses grow logarithmically. UDP produces the lowest transmission delay, 5.4% and 5.8% lower than TCP and the TCP variant, but experiences the highest packet loss. Both TCP and TCP Vegas keep packet loss as low as possible. The proposed cross-layer successfully decreases TCP and TCP Vegas delay by about 0.12% and 0.15%, although losses remain similar.
FELIX: The new detector readout system for the ATLAS experiment
NASA Astrophysics Data System (ADS)
Ryu, Soo; ATLAS TDAQ Collaboration
2017-10-01
After the Phase-I upgrades (2019) of the ATLAS experiment, the Front-End Link eXchange (FELIX) system will be the interface between the data acquisition system and the detector front-end and trigger electronics. FELIX will function as a router between custom serial links and a commodity switch network using standard technologies (Ethernet or Infiniband) to communicate with commercial data collecting and processing components. The system architecture of FELIX will be described and the status of the firmware implementation and hardware development currently in progress will be presented.
NASA Astrophysics Data System (ADS)
Georg, Peter; Richtmann, Daniel; Wettig, Tilo
2018-03-01
We describe our experience porting the Regensburg implementation of the DD-αAMG solver from QPACE 2 to QPACE 3. We first review how the code was ported from the first generation Intel Xeon Phi processor (Knights Corner) to its successor (Knights Landing). We then describe the modifications in the communication library necessitated by the switch from InfiniBand to Omni-Path. Finally, we present the performance of the code on a single processor as well as the scaling on many nodes, where in both cases the speedup factor is close to the theoretical expectations.
NASA Center for Climate Simulation (NCCS) Advanced Technology AT5 Virtualized Infiniband Report
NASA Technical Reports Server (NTRS)
Thompson, John H.; Bledsoe, Benjamin C.; Wagner, Mark; Shakshober, John; Fromkin, Russ
2013-01-01
The NCCS is part of the Computational and Information Sciences and Technology Office (CISTO) of Goddard Space Flight Center's (GSFC) Sciences and Exploration Directorate. The NCCS's mission is to enable scientists to increase their understanding of the Earth, the solar system, and the universe by supplying state-of-the-art high performance computing (HPC) solutions. To accomplish this mission, the NCCS (https://www.nccs.nasa.gov) provides high performance compute engines, mass storage, and network solutions to meet the specialized needs of the Earth and space science user communities.
Analytical approach to cross-layer protocol optimization in wireless sensor networks
NASA Astrophysics Data System (ADS)
Hortos, William S.
2008-04-01
In the distributed operations of route discovery and maintenance, strong interaction occurs across mobile ad hoc network (MANET) protocol layers. Quality of service (QoS) requirements of multimedia service classes must be satisfied by the cross-layer protocol, along with minimization of the distributed power consumption at nodes and along routes to battery-limited energy constraints. In previous work by the author, cross-layer interactions in the MANET protocol are modeled in terms of a set of concatenated design parameters and associated resource levels by multivariate point processes (MVPPs). Determination of the "best" cross-layer design is carried out using the optimal control of martingale representations of the MVPPs. In contrast to the competitive interaction among nodes in a MANET for multimedia services using limited resources, the interaction among the nodes of a wireless sensor network (WSN) is distributed and collaborative, based on the processing of data from a variety of sensors at nodes to satisfy common mission objectives. Sensor data originates at the nodes at the periphery of the WSN, is successively transported to other nodes for aggregation based on information-theoretic measures of correlation and ultimately sent as information to one or more destination (decision) nodes. The "multimedia services" in the MANET model are replaced by multiple types of sensors, e.g., audio, seismic, imaging, thermal, etc., at the nodes; the QoS metrics associated with MANETs become those associated with the quality of fused information flow, i.e., throughput, delay, packet error rate, data correlation, etc. Significantly, the essential analytical approach to MANET cross-layer optimization, now based on the MVPPs for discrete random events occurring in the WSN, can be applied to develop the stochastic characteristics and optimality conditions for cross-layer designs of sensor network protocols. Functional dependencies of WSN performance metrics are described in terms of the concatenated protocol parameters. New source-to-destination routes are sought that optimize cross-layer interdependencies to achieve the "best available" performance in the WSN. The protocol design, modified from a known reactive protocol, adapts the achievable performance to the transient network conditions and resource levels. Control of network behavior is realized through the conditional rates of the MVPPs. Optimal cross-layer protocol parameters are determined by stochastic dynamic programming conditions derived from models of transient packetized sensor data flows. Moreover, the defining conditions for WSN configurations, grouping sensor nodes into clusters and establishing data aggregation at processing nodes within those clusters, lead to computationally tractable solutions to the stochastic differential equations that describe network dynamics. Closed-form solution characteristics provide an alternative to the "directed diffusion" methods for resource-efficient WSN protocols published previously by other researchers. Performance verification of the resulting cross-layer designs is found by embedding the optimality conditions for the protocols in actual WSN scenarios replicated in a wireless network simulation environment. Performance tradeoffs among protocol parameters remain for a sequel to the paper.
Cross-Layer Protocol Combining Tree Routing and TDMA Slotting in Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Bai, Ronggang; Ji, Yusheng; Lin, Zhiting; Wang, Qinghua; Zhou, Xiaofang; Qu, Yugui; Zhao, Baohua
Unlike other networks, wireless sensor networks have data traffic whose load and direction are rather predictable, and the relationships between nodes are cooperative rather than competitive. These features allow the protocol stack to be designed in a cross-layer interactive way instead of a strictly hierarchical structure. The proposed cross-layer protocol CLWSN optimizes channel allocation in the MAC layer using information from the routing tables, reduces the conflicting set, and improves the throughput. Simulations revealed that it outperforms SMAC and MINA in terms of delay and energy consumption.
Cislan-2 extension final document by University of Twente (Netherlands)
NASA Astrophysics Data System (ADS)
Niemegeers, Ignas; Baumann, Frank; Beuwer, Wim; Jordense, Marcel; Pras, Aiko; Schutte, Leon; Tracey, Ian
1992-01-01
Results of work performed under the so-called Cislan extension contract are presented. The adaptation of the Cislan 2 prototype design to an environment of interconnected Local Area Networks (LANs) instead of a single 802.5 token ring LAN is considered. In order to extend the network architecture, the Interconnection Function (IF) protocol layer was subdivided into two protocol layers: a new IF layer and, below it, the Medium Enhancement (ME) protocol layer. Some small enhancements to the distributed bandwidth allocation protocol were developed, which are in fact also applicable to the 'normal' Cislan 2 system. The new services and protocols are described together with some scenarios and requirements for the new internetting Cislan 2 system. How to overcome the degradation of speech quality due to packet loss on the LAN subsystem was studied, and experiments were planned in order to measure this speech quality degradation. Simulations were performed of two Cislan subsystems: the bandwidth allocation protocol and the clock synchronization mechanism. Results of both simulations (the clock synchronization mechanism and the distributed bandwidth allocation protocol), performed on SUN workstations using QNAP as a simulation tool, are given.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-24
... Stratospheric Ozone Protection regulations, the science of ozone layer depletion, and related topics... Layer (Protocol) and the CAA. Entities applying for this exemption are asked to submit to EPA... Substances that Deplete the Ozone Layer (Protocol). The information collection request is required to obtain...
An operational open-end file transfer protocol for mobile satellite communications
NASA Technical Reports Server (NTRS)
Wang, Charles; Cheng, Unjeng; Yan, Tsun-Yee
1988-01-01
This paper describes an operational open-end file transfer protocol which includes the connecting procedure, data transfer, and relinquishment procedure for mobile satellite communications. The protocol makes use of the frame level and packet level formats of the X.25 standard for the data link layer and network layer, respectively. The structure of a testbed for experimental simulation of this protocol over a mobile fading channel is also introduced.
Issues in designing transport layer multicast facilities
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Weaver, Alfred C.
1990-01-01
Multicasting denotes a facility in a communications system for providing efficient delivery from a message's source to some well-defined set of locations using a single logical address. While modern network hardware supports multidestination delivery, first generation Transport Layer protocols (e.g., the DoD Transmission Control Protocol (TCP) (15) and ISO TP-4 (41)) did not anticipate the changes over the past decade in underlying network hardware, transmission speeds, and communication patterns that have enabled and driven the interest in reliable multicast. Much recent research has focused on integrating the underlying hardware multicast capability with the reliable services of Transport Layer protocols. Here, we explore the communication issues surrounding the design of such a reliable multicast mechanism. Approaches and solutions from the literature are discussed, and four experimental Transport Layer protocols that incorporate reliable multicast are examined.
NASA Technical Reports Server (NTRS)
Burleigh, Scott C.
2011-01-01
Zero-Copy Objects System software enables application data to be encapsulated in layers of communication protocol without being copied. Indirect referencing enables application source data, either in memory or in a file, to be encapsulated in place within an unlimited number of protocol headers and/or trailers. Zero-copy objects (ZCOs) are abstract data access representations designed to minimize I/O (input/output) in the encapsulation of application source data within one or more layers of communication protocol structure. They are constructed within the heap space of a Simple Data Recorder (SDR) data store to which all participating layers of the stack must have access. Each ZCO contains general information enabling access to the core source data object (an item of application data), together with (a) a linked list of zero or more specific extents that reference portions of this source data object, and (b) linked lists of protocol header and trailer capsules. The concatenation of the headers (in ascending stack sequence), the source data object extents, and the trailers (in descending stack sequence) constitute the transmitted data object constructed from the ZCO. This scheme enables a source data object to be encapsulated in a succession of protocol layers without ever having to be copied from a buffer at one layer of the protocol stack to an encapsulating buffer at a lower layer of the stack. For large source data objects, the savings in copy time and reduction in memory consumption may be considerable.
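The flight implementation stores ZCOs in an SDR heap shared by all protocol layers; the following Python sketch is only a simplified in-memory illustration of the data structure described above (an extent list plus header and trailer capsule lists), with the ordering details simplified.

```python
# Simplified in-memory sketch of a zero-copy object (ZCO); the real
# implementation keeps these structures in an SDR heap shared by all layers.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Extent:
    source: bytes      # reference to the application source data object
    offset: int
    length: int

@dataclass
class ZCO:
    extents: List[Extent] = field(default_factory=list)
    headers: List[bytes] = field(default_factory=list)   # pushed top of stack first
    trailers: List[bytes] = field(default_factory=list)

    def encapsulate(self, header: bytes = b"", trailer: bytes = b"") -> None:
        # Each layer adds its capsules by reference; the source data is not copied.
        if header:
            self.headers.append(header)
        if trailer:
            self.trailers.append(trailer)

    def transmit_image(self) -> bytes:
        # The wire image is built only at emission time: later-pushed (lower-layer)
        # headers go outermost, trailers in the reverse order.
        body = b"".join(e.source[e.offset:e.offset + e.length] for e in self.extents)
        return b"".join(reversed(self.headers)) + body + b"".join(self.trailers)

zco = ZCO(extents=[Extent(b"application source data", offset=0, length=23)])
zco.encapsulate(header=b"[BP]", trailer=b"[bp]")   # e.g., an upper protocol layer
zco.encapsulate(header=b"[LTP]")                   # e.g., a lower layer beneath it
print(zco.transmit_image())
```

Each layer calls encapsulate() with its own header and/or trailer, and only transmit_image() ever touches the payload bytes, which is the zero-copy property the abstract describes.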
ACR/NEMA Digital Image Interface Standard (An Illustrated Protocol Overview)
NASA Astrophysics Data System (ADS)
Lawrence, G. Robert
1985-09-01
The American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) have sponsored a joint standards committee mandated to develop a universal interface standard for the transfer of radiology images among a variety of PACS imaging devices. The resulting standard interface conforms to the ISO/OSI standard reference model for network protocol layering. The standard interface specifies the lower layers of the reference model (Physical, Data Link, Transport and Session) and implies a requirement of the Network Layer should a requirement for a network exist. The message content has been considered and a flexible message and image format specified. The following imaging equipment modalities are supported by the standard interface: CT (Computed Tomography), DS (Digital Subtraction), NM (Nuclear Medicine), US (Ultrasound), MR (Magnetic Resonance), and DR (Digital Radiology). The following data types are standardized over the transmission interface media: image data, digitized voice, header data, raw data, text reports, graphics, and others. This paper consists of text supporting the illustrated protocol data flow. Each layer will be individually treated. Particular emphasis will be given to the Data Link layer (Frames) and the Transport layer (Packets). The discussion utilizes a finite state sequential machine model for the protocol layers.
Method of Performance-Aware Security of Unicast Communication in Hybrid Satellite Networks
NASA Technical Reports Server (NTRS)
Baras, John S. (Inventor); Roy-Chowdhury, Ayan (Inventor)
2014-01-01
A method and apparatus utilizes Layered IPSEC (LES) protocol as an alternative to IPSEC for network-layer security including a modification to the Internet Key Exchange protocol. For application-level security of web browsing with acceptable end-to-end delay, the Dual-mode SSL protocol (DSSL) is used instead of SSL. The LES and DSSL protocols achieve desired end-to-end communication security while allowing the TCP and HTTP proxy servers to function correctly.
Global Futures: a multithreaded execution model for Global Arrays-based applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Krishnamoorthy, Sriram; Vishnu, Abhinav
2012-05-31
We present Global Futures (GF), an execution model extension to Global Arrays, which is based on a PGAS-compatible Active Message-based paradigm. We describe the design and implementation of Global Futures and illustrate its use in a computational chemistry application benchmark (Hartree-Fock matrix construction using the Self-Consistent Field method). Our results show how we used GF to increase the scalability of the Hartree-Fock matrix build to up to 6,144 cores of an Infiniband cluster. We also show how GF's multithreaded execution has comparable performance to the traditional process-based SPMD model.
Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks
NASA Technical Reports Server (NTRS)
Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias;
2006-01-01
The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.
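As a concrete illustration of the kind of measurement IMB performs, a minimal ping-pong microbenchmark using mpi4py might look as follows; it is a sketch in the spirit of IMB PingPong, not the benchmark's actual code, and the message size and repetition count are arbitrary.

```python
# Minimal ping-pong latency/bandwidth sketch; run with: mpiexec -n 2 python pingpong.py
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nbytes, reps = 1 << 20, 100
buf = bytearray(nbytes)

comm.Barrier()
t0 = time.perf_counter()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
t1 = time.perf_counter()

if rank == 0:
    rtt = (t1 - t0) / reps
    print(f"avg round trip {rtt * 1e6:.1f} us, bandwidth {2 * nbytes / rtt / 1e6:.1f} MB/s")
```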
The Effects of Cognitive Jamming on Wireless Sensor Networks Used for Geolocation
2012-03-01
continuously sends out random bits to the channel without following any MAC-layer etiquette [31]. Normally, the underlying MAC protocol allows... UDP User Datagram Protocol; MIMO Multiple Input Multiple Output... information is packaged and distributed on the network layer, only the physical measurements are considered. This protocol is used to detect faulty nodes
Analytical Models of Cross-Layer Protocol Optimization in Real-Time Wireless Sensor Ad Hoc Networks
NASA Astrophysics Data System (ADS)
Hortos, William S.
The real-time interactions among the nodes of a wireless sensor network (WSN) to cooperatively process data from multiple sensors are modeled. Quality-of-service (QoS) metrics are associated with the quality of fused information: throughput, delay, packet error rate, etc. Multivariate point process (MVPP) models of discrete random events in WSNs establish stochastic characteristics of optimal cross-layer protocols. Discrete-event, cross-layer interactions in mobile ad hoc network (MANET) protocols have been modeled using a set of concatenated design parameters and associated resource levels by the MVPPs. Characterization of the "best" cross-layer designs for a MANET is formulated by applying the general theory of martingale representations to controlled MVPPs. Performance is described in terms of concatenated protocol parameters and controlled through conditional rates of the MVPPs. Modeling limitations to determination of closed-form solutions versus explicit iterative solutions for ad hoc WSN controls are examined.
High-Performance CCSDS Encapsulation Service Implementation in FPGA
NASA Technical Reports Server (NTRS)
Clare, Loren P.; Torgerson, Jordan L.; Pang, Jackson
2010-01-01
The Consultative Committee for Space Data Systems (CCSDS) Encapsulation Service is a convergence layer between lower-layer space data link framing protocols, such as CCSDS Advanced Orbiting System (AOS), and higher-layer networking protocols, such as CFDP (CCSDS File Delivery Protocol) and Internet Protocol Extension (IPE). CCSDS Encapsulation Service is considered part of the data link layer. The CCSDS AOS implementation is described in the preceding article. Recent advancement in RF modem technology has allowed multi-megabit transmission over space links. With this increase in data rate, the CCSDS Encapsulation Service needs to be optimized to both reduce energy consumption and operate at a high rate. CCSDS Encapsulation Service has been implemented as an intellectual property core so that the aforementioned problems are solved by way of operating the CCSDS Encapsulation Service inside an FPGA. The CCSDS Encapsulation Service FPGA implementation consists of both packetizing and de-packetizing features.
A Scenario-Based Protocol Checker for Public-Key Authentication Scheme
NASA Astrophysics Data System (ADS)
Saito, Takamichi
Security protocols provide communication security for the Internet. One of their important features is authentication with key exchange, whose correctness is a prerequisite for the security of the communication as a whole. In this paper, we introduce three attack models, realized as attack scenarios, and provide an authentication-protocol checker that applies the three attack scenarios based on these models. We also use it to check two popular security protocols: Secure SHell (SSH) and Secure Socket Layer/Transport Layer Security (SSL/TLS).
Protocol and Topology Issues for Wide-Area Satellite Interconnection of Terrestrial Optical LANs
NASA Astrophysics Data System (ADS)
Parraga, N.
2002-01-01
Apart from broadcasting, the satellite business is targeting niche markets. Wide area interconnection is considered as one of these niche markets, since it addresses operators and business LANs (B2B, business to business) in remote areas where terrestrial infrastructure is not available. These LANs - if high-speed - are typically based on optical networks such as SONET. One of the advantages of SONET is its architecture flexibility and capacity to transport all kinds of applications including multimedia with a range of different transmission rates. The applications can be carried by different protocols among which the Internet Protocol (IP) or the Asynchronous Transfer Mode (ATM) are the most prominent ones. Thus, the question arises how these protocols can be interconnected via the satellite segment. The paper addresses several solutions for interworking with different protocols. For this investigation we distinguish first of all between the topology and the switching technology of the satellites. In case of a star network with a transparent satellite, the satellite protocol consists of a physical layer and a data link layer, which can be directly interconnected with a layer 2 interworking function to their terrestrial counterparts in the SONET backbone. For regenerative satellites the situation is more complex: here we need to distinguish the types of transport protocols being used in the terrestrial and satellite segment. Whereas IP, ATM, MPEG dominate in the terrestrial networks, satellite systems usually do not follow these standards. Some might employ minor additions (for instance, satellite specific packet headers), some might be completely proprietary. In general, interworking must be done for the data plane on top of layer 2 (data link layer), whereas for the signaling plane the interworking is on top of layer 3. In the paper we will discuss the protocol stacks for ATM, IP, and MPEG with a regenerative satellite system. As an example we will use the EuroSkyWay satellite system for multimedia services. EuroSkyWay uses a GEO satellite with onboard switching. It has its own proprietary protocol stack for data link control (DLC), logical link control (LLC) and layer 3 functions such as resource management, call admission control and authentication. Special attention is paid to the IP interworking with the Layer 3 function since IP does not support connection set-up and session protocols, thus proper interworking functions with IP signaling protocols for resource reservation and routing such as RSVP, BGP, and ICMP need to be developed. Whereas the EuroSkyWay system is a representative of a meshed topology, DVB-RCS systems usually have a star configuration with a central hub station. Different data streams are distinguished by program identifiers (PIDs). Recent proposals aim at the evolution of DVB-RCS towards a fully meshed structure. The paper will also discuss the protocol architecture for interconnecting SONET LANs over these systems. Finally, a performance comparison of the different solutions will be given in terms of cell overhead rate and signalling effort for selected scenarios.
Study on Cloud Security Based on Trust Spanning Tree Protocol
NASA Astrophysics Data System (ADS)
Lai, Yingxu; Liu, Zenghui; Pan, Qiuyue; Liu, Jing
2015-09-01
Attacks executed on Spanning Tree Protocol (STP) expose the weakness of link layer protocols and put the higher layers in jeopardy. Although the problems have been studied for many years and various solutions have been proposed, many security issues remain. To enhance the security and credibility of the layer-2 network, we propose a trust-based spanning tree protocol aiming at achieving a higher credibility of the LAN switch with a simple and lightweight authentication mechanism. If correctly implemented in each trusted switch, the authentication of trust-based STP can guarantee the credibility of topology information that is announced to other switches in the LAN. To verify the enforcement of the trusted protocol, we present a new trust evaluation method of the STP using a specification-based state model. We implement a prototype of trust-based STP to investigate its practicality. Experiments show that the trusted protocol can achieve security goals and effectively avoid STP attacks with a lower computation overhead and good convergence performance.
Tuning collective communication for Partitioned Global Address Space programming models
Nishtala, Rajesh; Zheng, Yili; Hargrove, Paul H.; ...
2011-06-12
Partitioned Global Address Space (PGAS) languages offer programmers the convenience of a shared memory programming style combined with locality control necessary to run on large-scale distributed memory systems. Even within a PGAS language programmers often need to perform global communication operations such as broadcasts or reductions, which are best performed as collective operations in which a group of threads work together to perform the operation. In this study we consider the problem of implementing collective communication within PGAS languages and explore some of the design trade-offs in both the interface and implementation. In particular, PGAS collectives have semantic issues that are different than in send–receive style message passing programs, and different implementation approaches that take advantage of the one-sided communication style in these languages. We present an implementation framework for PGAS collectives as part of the GASNet communication layer, which supports shared memory, distributed memory and hybrids. The framework supports a broad set of algorithms for each collective, over which the implementation may be automatically tuned. In conclusion, we demonstrate the benefit of optimized GASNet collectives using application benchmarks written in UPC, and demonstrate that the GASNet collectives can deliver scalable performance on a variety of state-of-the-art parallel machines including a Cray XT4, an IBM BlueGene/P, and a Sun Constellation system with InfiniBand interconnect.
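GASNet's tuned collectives are implemented in C inside the communication layer itself; the sketch below only illustrates, with mpi4py one-sided operations standing in for a PGAS runtime, how a binomial-tree broadcast can be built from one-sided puts with fence synchronization.

```python
# Binomial-tree broadcast built from one-sided puts (mpi4py RMA standing in
# for a PGAS one-sided layer); run with: mpiexec -n 4 python bcast.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

data = np.zeros(4, dtype='i')
if rank == 0:
    data[:] = [1, 2, 3, 4]                 # the root owns the payload

win = MPI.Win.Create(data, comm=comm)      # expose each rank's buffer
span = 1
while span < size:
    win.Fence()
    if rank < span and rank + span < size:
        win.Put(data, rank + span)         # one-sided: the target posts no receive
    win.Fence()
    span *= 2
win.Free()
print(rank, data)
```

Each round doubles the set of ranks holding the payload without any rank posting a matching receive, which is the one-sided style the abstract contrasts with send-receive message passing.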
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graham, Richard L; Poole, Stephen W; Shamis, Pavel
2010-01-01
This paper introduces the newly developed InfiniBand (IB) Management Queue capability, used by the Host Channel Adapter (HCA) to manage network task data flow dependencies, and progress the communications associated with such flows. These tasks include sends, receives, and the newly supported wait task, and are scheduled by the HCA based on a data dependency description provided by the user. This functionality is supported by the ConnectX-2 HCA, and provides the means for delegating collective communication management and progress to the HCA, also known as collective communication offload. This provides a means for overlapping collective communications managed by the HCA and computation on the Central Processing Unit (CPU), thus making it possible to reduce the impact of system noise on parallel applications using collective operations. This paper further describes how this new capability can be used to implement scalable Message Passing Interface (MPI) collective operations, describing the high level details of how this new capability is used to implement the MPI Barrier collective operation, focusing on the latency sensitive performance aspects of this new capability. This paper concludes with small scale benchmark experiments comparing implementations of the barrier collective operation, using the new network offload capabilities, with established point-to-point based implementations of these same algorithms, which manage the data flow using the central processing unit. These early results demonstrate the promise this new capability provides to improve the scalability of high performance applications using collective communications. The latency of the HCA based implementation of the barrier is similar to that of the best performing point-to-point based implementation managed by the central processing unit, starting to outperform these as the number of processes involved in the collective operation increases.
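The actual interface is the InfiniBand verbs management queue on the ConnectX-2 HCA; the Python sketch below is only a conceptual illustration of the task list one rank might hand to the HCA for a dissemination barrier, where each round's send depends on the previous round's receive via a wait task. The task tuples and naming are assumptions for illustration.

```python
# Conceptual sketch (not the ConnectX-2 verbs API): the task list one rank
# would hand to the HCA for a dissemination barrier among 'size' ranks.
import math

def barrier_task_list(rank: int, size: int):
    tasks = []
    rounds = math.ceil(math.log2(size)) if size > 1 else 0
    for k in range(rounds):
        if k > 0:
            # WAIT task: round k's send must not start before round k-1's
            # message has arrived; the HCA resolves this dependency itself.
            tasks.append(("wait", f"recv_{k - 1}"))
        tasks.append(("send", (rank + 2 ** k) % size, f"send_{k}"))
        # In hardware the receives would normally be pre-posted up front.
        tasks.append(("recv", (rank - 2 ** k) % size, f"recv_{k}"))
    return tasks

for task in barrier_task_list(rank=0, size=8):
    print(task)
```

Because the HCA resolves the wait dependencies itself, the CPU can keep computing while the barrier progresses, which is the overlap the abstract highlights.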
High-Performance CCSDS AOS Protocol Implementation in FPGA
NASA Technical Reports Server (NTRS)
Clare, Loren P.; Torgerson, Jordan L.; Pang, Jackson
2010-01-01
The Consultative Committee for Space Data Systems (CCSDS) Advanced Orbiting Systems (AOS) space data link protocol provides a framing layer between channel coding such as LDPC (low-density parity-check) and higher-layer link multiplexing protocols such as CCSDS Encapsulation Service, which is described in the following article. Recent advancement in RF modem technology has allowed multi-megabit transmission over space links. With this increase in data rate, the CCSDS AOS protocol implementation needs to be optimized to both reduce energy consumption and operate at a high rate.
NASA Astrophysics Data System (ADS)
Hortos, William S.
2003-07-01
Mobile ad hoc networking (MANET) supports self-organizing, mobile infrastructures and enables an autonomous network of mobile nodes that can operate without a wired backbone. Ad hoc networks are characterized by multihop, wireless connectivity via packet radios and by the need for efficient dynamic protocols. All routers are mobile and can establish connectivity with other nodes only when they are within transmission range. Importantly, ad hoc wireless nodes are resource-constrained, having limited processing, memory, and battery capacity. Delivery of high quality-of-service (QoS), real-time multimedia services from Internet-based applications over a MANET is a challenge not yet achieved by proposed Internet Engineering Task Force (IETF) ad hoc network protocols in terms of standard performance metrics such as end-to-end throughput, packet error rate, and delay. In the distributed operations of route discovery and maintenance, strong interaction occurs across MANET protocol layers, in particular, the physical, media access control (MAC), network, and application layers. The QoS requirements are specified for the service classes by the application layer. The cross-layer design must also satisfy the battery-limited energy constraints, by minimizing the distributed power consumption at the nodes and along selected routes. Interactions across the layers are modeled in terms of the set of concatenated design parameters including associated energy costs. Functional dependencies of the QoS metrics are described in terms of the concatenated control parameters. New cross-layer designs are sought that optimize layer interdependencies to achieve the "best" QoS available in an energy-constrained, time-varying network. The protocol design, based on a reactive MANET protocol, adapts the provisioned QoS to dynamic network conditions and residual energy capacities. The cross-layer optimization is based on stochastic dynamic programming conditions derived from time-dependent models of MANET packet flows. Regulation of network behavior is modeled by the optimal control of the conditional rates of multivariate point processes (MVPPs); these rates depend on the concatenated control parameters through a change of probability measure. The MVPP models capture behavior of many service applications, e.g., voice, video and the self-similar behavior of Internet data sessions. Performance verification of the cross-layer protocols, derived from the dynamic programming conditions, can be achieved by embedding the conditions in a reactive routing protocol for MANETs, in a simulation environment, such as the wireless extension of ns-2. A canonical MANET scenario consists of a distributed collection of battery-powered laptops or hand-held terminals, capable of hosting multimedia applications. Simulation details and performance tradeoffs, not presented, remain for a sequel to the paper.
Multimedia-Based Integration of Cross-Layer Techniques
2014-06-01
wireless networks play a critical role in net-centric warfare, including the sharing of the time-sensitive battlefield information among military nodes for... layer protocols are key enablers in effectively deploying the military wireless network. This report discusses the design of cross-layer protocols... The Air Force (AF) Wireless Networks (also denoted as military networks in this report) must be capable of
Aghdasi, Hadi S; Abbaspour, Maghsoud; Moghadam, Mohsen Ebrahimi; Samei, Yasaman
2008-08-04
Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS) and wireless communications, and also the availability of CMOS cameras, microphones and small-scale array sensors, which may ubiquitously capture multimedia content from the field, have fostered the development of low-cost, resource-limited Wireless Video-based Sensor Networks (WVSNs). Given the constraints of video-based sensor nodes and wireless sensor networks, supporting a video stream is not easy to implement with present sensor network protocols. In this paper, a thorough architecture is presented for video transmission over WVSNs, called the Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture spans three layers of the communication protocol stack and accounts for wireless video sensor node constraints such as limited processing and energy resources while preserving video quality on the receiver side. The compression, transport, and routing protocols are proposed in the application, transport, and network layers respectively, and a dropping scheme is also presented in the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving video quality.
Shahzad, Aamir; Lee, Malrey; Xiong, Neal Naixue; Jeong, Gisung; Lee, Young-Keun; Choi, Jae-Young; Mahesar, Abdul Wheed; Ahmad, Iftikhar
2016-01-01
In industrial supervisory control and data acquisition (SCADA) systems, the pseudo-transport layer of the distributed network protocol (DNP3) performs the functions of the transport layer and network layer of the open systems interconnection (OSI) model. This study used a simulation design of a water pumping system, in which the network nodes are directly and wirelessly connected with sensors and are monitored by the main controller, as part of the wireless SCADA system. The study also focuses on the security issues inherent in the pseudo-transport layer of the DNP3 protocol. During disassembly and reassembly, the pseudo-transport layer keeps track of the byte sequence. However, no mechanism is available that can verify the message or maintain the integrity of the bytes received/transmitted from/to the data link layer or in the send/respond exchanges with the main controller/sensors. To properly and sequentially keep track of the bytes, a mechanism is required that can perform verification while bytes are received/transmitted from/to the lower layer of the DNP3 protocol or exchanged with field sensors. For security and byte verification purposes, a mechanism is proposed for the pseudo-transport layer that employs a cryptographic algorithm. A dynamic choice security buffer (SB) is designed and employed during the security development. To achieve the desired goals of the proposed study, a pseudo-transport layer stack model is designed using the DNP3 protocol open library, and the security is deployed and tested without changing the original design. PMID:26950129
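The abstract does not give the construction of the dynamic choice security buffer, so the sketch below is only a generic illustration of byte-sequence verification at a pseudo-transport-like layer: each segment carries a toy sequence number and an HMAC tag that the receiver checks before reassembly. The key, segment size, and header format are assumptions, not the paper's design.

```python
# Generic sketch (not the paper's SB design): tag each pseudo-transport-style
# segment with an HMAC so the receiving layer can verify the byte sequence.
import hmac, hashlib

KEY = b"shared-secret-key"          # assumed pre-shared between master and outstation
SEG = 16                            # illustrative segment size in bytes

def segment_and_tag(app_bytes: bytes):
    frames = []
    for seq, off in enumerate(range(0, len(app_bytes), SEG)):
        chunk = app_bytes[off:off + SEG]
        header = bytes([seq & 0x3F])                     # toy 6-bit sequence number
        tag = hmac.new(KEY, header + chunk, hashlib.sha256).digest()[:8]
        frames.append(header + chunk + tag)
    return frames

def verify_and_reassemble(frames):
    out = bytearray()
    for frame in frames:
        header, chunk, tag = frame[:1], frame[1:-8], frame[-8:]
        expect = hmac.new(KEY, header + chunk, hashlib.sha256).digest()[:8]
        if not hmac.compare_digest(tag, expect):
            raise ValueError(f"integrity check failed at segment {header[0]}")
        out += chunk
    return bytes(out)

payload = bytes(range(40))
assert verify_and_reassemble(segment_and_tag(payload)) == payload
```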
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hjelm, Nathan Thomas; Pritchard, Howard Porter
These are a series of slides for a presentation for ExxonMobil's visit to Los Alamos National Laboratory. Topics covered are: Open MPI - The Release Story, MPI-3 RMA in Open MPI, MPI dynamic process management and Open MPI, and new options with CLE 6. Open MPI RMA features are: since v2.0.0 full support for the MPI-3.1 specification, support for non-contiguous datatypes, support for direct use of the RDMA capabilities of high performance networks (Cray Gemini/Aries, Infiniband), starting in v2.1.0 will have support for using network atomic operations for MPI_Fetch_and_op and MPI_Compare_and_swap, tested with MPI_THREAD_MULTIPLE.
Research and implementation of SATA protocol link layer based on FPGA
NASA Astrophysics Data System (ADS)
Liu, Wen-long; Liu, Xue-bin; Qiang, Si-miao; Yan, Peng; Wen, Zhi-gang; Kong, Liang; Liu, Yong-zheng
2018-02-01
To address the need for high-performance, real-time storage of the high-speed image data generated by a detector, this work selects a portable image-storage hard disk with a SATA interface. Compared with existing storage media, it offers large capacity, a high transfer rate, low cost, retention of data on power-down, and many other advantages. This paper focuses on the link layer of the protocol: it analyzes the implementation process of the SATA 2.0 protocol and builds the corresponding state machines. It then analyzes the resources of the Kintex-7 FPGA family, builds state machines according to the protocol, writes Verilog to implement the link-layer modules, and runs simulation tests. Finally, the design is tested on a Kintex-7 development board platform and essentially meets the requirements of the SATA 2.0 protocol.
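The paper's state machines are written in Verilog against the SATA 2.0 specification; as a loose illustration of the transition-table style involved, the Python fragment below steps a toy link-layer-like FSM through a few events. The state and event names are placeholders, not the actual SATA link-layer states.

```python
# Highly simplified transition-table FSM in the style used for a link layer
# (state and event names are illustrative placeholders only).
TRANSITIONS = {
    ("IDLE",        "x_rdy_received"): "RECEIVE",
    ("IDLE",        "tx_request"):     "TRANSMIT",
    ("TRANSMIT",    "r_ok"):           "IDLE",
    ("TRANSMIT",    "r_err"):          "IDLE",
    ("RECEIVE",     "eof"):            "SEND_STATUS",
    ("SEND_STATUS", "done"):           "IDLE",
}

def step(state: str, event: str) -> str:
    # Unknown events leave the state unchanged (a common defensive choice).
    return TRANSITIONS.get((state, event), state)

state = "IDLE"
for ev in ["tx_request", "r_ok", "x_rdy_received", "eof", "done"]:
    state = step(state, ev)
    print(ev, "->", state)
```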
ARINC 818 express for high-speed avionics video and power over coax
NASA Astrophysics Data System (ADS)
Keller, Tim; Alexander, Jon
2012-06-01
CoaXPress is a new standard for high-speed video over coax cabling developed for the machine vision industry. CoaXPress includes both a physical layer and a video protocol. The physical layer has desirable features for aerospace and defense applications: it allows 3Gbps (up to 6Gbps) communication, includes a 21Mbps return path allowing for bidirectional communication, and provides up to 13W of power, all over a single coax connection. ARINC 818, titled "Avionics Digital Video Bus," is a protocol standard developed specifically for high speed, mission critical aerospace video systems. ARINC 818 is being widely adopted for new military and commercial display and sensor applications. The ARINC 818 protocol combined with the CoaXPress physical layer provides desirable characteristics for many aerospace systems. This paper presents the results of a technology demonstration program to marry the physical layer from CoaXPress with the ARINC 818 protocol. ARINC 818 is a protocol, not a physical layer. Typically, ARINC 818 is implemented over fiber or copper for speeds of 1 to 2Gbps, but beyond 2Gbps, it has been implemented exclusively over fiber optic links. In many rugged applications, a copper interface is still desired; implementing ARINC 818 over the CoaXPress physical layer provides a path to 3 and 6 Gbps copper interfaces for ARINC 818. Results of the successful technology demonstration dubbed ARINC 818 Express are presented showing 3Gbps communication while powering a remote module over a single coax cable. The paper concludes with suggested next steps for bringing this technology to production readiness.
A Survey on Underwater Acoustic Sensor Network Routing Protocols.
Li, Ning; Martínez, José-Fernán; Meneses Chaus, Juan Manuel; Eckert, Martina
2016-03-22
Underwater acoustic sensor networks (UASNs) have become more and more important in ocean exploration applications, such as ocean monitoring, pollution detection, ocean resource management, underwater device maintenance, etc. In underwater acoustic sensor networks, since the routing protocol guarantees reliable and effective data transmission from the source node to the destination node, routing protocol design is an attractive topic for researchers. Many routing algorithms have been proposed in recent years. To present the current state of development of UASN routing protocols, we review herein the UASN routing protocol designs reported in recent years. In this paper, all the routing protocols have been classified into different groups according to their characteristics and routing algorithms, such as the non-cross-layer design routing protocol, the traditional cross-layer design routing protocol, and the intelligent algorithm-based routing protocol. This is also the first paper that introduces intelligent algorithm-based UASN routing protocols. In addition, in this paper, we investigate the development trends of UASN routing protocols, which can provide researchers with clear and direct insights for further research.
NASA Technical Reports Server (NTRS)
Benbenek, Daniel; Soloff, Jason; Lieb, Erica
2010-01-01
Selecting a communications and network architecture for future manned space flight requires an evaluation of the varying goals and objectives of the program, development of communications and network architecture evaluation criteria, and assessment of critical architecture trades. This paper uses the Cx Program's proposed exploration activities as a guideline; lunar sortie, outpost, Mars, and flexible path options are described. A set of communications network architecture evaluation criteria is proposed and described, including interoperability, security, reliability, and ease of automating topology changes. Finally, a key set of architecture options is traded, including (1) multiplexing data at a common network layer vs. at the data link layer, (2) implementing multiple network layers vs. a single network layer, and (3) the use of a particular network layer protocol, primarily IPv6 vs. Delay Tolerant Networking (DTN). In summary, the protocol options are evaluated against the proposed exploration activities and their relative performance with respect to the criteria is assessed. An architectural approach is recommended which includes (a) the capability of multiplexing at both the network layer and the data link layer and (b) a single network layer for operations at each program phase, as these solutions are best suited to respond to the widest array of program needs and meet each of the evaluation criteria.
Mobile Virtual Private Networking
NASA Astrophysics Data System (ADS)
Pulkkis, Göran; Grahn, Kaj; Mårtens, Mathias; Mattsson, Jonny
Mobile Virtual Private Networking (VPN) solutions based on the Internet Security Protocol (IPSec), Transport Layer Security/Secure Socket Layer (SSL/TLS), Secure Shell (SSH), 3G/GPRS cellular networks, Mobile IP, and the presently experimental Host Identity Protocol (HIP) are described, compared and evaluated. Mobile VPN solutions based on HIP are recommended for future networking because of superior processing efficiency and network capacity demand features. Mobile VPN implementation issues associated with the IP protocol versions IPv4 and IPv6 are also evaluated. Mobile VPN implementation experiences are presented and discussed.
Fingerprinting Software Defined Networks and Controllers
2015-03-01
Intrusion Prevention System with SDN; Modular Security Services... Control Message Protocol; IDS Intrusion Detection System; IPS Intrusion Prevention System; ISP Internet Service Provider; LLDP Link Layer Discovery Protocol... layer functions (e.g., web proxies, firewalls, intrusion detection/prevention, load balancers, etc.). The increase in switch capabilities combined
Introduction to multiprotocol over ATM (MPOA)
NASA Astrophysics Data System (ADS)
Fredette, Andre N.
1997-10-01
Multiprotocol over ATM (MPOA) is a new protocol specified by the ATM Forum. MPOA provides a framework for effectively synthesizing bridging and routing with ATM in an environment of diverse protocols and network technologies. The primary goal of MPOA is the efficient transfer of inter-subnet unicast data in a LAN Emulation (LANE) environment. MPOA integrates LANE and the next hop resolution protocol (NHRP) to preserve the benefits of LAN Emulation, while allowing inter-subnet, internetwork layer protocol communication over ATM VCCs without requiring routers in the data path. It reduces latency and the internetwork layer forwarding load on backbone routers by enabling direct connectivity between ATM-attached edge devices (i.e., shortcuts). To establish these shortcuts, MPOA uses both routing and bridging information to locate the edge device closest to the addressed end station. By integrating LANE and NHRP, MPOA allows the physical separation of internetwork layer route calculation and forwarding, a technique known as virtual routing. This separation provides a number of key benefits including enhanced manageability and reduced complexity of internetwork layer capable edge devices. This paper provides an overview of MPOA that summarizes the goals, architecture, and key attributes of the protocol. In presenting this overview, the salient attributes of LANE and NHRP are described as well.
The reliable multicast protocol application programming interface
NASA Technical Reports Server (NTRS)
Montgomery , Todd; Whetten, Brian
1995-01-01
The Application Programming Interface for the Berkeley/WVU implementation of the Reliable Multicast Protocol is described. This transport layer protocol is implemented as a user library that applications and software buses link against.
LANES - LOCAL AREA NETWORK EXTENSIBLE SIMULATOR
NASA Technical Reports Server (NTRS)
Gibson, J.
1994-01-01
The Local Area Network Extensible Simulator (LANES) provides a method for simulating the performance of high speed local area network (LAN) technology. LANES was developed as a design and analysis tool for networking on board the Space Station. The load, network, link and physical layers of a layered network architecture are all modeled. LANES models two different lower-layer protocols, the Fiber Distributed Data Interface (FDDI) and the Star*Bus. The load and network layers are included in the model as a means of introducing upper-layer processing delays associated with message transmission; they do not model any particular protocols. FDDI is an American National Standard and an International Organization for Standardization (ISO) draft standard for a 100 megabit-per-second fiber-optic token ring. Specifications for the LANES model of FDDI are taken from the Draft Proposed American National Standard FDDI Token Ring Media Access Control (MAC), document number X3T9.5/83-16 Rev. 10, February 28, 1986. This is a mature document describing the FDDI media-access-control protocol. Star*Bus, also known as the Fiber Optic Demonstration System, is a protocol for a 100 megabit-per-second fiber-optic star-topology LAN. This protocol, along with a hardware prototype, was developed by Sperry Corporation under contract to NASA Goddard Space Flight Center as a candidate LAN protocol for the Space Station. LANES can be used to analyze performance of a networking system based on either FDDI or Star*Bus under a variety of loading conditions. Delays due to upper-layer processing can easily be nullified, allowing analysis of FDDI or Star*Bus as stand-alone protocols. LANES is a parameter-driven simulation; it provides considerable flexibility in specifying both protocol and run-time parameters. Code has been optimized for fast execution and detailed tracing facilities have been included. LANES was written in FORTRAN 77 for implementation on a DEC VAX under VMS 4.6. It consists of two programs, a simulation program and a user-interface program. The simulation program requires the SLAM II simulation library from Pritsker and Associates, W. Lafayette IN; the user interface is implemented using the Ingres database manager from Relational Technology, Inc. Information about running the simulation program without the user-interface program is contained in the documentation. The memory requirement is 129,024 bytes. LANES was developed in 1988.
A proposed group management scheme for XTP multicast
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Weaver, Alfred C.
1990-01-01
The purpose of a group management scheme is to enable its associated transfer layer protocol to be responsive to user determined reliability requirements for multicasting. Group management (GM) must assist the client process in coordinating multicast group membership, allow the user to express the subset of the multicast group that a particular multicast distribution must reach in order to be successful (reliable), and provide the transfer layer protocol with the group membership information necessary to guarantee delivery to this subset. GM provides services and mechanisms that respond to the need of the client process or process level management protocols to coordinate, modify, and determine attributes of the multicast group, especially membership. XTP GM provides a link between process groups and their multicast groups by maintaining a group membership database that identifies members in a name space understood by the underlying transfer layer protocol. Other attributes of the multicast group useful to both the client process and the data transfer protocol may be stored in the database. Examples include the relative dispersion, most recent update, and default delivery parameters of a group.
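As an illustration of the kind of membership database described above, the Python sketch below maps a process group to a multicast group, per-member transfer-layer names, and a reliability subset; all names and fields are hypothetical, not the XTP GM data layout.

```python
# Illustrative sketch of a group-management database linking a process group
# to its multicast group and per-member attributes (names are hypothetical).
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Member:
    transfer_addr: str                       # name understood by the transfer layer
    last_update: float = 0.0
    required_for_reliability: bool = False   # part of the "must reach" subset

@dataclass
class MulticastGroup:
    group_addr: str                          # single logical multicast address
    members: Dict[str, Member] = field(default_factory=dict)
    default_delivery: str = "best_effort"

    def reliable_subset(self):
        # The transfer layer must confirm delivery to exactly these members.
        return [m.transfer_addr for m in self.members.values()
                if m.required_for_reliability]

db: Dict[str, MulticastGroup] = {}
db["sensors"] = MulticastGroup("239.1.2.3")
db["sensors"].members["node-a"] = Member("10.0.0.5", required_for_reliability=True)
db["sensors"].members["node-b"] = Member("10.0.0.6")
print(db["sensors"].reliable_subset())
```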
Emulation Platform for Cyber Analysis of Wireless Communication Network Protocols
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Leeuwen, Brian P.; Eldridge, John M.
Wireless networking and mobile communications are increasing around the world and in all sectors of our lives. With increasing use, the density and complexity of the systems increase, with more base stations and advanced protocols to enable higher data throughputs. The security of data transported over wireless networks must also evolve with the advances in technologies enabling more capable wireless networks. However, means for analysis of the effectiveness of security approaches and implementations used on wireless networks are lacking. More specifically, a capability to analyze the lower-layer protocols (i.e., Link and Physical layers) is a major challenge. An analysis approach that incorporates protocol implementations without the need for RF emissions is necessary. In this research paper, several emulation tools and custom extensions that enable an analysis platform to perform cyber security analysis of lower layer wireless networks are presented. A use case of a published exploit in the 802.11 (i.e., WiFi) protocol family is provided to demonstrate the effectiveness of the described emulation platform.
A Novel Cross-Layer Routing Protocol Based on Network Coding for Underwater Sensor Networks.
Wang, Hao; Wang, Shilian; Bu, Renfei; Zhang, Eryang
2017-08-08
Underwater wireless sensor networks (UWSNs) have attracted increasing attention in recent years because of their numerous applications in ocean monitoring, resource discovery and tactical surveillance. However, the design of reliable and efficient transmission and routing protocols is a challenge due to the low acoustic propagation speed and complex channel environment in UWSNs. In this paper, we propose a novel cross-layer routing protocol based on network coding (NCRP) for UWSNs, which utilizes network coding and cross-layer design to greedily forward data packets to sink nodes efficiently. The proposed NCRP takes full advantage of multicast transmission and decodes packets jointly with encoded packets received from multiple potential nodes in the entire network. The transmission power is optimized in our design to extend the life cycle of the network. Moreover, we design a real-time routing maintenance protocol to update the route when detecting inefficient relay nodes. Substantial simulations in an underwater environment using Network Simulator 3 (NS-3) show that NCRP significantly improves the network performance in terms of energy consumption, end-to-end delay and packet delivery ratio compared with other routing protocols for UWSNs.
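NCRP itself combines network coding with cross-layer routing decisions; the toy Python example below only illustrates the underlying network-coding idea with a single XOR combination, which lets a relay broadcast one coded packet in place of two.

```python
# Toy illustration of the network-coding idea behind NCRP (not the actual
# protocol): a relay XOR-combines packets, and a sink that already holds one
# of them can recover the other from the coded packet.
def xor_packets(a: bytes, b: bytes) -> bytes:
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"sensor-reading-A"
p2 = b"sensor-reading-B"

coded = xor_packets(p1, p2)        # relay broadcasts one coded packet instead of two
recovered = xor_packets(coded, p1) # sink overheard p1 earlier, so it can decode p2
assert recovered == p2
```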
Protocol for Communication Networking for Formation Flying
NASA Technical Reports Server (NTRS)
Jennings, Esther; Okino, Clayton; Gao, Jay; Clare, Loren
2009-01-01
An application-layer protocol and a network architecture have been proposed for data communications among multiple autonomous spacecraft that are required to fly in a precise formation in order to perform scientific observations. The protocol could also be applied to other autonomous vehicles operating in formation, including robotic aircraft, robotic land vehicles, and robotic underwater vehicles. A group of spacecraft or other vehicles to which the protocol applies could be characterized as a precision-formation- flying (PFF) network, and each vehicle could be characterized as a node in the PFF network. In order to support precise formation flying, it would be necessary to establish a corresponding communication network, through which the vehicles could exchange position and orientation data and formation-control commands. The communication network must enable communication during early phases of a mission, when little positional knowledge is available. Particularly during early mission phases, the distances among vehicles may be so large that communication could be achieved only by relaying across multiple links. The large distances and need for omnidirectional coverage would limit communication links to operation at low bandwidth during these mission phases. Once the vehicles were in formation and distances were shorter, the communication network would be required to provide high-bandwidth, low-jitter service to support tight formation-control loops. The proposed protocol and architecture, intended to satisfy the aforementioned and other requirements, are based on a standard layered-reference-model concept. The proposed application protocol would be used in conjunction with conventional network, data-link, and physical-layer protocols. The proposed protocol includes the ubiquitous Institute of Electrical and Electronics Engineers (IEEE) 802.11 medium access control (MAC) protocol to be used in the datalink layer. In addition to its widespread and proven use in diverse local-area networks, this protocol offers both (1) a random- access mode needed for the early PFF deployment phase and (2) a time-bounded-services mode needed during PFF-maintenance operations. Switching between these two modes could be controlled by upper-layer entities using standard link-management mechanisms. Because the early deployment phase of a PFF mission can be expected to involve multihop relaying to achieve network connectivity (see figure), the proposed protocol includes the open shortest path first (OSPF) network protocol that is commonly used in the Internet. Each spacecraft in a PFF network would be in one of seven distinct states as the mission evolved from initial deployment, through coarse formation, and into precise formation. Reconfiguration of the formation to perform different scientific observations would also cause state changes among the network nodes. The application protocol provides for recognition and tracking of the seven states for each node and for protocol changes under specified conditions to adapt the network and satisfy communication requirements associated with the current PFF mission phase. Except during early deployment, when peer-to-peer random access discovery methods would be used, the application protocol provides for operation in a centralized manner.
Efficiently passing messages in distributed spiking neural network simulation.
Thibeault, Corey M; Minkovich, Kirill; O'Brien, Michael J; Harris, Frederick C; Srinivasa, Narayan
2013-01-01
Efficiently passing spiking messages in a neural model is an important aspect of high-performance simulation. As the scale of networks has increased so has the size of the computing systems required to simulate them. In addition, the information exchange of these resources has become more of an impediment to performance. In this paper we explore spike message passing using different mechanisms provided by the Message Passing Interface (MPI). A specific implementation, MVAPICH, designed for high-performance clusters with Infiniband hardware is employed. The focus is on providing information about these mechanisms for users of commodity high-performance spiking simulators. In addition, a novel hybrid method for spike exchange was implemented and benchmarked.
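One of the mechanisms such a comparison covers is collective all-to-all exchange; the mpi4py sketch below shows a single spike-exchange step in that style. The neuron ownership scheme and spike tuple format are assumptions for illustration, not the paper's implementation.

```python
# One spike-exchange step using an MPI all-to-all style collective via mpi4py;
# spike payloads here are just (neuron_id, time) tuples bucketed by owner rank.
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Spikes produced this timestep, bucketed by the destination rank that owns
# the target neurons (the ownership scheme is an assumption).
outgoing = [[(rank * 100 + n, 0.1 * n) for n in range(random.randint(0, 3))]
            for _ in range(size)]

incoming = comm.alltoall(outgoing)   # every rank receives one bucket from every rank
spikes = [s for bucket in incoming for s in bucket]
print(f"rank {rank} received {len(spikes)} spikes")
```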
Gonzalez Caldito, Natalia; Antony, Bhavna; He, Yufan; Lang, Andrew; Nguyen, James; Rothman, Alissa; Ogbuokiri, Esther; Avornu, Ama; Balcer, Laura; Frohman, Elliot; Frohman, Teresa C; Bhargava, Pavan; Prince, Jerry; Calabresi, Peter A; Saidha, Shiv
2018-03-01
Optical coherence tomography (OCT) is a reliable method used to quantify discrete layers of the retina. Spectralis OCT is a device used for this purpose. Spectralis OCT macular scan imaging acquisition can be obtained on either the horizontal or vertical plane. The vertical protocol has been proposed as favorable, due to postulated reduction in confound of Henle's fibers on segmentation-derived metrics. Yet, agreement of the segmentation measures of horizontal and vertical macular scans remains unexplored. Our aim was to determine this agreement. Horizontal and vertical macular scans on Spectralis OCT were acquired in 20 healthy controls (HCs) and 20 multiple sclerosis (MS) patients. All scans were segmented using Heidelberg software and a Johns Hopkins University (JHU)-developed method. Agreement was analyzed using Bland-Altman analyses and intra-class correlation coefficients (ICCs). Using both segmentation techniques, mean differences (agreement at the cohort level) in the thicknesses of all macular layers derived from both acquisition protocols in MS patients and HCs were narrow (<1 µm), while the limits of agreement (LOA) (agreement at the individual level) were wider. Using JHU segmentation mean differences (and LOA) for the macular retinal nerve fiber layer (RNFL) and ganglion cell layer + inner plexiform layer (GCIP) in MS were 0.21 µm (-1.57-1.99 µm) and -0.36 µm (-1.44-1.37 µm), respectively. OCT segmentation measures of discrete retinal-layer thicknesses derived from both vertical and horizontal protocols on Spectralis OCT agree excellently at the cohort level (narrow mean differences), but only moderately at the individual level (wide LOA). This suggests patients scanned using either protocol should continue to be scanned with the same protocol. However, due to excellent agreement at the cohort level, measures derived from both acquisitions can be pooled for outcome purposes in clinical trials.
Developing a Standard Method for Link-Layer Security of CCSDS Space Communications
NASA Technical Reports Server (NTRS)
Biggerstaff, Craig
2009-01-01
Communications security for space systems has been a specialized field generally far removed from considerations of mission interoperability and cross-support; in fact, these considerations often have been viewed as intrinsically opposed to security objectives. The space communications protocols defined by the Consultative Committee for Space Data Systems (CCSDS) have a twenty-five year history of successful use in over 400 missions. While the CCSDS Telemetry, Telecommand, and Advanced Orbiting Systems protocols for use at OSI Layer 2 are operationally mature, there has been no direct support within these protocols for communications security techniques. Link-layer communications security has been successfully implemented in the past using mission-unique methods, but never before with an objective of facilitating cross-support and interoperability. This paper discusses the design of a standard method for cryptographic authentication, encryption, and replay protection at the data link layer that can be integrated into existing CCSDS protocols without disruption to legacy communications services. Integrating cryptographic operations into existing data structures and processing sequences requires a careful assessment of the potential impediments within spacecraft, ground stations, and operations centers. The objective of this work is to provide a sound method for cryptographic encapsulation of frame data that also facilitates Layer 2 virtual channel switching, such that a mission may procure data transport services as needed without involving third parties in the cryptographic processing, or split independent data streams for separate cryptographic processing.
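As a hedged sketch of the kind of frame encapsulation discussed above, the Python fragment below encrypts a frame data field while authenticating, but not encrypting, the header passed as associated data, so that virtual-channel identifiers stay readable for switching. It uses AES-GCM from the 'cryptography' package; the field layout is a simplified assumption and replay protection is omitted.

```python
# Sketch of authenticated frame encapsulation: the header stays in the clear
# (authenticated as associated data); the data field is encrypted.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

def protect_frame(header: bytes, data_field: bytes) -> bytes:
    nonce = os.urandom(12)
    ciphertext = aesgcm.encrypt(nonce, data_field, header)   # includes the GCM tag
    return header + nonce + ciphertext

def unprotect_frame(frame: bytes, header_len: int) -> bytes:
    header = frame[:header_len]
    nonce = frame[header_len:header_len + 12]
    ciphertext = frame[header_len + 12:]
    return aesgcm.decrypt(nonce, ciphertext, header)          # raises if tampered

hdr, payload = b"\x20\x03\x00\x01\x00\x0a", b"telemetry frame data"
assert unprotect_frame(protect_frame(hdr, payload), len(hdr)) == payload
```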
The Network Protocol Analysis Technique in Snort
NASA Astrophysics Data System (ADS)
Wu, Qing-Xiu
Network protocol analysis is the technical means by which a network sniffer captures packets for further analysis and understanding. Network sniffing intercepts packets and reassembles the binary format of the original message content. To obtain the information contained in them, the captured packets must be decoded according to the TCP/IP protocol stack specifications, restoring the format and content of each protocol layer and, finally, the actual data transferred to the application tier.
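The following Python sketch illustrates this layer-by-layer restoration on a synthetic Ethernet/IPv4/TCP frame using the standard struct module; a real sniffer such as Snort applies the same idea with far more complete protocol dissectors.

```python
# Layer-by-layer decoding of a captured frame (Ethernet -> IPv4 -> TCP).
import struct

def decode(frame: bytes) -> dict:
    # Ethernet header: 6-byte dst MAC, 6-byte src MAC, 2-byte EtherType
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x0800:                       # not IPv4
        return {"ethertype": hex(ethertype)}
    ip = frame[14:]
    ihl = (ip[0] & 0x0F) * 4                      # IPv4 header length in bytes
    info = {"src_ip": ".".join(map(str, ip[12:16])),
            "dst_ip": ".".join(map(str, ip[16:20])),
            "proto": ip[9]}
    if ip[9] == 6:                                # TCP: first 4 bytes are the ports
        info["sport"], info["dport"] = struct.unpack("!HH", ip[ihl:ihl + 4])
    return info

# Synthetic frame: Ethernet + minimal IPv4 header + TCP ports, just for the demo.
eth = b"\xaa" * 6 + b"\xbb" * 6 + b"\x08\x00"
ip_hdr = bytes([0x45, 0, 0, 40, 0, 0, 0, 0, 64, 6, 0, 0, 10, 0, 0, 1, 10, 0, 0, 2])
tcp = struct.pack("!HH", 1234, 80) + b"\x00" * 16
print(decode(eth + ip_hdr + tcp))
```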
Arctic Boreal Vulnerability Experiment (ABoVE) Science Cloud
NASA Astrophysics Data System (ADS)
Duffy, D.; Schnase, J. L.; McInerney, M.; Webster, W. P.; Sinno, S.; Thompson, J. H.; Griffith, P. C.; Hoy, E.; Carroll, M.
2014-12-01
The effects of climate change are being revealed at alarming rates in the Arctic and Boreal regions of the planet. NASA's Terrestrial Ecology Program has launched a major field campaign to study these effects over the next 5 to 8 years. The Arctic Boreal Vulnerability Experiment (ABoVE) will challenge scientists to take measurements in the field, study remote observations, and even run models to better understand the impacts of a rapidly changing climate for areas of Alaska and western Canada. The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center (GSFC) has partnered with the Terrestrial Ecology Program to create a science cloud designed for this field campaign - the ABoVE Science Cloud. The cloud combines traditional high performance computing with emerging technologies to create an environment specifically designed for large-scale climate analytics. The ABoVE Science Cloud utilizes (1) virtualized high-speed InfiniBand networks, (2) a combination of high-performance file systems and object storage, and (3) virtual system environments tailored for data intensive, science applications. At the center of the architecture is a large object storage environment, much like a traditional high-performance file system, that supports data proximal processing using technologies like MapReduce on a Hadoop Distributed File System (HDFS). Surrounding the storage is a cloud of high performance compute resources with many processing cores and large memory coupled to the storage through an InfiniBand network. Virtual systems can be tailored to a specific scientist and provisioned on the compute resources with extremely high-speed network connectivity to the storage and to other virtual systems. In this talk, we will present the architectural components of the science cloud and examples of how it is being used to meet the needs of the ABoVE campaign. In our experience, the science cloud approach significantly lowers the barriers and risks to organizations that require high performance computing solutions and provides the NCCS with the agility required to meet our customers' rapidly increasing and evolving requirements.
A Novel Cross-Layer Routing Protocol Based on Network Coding for Underwater Sensor Networks
Wang, Hao; Wang, Shilian; Bu, Renfei; Zhang, Eryang
2017-01-01
Underwater wireless sensor networks (UWSNs) have attracted increasing attention in recent years because of their numerous applications in ocean monitoring, resource discovery and tactical surveillance. However, the design of reliable and efficient transmission and routing protocols is a challenge due to the low acoustic propagation speed and complex channel environment in UWSNs. In this paper, we propose a novel cross-layer routing protocol based on network coding (NCRP) for UWSNs, which utilizes network coding and cross-layer design to greedily forward data packets to sink nodes efficiently. The proposed NCRP takes full advantage of multicast transmission and decodes packets jointly with encoded packets received from multiple potential nodes in the entire network. The transmission power is optimized in our design to extend the life cycle of the network. Moreover, we design a real-time routing maintenance protocol to update the route when inefficient relay nodes are detected. Substantial simulations in an underwater environment with Network Simulator 3 (NS-3) show that NCRP significantly improves the network performance in terms of energy consumption, end-to-end delay and packet delivery ratio compared with other routing protocols for UWSNs. PMID:28786915
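The core network-coding idea that a protocol like NCRP builds on can be sketched as follows: a relay XOR-combines equal-length packets so that a receiver holding all but one of the originals can recover the missing one from a single coded transmission. This is a generic Python illustration, not the authors' full protocol.

    def xor_encode(packets):
        """Combine equal-length packets into one coded packet by bytewise XOR."""
        coded = bytearray(len(packets[0]))
        for pkt in packets:
            for i, b in enumerate(pkt):
                coded[i] ^= b
        return bytes(coded)

    def xor_decode(coded, known_packets):
        """Recover the single missing packet given the coded packet and the rest."""
        return xor_encode([coded] + known_packets)

    p1, p2, p3 = b"sensor-A", b"sensor-B", b"sensor-C"
    coded = xor_encode([p1, p2, p3])      # one transmission from the relay
    print(xor_decode(coded, [p1, p3]))    # sink already has p1 and p3 -> recovers p2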
Application Transparent HTTP Over a Disruption Tolerant Smartnet
2014-09-01
[Acronym glossary fragment: American Standard Code for Information Interchange; BP, Bundle Protocol; BPA, bundle protocol agent; CLA, convergence layer adapters; CPU, central processing ...] ... forwarding them through the plugin pipeline. The initial version of the DTNInput plugin uses the BBN Spindle bundle protocol agent (BPA) implementation.
Enhancing Image Processing Performance for PCID in a Heterogeneous Network of Multi-core Processors
NASA Astrophysics Data System (ADS)
Linderman, R.; Spetka, S.; Fitzgerald, D.; Emeny, S.
The Physically-Constrained Iterative Deconvolution (PCID) image deblurring code is being ported to heterogeneous networks of multi-core systems, including Intel Xeons and IBM Cell Broadband Engines. This paper reports results from experiments using the JAWS supercomputer at MHPCC (60 TFLOPS of dual-dual Xeon nodes linked with Infiniband) and the Cell Cluster at AFRL in Rome, NY. The Cell Cluster has 52 TFLOPS of Playstation 3 (PS3) nodes with IBM Cell Broadband Engine multi-cores and 15 dual-quad Xeon head nodes. The interconnect fabric includes Infiniband, 10 Gigabit Ethernet and 1 Gigabit Ethernet to each of the 336 PS3s. The results compare approaches to parallelizing FFT executions across the Xeons and the Cell's Synergistic Processing Elements (SPEs) for frame-level image processing. The experiments included Intel's Performance Primitives and Math Kernel Library, FFTW3.2, and Carnegie Mellon's SPIRAL. Optimization of FFTs in the PCID code led to a decrease in relative processing time for FFTs. Profiling PCID version 6.2, about one year ago, showed that the 13 functions accounting for the highest percentage of processing were all FFT processing functions; they accounted for over 88% of processing time in one run on Xeons. FFT optimizations led to improvement in the current PCID version 8.0. A recent profile showed that only two of the 19 functions with the highest processing time were FFT processing functions. Timing measurements showed that FFT processing for PCID version 8.0 has been reduced to less than 19% of overall processing time. We are working toward a goal of scaling to 200-400 cores per job (1-2 imagery frames/core). Running a pair of cores on each set of frames reduces latency by implementing parallel FFT processing. Our current results show scaling well out to 100 pairs of cores. These results support the next higher level of parallelism in PCID, where groups of several hundred frames, each producing one resolved image, are sent to cliques of several hundred cores in a round-robin fashion. Current efforts toward further performance enhancement for PCID are shifting toward using the Playstations in conjunction with the Xeons to take advantage of their outstanding price/performance as well as the Flops/Watt cost advantage. We are fine-tuning the PCID parallelization strategy to balance processing over Xeons and Cell BEs to find an optimal partitioning of PCID over the heterogeneous processors. A high performance information management system that exploits native Infiniband multicast is used to improve latency among the head nodes. Using a publication/subscription oriented information management system to implement a unified communications platform makes runs on large HPCs with thousands of intercommunicating cores more flexible and more fault tolerant. It features a loose coupling of publishers to subscribers through intervening brokers. We are also working on enhancing performance for both Xeons and Cell BEs by moving selected operations to single precision. Techniques for adapting the code to single precision and the performance results are reported.
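A minimal sketch of the profiling workflow described above, using Python's built-in cProfile to confirm how much of the run time an FFT-heavy stage consumes; numpy is assumed to be available, and the synthetic workload is only a stand-in for the real frame-processing code.

    import cProfile, pstats
    import numpy as np

    def deblur_frame(frame):
        """Stand-in for one frame of iterative deconvolution: forward and inverse 2-D FFTs."""
        spectrum = np.fft.fft2(frame)
        return np.fft.ifft2(spectrum * np.conj(spectrum)).real

    def process_frames(n_frames=50, size=512):
        rng = np.random.default_rng(0)
        for _ in range(n_frames):
            deblur_frame(rng.standard_normal((size, size)))

    profiler = cProfile.Profile()
    profiler.runcall(process_frames)
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)  # top hotspots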
Remote direct memory access over datagrams
Grant, Ryan Eric; Rashti, Mohammad Javad; Balaji, Pavan; Afsahi, Ahmad
2014-12-02
A communication stack for providing remote direct memory access (RDMA) over a datagram network is disclosed. The communication stack has a user level interface configured to accept datagram related input and communicate with an RDMA enabled network interface card (NIC) via an NIC driver. The communication stack also has an RDMA protocol layer configured to supply one or more data transfer primitives for the datagram related input of the user level. The communication stack further has a direct data placement (DDP) layer configured to transfer the datagram related input from a user storage to a transport layer based on the one or more data transfer primitives by way of a lower layer protocol (LLP) over the datagram network.
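To make the layering concrete, here is a rough Python sketch of the described stack: a user-level primitive is handed to an RDMA protocol layer, and a DDP-like layer moves the bytes from a user buffer over a datagram (UDP) socket with a placement header. This is purely conceptual; the class names and header fields are invented for illustration and do not reflect the patented implementation or any real NIC driver interface.

    import socket, struct

    class DDPLayer:
        """Direct-data-placement-style layer: tags each datagram with the target
        buffer id and offset so the peer can place it without intermediate copies."""
        def __init__(self, sock):
            self.sock = sock
        def send_placement(self, addr, buf_id, offset, data):
            header = struct.pack("!IQ", buf_id, offset)
            self.sock.sendto(header + data, addr)

    class RDMAProtocolLayer:
        """Supplies data-transfer primitives (here, only a one-sided write)."""
        def __init__(self, ddp):
            self.ddp = ddp
        def rdma_write(self, addr, buf_id, offset, user_buffer):
            self.ddp.send_placement(addr, buf_id, offset, bytes(user_buffer))

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    stack = RDMAProtocolLayer(DDPLayer(sock))
    stack.rdma_write(("127.0.0.1", 9999), buf_id=1, offset=0, user_buffer=b"payload")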
Design and Development of Layered Security: Future Enhancements and Directions in Transmission
Shahzad, Aamir; Lee, Malrey; Kim, Suntae; Kim, Kangmin; Choi, Jae-Young; Cho, Younghwa; Lee, Keun-Kwang
2016-01-01
Today, security is a prominent issue when any type of communication is being undertaken. Like traditional networks, supervisory control and data acquisition (SCADA) systems suffer from a number of vulnerabilities. Numerous end-to-end security mechanisms have been proposed for the resolution of SCADA-system security issues, but due to insecure real-time protocol use and the reliance upon open protocols during Internet-based communication, these SCADA systems can still be compromised by security challenges. This study reviews the security challenges and issues that are commonly raised during SCADA/protocol transmissions and proposes a secure distributed-network protocol version 3 (DNP3) design, and the implementation of the security solution using a cryptography mechanism. Due to the insecurities found within SCADA protocols, the new development consists of a DNP3 protocol that has been designed as a part of the SCADA system, and the cryptographically derived security is deployed within the application layer as a part of the DNP3 stack. PMID:26751443
Design and Development of Layered Security: Future Enhancements and Directions in Transmission.
Shahzad, Aamir; Lee, Malrey; Kim, Suntae; Kim, Kangmin; Choi, Jae-Young; Cho, Younghwa; Lee, Keun-Kwang
2016-01-06
Today, security is a prominent issue when any type of communication is being undertaken. Like traditional networks, supervisory control and data acquisition (SCADA) systems suffer from a number of vulnerabilities. Numerous end-to-end security mechanisms have been proposed for the resolution of SCADA-system security issues, but due to insecure real-time protocol use and the reliance upon open protocols during Internet-based communication, these SCADA systems can still be compromised by security challenges. This study reviews the security challenges and issues that are commonly raised during SCADA/protocol transmissions and proposes a secure distributed-network protocol version 3 (DNP3) design, and the implementation of the security solution using a cryptography mechanism. Due to the insecurities found within SCADA protocols, the new development consists of a DNP3 protocol that has been designed as a part of the SCADA system, and the cryptographically derived security is deployed within the application layer as a part of the DNP3 stack.
Demonstrating a Realistic IP Mission Prototype
NASA Technical Reports Server (NTRS)
Rash, James; Ferrer, Arturo B.; Goodman, Nancy; Ghazi-Tehrani, Samira; Polk, Joe; Johnson, Lorin; Menke, Greg; Miller, Bill; Criscuolo, Ed; Hogie, Keith
2003-01-01
Flight software and hardware and realistic space communications environments were elements of recent demonstrations of the Internet Protocol (IP) mission concept in the lab. The Operating Missions as Nodes on the Internet (OMNI) Project and the Flight Software Branch at NASA/GSFC collaborated to build the prototype of a representative space mission that employed unmodified off-the-shelf Internet protocols and technologies for end-to-end communications between the spacecraft/instruments and the ground system/users. The realistic elements used in the prototype included an RF communications link simulator and components of the TRIANA mission flight software and ground support system. A web-enabled camera connected to the spacecraft computer via an Ethernet LAN represented an on-board instrument creating image data. In addition to the protocols at the link layer (HDLC), transport layer (UDP, TCP), and network (IP) layer, a reliable file delivery protocol (MDP) at the application layer enabled reliable data delivery both to and from the spacecraft. The standard Network Time Protocol (NTP) performed on-board clock synchronization with a ground time standard. The demonstrations of the prototype mission illustrated some of the advantages of using Internet standards and technologies for space missions, but also helped identify issues that must be addressed. These issues include applicability to embedded real-time systems on flight-qualified hardware, range of applicability of TCP, and liability for and maintenance of commercial off-the-shelf (COTS) products. The NASA Earth Science Technology Office (ESTO) funded the collaboration to build and demonstrate the prototype IP mission.
Protocol independent transmission method in software defined optical network
NASA Astrophysics Data System (ADS)
Liu, Yuze; Li, Hui; Hou, Yanfang; Qiu, Yajun; Ji, Yuefeng
2016-10-01
With the development of big data and cloud computing technology, the traditional software-defined network is facing new challenges (i.e., ubiquitous accessibility, higher bandwidth, more flexible management and greater security). Using a proprietary protocol or encoding format is a way to improve information security. However, a flow carried by a proprietary protocol or encoding cannot traverse the traditional IP network. In addition, ultra-high-definition video transmission services have once again become a hot topic. Traditionally, in the IP network, the Serial Digital Interface (SDI) signal must be compressed. This approach offers some advantages but also brings disadvantages such as signal degradation and high latency. To some extent, HD-SDI can also be regarded as a proprietary protocol, which needs transparent transmission, for example over an optical channel. However, traditional optical networks cannot support such flexible traffic. In response to the aforementioned challenges for future networks, one immediate solution would be to use NFV technology to abstract the network infrastructure and provide an all-optical switching topology graph for the SDN control plane. This paper proposes a new service-based software-defined optical network architecture, including an infrastructure layer, a virtualization layer, a service abstraction layer and an application layer. We then dwell on the corresponding service providing method in order to implement protocol-independent transport. Finally, we experimentally verify that the proposed service providing method can be applied to transmit the HD-SDI signal in the software-defined optical network.
Rani, Anupama; Sharma, Vivek; Arora, Sumit; Lal, Darshan; Kumar, Anil
2015-04-01
Detection of milk fat adulteration with foreign fats/oils continues to be a challenge for the dairy industry as well as for food testing laboratories, especially in the present scenario of rampant adulteration carried out with scientific knowledge by unscrupulous persons involved in the trade. In the present investigation a rapid reversed-phase thin layer chromatographic (RP-TLC) protocol was standardized to ascertain the purity of milk fat. The RP-TLC protocol did not show any false positive results for genuine ghee (clarified butter fat) samples of known origin. Adulteration of ghee with coconut oil up to 7.5%, soybean oil, sunflower oil and groundnut oil up to 1%, and designer oil up to the 2% level could be detected using the standardized RP-TLC protocol. The standardized protocol is rapid and convenient to use.
A Cross-Layer Duty Cycle MAC Protocol Supporting a Pipeline Feature for Wireless Sensor Networks
Tong, Fei; Xie, Rong; Shu, Lei; Kim, Young-Chon
2011-01-01
Although the conventional duty cycle MAC protocols for Wireless Sensor Networks (WSNs) such as RMAC perform well in terms of saving energy and reducing end-to-end delivery latency, they were designed independently and require an extra routing protocol in the network layer to provide path information for the MAC layer. In this paper, we propose a new cross-layer duty cycle MAC protocol with data forwarding supporting a pipeline feature (P-MAC) for WSNs. P-MAC first divides the whole network into many grades around the sink. Each node identifies its grade according to its logical hop distance to the sink and simultaneously establishes a sleep/wakeup schedule using the grade information. Those nodes in the same grade keep the same schedule, which is staggered with the schedule of the nodes in the adjacent grade. Then a variation of the RTS/CTS handshake mechanism is used to forward data continuously in a pipeline fashion from the higher grade to the lower grade nodes and finally to the sink. No extra routing overhead is needed, thus increasing the network scalability while maintaining the superiority of duty-cycling. The simulation results in OPNET show that P-MAC has better performance than S-MAC and RMAC in terms of packet delivery latency and energy efficiency. PMID:22163895
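A small sketch of the grade-assignment step described above: each node's grade is its hop distance to the sink (computed here with a breadth-first search over an assumed adjacency list), and the wake-up schedule of grade g is staggered relative to grade g-1 so data can flow down the pipeline. Node names and slot values are illustrative, not from the paper.

    from collections import deque

    def assign_grades(adjacency, sink):
        """Grade = hop distance to the sink, found by breadth-first search."""
        grades = {sink: 0}
        queue = deque([sink])
        while queue:
            node = queue.popleft()
            for neighbor in adjacency[node]:
                if neighbor not in grades:
                    grades[neighbor] = grades[node] + 1
                    queue.append(neighbor)
        return grades

    def wakeup_offset(grade, slot_ms=100):
        """Stagger schedules so a node wakes just before its lower-grade neighbor listens."""
        return grade * slot_ms

    adjacency = {"sink": ["a", "b"], "a": ["sink", "c"], "b": ["sink"], "c": ["a"]}
    grades = assign_grades(adjacency, "sink")
    print(grades, {n: wakeup_offset(g) for n, g in grades.items()})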
Multiple Path Static Routing Protocols for Packet Switched Networks.
1983-09-01
... model are: (1) Physical Layer, (2) Data Link Layer, (3) Network Layer, (4) Transport Layer, (5) Session Layer, (6) Presentation Layer, (7) Application Layer. ... The transport layer, also known as the host-host layer, accepts data from the session layer, splits it into smaller units if needed, passes these to the network layer, and ensures that all the pieces arrive correctly at the other end. It creates a distinct network connection for each transport ...
NASA Astrophysics Data System (ADS)
Anderson, J.; Bauer, K.; Borga, A.; Boterenbrood, H.; Chen, H.; Chen, K.; Drake, G.; Dönszelmann, M.; Francis, D.; Guest, D.; Gorini, B.; Joos, M.; Lanni, F.; Lehmann Miotto, G.; Levinson, L.; Narevicius, J.; Panduro Vazquez, W.; Roich, A.; Ryu, S.; Schreuder, F.; Schumacher, J.; Vandelli, W.; Vermeulen, J.; Whiteson, D.; Wu, W.; Zhang, J.
2016-12-01
The ATLAS Phase-I upgrade (2019) requires a Trigger and Data Acquisition (TDAQ) system able to trigger and record data from up to three times the nominal LHC instantaneous luminosity. The Front-End LInk eXchange (FELIX) system provides an infrastructure to achieve this in a scalable, detector agnostic and easily upgradeable way. It is a PC-based gateway, interfacing custom radiation tolerant optical links from front-end electronics, via PCIe Gen3 cards, to a commodity switched Ethernet or InfiniBand network. FELIX enables reducing custom electronics in favour of software running on commercial servers. The FELIX system, the design of the PCIe prototype card and the integration test results are presented in this paper.
New secure communication-layer standard for medical image management (ISCL)
NASA Astrophysics Data System (ADS)
Kita, Kouichi; Nohara, Takashi; Hosoba, Minoru; Yachida, Masuyoshi; Yamaguchi, Masahiro; Ohyama, Nagaaki
1999-07-01
This paper introduces a summary of the draft standard ISCL 1.00, which will be published officially by MEDIS-DC. ISCL is an abbreviation of Integrated Secure Communication Layer Protocols for Secure Medical Image Management Systems. ISCL is a security layer that manages security functions between the presentation layer and the TCP/IP layer. The ISCL mechanism depends on the basic functions of a smart IC card and a symmetric secret key mechanism. A symmetric key for each session is generated by the internal authentication function of a smart IC card with a random number. ISCL provides three functions that assure authentication, confidentiality and integrity. Entity authentication is performed through a 3-pass, 4-way method using the internal and external authentication functions of a smart IC card. The confidentiality algorithm and the MAC algorithm for integrity are selectable. ISCL protocols communicate through Message Blocks, each consisting of a Message Header and Message Data. ISCL protocols are being evaluated by applying them to a regional collaboration system for image diagnosis and an on-line secure electronic storage system for medical images. These projects are supported by the Medical Information System Development Center and show that ISCL is useful for maintaining security.
Fast assessment of planar chromatographic layers quality using pulse thermovision method.
Suszyński, Zbigniew; Świta, Robert; Loś, Joanna; Zarzycka, Magdalena B; Kaleniecka, Aleksandra; Zarzycki, Paweł K
2014-12-19
The main goal of this paper is to demonstrate the capability of the pulse thermovision (thermal-wave) methodology for sensitive detection of photothermal non-uniformities within light-scattering and semi-transparent planar stationary phases. Successful visualization of stationary phase defects required signal processing protocols based on wavelet filtration, correlation analysis and k-means 3D segmentation. Such a post-processing data handling approach allows extremely sensitive detection of thickness and structural changes within commercially available planar chromatographic layers. In particular, a number of TLC and HPTLC stationary phases including silica, cellulose, aluminum oxide, polyamide and octadecylsilane, coated with adsorbent layers ranging from 100 to 250 μm, were investigated. The presented detection protocol can be used as an efficient tool for fast screening of the overall heterogeneity of any layered material. Moreover, the described procedure is very fast (a few seconds including acquisition and data processing) and may be applied to online control of fabrication processes. Beyond planar chromatographic plates, this protocol can be used for the assessment of other planar separation tools such as paper-based analytical devices or micro total analysis systems consisting of organic and non-organic layers.
A Novel Addressing Scheme for PMIPv6 Based Global IP-WSNs
Islam, Md. Motaharul; Huh, Eui-Nam
2011-01-01
IP based Wireless Sensor Networks (IP-WSNs) are being used in healthcare, home automation, industrial control and agricultural monitoring. In most of these applications global addressing of individual IP-WSN nodes and layer-three routing for mobility enabled IP-WSN with special attention to reliability, energy efficiency and end to end delay minimization are a few of the major issues to be addressed. Most of the routing protocols in WSN are based on layer-two approaches. For reliability and end to end communication enhancement the necessity of layer-three routing for IP-WSNs is generating significant attention among the research community, but due to the hurdle of maintaining routing state and other communication overhead, it was not possible to introduce a layer-three routing protocol for IP-WSNs. To address this issue we propose in this paper a global addressing scheme and layer-three based hierarchical routing protocol. The proposed addressing and routing approach focuses on all the above mentioned issues. Simulation results show that the proposed addressing and routing approach significantly enhances the reliability, energy efficiency and end to end delay minimization. We also present architecture, message formats and different routing scenarios in this paper. PMID:22164084
A novel addressing scheme for PMIPv6 based global IP-WSNs.
Islam, Md Motaharul; Huh, Eui-Nam
2011-01-01
IP based Wireless Sensor Networks (IP-WSNs) are being used in healthcare, home automation, industrial control and agricultural monitoring. In most of these applications global addressing of individual IP-WSN nodes and layer-three routing for mobility enabled IP-WSN with special attention to reliability, energy efficiency and end to end delay minimization are a few of the major issues to be addressed. Most of the routing protocols in WSN are based on layer-two approaches. For reliability and end to end communication enhancement the necessity of layer-three routing for IP-WSNs is generating significant attention among the research community, but due to the hurdle of maintaining routing state and other communication overhead, it was not possible to introduce a layer-three routing protocol for IP-WSNs. To address this issue we propose in this paper a global addressing scheme and layer-three based hierarchical routing protocol. The proposed addressing and routing approach focuses on all the above mentioned issues. Simulation results show that the proposed addressing and routing approach significantly enhances the reliability, energy efficiency and end to end delay minimization. We also present architecture, message formats and different routing scenarios in this paper.
One Approach for Transitioning the iNET Standards into the IRIG 106 Telemetry Standards
2015-05-26
... Protocol Suite. Figure 1 illustrates the Open Systems Interconnection (OSI) Model, the corresponding TCP/IP Model, and the major components of the TCP/IP Protocol Suite. Figure 2 represents the iNET-specific protocols layered onto the TCP/IP Model. [Figure 1 caption: OSI and TCP/IP Model with TCP/IP Protocol Suite major components; residual figure labels (IPv4, IPv6, Application, Presentation, etc.) omitted.]
NASA Technical Reports Server (NTRS)
Wallett, Thomas M.
2009-01-01
This paper surveys and describes some of the existing media access control and data link layer technologies for possible application in lunar surface communications and the advanced wideband Direct Sequence Code Division Multiple Access (DSCDMA) conceptual systems utilizing phased-array technology that will evolve in the next decade. Time Division Multiple Access (TDMA) and Code Division Multiple Access (CDMA) are standard Media Access Control (MAC) techniques that can be incorporated into lunar surface communications architectures. Another novel hybrid technique that has recently been developed for use with smart antenna technology combines the advantages of CDMA with those of TDMA. The relatively new and sundry wireless LAN data link layer protocols that are continually under development offer distinct advantages for lunar surface applications over the legacy protocols, which are not wireless. Also, several communication transport and routing protocols can be chosen with characteristics commensurate with smart antenna systems to provide spacecraft communications for links exhibiting high capacity on the surface of the Moon. The proper choices depend on the specific communication requirements.
Suraniti, Emmanuel; Studer, Vincent; Sojic, Neso; Mano, Nicolas
2011-04-01
Immobilization and electrical wiring of enzymes are of particular importance for the elaboration of efficient biosensors and can be cumbersome. Here, we report a fast and easy protocol for enzyme immobilization, and as a proof of concept, we applied it to the immobilization of bilirubin oxidase, a labile enzyme. In the first step, bilirubin oxidase is mixed with a redox hydrogel "wiring" the enzyme reaction centers to electrodes. Then, this adduct is covered by an outer layer made by photoinitiated polymerization of poly(ethylene glycol) diacrylate (PEGDA) and a photocleavable precursor, DAROCUR. This two-step protocol is 18 times faster than the current state-of-the-art protocol and leads to currents 25% higher. In addition, the outer layer of PEGDA acts as a protective layer, increasing the lifetime of the electrode by 100% when operating continuously for 2000 s and by 60% when kept in a dry state for 24 h. This new protocol is particularly appropriate for labile enzymes that quickly denature. In addition, by tuning the PEGDA/DAROCUR ratio, it is possible to make the enzyme electrodes even more active or more stable.
MTP: An atomic multicast transport protocol
NASA Technical Reports Server (NTRS)
Freier, Alan O.; Marzullo, Keith
1990-01-01
The Multicast Transport Protocol (MTP), a reliable transport protocol that utilizes the multicast strategy of applicable lower-layer network architectures, is described. In addition to transporting data reliably and efficiently, MTP provides the client synchronization necessary for agreement on the receipt of data and the joining of the group of communicants.
Advertisement-Based Energy Efficient Medium Access Protocols for Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Ray, Surjya Sarathi
One of the main challenges that prevents the large-scale deployment of Wireless Sensor Networks (WSNs) is providing the applications with the required quality of service (QoS) given the sensor nodes' limited energy supplies. WSNs are an important tool in supporting applications ranging from environmental and industrial monitoring, to battlefield surveillance and traffic control, among others. Most of these applications require sensors to function for long periods of time without human intervention and without battery replacement. Therefore, energy conservation is one of the main goals for protocols for WSNs. Energy conservation can be performed in different layers of the protocol stack. In particular, as the medium access control (MAC) layer can access and control the radio directly, large energy savings is possible through intelligent MAC protocol design. To maximize the network lifetime, MAC protocols for WSNs aim to minimize idle listening of the sensor nodes, packet collisions, and overhearing. Several approaches such as duty cycling and low power listening have been proposed at the MAC layer to achieve energy efficiency. In this thesis, I explore the possibility of further energy savings through the advertisement of data packets in the MAC layer. In the first part of my research, I propose Advertisement-MAC or ADV-MAC, a new MAC protocol for WSNs that utilizes the concept of advertising for data contention. This technique lets nodes listen dynamically to any desired transmission and sleep during transmissions not of interest. This minimizes the energy lost in idle listening and overhearing while maintaining an adaptive duty cycle to handle variable loads. Additionally, ADV-MAC enables energy efficient MAC-level multicasting. An analytical model for the packet delivery ratio and the energy consumption of the protocol is also proposed. The analytical model is verified with simulations and is used to choose an optimal value of the advertisement period. Simulations show that the optimized ADV-MAC provides substantial energy gains (50% to 70% less than other MAC protocols for WSNs such as T-MAC and S-MAC for the scenarios investigated) while faring as well as T-MAC in terms of packet delivery ratio and latency. Although ADV-MAC provides substantial energy gains over S-MAC and T-MAC, it is not optimal in terms of energy savings because contention is done twice -- once in the Advertisement Period and once in the Data Period. In the next part of my research, the second contention in the Data Period is eliminated and the advantages of contention-based and TDMA-based protocols are combined to form Advertisement based Time-division Multiple Access (ATMA), a distributed TDMA-based MAC protocol for WSNs. ATMA utilizes the bursty nature of the traffic to prevent energy waste through advertisements and reservations for data slots. Extensive simulations and qualitative analysis show that with bursty traffic, ATMA outperforms contention-based protocols (S-MAC, T-MAC and ADV-MAC), a TDMA based protocol (TRAMA) and hybrid protocols (Z-MAC and IEEE 802.15.4). ATMA provides energy reductions of up to 80%, while providing the best packet delivery ratio (close to 100%) and latency among all the investigated protocols. Simulations alone cannot reflect many of the challenges faced by real implementations of MAC protocols, such as clock-drift, synchronization, imperfect physical layers, and irregular interference from other transmissions. 
Such issues may cripple a protocol that otherwise performs very well in software simulations. Hence, to validate my research, I conclude with a hardware implementation of the ATMA protocol on SORA (Software Radio), developed by Microsoft Research Asia. SORA is a reprogrammable Software Defined Radio (SDR) platform that satisfies the throughput and timing requirements of modern wireless protocols while utilizing the rich general purpose PC development environment. Experimental results obtained from the hardware implementation of ATMA closely mirror the simulation results obtained for a single hop network with 4 nodes.
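A toy version of the per-node energy accounting that an analytical model like the one mentioned above rests on: energy is the sum of time spent in each radio state weighted by that state's power draw, so shrinking idle listening and overhearing directly shrinks the total. The power figures below are placeholders, not values from the thesis.

    # Illustrative radio power draws in milliwatts (placeholders, not measured values)
    POWER_MW = {"tx": 52.0, "rx": 59.0, "idle": 50.0, "sleep": 0.02}

    def node_energy_mj(times_s):
        """Energy in millijoules given seconds spent in each radio state."""
        return sum(POWER_MW[state] * t for state, t in times_s.items())

    # Duty-cycled node that advertises and sleeps through uninteresting traffic
    adv_mac = {"tx": 0.5, "rx": 1.0, "idle": 2.0, "sleep": 96.5}
    # Node that stays awake listening for the whole 100 s window
    always_on = {"tx": 0.5, "rx": 1.0, "idle": 98.5, "sleep": 0.0}

    print("ADV-style node: %.1f mJ" % node_energy_mj(adv_mac))
    print("Always-on node: %.1f mJ" % node_energy_mj(always_on))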
Authentication Binding between SSL/TLS and HTTP
NASA Astrophysics Data System (ADS)
Saito, Takamichi; Sekiguchi, Kiyomi; Hatsugai, Ryosuke
While the Secure Sockets Layer or Transport Layer Security (SSL/TLS) is assumed to provide secure communications over the Internet, many web applications utilize basic or digest authentication of the Hypertext Transfer Protocol (HTTP) over SSL/TLS. In this arrangement there are two different authentication schemes in a session. Since they are separated by a layer, this is not convenient for a web application. Moreover, the scheme may also cause problems in establishing secure communication. We therefore provide a scheme of authentication binding between SSL/TLS and HTTP without modifying the SSL/TLS protocols or their implementations, and we show the effectiveness of our proposed scheme.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-19
... Protocol on Substances That Deplete the Ozone Layer (Protocol) and Title VI of the Clean Air Act Amendments (CAAA) established limits on total U.S. production, import, and export of class I and class II... transformed, destroyed, or exported to developing countries. The Protocol also establishes limits and...
Sefuba, Maria; Walingo, Tom; Takawira, Fambirai
2015-09-18
This paper presents an Energy Efficient Medium Access Control (MAC) protocol for clustered wireless sensor networks that aims to improve energy efficiency and delay performance. The proposed protocol employs an adaptive cross-layer intra-cluster scheduling and an inter-cluster relay selection diversity. The scheduling is based on available data packets and remaining energy level of the source node (SN). This helps to minimize idle listening on nodes without data to transmit as well as reducing control packet overhead. The relay selection diversity is carried out between clusters, by the cluster head (CH), and the base station (BS). The diversity helps to improve network reliability and prolong the network lifetime. Relay selection is determined based on the communication distance, the remaining energy and the channel quality indicator (CQI) for the relay cluster head (RCH). An analytical framework for energy consumption and transmission delay for the proposed MAC protocol is presented in this work. The performance of the proposed MAC protocol is evaluated based on transmission delay, energy consumption, and network lifetime. The results obtained indicate that the proposed MAC protocol provides improved performance than traditional cluster based MAC protocols.
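To illustrate the relay-selection step, here is a toy scoring function that weighs the three criteria the abstract names (communication distance, remaining energy, and CQI) and picks the best relay cluster head; the weights and candidate values are invented for the example, not taken from the paper.

    def relay_score(candidate, w_dist=0.4, w_energy=0.4, w_cqi=0.2):
        """Higher is better: shorter distance, more residual energy, better channel."""
        return (w_dist * (1.0 - candidate["distance_norm"])
                + w_energy * candidate["energy_norm"]
                + w_cqi * candidate["cqi_norm"])

    candidates = [
        {"id": "RCH-1", "distance_norm": 0.2, "energy_norm": 0.9, "cqi_norm": 0.6},
        {"id": "RCH-2", "distance_norm": 0.7, "energy_norm": 0.5, "cqi_norm": 0.9},
        {"id": "RCH-3", "distance_norm": 0.4, "energy_norm": 0.3, "cqi_norm": 0.8},
    ]
    best = max(candidates, key=relay_score)
    print(best["id"], round(relay_score(best), 3))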
Sefuba, Maria; Walingo, Tom; Takawira, Fambirai
2015-01-01
This paper presents an Energy Efficient Medium Access Control (MAC) protocol for clustered wireless sensor networks that aims to improve energy efficiency and delay performance. The proposed protocol employs an adaptive cross-layer intra-cluster scheduling and an inter-cluster relay selection diversity. The scheduling is based on available data packets and remaining energy level of the source node (SN). This helps to minimize idle listening on nodes without data to transmit as well as reducing control packet overhead. The relay selection diversity is carried out between clusters, by the cluster head (CH), and the base station (BS). The diversity helps to improve network reliability and prolong the network lifetime. Relay selection is determined based on the communication distance, the remaining energy and the channel quality indicator (CQI) for the relay cluster head (RCH). An analytical framework for energy consumption and transmission delay for the proposed MAC protocol is presented in this work. The performance of the proposed MAC protocol is evaluated based on transmission delay, energy consumption, and network lifetime. The results obtained indicate that the proposed MAC protocol provides improved performance than traditional cluster based MAC protocols. PMID:26393608
NASA Astrophysics Data System (ADS)
Jin, Yi; Zhai, Chao; Gu, Yonggang; Zhou, Zengxiang; Gai, Xiaofeng
2010-07-01
In the LAMOST (Large Sky Area Multi-Object Optical Spectroscopic Telescope) optical fiber positioning and control system, 4,000 fiber positioning units need to be positioned precisely, and every fiber positioning unit needs two stepper motors to drive it, so 8,000 stepper motors need to be controlled in the entire system. A wireless communication mode is adopted to save installation space on the back of the focal panel, and it can eliminate more than 95% of the external wires compared to the traditional cable control mode. This paper studies how to use ZigBee technology to group these 8,000 nodes and explores the pros and cons of star and tree networks in order to search the stars quickly and efficiently. ZigBee is a short-distance, low-complexity, low-power, low-data-rate, low-cost two-way wireless communication technology based on the IEEE 802.15.4 protocol. It follows the standard Open Systems Interconnection (OSI) model: the 802.15.4 standard specifies the lower protocol layers, the physical layer (PHY) and the media access control (MAC) layer, while the ZigBee Alliance defined the remaining layers on this basis, such as the network layer and the application layer, and is responsible for high-level applications, testing and marketing. The network layer used here, based on ad hoc network protocols, includes the following functions: construction and maintenance of the topological structure; naming and associated services, which involve addressing, routing and security; and self-organizing and self-maintenance functions, which minimize cost and maintenance overhead. In this paper, Freescale's 802.15.4 protocol stack was used to configure the network layer. Star and tree network topologies are realized, which can build the network, maintain the network and create routing functions automatically. A concise tree-network address allocation algorithm is presented to assign network IDs automatically.
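The tree address allocation the abstract refers to can be sketched with the distributed scheme from the ZigBee specification, where each router is given an address block sized by a Cskip function of its depth; the parameter values below (max children, max router children, max depth) are illustrative and may differ from those used in LAMOST.

    def cskip(depth, cm=6, rm=4, lm=3):
        """Size of the address block delegated to each child router at this depth.
        cm: max children per router, rm: max router children, lm: max tree depth."""
        if depth >= lm:
            return 0
        if rm == 1:
            return 1 + cm * (lm - depth - 1)
        # Positive rearrangement of the ZigBee formula; always an exact integer
        return (cm * rm ** (lm - depth - 1) + rm - cm - 1) // (rm - 1)

    def child_addresses(parent_addr, depth, cm=6, rm=4, lm=3):
        """Addresses assigned to router children and end-device children."""
        skip = cskip(depth, cm, rm, lm)
        routers = [parent_addr + 1 + k * skip for k in range(rm)]
        end_devices = [parent_addr + rm * skip + n for n in range(1, cm - rm + 1)]
        return routers, end_devices

    print(cskip(0), child_addresses(0, 0))   # coordinator at depth 0, address 0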
Lottanti, S; Gautschi, H; Sener, B; Zehnder, M
2009-04-01
To evaluate the effects of ethylenediaminetetraacetic acid (EDTA), etidronic acid (EA) and peracetic acid (PA), when used in conjunction with sodium hypochlorite (NaOCl) as root canal irrigants, on calcium eluted from canals, the smear layer, and root dentine demineralization after instrumentation/irrigation. Single-rooted human premolars were irrigated as follows (n = 12 per group): (1) 1% NaOCl during instrumentation, deionized water after instrumentation, (2) 1% NaOCl during, 17% EDTA after instrumentation, (3) a 1:1 mixture of 2% NaOCl and 18% EA during and after instrumentation, and (4) 1% NaOCl during, 2.25% PA after instrumentation. Irrigant volumes and contact times were 10 mL/15 min during and 5 mL/3 min after instrumentation. The evaluated outcomes were eluted calcium by atomic absorption spectroscopy, smear-covered areas by scanning electron microscopy in secondary electron mode and apparent canal wall decalcifications on root transections in backscatter mode. For the smear layer analysis, sclerotic dentine was taken into consideration. Results were compared using appropriate parametric and nonparametric tests, alpha = 0.05. The statistical comparison of the protocols regarding calcium elution revealed that protocol (1) yielded less calcium than (3), which yielded less than protocols (2) and (4). Most of the instrumented canal walls treated with one of the decalcifying agents were free of smear layer. Protocols (1) and (3) caused no decalcification of root dentine, whilst (2) and (4) showed substance-typical demineralization patterns. The decalcifying agents under investigation were all able to remove or prevent a smear layer. However, they eroded the dentine wall differently.
NASA Technical Reports Server (NTRS)
Pang, Jackson; Pingree, Paula J.; Torgerson, J. Leigh
2006-01-01
We present the Telecommunications protocol processing subsystem using Reconfigurable Interoperable Gate Arrays (TRIGA), a novel approach that unifies fault tolerance, error correction coding and interplanetary communication protocol off-loading to implement CCSDS File Delivery Protocol and Datalink layers. The new reconfigurable architecture offers more than one order of magnitude throughput increase while reducing footprint requirements in memory, command and data handling processor utilization, communication system interconnects and power consumption.
Layer-by-layer assembly of patchy particles as a route to nontrivial structures
NASA Astrophysics Data System (ADS)
Patra, Niladri; Tkachenko, Alexei V.
2017-08-01
We propose a strategy for robust high-quality self-assembly of nontrivial periodic structures out of patchy particles and investigate it with Brownian dynamics simulations. Its first element is the use of specific patch-patch and shell-shell interactions between the particles, which can be implemented through differential functionalization of patched and shell regions with specific DNA strands. The other key element of our approach is the use of a layer-by-layer protocol that allows one to avoid the formation of undesired random aggregates. As an example, we design and self-assemble in silico a version of a double diamond lattice in which four particle types are arranged into bcc crystal made of four fcc sublattices. The lattice can be further converted to cubic diamond by selective removal of the particles of certain types. Our results demonstrate that by combining the directionality, selectivity of interactions, and the layer-by-layer protocol, a high-quality robust self-assembly can be achieved.
Layer-by-layer assembly of patchy particles as a route to nontrivial structures
Patra, Niladri; Tkachenko, Alexei V.
2017-08-02
Here, we propose a strategy for robust high-quality self-assembly of nontrivial periodic structures out of patchy particles and investigate it with Brownian dynamics simulations. Its first element is the use of specific patch-patch and shell-shell interactions between the particles, which can be implemented through differential functionalization of patched and shell regions with specific DNA strands. The other key element of our approach is the use of a layer-by-layer protocol that allows one to avoid the formation of undesired random aggregates. As an example, we design and self-assemble in silico a version of a double diamond lattice in which four particle types are arranged into bcc crystal made of four fcc sublattices. The lattice can be further converted to cubic diamond by selective removal of the particles of certain types. These results demonstrate that by combining the directionality, selectivity of interactions, and the layer-by-layer protocol, a high-quality robust self-assembly can be achieved.
Application Protocol, Initial Graphics Exchange Specification (IGES), Layered Electrical Product
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Connell, L.J.
1994-12-01
An application protocol is an information systems engineering view of a specific product. The view represents an agreement on the generic activities needed to design and fabricate the product, the agreement on the information needed to support those activities, and the specific constructs of a product data standard for use in transferring some or all of the information required. This application protocol describes the data for electrical and electronic products in terms of a product description standard called the Initial Graphics Exchange Specification (IGES). More specifically, the Layered Electrical Product IGES Application Protocol (AP) specifies the mechanisms for defining and exchanging computer models and their associated data for those products which have been designed in two-dimensional geometry so as to be produced as a series of layers in IGES format. The AP defines the appropriateness of the data items for describing the geometry of the various parts of a product (shape and location), the connectivity, and the processing and material characteristics. Excluded are the behavioral requirements which the product was intended to satisfy, except as those requirements have been recorded as design rules or product testing requirements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-09-01
An application protocol is an information systems engineering view of a specific product. The view represents an agreement on the generic activities needed to design and fabricate the product, the agreement on the information needed to support those activities, and the specific constructs of a product data standard for use in transferring some or all of the information required. This application protocol describes the data for electrical and electronic products in terms of a product description standard called the Initial Graphics Exchange Specification (IGES). More specifically, the Layered Electrical Product IGES Application Protocol (AP) specifies the mechanisms for defining and exchanging computer models and their associated data for those products which have been designed in two-dimensional geometry so as to be produced as a series of layers in IGES format. The AP defines the appropriateness of the data items for describing the geometry of the various parts of a product (shape and location), the connectivity, and the processing and material characteristics. Excluded are the behavioral requirements which the product was intended to satisfy, except as those requirements have been recorded as design rules or product testing requirements.
Llor, Jesús; Malumbres, Manuel P
2012-01-01
Several Medium Access Control (MAC) and routing protocols have been developed in the last years for Underwater Wireless Sensor Networks (UWSNs). One of the main difficulties to compare and validate the performance of different proposals is the lack of a common standard to model the acoustic propagation in the underwater environment. In this paper we analyze the evolution of underwater acoustic prediction models from a simple approach to more detailed and accurate models. Then, different high layer network protocols are tested with different acoustic propagation models in order to determine the influence of environmental parameters on the obtained results. After several experiments, we can conclude that higher-level protocols are sensitive to both: (a) physical layer parameters related to the network scenario and (b) the acoustic propagation model. Conditions like ocean surface activity, scenario location, bathymetry or floor sediment composition, may change the signal propagation behavior. So, when designing network architectures for UWSNs, the role of the physical layer should be seriously taken into account in order to assert that the obtained simulation results will be close to the ones obtained in real network scenarios.
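As a concrete example of the "simple approach" end of the propagation-model spectrum discussed above, the following Python sketch combines practical spreading loss with Thorp's empirical absorption formula to estimate acoustic transmission loss; it is a textbook-style approximation, not one of the specific models evaluated in the paper.

    import math

    def thorp_absorption_db_per_km(f_khz):
        """Thorp's empirical absorption coefficient (dB/km) for frequency in kHz."""
        f2 = f_khz ** 2
        return (0.11 * f2 / (1 + f2)
                + 44.0 * f2 / (4100 + f2)
                + 2.75e-4 * f2
                + 0.003)

    def transmission_loss_db(distance_m, f_khz, spreading_exp=1.5):
        """Practical spreading (k = 1.5) plus frequency-dependent absorption."""
        spreading = 10 * spreading_exp * math.log10(distance_m)
        absorption = thorp_absorption_db_per_km(f_khz) * (distance_m / 1000.0)
        return spreading + absorption

    for d in (100, 1000, 5000):                 # metres
        print(d, round(transmission_loss_db(d, f_khz=20.0), 1), "dB")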
Llor, Jesús; Malumbres, Manuel P.
2012-01-01
Several Medium Access Control (MAC) and routing protocols have been developed in the last years for Underwater Wireless Sensor Networks (UWSNs). One of the main difficulties to compare and validate the performance of different proposals is the lack of a common standard to model the acoustic propagation in the underwater environment. In this paper we analyze the evolution of underwater acoustic prediction models from a simple approach to more detailed and accurate models. Then, different high layer network protocols are tested with different acoustic propagation models in order to determine the influence of environmental parameters on the obtained results. After several experiments, we can conclude that higher-level protocols are sensitive to both: (a) physical layer parameters related to the network scenario and (b) the acoustic propagation model. Conditions like ocean surface activity, scenario location, bathymetry or floor sediment composition, may change the signal propagation behavior. So, when designing network architectures for UWSNs, the role of the physical layer should be seriously taken into account in order to assert that the obtained simulation results will be close to the ones obtained in real network scenarios. PMID:22438712
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, J.; Bauer, K.; Borga, A.
The ATLAS Phase-I upgrade (2019) requires a Trigger and Data Acquisition (TDAQ) system able to trigger and record data from up to three times the nominal LHC instantaneous luminosity. Furthermore, the Front-End LInk eXchange (FELIX) system provides an infrastructure to achieve this in a scalable, detector agnostic and easily upgradeable way. It is a PC-based gateway, interfacing custom radiation tolerant optical links from front-end electronics, via PCIe Gen3 cards, to a commodity switched Ethernet or InfiniBand network. FELIX enables reducing custom electronics in favour of software running on commercial servers. Here, the FELIX system, the design of the PCIe prototype card and the integration test results are presented.
Anderson, J.; Bauer, K.; Borga, A.; ...
2016-12-13
The ATLAS Phase-I upgrade (2019) requires a Trigger and Data Acquisition (TDAQ) system able to trigger and record data from up to three times the nominal LHC instantaneous luminosity. Furthermore, the Front-End LInk eXchange (FELIX) system provides an infrastructure to achieve this in a scalable, detector agnostic and easily upgradeable way. It is a PC-based gateway, interfacing custom radiation tolerant optical links from front-end electronics, via PCIe Gen3 cards, to a commodity switched Ethernet or InfiniBand network. FELIX enables reducing custom electronics in favour of software running on commercial servers. Here, the FELIX system, the design of the PCIe prototype card and the integration test results are presented.
Reliable WDM multicast in optical burst-switched networks
NASA Astrophysics Data System (ADS)
Jeong, Myoungki; Qiao, Chunming; Xiong, Yijun
2000-09-01
In this paper, we present a reliable WDM (Wavelength-Division Multiplexing) multicast protocol for optical burst-switched (OBS) networks. Since the burst dropping (loss) probability may be potentially high in a heavily loaded OBS backbone network, reliable multicast protocols that have been developed for IP networks at the transport (or application) layer may incur heavy overheads such as a large number of duplicate retransmissions. In addition, it may take a longer time for an end host to detect and then recover from burst dropping (loss) that occurred at the WDM layer. For efficiency reasons, we propose burst loss recovery within the OBS backbone (i.e., at the WDM link layer). The proposed protocol requires two additional functions to be performed by the WDM switch controller: subcasting and maintaining burst states, when the WDM switch has more than one downstream branch on the WDM multicast tree. We show that these additional functions are simple to implement and that the overhead associated with them is manageable.
SPP: A data base processor data communications protocol
NASA Technical Reports Server (NTRS)
Fishwick, P. A.
1983-01-01
The design and implementation of a data communications protocol for the Intel Data Base Processor (DBP) is defined. The protocol is termed SPP (Service Port Protocol) since it enables data transfer between the host computer and the DBP service port. The protocol implementation is extensible in that it is explicitly layered and the protocol functionality is hierarchically organized. Extensive trace and performance capabilities have been supplied with the protocol software to permit optional efficient monitoring of the data transfer between the host and the Intel data base processor. Machine independence was considered to be an important attribute during the design and implementation of SPP. The protocol source is fully commented and is included in Appendix A of this report.
Adaptation technology between IP layer and optical layer in optical Internet
NASA Astrophysics Data System (ADS)
Ji, Yuefeng; Li, Hua; Sun, Yongmei
2001-10-01
The wavelength division multiplexing (WDM) optical network provides a platform with high bandwidth capacity and is expected to be the backbone infrastructure supporting next-generation high-speed multi-service networks (ATM, IP, etc.). In the foreseeable future, IP will be the predominant data traffic; to make full use of the bandwidth of the WDM optical network, much attention has been focused on IP over WDM, which has been proposed as the most promising technology for a new kind of network, the so-called Optical Internet. According to the OSI model, IP is in the 3rd layer (network layer) and the optical network is in the 1st layer (physical layer), so the key issue is what adaptation technology should be used in the 2nd layer (data link layer). In this paper, we first analyze and compare the current adaptation technologies used in backbone networks today. Secondly, addressing the drawbacks of the above technologies, we present a novel adaptation protocol (DONA) between the IP layer and the optical layer in the Optical Internet and describe it in detail. Thirdly, the gigabit transmission adapter (GTA) we implemented based on the novel protocol is described. Finally, we set up an experimental platform to apply and verify DONA and the GTA; the results and conclusions of the experiment are given.
A Lightweight Protocol for Secure Video Streaming
Morkevicius, Nerijus; Bagdonas, Kazimieras
2018-01-01
The Internet of Things (IoT) introduces many new challenges which cannot be solved using traditional cloud and host computing models. A new architecture known as fog computing is emerging to address these technological and security gaps. Traditional security paradigms focused on providing perimeter-based protections and client/server point to point protocols (e.g., Transport Layer Security (TLS)) are no longer the best choices for addressing new security challenges in fog computing end devices, where energy and computational resources are limited. In this paper, we present a lightweight secure streaming protocol for the fog computing “Fog Node-End Device” layer. This protocol is lightweight, connectionless, supports broadcast and multicast operations, and is able to provide data source authentication, data integrity, and confidentiality. The protocol is based on simple and energy efficient cryptographic methods, such as Hash Message Authentication Codes (HMAC) and symmetrical ciphers, and uses modified User Datagram Protocol (UDP) packets to embed authentication data into streaming data. Data redundancy could be added to improve reliability in lossy networks. The experimental results summarized in this paper confirm that the proposed method efficiently uses energy and computational resources and at the same time provides security properties on par with the Datagram TLS (DTLS) standard. PMID:29757988
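A minimal sketch of the packet-format idea described above: a modified UDP payload that carries a sequence number and an HMAC tag alongside the media chunk, so the receiver can check source authenticity and integrity per datagram. It uses Python's standard library only, a fixed shared key, and invented field sizes; it is not the authors' exact packet layout, and the confidentiality step (the symmetric cipher) is omitted.

    import hmac, hashlib, socket, struct

    KEY = b"fog-node-shared-key"                 # illustrative pre-shared key

    def make_stream_packet(seq, media_chunk):
        """Embed a sequence number and HMAC-SHA256 tag with the media bytes."""
        header = struct.pack("!I", seq)
        tag = hmac.new(KEY, header + media_chunk, hashlib.sha256).digest()
        return header + tag + media_chunk

    def check_stream_packet(datagram):
        header, tag, chunk = datagram[:4], datagram[4:36], datagram[36:]
        expected = hmac.new(KEY, header + chunk, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return None                          # drop forged or corrupted packet
        return struct.unpack("!I", header)[0], chunk

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    pkt = make_stream_packet(1, b"\x00" * 64)    # stand-in for one video chunk
    sock.sendto(pkt, ("127.0.0.1", 5004))
    print(check_stream_packet(pkt))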
A Lightweight Protocol for Secure Video Streaming.
Venčkauskas, Algimantas; Morkevicius, Nerijus; Bagdonas, Kazimieras; Damaševičius, Robertas; Maskeliūnas, Rytis
2018-05-14
The Internet of Things (IoT) introduces many new challenges which cannot be solved using traditional cloud and host computing models. A new architecture known as fog computing is emerging to address these technological and security gaps. Traditional security paradigms focused on providing perimeter-based protections and client/server point to point protocols (e.g., Transport Layer Security (TLS)) are no longer the best choices for addressing new security challenges in fog computing end devices, where energy and computational resources are limited. In this paper, we present a lightweight secure streaming protocol for the fog computing "Fog Node-End Device" layer. This protocol is lightweight, connectionless, supports broadcast and multicast operations, and is able to provide data source authentication, data integrity, and confidentiality. The protocol is based on simple and energy efficient cryptographic methods, such as Hash Message Authentication Codes (HMAC) and symmetrical ciphers, and uses modified User Datagram Protocol (UDP) packets to embed authentication data into streaming data. Data redundancy could be added to improve reliability in lossy networks. The experimental results summarized in this paper confirm that the proposed method efficiently uses energy and computational resources and at the same time provides security properties on par with the Datagram TLS (DTLS) standard.
2008-01-01
This year the Montreal Protocol celebrates its 20th Anniversary. In September 1987, 24 countries signed the Montreal Protocol on Substances that Deplete the Ozone Layer. Today 191 countries have signed and have met strict commitments on phasing out ozone-depleting substances, with the result that a 95% reduction of these substances has been achieved. The Montreal Protocol has also contributed to slowing the rate of global climate change, since most of the ozone-depleting substances are also effective greenhouse gases. Even though much has been achieved, the future of the stratospheric ozone layer relies on full compliance with the Montreal Protocol by all countries for the remaining substances, including methyl bromide, as well as strict monitoring of potential risks from the production of substitute chemicals. Also, the ozone-depleting substances existing in banks and equipment need special attention to prevent their release to the stratosphere. Since many of the ozone-depleting substances already in the atmosphere are long-lived, recovery cannot be immediate, and present projections estimate a return to pre-1980 levels by 2050 to 2075. It has also been predicted that the interactions between the effects of the ozone layer and those of other climate change factors will become increasingly important.
Fixed-rate layered multicast congestion control
NASA Astrophysics Data System (ADS)
Bing, Zhang; Bing, Yuan; Zengji, Liu
2006-10-01
A new fixed-rate layered multicast congestion control algorithm called FLMCC is proposed. The sender of a multicast session transmits data packets at a fixed rate on each layer, while receivers each obtain different throughput by cumulatively subscribing to different numbers of layers based on their expected rates. In order to provide TCP-friendliness and estimate the expected rate accurately, a window-based mechanism implemented at receivers is presented. To achieve this, each receiver maintains a congestion window, adjusts it based on the GAIMD algorithm, and calculates an expected rate from the congestion window. To measure RTT, a new method is presented which combines an accurate measurement with a rough estimation. A feedback suppression mechanism based on a random timer is used to avoid feedback implosion during the accurate measurement. The protocol is simple to implement. Simulations indicate that FLMCC shows good TCP-friendliness, responsiveness as well as intra-protocol fairness, and provides high link utilization.
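A rough sketch of the receiver-side window mechanism described above: a GAIMD-style update (additive increase of α per acknowledged window, multiplicative decrease by β on loss) and the expected rate derived from the window and the measured RTT, which then drives the choice of how many cumulative layers to subscribe to. The α, β, and layer-rate values are illustrative, not those of FLMCC.

    class GaimdReceiver:
        """Per-receiver congestion window used only to estimate an expected rate."""
        def __init__(self, alpha=0.31, beta=0.875, packet_size=1000):
            self.alpha, self.beta = alpha, beta
            self.cwnd = 1.0                       # in packets
            self.packet_size = packet_size        # bytes

        def on_window_acked(self):
            self.cwnd += self.alpha               # additive increase

        def on_loss(self):
            self.cwnd = max(1.0, self.cwnd * self.beta)   # multiplicative decrease

        def expected_rate_bps(self, rtt_s):
            return self.cwnd * self.packet_size * 8 / rtt_s

    def layers_to_subscribe(expected_bps, layer_bps=64000):
        """Cumulative fixed-rate layers: subscribe to as many as the expected rate allows."""
        return int(expected_bps // layer_bps)

    rx = GaimdReceiver()
    for _ in range(20):
        rx.on_window_acked()
    rx.on_loss()
    print(layers_to_subscribe(rx.expected_rate_bps(rtt_s=0.1)))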
Location Management in a Transport Layer Mobility Architecture
NASA Technical Reports Server (NTRS)
Eddy, Wesley M.; Ishac, Joseph
2005-01-01
Mobility architectures that place complexity in end nodes rather than in the network interior have many advantageous properties and are becoming popular research topics. Such architectures typically push mobility support into higher layers of the protocol stack than network layer approaches like Mobile IP. The literature is rife with proposals to provide mobility services in the transport, session, and application layers. In this paper, we focus on a mobility architecture that makes the most significant changes to the transport layer. A common problem amongst all mobility protocols at various layers is location management, which entails translating some form of static identifier into a mobile node's dynamic location. Location management is required for mobile nodes to be able to provide globally-reachable services on-demand to other hosts. In this paper, we describe the challenges of location management in a transport layer mobility architecture, and discuss the advantages and disadvantages of various solutions proposed in the literature. Our conclusion is that, in principle, secure dynamic DNS is most desirable, although it may have current operational limitations. We note that this topic has room for further exploration, and we present this paper largely as a starting point for comparing possible solutions.
A survey of system architecture requirements for health care-based wireless sensor networks.
Egbogah, Emeka E; Fapojuwo, Abraham O
2011-01-01
Wireless Sensor Networks (WSNs) have emerged as a viable technology for a vast number of applications, including health care applications. To best support these health care applications, WSN technology can be adopted for the design of practical Health Care WSNs (HCWSNs) that support the key system architecture requirements of reliable communication, node mobility support, multicast technology, energy efficiency, and the timely delivery of data. Work in the literature mostly focuses on the physical design of HCWSNs (e.g., wearable sensors, in vivo embedded sensors, et cetera). However, work towards enhancing the communication layers (i.e., routing, medium access control, et cetera) to improve HCWSN performance is largely lacking. In this paper, the information gleaned from an extensive literature survey is shared in an effort to fortify the knowledge base for the communication aspect of HCWSNs. We highlight the major currently existing prototype HCWSNs and also provide the details of their routing protocol characteristics. We also explore the current state of the art in medium access control (MAC) protocols for WSNs, for the purpose of seeking an energy efficient solution that is robust to mobility and delivers data in a timely fashion. Furthermore, we review a number of reliable transport layer protocols, including a network coding based protocol from the literature, that are potentially suitable for delivering end-to-end reliability of data transmitted in HCWSNs. We identify the advantages and disadvantages of the reviewed MAC, routing, and transport layer protocols as they pertain to the design and implementation of a HCWSN. The findings from this literature survey will serve as a useful foundation for designing a reliable HCWSN and also contribute to the development and evaluation of protocols for improving the performance of future HCWSNs. Open issues that require further investigation are highlighted.
ZERO: probabilistic routing for deploy and forget Wireless Sensor Networks.
Vilajosana, Xavier; Llosa, Jordi; Pacho, Jose Carlos; Vilajosana, Ignasi; Juan, Angel A; Vicario, Jose Lopez; Morell, Antoni
2010-01-01
As Wireless Sensor Networks are being adopted by industry and agriculture for large-scale and unattended deployments, the need for reliable and energy-conservative protocols becomes critical. Physical- and Link-layer efforts at energy conservation are mostly not considered by routing protocols, which concentrate on maintaining reliability and throughput. Gradient-based routing protocols route data through the most reliable links, aiming to ensure 99% packet delivery. However, they suffer from the so-called "hot spot" problem. The most reliable routes exhaust their energy quickly, thus partitioning the network and reducing the monitored area. To cope with this "hot spot" problem we propose ZERO, a combined approach at the Network and Link layers that increases network lifespan while conserving reliability levels by means of probabilistic load balancing techniques.
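A hedged sketch of probabilistic, load-balanced next-hop selection in the spirit described above; the particular weighting of link reliability against residual energy is an assumption, since the paper's actual formula is not reproduced in the abstract:

```python
import random

def choose_next_hop(neighbors, w_rel=0.5, w_energy=0.5):
    """
    neighbors: list of dicts with 'id', 'link_reliability' in [0, 1] and
    'residual_energy' in [0, 1]. Returns a neighbor id chosen with probability
    proportional to a combined score, spreading load away from "hot spot"
    nodes instead of always using the single most reliable link.
    """
    scores = [w_rel * n["link_reliability"] + w_energy * n["residual_energy"]
              for n in neighbors]
    r = random.uniform(0, sum(scores))
    acc = 0.0
    for n, s in zip(neighbors, scores):
        acc += s
        if r <= acc:
            return n["id"]
    return neighbors[-1]["id"]

print(choose_next_hop([
    {"id": "A", "link_reliability": 0.99, "residual_energy": 0.2},
    {"id": "B", "link_reliability": 0.90, "residual_energy": 0.8},
]))
```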
A high-speed DAQ framework for future high-level trigger and event building clusters
NASA Astrophysics Data System (ADS)
Caselle, M.; Ardila Perez, L. E.; Balzer, M.; Dritschler, T.; Kopmann, A.; Mohr, H.; Rota, L.; Vogelgesang, M.; Weber, M.
2017-03-01
Modern data acquisition and trigger systems require a throughput of several GB/s and latencies of the order of microseconds. To satisfy such requirements, a heterogeneous readout system based on FPGA readout cards and GPU-based computing nodes coupled by InfiniBand has been developed. The incoming data from the back-end electronics is delivered directly into the internal memory of GPUs through a dedicated peer-to-peer PCIe communication. High performance DMA engines have been developed for direct communication between FPGAs and GPUs using "DirectGMA (AMD)" and "GPUDirect (NVIDIA)" technologies. The proposed infrastructure is a candidate for future generations of event building clusters, high-level trigger filter farms and low-level trigger systems. In this paper the heterogeneous FPGA-GPU architecture is presented and its performance is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorentla Venkata, Manjunath; Graham, Richard L; Ladd, Joshua S
This paper describes the design and implementation of InfiniBand (IB) CORE-Direct based blocking and nonblocking broadcast operations within the Cheetah collective operation framework. It describes a novel approach that fully offloads collective operations and employs only user-supplied buffers. For a 64 rank communicator, the latency of the CORE-Direct based hierarchical algorithm is better than production-grade Message Passing Interface (MPI) implementations: 150% better than the default Open MPI algorithm and 115% better than the shared memory optimized MVAPICH implementation for a one kilobyte (KB) message, and for eight megabytes (MB) it is 48% and 64% better, respectively. Flat-topology broadcast achieves 99.9% overlap in a polling based communication-computation test, and 95.1% overlap for a wait based test, compared with 92.4% and 17.0%, respectively, for a similar Central Processing Unit (CPU) based implementation.
Introduction to the Special Issue on Digital Signal Processing in Radio Astronomy
NASA Astrophysics Data System (ADS)
Price, D. C.; Kocz, J.; Bailes, M.; Greenhill, L. J.
2016-03-01
Advances in astronomy are intimately linked to advances in digital signal processing (DSP). This special issue is focused upon advances in DSP within radio astronomy. The trend within that community is to use off-the-shelf digital hardware where possible and leverage advances in high performance computing. In particular, graphics processing units (GPUs) and field programmable gate arrays (FPGAs) are being used in place of application-specific integrated circuits (ASICs); high-speed Ethernet and InfiniBand are being used for interconnect in place of custom backplanes. Further, to lower hurdles in digital engineering, communities have designed and released general-purpose FPGA-based DSP systems, such as the CASPER ROACH board, ASTRON Uniboard, and CSIRO Redback board. In this introductory paper, we give a brief historical overview, a summary of recent trends, and provide an outlook on future directions.
Wind tunnel experiments to study chaparral crown fires
Jeanette Cobian-Iñiguez; AmirHessam Aminfar; Joey Chong; Gloria Burke; Albertina Zuniga; David R. Weise; Marko Princevac
2017-01-01
The present protocol presents a laboratory technique designed to study chaparral crown fire ignition and spread. Experiments were conducted in a low velocity fire wind tunnel where two distinct layers of fuel were constructed to represent surface and crown fuels in chaparral. Chamise, a common chaparral shrub, comprised the live crown layer. The dead fuel surface layer...
Satellite Communications Using Commercial Protocols
NASA Technical Reports Server (NTRS)
Ivancic, William D.; Griner, James H.; Dimond, Robert; Frantz, Brian D.; Kachmar, Brian; Shell, Dan
2000-01-01
NASA Glenn Research Center has been working with industry, academia, and other government agencies in assessing commercial communications protocols for satellite and space-based applications. In addition, NASA Glenn has been developing and advocating new satellite-friendly modifications to existing communications protocol standards. This paper summarizes recent research into the applicability of various commercial standard protocols for use over satellite and space-based communications networks as well as expectations for future protocol development. It serves as a reference point from which the detailed work can be readily accessed. Areas that will be addressed include asynchronous-transfer-mode quality of service; completed and ongoing work of the Internet Engineering Task Force; data-link-layer protocol development for unidirectional link routing; and protocols for aeronautical applications, including mobile Internet protocol routing for wireless/mobile hosts and the aeronautical telecommunications network protocol.
40 CFR 82.1 - Purpose and scope.
Code of Federal Regulations, 2011 CFR
2011-07-01
... STRATOSPHERIC OZONE Production and Consumption Controls § 82.1 Purpose and scope. (a) The purpose of the regulations in this subpart is to implement the Montreal Protocol on Substances that Deplete the Ozone Layer... ozone-depleting substances, according to specified schedules. The Protocol also requires each nation...
40 CFR 82.1 - Purpose and scope.
Code of Federal Regulations, 2010 CFR
2010-07-01
... STRATOSPHERIC OZONE Production and Consumption Controls § 82.1 Purpose and scope. (a) The purpose of the regulations in this subpart is to implement the Montreal Protocol on Substances that Deplete the Ozone Layer... ozone-depleting substances, according to specified schedules. The Protocol also requires each nation...
40 CFR 82.1 - Purpose and scope.
Code of Federal Regulations, 2013 CFR
2013-07-01
... STRATOSPHERIC OZONE Production and Consumption Controls § 82.1 Purpose and scope. (a) The purpose of the regulations in this subpart is to implement the Montreal Protocol on Substances that Deplete the Ozone Layer... ozone-depleting substances, according to specified schedules. The Protocol also requires each nation...
40 CFR 82.1 - Purpose and scope.
Code of Federal Regulations, 2014 CFR
2014-07-01
... STRATOSPHERIC OZONE Production and Consumption Controls § 82.1 Purpose and scope. (a) The purpose of the regulations in this subpart is to implement the Montreal Protocol on Substances that Deplete the Ozone Layer... ozone-depleting substances, according to specified schedules. The Protocol also requires each nation...
40 CFR 82.1 - Purpose and scope.
Code of Federal Regulations, 2012 CFR
2012-07-01
... STRATOSPHERIC OZONE Production and Consumption Controls § 82.1 Purpose and scope. (a) The purpose of the regulations in this subpart is to implement the Montreal Protocol on Substances that Deplete the Ozone Layer... ozone-depleting substances, according to specified schedules. The Protocol also requires each nation...
Running TCP/IP over ATM Networks.
ERIC Educational Resources Information Center
Witt, Michael
1995-01-01
Discusses Internet protocol (IP) and subnets and describes how IP may operate over asynchronous transfer mode (ATM). Topics include TCP (transmission control protocol), ATM cells and adaptation layers, a basic architectural model for IP over ATM, address resolution, mapping IP to a subnet technology, and connection management strategy. (LRW)
A History of the Improvement of Internet Protocols Over Satellites Using ACTS
NASA Technical Reports Server (NTRS)
Allman, Mark; Kruse, Hans; Ostermann, Shawn
2000-01-01
This paper outlines the main results of a number of ACTS experiments on the efficacy of using standard Internet protocols over long-delay satellite channels. These experiments have been jointly conducted by NASA's Glenn Research Center and Ohio University over the last six years. The focus of our investigations has been the impact of long-delay networks with non-zero bit-error rates on the performance of the suite of Internet protocols. In particular, we have focused on the most widely used transport protocol, the Transmission Control Protocol (TCP), as well as several application layer protocols. This paper presents our main results, as well as references to more verbose discussions of our experiments.
Optimised cross-layer synchronisation schemes for wireless sensor networks
NASA Astrophysics Data System (ADS)
Nasri, Nejah; Ben Fradj, Awatef; Kachouri, Abdennaceur
2017-07-01
This paper addresses synchronisation between sensor nodes. In the context of wireless sensor networks, it is necessary to take into consideration the energy cost induced by synchronisation, which can represent the majority of the energy consumed. A recognised difficulty in communication is designing a fine-grained synchronisation protocol that is sufficiently robust to intermittent energy availability in the sensors. Hence, this paper works on aspects of performance and energy saving, in particular on the optimisation of the synchronisation protocol using a cross-layer design method involving synchronisation between layers. Our approach consists in balancing the energy consumption between the sensors and choosing the cluster head with the highest residual energy in order to guarantee the reliability, integrity and continuity of communication (i.e. maximising the network lifetime).
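A minimal sketch of the cluster-head choice mentioned above, picking the node with the highest residual energy; tie-breaking, re-election rounds and the cross-layer signalling details are assumptions not spelled out in the abstract:

```python
def elect_cluster_head(nodes):
    """
    nodes: list of (node_id, residual_energy_joules) reported by the cluster members.
    Returns the id of the node with the most residual energy, so that
    synchronisation duties rotate toward well-charged nodes over time.
    """
    return max(nodes, key=lambda n: n[1])[0]

print(elect_cluster_head([("s1", 4.2), ("s2", 7.9), ("s3", 6.1)]))
```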
An extended smart utilization medium access control (ESU-MAC) protocol for ad hoc wireless systems
NASA Astrophysics Data System (ADS)
Vashishtha, Jyoti; Sinha, Aakash
2006-05-01
The demand for spontaneous setup of a wireless communication system has increased in recent years for areas like battlefields and disaster relief operations, where pre-deployment of network infrastructure is difficult or unavailable. A mobile ad-hoc network (MANET) is a promising solution, but it poses many challenges for all the design layers, specifically the medium access control (MAC) layer. Recent works have used the concepts of multi-channel and power control in designing MAC layer protocols. SU-MAC, developed by the same authors, efficiently uses the available data and control bandwidth to send control information and results in increased throughput by decreasing contention on the control channel. However, the SU-MAC protocol was limited to static ad-hoc networks and also suffered from the busy-receiver node problem. We present the Extended SU-MAC (ESU-MAC) protocol, which works with mobile nodes. We also significantly improve the scheme of control information exchange in ESU-MAC to overcome the busy-receiver node problem and thus further avoid blockage of the control channel for longer periods of time. A power control scheme is used as before to reduce interference and to effectively re-use the available bandwidth. Simulation results show that the ESU-MAC protocol is promising for mobile ad-hoc networks in terms of reduced contention at the control channel and improved throughput because of channel re-use. Results show a considerable increase in throughput compared to SU-MAC, which can be attributed to increased accessibility of the control channel and improved utilization of data channels due to the superior control information exchange scheme.
Intrusion Detection for Defense at the MAC and Routing Layers of Wireless Networks
2007-01-01
DoS: Denial of Service; DSR: Dynamic Source Routing; IDS: Intrusion Detection System; LAR: Location-Aided Routing; MAC: Media Access Control; MACA: Multiple Access with Collision Avoidance. They simulate the interaction between three MAC protocols (MACA, 802.11 and CSMA) and three routing protocols (AODV, DSR, ...) under different mobility parameters.
A Survey on Multimedia-Based Cross-Layer Optimization in Visual Sensor Networks
Costa, Daniel G.; Guedes, Luiz Affonso
2011-01-01
Visual sensor networks (VSNs) comprised of battery-operated electronic devices endowed with low-resolution cameras have expanded the applicability of a series of monitoring applications. Those types of sensors are interconnected by ad hoc error-prone wireless links, imposing stringent restrictions on available bandwidth, end-to-end delay and packet error rates. In such context, multimedia coding is required for data compression and error-resilience, also ensuring energy preservation over the path(s) toward the sink and improving the end-to-end perceptual quality of the received media. Cross-layer optimization may enhance the expected efficiency of VSNs applications, disrupting the conventional information flow of the protocol layers. When the inner characteristics of the multimedia coding techniques are exploited by cross-layer protocols and architectures, higher efficiency may be obtained in visual sensor networks. This paper surveys recent research on multimedia-based cross-layer optimization, presenting the proposed strategies and mechanisms for transmission rate adjustment, congestion control, multipath selection, energy preservation and error recovery. We note that many multimedia-based cross-layer optimization solutions have been proposed in recent years, each one bringing a wealth of contributions to visual sensor networks. PMID:22163908
Chasqueira, Ana Filipa; Arantes-Oliveira, Sofia; Portugal, Jaime
2013-09-13
The aim of this work was to assess the shear bond strength (SBS) between a composite resin and dentin, promoted by two dental adhesive systems (the one-step self-etching adhesive Easy Bond [3M ESPE], and the two-step etch-and-rinse adhesive Scotchbond 1XT [3M ESPE]) with different application protocols (per manufacturer's instructions (control group); with one to four additional adhesive layers; or with an extra hydrophobic adhesive layer). Proximal enamel was removed from ninety caries-free human molars to obtain two dentin discs per tooth, which were randomly assigned to twelve experimental groups (n=15). After the adhesion protocol, the composite resin (Filtek Z250 [3M ESPE]) was applied. Specimens were mounted in the Watanabe test device and the shear bond test was performed in a universal testing machine with a crosshead speed of 5 mm/min. Data were analyzed with ANOVA followed by Student-Newman-Keuls tests (P<0.05). The highest SBS mean value was attained with the Easy Bond three layers group (41.23±2.71 MPa) and the lowest with Scotchbond 1XT per manufacturer's instructions (27.15±2.99 MPa). Easy Bond yielded higher SBS values than Scotchbond 1XT. There were no statistically significant differences (P>0.05) between the application protocols tested, except for the three and four layers groups, which presented higher SBS results compared to the manufacturer's instructions groups (P<0.05). No statistically significant differences were detected between the three and four layers groups (P≥0.05). It is recommended to apply three adhesive layers when using the Easy Bond and Scotchbond 1XT adhesives, since this improves SBS values without consuming much time.
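The statistics reported above (one-way ANOVA followed by post-hoc comparisons at P < 0.05) could be reproduced on raw SBS values along the following lines; the group means and sample values below are placeholders, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# hypothetical SBS samples (MPa) for three application protocols, n = 15 each
control      = rng.normal(33.0, 3.0, 15)
three_layers = rng.normal(41.0, 3.0, 15)
four_layers  = rng.normal(40.5, 3.0, 15)

f_stat, p_value = stats.f_oneway(control, three_layers, four_layers)
print(f"one-way ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

# The paper uses Student-Newman-Keuls post hoc tests; SciPy does not ship SNK,
# so a pairwise comparison such as Tukey HSD (available in recent SciPy as
# stats.tukey_hsd) or statsmodels' multiple-comparison tools is a common substitute.
```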
Cross-Layer Algorithms for QoS Enhancement in Wireless Multimedia Sensor Networks
NASA Astrophysics Data System (ADS)
Saxena, Navrati; Roy, Abhishek; Shin, Jitae
Many emerging applications, such as advanced telemedicine and surveillance systems, demand that sensors deliver multimedia content with a precise level of QoS. Minimizing energy in sensor networks has been a much explored research area, but guaranteeing QoS over sensor networks still remains an open issue. In this letter we propose a cross-layer approach combining the Network and MAC layers for QoS enhancement in wireless multimedia sensor networks. In the network layer, a statistical estimate of sensory QoS parameters is performed and a near-optimal genetic algorithmic solution is proposed to solve the NP-complete QoS-routing problem. The objective of the proposed MAC algorithm is to perform QoS-based packet classification and automatic adaptation of the contention window. Simulation results demonstrate that the proposed protocol is capable of providing lower delay and better throughput, at the cost of reasonable energy consumption, in comparison with other existing sensory QoS protocols.
CBM First-level Event Selector Input Interface Demonstrator
NASA Astrophysics Data System (ADS)
Hutter, Dirk; de Cuveland, Jan; Lindenstruth, Volker
2017-10-01
CBM is a heavy-ion experiment at the future FAIR facility in Darmstadt, Germany. As the experiment features self-triggered front-end electronics and free-streaming read-out, event selection will be done exclusively by the First Level Event Selector (FLES). Designed as an HPC cluster with several hundred nodes, its task is the online analysis and selection of the physics data at a total input data rate exceeding 1 TByte/s. To allow efficient event selection, the FLES performs timeslice building, which combines the data from all given input links into self-contained, potentially overlapping processing intervals and distributes them to compute nodes. Partitioning the input data streams into specialized containers allows this task to be performed very efficiently. The FLES Input Interface defines the linkage between the FEE and the FLES data transport framework. A custom FPGA PCIe board, the FLES Interface Board (FLIB), is used to receive data via optical links and transfer them via DMA to the host's memory. The current prototype of the FLIB features a Kintex-7 FPGA and provides up to eight 10 GBit/s optical links. A custom FPGA design has been developed for this board. DMA transfers and data structures are optimized for subsequent timeslice building. Index tables generated by the FPGA enable fast random access to the written data containers. In addition, the DMA target buffers can directly serve as InfiniBand RDMA source buffers without copying the data. The usage of POSIX shared memory for these buffers allows data access from multiple processes. An accompanying HDL module has been developed to integrate the FLES link into the front-end FPGA designs. It implements the front-end logic interface as well as the link protocol. Prototypes of all Input Interface components have been implemented and integrated into the FLES test framework. This allows the implementation and evaluation of the foreseen CBM read-out chain.
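A schematic sketch of timeslice building as described above: data arriving from all input links are grouped into fixed, potentially overlapping time intervals and handed to compute nodes. The interval length, overlap and container representation here are illustrative assumptions, not the FLES data format:

```python
from collections import defaultdict

def build_timeslices(microslices, interval_ns, overlap_ns):
    """
    microslices: iterable of (link_id, timestamp_ns, payload) from all input links.
    Returns {slice_index: [(link_id, timestamp_ns, payload), ...]}. Data near the
    start of an interval is duplicated into the previous timeslice (the overlap),
    so each timeslice can be processed self-contained on one compute node.
    """
    slices = defaultdict(list)
    for link_id, ts, payload in microslices:
        idx = ts // interval_ns
        slices[idx].append((link_id, ts, payload))
        if idx > 0 and ts % interval_ns < overlap_ns:
            slices[idx - 1].append((link_id, ts, payload))
    return dict(slices)

data = [("link0", t, b"...") for t in range(0, 5000, 250)]
print({k: len(v) for k, v in build_timeslices(data, 1000, 200).items()})
```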
1988-08-01
... Open Systems Interconnection (OSI) ... It is felt even more urgent in the past few years, with the rapid evolution of communication technologies and the ... Since services and protocols above the transport layer are usually implemented as user-callable utilities on the host computers, it is desirable to offer them ... [References cited in this excerpt: Networks, Prentice-Hall, New Jersey, 1987; [BOND 87] Bond, John, "Parallel-Processing Concepts Finally Come Together in Real Systems", Computer Design.]
NASA Advanced Supercomputing Facility Expansion
NASA Technical Reports Server (NTRS)
Thigpen, William W.
2017-01-01
The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.
Accelerating Climate Simulations Through Hybrid Computing
NASA Technical Reports Server (NTRS)
Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark
2009-01-01
Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPUs) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for connection, we identified two challenges: (1) identical MPI implementation is required in both systems, and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors, two IBM QS22 Cell blades, connected with InfiniBand), allowing for seamlessly offloading compute-intensive functions to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.
Shen, Yiwen; Hattink, Maarten H N; Samadi, Payman; Cheng, Qixiang; Hu, Ziyiz; Gazman, Alexander; Bergman, Keren
2018-04-16
Silicon photonics based switches offer an effective option for the delivery of dynamic bandwidth for future large-scale Datacom systems while maintaining scalable energy efficiency. The integration of a silicon photonics-based optical switching fabric within electronic Datacom architectures requires novel network topologies and arbitration strategies to effectively manage the active elements in the network. We present a scalable software-defined networking control plane to integrate silicon photonic based switches with conventional Ethernet or InfiniBand networks. Our software-defined control plane manages both electronic packet switches and multiple silicon photonic switches for simultaneous packet and circuit switching. We built an experimental Dragonfly network testbed with 16 electronic packet switches and 2 silicon photonic switches to evaluate our control plane. The latencies observed for each step of the switching procedure add up to a total control-plane latency of 344 µs for data-center and high performance computing platforms.
NASA Technical Reports Server (NTRS)
Fatoohi, Rod; Saini, Subbash; Ciotti, Robert
2006-01-01
We study the performance of inter-process communication on four high-speed multiprocessor systems using a set of communication benchmarks. The goal is to identify certain limiting factors and bottlenecks with the interconnect of these systems as well as to compare these interconnects. We measured network bandwidth using different numbers of communicating processors and communication patterns, such as point-to-point communication, collective communication, and dense communication patterns. The four platforms are: a 512-processor SGI Altix 3700 BX2 shared-memory machine with 3.2 GB/s links; a 64-processor (single-streaming) Cray X1 shared-memory machine with 32 1.6 GB/s links; a 128-processor Cray Opteron cluster using a Myrinet network; and a 1280-node Dell PowerEdge cluster with an InfiniBand network. Our results show the impact of the network bandwidth and topology on the overall performance of each interconnect.
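A point-to-point bandwidth probe of the kind used in such benchmark suites can be sketched with mpi4py as below; the ping-pong pattern, message size and repetition count are generic choices, not the authors' exact benchmark:

```python
# run with: mpiexec -n 2 python pingpong.py
from mpi4py import MPI
import numpy as np
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

size_bytes = 8 * 1024 * 1024            # 8 MB messages
reps = 50
buf = np.zeros(size_bytes, dtype=np.uint8)

comm.Barrier()
start = time.perf_counter()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
elapsed = time.perf_counter() - start

if rank == 0:
    # each repetition moves the message once in each direction
    gb_per_s = 2 * reps * size_bytes / elapsed / 1e9
    print(f"point-to-point bandwidth: {gb_per_s:.2f} GB/s")
```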
NASA Technical Reports Server (NTRS)
Gibson, Jim; Jordan, Joe; Grant, Terry
1990-01-01
Local Area Network Extensible Simulator (LANES) computer program provides method for simulating performance of high-speed local-area-network (LAN) technology. Developed as design and analysis software tool for networking computers on board proposed Space Station. Load, network, link, and physical layers of layered network architecture all modeled. Mathematically models according to different lower-layer protocols: Fiber Distributed Data Interface (FDDI) and Star*Bus. Written in FORTRAN 77.
Novel Method for Quantitative Estimation of Biofilms.
Syal, Kirtimaan
2017-10-01
Biofilm protects bacteria from stress and hostile environments. The crystal violet (CV) assay is the most popular method for biofilm determination adopted by different laboratories so far. However, the biofilm layer formed at the liquid-air interface, known as the pellicle, is extremely sensitive to the washing and staining steps of that assay. Early-phase biofilms are also prone to damage by these steps. In bacteria like mycobacteria, biofilm formation occurs largely at the liquid-air interface, which is susceptible to loss. In the proposed protocol, loss of such a biofilm layer was prevented. In place of inverting and discarding the media, which can lead to the loss of the aerobic biofilm layer in the CV assay, media was removed from the formed biofilm with the help of a syringe and the biofilm layer was allowed to dry. The staining and washing steps were avoided, an organic solvent, tetrahydrofuran (THF), was used to dissolve the biofilm, and the absorbance was recorded at 595 nm. The protocol was tested for biofilm estimation of E. coli, B. subtilis and M. smegmatis, and compared with the traditional CV assays. The drug molecule isoniazid, a known inhibitor of M. smegmatis biofilm, was tested and its inhibitory effects were quantified by the proposed protocol. For ease in referring, this method has been described as the Syal method for biofilm quantification. This new method was found to be useful for the estimation of early-phase biofilm and the aerobic biofilm layer formed at the liquid-air interface.
The increasing threat to stratospheric ozone from dichloromethane.
Hossaini, Ryan; Chipperfield, Martyn P; Montzka, Stephen A; Leeson, Amber A; Dhomse, Sandip S; Pyle, John A
2017-06-27
It is well established that anthropogenic chlorine-containing chemicals contribute to ozone layer depletion. The successful implementation of the Montreal Protocol has led to reductions in the atmospheric concentration of many ozone-depleting gases, such as chlorofluorocarbons. As a consequence, stratospheric chlorine levels are declining and ozone is projected to return to levels observed pre-1980 later this century. However, recent observations show the atmospheric concentration of dichloromethane, an ozone-depleting gas not controlled by the Montreal Protocol, is increasing rapidly. Using atmospheric model simulations, we show that although currently modest, the impact of dichloromethane on ozone has increased markedly in recent years and if these increases continue into the future, the return of Antarctic ozone to pre-1980 levels could be substantially delayed. Sustained growth in dichloromethane would therefore offset some of the gains achieved by the Montreal Protocol, further delaying recovery of Earth's ozone layer.
The increasing threat to stratospheric ozone from dichloromethane
NASA Astrophysics Data System (ADS)
Hossaini, Ryan; Chipperfield, Martyn P.; Montzka, Stephen A.; Leeson, Amber A.; Dhomse, Sandip S.; Pyle, John A.
2017-06-01
It is well established that anthropogenic chlorine-containing chemicals contribute to ozone layer depletion. The successful implementation of the Montreal Protocol has led to reductions in the atmospheric concentration of many ozone-depleting gases, such as chlorofluorocarbons. As a consequence, stratospheric chlorine levels are declining and ozone is projected to return to levels observed pre-1980 later this century. However, recent observations show the atmospheric concentration of dichloromethane, an ozone-depleting gas not controlled by the Montreal Protocol, is increasing rapidly. Using atmospheric model simulations, we show that although currently modest, the impact of dichloromethane on ozone has increased markedly in recent years and if these increases continue into the future, the return of Antarctic ozone to pre-1980 levels could be substantially delayed. Sustained growth in dichloromethane would therefore offset some of the gains achieved by the Montreal Protocol, further delaying recovery of Earth's ozone layer.
NASA Technical Reports Server (NTRS)
Fischer, Daniel; Aguilar-Sanchez, Ignacio; Saba, Bruno; Moury, Gilles; Biggerstaff, Craig; Bailey, Brandon; Weiss, Howard; Pilgram, Martin; Richter, Dorothea
2015-01-01
The protection of data transmitted over the space-link is an issue of growing importance also for civilian space missions. Through the Consultative Committee for Space Data Systems (CCSDS), space agencies have reacted to this need by specifying the Space Data-Link Layer Security (SDLS) protocol which provides confidentiality and integrity services for the CCSDS Telemetry (TM), Telecommand (TC) and Advanced Orbiting Services (AOS) space data-link protocols. This paper describes the approach of the CCSDS SDLS working group to specify and execute the necessary interoperability tests. It first details the individual SDLS implementations that have been produced by ESA, NASA, and CNES and then the overall architecture that allows the interoperability tests between them. The paper reports on the results of the interoperability tests and identifies relevant aspects for the evolution of the test environment.
Design and Implementation of Replicated Object Layer
NASA Technical Reports Server (NTRS)
Koka, Sudhir
1996-01-01
One of the widely used techniques for construction of fault tolerant applications is the replication of resources, so that if one copy fails, sufficient copies may still remain operational to allow the application to continue to function. This thesis involves the design and implementation of an object oriented framework for replicating data on multiple sites and across different platforms. Our approach, called the Replicated Object Layer (ROL), provides a mechanism for consistent replication of data over dynamic networks. ROL uses the Reliable Multicast Protocol (RMP) as a communication protocol that provides for reliable delivery, serialization and fault tolerance. Besides providing type registration, this layer facilitates distributed atomic transactions on replicated data. A novel algorithm called the RMP Commit Protocol, which commits transactions efficiently in a reliable multicast environment, is presented. ROL provides recovery procedures to ensure that site and communication failures do not corrupt persistent data, and to make the system fault tolerant to network partitions. ROL will facilitate building distributed fault tolerant applications by handling the burdensome details of replica consistency operations and making them completely transparent to the application. Replicated databases are a major class of applications which could be built on top of ROL.
Xi, Jun; Wu, Zhaoxin; Jiao, Bo; Dong, Hua; Ran, Chenxin; Piao, Chengcheng; Lei, Ting; Song, Tze-Bin; Ke, Weijun; Yokoyama, Takamichi; Hou, Xun; Kanatzidis, Mercouri G
2017-06-01
Tin (Sn)-based perovskites are increasingly attractive because they offer lead-free alternatives in perovskite solar cells. However, depositing high-quality Sn-based perovskite films is still a challenge, particularly for low-temperature planar heterojunction (PHJ) devices. Here, a "multichannel interdiffusion" protocol is demonstrated by annealing stacked layers of an aqueous-solution-deposited formamidinium iodide (FAI)/polymer layer followed by an evaporated SnI2 layer to create uniform FASnI3 films. In this protocol, tiny FAI crystals, significantly inhibited by the introduced polymer, can offer multiple interdiffusion pathways for complete reaction with SnI2. What is more, water, rather than traditional aprotic organic solvents, is used to dissolve the precursors. The best-performing FASnI3 PHJ solar cell assembled by this protocol exhibits a power conversion efficiency (PCE) of 3.98%. In addition, a flexible FASnI3-based solar cell assembled on a polyethylene naphthalate-indium tin oxide flexible substrate with a PCE of 3.12% is demonstrated. This novel interdiffusion process can help to further boost the performance of lead-free Sn-based perovskites.
The CCSDS Next Generation Space Data Link Protocol (NGSLP)
NASA Technical Reports Server (NTRS)
Kazz, Greg J.; Greenberg, Edward
2014-01-01
The CCSDS space link protocols, i.e., Telemetry (TM), Telecommand (TC) and Advanced Orbiting Systems (AOS), were developed in the early growth period of the space program. They were designed to meet the needs of the early missions, be compatible with the available technology and focus on the specific link environments. Digital technology was in its infancy and spacecraft power and mass issues enforced severe constraints on flight implementations. Therefore the Telecommand protocol was designed around a simple Bose-Chaudhuri-Hocquenghem (BCH) code that provided little coding gain and limited error detection but was relatively simple to decode on board. The infusion of the concatenated Convolutional and Reed-Solomon codes for telemetry was a major milestone and transformed telemetry applications by providing them the ability to more efficiently utilize the telemetry link and its ability to deliver user data. The ability to significantly lower the error rates on the telemetry links enabled the use of packet telemetry and data compression. The infusion of the high performance codes for telemetry was enabled by the advent of digital processing, but it was limited to earth based systems supporting telemetry. The latest CCSDS space link protocol, Proximity-1, was developed in early 2000 to meet the needs of short-range, bi-directional, fixed or mobile radio links characterized by short time delays, moderate but not weak signals, and short independent sessions. Proximity-1 has been successfully deployed on both NASA and ESA missions at Mars and is planned to be utilized by all Mars missions in development. A new age has arisen, one that now provides the means to perform advanced digital processing in spacecraft systems, enabling the use of improved transponders, digital correlators, and high performance forward error correcting codes for all communications links. Flight transponders utilizing digital technology have emerged and can efficiently provide the means to make the next leap in performance for space link communications. Field Programmable Gate Arrays (FPGAs) provide the capability to incorporate high performance forward error correcting codes implemented within software transponders, providing improved performance in data transfer, ranging, link security, and time correlation. Given these synergistic technological breakthroughs, the time has come to take advantage of them by applying them to both ongoing (e.g., command, telemetry) and emerging (e.g., space link security, optical communication) space link applications. However, one of the constraining factors within the Data Link Layer in realizing these performance gains is the lack of a generic transfer frame format and common supporting services amongst the existing CCSDS link layer protocols. Currently each of the four CCSDS link layer protocols (TM, TC, AOS, and Proximity-1) has unique formats and services, which prohibits their reuse across the totality of all space link applications of CCSDS member space agencies. Mars missions, for example, implement their proximity data link layer using the Proximity-1 frame format and the services it supports but are still required to support the direct-from-Earth (TC) protocols and the Direct To Earth (AOS/TM) protocols.
The prime purpose of this paper is to describe a new general purpose CCSDS Data Link layer protocol, the NGSLP, that will provide the required services along with a common transfer frame format for all the CCSDS space links (ground to/from space and space to space links) targeted for emerging missions after a CCSDS agency-wide coordinated date. This paper will also describe related options that can be included in the Coding and Synchronization sub-layer of the Data Link layer to extend the capabilities of the link and additionally provide independence of the transfer frame sub-layer from the coding sub-layer. This feature will give missions the option of running either the currently performed synchronous coding and transfer frame data link or an asynchronous coding/frame data link, in which the transfer frame length is independent of the block size of the code. The elimination of this constraint (frame synchronized to the code block) will simplify the interface between the transponder and the data handling equipment and reduce implementation costs and complexities. The benefits include the inclusion of encoders/decoders into transmitters and receivers without regard to data link protocols, and the ability to insert latency-sensitive messages into the link to support launch, landing/docking, telerobotics, and Variable Coded Modulation (VCM). In addition, the ability to transfer different sized frames can provide a backup for delivering stored anomaly engineering data simultaneously with real time data, or relaying of frames from various sources onto a trunk line for delivery to Earth.
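To make the idea of a common transfer frame format concrete, the sketch below packs and unpacks a hypothetical fixed header (version, spacecraft ID, virtual channel, frame length, sequence count). The field names, widths and layout are purely illustrative and are not the NGSLP specification or any CCSDS format:

```python
import struct

# hypothetical header: 2-bit version, 16-bit spacecraft ID and 6-bit virtual channel
# packed into the low 24 bits of a 32-bit word, followed by a 16-bit frame length
# and a 16-bit sequence count (8 header bytes in total)
def pack_frame(version, scid, vcid, seq, payload: bytes) -> bytes:
    length = 8 + len(payload)
    first = (version & 0x3) << 22 | (scid & 0xFFFF) << 6 | (vcid & 0x3F)
    return struct.pack("!I", first) + struct.pack("!HH", length, seq & 0xFFFF) + payload

def unpack_frame(frame: bytes):
    first, = struct.unpack("!I", frame[:4])
    length, seq = struct.unpack("!HH", frame[4:8])
    return {
        "version": first >> 22,
        "scid": (first >> 6) & 0xFFFF,
        "vcid": first & 0x3F,
        "length": length,
        "seq": seq,
        "payload": frame[8:length],
    }

print(unpack_frame(pack_frame(1, 0x00A3, 5, 42, b"telemetry...")))
```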
Internetting tactical security sensor systems
NASA Astrophysics Data System (ADS)
Gage, Douglas W.; Bryan, W. D.; Nguyen, Hoa G.
1998-08-01
The Multipurpose Surveillance and Security Mission Platform (MSSMP) is a distributed network of remote sensing packages and control stations, designed to provide a rapidly deployable, extended-range surveillance capability for a wide variety of military security operations and other tactical missions. The baseline MSSMP sensor suite consists of a pan/tilt unit with video and FLIR cameras and laser rangefinder. With an additional radio transceiver, MSSMP can also function as a gateway between existing security/surveillance sensor systems such as TASS, TRSS, and IREMBASS, and IP-based networks, to support the timely distribution of both threat detection and threat assessment information. The MSSMP system makes maximum use of Commercial Off The Shelf (COTS) components for sensing, processing, and communications, and of both established and emerging standard communications networking protocols and system integration techniques. Its use of IP-based protocols allows it to freely interoperate with the Internet (providing geographic transparency, facilitating development, and allowing fully distributed demonstration capability) and prepares it for integration with the IP-based tactical radio networks that will evolve in the next decade. Unfortunately, the Internet's standard Transport layer protocol, TCP, is poorly matched to the requirements of security sensors and other quasi-autonomous systems in being oriented to conveying a continuous data stream, rather than discrete messages. Also, its canonical 'socket' interface both conceals short losses of communications connectivity and simply gives up and forces the Application layer software to deal with longer losses. For MSSMP, a software applique is being developed that will run on top of User Datagram Protocol (UDP) to provide a reliable message-based Transport service. In addition, a Session layer protocol is being developed to support the effective transfer of control of multiple platforms among multiple control stations.
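The reliable message-based Transport service described above can be pictured as a stop-and-wait layer over UDP: sequence numbers, acknowledgements and retransmission on timeout. The MSSMP applique's actual framing and retry policy are not published in this abstract, so everything here is an illustrative sketch:

```python
import socket
import struct

def send_message(sock, addr, seq, data: bytes, retries=5, timeout=0.5):
    """Send one message and wait for a matching ACK, retransmitting on timeout."""
    packet = struct.pack("!I", seq) + data
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(packet, addr)
        try:
            ack, _ = sock.recvfrom(64)
            if struct.unpack("!I", ack[:4])[0] == seq:
                return True
        except socket.timeout:
            continue
    return False  # persistent loss is surfaced to the Application layer

def serve_once(sock):
    """Receive one message, acknowledge it by echoing the sequence number, return the payload."""
    packet, addr = sock.recvfrom(65535)
    seq = struct.unpack("!I", packet[:4])[0]
    sock.sendto(struct.pack("!I", seq), addr)
    return packet[4:]
```

Unlike a TCP socket, a caller of send_message learns explicitly when connectivity has been lost for longer than the retry budget, which matches the discrete-message, intermittent-link requirements discussed above.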
Experimental high-speed network
NASA Astrophysics Data System (ADS)
McNeill, Kevin M.; Klein, William P.; Vercillo, Richard; Alsafadi, Yasser H.; Parra, Miguel V.; Dallas, William J.
1993-09-01
Many existing local area networking protocols currently applied in medical imaging were originally designed for relatively low-speed, low-volume networking. These protocols utilize small packet sizes appropriate for text based communication. Local area networks of this type typically provide raw bandwidth under 125 MHz. These older network technologies are not optimized for the low delay, high data traffic environment of a totally digital radiology department. Some current implementations use point-to-point links when greater bandwidth is required. However, the use of point-to-point communications for a total digital radiology department network presents many disadvantages. This paper describes work on an experimental multi-access local area network called XFT. The work includes the protocol specification, and the design and implementation of network interface hardware and software. The protocol specifies the Physical and Data Link layers (OSI layers 1 & 2) for a fiber-optic based token ring providing a raw bandwidth of 500 MHz. The protocol design and implementation of the XFT interface hardware includes many features to optimize image transfer and provide flexibility for additional future enhancements which include: a modular hardware design supporting easy portability to a variety of host system buses, a versatile message buffer design providing 16 MB of memory, and the capability to extend the raw bandwidth of the network to 3.0 GHz.
Device USB interface and software development for electric parameter measuring instrument
NASA Astrophysics Data System (ADS)
Li, Deshi; Chen, Jian; Wu, Yadong
2003-09-01
Aimed at general device development, this paper discusses the development of a USB interface and its software. Taking the PDIUSBD12, which supports a parallel interface, as an example, the paper analyzes its technical characteristics. Different interface circuits were designed with the 80C52 single-chip microcomputer and TMS320C54-series digital signal processors, and the address allocation and register access were analyzed. Following the USB 1.1 standard protocol, the device software and application-layer protocol were designed, including the data exchange protocol, and the system functions were implemented.
System approach to distributed sensor management
NASA Astrophysics Data System (ADS)
Mayott, Gregory; Miller, Gordon; Harrell, John; Hepp, Jared; Self, Mid
2010-04-01
Since 2003, the US Army's RDECOM CERDEC Night Vision Electronic Sensor Directorate (NVESD) has been developing a distributed Sensor Management System (SMS) that utilizes a framework which demonstrates application layer, net-centric sensor management. The core principles of the design support distributed and dynamic discovery of sensing devices and processes through a multi-layered implementation. This results in a sensor management layer that acts as a System with defined interfaces for which the characteristics, parameters, and behaviors can be described. Within the framework, the definition of a protocol is required to establish the rules for how distributed sensors should operate. The protocol defines the behaviors, capabilities, and message structures needed to operate within the functional design boundaries. The protocol definition addresses the requirements for a device (sensors or processes) to dynamically join or leave a sensor network, dynamically describe device control and data capabilities, and allow dynamic addressing of publish and subscribe functionality. The message structure is a multi-tiered definition that identifies standard, extended, and payload representations that are specifically designed to accommodate the need for standard representations of common functions, while supporting the need for feature-based functions that are typically vendor specific. The dynamic qualities of the protocol enable a User GUI application the flexibility of mapping widget-level controls to each device based on reported capabilities in real-time. The SMS approach is designed to accommodate scalability and flexibility within a defined architecture. The distributed sensor management framework and its application to a tactical sensor network will be described in this paper.
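The multi-tiered message structure (standard, extended, and payload representations) can be pictured with simple dataclasses; the field names below are invented for illustration and do not come from the SMS protocol definition:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class StandardHeader:
    """Fields every device on the sensor network is expected to understand."""
    device_id: str
    message_type: str        # e.g. "join", "capability", "data"
    timestamp: float

@dataclass
class ExtendedHeader:
    """Optional, capability-dependent fields reported at discovery time."""
    vendor: str = ""
    capabilities: list = field(default_factory=list)

@dataclass
class SensorMessage:
    standard: StandardHeader
    extended: Optional[ExtendedHeader] = None
    payload: bytes = b""     # vendor- or feature-specific content

msg = SensorMessage(
    StandardHeader("cam-07", "capability", 1270000000.0),
    ExtendedHeader("acme", ["pan", "tilt", "zoom"]),
)
print(msg.standard.message_type, msg.extended.capabilities)
```

A control GUI that only understands the standard tier can still join devices and subscribe to data, while one that parses the extended tier can map widget-level controls to each reported capability, as described above.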
DOT National Transportation Integrated Search
2001-09-01
This document presents an example of mechanistic design and analysis using a mix design and testing protocol. More specifically, it addresses the structural properties of lime-treated subgrade, subbase, and base layers through mechanistic design ...
cadcVOFS: A FUSE Based File System Layer for VOSpace
NASA Astrophysics Data System (ADS)
Kavelaars, J.; Dowler, P.; Jenkins, D.; Hill, N.; Damian, A.
2012-09-01
The CADC is now making extensive use of the VOSpace protocol for user managed storage. The VOSpace standard allows a diverse set of rich data services to be delivered to users via a simple protocol. We have recently developed cadcVOFS, a FUSE based filesystem layer for VOSpace. cadcVOFS provides a filesystem layer on top of VOSpace so that standard Unix tools (such as 'find', 'emacs', 'awk', etc.) can be used directly on the data objects stored in VOSpace. Once mounted, the VOSpace appears as a network storage volume inside the operating system. Within the CADC Cloud Computing project (CANFAR) we have used VOSpace as the method for retrieving and storing processing inputs and products. The abstraction of storage is an important component of Cloud Computing and the high use level of our VOSpace service reflects this.
You, Ilsun; Kwon, Soonhyun; Choudhary, Gaurav; Sharma, Vishal; Seo, Jung Taek
2018-06-08
The Internet of Things (IoT) utilizes algorithms to facilitate intelligent applications across cities in the form of smart-urban projects. As the majority of devices in the IoT are battery operated, their applications should be supported by a low-power communication setup. Such a facility is possible through the Low-Power Wide-Area Network (LPWAN), but at a constrained bit rate. For long-range communication over LPWAN, several approaches and protocols are adopted. One such protocol is the Long-Range Wide Area Network (LoRaWAN), a media access layer protocol for long-range communication between the devices and the application servers via LPWAN gateways. However, LoRaWAN provides only limited security features, since a more heavily secured protocol would consume more battery power owing to its computational overheads. The standard protocol fails to support end-to-end security and perfect forward secrecy and is vulnerable to replay attacks, which limits LoRaWAN in supporting applications where security (especially end-to-end security) is important. Motivated by this, an enhanced LoRaWAN security protocol is proposed, which not only provides the basic functions of connectivity between the application server and the end device, but additionally averts these listed security issues. The proposed protocol is developed with two options, the Default Option (DO) and the Security-Enhanced Option (SEO). The protocol is validated through Burrows-Abadi-Needham (BAN) logic and the Automated Validation of Internet Security Protocols and Applications (AVISPA) tool. The proposed protocol is also analyzed for overheads through system-based and low-power device-based evaluations. Further, a case study on a smart factory-enabled parking system is considered for its practical application. The results, in terms of network latency with reliability fitting and signaling overheads, show paramount improvements and better performance for the proposed protocol compared with the two handshake options, Pre-Shared Key (PSK) and Elliptic Curve Cryptography (ECC), of Datagram Transport Layer Security (DTLS).
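As a rough illustration of the kind of per-session key derivation and replay protection such an enhanced join could use (the actual key schedule and counters of the proposed DO/SEO options are defined in the paper, not here), consider:

```python
import hmac
import hashlib
import os
import struct

ROOT_KEY = os.urandom(16)   # AppKey-like root secret shared with the server (assumed)

def derive_session_keys(root_key: bytes, dev_nonce: int, server_nonce: int):
    """Derive distinct network and application session keys from fresh join nonces."""
    seed = struct.pack("!HH", dev_nonce, server_nonce)
    nwk = hmac.new(root_key, b"nwk" + seed, hashlib.sha256).digest()[:16]
    app = hmac.new(root_key, b"app" + seed, hashlib.sha256).digest()[:16]
    return nwk, app

class ReplayGuard:
    """Reject frames whose counter does not strictly increase."""
    def __init__(self):
        self.last_counter = -1
    def accept(self, counter: int) -> bool:
        if counter <= self.last_counter:
            return False
        self.last_counter = counter
        return True

nwk_key, app_key = derive_session_keys(ROOT_KEY, 0x0001, 0x00A1)
guard = ReplayGuard()
print(guard.accept(1), guard.accept(1))   # True False: the second frame is a replay
```

Deriving fresh session keys from per-join nonces is one way to approach forward secrecy for subsequent sessions; an end-to-end application key kept off the gateways is what distinguishes end-to-end security from the hop-by-hop model.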
Effect of different analyte diffusion/adsorption protocols on SERS signals
NASA Astrophysics Data System (ADS)
Li, Ruoping; Petschek, Rolfe G.; Han, Junhe; Huang, Mingju
2018-07-01
The effect of different analyte diffusion/adsorption protocols, which is often overlooked in the surface-enhanced Raman scattering (SERS) technique, was studied. Three protocols were examined: a highly concentrated dilution (HCD) protocol, a half-half dilution (HHD) protocol and a layered adsorption (LA) protocol; the SERS substrates were monolayer films of 80 nm Ag nanoparticles (NPs) modified with polyvinylpyrrolidone. The diffusion/adsorption mechanisms were modelled using the diffusion equation, and the electromagnetic field distribution of two adjacent Ag NPs was simulated by the finite-difference time-domain method. All experimental data and theoretical analysis suggest that different diffusion/adsorption behaviour of analytes will cause different SERS signal enhancements. The HHD protocol produced the most uniform and reproducible samples, and the corresponding signal intensity of the analyte was the strongest. This study will help to understand and promote the use of the SERS technique in quantitative analysis.
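A minimal sketch of the 1-D diffusion modelling mentioned above, using an explicit finite-difference update of dc/dt = D d2c/dx2; the grid, diffusion coefficient and initial profile are placeholders rather than the paper's values:

```python
import numpy as np

def diffuse_1d(c0, D, dx, dt, steps):
    """Explicit FTCS update of the 1-D diffusion equation; stable for D*dt/dx**2 <= 0.5."""
    c = c0.copy()
    r = D * dt / dx**2
    for _ in range(steps):
        c[1:-1] = c[1:-1] + r * (c[2:] - 2 * c[1:-1] + c[:-2])
        c[0], c[-1] = c[1], c[-2]   # no-flux boundaries at both ends
    return c

# analyte initially concentrated near the top of the liquid column above the substrate
c0 = np.zeros(100)
c0[:10] = 1.0
profile = diffuse_1d(c0, D=1e-2, dx=0.1, dt=0.1, steps=1000)
print(profile[:5].round(3), profile[-5:].round(3))
```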
NASA Technical Reports Server (NTRS)
Feng, C.; Sun, X.; Shen, Y. N.; Lombardi, Fabrizio
1992-01-01
This paper covers verification and protocol validation for distributed computer and communication systems using a computer aided testing approach. Validation and verification make up the so-called process of conformance testing. Protocol applications which pass conformance testing are then checked to see whether they can operate together; this is referred to as interoperability testing. A new comprehensive approach to protocol testing is presented which addresses: (1) modeling for inter-layer representation for compatibility between conformance and interoperability testing; (2) computational improvement to current testing methods by using the proposed model, inclusive of formulation of new qualitative and quantitative measures and time-dependent behavior; (3) analysis and evaluation of protocol behavior for interactive testing without extensive simulation.
Enhancement of Beaconless Location-Based Routing with Signal Strength Assistance for Ad-Hoc Networks
NASA Astrophysics Data System (ADS)
Chen, Guowei; Itoh, Kenichi; Sato, Takuro
Routing in ad-hoc networks is unreliable due to the mobility of the nodes. Location-based routing protocols, unlike other protocols which rely on flooding, excel in network scalability. Furthermore, new location-based routing protocols such as BLR [1], IGF [2] and CBF [3] have been proposed that do not require beacons in the MAC layer, which improves scalability further. Such beaconless routing protocols can work efficiently in dense network areas. However, their algorithms have no ability to avoid routing into sparse areas. In this article, historical signal strength is added as a factor to the BLR algorithm, which avoids routing into sparse areas and consequently improves the global routing efficiency.
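A hedged sketch of a forwarding decision that folds historical signal strength into the usual greedy geographic-progress metric, in the spirit of the enhancement described above; the weighting and normalisation are assumptions, since the paper's metric is not reproduced in the abstract:

```python
import math

def score_candidate(node, dest, progress_weight=0.7, signal_weight=0.3):
    """
    node: dict with 'pos' (x, y) and 'avg_rssi_norm' in [0, 1], the historical
    signal strength normalised so that dense, well-connected areas score high.
    dest: (x, y) of the destination. Higher score = better relay candidate.
    """
    dist_to_dest = math.dist(node["pos"], dest)
    progress = 1.0 / (1.0 + dist_to_dest)        # closer to the destination is better
    return progress_weight * progress + signal_weight * node["avg_rssi_norm"]

candidates = [
    {"id": "n1", "pos": (40, 10), "avg_rssi_norm": 0.2},   # heading into a sparse area
    {"id": "n2", "pos": (45, 12), "avg_rssi_norm": 0.9},   # heading into a dense area
]
best = max(candidates, key=lambda n: score_candidate(n, dest=(100, 0)))
print(best["id"])
```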
Performance Evaluation of FAST TCP Traffic-Flows in Multihomed MANETs
NASA Astrophysics Data System (ADS)
Mudassir, Mumajjed Ul; Akram, Adeel
In Mobile Ad hoc Networks (MANETs) an efficient communication protocol is required at the transport layer. Mobile nodes moving around will have temporary and rather short-lived connectivity with each other and the Internet, thus requiring efficient utilization of network resources. Moreover the problems arising due to high mobility, collision and congestion must also be considered. Multihoming allows higher reliability and enhancement of network throughput. FAST TCP is a new promising transport layer protocol developed for high-speed high-latency networks. In this paper, we have analyzed the performance of FAST TCP traffic flows in multihomed MANETs and compared it with standard TCP (TCP Reno) traffic flows in non-multihomed MANETs.
NASA Technical Reports Server (NTRS)
Powell, John D.
2003-01-01
This document discusses the verification of the Secure Socket Layer (SSL) communication protocol as a demonstration of the Model Based Verification (MBV) portion of the verification instrument set being developed under the Reducing Software Security Risk Through an Integrated Approach research initiative. Code Q of the National Aeronautics and Space Administration (NASA) funds this project. The NASA Goddard Independent Verification and Validation (IV&V) facility manages this research program at the NASA agency level and the Assurance Technology Program Office (ATPO) manages the research locally at the Jet Propulsion Laboratory (California Institute of Technology), where the research is being carried out.
Real time UNIX in embedded control-a case study within the context of LynxOS
NASA Astrophysics Data System (ADS)
Kleines, H.; Zwoll, K.
1996-02-01
Intelligent communication controllers for a layered protocol profile are a typical example of an embedded control application, where the classical approach to software development is based on a proprietary real-time operating system kernel under which the individual layers are implemented as tasks. Based on an exemplary implementation of a derivative of MAP 3.0, an unusual and innovative approach is presented in which the protocol software is implemented under the UNIX-compatible real-time operating system LynxOS. The overall design of the embedded control application is presented from a more general viewpoint, and economic implications as well as aspects of the development environment and performance are discussed.
Guo, Xiangjun; Miao, Hui; Li, Lei; Zhang, Shasha; Zhou, Dongyan; Lu, Yan; Wu, Ligeng
2014-09-08
Efforts to improve the efficacy of smear layer removal by applying irrigant activation at the final irrigation or by elevating the temperature of the irrigant have been reported. However, the combination of such activation protocols with 60 °C 3% sodium hypochlorite (NaOCl) has seldom been mentioned. The aim of this study was to compare the efficacy in smear layer removal of four different irrigation techniques combined with 60 °C 3% NaOCl and 17% EDTA. Fifty single-rooted teeth were randomly divided into five groups (n = 10) according to the irrigant agitation protocol used during chemomechanical preparation (Dentsply Maillefer, Ballaigues, Switzerland): a side-vented needle group, an ultrasonic irrigation (UI) group, a NaviTip FX group, an EndoActivator group, and a control group (no agitation). After each instrumentation, the root canals were irrigated with 1 mL of 3% NaOCl at 60 °C for 1 minute, and after the whole instrumentation, the root canals were rinsed with 1 mL of 17% EDTA for 1 minute. Both NaOCl and EDTA were activated with one of the five irrigation protocols. The efficacy of smear layer removal was scored at the apical, middle and coronal thirds. The data were statistically analyzed using SAS version 9.2 for Windows (rank sum test for a randomised block design and ANOVA). No significant differences were found among the NaviTip FX, EndoActivator and control groups, and each of these groups showed a lower score than the UI group (P < 0.05). Within each group, the three thirds were ranked in the following order: coronal > middle > apical (P < 0.05). In the coronal third, the NaviTip FX group was better than the UI group. In the middle and apical thirds, the differences among the groups were not significant. Even without any activation, the combination of 60 °C 3% NaOCl and 17% EDTA could remove the smear layer effectively, similarly to NaviTip FX or EndoActivator, and these three protocols were more effective than UI. However, regardless of the irrigation technique applied, complete removal of the smear layer was not achieved, particularly in the apical third.
Privacy Preserved and Secured Reliable Routing Protocol for Wireless Mesh Networks.
Meganathan, Navamani Thandava; Palanichamy, Yogesh
2015-01-01
Privacy preservation and security provision against internal attacks in wireless mesh networks (WMNs) are more demanding than in wired networks due to the open nature and mobility of certain nodes in the network. Several schemes have been proposed to preserve privacy and provide security in WMNs. To provide complete privacy protection in WMNs, the properties of unobservability, unlinkability, and anonymity are to be ensured during route discovery. These properties can be achieved by implementing group signature and ID-based encryption schemes during route discovery. Due to their characteristics, WMNs are more vulnerable to many network-layer attacks. Hence, strong protection is needed to avoid these attacks, and this can be achieved by introducing a new Cross-Layer and Subject Logic based Dynamic Reputation (CLSL-DR) mechanism during route discovery. In this paper, we propose a new Privacy preserved and Secured Reliable Routing (PSRR) protocol for WMNs. This protocol incorporates group signature, ID-based encryption schemes, and the CLSL-DR mechanism to ensure strong privacy, security, and reliability in WMNs. Simulation results prove this by showing better performance in terms of most of the chosen parameters than the existing protocols.
Protocol for a Delay-Tolerant Data-Communication Network
NASA Technical Reports Server (NTRS)
Torgerson, Jordan; Hooke, Adrian; Burleigh, Scott; Fall, Kevin
2004-01-01
As its name partly indicates, the Delay-Tolerant Networking (DTN) Bundle Protocol is a protocol for delay-tolerant transmission of data via communication networks. This protocol was conceived as a result of studies of how to adapt Internet protocols so that Internet-like services could be provided across interplanetary distances in support of deep-space exploration. The protocol, and software to implement the protocol, are being developed in collaboration among experts at NASA's Jet Propulsion Laboratory and other institutions. No current Internet protocols can accommodate long transmission delay times or intermittent link connectivity. The DTN Bundle Protocol represents a departure from the standard Internet assumption that a continuous path is available from a host computer to a client computer: it provides for routing of data through networks that may be disjointed and may be characterized by long transmission delays. In addition to networks that include deep-space communication links, examples of such networks include terrestrial ones within which branches are temporarily disconnected. The protocol is based partly on the definition of a message-based overlay above the transport layers of the networks on which it is hosted.
Hughes, Laurie; Wang, Xinheng; Chen, Tao
2012-01-01
The issues inherent in caring for an ever-increasing aged population have been the subject of endless debate and continue to be a hot topic for political discussion. The use of hospital-based facilities for the monitoring of chronic physiological conditions is expensive and ties up key healthcare professionals. The introduction of wireless sensor devices as part of a Wireless Body Area Network (WBAN) integrated within an overall eHealth solution could bring a step change in the remote management of patient healthcare. Sensor devices small enough to be placed either inside or on the human body can form a vital part of an overall health monitoring network. An effectively designed energy-efficient WBAN should have a minimal impact on the mobility and lifestyle of the patient. WBAN technology can be deployed within a hospital, a care home environment or in the patient's own home. This study is a review of the existing research in the area of WBAN technology and in particular protocol adaptation and energy-efficient cross-layer design. The research reviews the work carried out across various layers of the protocol stack and highlights how the latest research proposes to resolve the various challenges inherent in remote continual healthcare monitoring. PMID:23202185
The In Situ Enzymatic Screening (ISES) Approach to Reaction Discovery and Catalyst Identification.
Swyka, Robert A; Berkowitz, David B
2017-12-14
The importance of discovering new chemical transformations and/or optimizing catalytic combinations has led to a flurry of activity in reaction screening. The in situ enzymatic screening (ISES) approach described here utilizes biological tools (enzymes/cofactors) to advance chemistry. The protocol interfaces an organic reaction layer with an adjacent aqueous layer containing reporting enzymes that act upon the organic reaction product, giving rise to a spectroscopic signal. ISES allows the experimentalist to rapidly glean information on the relative rates of a set of parallel organic/organometallic reactions under investigation, without the need to quench the reactions or draw aliquots. In certain cases, the real-time enzymatic readout also provides information on sense and magnitude of enantioselectivity and substrate specificity. This article contains protocols for single-well (relative rate) and double-well (relative rate/enantiomeric excess) ISES, in addition to a colorimetric ISES protocol and a miniaturized double-well procedure. © 2017 by John Wiley & Sons, Inc.
Physical layer simulation study for the coexistence of WLAN standards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howlader, M. K.; Keiger, C.; Ewing, P. D.
This paper presents the results of a study on the performance of wireless local area network (WLAN) devices in the presence of interference from other wireless devices. To understand the coexistence of these wireless protocols, simplified physical-layer-system models were developed for the Bluetooth, Wireless Fidelity (WiFi), and Zigbee devices, all of which operate within the 2.4-GHz frequency band. The performances of these protocols were evaluated using Monte-Carlo simulations under various interference and channel conditions. The channel models considered were basic additive white Gaussian noise (AWGN), Rayleigh fading, and site-specific fading. The study also incorporated the basic modulation schemes, multiple access techniques, and channel allocations of the three protocols. This research is helping the U.S. Nuclear Regulatory Commission (NRC) understand the coexistence issues associated with deploying wireless devices and could prove useful in the development of a technical basis for guidance to address safety-related issues with the implementation of wireless systems in nuclear facilities. (authors)
Fast formation cycling for lithium ion batteries
An, Seong Jin; Li, Jianlin; Du, Zhijia; ...
2017-01-09
The formation process for lithium ion batteries typically takes several days or more, and it is necessary for providing a stable solid electrolyte interphase on the anode (at low potentials vs. Li/Li+) to prevent irreversible consumption of electrolyte and lithium ions. An analogous layer, known as the cathode electrolyte interphase, forms at the cathode at high potentials vs. Li/Li+. However, these processes take several days, or even up to a week, resulting in either lower LIB production rates or a prohibitively large size of charging-discharging equipment and space (i.e., excessive capital cost). In this study, a fast and effective electrolyte interphase formation protocol is proposed and compared with an Oak Ridge National Laboratory baseline protocol. Graphite, NMC 532, and 1.2 M LiPF6 in ethylene carbonate:diethyl carbonate were used as anodes, cathodes, and electrolytes, respectively. Finally, results from electrochemical impedance spectroscopy show that the new protocol reduced surface film (electrolyte interphase) resistances, and 1300 aging cycles show an improvement in capacity retention.
SCTP as scalable video coding transport
NASA Astrophysics Data System (ADS)
Ortiz, Jordi; Graciá, Eduardo Martínez; Skarmeta, Antonio F.
2013-12-01
This study presents an evaluation of the Stream Control Transmission Protocol (SCTP) for the transport of the scalable video codec (SVC), proposed by MPEG as an extension to H.264/AVC. Both technologies fit together properly. On the one hand, SVC permits the bitstream to be easily split into substreams carrying different video layers, each with different importance for the reconstruction of the complete video sequence at the receiver end. On the other hand, SCTP includes features, such as multi-streaming and multi-homing capabilities, that permit the SVC layers to be transported robustly and efficiently. Several transmission strategies supported on baseline SCTP and its concurrent multipath transfer (CMT) extension are compared with the classical solutions based on the Transmission Control Protocol (TCP) and the Real-time Transport Protocol (RTP). Using ns-2 simulations, it is shown that CMT-SCTP outperforms TCP and RTP in error-prone networking environments. The comparison is established according to several performance measurements, including delay, throughput, packet loss, and peak signal-to-noise ratio of the received video.
NASA Astrophysics Data System (ADS)
Schiavon, Nick; de Palmas, Anna; Bulla, Claudio; Piga, Giampaolo; Brunetti, Antonio
2016-09-01
A spectrometric protocol combining Energy Dispersive X-Ray Fluorescence Spectrometry with Monte Carlo simulations of experimental spectra using the XRMC code package has been applied for the first time to characterize the elemental composition of a series of famous Iron Age small-scale archaeological bronze replicas of ships (known as the "Navicelle") from the Nuragic civilization in Sardinia, Italy. The proposed protocol is a useful, nondestructive and fast analytical tool for Cultural Heritage samples. In the Monte Carlo simulations, each sample was modeled as a multilayered object composed of two or three layers depending on the sample: when all are present, the three layers are the original bronze substrate, the surface corrosion patina and an outermost protective layer (Paraloid) applied during past restorations. The Monte Carlo simulations were able to account for the presence of the patina/corrosion layer as well as the presence of the Paraloid protective layer. They also accounted for the roughness effect commonly found at the surface of corroded metal archaeological artifacts. In this respect, the Monte Carlo simulation approach adopted here was, to the best of our knowledge, unique and made it possible to determine the bronze alloy composition together with the thickness of the surface layers without the need to previously remove the surface patinas, a process potentially threatening the preservation of precious archaeological/artistic artifacts for future generations.
Network protocols for real-time applications
NASA Technical Reports Server (NTRS)
Johnson, Marjory J.
1987-01-01
The Fiber Distributed Data Interface (FDDI) and the SAE AE-9B High Speed Ring Bus (HSRB) are emerging standards for high-performance token ring local area networks. FDDI was designed to be a general-purpose high-performance network. HSRB was designed specifically for military real-time applications. A workshop was conducted at NASA Ames Research Center in January, 1987 to compare and contrast these protocols with respect to their ability to support real-time applications. This report summarizes workshop presentations and includes an independent comparison of the two protocols. A conclusion reached at the workshop was that current protocols for the upper layers of the Open Systems Interconnection (OSI) network model are inadequate for real-time applications.
A novel interacting multiple model based network intrusion detection scheme
NASA Astrophysics Data System (ADS)
Xin, Ruichi; Venkatasubramanian, Vijay; Leung, Henry
2006-04-01
In today's information age, information and network security are of primary importance to any organization. Network intrusion is a serious threat to the security of computers and data networks. In internet protocol (IP) based networks, intrusions originate in different kinds of packets/messages contained in the open system interconnection (OSI) layer 3 or higher layers. Network intrusion detection and prevention systems observe the layer 3 packets (or layer 4 to 7 messages) to screen for intrusions and security threats. Signature-based methods use a pre-existing database that documents intrusion patterns as perceived in the layer 3 to 7 protocol traffic and match the incoming traffic against it for potential intrusion attacks. Alternatively, network traffic data can be modeled and any large anomaly from the established traffic pattern can be detected as a network intrusion. The latter method, also known as anomaly-based detection, is gaining popularity for its versatility in learning new patterns and discovering new attacks. It is apparent that for reliable performance, an accurate model of the network data needs to be established. In this paper, we illustrate using collected data that network traffic is seldom stationary. We propose the use of multiple models to accurately represent the traffic data. The improvement in reliability of the proposed model is verified by measuring the detection and false alarm rates on several datasets.
DoD Message Protocol Report. Volume I. Message Protocol Specification.
1981-12-15
The extracted fragments of this report cover status-reporting services (acknowledgements and processing status), note that envelopes give processing instructions and/or descriptions of their contents, that data are not altered (as regards content) by the CBMS, and that user agents (UAs), being tailored to individual users' requirements, are viewed as application-layer processes whose potential diversity makes verification difficult.
The importance of the Montreal Protocol in protecting climate.
Velders, Guus J M; Andersen, Stephen O; Daniel, John S; Fahey, David W; McFarland, Mack
2007-03-20
The 1987 Montreal Protocol on Substances that Deplete the Ozone Layer is a landmark agreement that has successfully reduced the global production, consumption, and emissions of ozone-depleting substances (ODSs). ODSs are also greenhouse gases that contribute to the radiative forcing of climate change. Using historical ODS emissions and scenarios of potential emissions, we show that the ODS contribution to radiative forcing most likely would have been much larger if the ODS link to stratospheric ozone depletion had not been recognized in 1974 and followed by a series of regulations. The climate protection already achieved by the Montreal Protocol alone is far larger than the reduction target of the first commitment period of the Kyoto Protocol. Additional climate benefits that are significant compared with the Kyoto Protocol reduction target could be achieved by actions under the Montreal Protocol, by managing the emissions of substitute fluorocarbon gases and/or implementing alternative gases with lower global warming potentials.
Packet-Based Protocol Efficiency for Aeronautical and Satellite Communications
NASA Technical Reports Server (NTRS)
Carek, David A.
2005-01-01
This paper examines the relation between bit error ratios and the effective link efficiency when transporting data with a packet-based protocol. Relations are developed to quantify the impact of a protocol's packet size and header size relative to the bit error ratio of the underlying link. These relations are examined in the context of radio transmissions that exhibit variable error conditions, such as those used in satellite, aeronautical, and other wireless networks. A comparison of two packet sizing methodologies is presented. From these relations, the true ability of a link to deliver user data, or information, is determined. Relations are developed to calculate the optimal protocol packet size for given link error characteristics. These relations could be useful in future research for developing an adaptive protocol layer. They can also be used for sizing protocols in the design of static links, where bit error ratios have small variability.
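A minimal numerical sketch of this kind of relation, assuming independent bit errors and retransmission of corrupted packets; the header size and the closed-form relations of the paper are not reproduced here, and the brute-force search is purely illustrative:

```python
def link_efficiency(packet_bits: int, header_bits: int, ber: float) -> float:
    """Fraction of link capacity delivering user data, assuming a packet is
    retransmitted until received error-free and bit errors are independent."""
    payload_fraction = (packet_bits - header_bits) / packet_bits
    p_success = (1.0 - ber) ** packet_bits
    return payload_fraction * p_success

def optimal_packet_bits(header_bits: int, ber: float, max_bits: int = 65536) -> int:
    """Brute-force search for the packet size that maximizes link efficiency."""
    sizes = range(header_bits + 1, max_bits)
    return max(sizes, key=lambda n: link_efficiency(n, header_bits, ber))

if __name__ == "__main__":
    for ber in (1e-7, 1e-5, 1e-3):   # illustrative error ratios
        n = optimal_packet_bits(header_bits=160, ber=ber)
        print(f"BER={ber:g}: optimal packet ~ {n} bits, "
              f"efficiency ~ {link_efficiency(n, 160, ber):.3f}")
```

As the sketch shows, the optimal packet shrinks sharply as the bit error ratio grows, which is the behavior an adaptive protocol layer would exploit.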
Yuan, Chengzhi; Licht, Stephen; He, Haibo
2017-09-26
In this paper, a new concept of formation learning control is introduced to the field of formation control of multiple autonomous underwater vehicles (AUVs), which specifies a joint objective of distributed formation tracking control and learning/identification of nonlinear uncertain AUV dynamics. A novel two-layer distributed formation learning control scheme is proposed, which consists of an upper-layer distributed adaptive observer and a lower-layer decentralized deterministic learning controller. This new formation learning control scheme advances existing techniques in three important ways: 1) the multi-AUV system under consideration has heterogeneous nonlinear uncertain dynamics; 2) the formation learning control protocol can be designed and implemented by each local AUV agent in a fully distributed fashion without using any global information; and 3) in addition to the formation control performance, the distributed control protocol is also capable of accurately identifying the AUVs' heterogeneous nonlinear uncertain dynamics and utilizing experiences to improve formation control performance. Extensive simulations have been conducted to demonstrate the effectiveness of the proposed results.
Immobilization of enzymes by bioaffinity layering.
Singh, Veena; Sardar, Meryam; Gupta, Munishwar Nath
2013-01-01
Bioaffinity immobilization exploits the affinity of the enzyme to a macro-(affinity ligand). Such a macro-(affinity ligand) could be a lectin, a water-soluble polymer, or a bioconjugate of a water-soluble polymer and the appropriate affinity ligand. Successive layering of the enzyme and the macro-(affinity ligand) on a matrix allows deposition of a large amount of enzyme activity on a small surface. Illustrative protocols show affinity layering of a pectinase and horseradish peroxidase on Concanavalin A-agarose and Concanavalin A-Sephadex matrices, respectively.
Cross Layered Multi-Meshed Tree Scheme for Cognitive Networks
2011-06-01
Meshed Tree Routing protocol wireless ad hoc networks ,” Second IEEE International Workshop on Enabling Technologies and Standards for Wireless Mesh ...and Sensor Networks , 2004 43. Chen G.; Stojmenovic I., “Clustering and routing in mobile wireless networks ,” Technical Report TR-99-05, SITE, June...Cross-layer optimization, intra-cluster routing , packet forwarding, inter-cluster routing , mesh network communications,
Yokohama, Noriya; Tsuchimoto, Tadashi; Oishi, Masamichi; Itou, Katsuya
2007-01-20
It has been noted that the downtime of medical informatics systems is often long. Many systems encounter downtimes of hours or even days, which can have a critical effect on daily operations. Such systems remain especially weak in the areas of database and medical imaging data. The schematic design shows the three-layer architecture of the system: application, database, and storage layers. The application layer uses the DICOM protocol (Digital Imaging and Communication in Medicine) and HTTP (Hyper Text Transport Protocol) with AJAX (Asynchronous JavaScript+XML). The database is designed to be decentralized in parallel using cluster technology. Consequently, restoration of the database can be done not only with ease but also with improved retrieval speed. The storage layer, a network RAID (Redundant Array of Independent Disks) system, makes it possible to construct exabyte-scale parallel file systems that exploit distributed storage. Development and evaluation of the test-bed have been successful for medical information data backup and recovery in a network environment. This paper presents a schematic design of the new medical informatics system that supports recovery and a dynamic Web application for medical imaging distribution using AJAX.
Backup key generation model for one-time password security protocol
NASA Astrophysics Data System (ADS)
Jeyanthi, N.; Kundu, Sourav
2017-11-01
The use of one-time passwords (OTP) has ushered new life into the existing authentication protocols used by the software industry. It introduced a second layer of security to the traditional username-password authentication, thus coining the term two-factor authentication. One of the drawbacks of this protocol is the unreliability of the hardware token at the time of authentication. This paper proposes a simple backup key model that can be associated with a real-world application's user database, which would allow a user to circumvent the second authentication stage in the event of unavailability of the hardware token.
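A minimal sketch of such a fallback, assuming hashed single-use backup keys stored alongside the user record; the field names and key format are illustrative, not those of the paper:

```python
import hashlib
import secrets

def _hash(key: str) -> str:
    return hashlib.sha256(key.encode()).hexdigest()

def issue_backup_keys(user_record: dict, count: int = 5):
    """Generate one-time backup keys; only their hashes are stored with the user."""
    keys = [secrets.token_hex(8) for _ in range(count)]
    user_record["backup_key_hashes"] = {_hash(k) for k in keys}
    return keys  # shown to the user once, never stored in the clear

def verify_second_factor(user_record: dict, otp_ok: bool, backup_key=None) -> bool:
    """Accept either a valid OTP or an unused backup key (consumed on use)."""
    if otp_ok:
        return True
    if backup_key is not None:
        h = _hash(backup_key)
        if h in user_record.get("backup_key_hashes", set()):
            user_record["backup_key_hashes"].discard(h)  # single use only
            return True
    return False
```

Consuming the key on use keeps the fallback one-time, so losing a backup key does not permanently weaken the second factor.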
A Taxonomy of Attacks on the DNP3 Protocol
NASA Astrophysics Data System (ADS)
East, Samuel; Butts, Jonathan; Papa, Mauricio; Shenoi, Sujeet
Distributed Network Protocol (DNP3) is the predominant SCADA protocol in the energy sector - more than 75% of North American electric utilities currently use DNP3 for industrial control applications. This paper presents a taxonomy of attacks on the protocol. The attacks are classified based on targets (control center, outstation devices and network/communication paths) and threat categories (interception, interruption, modification and fabrication). To facilitate risk analysis and mitigation strategies, the attacks are associated with the specific DNP3 protocol layers they exploit. Also, the operational impact of the attacks is categorized in terms of three key SCADA objectives: process confidentiality, process awareness and process control. The attack taxonomy clarifies the nature and scope of the threats to DNP3 systems, and can provide insights into the relative costs and benefits of implementing mitigation strategies.
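The classification dimensions named in the abstract can be captured in a small data structure; in the sketch below the dimension values come from the abstract, while the example attack entry is hypothetical, added only to show how an attack would be filed:

```python
from dataclasses import dataclass

TARGETS = {"control center", "outstation", "network path"}
THREATS = {"interception", "interruption", "modification", "fabrication"}
LAYERS = {"data link", "pseudo-transport", "application"}
IMPACTS = {"confidentiality", "awareness", "control"}

@dataclass
class Dnp3Attack:
    name: str
    target: str
    threat: str
    layer: str
    impacts: frozenset

    def __post_init__(self):
        # Every taxonomy entry must use the dimensions defined above.
        assert self.target in TARGETS and self.threat in THREATS
        assert self.layer in LAYERS and self.impacts <= IMPACTS

# Hypothetical example entry, for illustration only.
example = Dnp3Attack(
    name="spoofed unsolicited response",
    target="control center",
    threat="fabrication",
    layer="application",
    impacts=frozenset({"awareness"}),
)
```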
Shen, Yiwen; Hattink, Maarten; Samadi, Payman; ...
2018-04-13
Silicon photonics based switches offer an effective option for the delivery of dynamic bandwidth for future large-scale Datacom systems while maintaining scalable energy efficiency. The integration of a silicon photonics-based optical switching fabric within electronic Datacom architectures requires novel network topologies and arbitration strategies to effectively manage the active elements in the network. Here, we present a scalable software-defined networking control plane to integrate silicon photonic based switches with conventional Ethernet or InfiniBand networks. Our software-defined control plane manages both electronic packet switches and multiple silicon photonic switches for simultaneous packet and circuit switching. We built an experimental Dragonfly network testbed with 16 electronic packet switches and 2 silicon photonic switches to evaluate our control plane. Observed latencies occupied by each step of the switching procedure demonstrate a total of 344 microsecond control plane latency for data-center and high performance computing platforms.
Large Scale Frequent Pattern Mining using MPI One-Sided Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vishnu, Abhinav; Agarwal, Khushbu
In this paper, we propose a work-stealing runtime --- Library for Work Stealing (LibWS) --- using the MPI one-sided model for designing a scalable FP-Growth --- the de facto frequent pattern mining algorithm --- on large scale systems. LibWS provides locality-efficient and highly scalable work-stealing techniques for load balancing on a variety of data distributions. We also propose a novel communication algorithm for the FP-Growth data exchange phase, which reduces the communication complexity from the state-of-the-art O(p) to O(f + p/f) for p processes and f frequent attribute-ids. FP-Growth is implemented using LibWS and evaluated on several work distributions and support counts. An experimental evaluation of FP-Growth on LibWS using 4096 processes on an InfiniBand cluster demonstrates excellent efficiency for several work distributions (87% efficiency for Power-law and 91% for Poisson). The proposed distributed FP-Tree merging algorithm provides 38x communication speedup on 4096 cores.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newman, G.A.; Commer, M.
Three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. The imaging technology employs controlled source electromagnetic (CSEM) and magnetotelluric (MT) fields and treats geological media exhibiting transverse anisotropy. Moreover, when combined with established seismic methods, direct imaging of reservoir fluids is possible. Because of the size of the 3D conductivity imaging problem, strategies are required that exploit computational parallelism and optimal meshing. The algorithm thus developed has been shown to scale to tens of thousands of processors. In one imaging experiment, 32,768 tasks/processors on the IBM Watson Research Blue Gene/L supercomputer were successfully utilized. Over a 24 hour period we were able to image a large scale field data set that previously required over four months of processing time on distributed clusters based on Intel or AMD processors utilizing 1024 tasks on an InfiniBand fabric. Electrical conductivity imaging using massively parallel computational resources produces results that cannot be obtained otherwise and are consistent with timeframes required for practical exploration problems.
Extending the length and time scales of Gram-Schmidt Lyapunov vector computations
NASA Astrophysics Data System (ADS)
Costa, Anthony B.; Green, Jason R.
2013-08-01
Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram-Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² (with the particle count N). This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram-Schmidt vectors. The first is a distributed-memory message-passing method using Scalapack. The second uses the newly released MAGMA library for GPUs. We compare the performance of both codes for Lennard-Jones fluids from N=100 to 1300 between Intel Nehalem/InfiniBand DDR and NVIDIA C2050 architectures. To the best of our knowledge, these are the largest systems for which the Gram-Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
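For reference, a serial NumPy sketch of the standard QR-based (Benettin-style) procedure that underlies such codes; the propagation routine `jacobian_step` is a placeholder supplied by the caller, and this is not the paper's Scalapack/MAGMA implementation:

```python
import numpy as np

def gram_schmidt_lyapunov(jacobian_step, x0, n_steps, dt, n_vec=None):
    """Evolve tangent vectors with repeated QR re-orthonormalization and
    accumulate Lyapunov exponents.

    jacobian_step(x) must return (x_next, J), where J maps tangent vectors
    across one step of length dt (an assumed interface for this sketch)."""
    x = np.asarray(x0, dtype=float)
    dim = len(x)
    n_vec = n_vec or dim
    Q = np.linalg.qr(np.random.randn(dim, n_vec))[0]   # random orthonormal start
    log_r = np.zeros(n_vec)
    for _ in range(n_steps):
        x, J = jacobian_step(x)
        Q, R = np.linalg.qr(J @ Q)            # re-orthonormalize tangent vectors
        sign = np.sign(np.diag(R))
        Q, R = Q * sign, R * sign[:, None]    # keep the diagonal of R positive
        log_r += np.log(np.diag(R))           # accumulate stretching factors
    exponents = log_r / (n_steps * dt)
    return exponents, Q                        # Q columns: Gram-Schmidt vectors
```

The QR factorization of the propagated basis is exactly the matrix operation whose N² growth the abstract refers to, which is why distributed and GPU linear algebra pay off here.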
Roessler, Florian C; Teichert, Andrea; Ohlrich, Marcus; Marxsen, Jan H; Stellmacher, Florian; Tanislav, Christian; Seidel, Günter
2014-11-30
Agreement about the most suitable clot formation protocol for sonothrombolysis investigations is lacking. Lysis rates vary strongly owing to different test conditions and, thus, cannot be compared. We aim to establish a simple but physiologically grounded protocol for in vitro coagulation to enable standardized sonothrombolysis investigations. Clots were generated from platelet-rich plasma (PRP) obtained by centrifugation (10 min, 180 × g) of human venous blood (VB). PRP was mixed with the boundary layer formed between the supernatant and the erythrocyte layer. To achieve clots with different platelet counts, PRP was gradually substituted with platelet-free plasma (PFP), harvested from the supernatant of VB after centrifugation (10 min, 2570 × g). Clot types were examined for histological appearance, hydrodynamic resistance under physiological flows, and lysis rate measured by weight loss after a 2-h treatment with recombinant tissue plasminogen activator (rt-PA) (60 kU/ml). Lysis rates of the most suitable clot were measured after a 1-h treatment with rt-PA (60 kU/ml), and combined treatment with rt-PA and 2-MHz transcranial color-coded sonography (TCCS) (0.179 W/cm(2)) or 2-MHz transcranial Doppler (TCD) (0.457 W/cm(2)). With increased platelet count, the hydrodynamic resistance of the artificial clots increased, their histological appearance became more physiological, and lysis rates decreased. The most suitable clots consisted of 1.5-ml PRP, 2.0-ml PFP, and 0.5-ml boundary layer. Their lysis rates were 36.7 ± 7.8% (rt-PA), 40.8 ± 8.6% (rt-PA+TCCS), and 40.4 ± 8.3% (rt-PA+TCD). These systematic investigations were conducted for the first time. This protocol should be used for standardized sonothrombolysis investigations. Copyright © 2014 Elsevier B.V. All rights reserved.
Lightweight SIP/SDP compression scheme (LSSCS)
NASA Astrophysics Data System (ADS)
Wu, Jian J.; Demetrescu, Cristian
2001-10-01
In UMTS, new IP-based services with tight delay constraints, such as IP multimedia and interactive services, will be deployed over the W-CDMA air interface. To integrate wireline and wireless IP services, the 3GPP standards forum adopted the Session Initiation Protocol (SIP) as the call control protocol for UMTS Release 5, which will implement next-generation, all-IP networks for real-time QoS services. In its current form, the SIP protocol is not suitable for wireless transmission due to its large message size, which will either require a big radio pipe for transmission or take far longer to transmit than the current GSM Call Control (CC) message sequence. In this paper we present a novel compression algorithm called the Lightweight SIP/SDP Compression Scheme (LSSCS), which acts at the SIP application layer and therefore removes the information redundancy before it is sent to the network and transport layers. A binary octet-aligned header is added to the compressed SIP/SDP message before sending it to the network layer. The receiver uses this binary header as well as the pre-cached information to regenerate the original SIP/SDP message. The key features of the LSSCS compression scheme are presented in this paper along with implementation examples. It is shown that this compression algorithm makes SIP transmission efficient over the radio interface without losing the SIP generality and flexibility.
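The general idea of stripping redundancy against pre-cached information and prefixing an octet-aligned binary header can be illustrated with a generic dictionary-based sketch; this uses zlib with a preset dictionary purely for illustration, and the shared dictionary contents and header fields are assumptions, not the LSSCS algorithm itself:

```python
import struct
import zlib

# Pre-cached strings both ends already know (illustrative, not the 3GPP set).
SHARED_DICT = (b"SIP/2.0 INVITE Via: SIP/2.0/UDP Max-Forwards: From: To: "
               b"Call-ID: CSeq: Contact: Content-Type: application/sdp "
               b"Content-Length:")
VERSION = 1

def compress_sip(message: str) -> bytes:
    comp = zlib.compressobj(zdict=SHARED_DICT)
    body = comp.compress(message.encode()) + comp.flush()
    # One-octet version plus two-octet original length: octet-aligned header.
    return struct.pack("!BH", VERSION, len(message)) + body

def decompress_sip(frame: bytes) -> str:
    version, orig_len = struct.unpack("!BH", frame[:3])
    decomp = zlib.decompressobj(zdict=SHARED_DICT)
    message = decomp.decompress(frame[3:]).decode()
    assert version == VERSION and len(message) == orig_len
    return message
```

Because both ends share the dictionary in advance, repeated header tokens cost almost nothing on the air interface, which is the same redundancy-removal principle LSSCS applies at the SIP layer.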
Layer Anti-Ferromagnetism on Bilayer Honeycomb Lattice
Tao, Hong-Shuai; Chen, Yao-Hua; Lin, Heng-Fu; Liu, Hai-Di; Liu, Wu-Ming
2014-01-01
Bilayer honeycomb lattice, with inter-layer tunneling energy, has a parabolic dispersion relation, and the inter-layer hopping can cause the charge imbalance between two sublattices. Here, we investigate the metal-insulator and magnetic phase transitions on the strongly correlated bilayer honeycomb lattice by cellular dynamical mean-field theory combined with continuous time quantum Monte Carlo method. The procedures of magnetic spontaneous symmetry breaking on dimer and non-dimer sites are different, causing a novel phase transition between normal anti-ferromagnet and layer anti-ferromagnet. The whole phase diagrams about the magnetism, temperature, interaction and inter-layer hopping are obtained. Finally, we propose an experimental protocol to observe these phenomena in future optical lattice experiments. PMID:24947369
Software Modules for the Proximity-1 Space Link Interleaved Time Synchronization (PITS) Protocol
NASA Technical Reports Server (NTRS)
Woo, Simon S.; Veregge, John R.; Gao, Jay L.; Clare, Loren P.; Mills, David
2012-01-01
The Proximity-1 Space Link Interleaved Time Synchronization (PITS) protocol provides time distribution and synchronization services for space systems. A software prototype implementation of the PITS algorithm has been developed that also provides the test harness to evaluate the key functionalities of PITS with a simulated data source and sink. PITS integrates time synchronization functionality into the link layer of the CCSDS Proximity-1 Space Link Protocol. The software prototype implements the network packet format, data structures, and transmit- and receive-timestamp functions for a time server and a client. The software also simulates the transmit- and receive-timestamp exchanges via UDP (User Datagram Protocol) sockets between a time server and a time client, and produces relative time offsets and delay estimates.
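Offset and delay estimates of this kind follow from a standard four-timestamp exchange; a minimal UDP client sketch under that assumption (the wire format and the server's reply behavior are illustrative, not the actual PITS packet format):

```python
import socket
import struct
import time

TS_FMT = "!dd"   # server reply: (t2, t3) as network-order doubles (assumed format)

def client_exchange(server):
    """One timestamp exchange over UDP; returns (offset, round-trip delay).

    t1: client transmit time, t2: server receive time,
    t3: server transmit time, t4: client receive time."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    t1 = time.time()
    sock.sendto(struct.pack("!d", t1), server)
    data, _ = sock.recvfrom(64)
    t4 = time.time()
    t2, t3 = struct.unpack(TS_FMT, data)
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # client-server clock offset estimate
    delay = (t4 - t1) - (t3 - t2)            # round-trip path delay estimate
    return offset, delay
```

The same arithmetic applies whether the timestamps travel over UDP in the prototype or over the Proximity-1 link layer in flight systems.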
156 Mbps Ultrahigh-Speed Wireless LAN Prototype in the 38 GHz Band
NASA Astrophysics Data System (ADS)
Wu, Gang; Inoue, Masugi; Murakami, Homare; Hase, Yoshihiro
2001-12-01
This paper describes a 156 Mbps ultrahigh-speed wireless LAN operating in the 38 GHz millimeter (mm)-wave band. The system is a third prototype developed at the Communications Research Laboratory since 1998. Compared with the previous prototypes, the system is faster (156 Mbps) and smaller (volume of radio transceiver less than 1000 cc); it has a larger service area (two overlapping basic service sets) and a longer transmission distance (the protocol can support a distance of more than two hundred meters). The development is focused on the physical layer and the data link control layer, and thus a GMSK-based mm-wave transceiver and an enhanced RS-ISMA (reservation-based slotted idle signal multiple access) protocol are key development components. This paper describes the prototype system's design, configuration, and implementation.
A fiber optic tactical voice/data network based on FDDI
NASA Technical Reports Server (NTRS)
Bergman, L. A.; Hartmayer, R.; Marelid, S.; Wu, W. H.; Edgar, G.; Cassell, P.; Mancini, R.; Kiernicki, J.; Paul, L. J.; Jeng, J.
1988-01-01
An asynchronous high-speed fiber optic local area network is described that supports ordinary data packet traffic simultaneously with synchronous T1 voice traffic over a common FDDI token ring channel. A voice interface module was developed that parses, buffers, and resynchronizes the voice data to the packet network. The technique is general, however, and can be applied to any deterministic class of networks, including multi-tier backbones. A conventional single token access protocol was employed at the lowest layer, with fixed packet sizes for voice and variable for data. In addition, the higher layer packet data protocols are allowed to operate independently of those for the voice, thereby permitting great flexibility in reconfiguring the network. Voice call setup and switching functions were performed external to the network with PABX equipment.
Study on Network Error Analysis and Locating based on Integrated Information Decision System
NASA Astrophysics Data System (ADS)
Yang, F.; Dong, Z. H.
2017-10-01
Integrated information decision system (IIDS) integrates multiple sub-systems developed by many facilities, comprising almost a hundred kinds of software, and provides various services such as email, short messages, drawing and sharing. Because the under-layer protocols differ and user standards are not unified, many errors occur during the setup, configuration, and operation stages, which seriously affect usage. Because the errors are varied and may happen in different operation phases, stages, TCP/IP communication protocol layers, and sub-system software, it is necessary to design a network error analysis and locating tool for IIDS to solve the above problems. This paper studies network error analysis and locating based on IIDS, which provides strong theoretical and technological support for the running and communication of IIDS.
Evaluation and modeling of HIV based on communication theory in biological systems.
Dong, Miaowu; Li, Wenrong; Xu, Xi
2016-12-01
Some forms of communication are used in biological systems, such as HIV transmission in human beings. In this paper, we aim to gain a unique insight into biological communication systems through the analogy between HIV infection and an electrical communication system. The model established in this paper provides researchers with an opportunity to test and simulate various communication systems. We interpret biological communication systems using a telecommunications analogy based on a previously developed layered communication protocol and use the model to describe HIV spreading. We also implement a simulation of HIV infection based on the layered communication protocol to predict the development of this disease, and the results prove the validity of the model. Copyright © 2016 Elsevier B.V. All rights reserved.
Sacrificial adhesive bonding: a powerful method for fabrication of glass microchips
Lima, Renato S.; Leão, Paulo A. G. C.; Piazzetta, Maria H. O.; Monteiro, Alessandra M.; Shiroma, Leandro Y.; Gobbi, Angelo L.; Carrilho, Emanuel
2015-01-01
A new protocol for the fabrication of glass microchips is addressed in this research paper. Initially, the method involves the use of an uncured SU-8 intermediate to seal two glass slides irreversibly, as in conventional adhesive bonding-based approaches. Subsequently, an additional step removes the adhesive layer from the channels. This step relies on a selective development to remove the SU-8 only inside the microchannel, generating glass-like surface properties as demonstrated by specific tests. Named sacrificial adhesive bonding (SAB), the protocol meets the requirements of an ideal microfabrication technique such as throughput, relatively low cost, feasibility for ultra-large-scale integration (ULSI), and high adhesion strength, supporting pressures on the order of 5 MPa. Furthermore, SAB eliminates the use of high temperature, pressure, or potential, enabling the deposition of thin films for electrical or electrochemical experiments. Finally, the SAB protocol is an improvement on SU-8-based bondings described in the literature. Aspects such as substrate/resist adherence, formation of bubbles, and thermal stress were effectively solved by using simple and inexpensive alternatives. PMID:26293346
A robust ECC based mutual authentication protocol with anonymity for session initiation protocol.
Mehmood, Zahid; Chen, Gongliang; Li, Jianhua; Li, Linsen; Alzahrani, Bander
2017-01-01
Over the past few years, the Session Initiation Protocol (SIP) has emerged as a substantial application-layer protocol for multimedia services. It is extensively used for managing, altering, terminating and distributing multimedia sessions. Authentication plays a pivotal role in the SIP environment. Recently, Lu et al. presented an authentication protocol for SIP and claimed that the newly proposed protocol is protected against all familiar attacks. However, detailed analysis shows that Lu et al.'s protocol is exposed to server masquerading and user masquerading attacks. Moreover, it fails to protect the user's identity, and its login and authentication phases are incorrect. In order to establish a suitable and efficient protocol able to overcome all these discrepancies, a robust ECC-based novel mutual authentication mechanism with anonymity for SIP is presented in this manuscript. The improved protocol contains an explicit parameter for the user to cope with the issues of security and correctness, and it is found to be more secure and relatively effective in protecting the user's privacy and resisting user and server masquerading, as verified through comprehensive formal and informal security analysis.
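One generic building block of such ECC-based schemes is an elliptic-curve Diffie-Hellman exchange followed by key derivation; a minimal sketch using the `cryptography` package, offered as a generic illustration rather than the specific protocol proposed in the paper:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def ecdh_session_key(my_private, peer_public) -> bytes:
    """Derive a shared session key from an ECDH exchange; the HKDF info
    label is an arbitrary placeholder for this sketch."""
    shared = my_private.exchange(ec.ECDH(), peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"sip-session").derive(shared)

# Each side generates an ephemeral key pair and exchanges public keys.
user = ec.generate_private_key(ec.SECP256R1())
server = ec.generate_private_key(ec.SECP256R1())
k_user = ecdh_session_key(user, server.public_key())
k_server = ecdh_session_key(server, user.public_key())
assert k_user == k_server   # both ends agree on the session key
```

A full mutual-authentication protocol additionally binds these exchanged values to long-term credentials and pseudonymous identities so that masquerading parties cannot complete the handshake.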
Motwani, Manoj
2017-01-01
To demonstrate how using the Wavelight Contoura measured astigmatism and axis eliminates corneal astigmatism and creates uniformly shaped corneas. A retrospective analysis was conducted of the first 50 eyes to have bilateral full WaveLight ® Contoura LASIK correction of measured astigmatism and axis (vs conventional manifest refraction), using the Layer Yolked Reduction of Astigmatism Protocol in all cases. All patients had astigmatism corrected, and had at least 1 week of follow-up. Accuracy to desired refractive goal was assessed by postoperative refraction, aberration reduction via calculation of polynomials, and postoperative visions were analyzed as a secondary goal. The average difference of astigmatic power from manifest to measured was 0.5462D (with a range of 0-1.69D), and the average difference of axis was 14.94° (with a range of 0°-89°). Forty-seven of 50 eyes had a goal of plano, 3 had a monovision goal. Astigmatism was fully eliminated from all but 2 eyes, and 1 eye had regression with astigmatism. Of the eyes with plano as the goal, 80.85% were 20/15 or better, and 100% were 20/20 or better. Polynomial analysis postoperatively showed that at 6.5 mm, the average C3 was reduced by 86.5% and the average C5 by 85.14%. Using WaveLight ® Contoura measured astigmatism and axis removes higher order aberrations and allows for the creation of a more uniform cornea with accurate removal of astigmatism, and reduction of aberration polynomials. WaveLight ® Contoura successfully links the refractive correction layer and aberration repair layer using the Layer Yolked Reduction of Astigmatism Protocol to demonstrate how aberration removal can affect refractive correction.
Domain Name Server Security (DNSSEC) Protocol Deployment
2014-10-01
all the time. For mobile devices, end-system validation is much more difficult due to the state of their networks, many of which do not allow... way to distribute keying information than the current public-key infrastructure (PKI) allows. In addition, it will take work to convince CDNs and... Control Protocol (TCP) or even DNS over Secure Sockets Layer (SSL). One of the important outcomes of our work is the realization that a "mobile
Improved efficient routing strategy on two-layer complex networks
NASA Astrophysics Data System (ADS)
Ma, Jinlong; Han, Weizhan; Guo, Qing; Zhang, Shuai; Wang, Junfang; Wang, Zhihao
2016-10-01
The traffic dynamics of multi-layer networks has become a hot research topic since many networks are comprised of two or more layers of subnetworks. Due to its low traffic capacity, the traditional shortest path routing (SPR) protocol is susceptible to congestion on two-layer complex networks. In this paper, we propose an efficient routing strategy named the improved global awareness routing (IGAR) strategy, which is based on the betweenness centrality of nodes in the two layers. With the proposed strategy, routing paths can bypass hub nodes of both layers to enhance the transport efficiency. Simulation results show that the IGAR strategy achieves much higher traffic capacity than the SPR and global awareness routing (GAR) strategies. Because of the significantly improved traffic performance, this study is helpful for alleviating congestion on two-layer complex networks.
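A simplified, single-layer sketch of the underlying idea (penalizing paths through high-betweenness hubs) using networkx; the cost function and the parameter `alpha` are assumptions for illustration, not the authors' exact IGAR formulation:

```python
import networkx as nx

def hub_avoiding_path(G: nx.Graph, src, dst, alpha: float = 1.0):
    """Route that biases away from high-betweenness hubs."""
    bc = nx.betweenness_centrality(G)
    for u, v in G.edges():
        # Cost grows with the centrality of the endpoints, so paths tend to
        # detour around hubs instead of funnelling through them.
        G[u][v]["cost"] = 1.0 + alpha * (bc[u] + bc[v])
    return nx.shortest_path(G, src, dst, weight="cost")

G = nx.barabasi_albert_graph(200, 3, seed=1)   # scale-free test topology
print(hub_avoiding_path(G, 0, 150))
```

In the two-layer setting the same penalty would be computed per layer, so that packets avoid the hubs of whichever layer they are currently traversing.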
A new Information publishing system Based on Internet of things
NASA Astrophysics Data System (ADS)
Zhu, Li; Ma, Guoguang
2018-03-01
A new information publishing system based on the Internet of Things is proposed, which is composed of a four-level hierarchical structure: the screen identification layer, the network transport layer, the service management layer and the publishing application layer. In this architecture, the screen identification layer realizes an internet of screens in which geographically dispersed independent screens are connected to the Internet by customized set-top boxes. The service management layer uses the MQTT protocol to implement a lightweight, broker-based publish/subscribe messaging mechanism suited to constrained environments such as the Internet of Things, to solve the bandwidth bottleneck. Meanwhile, cloud-based storage is used to store and manage the rapidly increasing multimedia publishing information. The paper designs and realizes a prototype, SzIoScreen, and gives some related test results.
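A minimal sketch of the broker-based publish/subscribe flow described above, using the paho-mqtt client (1.x callback API); the broker address and topic layout are assumptions for illustration, not details from the paper:

```python
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.org"        # hypothetical service-management-layer broker
TOPIC = "screens/lobby-01/playlist"  # hypothetical per-screen topic

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPIC)          # set-top box subscribes to its own screen topic

def on_message(client, userdata, msg):
    playlist = json.loads(msg.payload)
    print("new publishing content:", playlist)

# Set-top box side: stays connected and reacts to pushed content.
box = mqtt.Client()
box.on_connect = on_connect
box.on_message = on_message
box.connect(BROKER, 1883)
box.loop_start()

# Publishing-application side: pushes new content to that screen's topic.
publisher = mqtt.Client()
publisher.connect(BROKER, 1883)
publisher.publish(TOPIC, json.dumps({"items": ["ad1.mp4", "notice2.png"]}), qos=1)
```

Because the broker fans messages out only to subscribed screens, bandwidth scales with the number of affected screens rather than with the full population of set-top boxes.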
Climent, Salvador; Sanchez, Antonio; Capella, Juan Vicente; Meratnia, Nirvana; Serrano, Juan Jose
2014-01-06
This survey aims to provide a comprehensive overview of the current research on underwater wireless sensor networks, focusing on the lower layers of the communication stack, and envisions future trends and challenges. It analyzes the current state of the art on the physical, medium access control and routing layers. It summarizes their security threats and surveys the currently proposed studies. Current envisioned niches for further advances in underwater networks research range from efficient, low-power algorithms and modulations to intelligent, energy-aware routing and medium access control protocols.
Analyzing the effect of routing protocols on media access control protocols in radio networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barrett, C. L.; Drozda, M.; Marathe, A.
2002-01-01
We study the effect of routing protocols on the performance of media access control (MAC) protocols in wireless radio networks. Three well known MAC protocols: 802.11, CSMA, and MACA are considered. Similarly three recently proposed routing protocols: AODV, DSR and LAR scheme 1 are considered. The experimental analysis was carried out using GloMoSim: a tool for simulating wireless networks. The main focus of our experiments was to study how the routing protocols affect the performance of the MAC protocols when the underlying network and traffic parameters are varied. The performance of the protocols was measured w.r.t. five important parameters: (i) number of received packets, (ii) average latency of each packet, (iii) throughput (iv) long term fairness and (v) number of control packets at the MAC layer level. Our results show that combinations of routing and MAC protocols yield varying performance under varying network topology and traffic situations. The result has an important implication; no combination of routing protocol and MAC protocol is the best over all situations. Also, the performance analysis of protocols at a given level in the protocol stack needs to be studied not locally in isolation but as a part of the complete protocol stack. A novel aspect of our work is the use of statistical technique, ANOVA (Analysis of Variance) to characterize the effect of routing protocols on MAC protocols. This technique is of independent interest and can be utilized in several other simulation and empirical studies.
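The ANOVA step can be reproduced with standard tools; a sketch with made-up throughput samples, where the numbers and factor levels are illustrative only and not taken from the study:

```python
from scipy.stats import f_oneway

# Hypothetical per-run throughput samples for a few routing/MAC combinations.
throughput = {
    ("AODV", "802.11"): [412, 398, 405, 420, 409],
    ("DSR", "802.11"):  [385, 391, 402, 388, 395],
    ("LAR1", "CSMA"):   [301, 322, 315, 308, 310],
}

f_stat, p_value = f_oneway(*throughput.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates the routing/MAC combination has a significant
# effect on throughput; the full study crosses more factors and metrics.
```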
Compact Modbus TCP/IP protocol for data acquisition systems based on limited hardware resources
NASA Astrophysics Data System (ADS)
Bai, Q.; Jin, B.; Wang, D.; Wang, Y.; Liu, X.
2018-04-01
Modbus TCP/IP has become a standard industrial communication protocol and is widely used for establishing sensor-cloud platforms on the Internet. However, numerous existing data acquisition systems built on traditional single-chip microcontrollers without sufficient resources cannot support it, because the complete Modbus TCP/IP protocol depends on a full operating system that occupies abundant hardware resources. Hence, a compact Modbus TCP/IP protocol is proposed in this work to make it run efficiently and stably even on a resource-limited hardware platform. Firstly, the Modbus TCP/IP protocol stack is analyzed and a refined protocol suite is rebuilt by streamlining the typical TCP/IP suite. Then, the specific implementation of every hierarchical layer is presented in detail according to the protocol structure. In addition, the compact protocol is implemented on a traditional microprocessor to validate the feasibility of the scheme. Finally, the performance of the proposed scenario is assessed. The experimental results demonstrate that message packets match the frame format of the Modbus TCP/IP protocol and the average bandwidth reaches 1.15 Mbps. The compact protocol operates stably even on a traditional microcontroller with only 4-kB RAM and a 12-MHz system clock, and no communication congestion or frequent packet loss occurs.
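For reference, the Modbus TCP frame that such an implementation must reproduce (a 7-byte MBAP header followed by the PDU) can be sketched as follows; this mirrors the published frame layout rather than the paper's firmware code, and the register addresses are illustrative:

```python
import struct

def read_holding_registers_request(transaction_id: int, unit_id: int,
                                   start_addr: int, quantity: int) -> bytes:
    """Build a Modbus TCP 'Read Holding Registers' (function 0x03) request."""
    pdu = struct.pack("!BHH", 0x03, start_addr, quantity)
    mbap = struct.pack("!HHHB",
                       transaction_id,   # echoed back by the server
                       0x0000,           # protocol identifier: always 0 for Modbus
                       len(pdu) + 1,     # bytes that follow: unit id + PDU
                       unit_id)
    return mbap + pdu

frame = read_holding_registers_request(1, 0x11, start_addr=0x0000, quantity=4)
print(frame.hex())   # 000100000006110300000004
```

Keeping only this framing plus a minimal TCP/IP path is what lets the protocol fit into a few kilobytes of RAM on a bare microcontroller.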
A post-Kyoto partner: Considering the Montreal Protocol as a tool to manage nitrous oxide
NASA Astrophysics Data System (ADS)
Mauzerall, D. L.; Kanter, D.; Ravishankara, A. R.; Daniel, J. S.; Portmann, R. W.; Grabiel, P.; Moomaw, W.; Galloway, J. N.
2012-12-01
While nitrous oxide (N2O) was recently identified as the largest remaining anthropogenic threat to the stratospheric ozone layer, it is currently regulated under the 1997 Kyoto Protocol due to its simultaneous ability to warm the climate. The threat N2O poses to the stratospheric ozone layer, coupled with the uncertain future of the international climate regime, motivates our exploration of issues that could be relevant to the Parties to the 1987 Montreal Protocol if they decide to take measures to manage N2O in the future. There are clear legal avenues for the Montreal Protocol and its parent treaty, the 1985 Vienna Convention, to regulate N2O, as well as several ways to share authority with the existing and future international climate treaties. N2O mitigation strategies exist to address its most significant anthropogenic sources, including agriculture, where behavioral practices and new technologies could contribute significantly to mitigation efforts. Existing policies managing N2O and other forms of reactive nitrogen could be harnessed and built upon by the Montreal Protocol's existing bodies to implement N2O controls. Given the tight coupling of the nitrogen cycle, such controls would likely simultaneously reduce emissions of reactive nitrogen and hence have co-benefits for ecosystems and public health. Nevertheless, there are at least three major regulatory challenges that are unique and central to N2O control: food security, equity, and the nitrogen cascade. The possible inclusion of N2O in the Montreal Protocol need not be viewed as a sign of the Kyoto Protocol's failure to adequately deal with climate change, given the complexity of the issue. Rather, it could represent an additional tool in the field of sustainable development diplomacy.
Stability and sensitivity of ABR flow control protocols
NASA Astrophysics Data System (ADS)
Tsai, Wie K.; Kim, Yuseok; Chiussi, Fabio; Toh, Chai-Keong
1998-10-01
This tutorial paper surveys the important issues in stability and sensitivity analysis of ABR flow control in ATM networks. The stability and sensitivity issues are formulated in a systematic framework. Four main causes of instability in ABR flow control are identified: unstable control laws, temporal variations of available bandwidth with delayed feedback control, misbehaving components, and interactions between higher layer protocols and ABR flow control. Popular rate-based ABR flow control protocols are evaluated. Stability and sensitivity are shown to be fundamental issues when the network has dynamically varying bandwidth. Simulation results confirming the theoretical studies are provided. Open research problems are discussed.
A Chaos MIMO-OFDM Scheme for Mobile Communication with Physical-Layer Security
NASA Astrophysics Data System (ADS)
Okamoto, Eiji
Chaos communications enable physical-layer security, which can enhance transmission security when combined with upper-layer encryption techniques, or can omit the upper-layer security protocol and increase transmission efficiency. However, chaos communication usually degrades the error rate performance compared to unencrypted digital modulations. To achieve both physical-layer security and channel coding gain, we have proposed a chaos multiple-input multiple-output (MIMO) scheme in which a rate-one chaos convolution is applied to MIMO multiplexing. However, the conventional study considers only flat fading. To apply this scheme to practical mobile environments, i.e., multipath fading channels, we propose a chaos MIMO-orthogonal frequency division multiplexing (OFDM) scheme and show its effectiveness through computer simulations.
Wijdenes, Paula; Brouwers, Michael; van der Sluis, Corry K
2018-02-01
In order to create more uniformity in the prescription of upper limb prostheses by Dutch rehabilitation teams, the development and implementation of a Prosthesis Prescription Protocol of the upper limb (PPP-Arm) was initiated. The aim was to create a national digital protocol to structure, underpin, and evaluate the prescription of upper limb prostheses for clients with acquired or congenital arm defects. The Prosthesis Prescription Protocol of the Arm (PPP-Arm) was developed on the basis of the International Classification of Functioning and consisted of several layers. All stakeholders (rehabilitation teams, orthopedic workshops, patients, and insurance companies) were involved in development and implementation. A national project coordinator and knowledge brokers in each team were essential for the project. PPP-Arm was successfully developed and implemented in nine Dutch rehabilitation teams. The protocol improved team collaboration, the structure and completeness of prosthesis prescriptions, and treatment uniformity, and might be interesting for other countries as well. Clinical relevance: A national protocol to prescribe upper limb prostheses can be helpful to create uniformity in treatment of patients with upper limb defects. Such a protocol improves quality of care for all patients in the country.
Transitioning to Low-GWP Alternatives in Unitary Air Conditioning
This fact sheet provides current information on low-Global Warming Potential (GWP) refrigerant alternatives used in unitary air-conditioning equipment, relevant to the Montreal Protocol on Substances that Deplete the Ozone Layer.
NASA Astrophysics Data System (ADS)
Fathirad, Iraj; Devlin, John; Jiang, Frank
2012-09-01
Key exchange and authentication are two crucial elements of any network security mechanism. IPsec, SSL/TLS, PGP and S/MIME are well-known security approaches that provide security services at the network, transport and application layers; these protocols use different methods (based on their requirements) to establish keying material and to authenticate the key negotiation and the participating parties. This paper studies and compares the authenticated key negotiation methods of the aforementioned protocols.
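As a concrete illustration of peer authentication during key negotiation in one of the surveyed protocols, the sketch below uses Python's standard ssl module to open a TLS connection and verify the server against its X.509 certificate. It is a generic example rather than anything from the paper, and the host name is a placeholder.

    # Minimal sketch: how a TLS client authenticates the server while the
    # session keys are negotiated, using Python's standard ssl module.
    # "example.org" is a placeholder host name.
    import socket
    import ssl

    context = ssl.create_default_context()           # loads trusted CA certificates
    context.minimum_version = ssl.TLSVersion.TLSv1_2

    with socket.create_connection(("example.org", 443)) as raw_sock:
        # The handshake below performs the key negotiation; the server is
        # authenticated against its certificate and the requested host name.
        with context.wrap_socket(raw_sock, server_hostname="example.org") as tls:
            print("negotiated protocol:", tls.version())
            print("cipher suite:", tls.cipher())
            print("server certificate subject:", tls.getpeercert().get("subject"))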
Advanced Shipboard Control Systems
2001-05-07
Nov 1989: 41-47. Carnivale, J. A. DD-21 Presentation. Jan 1999. Deitel, H.M. and P. J. Deitel. C++ How To Program. Upper Saddle River...One distinct advantage to an OSI model is that each level of the network is clearly defined, allowing different users to understand how a specific...Pri) field that tells the layer how this message is to be sent. The L2Hdr is an 8-bit section of the Link Protocol Data Unit/MAC Protocol Data
Strong Password-Based Authentication in TLS Using the Three-PartyGroup Diffie-Hellman Protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdalla, Michel; Bresson, Emmanuel; Chevassut, Olivier
2006-08-26
The Internet has evolved into a very hostile ecosystem where "phishing" attacks are common practice. This paper shows that the three-party group Diffie-Hellman key exchange can help protect against these attacks. We have developed a suite of password-based cipher suites for the Transport Layer Security (TLS) protocol that are not only provably secure but also assumed to be free from patent and licensing restrictions, based on an analysis of relevant patents in the area.
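For readers unfamiliar with the underlying primitive, the toy sketch below shows plain, unauthenticated three-party group Diffie-Hellman; the password-based authentication and the TLS cipher-suite integration that constitute the paper's actual contribution are omitted, and the Mersenne prime used is far too small for real deployments.

    # Toy sketch of unauthenticated three-party group Diffie-Hellman.
    # The 127-bit Mersenne prime and generator are illustrative only and
    # offer no real security; the authentication layer is not shown.
    import secrets

    P = 2**127 - 1          # a known (Mersenne) prime; toy-sized, not secure
    G = 3                    # generator choice is illustrative only

    def keygen():
        x = secrets.randbelow(P - 2) + 1
        return x, pow(G, x, P)

    # Round 1: each party holds a secret exponent and publishes G^x mod P.
    a, A = keygen()          # party A
    b, B = keygen()          # party B
    c, C = keygen()          # party C

    # Round 2: each party exponentiates the value received from its neighbour.
    AB = pow(A, b, P)        # B computes G^(ab)
    BC = pow(B, c, P)        # C computes G^(bc)
    CA = pow(C, a, P)        # A computes G^(ca)

    # Round 3: one more exponentiation gives every party the shared key G^(abc).
    k_A = pow(BC, a, P)
    k_B = pow(CA, b, P)
    k_C = pow(AB, c, P)
    assert k_A == k_B == k_C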
CCSDS Advanced Orbiting Systems Virtual Channel Access Service for QoS MACHETE Model
NASA Technical Reports Server (NTRS)
Jennings, Esther H.; Segui, John S.
2011-01-01
To support various communications requirements imposed by different missions, interplanetary communication protocols need to be designed, validated, and evaluated carefully. Multimission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE), described in "Simulator of Space Communication Networks" (NPO-41373), NASA Tech Briefs, Vol. 29, No. 8 (August 2005), p. 44, combines various tools for simulation and performance analysis of space networks. The MACHETE environment supports orbital analysis, link budget analysis, communications network simulations, and hardware-in-the-loop testing. By building abstract behavioral models of network protocols, one can validate performance after identifying the appropriate metrics of interest. The innovators have extended the MACHETE model library to include a generic link-layer Virtual Channel (VC) model supporting quality-of-service (QoS) controls based on IP streams. The main purpose of this generic Virtual Channel model addition was to interface fine-grain, flow-based QoS between the network and MAC layers of the QualNet simulator, a commercial component of MACHETE. This software model adds the capability of mapping IP streams, based on header fields, to virtual channel numbers, allowing extended QoS handling at the link layer. This feature further refines the QoS already provided at the network layer. QoS at the network layer (e.g., DiffServ) supports only a few QoS classes, so data from one class will be aggregated together; differentiating between flows internal to a class/priority is not supported. By adding QoS classification capability between the network and MAC layers through VCs, one maps multiple VCs onto the same physical link. Users then specify different VC weights, and different queuing and scheduling policies at the link layer. This VC model supports system performance analysis of various virtual channel link-layer QoS queuing schemes independent of the network-layer QoS systems.
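A minimal sketch of the classification idea follows, assuming hypothetical header fields, VC numbers, and weights; it is not the MACHETE/QualNet model interface. Packets are mapped to virtual channels by header fields, and the per-VC queues are drained with a weighted round-robin scheduler at the link layer.

    # Illustrative sketch (not MACHETE code): map IP streams to virtual channels
    # by header fields, then serve the VC queues with weighted round-robin.
    from collections import deque

    # Classification rules: (src, dst, dscp) -> virtual channel number (all hypothetical).
    VC_MAP = {("10.0.0.1", "10.0.0.9", 46): 1,     # e.g. telemetry stream
              ("10.0.0.2", "10.0.0.9", 10): 2}     # e.g. bulk science data
    DEFAULT_VC = 0
    VC_WEIGHTS = {0: 1, 1: 4, 2: 2}                 # scheduling weight per VC

    queues = {vc: deque() for vc in VC_WEIGHTS}

    def classify(packet):
        """Pick a VC from the packet's header fields."""
        key = (packet["src"], packet["dst"], packet["dscp"])
        return VC_MAP.get(key, DEFAULT_VC)

    def enqueue(packet):
        queues[classify(packet)].append(packet)

    def weighted_round_robin():
        """Yield packets to transmit, serving each VC up to its weight per cycle."""
        while any(queues.values()):
            for vc, weight in VC_WEIGHTS.items():
                for _ in range(weight):
                    if queues[vc]:
                        yield vc, queues[vc].popleft()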
NASA Astrophysics Data System (ADS)
Diaz, Kristians; Castañeda, Benjamín; Miranda, César; Lavarello, Roberto; Llanos, Alejandro
2010-03-01
We developed a protocol for the acquisition of digital images and an algorithm for color-based automatic segmentation of cutaneous lesions of Leishmaniasis. The protocol for image acquisition provides control over the working environment to manipulate brightness, lighting, and undesirable shadows on the lesion by using indirect lighting. This protocol was also used to accurately calculate the area of the lesion, expressed in mm2, even on curved surfaces, by combining the information from two consecutive images. Different color spaces were analyzed and compared using ROC curves in order to determine the color layer with the highest contrast between the background and the wound. The proposed algorithm is composed of three stages: (1) location of the wound, determined by applying threshold and mathematical morphology techniques to the H layer of the HSV color space; (2) determination of the boundaries of the wound by analyzing the color characteristics in the YIQ space, based on masks (for the wound and the background) estimated from the first stage; and (3) refinement of the results obtained in the previous stages by using the discrete dynamic contours algorithm. The segmented regions obtained with the algorithm were compared with manual segmentations made by a medical specialist. Broadly speaking, our results support the conclusion that color provides useful information during segmentation and measurement of wounds of cutaneous Leishmaniasis. Results from ten images showed 99% specificity, 89% sensitivity, and 98% accuracy.
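A minimal sketch of stage (1) is shown below, assuming OpenCV is available: the H channel of HSV is thresholded and cleaned with mathematical morphology. The file name and hue range are placeholders, and the YIQ-based boundary estimation and discrete dynamic contour stages are not reproduced here.

    # Minimal sketch of stage (1): locate the wound by thresholding the H layer
    # of HSV and applying morphological opening/closing. Paths and thresholds
    # are placeholders; later stages of the published algorithm are omitted.
    import cv2
    import numpy as np

    img = cv2.imread("lesion.jpg")                       # placeholder image path
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    h_channel = hsv[:, :, 0]

    # Keep hue values typical of reddish lesion tissue (illustrative range).
    mask = np.where(h_channel <= 15, 255, 0).astype(np.uint8)

    # Opening removes speckle, closing fills small holes inside the lesion.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Rough lesion area in pixels; conversion to mm2 requires the two-image
    # calibration step described in the acquisition protocol.
    print("candidate lesion area (pixels):", int(np.count_nonzero(mask)))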
Piao, Jun-Yu; Liu, Xiao-Chan; Wu, Jinpeng; Yang, Wanli; Wei, Zengxi; Ma, Jianmin; Duan, Shu-Yi; Lin, Xi-Jie; Xu, Yan-Song; Cao, An-Min; Wan, Li-Jun
2018-06-28
Surface cobalt doping is an effective and economic way to improve the electrochemical performance of cathode materials. Herein, by tuning the precipitation kinetics of Co2+, we demonstrate an aqueous-based protocol to grow a uniform basic cobaltous carbonate coating layer on different substrates; the thickness of the coating layer can be adjusted precisely with nanometer accuracy. Accordingly, by sintering the cobalt-coated LiNi0.5Mn1.5O4 cathode materials, an epitaxial cobalt-doped surface layer is formed, which acts as a protective layer without hindering charge transfer. Consequently, improved battery performance is obtained because of the suppression of interfacial degradation.
Climent, Salvador; Sanchez, Antonio; Capella, Juan Vicente; Meratnia, Nirvana; Serrano, Juan Jose
2014-01-01
This survey aims to provide a comprehensive overview of the current research on underwater wireless sensor networks, focusing on the lower layers of the communication stack, and envisions future trends and challenges. It analyzes the current state of the art at the physical, medium access control and routing layers. It summarizes their security threats and surveys the currently proposed studies. Currently envisioned niches for further advances in underwater networks research range from efficient, low-power algorithms and modulations to intelligent, energy-aware routing and medium access control protocols. PMID:24399155
Project Integration Architecture: Implementation of the CORBA-Served Application Infrastructure
NASA Technical Reports Server (NTRS)
Jones, William Henry
2005-01-01
The Project Integration Architecture (PIA) has been demonstrated in a single-machine C++ implementation prototype. The architecture is in the process of being migrated to a Common Object Request Broker Architecture (CORBA) implementation. The migration of the Foundation Layer interfaces is fundamentally complete. The implementation of the Application Layer infrastructure for that migration is reported. The Application Layer provides for distributed user identification and authentication, per-user/per-instance access controls, server administration, the formation of mutually-trusting application servers, a server locality protocol, and an ability to search for interface implementations through such trusted server networks.
Average waiting time in FDDI networks with local priorities
NASA Technical Reports Server (NTRS)
Gercek, Gokhan
1994-01-01
A method is introduced to compute the average queuing delay experienced by messages of different priority groups in an FDDI node. It is assumed that no FDDI MAC layer priorities are used. Instead, a priority structure is introduced to the messages locally at a higher protocol layer (e.g., the network layer). Such a method was planned to be used in the Space Station Freedom FDDI network. Conservation of the average waiting time is used as the key concept in computing average queuing delays. It is shown that local priority assignments are feasible, especially when the traffic distribution is asymmetric in the FDDI network.
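The conservation argument is presumably the classical Kleinrock work-conservation law for non-preemptive M/G/1 queues (the abstract does not state the exact form), which fixes the load-weighted sum of the per-class mean waiting times:

    \[
      \sum_i \rho_i\,\overline{W}_i \;=\; \frac{\rho}{1-\rho}\, W_0,
      \qquad
      W_0 \;=\; \sum_i \frac{\lambda_i\,\overline{X_i^2}}{2},
      \qquad
      \rho \;=\; \sum_i \rho_i \;=\; \sum_i \lambda_i\,\overline{X_i},
    \]

where \lambda_i, \overline{X_i}, and \overline{W}_i are the arrival rate, mean service time, and mean waiting time of priority class i. Because the left-hand side does not depend on the local scheduling discipline, reducing the delay of one priority class necessarily increases the load-weighted delay of the others.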
A Real-Time System for Abusive Network Traffic Detection
2011-03-01
examine the spamming behavior at the network layer (IP layer) by correlating data collected from three sources: a sinkhole, a large e-mail provider, and...which spam originates • autonomous systems that sent spam messages to their sinkhole • BGP route announcements With respect to IP address space, their...applications or machines to communicate with each other. They exchange XML-formatted data [58] using the HTTP [59] protocol. Specifically, the client uses the
Local Area Network Strategies and Guidelines for a Peruvian Air Force Computer Center
1991-03-01
service elements to support application processes such as job management, and financial data exchange. The layer also supports the virtual terminal and... virtual file concept. [Ref. 3: p. 285] Essentially, the lowest three layers are concerned with the communication protocols associated with the data...
A network architecture for precision formation flying using the IEEE 802.11 MAC Protocol
NASA Technical Reports Server (NTRS)
Clare, Loren P.; Gao, Jay L.; Jennings, Esther H.; Okino, Clayton
2005-01-01
Precision Formation Flying (PFF) missions involve the tracking and maintenance of spacecraft in a desired geometric formation. The strong coupling of spacecraft in formation flying control requires inter-spacecraft communication to exchange information. In this paper, we present a network architecture that supports PFF control, from the initial random deployment phase to the final formation. We show that a suitable MAC layer for the application protocol is the IEEE 802.11 MAC protocol. The IEEE 802.11 MAC has two modes of operation: DCF and PCF. We show that DCF is suitable for the initial deployment phase, while switching to PCF when the spacecraft are in formation improves jitter and throughput. We also consider the effect of routing on protocol performance and suggest when it is profitable to turn off route discovery to achieve better network performance.
High-Throughput Screening Assay for Embryoid Body Differentiation of Human Embryonic Stem Cells
Outten, Joel T.; Gadue, Paul; French, Deborah L.; Diamond, Scott L.
2012-01-01
Serum-free human pluripotent stem cell media offer the potential to develop reproducible clinically applicable differentiation strategies and protocols. The vast array of possible growth factor and cytokine combinations for media formulations makes differentiation protocol optimization both labor and cost-intensive. This unit describes a 96-well plate, 4-color flow cytometry-based screening assay to optimize pluripotent stem cell differentiation protocols. We provide conditions both to differentiate human embryonic stem cells (hESCs) to the three primary germ layers, ectoderm, endoderm, and mesoderm, and to utilize flow cytometry to distinguish between them. This assay exhibits low inter-well variability and can be utilized to efficiently screen a variety of media formulations, reducing cost, incubator space, and labor. Protocols can be adapted to a variety of differentiation stages and lineages. PMID:22415836
Optical switching using IP protocol
NASA Astrophysics Data System (ADS)
Utreras, Andres J.; Gusqui, Luis; Reyes, Andres; Mena, Ricardo I.; Licenko, Gennady L.; Amirgaliyev, Yedilkhan; Komada, Paweł; Luganskaya, Saule; Kashaganova, Gulzhan
2017-08-01
The present analysis is proposed to understand and evaluate the optical layer and how it will affect IP protocols over WDM switching. Optical communications have attractive properties, but also some disadvantages, so the challenge is to combine the best of both branches. In this paper, general concepts for different switching options are reviewed, such as optical burst switching (OBS) and the automatically switched optical network (ASON). Specific details such as their architectures are also discussed. In addition, the relevant characteristics of each switching variant are reviewed.
Research on a Banknote Printing Wastewater Monitoring System based on Wireless Sensor Network
NASA Astrophysics Data System (ADS)
Li, B. B.; Yuan, Z. F.
2006-10-01
In this paper, a banknote printing wastewater monitoring system based on WSN is presented in line with the system demands and the actual conditions of the worksite of a banknote printing factory. At the physical layer, the network node is an nRF9e5-centric embedded instrument that performs functions such as data collection, status monitoring and wireless data transmission. Limited by computing capability, memory, communication energy and other factors, a node cannot obtain detailed information about the whole network, so the communication protocol of the WSN cannot be very complicated. The contention-based MACA (Multiple Access with Collision Avoidance) protocol is introduced at the MAC layer; it determines the communication process and working mode of the nodes, avoiding data-transmission collisions and the hidden- and exposed-station problems. At the network layer, the routing protocol is in charge of the transmission path of the data, and the network topology is arranged based on address assignment. With some redundant nodes, the network performs stably and is expandable. The wastewater monitoring system is a tentative application of WSN theory in engineering. The system has now passed testing and proved efficient.
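To illustrate the MACA idea referred to above, the toy sketch below models the RTS/CTS handshake and the deferral of overhearing nodes; the frame format and timing are assumptions for illustration, not the nRF9e5 firmware described in the paper.

    # Toy sketch of MACA: a sender issues an RTS, the receiver answers with a
    # CTS, and overhearing nodes defer for the duration announced in the frames.
    class Node:
        def __init__(self, name):
            self.name = name
            self.defer_until = 0.0        # time until which the node stays silent

        def hears(self, frame, now):
            """Overhearing an RTS or CTS addressed to someone else causes deferral."""
            if frame["type"] in ("RTS", "CTS") and frame["dst"] != self.name:
                self.defer_until = max(self.defer_until, now + frame["duration"])

    def maca_exchange(sender, receiver, others, now, data_time=0.05):
        if now < sender.defer_until:
            return False                   # sender itself must stay silent
        rts = {"type": "RTS", "src": sender.name, "dst": receiver.name,
               "duration": data_time}
        for n in others:
            n.hears(rts, now)
        cts = {"type": "CTS", "src": receiver.name, "dst": sender.name,
               "duration": data_time}
        for n in others:
            n.hears(cts, now)
        return True                        # DATA frame may now be sent collision-free

    a, b, c = Node("A"), Node("B"), Node("C")
    ok = maca_exchange(a, b, others=[c], now=0.0)
    print("transmission allowed:", ok, "| C defers until", c.defer_until)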
Providing Internet Access to High-Resolution Lunar Images
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2008-01-01
The OnMoon server is a computer program that provides Internet access to high-resolution Lunar images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of the Moon. The OnMoon server implements the Open Geospatial Consortium (OGC) Web Map Service (WMS) server protocol and supports Moon-specific extensions. Unlike other Internet map servers that provide Lunar data using an Earth coordinate system, the OnMoon server supports encoding of data in Moon-specific coordinate systems. The OnMoon server offers access to most of the available high-resolution Lunar image and elevation data. This server can generate image and map files in the tagged image file format (TIFF) or the Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. Full-precision spectral arithmetic processing is also available, by use of a custom SLD extension. This server can dynamically add shaded relief based on the Lunar elevation to any image layer. This server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.
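As a point of reference for the WMS interface mentioned above, a request to such a server is an ordinary HTTP GET whose query string carries the OGC-defined parameters; the sketch below builds one in Python. The endpoint URL, layer name, and lunar CRS identifier are placeholders, not documented OnMoon values.

    # Sketch of a standard OGC WMS 1.1.1 GetMap request of the kind a WMS server
    # like OnMoon serves. Endpoint, layer name and CRS code are placeholders.
    from urllib.parse import urlencode

    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": "lunar_basemap",            # placeholder layer name
        "STYLES": "",
        "SRS": "IAU2000:30100",               # placeholder lunar CRS identifier
        "BBOX": "-180,-90,180,90",
        "WIDTH": "1024",
        "HEIGHT": "512",
        "FORMAT": "image/jpeg",
    }
    url = "https://example.org/onmoon/wms?" + urlencode(params)
    print(url)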
UV Impacts Avoided by the Montreal Protocol
NASA Technical Reports Server (NTRS)
Newman, Paul; McKenzie, Richard
2010-01-01
Temporal and geographical variabilities in the future "World Expected" UV environment are compared with the "World Avoided", which would have occurred without the Montreal Protocol on protection of the ozone layer and its subsequent amendments and adjustments. Based on calculations of clear-sky UV irradiances, the effects of the Montreal Protocol have been hugely beneficial to avoid the health risks, such as skin cancer, which are associated with high UV, while there is only a small increase in health risks, such as vitamin D deficiency, that are associated with low UV. However, interactions with climate change may lead to changes in cloud and albedo, and possibly behavioural changes which could also be important.
UV impacts avoided by the Montreal Protocol.
Newman, Paul A; McKenzie, Richard
2011-07-01
Temporal and geographical variabilities in the future "world expected" UV environment are compared with the "world avoided", which would have occurred without the Montreal Protocol on Substances That Deplete the Ozone Layer and its subsequent amendments and adjustments. Based on calculations of clear-sky UV irradiances, the effects of the Montreal Protocol have been hugely beneficial to avoid the health risks, such as skin cancer, which are associated with high UV, while there is only a small increase in health risks, such as vitamin D deficiency, that are associated with low UV. However, interactions with climate change may lead to changes in cloud and albedo, and possibly behavioural changes that could also be important.
Somatic Embryogenesis in Two Orchid Genera (Cymbidium, Dendrobium).
da Silva, Jaime A Teixeira; Winarto, Budi
2016-01-01
The protocorm-like body (PLB) is the de facto somatic embryo in orchids. Here we describe detailed protocols for two orchid genera (hybrid Cymbidium Twilight Moon 'Day Light' and Dendrobium 'Jayakarta', D. 'Gradita 31', and D. 'Zahra FR 62') for generating PLBs. These protocols will most likely have to be tweaked for different cultivars as the response of orchids in vitro tends to be dependent on genotype. In addition to primary somatic embryogenesis, secondary (or repetitive) somatic embryogenesis is also described for both genera. The use of thin cell layers as a sensitive tissue assay is outlined for hybrid Cymbidium while the protocol outlined is suitable for bioreactor culture of D. 'Zahra FR 62'.
Influence of salt and rinsing protocol on the structure of PAH/PSS polyelectrolyte multilayers.
Feldötö, Zsombor; Varga, Imre; Blomberg, Eva
2010-11-16
A quartz crystal microbalance (QCM) and dual polarization interferometry (DPI) have been utilized to study how the structure of poly(allylamine hydrochloride) (PAH)/poly(styrene sulfonate) (PSS) multilayers is affected by the rinsing method (i.e., the termination of polyelectrolyte adsorption). The effect of the type of counterions used in the deposition solution was also investigated, and the polyelectrolyte multilayers were formed in a 0.5 M electrolyte solution (NaCl and KBr). From the measurements, it was observed that thicker layers were obtained when using KBr in the deposition solution than when using NaCl. Three different rinsing protocols have been studied: (i) the same electrolyte solution as used during multilayer formation, (ii) pure water, and (iii) first a salt solution (0.5 M) and then pure water. When the multilayer with PAH as the outermost layer was exposed to pure water, an interesting phenomenon was discovered: a large change in the energy dissipation was measured with the QCM. This could be attributed to the swelling of the layer, and from both QCM and DPI it is obvious that only the outermost PAH layer swells (to a thickness of 25-30 nm) because of a decrease in ionic strength and hence an increase in intra- and interchain repulsion, whereas the underlying layers retain a very rigid and compact structure with a low water content. Interestingly, the outermost PAH layer seems to obtain very similar thicknesses in water independent of the electrolyte used for the multilayer buildup. Another interesting aspect was that the measured thickness with the DPI evaluated by a single-layer model did not correlate with the estimated thickness from the model calculations performed on the QCM-D data. Thus, we applied a two-layer model to evaluate the DPI data and the results were in excellent agreement with the QCM-D results. To our knowledge, this evaluation of DPI data has not been done previously.
Andersen, Stephen O; Halberstadt, Marcel L; Borgford-Parnell, Nathan
2013-06-01
In 1974, Mario Molina and F. Sherwood Rowland warned that chlorofluorocarbons (CFCs) could destroy the stratospheric ozone layer that protects Earth from harmful ultraviolet radiation. In the decade after, scientists documented the buildup and long lifetime of CFCs in the atmosphere; found the proof that CFCs chemically decomposed in the stratosphere and catalyzed the depletion of ozone; quantified the adverse effects; and motivated the public and policymakers to take action. In 1987, 24 nations plus the European Community signed the Montreal Protocol. Today, 25 years after the Montreal Protocol was agreed, every United Nations state is a party (universal ratification of 196 governments); all parties are in compliance with the stringent controls; 98% of almost 100 ozone-depleting chemicals have been phased out worldwide; and the stratospheric ozone layer is on its way to recovery by 2065. A growing coalition of nations supports using the Montreal Protocol to phase down hydrofluorocarbons, which are ozone safe but potent greenhouse gases. Without rigorous science and international consensus, emissions of CFCs and related ozone-depleting substances (ODSs) could have destroyed up to two-thirds of the ozone layer by 2065, increasing the risk of causing millions of cancer cases and the potential loss of half of global agricultural production. Furthermore, because most ODSs are also greenhouse gases, CFCs and related ODSs could have had the effect of the equivalent of 24-76 gigatons per year of carbon dioxide. This critical review describes the history of the science of stratospheric ozone depletion, summarizes the evolution of control measures and compliance under the Montreal Protocol and national legislation, presents a review of six separate transformations over the last 100 years in refrigeration and air conditioning (A/C) technology, and illustrates government-industry cooperation in continually improving the environmental performance of motor vehicle A/C.
Web tools for large-scale 3D biological images and atlases
2012-01-01
Background Large-scale volumetric biomedical image data of three or more dimensions are a significant challenge for distributed browsing and visualisation. Many images now exceed 10 GB, which for most users is too large to handle in terms of computer RAM and network bandwidth. This is aggravated when users need to access tens or hundreds of such images from an archive. Here we solve the problem for 2D section views through archive data by delivering compressed tiled images, enabling users to browse through very large volume data in the context of a standard web browser. The system provides an interactive visualisation for grey-level and colour 3D images including multiple image layers and spatial-data overlay. Results The standard Internet Imaging Protocol (IIP) has been extended to enable arbitrary 2D sectioning of 3D data as well as multi-layered images and indexed overlays. The extended protocol is termed IIP3D and we have implemented a matching server to deliver the protocol and a series of Ajax/Javascript client codes that will run in an Internet browser. We have tested the server software on a low-cost Linux-based server for image volumes up to 135 GB and 64 simultaneous users. The section views are delivered with response times independent of scale and orientation. The exemplar client provided multi-layer image views with user-controlled colour filtering and overlays. Conclusions Interactive browsing of arbitrary sections through large biomedical-image volumes is made possible by use of an extended internet protocol and efficient server-based image tiling. The tools open the possibility of enabling fast access to large image archives without the requirement of whole-image download and client computers with very large memory configurations. The system was demonstrated using a range of medical and biomedical image data extending up to 135 GB for a single image volume. PMID:22676296
Access and accounting schemes of wireless broadband
NASA Astrophysics Data System (ADS)
Zhang, Jian; Huang, Benxiong; Wang, Yan; Yu, Xing
2004-04-01
In this paper, two wireless broadband access and accounting schemes are introduced. There are some differences in the client and the access router modules between them. In one scheme, the Secure Shell (SSH) protocol is used in the access system. The SSH server performs the authentication based on private key cryptography. The advantage of this scheme is the security of the user's information, and it offers sophisticated access control. In the other scheme, the Secure Sockets Layer (SSL) protocol is used in the access system. It uses public/private key technology. Nowadays, web browsers generally combine HTTP and the SSL protocol, and we use the SSL protocol to encrypt the data between the clients and the access router. The schemes are the same in the RADIUS server part. Remote Authentication Dial In User Service (RADIUS), a client/server security protocol, is becoming the standard authentication/accounting protocol for access to the Internet. It is explained in a flow chart. In our scheme, the access router serves as the client to the RADIUS server.
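The sketch below is an illustrative rendering of that access-and-accounting flow only; it mirrors RADIUS message semantics (Access-Request, Access-Accept, Accounting-Start/Stop) but is not a real RADIUS implementation, and all names are placeholders.

    # Illustrative flow: the access router authenticates a client session
    # against a RADIUS server and brackets it with accounting records.
    # Transport, encoding and the SSH/SSL access channel are omitted.
    from dataclasses import dataclass

    @dataclass
    class RadiusServer:
        users: dict                                   # username -> password

        def access_request(self, user, password):
            return "Access-Accept" if self.users.get(user) == password else "Access-Reject"

        def accounting(self, user, status, octets=0):
            print(f"[acct] {status} user={user} octets={octets}")

    def access_router_session(radius, user, password, payload_bytes):
        # Credentials would arrive over the SSH- or SSL-protected access channel.
        if radius.access_request(user, password) != "Access-Accept":
            return False
        radius.accounting(user, "Accounting-Start")
        # ... the user's data session would run here ...
        radius.accounting(user, "Accounting-Stop", octets=payload_bytes)
        return True

    srv = RadiusServer(users={"alice": "s3cret"})
    print("session ok:", access_router_session(srv, "alice", "s3cret", 10_240))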
A universal data access and protocol integration mechanism for smart home
NASA Astrophysics Data System (ADS)
Shao, Pengfei; Yang, Qi; Zhang, Xuan
2013-03-01
With the lack of standardized, or completely missing, communication interfaces in home electronics, there is no perfect solution that addresses every aspect of smart homes based on existing protocols and technologies. In addition, having the central control unit (CCU) of the smart home system work point-to-point between the multiple application interfaces and the underlying hardware interfaces leads to a complicated architecture and poor performance. A flexible data access and protocol integration mechanism is therefore required. The current paper offers a universal, comprehensive data access and protocol integration mechanism for a smart home. The universal mechanism works as a middleware adapter with unified agreements on the communication interfaces and protocols; it abstracts the application level from hardware specifics and decouples the hardware interface modules from the application level. Further abstraction of the application interfaces and the underlying hardware interfaces is carried out in an adaptation layer to provide unified interfaces for more flexible user applications and hardware protocol integration. This new universal mechanism fundamentally changes the architecture of the smart home and meets the practical requirements of smart homes in a more flexible and desirable way.
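A minimal sketch of the adapter idea follows, under assumed class and protocol names (it is not the paper's implementation): protocol-specific drivers are registered behind one unified interface, so the application layer never talks point-to-point to hardware interfaces.

    # Sketch of a middleware adaptation layer: drivers for different home
    # protocols implement one unified interface and are looked up at run time.
    class DeviceAdapter:
        """Unified interface the application layer programs against."""
        def read(self, device_id): raise NotImplementedError
        def write(self, device_id, value): raise NotImplementedError

    class ZigbeeAdapter(DeviceAdapter):                 # hypothetical driver
        def read(self, device_id): return {"device": device_id, "via": "zigbee"}
        def write(self, device_id, value): print("zigbee set", device_id, value)

    class X10Adapter(DeviceAdapter):                    # hypothetical driver
        def read(self, device_id): return {"device": device_id, "via": "x10"}
        def write(self, device_id, value): print("x10 set", device_id, value)

    class AdaptationLayer:
        def __init__(self):
            self._adapters = {}
        def register(self, protocol, adapter):
            self._adapters[protocol] = adapter
        def read(self, protocol, device_id):
            return self._adapters[protocol].read(device_id)
        def write(self, protocol, device_id, value):
            self._adapters[protocol].write(device_id, value)

    ccu = AdaptationLayer()
    ccu.register("zigbee", ZigbeeAdapter())
    ccu.register("x10", X10Adapter())
    print(ccu.read("zigbee", "lamp-1"))
    ccu.write("x10", "heater-2", "on")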
TECHNOLOGIES FOR CFC/HALON DESTRUCTION
The report presents an overview of the current status of possible technologies used to destroy chlorofluorocarbons (CFCs) and halons, chemicals implicated in the destruction of the stratospheric ozone layer. The Montreal Protocol, an international treaty to control the production a...
Availability Improvement of Layer 2 Seamless Networks Using OpenFlow
Molina, Elias; Jacob, Eduardo; Matias, Jon; Moreira, Naiara; Astarloa, Armando
2015-01-01
The network robustness and reliability are strongly influenced by the implementation of redundancy and by its ability to react to changes. In situations where packet loss or maximum latency requirements are critical, replication of resources and information may become the optimal technique. To this end, the IEC 62439-3 Parallel Redundancy Protocol (PRP) provides seamless recovery in layer 2 networks by delegating the redundancy management to the end-nodes. In this paper, we present a combination of the Software-Defined Networking (SDN) approach and PRP topologies to establish a higher level of redundancy; thereby, through several active paths provisioned via the OpenFlow protocol, global reliability is increased and data flows are managed efficiently. Hence, experiments with multiple failure scenarios, which have been run over the Mininet network emulator, show the improvement in availability and responsiveness over other traditional technologies based on a single active path. PMID:25759861
Availability improvement of layer 2 seamless networks using OpenFlow.
Molina, Elias; Jacob, Eduardo; Matias, Jon; Moreira, Naiara; Astarloa, Armando
2015-01-01
The network robustness and reliability are strongly influenced by the implementation of redundancy and by its ability to react to changes. In situations where packet loss or maximum latency requirements are critical, replication of resources and information may become the optimal technique. To this end, the IEC 62439-3 Parallel Redundancy Protocol (PRP) provides seamless recovery in layer 2 networks by delegating the redundancy management to the end-nodes. In this paper, we present a combination of the Software-Defined Networking (SDN) approach and PRP topologies to establish a higher level of redundancy; thereby, through several active paths provisioned via the OpenFlow protocol, global reliability is increased and data flows are managed efficiently. Hence, experiments with multiple failure scenarios, which have been run over the Mininet network emulator, show the improvement in availability and responsiveness over other traditional technologies based on a single active path.
Graphene Nanobubbles Produced by Water Splitting.
An, Hongjie; Tan, Beng Hau; Moo, James Guo Sheng; Liu, Sheng; Pumera, Martin; Ohl, Claus-Dieter
2017-05-10
Graphene nanobubbles are of significant interest due to their ability to trap mesoscopic volumes of gas for various applications in nanoscale engineering. However, conventional protocols to produce such bubbles are relatively elaborate and require specialized equipment to subject graphite samples to high temperatures or pressures. Here, we demonstrate the formation of graphene nanobubbles between layers of highly oriented pyrolytic graphite (HOPG) with electrolysis. Although this process can also lead to the formation of gaseous surface nanobubbles on top of the substrate, the two types of bubbles can easily be distinguished using atomic force microscopy. We estimated the Young's modulus, internal pressure, and the thickness of the top membrane of the graphene nanobubbles. The hydrogen storage capacity can reach ∼5 wt % for a graphene nanobubble with a membrane that is four layers thick. The simplicity of our protocol paves the way for such graphitic nanobubbles to be utilized for energy storage and industrial applications on a wide scale.
Abrasion of Candidate Spacesuit Fabrics by Simulated Lunar Dust
NASA Technical Reports Server (NTRS)
Gaier, James R.; Meador, Mary Ann; Rogers, Kerry J.; Sheehy, Brennan H.
2009-01-01
A protocol has been developed that produced the type of lunar soil abrasion damage observed on Apollo spacesuits. This protocol was then applied to four materials (Kevlar (DuPont), Vectran (Kuraray Co., Ltd.), Orthofabric, and Tyvek (DuPont)) that are candidates for advanced spacesuits. Three of the four new candidate fabrics (all but Vectran) were effective at keeping the dust from penetrating to layers beneath. In the cases of Kevlar and Orthofabric this was accomplished by the addition of a silicone layer. In the case of Tyvek, the paper structure was dense enough to block dust transport. The least abrasive damage was suffered by the Tyvek. This was thought to be due in large part to its non-woven paper structure. The woven structures were all abraded where the top of the weave was struck by the abrasive. Of these, the Orthofabric suffered the least wear, with both Vectran and Kevlar suffering considerably more extensive filament breakage.
A Model of In vitro Plasticity at the Parallel Fiber—Molecular Layer Interneuron Synapses
Lennon, William; Yamazaki, Tadashi; Hecht-Nielsen, Robert
2015-01-01
Theoretical and computational models of the cerebellum typically focus on the role of parallel fiber (PF)—Purkinje cell (PKJ) synapses for learned behavior, but few emphasize the role of the molecular layer interneurons (MLIs)—the stellate and basket cells. A number of recent experimental results suggest the role of MLIs is more important than previous models put forth. We investigate learning at PF—MLI synapses and propose a mathematical model to describe plasticity at this synapse. We perform computer simulations with this form of learning using a spiking neuron model of the MLI and show that it reproduces six in vitro experimental results in addition to simulating four novel protocols. Further, we show how this plasticity model can predict the results of other experimental protocols that are not simulated. Finally, we hypothesize what the biological mechanisms are for changes in synaptic efficacy that embody the phenomenological model proposed here. PMID:26733856
Mian, Adnan Noor; Fatima, Mehwish; Khan, Raees; Prakash, Ravi
2014-01-01
Energy efficiency is an important design paradigm in Wireless Sensor Networks (WSNs), and energy consumption in dynamic environments is even more critical. Duty cycling of sensor nodes is used to address the energy consumption problem. However, along with advantages, duty-cycle-aware networks introduce some complexities like synchronization and latency. Due to their inherent characteristics, many traditional routing protocols show low performance in densely deployed WSNs with duty cycle awareness, when sensor nodes are supposed to have high mobility. In this paper we first present a three-message-exchange Lightweight Random Walk Routing (LRWR) protocol and then evaluate its performance in WSNs for routing low-data-rate packets. Through NS-2 based simulations, we examine the LRWR protocol by comparing it with DYMO, a widely used WSN protocol, in both static and dynamic environments with varying duty cycles, assuming the standard IEEE 802.15.4 in the lower layers. Results for the three metrics, that is, reliability, end-to-end delay, and energy consumption, show that the LRWR protocol outperforms DYMO in scalability, mobility, and robustness, making this protocol a suitable choice in low-duty-cycle and dense WSNs.
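The toy sketch below illustrates the general random-walk forwarding idea behind protocols of this family, not the authors' three-message exchange: each hop forwards the packet to a randomly chosen neighbour that is currently awake in its duty cycle. The duty-cycle model and topology are illustrative.

    # Toy sketch of random-walk forwarding over duty-cycled nodes.
    import random

    def awake(node, now, duty_cycle=0.5, period=1.0):
        """Simplistic duty-cycle model: node is awake for the first fraction of each period."""
        phase = (now + hash(node) % 100 / 100.0) % period
        return phase < duty_cycle * period

    def random_walk_route(topology, src, sink, now=0.0, max_hops=50):
        """topology: dict mapping node -> list of neighbour nodes."""
        path, current = [src], src
        for hop in range(max_hops):
            if current == sink:
                return path
            candidates = [n for n in topology[current] if awake(n, now + hop * 0.1)]
            if not candidates:
                continue                   # wait a step for a neighbour to wake up
            current = random.choice(candidates)
            path.append(current)
        return None                        # routing failed within the hop budget

    topo = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
            "D": ["B", "C", "S"], "S": ["D"]}
    print(random_walk_route(topo, "A", "S"))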
A Robust and Energy-Efficient Transport Protocol for Cognitive Radio Sensor Networks
Salim, Shelly; Moh, Sangman
2014-01-01
A cognitive radio sensor network (CRSN) is a wireless sensor network in which sensor nodes are equipped with cognitive radio. CRSNs benefit from cognitive radio capabilities such as dynamic spectrum access and transmission parameters reconfigurability; but cognitive radio also brings additional challenges and leads to higher energy consumption. Motivated to improve the energy efficiency in CRSNs, we propose a robust and energy-efficient transport protocol (RETP). The novelties of RETP are two-fold: (I) it combines distributed channel sensing and channel decision with centralized schedule-based data transmission; and (II) it differentiates the types of data transmission on the basis of data content and adopts different acknowledgment methods for different transmission types. To the best of our knowledge, no transport layer protocols have yet been designed for CRSNs. Simulation results show that the proposed protocol achieves remarkably longer network lifetime and shorter event-detection delay compared to those achieved with a conventional transport protocol, while simultaneously preserving event-detection reliability. PMID:25333288
Weber, Theresa; Bechthold, Maren; Winkler, Tobias; Dauselt, John; Terfort, Andreas
2013-11-01
Direct grafting of hyperbranched polyglycerol (PG) layers onto the oxide surfaces of steel, aluminum, and silicon has been achieved through surface-initiated polymerization of 2-hydroxymethyloxirane (glycidol). Optimization of the deposition conditions led to a protocol that employed N-methyl-2-pyrrolidone (NMP) as the solvent and temperatures of 100 and 140 °C, depending on the substrate material. In all cases, a linear growth of the PG layers could be attained, which allows for control of film thickness by altering the reaction time. At layer thicknesses >5 nm, the PG layers completely suppressed the adhesion of albumin, fibrinogen, and globulin. These layers were also at least 90% bio-repulsive for two bacteria strains, E. coli and Acinetobacter baylyi, with further improvement being observed when the PG film thickness was increased to 17 nm (up to 99.9% bio-repulsivity on silicon). Copyright © 2013 Elsevier B.V. All rights reserved.
Prado, Maíra; Simão, Renata Antoun; Gomes, Brenda Paula Figueiredo de Almeida
2014-06-01
The development and maintenance of the sealing of the root canal system is the key to the success of root canal treatment. The resin-based adhesive material has the potential to reduce the microleakage of the root canal because of its adhesive properties and penetration into dentinal walls. Moreover, the irrigation protocols may have an influence on the adhesiveness of resin-based sealers to root dentin. The objective of the present study was to evaluate the effect of different irrigant protocols on coronal bacterial microleakage of gutta-percha/AH Plus and Resilon/Real Seal Self-etch systems. One hundred ninety pre-molars were used. The teeth were divided into 18 experimental groups according to the irrigation protocols and filling materials used. The protocols used were: distilled water; sodium hypochlorite (NaOCl)+EDTA; NaOCl+H3PO4; NaOCl+EDTA+chlorhexidine (CHX); NaOCl+H3PO4+CHX; CHX+EDTA; CHX+H3PO4; CHX+EDTA+CHX and CHX+H3PO4+CHX. Gutta-percha/AH Plus or Resilon/Real Seal SE were used as root-filling materials. The coronal microleakage was evaluated for 90 days against Enterococcus faecalis. Data were statistically analyzed using the Kaplan-Meier survival test, Kruskal-Wallis and Mann-Whitney tests. No significant difference was verified in the groups using chlorhexidine or sodium hypochlorite during the chemo-mechanical preparation followed by EDTA or phosphoric acid for smear layer removal. The same results were found for filling materials. However, the statistical analyses revealed that a final flush with 2% chlorhexidine reduced coronal microleakage significantly. A final flush with 2% chlorhexidine after smear layer removal reduces coronal microleakage of teeth filled with gutta-percha/AH Plus or Resilon/Real Seal SE.
A data transmission method for particle physics experiments based on Ethernet physical layer
NASA Astrophysics Data System (ADS)
Huang, Xi-Ru; Cao, Ping; Zheng, Jia-Jun
2015-11-01
Due to its advantages of universality, flexibility and high performance, fast Ethernet is widely used in readout system design for modern particle physics experiments. However, Ethernet is usually used together with the TCP/IP protocol stack, which makes it difficult to implement readout systems because designers have to use an operating system to process this protocol. Furthermore, TCP/IP degrades the transmission efficiency and real-time performance. To maximize the performance of Ethernet in physics experiment applications, a data readout method based on the physical layer (PHY) is proposed. In this method, TCP/IP is replaced with a customized and simple protocol, which makes it easier to implement. On each readout module, data from the front-end electronics is first fed into an FPGA for protocol processing and then sent out to a PHY chip controlled by this FPGA for transmission. This kind of data path is fully implemented in hardware. On the side of the data acquisition system (DAQ), however, the absence of a standard protocol causes problems for network-related applications. To solve this problem, in the operating system kernel space, data received by the network interface card is redirected from the traditional flow to a specified memory space by a customized program. This memory space can easily be accessed by applications in user space. For the purpose of verification, a prototype system has been designed and implemented. Preliminary test results show that this method can meet the requirements of data transmission from the readout module to the DAQ in an efficient and simple manner. Supported by National Natural Science Foundation of China (11005107) and Independent Projects of State Key Laboratory of Particle Detection and Electronics (201301)
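For orientation, the Linux-only sketch below reads raw Ethernet frames for a custom EtherType straight off the interface, bypassing TCP/IP entirely. It is a user-space analogue of the kernel-space redirection the paper describes, and the EtherType value, interface name, and 4-byte sequence-number header are assumptions for illustration.

    # Linux-only sketch (requires root): receive raw Ethernet frames carrying a
    # custom, hypothetical readout protocol identified by a local-experimental
    # EtherType, without any TCP/IP processing.
    import socket
    import struct

    ETH_P_CUSTOM = 0x88B5                   # EtherType range reserved for local experiments

    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_CUSTOM))
    sock.bind(("eth0", 0))                  # interface name is a placeholder

    while True:
        frame = sock.recv(2048)
        dst, src = frame[0:6], frame[6:12]
        ethertype = struct.unpack("!H", frame[12:14])[0]
        payload = frame[14:]
        # Assumed readout-module header: 4-byte big-endian event sequence number.
        (seq,) = struct.unpack("!I", payload[:4])
        print(f"event {seq} (type 0x{ethertype:04x}): "
              f"{len(payload) - 4} data bytes from {src.hex(':')}")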
An implementation of the SNR high speed network communication protocol (Receiver part)
NASA Astrophysics Data System (ADS)
Wan, Wen-Jyh
1995-03-01
This thesis work implements the receiver part of the SNR high speed network transport protocol. The approach was to use the Systems of Communicating Machines (SCM) as the formal definition of the protocol. Programs were developed on top of the Unix system using the C programming language. The Unix system features adopted for this implementation were multitasking, signals, shared memory, semaphores, sockets, timers and process control. The problems encountered, and solved, were signal loss, shared memory conflicts, process synchronization, scheduling, data alignment and errors in the SCM specification itself. The result was a correctly functioning program which implemented the SNR protocol. The system was tested using different connection modes, lost packets, duplicate packets and large data transfers. The contributions of this thesis are: (1) implementation of the receiver part of the SNR high speed transport protocol; (2) testing and integration with the transmitter part of the SNR transport protocol on an FDDI data link layered network; (3) demonstration of the functions of the SNR transport protocol such as connection management, sequenced delivery, flow control and error recovery using selective repeat methods of retransmission; and (4) modifications to the SNR transport protocol specification, such as corrections for incorrect predicate conditions, definitions of additional packet type formats, and solutions for signal loss and process contention problems.
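The sketch below illustrates the generic selective-repeat receive logic that such a receiver performs (the thesis implementation is in C against the SCM specification; this is only an illustration with an assumed window size and packet format): out-of-order packets inside the window are buffered, in-order packets are delivered, and only the missing sequence numbers are requested for retransmission.

    # Generic selective-repeat receiver logic (illustrative, not the SNR code).
    class SelectiveRepeatReceiver:
        def __init__(self, window=8):
            self.window = window
            self.expected = 0                 # next in-order sequence number
            self.buffer = {}                  # out-of-order packets awaiting delivery
            self.delivered = []

        def receive(self, seq, data):
            if not (self.expected <= seq < self.expected + self.window):
                return                        # outside window: drop (or re-acknowledge)
            self.buffer[seq] = data
            while self.expected in self.buffer:          # deliver any in-order run
                self.delivered.append(self.buffer.pop(self.expected))
                self.expected += 1

        def missing(self):
            """Sequence numbers to request via selective retransmission."""
            highest = max(self.buffer, default=self.expected - 1)
            return [s for s in range(self.expected, highest + 1) if s not in self.buffer]

    rx = SelectiveRepeatReceiver()
    for seq, data in [(0, "p0"), (2, "p2"), (3, "p3")]:   # packet 1 was lost
        rx.receive(seq, data)
    print(rx.delivered, "missing:", rx.missing())          # ['p0'] missing: [1]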
2012-01-01
Background Variability among stallions in terms of semen cryopreservation quality renders it difficult to arrive at a standardized cryopreservation method. Different extenders and processing techniques (such as colloidal centrifugation) are used in order to optimize post-thaw sperm quality. Sperm chromatin integrity analysis is an effective tool for assessing such quality. The aim of the present study was to compare the effect of two single layer colloidal centrifugation protocols (prior to cryopreservation) in combination with three commercial freezing extenders on the post-thaw chromatin integrity of equine sperm samples at different post-thaw incubation (37°C) times (i.e., their DNA fragmentation dynamics). Results Post-thaw DNA fragmentation levels in semen samples subjected to either of the colloidal centrifugation protocols were significantly lower (p<0.05) immediately after thawing and after 4 h of incubation at 37°C compared to samples that underwent standard (control) centrifugation. The use of InraFreeze® extender was associated with significantly less DNA fragmentation than the use of Botu-Crio® extender at 6 h of incubation, and than the use of either Botu-Crio® or Gent® extender at 24 h of incubation (p<0.05). Conclusions These results suggest that single layer colloidal centrifugation performed with extended or raw semen prior to cryopreservation reduces DNA fragmentation during the first four hours after thawing. Further studies are needed to determine the influence of freezing extenders on equine sperm DNA fragmentation dynamics. PMID:23217215
A Dedicated Computational Platform for Cellular Monte Carlo T-CAD Software Tools
2015-07-14
computer that establishes an encrypted Virtual Private Network (OpenVPN [44]) based on the Secure Socket Layer (SSL) paradigm. Each user is given a...security certificate for each device used to connect to the computing nodes. Stable OpenVPN clients are available for Linux, Microsoft Windows, Apple OSX...platform is granted by an encrypted connection based on the Secure Socket Layer (SSL) protocol, and implemented in the OpenVPN Virtual Private Network
Perez-Peña, Fernando; Morgado-Estevez, Arturo; Linares-Barranco, Alejandro; Jimenez-Fernandez, Angel; Gomez-Rodriguez, Francisco; Jimenez-Moreno, Gabriel; Lopez-Coronado, Juan
2013-01-01
In this paper we present a complete spike-based architecture: from a Dynamic Vision Sensor (retina) to a stereo head robotic platform. The aim of this research is to reproduce intended movements performed by humans, taking into account as many features as possible from the biological point of view. This paper fills the gap between current spike silicon sensors and robotic actuators by applying a spike processing strategy to the data flows in real time. The architecture is divided into layers: the retina; visual information processing; the trajectory generator layer, which uses a neuroinspired algorithm (SVITE) that can be replicated as many times as the robot has DoF; and finally the actuation layer to supply the spikes to the robot (using PFM). All the layers do their tasks in a spike-processing mode, and they communicate with each other through the neuro-inspired AER protocol. The open-loop controller is implemented on an FPGA using AER interfaces developed by RTC Lab. Experimental results reveal the viability of this spike-based controller. Two main advantages are low hardware resources (2% of a Xilinx Spartan 6) and low power requirements (3.4 W) to control a robot with a high number of DoF (up to 100 for a Xilinx Spartan 6). It also demonstrates the suitability of AER as a communication protocol between processing and actuation. PMID:24264330
Perez-Peña, Fernando; Morgado-Estevez, Arturo; Linares-Barranco, Alejandro; Jimenez-Fernandez, Angel; Gomez-Rodriguez, Francisco; Jimenez-Moreno, Gabriel; Lopez-Coronado, Juan
2013-11-20
In this paper we present a complete spike-based architecture: from a Dynamic Vision Sensor (retina) to a stereo head robotic platform. The aim of this research is to reproduce intended movements performed by humans, taking into account as many features as possible from the biological point of view. This paper fills the gap between current spike silicon sensors and robotic actuators by applying a spike processing strategy to the data flows in real time. The architecture is divided into layers: the retina; visual information processing; the trajectory generator layer, which uses a neuroinspired algorithm (SVITE) that can be replicated as many times as the robot has DoF; and finally the actuation layer to supply the spikes to the robot (using PFM). All the layers do their tasks in a spike-processing mode, and they communicate with each other through the neuro-inspired AER protocol. The open-loop controller is implemented on an FPGA using AER interfaces developed by RTC Lab. Experimental results reveal the viability of this spike-based controller. Two main advantages are low hardware resources (2% of a Xilinx Spartan 6) and low power requirements (3.4 W) to control a robot with a high number of DoF (up to 100 for a Xilinx Spartan 6). It also demonstrates the suitability of AER as a communication protocol between processing and actuation.
Cheetah: A Framework for Scalable Hierarchical Collective Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graham, Richard L; Gorentla Venkata, Manjunath; Ladd, Joshua S
2011-01-01
Collective communication operations, used by many scientific applications, tend to limit overall parallel application performance and scalability. Computer systems are becoming more heterogeneous with increasing node and core-per-node counts. Also, a growing number of data-access mechanisms, of varying characteristics, are supported within a single computer system. We describe a new hierarchical collective communication framework that takes advantage of hardware-specific data-access mechanisms. It is flexible, with run-time hierarchy specification and sharing of collective communication primitives between collective algorithms. Data buffers are shared between levels in the hierarchy, reducing collective communication management overhead. We have implemented several versions of the Message Passing Interface (MPI) collective operations, MPI Barrier() and MPI Bcast(), and run experiments using up to 49,152 processes on a Cray XT5 and a small InfiniBand-based cluster. At 49,152 processes our barrier implementation outperforms the optimized native implementation by 75%. 32-byte and one-megabyte broadcasts outperform it by 62% and 11%, respectively, with better scalability characteristics. Improvements relative to the default Open MPI implementation are much larger.
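For reference, the minimal mpi4py example below exercises the two collectives benchmarked above; it simply calls MPI_Barrier and MPI_Bcast through whatever implementation the linked MPI library provides, and does not touch Cheetah's internal framework API.

    # Minimal mpi4py example of the two collectives discussed above.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    comm.Barrier()                            # synchronize all ranks

    buf = np.arange(8, dtype=np.int64) if rank == 0 else np.empty(8, dtype=np.int64)
    comm.Bcast(buf, root=0)                   # larger buffers would mimic the 1 MB runs

    t0 = MPI.Wtime()
    for _ in range(100):
        comm.Bcast(buf, root=0)
    comm.Barrier()
    if rank == 0:
        print("avg Bcast time: %.3e s" % ((MPI.Wtime() - t0) / 100))

Such a script would be launched with, for example, mpirun -n 4 python bcast_demo.py.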
Online & Offline data storage and data processing at the European XFEL facility
NASA Astrophysics Data System (ADS)
Gasthuber, Martin; Dietrich, Stefan; Malka, Janusz; Kuhn, Manuela; Ensslin, Uwe; Wrona, Krzysztof; Szuba, Janusz
2017-10-01
For the upcoming experiments at the European XFEL light source facility, a new online and offline data processing and storage infrastructure is currently being built and verified. Based on the experience with the system developed for the Petra III light source at DESY, presented at the last CHEP conference, we further develop the system to cope with much higher volumes and rates (50 GB/sec) together with more complex data analysis and infrastructure conditions (i.e., long-range InfiniBand connections). This work will be carried out in collaboration between DESY/IT and European XFEL, with technology support from IBM Research. This presentation will briefly wrap up the experience from one year of running the Petra III system [3], continue with a short description of the challenges for the European XFEL experiments [2], and, in the main section, present the proposed system for online and offline processing with initial results from the real implementation (HW & SW). This will cover the selected cluster filesystem GPFS [5], including Quality of Service (QoS), extensive use of flash-based subsystems, and other new and unique features this architecture will benefit from.
The Spider Center Wide File System; From Concept to Reality
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shipman, Galen M; Dillow, David A; Oral, H Sarp
2009-01-01
The Leadership Computing Facility (LCF) at Oak Ridge National Laboratory (ORNL) has a diverse portfolio of computational resources ranging from a petascale XT4/XT5 simulation system (Jaguar) to numerous other systems supporting development, visualization, and data analytics. In order to support the vastly different I/O needs of these systems, Spider, a Lustre-based center-wide file system, was designed and deployed to provide over 240 GB/s of aggregate throughput with over 10 petabytes of formatted capacity. A multi-stage InfiniBand network, dubbed the Scalable I/O Network (SION), with over 889 GB/s of bisection bandwidth, was deployed as part of Spider to provide connectivity to our simulation, development, visualization, and other platforms. To our knowledge, at the time of writing, Spider is the largest and fastest POSIX-compliant parallel file system in production. This paper details the overall architecture of the Spider system, challenges in deploying and initial testing of a file system of this scale, and novel solutions to these challenges which offer key insights into file system design in the future.
Announcing Supercomputer Summit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wells, Jack; Bland, Buddy; Nichols, Jeff
Summit is the next leap in leadership-class computing systems for open science. With Summit we will be able to address, with greater complexity and higher fidelity, questions concerning who we are, our place on Earth, and our place in the universe. Summit will deliver more than five times the computational performance of Titan's 18,688 nodes, using only approximately 3,400 nodes when it arrives in 2017. Like Titan, Summit will have a hybrid architecture, and each node will contain multiple IBM POWER9 CPUs and NVIDIA Volta GPUs all connected together with NVIDIA's high-speed NVLink. Each node will have over half a terabyte of coherent memory (high bandwidth memory + DDR4) addressable by all CPUs and GPUs, plus 800 GB of non-volatile RAM that can be used as a burst buffer or as extended memory. To provide a high rate of I/O throughput, the nodes will be connected in a non-blocking fat tree using a dual-rail Mellanox EDR InfiniBand interconnect. Upon completion, Summit will allow researchers in all fields of science unprecedented access to solving some of the world's most pressing challenges.
NASA Astrophysics Data System (ADS)
Newman, Gregory A.; Commer, Michael
2009-07-01
Three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. The imaging technology employs controlled source electromagnetic (CSEM) and magnetotelluric (MT) fields and treats geological media exhibiting transverse anisotropy. Moreover, when combined with established seismic methods, direct imaging of reservoir fluids is possible. Because of the size of the 3D conductivity imaging problem, strategies exploiting computational parallelism and optimal meshing are required. The algorithm thus developed has been shown to scale to tens of thousands of processors. In one imaging experiment, 32,768 tasks/processors on the IBM Watson Research Blue Gene/L supercomputer were successfully utilized. Over a 24-hour period we were able to image a large-scale field data set that had previously required over four months of processing time on distributed clusters based on Intel or AMD processors utilizing 1024 tasks on an InfiniBand fabric. Electrical conductivity imaging using massively parallel computational resources produces results that cannot be obtained otherwise and is consistent with the timeframes required for practical exploration problems.
Extending the length and time scales of Gram–Schmidt Lyapunov vector computations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, Anthony B., E-mail: acosta@northwestern.edu; Green, Jason R., E-mail: jason.green@umb.edu; Department of Chemistry, University of Massachusetts Boston, Boston, MA 02125
Lyapunov vectors have found growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram–Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² with the particle count N. This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram–Schmidt vectors. The first is a distributed-memory message-passing method using ScaLAPACK. The second uses the newly released MAGMA library for GPUs. We compare the performance of both codes for Lennard–Jones fluids from N=100 to 1300 between Intel Nehalem/InfiniBand DDR and NVIDIA C2050 architectures. To the best of our knowledge, these are the largest systems for which the Gram–Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
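As context for the algorithm mentioned above, the sketch below shows the standard QR-based (Benettin-style) reorthonormalization at the heart of Gram–Schmidt Lyapunov vector computations, applied to a simple two-dimensional map. It is a minimal NumPy illustration only; it is not the ScaLAPACK or MAGMA code described in the abstract, and the Hénon-map example and function names are assumptions for demonstration.

# Minimal NumPy sketch of QR-based Gram-Schmidt Lyapunov vector computation
# (Benettin-style reorthonormalization); illustrative only, not the
# ScaLAPACK/MAGMA implementations described in the abstract.
import numpy as np

def lyapunov_spectrum(step, tangent, x0, n_steps):
    """Estimate all Lyapunov exponents of a discrete map.
    step(x)    -> next state
    tangent(x) -> Jacobian of the map at x
    The tangent basis is re-orthonormalized with QR (Gram-Schmidt) each step,
    and the exponents accumulate from the log of the diagonal of R.
    """
    dim = x0.size
    Q = np.eye(dim)                 # orthonormal Gram-Schmidt vectors
    sums = np.zeros(dim)
    x = x0.copy()
    for _ in range(n_steps):
        Q, R = np.linalg.qr(tangent(x) @ Q)
        sums += np.log(np.abs(np.diag(R)))
        x = step(x)
    return sums / n_steps

# Example: Henon map (a=1.4, b=0.3); exponents are roughly +0.42 and -1.62.
a, b = 1.4, 0.3
henon = lambda x: np.array([1.0 - a * x[0]**2 + x[1], b * x[0]])
jac   = lambda x: np.array([[-2.0 * a * x[0], 1.0], [b, 0.0]])
print(lyapunov_spectrum(henon, jac, np.array([0.1, 0.1]), 100000))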
A mobile information management system used in textile enterprises
NASA Astrophysics Data System (ADS)
Huang, C.-R.; Yu, W.-D.
2008-02-01
The mobile information management system (MIMS) for textile enterprises is based on Microsoft Visual Studio .NET 2003 Server, Microsoft SQL Server 2000, the C++ language, and wireless application protocol (WAP) and wireless markup language (WML) technology. The portable MIMS is composed of a three-layer structure, i.e. a presentation (showing) layer, an operating layer, and a data-access layer, corresponding to the port-link module, the processing module, and the database module. By using the MIMS, information exchange becomes more convenient and easier, and the system reconciles a large information capacity with the constraints of a small cell phone while allowing functional expansion in operation and design by means of built-in units. The developed MIMS is suitable for use in textile enterprises.
In-Space Networking on NASA's SCAN Testbed
NASA Technical Reports Server (NTRS)
Brooks, David E.; Eddy, Wesley M.; Clark, Gilbert J.; Johnson, Sandra K.
2016-01-01
The NASA Space Communications and Navigation (SCaN) Testbed, an external payload onboard the International Space Station, is equipped with three software-defined radios and a flight computer for supporting in-space communication research. New technologies being studied using the SCaN Testbed include advanced networking, coding, and modulation protocols designed to support the transition of NASA's mission systems from primarily point-to-point data links and preplanned routes towards the adaptive, autonomous internetworked operations needed to meet future mission objectives. Networking protocols implemented on the SCaN Testbed include the Advanced Orbiting Systems (AOS) link-layer protocol, Consultative Committee for Space Data Systems (CCSDS) Encapsulation Packets, Internet Protocol (IP), Space Link Extension (SLE), CCSDS File Delivery Protocol (CFDP), and Delay-Tolerant Networking (DTN) protocols including the Bundle Protocol (BP) and Licklider Transmission Protocol (LTP). The SCaN Testbed end-to-end system provides three S-band data links and one Ka-band data link to exchange space and ground data through NASA's Tracking and Data Relay Satellite System or a direct-to-ground link to ground stations. The multiple data links and nodes provide several upgradable elements on both the space and ground systems. This paper will provide a general description of the testbed's system design and capabilities, discuss in detail the design and lessons learned in the implementation of the network protocols, and describe future plans for continuing research to meet the communication needs of evolving global space systems.
Swaine, Jillian M; Moe, Andrew; Breidahl, William; Bader, Daniel L; Oomens, Cees W J; Lester, Leanne; O'Loughlin, Edmond; Santamaria, Nick; Stacey, Michael C
2018-02-01
High strain in soft tissues that overlie bony prominences is considered a risk factor for pressure ulcers (PUs) following spinal cord impairment (SCI) and has been computed using finite element methods (FEM). The aim of this study was to translate an MRI protocol into ultrasound (US) and determine the between-operator reliability of expert sonographers measuring the diameter of the inferior curvature of the ischial tuberosity (IT) and the thickness of the overlying soft tissue layers in able-bodied (AB) and SCI participants using real-time ultrasound. Part 1: Fourteen AB participants (mean age 36.7 ± 12.09 years; 7 males and 7 females) had their 3 soft tissue layers in loaded and unloaded sitting measured independently by 2 sonographers: tendon/muscle, skin/fat and total soft tissue, together with the diameter of the IT in its short and long axis. Part 2: Nineteen participants with SCI were screened; three were excluded due to abnormal skin signs, and eight participants (42%) were excluded for abnormal US signs with normal skin. Eight SCI participants (mean age 31.6 ± 13.6 years; all male, 4 paraplegic and 4 tetraplegic) were measured by the same sonographers for skin, fat, tendon, muscle and total thickness. Skin/fat and tendon/muscle were computed. AB between-operator reliability was good (ICC = 0.81-0.90) for the 3 soft tissue layers in unloaded and loaded sitting and poor for both IT short and long axis (ICC = -0.028 and -0.01). SCI between-operator reliability was good in unloaded and loaded sitting for total, muscle, fat, skin/fat and tendon/muscle (ICC = 0.75-0.97) and poor for tendon (ICC = 0.26 unloaded and ICC = -0.71 loaded) and skin (ICC = 0.37 unloaded and ICC = 0.10 loaded). An MRI protocol was successfully adapted into a reliable 3-soft-tissue-layer model that could be used in a 2-D FEM model designed to estimate soft tissue strain as a novel risk factor for the development of a PU. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Motwani, Manoj
2017-01-01
Purpose: To demonstrate how using the WaveLight® Contoura measured astigmatism and axis eliminates corneal astigmatism and creates uniformly shaped corneas. Patients and methods: A retrospective analysis was conducted of the first 50 eyes to have bilateral full WaveLight® Contoura LASIK correction of measured astigmatism and axis (vs conventional manifest refraction), using the Layer Yolked Reduction of Astigmatism Protocol in all cases. All patients had astigmatism corrected and had at least 1 week of follow-up. Accuracy relative to the desired refractive goal was assessed by postoperative refraction and aberration reduction via calculation of polynomials, and postoperative vision was analyzed as a secondary goal. Results: The average difference in astigmatic power from manifest to measured was 0.5462 D (range 0–1.69 D), and the average difference in axis was 14.94° (range 0°–89°). Forty-seven of 50 eyes had a goal of plano; 3 had a monovision goal. Astigmatism was fully eliminated from all but 2 eyes, and 1 eye had regression with astigmatism. Of the eyes with plano as the goal, 80.85% were 20/15 or better, and 100% were 20/20 or better. Polynomial analysis postoperatively showed that at 6.5 mm, the average C3 was reduced by 86.5% and the average C5 by 85.14%. Conclusions: Using WaveLight® Contoura measured astigmatism and axis removes higher-order aberrations and allows the creation of a more uniform cornea with accurate removal of astigmatism and reduction of aberration polynomials. WaveLight® Contoura successfully links the refractive correction layer and the aberration repair layer using the Layer Yolked Reduction of Astigmatism Protocol to demonstrate how aberration removal can affect refractive correction. PMID:28553071
Munasinghe, M; King, K
1992-06-01
Stratospheric ozone layer depletion has been recognized as a problem by the Vienna Convention for the Protection of the Ozone Layer and the 1987 Montreal Protocol (MP). The ozone layer shields the earth from harmful ultraviolet radiation (UV-B); the depletion is more pronounced at the poles and around the equator. Industrialized countries have contributed significantly to the problem by releasing chlorofluorocarbons (CFCs) and halons into the atmosphere. The ozone-depleting effect of these chemicals, which were known for their inertness, nonflammability, and nontoxicity, was discovered in 1974. Action to deal with the effects of CFCs and halons was initiated in 1985 at a 49-nation UN meeting. 21 nations signed a protocol limiting ozone-depleting substances (ODSs): CFCs and halons. Schedules were set based on each country's use in 1986; the target phaseout was set for the year 2000. The MP restricts trade in ODSs and weights the impact of substances to reflect the extent of damage; e.g., halons are 10 times more damaging than CFCs. ODS requirements for developing countries were eased to accommodate scarce resources and their small fraction of ODS emissions. An Interim Multilateral Fund under the Montreal Protocol (IMFMP) was established to provide loans to finance the costs incurred by developing countries in meeting global environmental requirements. The IMFMP is administered by the World Bank, the UN Environment Programme, and the UN Development Programme. Financing is available to eligible countries that use less than 0.3 kg of ODS/person/year. Rapid phaseout in developed countries has occurred due to strong support from industry and lower-than-expected costs. Although there are clear advantages to rapid phaseout, no incentives for rapid phaseout were included in the MP. Some of the difficulties occur because the schedules set minimum targets at the lowest possible cost. Also, costs cannot be minimized by a country-specific and ODS-specific process. Ways to improve implementation in scheduling and incremental costs are indicated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myers, Tanya L.; Tonkyn, Russell G.; Danby, Tyler O.
We present accurate measurements for the determination of the optical constants for a series of organic liquids, including organophosphorous compounds. Bulk liquids are rarely encountered in the environment, but more commonly are present as droplets of liquid or thin layers on various substrates. Providing reference spectra to account for the plethora of morphological conditions that may be encountered under different scenarios is a challenge. An alternative approach is to provide the complex optical constants, n and k, which can be used to model the optical phenomena in media and at interfaces, minimizing the need for a vast number of laboratory measurements. In this work, we present improved protocols for measuring the optical constants for a series of liquids that span the range from 7800 to 400 cm⁻¹. The broad spectral range means that one needs to account for both the strong and weak spectral features that are encountered, all of which can be useful for detection, depending on the scenario. To span this dynamic range, both long and short cells are required for accurate measurements. The protocols are presented along with experimental and modeling results for thin layers of silicone oil on aluminum.
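To illustrate how such optical constants feed into the modeling mentioned above, the sketch below evaluates the standard Airy (single-film) formula for the normal-incidence reflectance of a thin absorbing layer on a metal substrate. The film and substrate indices and the layer thickness are rough assumptions for demonstration only; they are not the measured constants reported in this work.

# Hedged sketch: using complex optical constants (n + ik) to model the
# normal-incidence reflectance of a thin liquid layer on a metal substrate,
# e.g. an oil film on aluminum.  All numerical values below are assumptions
# for illustration, not data from the measurements described above.
import numpy as np

def film_reflectance(n_film, d_nm, n_substrate, wavelength_nm, n_ambient=1.0):
    """Airy formula for a single absorbing film between ambient and substrate."""
    r01 = (n_ambient - n_film) / (n_ambient + n_film)        # ambient/film
    r12 = (n_film - n_substrate) / (n_film + n_substrate)    # film/substrate
    beta = 2.0 * np.pi * n_film * d_nm / wavelength_nm       # complex phase thickness
    r = (r01 + r12 * np.exp(2j * beta)) / (1.0 + r01 * r12 * np.exp(2j * beta))
    return np.abs(r) ** 2

# Assumed values: film n = 1.40 + 0.01i, metal substrate n = 10 + 60i, 2 um film, 10 um light.
print(film_reflectance(1.40 + 0.01j, d_nm=2000.0, n_substrate=10 + 60j,
                       wavelength_nm=10000.0))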
Branton, S L; Leigh, S A; Purswell, J L; Evans, J D; Collier, S D; Olanrewaju, H A; Pharr, G T
2010-09-01
Vaccination of multi-age layer operations, wherein one million plus commercial layer chickens are housed, has been spurious until the development of a self-propelled, constant-speed spray vaccinator. Still, even with its use, live Mycoplasma gallisepticum (MG) vaccinations have been questionable in terms of seroconversion. Using the vaccinator as a research tool over the past 5 yr, factors have been elucidated which impact seroconversion to one live MG vaccine in particular, the F strain of MG (FMG). These factors include the type of nozzle used to spray the vaccine, the temperature of the water used to rehydrate and administer the vaccine, and the pH and osmolarity of the fluid used to apply the vaccine. In the present study, one farm was monitored for its seroconversion rates over 4 1/2 yr, during which time the FMG vaccination protocol was amended as factors were identified that enhanced seroconversion rates. The results of this study showed that implementation and inclusion of the optimized factors into the vaccination protocol for FMG enhanced seroconversion rates because they went from an initial 50%-55% positive seroconversion rate to a consistent 100% positive seroconversion rate over the 56-mo study period.
The Intersystem - Internetworking for space systems
NASA Astrophysics Data System (ADS)
Landauer, C.
This paper is a description of the Intersystem, which is a mechanism for internetworking among existing and planned military satellite communication systems. The communication systems interconnected with this mechanism are called member systems, and the interconnected set of communication systems is called the Intersystem. The Intersystem is implemented with higher layer protocols that impose a common organization on the different signaling conventions, so that end users of different systems can communicate with each other. The Intersystem provides its coordination of member system access and resource requests with Intersystem Resource Controllers (IRCs), which are processors that implement the Intersystem protocols and have interfaces to the member systems' own access and resource control mechanisms. The IRCs are connected to each other to form the IRC Subnetwork. Terminals request services from the IRC Subnetwork using the Intersystem Access Control Protocols, and the IRC Subnetwork responses to the requests are coordinated using the RCRC (Resource Controller to Resource Controller) Protocols.
Quantifying the ozone and ultraviolet benefits already achieved by the Montreal Protocol.
Chipperfield, M P; Dhomse, S S; Feng, W; McKenzie, R L; Velders, G J M; Pyle, J A
2015-05-26
Chlorine- and bromine-containing ozone-depleting substances (ODSs) are controlled by the 1987 Montreal Protocol. In consequence, atmospheric equivalent chlorine peaked in 1993 and has been declining slowly since then. Consistent with this, models project a gradual increase in stratospheric ozone with the Antarctic ozone hole expected to disappear by ∼2050. However, we show that by 2013 the Montreal Protocol had already achieved significant benefits for the ozone layer. Using a 3D atmospheric chemistry transport model, we demonstrate that much larger ozone depletion than observed has been avoided by the protocol, with beneficial impacts on surface ultraviolet. A deep Arctic ozone hole, with column values <120 DU, would have occurred given meteorological conditions in 2011. The Antarctic ozone hole would have grown in size by 40% by 2013, with enhanced loss at subpolar latitudes. The decline over northern hemisphere middle latitudes would have continued, more than doubling to ∼15% by 2013.
Quantifying the ozone and ultraviolet benefits already achieved by the Montreal Protocol
NASA Astrophysics Data System (ADS)
Chipperfield, M. P.; Dhomse, S. S.; Feng, W.; McKenzie, R. L.; Velders, G. J. M.; Pyle, J. A.
2015-05-01
Chlorine- and bromine-containing ozone-depleting substances (ODSs) are controlled by the 1987 Montreal Protocol. In consequence, atmospheric equivalent chlorine peaked in 1993 and has been declining slowly since then. Consistent with this, models project a gradual increase in stratospheric ozone with the Antarctic ozone hole expected to disappear by ~2050. However, we show that by 2013 the Montreal Protocol had already achieved significant benefits for the ozone layer. Using a 3D atmospheric chemistry transport model, we demonstrate that much larger ozone depletion than observed has been avoided by the protocol, with beneficial impacts on surface ultraviolet. A deep Arctic ozone hole, with column values <120 DU, would have occurred given meteorological conditions in 2011. The Antarctic ozone hole would have grown in size by 40% by 2013, with enhanced loss at subpolar latitudes. The decline over northern hemisphere middle latitudes would have continued, more than doubling to ~15% by 2013.
Calibrated work function mapping by Kelvin probe force microscopy
NASA Astrophysics Data System (ADS)
Fernández Garrillo, Pablo A.; Grévin, Benjamin; Chevalier, Nicolas; Borowik, Łukasz
2018-04-01
We propose and demonstrate the implementation of an alternative work function tip calibration procedure for Kelvin probe force microscopy under ultrahigh vacuum, using monocrystalline metallic materials with known crystallographic orientation as reference samples, instead of the often used highly oriented pyrolytic graphite calibration sample. The implementation of this protocol allows the acquisition of absolute and reproducible work function values, with an improved uncertainty with respect to unprepared highly oriented pyrolytic graphite-based protocols. The developed protocol allows the local investigation of absolute work function values over nanostructured samples and can be implemented in electronic structures and devices characterization as demonstrated over a nanostructured semiconductor sample presenting Al0.7Ga0.3As and GaAs layers with variable thickness. Additionally, using our protocol we find that the work function of annealed highly oriented pyrolytic graphite is equal to 4.6 ± 0.03 eV.
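For context, the calibration described above rests on the usual Kelvin-probe relation between the measured contact potential difference and the tip and sample work functions; the form below uses standard textbook notation and a common sign convention, not symbols taken from the paper itself.

% Standard KPFM relation (stated here for context; symbols are the usual ones,
% not notation from the paper): the contact potential difference V_CPD measured
% between tip and sample links their work functions, so a reference sample of
% known work function calibrates the tip.
\[
  eV_{\mathrm{CPD}} = \phi_{\mathrm{tip}} - \phi_{\mathrm{sample}}
  \quad\Longrightarrow\quad
  \phi_{\mathrm{sample}} = \phi_{\mathrm{tip}} - eV_{\mathrm{CPD}}
\]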
Study on Global GIS architecture and its key technologies
NASA Astrophysics Data System (ADS)
Cheng, Chengqi; Guan, Li; Lv, Xuefeng
2009-09-01
Global GIS (G2IS) is a system that supports huge data processing and direct global manipulation on a global grid based on a spheroid or ellipsoid surface. Based on the global subdivision grid (GSG), a Global GIS architecture is presented in this paper, taking advantage of computer cluster theory, space-time integration technology and virtual reality technology. The Global GIS system architecture is composed of five layers: a data storage layer, a data representation layer, a network and cluster layer, a data management layer and a data application layer. Within this architecture, a four-level protocol framework and a three-layer data management pattern are designed for the organization, management and publication of spatial information. Three kinds of core supporting technologies, namely computer cluster theory, space-time integration technology and virtual reality technology, and their application patterns in the Global GIS are introduced in detail. The ideas presented in this paper point to an important development direction for GIS.
Large Scale Portability of Hospital Information System Software
Munnecke, Thomas H.; Kuhn, Ingeborg M.
1986-01-01
As part of its Decentralized Hospital Computer Program (DHCP) the Veterans Administration installed new hospital information systems in 169 of its facilities during 1984 and 1985. The application software for these systems is based on the ANS MUMPS language, is public domain, and is designed to be operating system and hardware independent. The software, developed by VA employees, is built upon a layered approach, where application packages layer on a common data dictionary which is supported by a Kernel of software. Communications between facilities are based on public domain Department of Defense ARPA net standards for domain naming, mail transfer protocols, and message formats, layered on a variety of communications technologies.
Transport Protocols for Wireless Mesh Networks
NASA Astrophysics Data System (ADS)
Eddie Law, K. L.
Transmission control protocol (TCP) provides reliable connection-oriented services between any two end systems on the Internet. With the TCP congestion control algorithm, multiple TCP connections can share network and link resources simultaneously. These TCP congestion control mechanisms have been operating effectively in wired networks. However, the performance of TCP connections degrades rapidly in wireless and lossy networks. To sustain the throughput performance of TCP connections in wireless networks, design modifications may be required in the TCP flow control algorithm and, potentially, in protocols at other layers for proper adaptation. In this chapter, we explain the limitations of the latest TCP congestion control algorithm and then review some popular designs that allow TCP connections to operate effectively in wireless mesh network infrastructures.
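As a minimal illustration of the limitation discussed above, the toy model below applies the additive-increase/multiplicative-decrease (AIMD) window rule and treats every loss, whether caused by congestion or by random wireless corruption, as a congestion signal. It is a simplification for demonstration only (slow start, timeouts and any specific TCP variant are omitted), not a model taken from this chapter.

# Toy AIMD model: every loss (congestion OR random wireless corruption) halves
# the congestion window, which is why TCP throughput degrades quickly on lossy
# links even when the network is not congested.  Illustrative only.
import random

def mean_window(random_loss_rate, rounds=100000, max_window=100):
    w, total = 1.0, 0.0
    for _ in range(rounds):
        total += w
        if random.random() < random_loss_rate * w:   # ~prob. of a loss among w packets
            w = max(1.0, w / 2.0)                     # multiplicative decrease
        else:
            w = min(max_window, w + 1.0)              # additive increase
    return total / rounds

random.seed(1)
for p in (0.0001, 0.001, 0.01):
    print(f"loss rate {p:>7}: average window {mean_window(p):6.1f} segments")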
NASA Astrophysics Data System (ADS)
Jian, Wei; Estevez, Claudio; Chowdhury, Arshad; Jia, Zhensheng; Wang, Jianxin; Yu, Jianguo; Chang, Gee-Kung
2010-12-01
This paper presents an energy-efficient Medium Access Control (MAC) protocol for very-high-throughput millimeter-wave (mm-wave) wireless sensor communication networks (VHT-MSCNs) based on hybrid multiple access techniques of frequency-division multiple access (FDMA) and time-division multiple access (TDMA). An energy-efficient superframe for wireless sensor communication networks employing directional mm-wave wireless access technologies is proposed for systems that require very high throughput, such as high-definition video signals, for sensing, processing, transmitting, and actuating functions. Energy consumption modeling for each network element and comparisons among various multi-access technologies in terms of power and MAC-layer operations are investigated to evaluate the energy-efficiency improvement of the proposed MAC protocol.
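The sketch below gives a simplified, assumed energy model of the kind such evaluations rely on: a node in a TDMA superframe transmits in its own slot and sleeps for the rest of the frame, compared against an always-listening baseline. All power levels, slot counts and data rates are illustrative assumptions, not parameters from the paper.

# Hedged, simplified per-superframe energy model for one sensor node: the node
# transmits in its assigned TDMA slot and sleeps elsewhere.  All power figures
# and frame parameters are assumed illustrative numbers, not values from the paper.
def energy_per_superframe(slots, slot_ms, payload_bits, bitrate_bps,
                          p_tx_mw=120.0, p_idle_mw=60.0, p_sleep_mw=0.05):
    tx_s    = payload_bits / bitrate_bps                 # time actually transmitting
    own_s   = slot_ms / 1000.0                           # the node's assigned slot
    frame_s = slots * slot_ms / 1000.0                   # whole superframe
    e_tdma  = (p_tx_mw * tx_s + p_idle_mw * (own_s - tx_s)
               + p_sleep_mw * (frame_s - own_s))
    e_idle  = p_tx_mw * tx_s + p_idle_mw * (frame_s - tx_s)   # always-listening baseline
    return e_tdma, e_idle                                 # energy in mJ

tdma, idle = energy_per_superframe(slots=32, slot_ms=2.0, payload_bits=8000,
                                   bitrate_bps=1e9)
print(f"TDMA duty-cycled: {tdma:.3f} mJ   always-on: {idle:.3f} mJ")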
The measures needed for the protection of the Earth's ozone layer are decided regularly by the Parties to the Montreal Protocol. This progress report is the 2004 update by the Environmental Effects Assessment Panel.
Network security system for health and medical information using smart IC card
NASA Astrophysics Data System (ADS)
Kanai, Yoichi; Yachida, Masuyoshi; Yoshikawa, Hiroharu; Yamaguchi, Masahiro; Ohyama, Nagaaki
1998-07-01
A new network security protocol that uses smart IC cards has been designed to assure the integrity and privacy of medical information in communication over a non-secure network. Secure communication software has been implemented as a library based on this protocol, which is called the Integrated Secure Communication Layer (ISCL), and has been incorporated into information systems of the National Cancer Center Hospitals and the Health Service Center of the Tokyo Institute of Technology. Both systems have succeeded in communicating digital medical information securely.
Dan, Abhijit; Gochev, Georgi; Miller, Reinhard
2015-07-01
Oscillating drop tensiometry was applied to study adsorbed interfacial layers at water/air and water/hexane interfaces formed from mixed solutions of β-lactoglobulin (BLG, 1 μM in 10 mM buffer, pH 7 - negative net charge) and the anionic surfactant SDS or the cationic DoTAB. The interfacial pressure Π and the dilational viscoelasticity modulus |E| of the mixed layers were measured for mixtures of varying surfactant concentrations. The double capillary technique was employed which enables exchange of the protein solution in the drop bulk by surfactant solution (sequential adsorption) or by pure buffer (washing out). The first protocol allows probing the influence of the surfactant on a pre-adsorbed protein layer thus studying the protein/surfactant interactions at the interface. The second protocol gives access to the residual values of Π and |E| measured after the washing out procedure thus bringing information about the process of protein desorption. The DoTAB/BLG complexes exhibit higher surface activity and higher resistance to desorption in comparison with those for the SDS/BLG complexes due to hydrophobization via electrostatic binding of surfactant molecules. The neutral DoTAB/BLG complexes achieve maximum elastic response of the mixed layer. Mixed BLG/surfactant layers at the water/oil interface are found to reach higher surface pressure and lower maximum dilational elasticity than those at the water/air surface. The sequential adsorption mode experiments and the desorption study reveal that binding of DoTAB to pre-adsorbed BLG globules is somehow restricted at the water/air surface in comparison with the case of complex formation in the solution bulk and subsequently adsorbed at the water/air surface. Maximum elasticity is achieved with washed out layers obtained after simultaneous adsorption, i.e. isolation of the most surface active DoTAB/BLG complex. These specific effects are much less pronounced at the W/H interface. Copyright © 2015 Elsevier Inc. All rights reserved.
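For context, the dilational viscoelasticity modulus |E| quoted above is usually defined, in the standard oscillating-drop convention, as the surface tension response to a relative area perturbation; the notation below is generic and is not taken from the paper.

% Usual oscillating-drop definition (standard convention, not copied from the
% paper): for a small sinusoidal area perturbation, the complex dilational
% modulus relates the surface-tension response to the relative area change,
% and its magnitude |E| is what the drop tensiometer reports.
\[
  E = \frac{\mathrm{d}\gamma}{\mathrm{d}\ln A} = E' + iE'' ,
  \qquad
  |E| = \frac{\Delta\gamma}{\Delta A / A_{0}}
\]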
Need low-cost networking? Consider DeviceNet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moss, W.H.
1996-11-01
The drive to reduce production costs and optimize system performance in manufacturing facilities causes many end users to invest in network solutions. Because of distinct differences between the way tasks are performed and the way data are handled for various applications, it is clear that more than one network will be needed in most facilities. What is not clear is which network is most appropriate for a given application. The information layer is the link between automation and information environments via management information systems (MISs) and manufacturing execution systems (MESs). Here the market has chosen a de facto standard in Ethernet, primarily transmission control protocol/internet protocol (TCP/IP) and secondarily manufacturing messaging system (MMS). There is no single standard at the device layer. However, the DeviceNet communication standard has made strides toward this goal. This protocol eliminates expensive hardwiring and provides improved communication between devices as well as important device-level diagnostics not easily accessible or available through hardwired I/O interfaces. DeviceNet is a low-cost communications link connecting industrial devices to a network. Many original equipment manufacturers and end users have chosen the DeviceNet platform for several reasons, but most frequently because of four key features: interchangeability, low cost, advanced diagnostics, and the ability to insert devices under power.
CFTLB: a novel cross-layer fault tolerant and load balancing protocol for WMN
NASA Astrophysics Data System (ADS)
Krishnaveni, N. N.; Chitra, K.
2017-12-01
Wireless mesh networks (WMNs) form a wireless backbone framework for multi-hop transmission among the routers and clients in an extensible coverage area. To improve the throughput of WMNs with multiple gateways (GWs), several issues related to GW selection, load balancing and frequent link failures due to the presence of dynamic obstacles and channel interference should be addressed. This paper presents a novel cross-layer fault-tolerant and load-balancing (CFTLB) protocol to overcome these issues in WMNs. Initially, the neighbouring GWs are searched and the channel load is calculated. The GW with the least channel load, estimated when a new node arrives, is selected. The proposed algorithm finds alternate GWs and calculates channel availability under high-load scenarios: if the current load on the serving GW is high, another GW is found and its channel availability is calculated. The protocol then initiates channel switching and establishes communication with the mesh client effectively. A hashing technique in the proposed CFTLB verifies the status of packets. CFTLB achieves better performance in terms of router average throughput, throughput and average channel access time, and lower end-to-end delay, communication overhead and average data loss in the channel compared to existing protocols.
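A minimal sketch of the gateway-selection idea described above is given below: pick the gateway with the least channel load when a node joins, and fall back to an alternate gateway (with a channel switch) when the serving one is overloaded. This illustrates the stated logic under assumed thresholds and data structures; it is not the authors' implementation.

# Hedged sketch of least-load gateway selection with overload fallback.
# Threshold, names and data layout are assumptions for illustration only.
LOAD_THRESHOLD = 0.8   # assumed overload threshold (fraction of channel capacity)

def select_gateway(gateways):
    """gateways: dict of gateway_id -> current channel load in [0, 1]."""
    return min(gateways, key=gateways.get)

def handle_overload(current_gw, gateways):
    """If the serving gateway is overloaded, switch to the least-loaded
    alternate that still has channel capacity available."""
    if gateways[current_gw] <= LOAD_THRESHOLD:
        return current_gw                           # keep the current gateway
    alternates = {gw: load for gw, load in gateways.items()
                  if gw != current_gw and load < LOAD_THRESHOLD}
    return min(alternates, key=alternates.get) if alternates else current_gw

loads = {"GW1": 0.92, "GW2": 0.35, "GW3": 0.60}
print(select_gateway(loads))            # a new node joins -> GW2
print(handle_overload("GW1", loads))    # GW1 overloaded -> switch to GW2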
The Interplanetary Overlay Networking Protocol Accelerator
NASA Technical Reports Server (NTRS)
Pang, Jackson; Torgerson, Jordan L.; Clare, Loren P.
2008-01-01
A document describes the Interplanetary Overlay Networking Protocol Accelerator (IONAC), an electronic apparatus, now under development, for relaying data at high rates in spacecraft and interplanetary radio-communication systems utilizing a delay-tolerant networking protocol. The protocol includes provisions for transmission and reception of data in bundles (essentially, messages), transfer of custody of a bundle to a recipient relay station at each step of a relay, and return receipts. Because of limitations on the energy resources available for such relays, the data rates attainable in a conventional software implementation of the protocol are lower than those needed at any given reasonable energy-consumption rate. Therefore, a main goal in developing the IONAC is to reduce the energy consumption by an order of magnitude and increase the data-throughput capability by two orders of magnitude. The IONAC prototype is a field-programmable gate array that serves as a reconfigurable hybrid (hardware/firmware) system for implementation of the protocol. The prototype can decode 108,000 bundles per second and encode 100,000 bundles per second. It includes a bundle-cache static random-access memory that enables maintenance of a throughput of 2.7 Gb/s, and an Ethernet convergence layer that supports a duplex throughput of 1 Gb/s.
The Importance of the Montreal Protocol in Protecting the Earth's Hydroclimate
NASA Astrophysics Data System (ADS)
Seager, R.; Wu, Y.; Polvani, L. M.
2012-12-01
The 1987 Montreal Protocol regulating emissions of ozone depleting chlorofluorocarbons (CFCs) was motivated primarily by the harm to human health and ecosystems arising from increased exposure to ultraviolet-B (UV-B) radiation associated with depletion from the ozone layer. It is now known that the Montreal Protocol has reduced global warming since CFCs are greenhouse gases (GHGs). In this paper we show that the Montreal Protocol also significantly protects the Earth's hydroclimate, even though this was also not a motivating factor in the decision-making that led to the Protocol. General Circulation Model (GCM) results show that in the coming decade (2020-29), under the 'World Avoided' scenario of no regulations on CFC emissions, the subtropical dry zones would in general get drier, and the middle and high latitude regions wetter. This change is similar, in both pattern and magnitude, to that in the coming decade caused by projected increases in carbon dioxide concentrations. This implies that because of the Montreal Protocol, and the ozone depletion and global warming associated with CFCs thus avoided, the hydrological cycle changes in the coming decade will be significantly less than what they otherwise would have been.
The Balanced Cross-Layer Design Routing Algorithm in Wireless Sensor Networks Using Fuzzy Logic.
Li, Ning; Martínez, José-Fernán; Hernández Díaz, Vicente
2015-08-10
Recently, the cross-layer design for the wireless sensor network communication protocol has become more and more important and popular. Considering the disadvantages of the traditional cross-layer routing algorithms, in this paper we propose a new fuzzy logic-based routing algorithm, named the Balanced Cross-layer Fuzzy Logic (BCFL) routing algorithm. In BCFL, we use the cross-layer parameters' dispersion as the fuzzy logic inference system inputs. Moreover, we give each cross-layer parameter a dynamic weight according to the value of its dispersion. To obtain a balanced solution, a parameter whose dispersion is large is given a small weight, and vice versa. In order to compare it with the traditional cross-layer routing algorithms, BCFL is evaluated through extensive simulations. The simulation results show that the new routing algorithm can handle multiple constraints without increasing the complexity of the algorithm and can achieve the most balanced performance in selecting the next-hop relay node. Moreover, the Balanced Cross-layer Fuzzy Logic routing algorithm can adapt to dynamic changes of the network conditions and topology effectively.
The Balanced Cross-Layer Design Routing Algorithm in Wireless Sensor Networks Using Fuzzy Logic
Li, Ning; Martínez, José-Fernán; Díaz, Vicente Hernández
2015-01-01
Recently, the cross-layer design for the wireless sensor network communication protocol has become more and more important and popular. Considering the disadvantages of the traditional cross-layer routing algorithms, in this paper we propose a new fuzzy logic-based routing algorithm, named the Balanced Cross-layer Fuzzy Logic (BCFL) routing algorithm. In BCFL, we use the cross-layer parameters’ dispersion as the fuzzy logic inference system inputs. Moreover, we give each cross-layer parameter a dynamic weight according to the value of its dispersion. To obtain a balanced solution, a parameter whose dispersion is large is given a small weight, and vice versa. In order to compare it with the traditional cross-layer routing algorithms, BCFL is evaluated through extensive simulations. The simulation results show that the new routing algorithm can handle multiple constraints without increasing the complexity of the algorithm and can achieve the most balanced performance in selecting the next-hop relay node. Moreover, the Balanced Cross-layer Fuzzy Logic routing algorithm can adapt to dynamic changes of the network conditions and topology effectively. PMID:26266412
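A minimal sketch of the dispersion-to-weight step described in the two records above is given below, assuming parameter values normalized to [0, 1]. The fuzzy inference stage itself is omitted, and the parameter names are illustrative assumptions rather than those used by the authors.

# Hedged sketch: each cross-layer parameter gets a dynamic weight inversely
# related to its dispersion across candidate next hops, so a parameter that
# barely discriminates between neighbors contributes little.  Illustration
# only; the full fuzzy inference of BCFL is not implemented here.
import statistics

def dispersion_weights(candidates):
    """candidates: list of dicts, one per neighbor, mapping parameter -> value
    (values assumed normalized to [0, 1]).  Returns parameter -> weight."""
    params = candidates[0].keys()
    disp = {p: statistics.pstdev(c[p] for c in candidates) for p in params}
    inv = {p: 1.0 / (d + 1e-9) for p, d in disp.items()}   # large dispersion -> small weight
    total = sum(inv.values())
    return {p: v / total for p, v in inv.items()}

neighbors = [
    {"residual_energy": 0.9, "link_quality": 0.8, "delay": 0.2},
    {"residual_energy": 0.5, "link_quality": 0.7, "delay": 0.6},
    {"residual_energy": 0.2, "link_quality": 0.9, "delay": 0.4},
]
print(dispersion_weights(neighbors))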
Lee, Dae-Sik; Yang, Haesik; Chung, Kwang-Hyo; Pyo, Hyeon-Bong
2005-08-15
Because of their broad applications in biomedical analysis, integrated, polymer-based microdevices incorporating micropatterned metallic and insulating layers are significant in contemporary research. In this study, micropatterns for temperature sensing and microelectrode sets for electroanalysis have been implemented on an injection-molded thin polymer membrane by employing conventional semiconductor processing techniques (i.e., standard photolithographic methods). Cyclic olefin copolymer (COC) is chosen as the polymer substrate because of its high chemical and thermal stability. A COC 5-in. wafer (1-mm thickness) is manufactured using an injection molding method, in which polymer membranes (approximately 130 µm thick and 3 mm × 6 mm in area) are implemented simultaneously in order to reduce local thermal mass around micropatterned heaters and temperature sensors. The highly polished surface (approximately 4 nm within a 40 µm × 40 µm area) of the fabricated COC wafer as well as its good resistance to typical process chemicals makes it possible to use the standard photolithographic and etching protocols on the COC wafer. Gold micropatterns with a minimum 5-µm line width are fabricated for making microheaters, temperature sensors, and microelectrodes. An insulating layer of aluminum oxide (Al2O3) is prepared at a COC-endurable low temperature (approximately 120 °C) by using atomic layer deposition and micropatterning for the electrode contacts. The fabricated microdevice for heating and temperature sensing shows improved performance of thermal isolation, and microelectrodes display good electrochemical performances for electrochemical sensors. Thus, this novel 5-in. wafer-level microfabrication method is a simple and cost-effective protocol to prepare polymer substrates and demonstrates good potential for application to highly integrated and miniaturized biomedical devices.
Enhanced Multi-Modal Access to Planetary Exploration
NASA Technical Reports Server (NTRS)
Lamarra, Norm; Doyle, Richard; Wyatt, Jay
2003-01-01
Tomorrow's Interplanetary Network (IPN) will evolve from JPL's Deep-Space Network (DSN) and provide key capabilities to future investigators, such as simplified acquisition of higher-quality science at remote sites and enriched access to these sites. These capabilities could also be used to foster public interest, e.g., by making it possible for students to explore these environments personally, eventually perhaps interacting with a virtual world whose models could be populated by data obtained continuously from the IPN. Our paper looks at JPL's approach to making this evolution happen, starting from improved communications. Evolving space protocols (e.g., today's CCSDS proximity and file-transfer protocols) will provide the underpinning of such communications in the next decades, just as today's rich web was enabled by progress in Internet Protocols starting from the early 1970's (ARPAnet research). A key architectural thrust of this effort is to deploy persistent infrastructure incrementally, using a layered service model, where later higher-layer capabilities (such as adaptive science planning) are enabled by earlier lower-layer services (such as automated routing of object-based messages). In practice, there is also a mind shift needed from an engineering culture raised on point-to-point single-function communications (command uplink, telemetry downlink), to one in which assets are only indirectly accessed, via well-defined interfaces. We are aiming to foster a 'community of access' both among space assets and the humans who control them. This enables appropriate (perhaps eventually optimized) sharing of services and resources to the greater benefit of all participants. We envision such usage to be as automated in the future as using a cell phone is today - with all the steps in creating the real-time link being automated.
Röhe, Ilen; Hüttner, Friedrich Joseph; Plendl, Johanna; Drewes, Barbara; Zentek, Jürgen
2018-02-05
The histological characterization of the intestinal mucus layer is important for many scientific experiments investigating the interaction between intestinal microbiota, mucosal immune response and intestinal mucus production. The aim of this study was to examine and compare different fixation protocols for displaying and quantifying the intestinal mucus layer in piglets and to test which histomorphological parameters may correlate with the determined mucus layer thickness. Jejunal and colonal tissue samples of weaned piglets (n=10) were either frozen in liquid nitrogen or chemically fixed using methacarn solution. The frozen tissue samples were cryosectioned and subsequently postfixed using three different postfixatives: paraformaldehyde vapor, neutrally buffered formalin solution and ethanol solution. After dehydration, methacarn fixed tissues were embedded in paraffin wax. Both sections of cryopreserved and methacarn fixed tissue samples were stained with Alcian blue (AB)-PAS followed by the microscopically determination of the mucus layer thickness. Different pH values of the Alcian Blue staining solution and two mucus layer thickness measuring methods were compared. In addition, various histomorphological parameters of methacarn fixed tissue samples were evaluated including the number of goblet cells and the mucin staining area. Cryopreservation in combination with chemical postfixation led to mucus preservation in the colon of piglets allowing mucus thickness measurements. Mucus could be only partly preserved in cryosections of the jejunum impeding any quantitative description of the mucus layer thickness. The application of different postfixations, varying pH values of the AB solution and different mucus layer measuring methods led to comparable results regarding the mucus layer thickness. Methacarn fixation proved to be unsuitable for mucus depiction as only mucus patches were found in the jejunum or a detachment of the mucus layer from the epithelium was observed in the colon. Correlation analyses revealed that the proportion of the mucin staining area per crypt area (relative mucin staining) measured in methacarn fixed tissue samples corresponded to the colonal mucus layer thickness determined in cryopreserved tissue samples. In conclusion, the results showed that cryopreservation using liquid nitrogen followed by chemical postfixation and AB-PAS staining led to a reliable mucus preservation allowing a mucus thickness determination in the colon of pigs. Moreover, the detected relative mucin staining area may serve as a suitable histomorphological parameter for the assessment of the intestinal mucus layer thickness. The findings obtained in this study can be used for the implementation of an improved standard for the histological description of the mucus layer in the colon of pigs.
Joint Cross-Layer Design for Wireless QoS Content Delivery
NASA Astrophysics Data System (ADS)
Chen, Jie; Lv, Tiejun; Zheng, Haitao
2005-12-01
In this paper, we propose a joint cross-layer design for wireless quality-of-service (QoS) content delivery. Central to our proposed cross-layer design is the concept of adaptation. Adaptation represents the ability to adjust protocol stacks and applications to respond to channel variations. We focus our cross-layer design especially on the application, media access control (MAC), and physical layers. The network is designed based on our proposed fast frequency-hopping orthogonal frequency division multiplexing (OFDM) technique. We also propose a QoS-aware scheduler and a power adaptation transmission scheme operating at both the base station and mobile sides. The proposed MAC scheduler coordinates the transmissions of an IP base station and mobile nodes. The scheduler also selects appropriate transmission formats and packet priorities for individual users based on current channel conditions and the users' QoS requirements. The test results show that our cross-layer design provides an excellent framework for wireless QoS content delivery.
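To make the scheduling idea concrete, the sketch below ranks users each frame by a score combining channel quality, an assumed QoS-class urgency and queueing delay, and serves the highest-scoring user. This is a hedged illustration of a channel- and QoS-aware scheduling decision; the weights, classes and metric are assumptions, not the authors' scheduler.

# Hedged sketch of a channel- and QoS-aware per-frame scheduling decision.
# Classes, weights and fields are illustrative assumptions only.
QOS_URGENCY = {"voice": 3.0, "video": 2.0, "best_effort": 1.0}   # assumed classes

def schedule(users):
    """users: list of dicts with 'id', 'cqi' (channel quality in [0, 1]),
    'qos' class and 'queue_delay_ms'.  Returns the id of the user to serve next."""
    def score(u):
        return u["cqi"] * QOS_URGENCY[u["qos"]] * (1.0 + u["queue_delay_ms"] / 100.0)
    return max(users, key=score)["id"]

frame = [
    {"id": "u1", "cqi": 0.9, "qos": "best_effort", "queue_delay_ms": 5},
    {"id": "u2", "cqi": 0.5, "qos": "voice",       "queue_delay_ms": 40},
    {"id": "u3", "cqi": 0.7, "qos": "video",       "queue_delay_ms": 10},
]
print(schedule(frame))   # -> "u2" (urgent class and growing delay outweigh lower CQI)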
Study of LTPP laboratory resilient modulus test data and response characteristics.
DOT National Transportation Integrated Search
2002-10-01
The resilient modulus of every unbound structural layer of the Long Term Pavement Performance (LTPP) Specific Pavement and General Pavement Studies Test Sections is being measured in the laboratory using LTPP test protocol P46. A total of 2,014 r...
STRATOSPHERIC OZONE PROTECTION: AN EPA ENGINEERING PERSPECTIVE
Chlorine released into the atmosphere is a major factor in the depletion of the protective stratospheric ozone layer. The Montreal Protocol, as amended in 1990, and the Clean Air Act Amendments of 1990, address the limits and reduction schedules to be placed on chlorine- and brom...
Enhanced parent selection algorithms in mintroute protocol
NASA Astrophysics Data System (ADS)
Kim, Ki-Il
2012-11-01
The low-rate, short-range wireless radio communication available on small devices often hampers high reliability in wireless sensor networks. However, more and more applications demand high reliability. To meet this requirement, various approaches have been proposed at each layer. Among these, MintRoute is a well-known network-layer approach that introduces a link-quality-based metric for path selection towards the sink. By choosing the link with the highest measured value, it has a higher probability of transmitting a packet over the link without error. However, several operational issues remain. In this paper, we propose how to improve the MintRoute protocol through revised algorithms. These include parent selection that considers the distance and level from the sink node, and a fast recovery method against failures. Simulations and analysis are performed in order to validate the reduced end-to-end delay and fast recovery from failures, and thus the enhanced reliability of communication.
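A minimal sketch of the enhanced parent-selection idea is shown below: candidates with near-best link quality are short-listed and the one closest to the sink (lowest level) is chosen. The field names and tie margin are assumptions for illustration, not the exact algorithm of the paper.

# Hedged sketch: prefer good link quality, break near-ties using the candidate's
# hop level toward the sink.  Margin and record layout are assumed.
TIE_MARGIN = 0.05   # link-quality difference treated as "about equal"

def choose_parent(neighbors):
    """neighbors: list of dicts with 'id', 'link_quality' in [0, 1] and
    'level' (hop count to the sink).  Returns the chosen parent's id."""
    best_quality = max(n["link_quality"] for n in neighbors)
    near_best = [n for n in neighbors
                 if best_quality - n["link_quality"] <= TIE_MARGIN]
    return min(near_best, key=lambda n: n["level"])["id"]

table = [
    {"id": "A", "link_quality": 0.96, "level": 3},
    {"id": "B", "link_quality": 0.94, "level": 2},   # near-tie, but closer to the sink
    {"id": "C", "link_quality": 0.70, "level": 1},
]
print(choose_parent(table))   # -> "B"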
Domain and nanoridge growth kinetics in stratifying foam films
NASA Astrophysics Data System (ADS)
Zhang, Yiran; Sharma, Vivek
Ultrathin films exhibit stratification due to confinement-induced structuring and layering of small molecules in simple fluids, and of supramolecular structures like micelles, lipid layers and nanoparticles in complex fluids. Stratification proceeds by the formation and growth of thinner domains at the expense of surrounding thicker film, and results in formation of nanoscopic terraces and mesas within a film. The detailed mechanisms underlying stratification are still under debate, and are resolved in this contribution by addressing long-standing experimental and theoretical challenges. Thickness variations in stratifying films are visualized and analyzed using interferometry, digital imaging and optical microscopy (IDIOM) protocols, with unprecedented high spatial (thickness <100 nm, lateral 500 nm) and temporal resolution (<1 ms). Using IDIOM protocols we developed recently, we characterize the shape and the growth dynamics of nanoridges that flank the expanding domains in micellar thin films. We show that topographical changes including nanoridge growth, and the overall stratification dynamics, can be described quantitatively by nonlinear thin film equation, amended with supramolecular oscillatory surface forces.
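For context, the nonlinear thin-film (lubrication) equation referred to above is commonly written in the form below, with the capillary pressure amended by a supramolecular oscillatory disjoining pressure Π(h); the notation is the standard one and is not copied from the authors' paper.

% Commonly used lubrication form of the nonlinear thin-film equation, amended
% with a supramolecular oscillatory disjoining pressure \Pi(h).  Standard
% notation for context only, not reproduced from the paper: h is film
% thickness, \mu viscosity, \sigma surface tension, d the micellar layer
% spacing and \lambda a decay length.
\[
  \frac{\partial h}{\partial t}
  = \nabla \cdot \left[
      \frac{h^{3}}{3\mu}\,
      \nabla\!\left( -\,\sigma \nabla^{2} h \;-\; \Pi(h) \right)
    \right],
  \qquad
  \Pi(h) \sim \Pi_{0}\, e^{-h/\lambda}\cos\!\left(\frac{2\pi h}{d}\right)
\]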
NASA Astrophysics Data System (ADS)
Huang, Hong-bin; Liu, Wei-ping; Chen, Shun-er; Zheng, Liming
2005-02-01
A new type of CATV network management system, developed on a universal MCU and supporting SNMP, is proposed in this paper. From both the hardware and software points of view, the function and implementation of each module in the system, including physical-layer communication, protocol processing and data processing, are analyzed. In our design, the management system uses the IP MAN as its data transmission channel and every controlled object in the management structure has an SNMP agent. The SNMP agent developed contains four function modules: a physical-layer communication module, a protocol processing module, an internal data processing module and a MIB management module. The structure and function of each module are designed and demonstrated, while the related hardware circuits, software flow and experimental results are presented. Furthermore, by introducing an RTOS into the software, the MCU program can run multi-threaded tasks such as driving the fast Ethernet controller, TCP/IP processing and serial-port signal monitoring, which greatly improves CPU efficiency.
Padula, William V
The purpose of this study was to examine the effectiveness and value of prophylactic 5-layer foam sacral dressings to prevent hospital-acquired pressure injury rates in acute care settings. Retrospective observational cohort. We reviewed records of adult patients 18 years or older who were hospitalized at least 5 days across 38 acute care hospitals of the University Health System Consortium (UHC) and had a pressure injury as identified by Patient Safety Indicator #3 (PSI-03). All facilities are located in the United States. We collected longitudinal data pertaining to prophylactic 5-layer foam sacral dressings purchased by hospital-quarter for 38 academic medical centers between 2010 and 2015. Longitudinal data on acute care, hospital-level patient outcomes (eg, admissions and PSI-03 and pressure injury rate) were queried through the UHC clinical database/resource manager from the Johns Hopkins Medicine portal. Data on volumes of dressings purchased per UHC hospital were merged with UHC data. Mixed-effects negative binomial regression was used to test the longitudinal association of prophylactic foam sacral dressings on pressure injury rates, adjusted for hospital case-mix and Medicare payments rules. Significant pressure injury rate reductions in US acute care hospitals between 2010 and 2015 were associated with the adoption of prophylactic 5-layer foam sacral dressings within a prevention protocol (-1.0 cases/quarter; P = .002) and changes to Medicare payment rules in 2014 (-1.13 cases/quarter; P = .035). Prophylactic 5-layer foam sacral dressings are an effective component of a pressure injury prevention protocol. Hospitals adopting these technologies should expect good value for use of these products.
2017-01-01
PURPOSE: The purpose of this study was to examine the effectiveness and value of prophylactic 5-layer foam sacral dressings to prevent hospital-acquired pressure injury rates in acute care settings. DESIGN: Retrospective observational cohort. SAMPLE AND SETTING: We reviewed records of adult patients 18 years or older who were hospitalized at least 5 days across 38 acute care hospitals of the University Health System Consortium (UHC) and had a pressure injury as identified by Patient Safety Indicator #3 (PSI-03). All facilities are located in the United States. METHODS: We collected longitudinal data pertaining to prophylactic 5-layer foam sacral dressings purchased by hospital-quarter for 38 academic medical centers between 2010 and 2015. Longitudinal data on acute care, hospital-level patient outcomes (eg, admissions and PSI-03 and pressure injury rate) were queried through the UHC clinical database/resource manager from the Johns Hopkins Medicine portal. Data on volumes of dressings purchased per UHC hospital were merged with UHC data. Mixed-effects negative binomial regression was used to test the longitudinal association of prophylactic foam sacral dressings on pressure injury rates, adjusted for hospital case-mix and Medicare payments rules. RESULTS: Significant pressure injury rate reductions in US acute care hospitals between 2010 and 2015 were associated with the adoption of prophylactic 5-layer foam sacral dressings within a prevention protocol (−1.0 cases/quarter; P = .002) and changes to Medicare payment rules in 2014 (−1.13 cases/quarter; P = .035). CONCLUSIONS: Prophylactic 5-layer foam sacral dressings are an effective component of a pressure injury prevention protocol. Hospitals adopting these technologies should expect good value for use of these products. PMID:28816929
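As a hedged illustration of the kind of count-rate model named in the two records above, the sketch below fits a plain (fixed-effects) negative binomial GLM to synthetic hospital-quarter data, using admissions as the exposure offset. The study itself used a mixed-effects specification with additional covariates; the variable names, coefficients and data here are assumptions for demonstration only.

# Hedged, simplified sketch: negative binomial regression of quarterly
# pressure-injury counts on prophylactic dressing volume, with admissions as
# exposure.  Synthetic data; not the study's mixed-effects model or results.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200                                         # synthetic hospital-quarters
dressings = rng.uniform(0, 10, n)               # dressing volume (arbitrary units)
admissions = rng.integers(2000, 8000, n)        # exposure per quarter
rate = np.exp(-6.0 - 0.08 * dressings)          # assumed true injury rate per admission
mu = rate * admissions
y = rng.negative_binomial(5, 5 / (5 + mu))      # overdispersed counts with mean mu

X = sm.add_constant(dressings)
model = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.2),
               offset=np.log(admissions))
print(model.fit().summary())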
Prevention of intra-abdominal adhesion by bi-layer electrospun membrane.
Jiang, Shichao; Wang, Wei; Yan, Hede; Fan, Cunyi
2013-06-04
The aim of this study was to compare the anti-adhesion efficacy of a bi-layer electrospun fibrous membrane consisting of hyaluronic acid-loaded poly(ε-caprolactone) (PCL) fibrous membrane as the inner layer and PCL fibrous membrane as the outer layer with a single-layer PCL electrospun fibrous membrane in a rat cecum abrasion model. The rat model utilized a cecal abrasion and abdominal wall insult surgical protocol. The bi-layer and PCL membranes were applied between the cecum and the abdominal wall, respectively. Control animals did not receive any treatment. After postoperative day 14, a visual semiquantitative grading scale was used to grade the extent of adhesion. Histological analysis was performed to reveal the features of adhesion tissues. Bi-layer membrane treated animals showed significantly lower adhesion scores than control animals (p < 0.05) and a lower adhesion score compared with the PCL membrane. Histological analysis of the bi-layer membrane treated rat rarely demonstrated tissue adhesion while that of the PCL membrane treated rat and control rat showed loose and dense adhesion tissues, respectively. Bi-layer membrane can efficiently prevent adhesion formation in abdominal cavity and showed a significantly decreased adhesion tissue formation compared with the control.
Thompson, Brandon L; Ouyang, Yiwen; Duarte, Gabriela R M; Carrilho, Emanuel; Krauss, Shannon T; Landers, James P
2015-06-01
We describe a technique for fabricating microfluidic devices with complex multilayer architectures using a laser printer, a CO2 laser cutter, an office laminator and common overhead transparencies as a printable substrate via a laser print, cut and laminate (PCL) methodology. The printer toner serves three functions: (i) it defines the microfluidic architecture, which is printed on the overhead transparencies; (ii) it acts as the adhesive agent for the bonding of multiple transparency layers; and (iii) it provides, in its unmodified state, printable, hydrophobic 'valves' for fluidic flow control. By using common graphics software, e.g., CorelDRAW or AutoCAD, the protocol produces microfluidic devices with a design-to-device time of ∼40 min. Devices of any shape can be generated for an array of multistep assays, with colorimetric detection of molecular species ranging from small molecules to proteins. Channels with varying depths can be formed using multiple transparency layers in which a CO2 laser is used to remove the polyester from the channel sections of the internal layers. The simplicity of the protocol, availability of the equipment and substrate and cost-effective nature of the process make microfluidic devices available to those who might benefit most from expedited, microscale chemistry.
Davis, James; Vaughan, D Huw; Stirling, David; Nei, Lembit; Compton, Richard G
2002-07-19
The exploitation of the Ni(III)/Ni(II) transition as a means of quantifying the concentration of nickel within industrial samples was assessed. The methodology relies upon the reagentless electrodeposition of Ni onto a glassy carbon electrode and the subsequent oxidative conversion of the metallic layer to Ni(III). The analytical signal is derived from a cathodic stripping protocol in which the reduction of the Ni(III) layer to Ni(II) is monitored through the use of square wave voltammetry. The procedure was refined through the introduction of an ultrasonic source, which served both to enhance the deposition of nickel and to remove the nickel hydroxide layer that results from the measurement process. A well-defined stripping peak was observed at +0.7 V (vs. Ag|AgCl), with the response found to be linear over the range 50 nM to 1 µM (based on a 30 s deposition time). Other metal ions such as Cu(II), Mn(II), Cr(III), Pb(II), Cd(II), Zn(II), Fe(III) and Co(II) did not interfere with the response when present in hundredfold excess. The viability of the technique was evaluated through the determination of nickel within a commercial copper-nickel alloy and validated through an independent comparison with a standard ICP-AES protocol.
Self-assembled Nano-layering at the Adhesive interface.
Yoshida, Y; Yoshihara, K; Nagaoka, N; Hayakawa, S; Torii, Y; Ogawa, T; Osaka, A; Meerbeek, B Van
2012-04-01
According to the 'Adhesion-Decalcification' concept, specific functional monomers within dental adhesives can ionically interact with hydroxyapatite (HAp). Such ionic bonding has been demonstrated for 10-methacryloyloxydecyl dihydrogen phosphate (MDP) to manifest in the form of self-assembled 'nano-layering'. However, it remained to be explored if such nano-layering also occurs on tooth tissue when commercial MDP-containing adhesives (Clearfil SE Bond, Kuraray; Scotchbond Universal, 3M ESPE) were applied following common clinical application protocols. We therefore characterized adhesive-dentin interfaces chemically, using x-ray diffraction (XRD) and energy-dispersive x-ray spectroscopy (EDS), and ultrastructurally, using (scanning) transmission electron microscopy (TEM/STEM). Both adhesives revealed nano-layering at the adhesive interface, not only within the hybrid layer but also, particularly for Clearfil SE Bond (Kuraray), extending into the adhesive layer. Since such self-assembled nano-layering of two 10-MDP molecules, joined by stable MDP-Ca salt formation, must make the adhesive interface more resistant to biodegradation, it may well explain the documented favorable clinical longevity of bonds produced by 10-MDP-based adhesives.
Column chromatography as a useful step in purification of diatom pigments.
Tokarek, Wiktor; Listwan, Stanisław; Pagacz, Joanna; Leśniak, Piotr; Latowski, Dariusz
2016-01-01
Fucoxanthin, diadinoxanthin and diatoxanthin are carotenoids found in brown algae and most other heterokonts. These pigments are involved in photosynthetic and photoprotective reactions, and they have many potential health benefits. They can be extracted from diatom Phaeodactylum tricornutum by sonication, extraction with chloroform : methanol and preparative thin layer chromatography. We assessed the utility of an additional column chromatography step in purification of these pigments. This novel addition to the isolation protocol increased the purity of fucoxanthin and allowed for concentration of diadinoxanthin and diatoxanthin before HPLC separation. The enhanced protocol is useful for obtaining high purity pigments for biochemical studies.
Improving security of the ping-pong protocol
NASA Astrophysics Data System (ADS)
Zawadzki, Piotr
2013-01-01
A security layer for the asymptotically secure ping-pong protocol is proposed and analyzed in the paper. The operation of the improvement exploits inevitable errors introduced by the eavesdropping in the control and message modes. Its role is similar to the privacy amplification algorithms known from the quantum key distribution schemes. Messages are processed in blocks which guarantees that an eavesdropper is faced with a computationally infeasible problem as long as the system parameters are within reasonable limits. The introduced additional information preprocessing does not require quantum memory registers and confidential communication is possible without prior key agreement or some shared secret.
An FEC Adaptive Multicast MAC Protocol for Providing Reliability in WLANs
NASA Astrophysics Data System (ADS)
Basalamah, Anas; Sato, Takuro
For wireless multicast applications like multimedia conferencing, voice over IP and video/audio streaming, a reliable transmission of packets within short delivery delay is needed. Moreover, reliability is crucial to the performance of error intolerant applications like file transfer, distributed computing, chat and whiteboard sharing. Forward Error Correction (FEC) is frequently used in wireless multicast to enhance Packet Error Rate (PER) performance, but cannot assure full reliability unless coupled with Automatic Repeat Request, forming what is known as Hybrid-ARQ. While reliable FEC can be deployed at different levels of the protocol stack, it cannot be deployed on the MAC layer of the unreliable IEEE802.11 WLAN due to its inability to exchange ACKs with multiple recipients. In this paper, we propose a Multicast MAC protocol that enhances WLAN reliability by using Adaptive FEC and study its performance through mathematical analysis and simulation. Our results show that our protocol can deliver high reliability and throughput performance.
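As a rough illustration of how FEC coupled with ARQ (Hybrid-ARQ) recovers multicast losses without per-receiver acknowledgements, the sketch below implements a toy XOR-parity code in Python in which the sender adapts the parity-group size to an estimated packet error rate and a receiver repairs any single loss per group. This is not the authors' MAC protocol; the group-size rule, block size and loss model are illustrative assumptions.

import os, random

def parity_packet(packets):
    # XOR all equal-length packets of a group into one parity packet.
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

def choose_group_size(per, max_group=8):
    # Crude rate adaptation: higher loss rate -> smaller groups (more parity overhead),
    # aiming for roughly one expected loss per parity group.
    if per <= 0:
        return max_group
    return max(1, min(max_group, int(1.0 / per)))

def encode_block(packets, per):
    g = choose_group_size(per)
    groups = [packets[i:i + g] for i in range(0, len(packets), g)]
    return [(grp, parity_packet(grp)) for grp in groups]

def recover(received, parity):
    # Repair a single missing packet by XORing the parity with the received ones.
    missing = [i for i, p in enumerate(received) if p is None]
    if len(missing) != 1:
        return received          # 0 losses: nothing to do; >1 losses: ARQ would be needed
    rec = bytearray(parity)
    for p in received:
        if p is not None:
            for j, b in enumerate(p):
                rec[j] ^= b
    received[missing[0]] = bytes(rec)
    return received

# Toy usage: 16 data packets, estimated PER of 10%.
data = [os.urandom(32) for _ in range(16)]
for grp, par in encode_block(data, per=0.10):
    rx = [p if random.random() > 0.10 else None for p in grp]
    rx = recover(rx, par)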
NASA Astrophysics Data System (ADS)
Serbu, Sabina; Rivière, Étienne; Felber, Pascal
The emergence of large-scale distributed applications based on many-to-many communication models, e.g., broadcast and decentralized group communication, has an important impact on the underlying layers, notably the Internet routing infrastructure. To make effective use of network resources, protocols should both limit the stress (amount of messages) on each infrastructure entity like routers and links, and balance as much as possible the load in the network. Most protocols use application-level metrics such as delays to improve efficiency of content dissemination or routing, but the extent to which such application-centric optimizations help reduce and balance the load imposed on the infrastructure is unclear. In this paper, we elaborate on the design of such network-friendly protocols and associated metrics. More specifically, we investigate random-based gossip dissemination. We propose and evaluate different ways of making this representative protocol network-friendly while keeping its desirable properties (robustness and low delays). Simulations of the proposed methods using synthetic and real network topologies convey and compare their abilities to reduce and balance the load while keeping good performance.
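A minimal sketch of the random push-gossip dissemination the paper takes as its starting point (not the network-friendly variant it proposes): each informed node forwards the message to a few randomly chosen neighbours per round, and a per-link counter records the stress imposed on the underlying infrastructure so that load balance can be inspected. The graph model, fanout and round structure are assumptions made for illustration.

import random
from collections import defaultdict

def random_graph(n, degree=4, seed=1):
    rng = random.Random(seed)
    neigh = defaultdict(set)
    for u in range(n):
        while len(neigh[u]) < degree:
            v = rng.randrange(n)
            if v != u:
                neigh[u].add(v)
                neigh[v].add(u)
    return neigh

def push_gossip(neigh, source=0, fanout=2, seed=2):
    rng = random.Random(seed)
    informed = {source}
    frontier = [source]
    link_stress = defaultdict(int)      # messages carried per undirected link
    rounds = 0
    while frontier:
        rounds += 1
        nxt = []
        for u in frontier:
            targets = rng.sample(sorted(neigh[u]), min(fanout, len(neigh[u])))
            for v in targets:
                link_stress[tuple(sorted((u, v)))] += 1
                if v not in informed:
                    informed.add(v)
                    nxt.append(v)
        frontier = nxt
    return informed, rounds, link_stress

nodes = 200
informed, rounds, stress = push_gossip(random_graph(nodes))
print(len(informed), "of", nodes, "nodes reached in", rounds, "rounds")
print("max / mean link stress:", max(stress.values()), sum(stress.values()) / len(stress))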
Quantifying the ozone and ultraviolet benefits already achieved by the Montreal Protocol
Chipperfield, M. P.; Dhomse, S. S.; Feng, W.; McKenzie, R. L.; Velders, G.J.M.; Pyle, J. A.
2015-01-01
Chlorine- and bromine-containing ozone-depleting substances (ODSs) are controlled by the 1987 Montreal Protocol. In consequence, atmospheric equivalent chlorine peaked in 1993 and has been declining slowly since then. Consistent with this, models project a gradual increase in stratospheric ozone with the Antarctic ozone hole expected to disappear by ∼2050. However, we show that by 2013 the Montreal Protocol had already achieved significant benefits for the ozone layer. Using a 3D atmospheric chemistry transport model, we demonstrate that much larger ozone depletion than observed has been avoided by the protocol, with beneficial impacts on surface ultraviolet. A deep Arctic ozone hole, with column values <120 DU, would have occurred given meteorological conditions in 2011. The Antarctic ozone hole would have grown in size by 40% by 2013, with enhanced loss at subpolar latitudes. The decline over northern hemisphere middle latitudes would have continued, more than doubling to ∼15% by 2013. PMID:26011106
The World Already Avoided: Quantifying the Ozone Benefits Achieved by the Montreal Protocol
NASA Astrophysics Data System (ADS)
Chipperfield, Martyn; Dhomse, Sandip; Feng, Wuhu; McKenzie, Richard; Velders, Guus; Pyle, John
2015-04-01
Chlorine and bromine-containing ozone-depleting substances (ODSs) are controlled by the 1987 Montreal Protocol. In consequence, atmospheric equivalent chlorine peaked in 1993 and has been declining slowly since then. Consistent with this, models project a gradual increase in stratospheric ozone with the Antarctic Ozone Hole expected to disappear by ~2050. However, we show that by 2014 the Montreal Protocol has already achieved significant benefits for the ozone layer. Using an off-line 3-D atmospheric chemistry model, we demonstrate that much larger ozone depletion than observed has been avoided by the protocol, with benefits for surface UV and climate. A deep Arctic Ozone Hole, with column values <120 DU, would have occurred given the meteorological conditions in 2011. The Antarctic Ozone Hole would have grown in size by 40% by 2013, with enhanced loss at subpolar latitudes. The ozone decline over northern hemisphere middle latitudes would have continued, more than doubling to ~15% by 2013.
Economics of "essential use exemptions" for metered-dose inhalers under the Montreal Protocol.
DeCanio, Stephen J; Norman, Catherine S
2007-10-01
The Montreal Protocol on Substances that Deplete the Ozone Layer has led to rapid reductions in the use of ozone-depleting substances worldwide. However, the Protocol provides for "essential use exemptions" (EUEs) if there are no "technically and economically feasible" alternatives. An application that might qualify as an "essential use" is CFC-powered medical metered-dose inhalers (MDIs) for the treatment of asthma and chronic obstructive pulmonary disease (COPD), and the US and other nations have applied for exemptions in this case. One concern is that exemptions are necessary to ensure access to medications for low-income uninsureds. We examine the consequences of granting or withholding such exemptions, and conclude that government policies and private-sector programs are available that make it economically feasible to phase out chlorofluorocarbons (CFCs) in this application, thereby furthering the global public health objectives of the Montreal Protocol without compromising the treatment of patients who currently receive medication by means of MDIs.
The measures needed for the protection of the Earth's ozone layer are decided regularly by the Parties to the Montreal Protocol. A section of this progress report focuses on the interactive effects of climate change and ozone depletion on biogeochemical cycles.
Cross-layer model design in wireless ad hoc networks for the Internet of Things.
Yang, Xin; Wang, Ling; Xie, Jian; Zhang, Zhaolin
2018-01-01
Wireless ad hoc networks can experience extreme fluctuations in transmission traffic in the Internet of Things, which is widely used today. Currently, the most crucial issues requiring attention for wireless ad hoc networks are making the best use of low traffic periods, reducing congestion during high traffic periods, and improving transmission performance. To solve these problems, the present paper proposes a novel cross-layer transmission model based on decentralized coded caching in the physical layer and a content division multiplexing scheme in the media access control layer. Simulation results demonstrate that the proposed model effectively addresses these issues by substantially increasing the throughput and successful transmission rate compared to existing protocols without a negative influence on delay, particularly for large scale networks under conditions of highly contrasting high and low traffic periods.
Cross-layer model design in wireless ad hoc networks for the Internet of Things
Wang, Ling; Xie, Jian; Zhang, Zhaolin
2018-01-01
Wireless ad hoc networks can experience extreme fluctuations in transmission traffic in the Internet of Things, which is widely used today. Currently, the most crucial issues requiring attention for wireless ad hoc networks are making the best use of low traffic periods, reducing congestion during high traffic periods, and improving transmission performance. To solve these problems, the present paper proposes a novel cross-layer transmission model based on decentralized coded caching in the physical layer and a content division multiplexing scheme in the media access control layer. Simulation results demonstrate that the proposed model effectively addresses these issues by substantially increasing the throughput and successful transmission rate compared to existing protocols without a negative influence on delay, particularly for large scale networks under conditions of highly contrasting high and low traffic periods. PMID:29734355
Detecting recovery of the stratospheric ozone layer.
Chipperfield, Martyn P; Bekki, Slimane; Dhomse, Sandip; Harris, Neil R P; Hassler, Birgit; Hossaini, Ryan; Steinbrecht, Wolfgang; Thiéblemont, Rémi; Weber, Mark
2017-09-13
As a result of the 1987 Montreal Protocol and its amendments, the atmospheric loading of anthropogenic ozone-depleting substances is decreasing. Accordingly, the stratospheric ozone layer is expected to recover. However, short data records and atmospheric variability confound the search for early signs of recovery, and climate change is masking ozone recovery from ozone-depleting substances in some regions and will increasingly affect the extent of recovery. Here we discuss the nature and timescales of ozone recovery, and explore the extent to which it can be currently detected in different atmospheric regions.
Detecting recovery of the stratospheric ozone layer
NASA Astrophysics Data System (ADS)
Chipperfield, Martyn P.; Bekki, Slimane; Dhomse, Sandip; Harris, Neil R. P.; Hassler, Birgit; Hossaini, Ryan; Steinbrecht, Wolfgang; Thiéblemont, Rémi; Weber, Mark
2017-09-01
As a result of the 1987 Montreal Protocol and its amendments, the atmospheric loading of anthropogenic ozone-depleting substances is decreasing. Accordingly, the stratospheric ozone layer is expected to recover. However, short data records and atmospheric variability confound the search for early signs of recovery, and climate change is masking ozone recovery from ozone-depleting substances in some regions and will increasingly affect the extent of recovery. Here we discuss the nature and timescales of ozone recovery, and explore the extent to which it can be currently detected in different atmospheric regions.
PRADO, Maíra; SIMÃO, Renata Antoun; GOMES, Brenda Paula Figueiredo de Almeida
2014-01-01
The development and maintenance of the sealing of the root canal system is the key to the success of root canal treatment. The resin-based adhesive material has the potential to reduce the microleakage of the root canal because of its adhesive properties and penetration into dentinal walls. Moreover, the irrigation protocols may have an influence on the adhesiveness of resin-based sealers to root dentin. Objective: The objective of the present study was to evaluate the effect of different irrigant protocols on coronal bacterial microleakage of gutta-percha/AH Plus and Resilon/Real Seal Self-etch systems. Material and Methods: One hundred ninety pre-molars were used. The teeth were divided into 18 experimental groups according to the irrigation protocols and filling materials used. The protocols used were: distilled water; sodium hypochlorite (NaOCl)+EDTA; NaOCl+H3PO4; NaOCl+EDTA+chlorhexidine (CHX); NaOCl+H3PO4+CHX; CHX+EDTA; CHX+H3PO4; CHX+EDTA+CHX and CHX+H3PO4+CHX. Gutta-percha/AH Plus or Resilon/Real Seal SE were used as root-filling materials. The coronal microleakage was evaluated for 90 days against Enterococcus faecalis. Data were statistically analyzed using the Kaplan-Meier survival test, Kruskal-Wallis and Mann-Whitney tests. Results: No significant difference was verified in the groups using chlorhexidine or sodium hypochlorite during the chemo-mechanical preparation followed by EDTA or phosphoric acid for smear layer removal. The same results were found for the filling materials. However, the statistical analyses revealed that a final flush with 2% chlorhexidine significantly reduced the coronal microleakage. Conclusion: A final flush with 2% chlorhexidine after smear layer removal reduces coronal microleakage of teeth filled with gutta-percha/AH Plus or Resilon/Real Seal SE. PMID:25025557
Nagayama, Yasunori; Nakaura, Takeshi; Oda, Seitaro; Utsunomiya, Daisuke; Funama, Yoshinori; Iyama, Yuji; Taguchi, Narumi; Namimoto, Tomohiro; Yuki, Hideaki; Kidoh, Masafumi; Hirata, Kenichiro; Nakagawa, Masataka; Yamashita, Yasuyuki
2018-04-01
To evaluate the image quality and lesion conspicuity of virtual-monochromatic-imaging (VMI) with dual-layer DECT (DL-DECT) for reduced-iodine-load multiphasic-hepatic CT. Forty-five adults with renal dysfunction who had undergone hepatic DL-DECT with 300-mgI/kg were included. VMI (40-70-keV, DL-DECT-VMI) was generated at each enhancement phase. As controls, 45 matched patients undergoing the standard 120-kVp protocol (120-kVp, 600-mgI/kg, and iterative reconstruction) were included. We compared the size-specific dose estimate (SSDE), image noise, CT attenuation, and contrast-to-noise ratio (CNR) between protocols. Two radiologists scored the image quality and lesion conspicuity. SSDE was significantly lower in the DL-DECT group (p < 0.01). Image noise of DL-DECT-VMI was almost constant at each keV (differences of ≤15%) and equivalent to or lower than that of 120-kVp. As the energy decreased, CT attenuation and CNR gradually increased; the values of 55-60 keV images were almost equivalent to those of standard 120-kVp. The highest scores for overall quality and lesion conspicuity were assigned at 40-keV followed by 45 to 55-keV, all of which were similar to or better than those of 120-kVp. For multiphasic-hepatic CT with 50% iodine-load, DL-DECT-VMI at 40- to 55-keV provides equivalent or better image quality and lesion conspicuity without increasing radiation dose compared with the standard 120-kVp protocol. • 40-55-keV yields optimal image quality for half-iodine-load multiphasic-hepatic CT with DL-DECT. • DL-DECT protocol decreases radiation exposure compared with 120-kVp scans with iterative reconstruction. • 40-keV images maximise conspicuity of hepatocellular carcinoma especially at hepatic-arterial phase.
Del-Valle-Soto, Carolina; Mex-Perera, Carlos; Orozco-Lugo, Aldo; Lara, Mauricio; Galván-Tejada, Giselle M; Olmedo, Oscar
2014-12-02
Wireless Sensor Networks deliver valuable information for long periods, so it is desirable to have optimum performance, reduced delays, low overhead, and reliable delivery of information. In this work, proposed metrics that influence energy consumption are used for a performance comparison among our proposed routing protocol, called Multi-Parent Hierarchical (MPH), and the well-known protocols for sensor networks Ad hoc On-Demand Distance Vector (AODV), Dynamic Source Routing (DSR), and Zigbee Tree Routing (ZTR), all of them working with the IEEE 802.15.4 MAC layer. Results show how some communication metrics affect performance, throughput, reliability and energy consumption. It can be concluded that MPH is an efficient protocol since it reaches the best performance against the other three protocols under evaluation, such as a 19.3% reduction of packet retransmissions, a 26.9% decrease of overhead, and a 41.2% improvement on the capacity of the protocol for recovering the topology from failures with respect to the AODV protocol. We implemented and tested MPH in a real network of 99 nodes during ten days and analyzed parameters such as the number of hops, connectivity and delay, in order to validate our simulator and obtain reliable results. Moreover, an energy model of the CC2530 chip is proposed and used for simulations of the four aforementioned protocols, showing that MPH has a 15.9% reduction of energy consumption with respect to AODV, 13.7% versus DSR, and 5% against ZTR.
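To make the kind of energy accounting described above concrete, the sketch below evaluates a simple per-packet energy model built from radio current draw, airtime, retransmissions and control overhead, then plugs in the retransmission and overhead reductions quoted in the abstract. The voltage, current and timing figures are placeholder values, not parameters of the paper's CC2530 model, so the resulting saving is only illustrative.

# Hypothetical current-draw and timing figures; replace with datasheet values.
V = 3.0            # supply voltage, volts
I_TX = 0.029       # transmit current, amps
I_RX = 0.024       # receive current, amps
T_PKT = 0.004      # airtime of one packet, seconds

def energy_per_delivered_packet(retx_rate, overhead_ratio):
    """Energy (joules) to deliver one data packet over one hop.

    retx_rate      -- average retransmissions per packet
    overhead_ratio -- control packets sent per data packet (routing overhead)
    """
    tx_packets = (1 + retx_rate) * (1 + overhead_ratio)
    rx_packets = tx_packets            # assume a symmetric listen cost
    return V * T_PKT * (tx_packets * I_TX + rx_packets * I_RX)

baseline = energy_per_delivered_packet(retx_rate=0.30, overhead_ratio=0.50)
improved = energy_per_delivered_packet(retx_rate=0.30 * (1 - 0.193),
                                       overhead_ratio=0.50 * (1 - 0.269))
print("relative energy saving: %.1f%%" % (100 * (1 - improved / baseline)))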
Smart Objects, Dumb Archives: A User-Centric, Layered Digital Library Framework
NASA Technical Reports Server (NTRS)
Maly, Kurt; Nelson, Michael L.; Zubair, Mohammad
1999-01-01
Currently, there exist a large number of superb digital libraries, all of which are, unfortunately, vertically integrated and all presenting a monolithic interface to their users. Ideally, a user would want to locate resources from a variety of digital libraries dealing only with one interface. A number of approaches to this interoperability issue exist, including: defining a universal protocol for all libraries to adhere to; or developing mechanisms to translate between protocols. The approach we illustrate in this paper is to push down the level of universal protocols to one for digital object communication and for communication with simple archives. This approach creates the opportunity for digital library service providers to create digital libraries tailored to the needs of user communities drawing from available archives and individual publishers who adhere to this standard. We have created a reference implementation based on the hypertext transfer protocol (HTTP) with the protocols being derived from the Dienst protocol. We have created a special class of digital objects called buckets and a number of archives based on a NASA collection and NSF funded projects. Starting from NCSTRL we have developed a set of digital library services called NCSTRL+ and have created digital libraries for researchers, educators and students that can each draw on all the archives and individually created buckets.
NASA Astrophysics Data System (ADS)
Amyay, Omar
A method defined in terms of synthesis and verification steps is presented. The specification of the services and protocols of communication within a multilayered architecture of the Open Systems Interconnection (OSI) type is an essential issue for the design of computer networks. The aim is to obtain an operational specification of the protocol service couple of a given layer. Planning synthesis and verification steps constitute a specification trajectory. The latter is based on the progressive integration of the 'initial data' constraints and verification of the specification originating from each synthesis step, through validity constraints that characterize an admissible solution. Two types of trajectories are proposed according to the style of the initial specification of the service protocol couple: operational type and service supplier viewpoint; knowledge property oriented type and service viewpoint. Synthesis and verification activities were developed and formalized in terms of labeled transition systems, temporal logic and epistemic logic. The originality of the second specification trajectory and the use of the epistemic logic are shown. An 'artificial intelligence' approach enables a conceptual model to be defined for a knowledge base system for implementing the method proposed. It is structured in three levels of representation of the knowledge relating to the domain, the reasoning characterizing synthesis and verification activities and the planning of the steps of a specification trajectory.
Real-time dosimeter employed to evaluate the half-value layer in CT
NASA Astrophysics Data System (ADS)
McKenney, Sarah E.; Seibert, J. Anthony; Burkett, George W.; Gelskey, Dale; Sunde, Paul B.; Newman, James D.; Boone, John M.
2014-01-01
Half-value layer (HVL) measurements on commercial whole body computed tomography (CT) scanners require serial measurements and, in many institutions, the presence of a service engineer. An assembly of aluminum filters (AAF), designed to be used in conjunction with a real-time dosimeter, was developed to provide estimates of the HVL using clinical protocols. Two real-time dose probes, a solid-state and an air ionization chamber, were examined. The AAF consisted of eight rectangular filters of high-purity aluminum (Type 1100), symmetrically positioned to form a cylindrical ‘cage’ around the probe's detection volume. The incident x-ray beam was attenuated by varying thicknesses of aluminum filters as the gantry completed a minimum of one rotation. Measurements employing real-time chambers were conducted both in service mode and with a routine abdomen/pelvis protocol for several combinations of x-ray tube potentials and bow tie filters. These measurements were validated against conventional serial HVL measurements. The average relative difference between the HVL measurements using the two methods was less than 5% when using a 122 mm diameter AAF; relative differences were reduced to 1.1% when the diameter was increased to 505 mm, possibly due to reduced scatter contamination. Use of a real-time dose probe and the AAF allowed for time-efficient measurements of beam quality on a clinical CT scanner using clinical protocols.
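The HVL itself follows from exposure readings taken behind increasing aluminum thicknesses; assuming simple exponential attenuation (ignoring beam hardening), one can fit ln(dose) against thickness and take HVL = ln 2 / mu. The sketch below does exactly that on made-up readings; the numbers are not measurements from the paper.

import math

# (aluminum thickness in mm, relative air-kerma reading) -- illustrative data only
readings = [(0.0, 1.00), (2.0, 0.81), (4.0, 0.66), (6.0, 0.54), (8.0, 0.44)]

# Least-squares fit of ln(D) = ln(D0) - mu * t
xs = [t for t, _ in readings]
ys = [math.log(d) for _, d in readings]
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
mu = -(n * sxy - sx * sy) / (n * sxx - sx * sx)   # effective attenuation coefficient, 1/mm
hvl = math.log(2) / mu
print("mu = %.4f /mm, HVL = %.2f mm Al" % (mu, hvl))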
Protein structure modeling for CASP10 by multiple layers of global optimization.
Joo, Keehyoung; Lee, Juyong; Sim, Sangjin; Lee, Sun Young; Lee, Kiho; Heo, Seungryong; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung
2014-02-01
In the template-based modeling (TBM) category of CASP10 experiment, we introduced a new protocol called protein modeling system (PMS) to generate accurate protein structures in terms of side-chains as well as backbone trace. In the new protocol, a global optimization algorithm, called conformational space annealing (CSA), is applied to the three layers of TBM procedure: multiple sequence-structure alignment, 3D chain building, and side-chain re-modeling. For 3D chain building, we developed a new energy function which includes new distance restraint terms of Lorentzian type (derived from multiple templates), and new energy terms that combine (physical) energy terms such as dynamic fragment assembly (DFA) energy, DFIRE statistical potential energy, hydrogen bonding term, etc. These physical energy terms are expected to guide the structure modeling especially for loop regions where no template structures are available. In addition, we developed a new quality assessment method based on random forest machine learning algorithm to screen templates, multiple alignments, and final models. For TBM targets of CASP10, we find that, due to the combination of three stages of CSA global optimizations and quality assessment, the modeling accuracy of PMS improves at each additional stage of the protocol. It is especially noteworthy that the side-chains of the final PMS models are far more accurate than the models in the intermediate steps. Copyright © 2013 Wiley Periodicals, Inc.
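The 'distance restraint terms of Lorentzian type' can be pictured with a small sketch: a bounded, long-tailed well centred on a template-derived distance, summed over restraints contributed by multiple templates, so that badly violated restraints cannot dominate the energy. The functional form, width and weight below are assumptions chosen to match that description, not the exact PMS energy term.

def lorentzian_restraint(d, d0, width=1.0, weight=1.0):
    """Bounded, long-tailed restraint energy: minimum at d == d0,
    approaching 0 for large violations so outlier restraints cannot dominate."""
    x = (d - d0) / width
    return -weight / (1.0 + x * x)

def restraint_energy(distances, template_restraints):
    """Sum Lorentzian terms over residue pairs restrained by one or more templates.

    distances           -- dict {(i, j): current model distance}
    template_restraints -- dict {(i, j): list of template-derived distances}
    """
    total = 0.0
    for pair, d0_list in template_restraints.items():
        d = distances.get(pair)
        if d is None:
            continue
        for d0 in d0_list:
            total += lorentzian_restraint(d, d0)
    return total

# Toy example: one CA-CA pair restrained by two templates.
print(restraint_energy({(5, 42): 6.3}, {(5, 42): [6.1, 7.0]}))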
Safeguarding Digital Library Contents: Charging for Online Content.
ERIC Educational Resources Information Center
Herzberg, Amir
1998-01-01
Investigates the need for mechanisms for charging by digital libraries and other providers of online content, in particular for micropayments, i.e., charging for small amounts. The SSL (Secure Socket Layer) and SET (Secure Electronic Transactions) protocols for charge card payments and the MiniPay micropayment mechanism for charging small amounts…
Development of a Real-Time Intelligent Network Environment.
ERIC Educational Resources Information Center
Gordonov, Anatoliy; Kress, Michael; Klibaner, Roberta
This paper presents a model of an intelligent computer network that provides real-time evaluation of students' performance by incorporating intelligence into the application layer protocol. Specially designed drills allow students to independently solve a number of problems based on current lecture material; students are switched to the most…
Network Computing for Distributed Underwater Acoustic Sensors
Barbeau, M.; Kranakis, E.
2014-03-31
The multidriver: A reliable multicast service using the Xpress Transfer Protocol
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Fenton, John C.; Weaver, Alfred C.
1990-01-01
A reliable multicast facility extends traditional point-to-point virtual circuit reliability to one-to-many communication. Such services can provide more efficient use of network resources, a powerful distributed name binding capability, and reduced latency in multidestination message delivery. These benefits will be especially valuable in real-time environments where reliable multicast can enable new applications and increase the availability and the reliability of data and services. We present a unique multicast service that exploits features in the next-generation, real-time transfer layer protocol, the Xpress Transfer Protocol (XTP). In its reliable mode, the service offers error, flow, and rate-controlled multidestination delivery of arbitrary-sized messages, with provision for the coordination of reliable reverse channels. Performance measurements on a single-segment Proteon ProNET-4 4 Mbps 802.5 token ring with heterogeneous nodes are discussed.
The Open System Interconnection as a building block in a health sciences information network.
Boss, R W
1985-01-01
The interconnection of integrated health sciences library systems with other health sciences computer systems to achieve information networks will require either custom linkages among specific devices or the adoption of standards that all systems support. The most appropriate standards appear to be those being developed under the Open System Interconnection (OSI) reference model, which specifies a set of rules and functions that computers must follow to exchange information. The protocols have been modularized into seven different layers. The lowest three layers are generally available as off-the-shelf interfacing products. The higher layers require special development for particular applications. This paper describes the OSI, its application in health sciences networks, and specific tasks that remain to be undertaken. PMID:4052672
Fault Tolerant Frequent Pattern Mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shohdy, Sameh; Vishnu, Abhinav; Agrawal, Gagan
The FP-Growth algorithm is a Frequent Pattern Mining (FPM) algorithm that has been extensively used to study correlations and patterns in large scale datasets. While several researchers have designed distributed memory FP-Growth algorithms, it is pivotal to consider fault tolerant FP-Growth, which can address the increasing fault rates in large scale systems. In this work, we propose a novel parallel, algorithm-level fault-tolerant FP-Growth algorithm. We leverage algorithmic properties and advanced MPI features to guarantee an O(1) space complexity, achieved by using the dataset memory space itself for checkpointing. We also propose a recovery algorithm that can use in-memory and disk-based checkpointing, though in many cases the recovery can be completed without any disk access and without incurring memory overhead for checkpointing. We evaluate our FT algorithm on a large scale InfiniBand cluster with several large datasets using up to 2K cores. Our evaluation demonstrates excellent efficiency for checkpointing and recovery in comparison to the disk-based approach. We have also observed 20x average speed-up in comparison to Spark, establishing that a well designed algorithm can easily outperform a solution based on a general fault-tolerant programming model.
Unraveling Network-induced Memory Contention: Deeper Insights with Machine Learning
Groves, Taylor Liles; Grant, Ryan; Gonzales, Aaron; ...
2017-11-21
Remote Direct Memory Access (RDMA) is expected to be an integral communication mechanism for future exascale systems, enabling asynchronous data transfers so that applications may fully utilize CPU resources while simultaneously sharing data amongst remote nodes. We examine Network-induced Memory Contention (NiMC) on InfiniBand networks. We expose the interactions between RDMA, main-memory and cache when applications and out-of-band services compete for memory resources. We then explore NiMC's resulting impact on application-level performance. For a range of hardware technologies and HPC workloads, we quantify NiMC and show that NiMC's impact grows with scale, resulting in up to 3X performance degradation at scales as small as 8K processes, even in applications that previously have been shown to be performance resilient in the presence of noise. In addition, this work examines the problem of predicting NiMC's impact on applications by leveraging machine learning and easily accessible performance counters. This approach provides additional insights about the root cause of NiMC and facilitates dynamic selection of potential solutions. Finally, we evaluated three potential techniques to reduce NiMC's impact, namely hardware offloading, core reservation and network throttling.
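A hedged sketch of the prediction step described above: train a regressor on easily accessible performance counters to estimate the slowdown caused by network-induced memory contention. The counter names, the synthetic data and the choice of scikit-learn's RandomForestRegressor are illustrative assumptions, not the authors' feature set or model.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical per-node counters gathered while an out-of-band RDMA service runs.
X = np.column_stack([
    rng.uniform(0, 1, n),     # memory-bandwidth utilisation
    rng.uniform(0, 1, n),     # last-level-cache miss rate
    rng.uniform(0, 1, n),     # incoming RDMA bytes (normalised)
    rng.integers(1, 32, n),   # processes per node
])

# Synthetic "ground truth" slowdown: contention grows when both the application
# and the RDMA traffic press on memory bandwidth at the same time.
y = 1.0 + 2.0 * X[:, 0] * X[:, 2] + 0.5 * X[:, 1] + rng.normal(0, 0.05, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))
print("feature importances:", np.round(model.feature_importances_, 3))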
Long-range interactions and parallel scalability in molecular simulations
NASA Astrophysics Data System (ADS)
Patra, Michael; Hyvönen, Marja T.; Falck, Emma; Sabouri-Ghomi, Mohsen; Vattulainen, Ilpo; Karttunen, Mikko
2007-01-01
Typical biomolecular systems such as cellular membranes, DNA, and protein complexes are highly charged. Thus, efficient and accurate treatment of electrostatic interactions is of great importance in computational modeling of such systems. We have employed the GROMACS simulation package to perform extensive benchmarking of different commonly used electrostatic schemes on a range of computer architectures (Pentium-4, IBM Power 4, and Apple/IBM G5) for single processor and parallel performance up to 8 nodes—we have also tested the scalability on four different networks, namely Infiniband, GigaBit Ethernet, Fast Ethernet, and nearly uniform memory architecture, i.e. communication between CPUs is possible by directly reading from or writing to other CPUs' local memory. It turns out that the particle-mesh Ewald method (PME) performs surprisingly well and offers competitive performance unless parallel runs on PC hardware with older network infrastructure are needed. Lipid bilayers of sizes 128, 512 and 2048 lipid molecules were used as the test systems representing typical cases encountered in biomolecular simulations. Our results enable an accurate prediction of computational speed on most current computing systems, both for serial and parallel runs. These results should be helpful in, for example, choosing the most suitable configuration for a small departmental computer cluster.
Data Acquisition Backbone Core DABC release v1.0
NASA Astrophysics Data System (ADS)
Adamczewski-Musch, J.; Essel, H. G.; Kurz, N.; Linev, S.
2010-04-01
The Data Acquisition Backbone Core (DABC) is a general purpose software framework designed for the implementation of a wide-range of data acquisition systems - from various small detector test beds to high performance systems. DABC consists of a compact data-flow kernel and a number of plug-ins for various functional components like data inputs, device drivers, user functional modules and applications. DABC provides configurable components for implementing event building over fast networks like InfiniBand or Gigabit Ethernet. A generic Java GUI provides the dynamic control and visualization of control parameters and commands, provided by DIM servers. A first set of application plug-ins has been implemented to use DABC as event builder for the front-end components of the GSI standard DAQ system MBS (Multi Branch System). Another application covers the connection to DAQ readout chains from detector front-end boards (N-XYTER) linked to read-out controller boards (ROC) over UDP into DABC for event building, archiving and data serving. This was applied for data taking in the September 2008 test beamtime for the CBM experiment at GSI. DABC version 1.0 is released and available from the website.
NASA Astrophysics Data System (ADS)
Schaaf, Kjeld; Overeem, Ruud
2004-06-01
Moore’s law is best exploited by using consumer market hardware. In particular, the gaming industry pushes the limit of processor performance, thus reducing the cost per raw flop even faster than Moore’s law predicts. Next to the cost benefits of Commercial Off-The-Shelf (COTS) processing resources, there is a rapidly growing experience pool in cluster based processing. Typical Beowulf clusters of PCs are well known. Multiple examples exist of specialised cluster computers based on more advanced server nodes or even gaming stations. All these cluster machines build upon the same knowledge about cluster software management, scheduling, middleware libraries and mathematical libraries. In this study, we have integrated COTS processing resources and cluster nodes into a very high performance processing platform suitable for streaming data applications, in particular to implement a correlator. The required processing power for the correlator in modern radio telescopes is in the range of the larger supercomputers, which motivates the usage of supercomputer technology. Raw processing power is provided by graphical processors and is combined with an InfiniBand host bus adapter with integrated data stream handling logic. With this processing platform a scalable correlator can be built with continuously growing processing power at consumer market prices.
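The computational core of such a correlator is a streaming multiply-accumulate over all antenna pairs after channelisation (an 'FX' correlator). The NumPy sketch below shows that kernel for a handful of antennas; it is a didactic stand-in for the GPU/InfiniBand implementation discussed here, and the array sizes and channel count are arbitrary.

import numpy as np

def fx_correlate(voltages, n_chan=64):
    """voltages: complex array of shape (n_ant, n_samples).
    Returns visibilities of shape (n_ant, n_ant, n_chan), averaged over time."""
    n_ant, n_samp = voltages.shape
    n_spec = n_samp // n_chan
    # F stage: split each stream into spectra of n_chan channels.
    spectra = np.fft.fft(
        voltages[:, :n_spec * n_chan].reshape(n_ant, n_spec, n_chan), axis=2)
    # X stage: cross-multiply every antenna pair and integrate over all spectra.
    vis = np.einsum('atc,btc->abc', spectra, np.conj(spectra)) / n_spec
    return vis

rng = np.random.default_rng(1)
ants = rng.normal(size=(4, 8192)) + 1j * rng.normal(size=(4, 8192))
vis = fx_correlate(ants)
print(vis.shape)          # (4, 4, 64): all baselines, all channels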
Unraveling Network-induced Memory Contention: Deeper Insights with Machine Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groves, Taylor Liles; Grant, Ryan; Gonzales, Aaron
Remote Direct Memory Access (RDMA) is expected to be an integral communication mechanism for future exascale systems, enabling asynchronous data transfers so that applications may fully utilize CPU resources while simultaneously sharing data amongst remote nodes. We examine Network-induced Memory Contention (NiMC) on InfiniBand networks. We expose the interactions between RDMA, main-memory and cache when applications and out-of-band services compete for memory resources. We then explore NiMC's resulting impact on application-level performance. For a range of hardware technologies and HPC workloads, we quantify NiMC and show that NiMC's impact grows with scale, resulting in up to 3X performance degradation at scales as small as 8K processes, even in applications that previously have been shown to be performance resilient in the presence of noise. In addition, this work examines the problem of predicting NiMC's impact on applications by leveraging machine learning and easily accessible performance counters. This approach provides additional insights about the root cause of NiMC and facilitates dynamic selection of potential solutions. Finally, we evaluated three potential techniques to reduce NiMC's impact, namely hardware offloading, core reservation and network throttling.
Announcing Supercomputer Summit
Wells, Jack; Bland, Buddy; Nichols, Jeff; Hack, Jim; Foertter, Fernanda; Hagen, Gaute; Maier, Thomas; Ashfaq, Moetasim; Messer, Bronson; Parete-Koon, Suzanne
2018-01-16
Summit is the next leap in leadership-class computing systems for open science. With Summit we will be able to address, with greater complexity and higher fidelity, questions concerning who we are, our place on earth, and in our universe. Summit will deliver more than five times the computational performance of Titan's 18,688 nodes, using only approximately 3,400 nodes when it arrives in 2017. Like Titan, Summit will have a hybrid architecture, and each node will contain multiple IBM POWER9 CPUs and NVIDIA Volta GPUs all connected together with NVIDIA's high-speed NVLink. Each node will have over half a terabyte of coherent memory (high bandwidth memory + DDR4) addressable by all CPUs and GPUs plus 800GB of non-volatile RAM that can be used as a burst buffer or as extended memory. To provide a high rate of I/O throughput, the nodes will be connected in a non-blocking fat-tree using a dual-rail Mellanox EDR InfiniBand interconnect. Upon completion, Summit will allow researchers in all fields of science unprecedented access to solving some of the world's most pressing challenges.
A Programming Model Performance Study Using the NAS Parallel Benchmarks
Shan, Hongzhang; Blagojević, Filip; Min, Seung-Jai; ...
2010-01-01
Harnessing the power of multicore platforms is challenging due to the additional levels of parallelism present. In this paper we use the NAS Parallel Benchmarks to study three programming models, MPI, OpenMP and PGAS to understand their performance and memory usage characteristics on current multicore architectures. To understand these characteristics we use the Integrated Performance Monitoring tool and other ways to measure communication versus computation time, as well as the fraction of the run time spent in OpenMP. The benchmarks are run on two different Cray XT5 systems and an Infiniband cluster. Our results show that in general the three programming models exhibit very similar performance characteristics. In a few cases, OpenMP is significantly faster because it explicitly avoids communication. For these particular cases, we were able to re-write the UPC versions and achieve equal performance to OpenMP. Using OpenMP was also the most advantageous in terms of memory usage. Also we compare performance differences between the two Cray systems, which have quad-core and hex-core processors. We show that at scale the performance is almost always slower on the hex-core system because of increased contention for network resources.
NASA Astrophysics Data System (ADS)
Hitchcock, Adam P.; Berejnov, Viatcheslav; Lee, Vincent; West, Marcia; Colbow, Vesna; Dutta, Monica; Wessel, Silvia
2014-11-01
Scanning Transmission X-ray Microscopy (STXM) at the C 1s, F 1s and S 2p edges has been used to investigate degradation of proton exchange membrane fuel cell (PEM-FC) membrane electrode assemblies (MEA) subjected to accelerated testing protocols. Quantitative chemical maps of the catalyst, carbon support and ionomer in the cathode layer are reported for beginning-of-test (BOT), and end-of-test (EOT) samples for two types of carbon support, low surface area carbon (LSAC) and medium surface area carbon (MSAC), that were exposed to accelerated stress testing with upper potentials (UPL) of 1.0, 1.2, and 1.3 V. The results are compared in order to characterize catalyst layer degradation in terms of the amounts and spatial distributions of these species. Pt agglomeration, Pt migration and corrosion of the carbon support are all visualized, and contribute to differing degrees in these samples. It is found that there is formation of a distinct Pt-in-membrane (PTIM) band for all EOT samples. The cathode thickness shrinks due to loss of the carbon support for all MSAC samples that were exposed to the different upper potentials, but only for the most aggressive testing protocol for the LSAC support. The amount of ionomer per unit volume significantly increases indicating it is being concentrated in the cathode as the carbon corrosion takes place. S 2p spectra and mapping of the cathode catalyst layer indicates there are still sulfonate groups present, even in the most damaged material.
Stacking sequence and interlayer coupling in few-layer graphene revealed by in situ imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhu-Jun; Dong, Jichen; Cui, Yi
In the transition from graphene to graphite, the addition of each individual graphene layer modifies the electronic structure and produces a different material with unique properties. Controlled growth of few-layer graphene is therefore of fundamental interest and will provide access to materials with engineered electronic structure. Here we combine isothermal growth and etching experiments with in situ scanning electron microscopy to reveal the stacking sequence and interlayer coupling strength in few-layer graphene. The observed layer-dependent etching rates reveal the relative strength of the graphene-graphene and graphene-substrate interaction and the resulting mode of adlayer growth. Scanning tunnelling microscopy and density functional theory calculations confirm a strong coupling between graphene edge atoms and platinum. Simulated etching confirms that etching can be viewed as reversed growth. This work demonstrates that real-time imaging under controlled atmosphere is a powerful method for designing synthesis protocols for sp2 carbon nanostructures in between graphene and graphite.
Stacking sequence and interlayer coupling in few-layer graphene revealed by in situ imaging
Wang, Zhu-Jun; Dong, Jichen; Cui, Yi; ...
2016-10-19
In the transition from graphene to graphite, the addition of each individual graphene layer modifies the electronic structure and produces a different material with unique properties. Controlled growth of few-layer graphene is therefore of fundamental interest and will provide access to materials with engineered electronic structure. Here we combine isothermal growth and etching experiments with in situ scanning electron microscopy to reveal the stacking sequence and interlayer coupling strength in few-layer graphene. The observed layer-dependent etching rates reveal the relative strength of the graphene-graphene and graphene-substrate interaction and the resulting mode of adlayer growth. Scanning tunnelling microscopy and density functional theory calculations confirm a strong coupling between graphene edge atoms and platinum. Simulated etching confirms that etching can be viewed as reversed growth. This work demonstrates that real-time imaging under controlled atmosphere is a powerful method for designing synthesis protocols for sp2 carbon nanostructures in between graphene and graphite.
Stacking sequence and interlayer coupling in few-layer graphene revealed by in situ imaging
Wang, Zhu-Jun; Dong, Jichen; Cui, Yi; Eres, Gyula; Timpe, Olaf; Fu, Qiang; Ding, Feng; Schloegl, R.; Willinger, Marc-Georg
2016-01-01
In the transition from graphene to graphite, the addition of each individual graphene layer modifies the electronic structure and produces a different material with unique properties. Controlled growth of few-layer graphene is therefore of fundamental interest and will provide access to materials with engineered electronic structure. Here we combine isothermal growth and etching experiments with in situ scanning electron microscopy to reveal the stacking sequence and interlayer coupling strength in few-layer graphene. The observed layer-dependent etching rates reveal the relative strength of the graphene–graphene and graphene–substrate interaction and the resulting mode of adlayer growth. Scanning tunnelling microscopy and density functional theory calculations confirm a strong coupling between graphene edge atoms and platinum. Simulated etching confirms that etching can be viewed as reversed growth. This work demonstrates that real-time imaging under controlled atmosphere is a powerful method for designing synthesis protocols for sp2 carbon nanostructures in between graphene and graphite. PMID:27759024
Layer uniformity in glucose oxidase immobilization on SiO2 surfaces
NASA Astrophysics Data System (ADS)
Libertino, Sebania; Scandurra, Antonino; Aiello, Venera; Giannazzo, Filippo; Sinatra, Fulvia; Renis, Marcella; Fichera, Manuela
2007-09-01
The goal of this work was the characterization, step by step, of the enzyme glucose oxidase (GOx) immobilization on silicon oxide surfaces, mainly by means of X-ray photoelectron spectroscopy (XPS). The immobilization protocol consists of four steps: oxide activation, silanization, linker molecule deposition and GOx immobilization. The linker molecule, glutaraldehyde (GA) in this study, must be able to form a uniform layer on the sample surface in order to maximize the sites available for enzyme bonding and achieve the best enzyme deposition. Using a thin SiO2 layer grown on Si wafers and following the XPS Si2p signal of the Si substrate during the immobilization steps, we demonstrated both the glutaraldehyde layer uniformity and the possibility to use XPS to monitor thin layer uniformity. In fact, the XPS substrate signal, not shielded by the oxide, is suppressed only when a uniform layer is deposited. The correct immobilization of the enzyme was monitored using the XPS C1s and N1s signals. Atomic force microscopy (AFM) measurements carried out on the same samples confirmed the results.
Wind Tunnel Experiments to Study Chaparral Crown Fires.
Cobian-Iñiguez, Jeanette; Aminfar, AmirHessam; Chong, Joey; Burke, Gloria; Zuniga, Albertina; Weise, David R; Princevac, Marko
2017-11-14
The present protocol presents a laboratory technique designed to study chaparral crown fire ignition and spread. Experiments were conducted in a low velocity fire wind tunnel where two distinct layers of fuel were constructed to represent surface and crown fuels in chaparral. Chamise, a common chaparral shrub, comprised the live crown layer. The dead fuel surface layer was constructed with excelsior (shredded wood). We developed a methodology to measure mass loss, temperature, and flame height for both fuel layers. Thermocouples placed in each layer estimated temperature. A video camera captured the visible flame. Post-processing of digital imagery yielded flame characteristics including height and flame tilt. A custom crown mass loss instrument developed in-house measured the evolution of the mass of the crown layer during the burn. Mass loss and temperature trends obtained using the technique matched theory and other empirical studies. In this study, we present detailed experimental procedures and information about the instrumentation used. The representative results for the fuel mass loss rate and temperature field within the fuel bed are also included and discussed.
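A minimal sketch of the kind of image post-processing mentioned above: threshold a grayscale video frame to isolate bright flame pixels, then convert the highest flame pixel into a height using a pixel-to-metre scale. The synthetic frame, threshold and scale factor are placeholders; the protocol's actual calibration is not reproduced here.

import numpy as np

def flame_height(frame, threshold=200, metres_per_pixel=0.005, base_row=None):
    """frame: 2D grayscale image (row 0 at the top).
    Returns flame height in metres above base_row (default: bottom row)."""
    if base_row is None:
        base_row = frame.shape[0] - 1
    mask = frame >= threshold               # bright pixels assumed to be flame
    rows = np.where(mask.any(axis=1))[0]
    if rows.size == 0:
        return 0.0
    top_row = rows.min()                    # highest flame pixel in the frame
    return (base_row - top_row) * metres_per_pixel

# Synthetic 480x640 frame with a bright "flame" region for demonstration.
frame = np.zeros((480, 640), dtype=np.uint8)
frame[200:479, 300:340] = 255
print(flame_height(frame), "m")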
Issues in providing a reliable multicast facility
NASA Technical Reports Server (NTRS)
Dempsey, Bert J.; Strayer, W. Timothy; Weaver, Alfred C.
1990-01-01
Issues involved in point-to-multipoint communication are presented and the literature for proposed solutions and approaches surveyed. Particular attention is focused on the ideas and implementations that align with the requirements of the environment of interest. The attributes of multicast receiver groups that might lead to useful classifications, what the functionality of a management scheme should be, and how the group management module can be implemented are examined. The services that multicasting facilities can offer are presented, followed by mechanisms within the communications protocol that implements these services. The metrics of interest when evaluating a reliable multicast facility are identified and applied to four transport layer protocols that incorporate reliable multicast.
The Montreal Protocol treaty and its illuminating history of science-policy decision-making
NASA Astrophysics Data System (ADS)
Grady, C.
2017-12-01
The Montreal Protocol on Substances that Deplete the Ozone Layer, hailed as one of the most effective environmental treaties of all time, has a thirty-year history of science-policy decision-making. The partnership between Parties to the Montreal Protocol and its technical assessment panels serves as a basis for understanding successes and evaluating stumbles of global environmental decision-making. Real-world environmental treaty negotiations can be highly time-sensitive, politically motivated, and resource constrained, thus scientists and policymakers alike are often unable to confront the uncertainties associated with the multitude of choices. The science-policy relationship built within the framework of the Montreal Protocol has helped constrain uncertainty and inform policy decisions but has also highlighted the limitations of the use of scientific understanding in political decision-making. This talk will describe the evolution of the scientist-policymaker relationship over the history of the Montreal Protocol. Examples will illustrate how the Montreal Protocol's technical panels inform decisions of the country governments and will characterize different approaches pursued by different countries with a particular focus on the recently adopted Kigali Amendment. In addition, this talk will take a deeper dive with an analysis of the historic technical panel assessments on estimating financial resources necessary to enable compliance with the Montreal Protocol, compared to the political financial decisions made through the Protocol's Multilateral Fund replenishment negotiation process. Finally, this talk will describe the useful lessons and challenges from these interactions and how they may be applicable in other environmental management frameworks across multiple scales under changing climatic conditions.
Steinbacher, M; Vollmer, M K; Buchmann, B; Reimann, S
2008-03-01
A combination of reconstructed histories, long-term time series and recent quasi-continuous observations of non-CO2 greenhouse gases at the high-Alpine site Jungfraujoch is used to assess their current global radiative forcing budget and the influence of regulations due to the Montreal Protocol on Substances that Deplete the Ozone Layer in terms of climate change. Extrapolated atmospheric greenhouse gases trends from 1989 assuming a business-as-usual scenario, i.e. no Montreal Protocol restriction, are presented and compared to the observations. The largest differences between hypothetical business-as-usual mixing ratios and current atmospheric observations over the last 16 years were found for chlorinated species, in particular methyl chloroform (CH3CCl3) at 167 to 203 ppt and chlorofluorocarbon-12 (CFC-12) at 121 to 254 ppt. These prevented increases were used to estimate the effects of their restrictions on the radiative forcing budget. The net direct effect due to the Montreal Protocol regulations reduces global warming and offsets about 14 to 30% of the positive greenhouse effect related to the major greenhouse gases CO2, CH4, N2O and also SF6, and about 12 to 22% of the hypothetical current radiative forcing increase without Montreal Protocol restrictions. Thus, the Montreal Protocol succeeded not only in reducing the atmospheric chlorine content in the atmosphere but also dampened global warming. Nevertheless, the Montreal Protocol controlled species still add to global warming.
Cross-layer design for intrusion detection and data security in wireless ad hoc sensor networks
NASA Astrophysics Data System (ADS)
Hortos, William S.
2007-09-01
A wireless ad hoc sensor network is a configuration for area surveillance that affords rapid, flexible deployment in arbitrary threat environments. There is no infrastructure support and sensor nodes communicate with each other only when they are in transmission range. The nodes are severely resource-constrained, with limited processing, memory and power capacities and must operate cooperatively to fulfill a common mission in typically unattended modes. In a wireless sensor network (WSN), each sensor at a node can observe locally some underlying physical phenomenon and sends a quantized version of the observation to sink (destination) nodes via wireless links. Since the wireless medium can be easily eavesdropped, links can be compromised by intrusion attacks from nodes that may mount denial-of-service attacks or insert spurious information into routing packets, leading to routing loops, long timeouts, impersonation, and node exhaustion. A cross-layer design based on protocol-layer interactions is proposed for detection and identification of various intrusion attacks on WSN operation. A feature set is formed from selected cross-layer parameters of the WSN protocol to detect and identify security threats due to intrusion attacks. A separate protocol is not constructed from the cross-layer design; instead, security attributes and quantified trust levels at and among nodes established during data exchanges complement customary WSN metrics of energy usage, reliability, route availability, and end-to-end quality-of-service (QoS) provisioning. Statistical pattern recognition algorithms are applied that use feature-set patterns observed during network operations, viewed as security audit logs. These algorithms provide the "best" network global performance in the presence of various intrusion attacks. A set of mobile (software) agents distributed at the nodes implement the algorithms, by moving among the layers involved in the network response at each active node and trust neighborhood, collecting parametric information and executing assigned decision tasks. The communications overhead due to security mechanisms and the latency in network response are thus minimized by reducing the need to move large amounts of audit data through resource-limited nodes and by locating detection/identification programs closer to audit data. If network partitioning occurs due to uncoordinated node exhaustion, data compromise or other effects of the attacks, the mobile agents can continue to operate, thereby increasing fault tolerance in the network response to intrusions. Since the mobile agents behave like an ant colony in securing the WSN, published ant colony optimization (ACO) routines and other evolutionary algorithms are adapted to protect network security, using data at and through nodes to create audit records to detect and respond to denial-of-service attacks. Performance evaluations of algorithms are performed by simulation of a few intrusion attacks, such as black hole, flooding, Sybil and others, to validate the ability of the cross-layer algorithms to enable WSNs to survive the attacks. Results are compared for the different algorithms.
2011-05-01
Unequal error protection is provided through low-rate convolutional codes or prioritized Rate-Compatible Punctured Convolutional (RCPC) codes; the RCPC codes achieve UEP by puncturing off different amounts of coded bits of the parent code.
The dermatopharmacokinetic (DPK) method of dermal tape stripping may prove to be a valuable addition to risk assessment protocols for toxic substances. To examine this possibility, the dermal penetration and absorption characteristics of [14C]-malathion in the Sprague-Dawley...
Quantitative magneto-optical investigation of superconductor/ferromagnet hybrid structures
NASA Astrophysics Data System (ADS)
Shaw, G.; Brisbois, J.; Pinheiro, L. B. G. L.; Müller, J.; Blanco Alvarez, S.; Devillers, T.; Dempsey, N. M.; Scheerder, J. E.; Van de Vondel, J.; Melinte, S.; Vanderbemden, P.; Motta, M.; Ortiz, W. A.; Hasselbach, K.; Kramer, R. B. G.; Silhanek, A. V.
2018-02-01
We present a detailed quantitative magneto-optical imaging study of several superconductor/ferromagnet hybrid structures, including Nb deposited on top of thermomagnetically patterned NdFeB and permalloy/niobium with erasable and tailored magnetic landscapes imprinted in the permalloy layer. The magneto-optical imaging data are complemented with and compared to scanning Hall probe microscopy measurements. Comprehensive protocols have been developed for calibrating, testing, and converting Faraday rotation data to magnetic field maps. Applied to the acquired data, they reveal the comparatively weaker magnetic response of the superconductor from the background of larger fields and field gradients generated by the magnetic layer.
Architectural and engineering issues for building an optical Internet
NASA Astrophysics Data System (ADS)
St. Arnaud, Bill
1998-10-01
Recent developments in high-density Wave Division Multiplexing (WDM) fiber systems allow for the deployment of a dedicated optical Internet network for large-volume backbone pipes that does not require an underlying multi-service SONET/SDH and ATM transport protocol. Some intrinsic characteristics of Internet traffic, such as its self-similar nature, server-bound congestion, and routing and data asymmetry, allow for highly optimized traffic-engineered networks using individual wavelengths. By transmitting Gigabit Ethernet or SONET/SDH frames natively over WDM wavelengths that directly interconnect high-performance routers, the original concept of the Internet as an intrinsically survivable datagram network becomes possible. Traffic engineering, restoral, protection and bandwidth management of the network must now be carried out at the IP layer, so new routing or switching protocols such as MPLS that allow for unidirectional paths with fast restoral and protection at the IP layer become essential for a reliable production network. The deployment of high-density WDM municipal and campus networks also gives carriers and ISPs the flexibility to offer customers an integrated and seamless set of optical Internet services.
Collaborative SDOCT Segmentation and Analysis Software.
Yun, Yeyi; Carass, Aaron; Lang, Andrew; Prince, Jerry L; Antony, Bhavna J
2017-02-01
Spectral domain optical coherence tomography (SDOCT) is routinely used in the management and diagnosis of a variety of ocular diseases. This imaging modality also finds widespread use in research, where quantitative measurements obtained from the images are used to track disease progression. In recent years, the number of available scanners and imaging protocols has grown, and there is a distinct absence of a unified tool that is capable of visualizing, segmenting, and analyzing the data. This is especially noteworthy in longitudinal studies, where data from older scanners and/or protocols may need to be analyzed. Here, we present a graphical user interface (GUI) that allows users to visualize and analyze SDOCT images obtained from two commonly used scanners. The retinal surfaces in the scans can be segmented using a previously described method, and the retinal layer thicknesses can be compared to a normative database. If necessary, the segmented surfaces can also be corrected and the changes applied. The interface also allows users to import and export retinal layer thickness data to an SQL database, thereby allowing for the collation of data from a number of collaborating sites.
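A minimal sketch of the kind of SQL export mentioned above, using SQLite; the table schema, column names, and sample rows are assumptions for illustration, not the tool's actual database layout.

    # Hypothetical schema and data; the paper only states that thicknesses are
    # imported/exported to an SQL database, not the exact table layout.
    import sqlite3

    conn = sqlite3.connect("sdoct_thickness.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS layer_thickness (
                        subject_id TEXT, scan_date TEXT, layer TEXT,
                        mean_thickness_um REAL)""")
    rows = [
        ("S001", "2016-11-02", "RNFL", 31.4),
        ("S001", "2016-11-02", "GCIP", 72.8),
    ]
    conn.executemany("INSERT INTO layer_thickness VALUES (?, ?, ?, ?)", rows)
    conn.commit()

    for row in conn.execute(
            "SELECT layer, mean_thickness_um FROM layer_thickness WHERE subject_id = ?",
            ("S001",)):
        print(row)
    conn.close()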
Using software security analysis to verify the secure socket layer (SSL) protocol
NASA Technical Reports Server (NTRS)
Powell, John D.
2004-01-01
The National Aeronautics and Space Administration (NASA) has tens of thousands of networked computer systems and applications. Software security vulnerabilities present risks such as lost or corrupted data, information theft, and unavailability of critical systems. These risks represent potentially enormous costs to NASA. The NASA Code Q research initiative 'Reducing Software Security Risk (RSSR) Through an Integrated Approach' offers, among its capabilities, formal verification of software security properties through the use of model-based verification (MBV) to address software security risks. [1,2,3,4,5,6] MBV is a formal approach to software assurance that combines analysis of software, via abstract models, with technology, such as model checkers, that provides automation of the mechanical portions of the analysis process. This paper will discuss: the need for formal analysis to assure software systems with respect to software security and why testing alone cannot provide it; the means by which MBV with a Flexible Modeling Framework (FMF) accomplishes the necessary analysis task; and an example of FMF-style MBV in the verification of properties over the Secure Socket Layer (SSL) communication protocol as a demonstration.
Novel concept for the preparation of gas selective nanocomposite membranes
NASA Astrophysics Data System (ADS)
Drobek, M.; Ayral, A.; Motuzas, J.; Charmette, C.; Loubat, C.; Louradour, E.; Dhaler, D.; Julbe, A.
2015-07-01
In this work we report on a novel concept for the preparation of gas-selective composite membranes by a simple and robust synthesis protocol involving a controlled in-situ polycondensation of functional alkoxysilanes within the pores of a mesoporous ceramic matrix. This innovative approach targets the manufacture of thin nanocomposite membranes, allowing a good compromise between permeability, selectivity and thermomechanical strength. Compared to simple infiltration, the synthesis protocol allows a controlled formation of gas separation membranes from size-adjusted functional alkoxysilanes by a chemical reaction within the mesopores of a ceramic support, without any formation of a thick and continuous layer on the support top-surface. Membrane permeability can thus be effectively controlled by the thickness and pore size of the mesoporous layer, and by the oligomers' chain length. The as-prepared composite membranes are expected to possess a good mechanical and thermomechanical resistance and exhibit a thermally activated transport of He and H2 up to 150 °C, resulting in enhanced separation factors for specific gas mixtures, e.g., F(H2/CO) ≈ 10; F(H2/CO2) ≈ 3; F(H2/CH4) ≈ 62.
Laser direct-write for fabrication of three-dimensional paper-based devices.
He, P J W; Katis, I N; Eason, R W; Sones, C L
2016-08-16
We report the use of a laser-based direct-write (LDW) technique that allows the design and fabrication of three-dimensional (3D) structures within a paper substrate that enables implementation of multi-step analytical assays via a 3D protocol. The technique is based on laser-induced photo-polymerisation, and through adjustment of the laser writing parameters such as the laser power and scan speed we can control the depths of hydrophobic barriers that are formed within a substrate which, when carefully designed and integrated, produce 3D flow paths. So far, we have successfully used this depth-variable patterning protocol for stacking and sealing of multi-layer substrates, for assembly of backing layers for two-dimensional (2D) lateral flow devices and finally for fabrication of 3D devices. Since the 3D flow paths can also be formed via a single laser-writing process by controlling the patterning parameters, this is a distinct improvement over other methods that require multiple complicated and repetitive assembly procedures. This technique is therefore suitable for cheap, rapid and large-scale fabrication of 3D paper-based microfluidic devices.
Baqi, Younis; Müller, Christa E
2010-05-01
This protocol describes the efficient, generally applicable Ullmann coupling reaction of bromaminic acid with alkyl- or aryl-amines in phosphate buffer under microwave irradiation using elemental copper as a catalyst. The reaction leads to a number of biologically active compounds. As a prototypical example, the synthesis of a new, potent antagonist of human platelet P2Y(12) receptors, which has potential as an antithrombotic drug, is described in detail. The optimized protocol includes a description of an appropriate reaction setup, thin layer chromatography for monitoring the reaction and a procedure for the isolation, purification and characterization of the anticipated product. The reaction is performed without the use of a glove box and there is no requirement for an inert atmosphere. The reaction typically proceeds within 2-30 min; the protocol, including workup, generally takes 1-3 h to complete.
Experience with Delay-Tolerant Networking from Orbit
NASA Technical Reports Server (NTRS)
Ivancic, W.; Eddy, W. M.; Stewart, D.; Wood, L.; Northam, J.; Jackson, C.
2010-01-01
We describe the first use from space of the Bundle Protocol for Delay-Tolerant Networking (DTN) and lessons learned from experiments made and experience gained with this protocol. The Disaster Monitoring Constellation (DMC), constructed by Surrey Satellite Technology Ltd (SSTL), is a multiple-satellite Earth-imaging low-Earth-orbit sensor network in which recorded image swaths are stored onboard each satellite and later downloaded from the satellite payloads to a ground station. Store-and-forward of images with capture and later download gives each satellite the characteristics of a node in a disruption-tolerant network. Originally developed for the Interplanetary Internet, DTNs are now under investigation in an Internet Research Task Force (IRTF) DTN research group (RG), which has developed a bundle architecture and protocol. The DMC is technically advanced in its adoption of the Internet Protocol (IP) for its imaging payloads and for satellite command and control, based around reuse of commercial networking and link protocols. These satellites' use of IP has enabled earlier experiments with the Cisco router in Low Earth Orbit (CLEO) onboard the constellation's UK-DMC satellite. Earth images are downloaded from the satellites using a custom IP-based high-speed transfer protocol developed by SSTL, Saratoga, which tolerates unusual link environments. Saratoga has been documented in the Internet Engineering Task Force (IETF) for wider adoption. We experiment with the use of DTNRG bundle concepts onboard the UK-DMC satellite, by examining how Saratoga can be used as a DTN convergence layer to carry the DTNRG Bundle Protocol, so that sensor images can be delivered to ground stations and beyond as bundles. Our practical experience with the first successful use of the DTNRG Bundle Protocol in a space environment gives us insights into the design of the Bundle Protocol and enables us to identify issues that must be addressed before wider deployment of the Bundle Protocol. Published in 2010 by John Wiley & Sons, Ltd. KEY WORDS: Internet; UK-DMC; satellite; Delay-Tolerant Networking (DTN); Bundle Protocol
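The layering idea, i.e. handing a bundle to a file-transfer protocol acting as a convergence layer, can be pictured with the sketch below. The byte layout is deliberately simplified and is not the RFC 5050 wire format, and the saratoga_send function is a stand-in assumption, not SSTL's implementation.

    # Simplified illustration of a bundle handed to a convergence-layer adapter.
    from dataclasses import dataclass
    import json, time

    @dataclass
    class Bundle:
        source: str
        destination: str
        creation_time: float
        payload: bytes

        def serialize(self) -> bytes:
            # NOT the RFC 5050 encoding; just a header length + JSON header + payload.
            header = json.dumps({"src": self.source, "dst": self.destination,
                                 "t": self.creation_time}).encode()
            return len(header).to_bytes(4, "big") + header + self.payload

    def saratoga_send(peer: str, blob: bytes) -> None:
        # Placeholder for the convergence layer; a real system would invoke the
        # Saratoga file/stream transfer here.
        print(f"sending {len(blob)} bytes to {peer} via convergence layer")

    bundle = Bundle("dtn://uk-dmc/payload", "dtn://groundstation", time.time(),
                    payload=b"<image swath bytes>")
    saratoga_send("groundstation.example", bundle.serialize())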
LINCS: Livermore's network architecture. [Octopus computing network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fletcher, J.G.
1982-01-01
Octopus, a local computing network that has been evolving at the Lawrence Livermore National Laboratory for over fifteen years, is currently undergoing a major revision. The primary purpose of the revision is to consolidate and redefine the variety of conventions and formats, which have grown up over the years, into a single standard family of protocols, the Livermore Interactive Network Communication Standard (LINCS). This standard treats the entire network as a single distributed operating system such that access to a computing resource is obtained in a single way, whether that resource is local (on the same computer as the accessing process) or remote (on another computer). LINCS encompasses not only communication but also such issues as the relationship of customer to server processes and the structure, naming, and protection of resources. The discussion includes: an overview of the Livermore user community and computing hardware, the functions and structure of each of the seven layers of LINCS protocol, the reasons why we have designed our own protocols and why we are dissatisfied by the directions that current protocol standards are taking.
OpenFlow arbitrated programmable network channels for managing quantum metadata
Dasari, Venkat R.; Humble, Travis S.
2016-10-10
Quantum networks must classically exchange complex metadata between devices in order to carry out protocols such as teleportation, super-dense coding, and quantum key distribution. Demonstrating the integration of these new communication methods with existing network protocols, channels, and data forwarding mechanisms remains an open challenge. Software-defined networking (SDN) offers robust and flexible strategies for managing diverse network devices and uses. We adapt the principles of SDN to the deployment of quantum networks, which are composed from unique devices that operate according to the laws of quantum mechanics. We show how quantum metadata can be managed within a software-defined network using the OpenFlow protocol, and we describe how OpenFlow management of classical optical channels is compatible with emerging quantum communication protocols. We next give an example specification of the metadata needed to manage and control quantum physical layer (QPHY) behavior and we extend the OpenFlow interface to accommodate this quantum metadata. We conclude by discussing near-term experimental efforts that can realize SDN's principles for quantum communication.
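One way to picture carrying QPHY metadata inside an OpenFlow experimenter-style message is sketched below; the metadata fields (channel id, wavelength, fidelity), the experimenter ID, and the packing are invented for illustration and are not the specification given in the paper.

    # Hypothetical packing of quantum-channel metadata into an OpenFlow 1.3
    # experimenter-style message body. Field choices and the experimenter ID are
    # illustrative assumptions only.
    import struct

    OFPT_EXPERIMENTER = 4          # OpenFlow 1.3 message type
    EXPERIMENTER_ID = 0x00ABCDEF   # made-up vendor ID

    def qphy_metadata(channel_id: int, wavelength_nm: float, fidelity: float) -> bytes:
        return struct.pack("!IfI", channel_id, wavelength_nm, int(fidelity * 1000))

    def openflow_message(xid: int, body: bytes) -> bytes:
        version, length = 0x04, 8 + 4 + len(body)
        header = struct.pack("!BBHI", version, OFPT_EXPERIMENTER, length, xid)
        return header + struct.pack("!I", EXPERIMENTER_ID) + body

    msg = openflow_message(xid=1, body=qphy_metadata(7, 1550.12, 0.982))
    print(msg.hex())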
Streetlight Control System Based on Wireless Communication over DALI Protocol
Bellido-Outeiriño, Francisco José; Quiles-Latorre, Francisco Javier; Moreno-Moreno, Carlos Diego; Flores-Arias, José María; Moreno-García, Isabel; Ortiz-López, Manuel
2016-01-01
Public lighting represents a large part of the energy consumption of towns and cities. Efficient management of public lighting can entail significant energy savings. This work presents a smart system for managing public lighting networks based on wireless communication and the DALI protocol. Wireless communication entails significant economic savings, as there is no need to install new wiring, and visual impacts and damage to the facades of historical buildings in city centers are avoided. The DALI protocol uses bidirectional communication with the ballast, which allows its status to be controlled and monitored at all times. The novelty of this work is that it tackles all aspects related to the management of public lighting: a standard protocol, DALI, was selected to control the ballast, a wireless node based on the IEEE 802.15.4 standard with a DALI interface was designed, a network layer that considers the topology of the lighting network has been developed, and lastly, some user-friendly applications for the control and maintenance of the system by the technical crews of the different towns and cities have been developed. PMID:27128923
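For illustration, the sketch below builds a simplified DALI forward frame (16 bits: address byte plus data byte) that sets a ballast's arc power level; the frame-building helper and the radio_send placeholder are assumptions for this sketch and are not the paper's wireless node firmware.

    # Simplified DALI forward frame builder (address byte + data byte).
    # With the selector bit S = 0, the data byte is a direct arc power level.
    # radio_send() is a placeholder for the IEEE 802.15.4 node interface.
    def dali_direct_arc_power(short_address: int, level: int) -> bytes:
        if not 0 <= short_address <= 63 or not 0 <= level <= 254:
            raise ValueError("address 0-63, level 0-254")
        address_byte = (short_address << 1) | 0x00   # 0AAAAAAS with S = 0
        return bytes([address_byte, level])

    def radio_send(frame: bytes) -> None:
        print("802.15.4 payload ->", frame.hex())

    radio_send(dali_direct_arc_power(short_address=5, level=128))  # dim luminaire 5 to ~50%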
Security Analysis of DTN Architecture and Bundle Protocol Specification for Space-Based Networks
NASA Technical Reports Server (NTRS)
Ivancic, William D.
2009-01-01
A Delay-Tolerant Network (DTN) Architecture (Request for Comment, RFC-4838) and Bundle Protocol Specification, RFC-5050, have been proposed for space and terrestrial networks. Additional security specifications have been provided via the Bundle Security Specification (currently a work in progress as an Internet Research Task Force internet-draft) and, for link-layer protocols applicable to Space networks, the Licklider Transport Protocol Security Extensions. This document provides a security analysis of the current DTN RFCs and proposed security related internet drafts with a focus on space-based communication networks, which is a rather restricted subset of DTN networks. Note, the original focus and motivation of DTN work was for the Interplanetary Internet. This document does not address general store-and-forward network overlays, just the current work being done by the Internet Research Task Force (IRTF) and the Consultative Committee for Space Data Systems (CCSDS) Space Internetworking Services Area (SIS) - DTN working group under the DTN and Bundle umbrellas. However, much of the analysis is relevant to general store-and-forward overlays.
Cheah, Pike See; Mohidin, Norhani; Mohd Ali, Bariah; Maung, Myint; Latif, Azian Abdul
2008-01-01
This study illustrates and quantifies the differences in corneal tissue between paraffin-embedded and resin-embedded blocks and thus identifies the better preparation for investigational ophthalmology and optometry via light microscopy. Corneas of two cynomolgus monkeys (Macaca fascicularis) were used in this study. The formalin-fixed cornea was prepared in a paraffin block via the conventional tissue processing protocol (4-day protocol) and stained with haematoxylin and eosin. The glutaraldehyde-fixed cornea was prepared in a resin block via the rapid and modified tissue processing procedure (1.2-day protocol) and stained with toluidine blue. The paraffin-embedded sample exhibits various undesired tissue damage and artifacts, such as a thinner epithelium (due to the substantial volumetric extraction from the tissue), a thicker stroma layer (due to the separation of lamellae and the presence of voids) and a distorted endothelium. In contrast, the resin-embedded corneal tissue demonstrated satisfactory preservation of the corneal ultrastructure. The rapid and modified tissue processing method for preparing the resin-embedded block is particularly beneficial for accelerating microscopic evaluation in ophthalmology and optometry. PMID:22570589
Bulk Data Dissemination in Low Power Sensor Networks: Present and Future Directions
Xu, Zhirong; Hu, Tianlei; Song, Qianshu
2017-01-01
Wireless sensor network-based (WSN-based) applications need an efficient and reliable data dissemination service to facilitate maintenance, management and data distribution tasks. As WSNs nowadays are becoming pervasive and data intensive, bulk data dissemination protocols have been extensively studied recently. This paper provides a comprehensive survey of the state-of-the-art bulk data dissemination protocols. The large number of papers available in the literature propose various techniques to optimize the dissemination protocols. Different from the existing survey works, which separately explore the building blocks of dissemination, our work categorizes the literature according to the optimization purposes: Reliability, Scalability and Transmission/Energy efficiency. By summarizing and reviewing the key insights and techniques, we further discuss the future directions for each category. Our survey helps unveil three key findings for future directions: (1) The recent advances in wireless communications (e.g., studies on cross-technology interference, error estimating codes, constructive interference, capture effect) can potentially be exploited to support further optimization of the reliability and energy efficiency of dissemination protocols; (2) Dissemination in multi-channel, multi-task and opportunistic networks requires more effort to fully exploit the spatial-temporal network resources to enhance data propagation; (3) Since many designs incur changes to MAC layer protocols, the co-existence of dissemination with other network protocols is another problem left to be addressed. PMID:28098830
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Wu-chi; Crawfis, Roger; Weide, Bruce
2002-02-01
In this project, the authors propose the research, development, and distribution of a stackable component-based multimedia streaming protocol middleware service. The goals of this stackable middleware interface include: (1) The middleware service will provide application writers and scientists easy-to-use interfaces that support their visualization needs. (2) The middleware service will support a variety of image compression modes. Currently, many of the network adaptation protocols for video have been developed with DCT-based compression algorithms like H.261, MPEG-1, or MPEG-2 in mind. It is expected that, for advanced scientific computing applications, lossy compression of the image data will be unacceptable in certain instances. The middleware service will support several in-line lossless compression modes for error-sensitive scientific visualization data. (3) The middleware service will support two different types of streaming video modes: one for interactive collaboration of scientists and a stored video streaming mode for viewing prerecorded animations. The use of two different streaming types will allow the quality of the video delivered to the user to be maximized. Most importantly, this service will happen transparently to the user (with some basic controls exported to the user for domain-specific tweaking). In the spirit of layered network protocols (like ISO and TCP/IP), application writers should not have to know a large amount about lower-level network details. Currently, many example video streaming players have their congestion management techniques tightly integrated into the video player itself and are, for the most part, ''one-off'' applications. As more networked multimedia and video applications are written in the future, a larger percentage of these programmers and scientists will most likely know little about the underlying networking layer. By providing a simple, powerful, and semi-transparent middleware layer, the successful completion of this project will help serve as a catalyst to support future video-based applications, particularly those of advanced scientific computing applications.
High fidelity wireless network evaluation for heterogeneous cognitive radio networks
NASA Astrophysics Data System (ADS)
Ding, Lei; Sagduyu, Yalin; Yackoski, Justin; Azimi-Sadjadi, Babak; Li, Jason; Levy, Renato; Melodia, Tammaso
2012-06-01
We present a high fidelity cognitive radio (CR) network emulation platform for wireless system tests, measurements, and validation. This versatile platform provides the configurable functionalities to control and repeat realistic physical channel effects in integrated space, air, and ground networks. We combine the advantages of a scalable simulation environment with reliable hardware performance for high fidelity and repeatable evaluation of heterogeneous CR networks. This approach extends CR design from the device (software-defined-radio) or lower-level protocol (dynamic spectrum access) level to end-to-end cognitive networking, and facilitates low-cost deployment, development, and experimentation of new wireless network protocols and applications on frequency-agile programmable radios. Going beyond the channel emulator paradigm for point-to-point communications, we can support simultaneous transmissions by network-level emulation that allows realistic physical-layer interactions between diverse user classes, including secondary users, primary users, and adversarial jammers in CR networks. In particular, we can replay field tests in a lab environment with real radios perceiving and learning the dynamic environment, thereby adapting for end-to-end goals over distributed spectrum coordination channels that replace the common control channel as a single point of failure. CR networks offer several dimensions of tunable actions including channel, power, rate, and route selection. The proposed network evaluation platform is fully programmable and can reliably evaluate the necessary cross-layer design solutions with configurable optimization space by leveraging the hardware experiments to represent the realistic effects of physical channel, topology, mobility, and jamming on spectrum agility, situational awareness, and network resiliency. We also provide the flexibility to scale up the test environment by introducing virtual radios and establishing seamless signal-level interactions with real radios. This holistic wireless evaluation approach supports a large-scale, heterogeneous, and dynamic CR network architecture and allows developing cross-layer network protocols under high fidelity, repeatable, and scalable wireless test scenarios suitable for heterogeneous space, air, and ground networks.
Supporting Seamless Mobility for P2P Live Streaming
Kim, Eunsam; Kim, Sangjin; Lee, Choonhwa
2014-01-01
With the advent of various mobile devices with powerful networking and computing capabilities, users' demand to enjoy live video streaming services such as IPTV on mobile devices has been increasing rapidly. However, it is challenging to overcome the degradation of service quality due to data loss caused by handover. Although many handover schemes have been proposed at protocol layers below the application layer, they inherently suffer from data loss while the network is disconnected during the handover. We therefore propose an efficient application-layer handover scheme to support seamless mobility for P2P live streaming. Through simulation experiments, we show that a P2P live streaming system with our proposed handover scheme can improve playback continuity significantly compared to one without our scheme. PMID:24977171
Cehreli, Zafer C; Uyanik, M Ozgur; Nagas, Emre; Tuncel, Behram; Er, Nuray; Comert, Fugen Dagli
2013-09-01
To compare the smear layer removal efficacy and erosive effects of different irrigation protocols under clinical and laboratory conditions. Mandibular third molars (n = 32) of 30-45 year-old patients were instrumented with rotary files and were randomly assigned to one of the following groups for final irrigation: (1) 5.25% NaOCl; (2) 17% EDTA; and (3) BioPure MTAD. Thereafter, the teeth were immediately extracted and processed for micromorphological investigation. In vitro specimen pairs were prepared by repeating the clinical experiments on freshly-extracted mandibular third molars. To compare open and closed systems, laboratory experiments were repeated on 32 additional teeth with enlarged apical foramen. The cleanliness of the root canals and the extent of erosion were assessed by environmental scanning electron microscopy. Specimens prepared under clinical and laboratory conditions had similar cleanliness and erosion scores (p > 0.05). Under both conditions, the tested solutions were more effective in removing the smear layer in the coronal and middle regions than in the apical one. Comparison of closed and open systems showed similar levels of cleanliness and erosion in all regions (p > 0.05), with the exception of 17% EDTA showing significantly higher levels of cleanliness and erosion in the apical third of open-end specimens. Based on clinical correlates of in vitro root canal cleanliness and erosion, laboratory testing of root canal irrigants on extracted teeth with closed apices can serve as a reliable method to simulate the clinical condition. EDTA was the most effective final irrigation solution in removing the smear layer at the expense of yielding the greatest erosive effect.
Sigwald, Eric L; Genoud, Manuel E; Giachero, Marcelo; de Olmos, Soledad; Molina, Víctor A; Lorenzo, Alfredo
2016-05-01
The retrosplenial cortex (RSC) is one of the largest cortical areas in rodents, and is subdivided in two main regions, A29 and A30, according to their cytoarchitectural organization and connectivities. However, very little is known about the functional activity of each RSC subdivision during the execution of complex cognitive tasks. Here, we used a well-established fear learning protocol that induced long-lasting contextual fear memory and showed that during evocation of the fear memory, the expression of early growth response gene 1 was up-regulated in A30 and in other brain areas implicated in fear and spatial memory, but was down-regulated in A29, including layers IV and V. To probe the participation of A29 in fear memory, we triggered selective degeneration of neurons within cortical layers IV and V of A29 by using a non-invasive protocol that takes advantage of the vulnerability of these neurons to MK801 toxicity and the modulation of this neurodegeneration by testosterone. Application of 5 mg/kg MK801 in intact males induced negligible degeneration of A29 neurons and had no impact on fear memory retrieval. However, in orchiectomized rats, 5 mg/kg MK801 induced overt degeneration of layer IV-V neurons of A29, significantly impairing fear memory recall. Degeneration of A29 neurons did not affect exploratory or anxiety-related behavior nor altered unconditioned freezing. Importantly, protecting A29 neurons from MK801 toxicity by testosterone preserved fear memory recall in orchiectomized rats. Thus, neurons within cortical layers IV-V of A29 are critically required for efficient retrieval of contextual fear memory.
Lotfi, Mehrdad; Moghaddam, Negar; Vosoughhosseini, Sepideh; Zand, Vahid; Saghiri, Mohammad Ali
2012-01-01
Background and aims. The aim of the present study was to compare different irrigation times of 1.3% sodium hypochlorite (NaOCl) used with MTAD (mixture of tetracycline isomer, acid, and detergent) for the removal of the smear layer and induction of canal erosion. Materials and methods. 38 maxillary incisors were divided into three experimental groups of 10 teeth and positive and negative control groups of 4 teeth each, and prepared using rotary files. In the test groups, 1.3% NaOCl was used for 5, 10 and 20 minutes during preparation, followed by MTAD as the final rinse. In the negative control group, 5.25% NaOCl was used for 10 minutes, followed by 17% Ethylenediamine Tetra-Acetic Acid (EDTA) as the final rinse. In the positive control group, distilled water was used for 10 minutes during preparation and then as the final rinse. The samples were examined under a scanning electron microscope, and the smear layer and dentinal erosion scores were recorded. Results. The 5 and 10 min groups differed significantly from the 20 min group (p < 0.05). In the apical third, the 5 and 10 min groups also differed significantly from the 20 min group (p < 0.05). In the coronal thirds, when the time of irrigation with 1.3% NaOCl increased from 5 min to 20 min, erosion also increased significantly. However, the 5 and 10 min groups had no significant differences from the negative control group. Conclusion. The use of 1.3% sodium hypochlorite for 5 or 10 minutes in the MTAD protocol removes the smear layer in the coronal and middle thirds but does not induce erosion. PMID:22991642
Promoting Wired Links in Wireless Mesh Networks: An Efficient Engineering Solution
Barekatain, Behrang; Raahemifar, Kaamran; Ariza Quintana, Alfonso; Triviño Cabrera, Alicia
2015-01-01
Wireless Mesh Networks (WMNs) cannot completely guarantee good performance of traffic sources such as video streaming. To improve the network performance, this study proposes an efficient engineering solution named Wireless-to-Ethernet-Mesh-Portal-Passageway (WEMPP) that allows effective use of wired communication in WMNs. WEMPP permits transmitting data through wired and stable paths even when the destination is in the same network as the source (Intra-traffic). Tested with four popular routing protocols (Optimized Link State Routing or OLSR as a proactive protocol, Dynamic MANET On-demand or DYMO as a reactive protocol, DYMO with spanning tree ability and HWMP), WEMPP considerably decreases the end-to-end delay, jitter, contentions and interferences on nodes, even when the network size or density varies. WEMPP is also cost-effective and increases the network throughput. Moreover, in contrast to solutions proposed by previous studies, WEMPP is easily implemented by modifying the firmware of the actual Ethernet hardware without altering the routing protocols and/or the functionality of the IP/MAC/Upper layers. In fact, there is no need for modifying the functionalities of other mesh components in order to work with WEMPPs. The results of this study show that WEMPP significantly increases the performance of all routing protocols, thus leading to better video quality on nodes. PMID:25793516
Nieto, Sonia; Dragna, Justin M.; Anslyn, Eric V.
2010-01-01
A protocol for the rapid determination of the absolute configuration and enantiomeric excess of α-chiral primary amines, with potential applications in asymmetric reaction discovery, has been developed. The protocol requires derivatization of α-chiral primary amines via condensation with pyridine carboxaldehyde to quantitatively yield the corresponding imine. Upon binding the imine, the Cu(I) complex with 2,2'-bis(diphenylphosphino)-1,1'-dinaphthyl (BINAP-CuI) yields a metal-to-ligand charge-transfer (MLCT) band in the visible region of the circular dichroism spectrum. Diastereomeric host-guest complexes give CD signals of the same signs, but different amplitudes, allowing for differentiation of enantiomers. Processing the primary optical data from the CD spectrum with linear discriminant analysis (LDA) allows for the determination of absolute configuration and identification of the amines, and processing with a supervised multi-layer perceptron artificial neural network (MLP-ANN) allows for the simultaneous determination of ee and concentration. The primary optical data necessary to determine the ee of unknown samples is obtained in 2 minutes per sample. To demonstrate the utility of the protocol in asymmetric reaction discovery, the ee's and concentrations for an asymmetric metal-catalyzed reaction are determined. The potential of the protocol's application in high-throughput screening (HTS) of ee is discussed. PMID:19946914
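A minimal sketch of the two chemometric steps named above (LDA for identity/configuration, an MLP for simultaneous ee and concentration), using scikit-learn; the "spectra" are random placeholder arrays, not measurements from the paper, and the network size is an assumption.

    # Sketch with synthetic CD "spectra": LDA identifies the amine/configuration,
    # an MLP regressor maps spectra to (ee, concentration). Data are placeholders.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n_train, n_wavelengths = 60, 40
    X = rng.normal(size=(n_train, n_wavelengths))              # CD amplitude vs wavelength
    amine_id = rng.integers(0, 3, size=n_train)                 # which amine / configuration
    ee_conc = rng.uniform([0.0, 1.0], [1.0, 10.0], size=(n_train, 2))  # (ee, mM)

    lda = LinearDiscriminantAnalysis().fit(X, amine_id)
    ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000).fit(X, ee_conc)

    unknown = rng.normal(size=(1, n_wavelengths))
    print("predicted amine:", lda.predict(unknown)[0])
    print("predicted (ee, conc):", ann.predict(unknown)[0])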
A Survey of MAC Protocols for Cognitive Radio Body Area Networks.
Bhandari, Sabin; Moh, Sangman
2015-04-20
The advancement in electronics, wireless communications and integrated circuits has enabled the development of small low-power sensors and actuators that can be placed on, in or around the human body. A wireless body area network (WBAN) can be effectively used to deliver the sensory data to a central server, where it can be monitored, stored and analyzed. For more than a decade, cognitive radio (CR) technology has been widely adopted in wireless networks, as it utilizes the available spectra of licensed, as well as unlicensed bands. A cognitive radio body area network (CRBAN) is a CR-enabled WBAN. Unlike other wireless networks, CRBANs have specific requirements, such as being able to automatically sense their environments and to utilize unused, licensed spectra without interfering with licensed users, but existing protocols cannot fulfill them. In particular, the medium access control (MAC) layer plays a key role in cognitive radio functions, such as channel sensing, resource allocation, spectrum mobility and spectrum sharing. To address various application-specific requirements in CRBANs, several MAC protocols have been proposed in the literature. In this paper, we survey MAC protocols for CRBANs. We then compare the different MAC protocols with one another and discuss challenging open issues in the relevant research.
NASA Astrophysics Data System (ADS)
Xi, Huixing
2017-03-01
With the continuous development of network technology and the rapid spread of the Internet, computer networks now reach every corner of the world. However, network attacks occur frequently. The ARP protocol vulnerability is one of the most common vulnerabilities in the TCP/IP four-layer architecture. Network protocol vulnerabilities can lead to intrusion into and attacks on information systems and can disable the normal defense functions of a system [1]. At present, ARP-spoofing Trojans spread widely in LANs, posing a serious hidden danger to network operation and constituting the primary threat to LAN security. In this paper, the author summarizes the research status and the key technologies involved in the ARP protocol, analyzes the formation mechanism of the ARP protocol vulnerability, and analyzes the feasibility of attack techniques. Common defensive methods are summarized, along with the advantages and disadvantages of each. The current defense method is then improved, and the advantages of the improved defense algorithm are given. Finally, an appropriate test method is selected, a test environment is set up, and experiments and tests are carried out for each proposed improved defense algorithm.
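A minimal sketch of one common defensive idea surveyed in such work: tracking IP-to-MAC bindings and raising an alert when a binding changes. In a real deployment the pairs would come from sniffed ARP replies; here they are fed in as plain tuples, and the alert rule is an illustrative simplification.

    # Minimal illustration of ARP-spoofing detection via IP-to-MAC binding checks.
    known_bindings = {}   # maps IP address -> last seen MAC address

    def check_arp_reply(ip, mac):
        previous = known_bindings.get(ip)
        if previous is None:
            known_bindings[ip] = mac
        elif previous != mac:
            print(f"ALERT: {ip} changed from {previous} to {mac} - possible ARP spoofing")

    for ip, mac in [("192.168.1.1", "aa:bb:cc:dd:ee:01"),
                    ("192.168.1.1", "aa:bb:cc:dd:ee:01"),
                    ("192.168.1.1", "de:ad:be:ef:00:01")]:   # gateway suddenly "moves"
        check_arp_reply(ip, mac)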
The dermatopharmacokinetic (DPK) method of dermal tape stripping may prove to be a valuable addition to risk assessment protocols for toxic substances as it has been for the assessment of bioequivalence and bioavailability of topical dermatologic drugs. The measurement of drug ...
Radio frequency energy for postharvest control of pests in dry nuts and legumes
USDA-ARS?s Scientific Manuscript database
Methyl bromide (MeBr) is widely used as a fumigant in insect control, but it is harmful to the environment and a concern to human health. The Montreal Protocol on Substances that Deplete the Ozone Layer calls for the elimination of MeBr by 2005 in developed countries and by 2015 in developing countr...
Code of Federal Regulations, 2014 CFR
2014-07-01
...; (vii) Identify protocols for appropriate testing of manure, litter, process wastewater, and soil; (viii... under roof (beef cattle, broilers, layers, swine weighing 55 pounds or more, swine weighing less than 55... paragraph (e)(5)(ii) of this section, the results of any soil testing for nitrogen and phosphorus taken...
Code of Federal Regulations, 2013 CFR
2013-07-01
...; (vii) Identify protocols for appropriate testing of manure, litter, process wastewater, and soil; (viii... under roof (beef cattle, broilers, layers, swine weighing 55 pounds or more, swine weighing less than 55... paragraph (e)(5)(ii) of this section, the results of any soil testing for nitrogen and phosphorus taken...
Code of Federal Regulations, 2012 CFR
2012-07-01
...; (vii) Identify protocols for appropriate testing of manure, litter, process wastewater, and soil; (viii... under roof (beef cattle, broilers, layers, swine weighing 55 pounds or more, swine weighing less than 55... paragraph (e)(5)(ii) of this section, the results of any soil testing for nitrogen and phosphorus taken...
NASA Astrophysics Data System (ADS)
Martinez, Ralph; Nam, Jiseung
1992-07-01
Picture Archiving and Communication Systems (PACS) is an integration of digital image formation in a hospital, which encompasses various imaging equipment, image viewing workstations, image databases, and a high speed network. The integration requires a standardization of communication protocols to connect devices from different vendors. The American College of Radiology and the National Electrical Manufacturers Association (ACR-NEMA) standard Version 2.0 provides a point-to-point hardware interface, a set of software commands, and a consistent set of data formats for PACS. But it is inadequate for PACS networking environments, because of its point-to-point nature and its inflexibility to allow other services and protocols in the future. Based on previous experience of PACS developments at The University of Arizona, a new communication protocol for PACS networks and an approach were proposed to ACR-NEMA Working Group VI. The defined PACS protocol is intended to facilitate the development of PACSs capable of interfacing with other hospital information systems. Also, it is intended to allow the creation of diagnostic information databases which can be interrogated by a variety of distributed devices. A particularly important goal is to support communications in a multivendor environment. The new protocol specifications are defined primarily as a combination of the International Organization for Standardization/Open Systems Interconnection (ISO/OSI) and TCP/IP protocols, and the data format portion of the ACR-NEMA standard. This paper addresses the specification and implementation of the ISO-based protocol in a PACS prototype. The protocol specification, which covers the Presentation, Session, Transport, and Network layers, is summarized briefly. The protocol implementation is discussed based on our implementation efforts in the UNIX operating system environment. At the same time, results of a performance comparison between the ISO and TCP/IP implementations are presented to demonstrate the implementation of the defined protocol. The performance analysis is done by prototyping PACS on the available platforms: Micro VAX II, DECstation and SUN workstations.
Use of the Delay-Tolerant Networking Bundle Protocol from Space
NASA Technical Reports Server (NTRS)
Wood, Lloyd; Ivancic, William D.; Eddy, Wesley M.; Stewart, Dave; Northam, James; Jackson, Chris; daSilvaCuriel, Alex
2009-01-01
The Disaster Monitoring Constellation (DMC), constructed by Surrey Satellite Technology Ltd (SSTL), is a multisatellite Earth-imaging low-Earth-orbit sensor network where captured image swaths are stored onboard each satellite and later downloaded from the satellite payloads to a ground station. Store-and-forward of images with capture and later download gives each satellite the characteristics of a node in a Delay/Disruption Tolerant Network (DTN). Originally developed for the Interplanetary Internet, DTNs are now under investigation in an Internet Research Task Force (IRTF) DTN research group (RG), which has developed a bundle architecture and protocol. The DMC is currently unique in its adoption of the Internet Protocol (IP) for its imaging payloads and for satellite command and control, based around reuse of commercial networking and link protocols. These satellites' use of IP has enabled earlier experiments with the Cisco router in Low Earth Orbit (CLEO) onboard the constellation's UK-DMC satellite. Earth images are downloaded from the satellites using a custom IP-based high-speed transfer protocol developed by SSTL, Saratoga, which tolerates unusual link environments. Saratoga has been documented in the Internet Engineering Task Force (IETF) for wider adoption. We experiment with the use of DTNRG bundle concepts onboard the UK-DMC satellite, by examining how Saratoga can be used as a DTN convergence layer to carry the DTNRG Bundle Protocol, so that sensor images can be delivered to ground stations and beyond as bundles. This is the first successful use of the DTNRG Bundle Protocol in a space environment. We use our practical experience to examine the strengths and weaknesses of the Bundle Protocol for DTN use, paying attention to fragmentation, custody transfer, and reliability issues.
Simulation Modeling and Performance Evaluation of Space Networks
NASA Technical Reports Server (NTRS)
Jennings, Esther H.; Segui, John
2006-01-01
In space exploration missions, the coordinated use of spacecraft as communication relays increases the efficiency of the endeavors. To conduct trade-off studies of the performance and resource usage of different communication protocols and network designs, JPL designed a comprehensive extendable tool, the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE). The design and development of MACHETE began in 2000 and is constantly evolving. Currently, MACHETE contains Consultative Committee for Space Data Systems (CCSDS) protocol standards such as Proximity-1, Advanced Orbiting Systems (AOS), Packet Telemetry/Telecommand, Space Communications Protocol Specification (SCPS), and the CCSDS File Delivery Protocol (CFDP). MACHETE uses the Aerospace Corporation's Satellite Orbital Analysis Program (SOAP) to generate the orbital geometry information and contact opportunities. Matlab scripts provide the link characteristics. At the core of MACHETE is a discrete event simulator, QualNet. Delay Tolerant Networking (DTN) is an end-to-end architecture providing communication in and/or through highly stressed networking environments. Stressed networking environments include those with intermittent connectivity, large and/or variable delays, and high bit error rates. To provide its services, the DTN protocols reside at the application layer of the constituent internets, forming a store-and-forward overlay network. The key capabilities of the bundling protocols include custody-based reliability, ability to cope with intermittent connectivity, ability to take advantage of scheduled and opportunistic connectivity, and late binding of names to addresses. In this presentation, we report on the addition of the MACHETE models needed to support DTN, namely the Bundle Protocol (BP) model. To illustrate the use of MACHETE with the additional DTN model, we provide an example simulation to benchmark its performance. We demonstrate the use of the DTN protocol and discuss statistics gathered concerning the total time needed to simulate numerous bundle transmissions.
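To show the flavor of statistic such a simulation produces, the toy model below delays each bundle in storage until the next contact window and reports the mean delivery delay; the contact schedule and transmission times are invented, and this is not the MACHETE/QualNet model.

    # Toy store-and-forward model of bundle delivery over intermittent contacts.
    import random

    random.seed(1)
    contact_period, contact_length = 90.0, 10.0   # relay visible 10 s out of every 90 s

    def next_contact(t):
        phase = t % contact_period
        return t if phase < contact_length else t + (contact_period - phase)

    def deliver(creation_time):
        t = next_contact(creation_time)           # bundle waits in storage for a contact
        return t + random.uniform(0.5, 2.0) - creation_time   # plus transmission time

    delays = [deliver(random.uniform(0, 1000)) for _ in range(200)]
    print(f"mean bundle delivery delay: {sum(delays) / len(delays):.1f} s")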
Generic, Extensible, Configurable Push-Pull Framework for Large-Scale Science Missions
NASA Technical Reports Server (NTRS)
Foster, Brian M.; Chang, Albert Y.; Freeborn, Dana J.; Crichton, Daniel J.; Woollard, David M.; Mattmann, Chris A.
2011-01-01
The push-pull framework was developed to provide an infrastructure that can connect to virtually any remote site and, given a set of restrictions, download files from that site based on those restrictions. The Cataloging and Archiving Service (CAS) has recently been re-architected and re-factored in its canonical services, including file management, workflow management, and resource management. Additionally, a generic CAS Crawling Framework was built, motivated by Apache's open-source search engine project, Nutch. Nutch is an Apache effort to provide search engine services (akin to Google), including crawling, parsing, content analysis, and indexing. It has produced several stable software releases, and is currently used in production services at companies such as Yahoo, and at NASA's Planetary Data System. The CAS Crawling Framework supports many of the Nutch crawler's generic services, including metadata extraction, crawling, and ingestion. However, one service that was not ported over from Nutch is a generic protocol layer service that allows the Nutch crawler to obtain content using protocol plug-ins that download content using implementations of remote protocols, such as HTTP, FTP, WinNT file system, HTTPS, etc. Such a generic protocol layer would greatly aid the CAS Crawling Framework, as the layer would allow the framework to generically obtain content (i.e., data products) from remote sites using protocols such as FTP and others. Augmented with this capability, the Orbiting Carbon Observatory (OCO) and NPP (NPOESS Preparatory Project) Sounder PEATE (Product Evaluation and Analysis Tools Elements) would be provided with an infrastructure to support generic FTP-based pull access to remote data products, obviating the need for any specialized software outside of the context of their existing process control systems. This extensible, configurable framework was created in Java, and allows the use of different underlying communication middleware (at present, both XML-RPC and RMI). In addition, the framework is entirely suitable in a multi-mission environment and is supporting both NPP Sounder PEATE and the OCO mission. Both systems involve tasks such as high-throughput job processing, terabyte-scale data management, and science computing facilities. NPP Sounder PEATE is already using the push-pull framework to accept hundreds of gigabytes of IASI (infrared atmospheric sounding interferometer) data, and is in preparation to accept CRIMS (Cross-track Infrared Microwave Sounding Suite) data. OCO will leverage the framework to download MODIS, CloudSat, and other ancillary data products for use in the high-performance Level 2 science algorithm. The National Cancer Institute is also evaluating the framework for use in sharing and disseminating cancer research data through its Early Detection Research Network (EDRN).
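The protocol plug-in idea described above can be sketched as follows; the framework itself is written in Java, so this Python sketch (using only standard-library FTP/HTTP clients) is purely illustrative, and the plugin interface, registry, and example host/path are assumptions.

    # Sketch of a generic protocol plug-in layer for pulling remote data products.
    from ftplib import FTP
    from urllib.request import urlopen

    class ProtocolPlugin:
        def fetch(self, host, remote_path, local_path):
            raise NotImplementedError

    class HttpPlugin(ProtocolPlugin):
        def fetch(self, host, remote_path, local_path):
            with urlopen(host + remote_path) as resp, open(local_path, "wb") as out:
                out.write(resp.read())

    class FtpPlugin(ProtocolPlugin):
        def fetch(self, host, remote_path, local_path):
            with FTP(host) as ftp, open(local_path, "wb") as out:
                ftp.login()                                  # anonymous login
                ftp.retrbinary("RETR " + remote_path, out.write)

    PLUGINS = {"http": HttpPlugin(), "ftp": FtpPlugin()}

    def pull(scheme, host, remote_path, local_path):
        PLUGINS[scheme].fetch(host, remote_path, local_path)

    # Example (hypothetical host and path):
    # pull("ftp", "ftp.example.org", "/products/granule001.h5", "granule001.h5")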
Ganesh, Praveen; Murthy, Jyotsna; Ulaghanathan, Navitha; Savitha, V H
2015-07-01
To study the growth and speech outcomes in children who were operated on for unilateral cleft lip and palate (UCLP) by a single surgeon using two different treatment protocols. A total of 200 consecutive patients with nonsyndromic UCLP were randomly allocated to two different treatment protocols. Of the 200 patients, 179 completed the protocol. However, only 85 patients presented for follow-up during the mixed dentition period (7-10 years of age). The following treatment protocol was followed. Protocol 1 consisted of the vomer flap (VF), whereby patients underwent primary lip nose repair and vomer flap for hard palate single-layer closure, followed by soft palate repair 6 months later; Protocol 2 consisted of the two-flap technique (TF), whereby the cleft palate (CP) was repaired by two-flap technique after primary lip and nose repair. GOSLON Yardstick scores for dental arch relation, and speech outcomes based on universal reporting parameters, were noted. A total of 40 patients in the VF group and 45 in the TF group completed the treatment protocols. The GOSLON scores showed marginally better outcomes in the VF group compared to the TF group. Statistically significant differences were found only in two speech parameters, with better outcomes in the TF group. Our results showed marginally better growth outcome in the VF group compared to the TF group. However, the speech outcomes were better in the TF group. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
The president speaks: prevention is best: lessons from protecting the ozone layer.
Woodcock, Ashley
2012-12-01
The Montreal Protocol was signed 25 years ago. As a result, the irreversible destruction of the ozone layer was prevented. However, stratospheric ozone will not recover completely until 2060 and the consequent epidemic in skin cancer cases will persist until 2100. Many millions of patients with asthma and chronic obstructive pulmonary disease have safely switched from chlorofluorocarbon (CFC)-powered metered-dose inhalers (MDIs) to either hydrofluorocarbon (HFC)-powered MDIs or dry powder inhalers (DPIs). China will be the last country to phase out CFCs, by 2016. HFCs are global warming gases which will be controlled in the near future. HFCs in MDIs may be phased out over the next 10-20 years.
NASA Astrophysics Data System (ADS)
Harding, D. R.; Wittman, M. D.; Elasky, L.; Iwan, L. S.; Lund, L.
2001-10-01
The OMEGA Cryogenic Target Handling System (OCTHS) allows variable-thickness ice layers (nominally 100 μm) to be formed inside OMEGA-size (1-mm-diam., 3-μm-wall) plastic shells. The OCTHS design provides the most straightforward thermal environment for layering targets: permeation-filled spherical targets sit in a spherical isothermal environment. The layered target can be rotated 360° to acquire multiple views of the ice layer. However, the capability of providing cryogenic targets for implosion experiments imposes constraints that do not exist in test systems dedicated to ice-layering studies. Most affected is the ability to characterize the target: space constraints and the need for multiple sets of windows limit the viewing access to f/5 optics, which affects the image quality. With these features, the OCTHS provides the most relevant test system, to date, for layering targets and quantifying the overall ice roughness. No single layering protocol provides repeatable ice smoothness. All techniques require extensive operator interaction, and the layering process is lengthy. Typical ice rms smoothness varied from 5 to 10 μm for all targets studied. Characterizing the ice layer from different views shows a ~30% variation in the ice rms smoothness and a greater difference in the power spectra, depending on the view axis. This work was supported by the U.S. DOE Office of Inertial Confinement Fusion under Cooperative Agreement No. DE-FC03-92SF19460.
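For readers unfamiliar with the roughness metric quoted above, the sketch below computes an rms roughness figure and a mode power spectrum from one view's inner-ice-radius profile; the synthetic profile and mode amplitudes are placeholders, not measurements from the OCTHS.

    # Computing rms roughness and a mode power spectrum from one shadowgraph view.
    import numpy as np

    theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    radius = 430.0 + 3.0 * np.cos(2 * theta) + 1.5 * np.cos(7 * theta)   # microns (synthetic)

    deviation = radius - radius.mean()
    rms = np.sqrt(np.mean(deviation ** 2))

    modes = np.fft.rfft(deviation) / len(theta)
    power = 2.0 * np.abs(modes[1:]) ** 2        # power per azimuthal mode number

    print(f"ice rms roughness: {rms:.2f} um")
    print("dominant modes:", np.argsort(power)[-2:][::-1] + 1)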
Misra, Ashish; Feng, Zhonghui; Zhang, Jiasheng; Lou, Zhi-Yin; Greif, Daniel M
2017-09-12
The aorta is the largest artery in the body. The aortic wall is composed of an inner layer of endothelial cells, a middle layer of alternating elastic lamellae and smooth muscle cells (SMCs), and an outer layer of fibroblasts and extracellular matrix. In contrast to the widespread study of pathological models (e.g., atherosclerosis) in the adult aorta, much less is known about the embryonic and perinatal aorta. Here, we focus on SMCs and provide protocols for the analysis of the morphogenesis and pathogenesis of embryonic and perinatal aortic SMCs in normal development and disease. Specifically, the four protocols included are: i) in vivo embryonic fate mapping and clonal analysis; ii) explant embryonic aorta culture; iii) SMC isolation from the perinatal aorta; and iv) subcutaneous osmotic mini-pump placement in pregnant (or non-pregnant) mice. Thus, these approaches facilitate the investigation of the origin(s), fate, and clonal architecture of SMCs in the aorta in vivo. They allow for modulating embryonic aorta morphogenesis in utero by continuous exposure to pharmacological agents. In addition, isolated aortic tissue explants or aortic SMCs can be used to gain insights into the role of specific gene targets during fundamental processes such as muscularization, proliferation, and migration. These hypothesis-generating experiments on isolated SMCs and the explanted aorta can then be assessed in the in vivo context through pharmacological and genetic approaches.
Research of marine sensor web based on SOA and EDA
NASA Astrophysics Data System (ADS)
Jiang, Yongguo; Dou, Jinfeng; Guo, Zhongwen; Hu, Keyong
2015-04-01
A great deal of ocean sensor observation data exists, for a wide range of marine disciplines, derived from in situ and remote observing platforms, in real-time, near-real-time and delayed mode. Ocean monitoring is routinely completed using sensors and instruments. Standardization is the key requirement for exchanging information about ocean sensors and sensor data and for comparing and combining information from different sensor networks. One or more sensors are often physically integrated into a single ocean 'instrument' device, which brings many challenges related to diverse sensor data formats, parameter units, different spatiotemporal resolutions, application domains, data quality and sensor protocols. Facing these challenges requires standardization efforts aimed at facilitating the so-called Sensor Web, which make it easy to provide public access to sensor data and metadata. In this paper, a Marine Sensor Web based on SOA and EDA, integrating the MBARI PUCK protocol, IEEE 1451 and OGC SWE 2.0, is illustrated with a five-layer architecture. The Web Service layer and Event Process layer are illustrated in detail with a concrete example. The demonstration study shows that a standards-based system can be built to access sensors and marine instruments distributed globally, using common Web browsers, for monitoring environmental and oceanic conditions as well as marine sensor data on the Web; this Marine Sensor Web framework can also play an important role in information integration for many other domains.
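The event-driven side (EDA) of such an architecture can be pictured with the minimal publish/subscribe sketch below; the broker class, the observed-property name, and the observation dictionary fields are assumptions loosely echoing SWE observation vocabulary, not the paper's Event Process layer.

    # Minimal event-driven layer: sensors publish observations, services subscribe
    # by observed property.
    from collections import defaultdict

    class EventBroker:
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, observed_property, handler):
            self.subscribers[observed_property].append(handler)

        def publish(self, observation):
            for handler in self.subscribers[observation["observedProperty"]]:
                handler(observation)

    broker = EventBroker()
    broker.subscribe("sea_water_temperature",
                     lambda obs: print("alert service got:", obs["result"], obs["uom"]))
    broker.publish({"observedProperty": "sea_water_temperature",
                    "result": 18.4, "uom": "Cel", "procedure": "ctd-001"})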
Modelling multimedia teleservices with OSI upper layers framework: Short paper
NASA Astrophysics Data System (ADS)
Widya, I.; Vanrijssen, E.; Michiels, E.
The paper presents the use of the concepts and modelling principles of the Open Systems Interconnection (OSI) upper layers structure in the modelling of multimedia teleservices, with emphasis on the revised Application Layer Structure (OSI/ALS). OSI/ALS is an object-based reference model intended to coordinate the development of application-oriented services and protocols in a consistent and modular way, enabling the rapid deployment and integrated use of these services. The paper further emphasizes the nesting structure defined in OSI/ALS, which allows the design of scalable and user-tailorable/controllable teleservices. OSI/ALS-consistent teleservices are moreover implementable on communication platforms of different capabilities. An analysis of distributed multimedia architectures found in the literature confirms the ability of the OSI/ALS framework to model the interworking functionalities of teleservices.
NASA Astrophysics Data System (ADS)
Kodama, Yu; Hamagami, Tomoki
A distributed processing system for the restoration of electric power distribution networks using a two-layered contract net protocol (CNP) is proposed. The goal of this study is to develop a restoration system that adjusts to the future power network with distributed generators. The novelty of this study is that the two-layered CNP is applied to a distributed computing environment in practical use. The two-layered CNP has two classes of agents in the network, named field agents and operating agents. To avoid conflicts between tasks, the operating agent controls the privilege of managers to send task announcement messages in the CNP. This technique realizes coordination between agents that work asynchronously and in parallel with one another. Moreover, this study implements the distributed processing system using a de facto standard multi-agent framework, JADE (Java Agent DEvelopment Framework). Simulation experiments of power distribution network restoration are conducted, and the proposed system is compared with the previous system. The results confirm the effectiveness of the proposed system.
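A minimal sketch of the two-layer idea follows, assuming a simple privilege-granting operating agent and a single contract-net round; the class names, bidding rule, and capacity figures are illustrative and are not the authors' JADE implementation.

```python
class OperatingAgent:
    """Upper layer: serializes announcement privilege so concurrent CNP rounds do not conflict."""
    def __init__(self):
        self._holder = None

    def request_privilege(self, manager_id):
        if self._holder is None:
            self._holder = manager_id
            return True
        return False

    def release_privilege(self, manager_id):
        if self._holder == manager_id:
            self._holder = None


class FieldAgent:
    """Lower layer: a contractor that bids on restoration tasks it can supply."""
    def __init__(self, agent_id, spare_capacity_kw):
        self.agent_id = agent_id
        self.spare_capacity_kw = spare_capacity_kw

    def bid(self, task):
        # Bid only if this feeder can pick up the de-energized load.
        if self.spare_capacity_kw >= task["load_kw"]:
            return {"bidder": self.agent_id, "margin_kw": self.spare_capacity_kw - task["load_kw"]}
        return None


def run_cnp_round(manager_id, task, contractors, operator):
    """One contract-net round: announce, collect bids, award the best, release privilege."""
    if not operator.request_privilege(manager_id):
        return None  # another manager is announcing; retry later
    try:
        bids = [b for c in contractors if (b := c.bid(task)) is not None]
        if not bids:
            return None
        # Award to the contractor with the largest remaining margin.
        return max(bids, key=lambda b: b["margin_kw"])["bidder"]
    finally:
        operator.release_privilege(manager_id)


if __name__ == "__main__":
    operator = OperatingAgent()
    contractors = [FieldAgent("feeder-A", 120.0), FieldAgent("feeder-B", 300.0)]
    task = {"section": "S7", "load_kw": 200.0}
    print(run_cnp_round("manager-S7", task, contractors, operator))  # -> feeder-B
```

Serializing the announcement privilege in the upper layer is what keeps two managers from announcing conflicting restoration tasks at the same time.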
Bitzenhofer, Sebastian H; Ahlbeck, Joachim; Wolff, Amy; Wiegert, J. Simon; Gee, Christine E.; Oertner, Thomas G.; Hanganu-Opatz, Ileana L.
2017-01-01
Coordinated activity patterns in the developing brain may contribute to the wiring of neuronal circuits underlying future behavioural requirements. However, causal evidence for this hypothesis has been difficult to obtain owing to the absence of tools for selective manipulation of oscillations during early development. We established a protocol that combines optogenetics with electrophysiological recordings from neonatal mice in vivo to elucidate the substrate of early network oscillations in the prefrontal cortex. We show that light-induced activation of layer II/III pyramidal neurons that are transfected by in utero electroporation with a high-efficiency channelrhodopsin drives frequency-specific spiking and boosts network oscillations within the beta–gamma frequency range. By contrast, activation of layer V/VI pyramidal neurons causes nonspecific network activation. Thus, entrainment of neonatal prefrontal networks in fast rhythms relies on the activation of layer II/III pyramidal neurons. The approach used here may be useful for further interrogation of developing circuits and their behavioural readout. PMID:28216627
Investigation of PDMS based bi-layer elasticity via interpretation of apparent Young's modulus.
Sarrazin, Baptiste; Brossard, Rémy; Guenoun, Patrick; Malloggi, Florent
2016-02-21
As the need for new methods for the investigation of thin films on various kinds of substrates grows, a novel approach based on AFM nanoindentation is explored. Substrates of polydimethylsiloxane (PDMS) coated by a layer of hard material are probed with an AFM tip in order to obtain the force profile as a function of the indentation. The equivalent elasticity of these composite systems is interpreted using a new numerical approach, the Coated Half-Space Indentation Model of Elastic Response (CHIMER), in order to extract the thickness of the upper layer. Two kinds of coating are investigated. First, chitosan films of known thicknesses between 30 and 200 nm were probed in order to test the model. A second type of sample is produced by oxygen plasma oxidation of the PDMS substrate, which results in the growth of a relatively homogeneous oxide layer. The local nature of this protocol enables measurements at long oxidation times, where the appearance of cracks prevents other kinds of measurements.
Implication of ethanol wet-bonding in hybrid layer remineralization.
Kim, J; Gu, L; Breschi, L; Tjäderhane, L; Choi, K K; Pashley, D H; Tay, F R
2010-06-01
During mineralization, unbound water within the collagen matrix is replaced by apatite. This study tested the null hypothesis that there is no difference in the status of in vitro biomimetic remineralization of hybrid layers, regardless of their moisture contents. Acid-etched dentin was bonded with One-Step with ethanol-wet-bonding, water-wet-bonding, and water-overwet-bonding protocols. Composite-dentin slabs were subjected to remineralization for 1-4 months in a medium containing dual biomimetic analogs, with set Portland cement as the calcium source and characterized by transmission electron microscopy. Remineralization was either non-existent or restricted to the intrafibrillar mode in ethanol-wet-bonded specimens. Extensive intrafibrillar and interfibrillar remineralization was observed in water-wet-bonded specimens. Water-overwet specimens demonstrated partial remineralization of hybrid layers and precipitation of mineralized plates within water channels. The use of ethanol-wet-bonding substantiates that biomimetic remineralization is a progressive dehydration process that replaces residual water in hybrid layers with apatite crystallites.
TCP throughput adaptation in WiMax networks using replicator dynamics.
Anastasopoulos, Markos P; Petraki, Dionysia K; Kannan, Rajgopal; Vasilakos, Athanasios V
2010-06-01
The high-frequency segment (10-66 GHz) of the IEEE 802.16 standard seems promising for the implementation of wireless backhaul networks carrying large volumes of Internet traffic. In contrast to wireline backbone networks, where channel errors seldom occur, the TCP protocol in IEEE 802.16 Worldwide Interoperability for Microwave Access networks is conditioned exclusively by wireless channel impairments rather than by congestion. This renders a cross-layer design approach between the transport and physical layers more appropriate during fading periods. In this paper, an adaptive coding and modulation (ACM) scheme for TCP throughput maximization is presented. In the current approach, Internet traffic is modulated and coded employing an adaptive scheme that is mathematically equivalent to the replicator dynamics model. The stability of the proposed ACM scheme is proven, and the dependence of the speed of convergence on various physical-layer parameters is investigated. It is also shown that convergence to the strategy that maximizes TCP throughput may be further accelerated by increasing the amount of information from the physical layer.
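The replicator-dynamics mapping can be illustrated with a small discrete-time sketch, where the population shares are the fractions of traffic assigned to candidate modulation-and-coding schemes and the fitness values stand in for estimated TCP goodput under the current fading state; the numbers and step size are illustrative, not the authors' model.

```python
def replicator_step(shares, fitness, step=0.1):
    """One discrete replicator-dynamics update: schemes with above-average
    fitness gain share, below-average schemes lose share."""
    avg = sum(s * f for s, f in zip(shares, fitness))
    new = [s + step * s * (f - avg) for s, f in zip(shares, fitness)]
    total = sum(new)
    return [max(s, 0.0) / total for s in new]


# Three hypothetical MCS options with illustrative goodput estimates (Mbit/s)
# under the current fading state; values are not from the paper.
fitness = [1.2, 2.8, 2.1]
shares = [1 / 3, 1 / 3, 1 / 3]
for _ in range(50):
    shares = replicator_step(shares, fitness)
print([round(s, 3) for s in shares])  # converges toward the highest-goodput scheme
```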
Covalent layer-by-layer films: chemistry, design, and multidisciplinary applications.
An, Qi; Huang, Tao; Shi, Feng
2018-05-16
Covalent layer-by-layer (LbL) assembly is a powerful method used to construct functional ultrathin films that enables nanoscopic structural precision, componential diversity, and flexible design. Compared with conventional LbL films built using multiple noncovalent interactions, LbL films prepared using covalent crosslinking offer the following distinctive characteristics: (i) enhanced film endurance or rigidity; (ii) improved componential diversity when uncharged species or small molecules are stably built into the films by forming covalent bonds; and (iii) increased structural diversity when covalent crosslinking is employed in componentially, spatially, or temporally (labile bonds) selective manners. In this review, we document the chemical methods used to build covalent LbL films as well as the film properties and applications achievable using various film design strategies. We expect to translate the achievements in the discipline of chemistry (film-building methods) into readily available techniques for materials engineers and thus provide diverse functional material design protocols to address the energy, biomedical, and environmental challenges faced by the entire scientific community.
Using the ACR/NEMA standard with TCP/IP and Ethernet
NASA Astrophysics Data System (ADS)
Chimiak, William J.; Williams, Rodney C.
1991-07-01
There is a need for a consolidated picture archival and communications system (PACS) in hospitals. At the Bowman Gray School of Medicine of Wake Forest University (BGSM), the authors are enhancing the ACR/NEMA Version 2 protocol using UNIX sockets and TCP/IP to greatly improve connectivity. Initially, nuclear medicine studies using gamma cameras are to be sent to PACS. The ACR/NEMA Version 2 protocol provides the functionality of the upper three layers of the open system interconnection (OSI) model in this implementation. The images, imaging equipment information, and patient information are then sent in ACR/NEMA format to a software socket. From there it is handed to the TCP/IP protocol, which provides the transport and network service. TCP/IP, in turn, uses the services of IEEE 802.3 (Ethernet) to complete the connectivity. The advantage of this implementation is threefold: (1) Only one I/O port is consumed by numerous nuclear medicine cameras, instead of a physical port for each camera. (2) Standard protocols are used which maximize interoperability with ACR/NEMA compliant PACSs. (3) The use of sockets allows a migration path to the transport and networking services of OSI's TP4 and connectionless network service as well as the high-performance protocol being considered by the American National Standards Institute (ANSI) and the International Standards Organization (ISO) -- the Xpress Transfer Protocol (XTP). The use of sockets also gives access to ANSI's Fiber Distributed Data Interface (FDDI) as well as other high-speed network standards.
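A rough sketch of the described pattern, handing an ACR/NEMA-encoded byte stream to a TCP socket, is shown below; the 4-byte length-prefix framing and the helper names are assumptions for illustration and are not part of the ACR/NEMA standard or the BGSM implementation.

```python
import socket
import struct


def _recv_exact(conn: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from a connected socket."""
    buf = bytearray()
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed before full message arrived")
        buf.extend(chunk)
    return bytes(buf)


def send_study(host: str, port: int, acr_nema_message: bytes) -> None:
    """Hand an ACR/NEMA-encoded byte stream to a TCP socket; TCP/IP then provides
    the transport and network service (the 4-byte length prefix is illustrative
    framing, not part of the standard)."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(struct.pack("!I", len(acr_nema_message)) + acr_nema_message)


def receive_study(listen_port: int) -> bytes:
    """Accept one connection and read one length-prefixed message."""
    with socket.create_server(("", listen_port)) as server:
        conn, _addr = server.accept()
        with conn:
            (length,) = struct.unpack("!I", _recv_exact(conn, 4))
            return _recv_exact(conn, length)
```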
Influence of curing protocol and ceramic composition on the degree of conversion of resin cement.
Lanza, Marcos Daniel Septimio; Andreeta, Marcello Rubens Barsi; Pegoraro, Thiago Amadei; Pegoraro, Luiz Fernando; Carvalho, Ricardo Marins De
2017-01-01
Due to increasing aesthetic demands, ceramic crowns are widely used in different situations. However, to obtain a long-term prognosis of restorations, a good conversion of the resin cement is necessary. The aim was to evaluate the degree of conversion (DC) of one light-cure and two dual-cure resin cements under a simulated clinical cementation of ceramic crowns. Prepared teeth were randomly split according to the ceramic material, resin cement and curing protocol. The crowns were cemented as per the manufacturer's directions and photoactivated either from the occlusal surface only for 60 s, or from the buccal, occlusal and lingual surfaces with an exposure time of 20 s on each aspect. After cementation, the specimens were stored in deionized water at 37°C for 7 days. Specimens were transversally sectioned from occlusal to cervical surfaces and the DC was determined along the cement line, with three measurements taken and averaged from the buccal, lingual and approximal aspects using micro-Raman spectroscopy (Alpha 300R/WITec®). Data were analyzed by 3-way ANOVA and Tukey test at α=5%. Statistical analysis showed significant differences among cements, curing protocols and ceramic type (p<0.001). The 3×20 curing protocol resulted in higher DC for all tested conditions; lower DC was observed for Zr ceramic crowns; Duolink resin cement culminated in higher DC regardless of ceramic composition and curing protocol. The DC of resin cement layers was dependent on the curing protocol and type of ceramic.
Breast dosimetry in clinical mammography
NASA Astrophysics Data System (ADS)
Benevides, Luis Alberto Do Rego
The objective of this study was to show that a clinical dosimetry protocol that utilizes a dosimetric breast phantom series based on population anthropometric measurements can reliably predict the average glandular dose (AGD) imparted to the patient during a routine screening mammogram. In the study, AGD was calculated using entrance skin exposure and dose conversion factors based on fibroglandular content, compressed breast thickness, mammography unit parameters and modifying parameters for a homogeneous phantom (phantom factor), compressed breast lateral dimensions (volume factor) and anatomical features (anatomical factor). The protocol proposes the use of a fiber-optic coupled dosimeter (FOCD) or a metal oxide semiconductor field effect transistor (MOSFET) dosimeter to measure the entrance skin exposure at the time of the mammogram without interfering with the diagnostic information of the mammogram. The study showed that the FOCD had less than 7% energy dependence in sensitivity, was linear across all tube current-time product stations, and was reproducible within 2%. The FOCD was superior to the MOSFET dosimeter in sensitivity, reusability, and reproducibility. The patient fibroglandular content was evaluated using a calibrated modified breast tissue equivalent homogeneous phantom series (BRTES-MOD) designed from anthropomorphic measurements of a screening mammography population and whose elemental composition was referenced to International Commission on Radiation Units and Measurements Report 44 tissues. The patient fibroglandular content and compressed breast thickness, along with unit parameters and spectrum half-value layer, were used to derive the currently used dose conversion factor (DgN). The study showed that the use of a homogeneous phantom, patient compressed breast lateral dimensions and patient anatomical features can affect AGD by as much as 12%, 3% and 1%, respectively. The protocol was found to be superior to existing methodologies. In addition, the study population anthropometric measurements enabled the development of analytical equations to calculate the whole-breast area, an estimate of the skin layer thickness, and the optimal location for the automatic exposure control ionization chamber. The clinical dosimetry protocol developed in this study can reliably predict the AGD imparted to an individual patient during a routine screening mammogram.
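The abstract lists the quantities entering the AGD estimate; a small sketch is given below under the assumption that the modifying factors scale the usual entrance-exposure times DgN product multiplicatively (the study's exact formulation may differ), with purely illustrative numbers.

```python
def average_glandular_dose(entrance_skin_exposure, dgn,
                           phantom_factor=1.0, volume_factor=1.0, anatomical_factor=1.0):
    """AGD estimate: entrance skin exposure times the DgN dose conversion factor,
    scaled by the protocol's modifying factors (a multiplicative combination is
    assumed here; the study's exact formulation may differ)."""
    return entrance_skin_exposure * dgn * phantom_factor * volume_factor * anatomical_factor


# Illustrative numbers only, not from the study; the abstract reports the three
# modifying factors changing AGD by up to ~12%, ~3% and ~1% respectively.
print(round(average_glandular_dose(0.8, 2.2, phantom_factor=0.92,
                                   volume_factor=1.02, anatomical_factor=1.01), 3))
```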
Kim, Jongryul; Vaughn, Ryan M.; Gu, Lisha; Rockman, Roy A.; Arola, Dwayne D.; Schafer, Tara E.; Choi, Kyungkyu; Pashley, David H.; Tay, Franklin R.
2009-01-01
Degradation of hybrid layers created in primary dentin occurs as early as 6 months in vivo. Biomimetic remineralization utilizes “bottom-up” nanotechnology principles for interfibrillar and intrafibrillar remineralization of collagen matrices. This study examined whether imperfect hybrid layers created in primary dentin can be remineralized. Coronal dentin surfaces were prepared from extracted primary molars and bonded using Adper Prompt L-Pop and a composite. One millimeter-thick specimen slabs of the resin-dentin interface were immersed in a Portland cement-based remineralization medium that contained two biomimetic analogs to mimic the sequestration and templating functions of dentin noncollagenous proteins. Specimens were retrieved after 1–6 months. Confocal laser scanning microscopy was employed for evaluating the permeability of hybrid layers to Rhodamine B. Transmission electron microscopy was used to examine the status of remineralization within hybrid layers. Remineralization at different locations of the hybrid layers corresponded with quenching of fluorescence within similar locations of those hybrid layers. Remineralization was predominantly intrafibrillar in nature as interfibrillar spaces were filled with adhesive resin. Biomimetic remineralization of imperfect hybrid layers in primary human dentin is a potential means for preserving bond integrity. The success of the current proof-of-concept, laterally-diffusing remineralization protocol warrants development of a clinically-applicable biomimetic remineralization delivery system. PMID:19768792
Conference Support - Surgery in Extreme Environments - Center for Surgical Innovation
2007-01-01
flights. During this 16-day mission in April 1998, surgical procedures, including thoracotomies, laparotomies, craniotomies, laminectomies, and...fixation, craniotomy, laminectomy, and leg dissection. These experiments also permitted the evaluation of IV insertion using the autonomic protocol and...missions will be required to address: repair of lacerations; wound cement, layered closure; incision and drainage of abscess; needle aspiration of
Mok, Pooi Ling; Leow, Sue Ngein; Koh, Avin Ee-Hwan; Mohd Nizam, Hairul Harun; Ding, Suet Lee Shirley; Luu, Chi; Ruhaslizan, Raduan; Wong, Hon Seng; Halim, Wan Haslina Wan Abdul; Ng, Min Hwei; Idrus, Ruszymah Binti Hj; Chowdhury, Shiplu Roy; Bastion, Catherine Mae-Lynn; Subbiah, Suresh Kumar; Higuchi, Akon; Alarfaj, Abdullah A; Then, Kong Yong
2017-02-08
Mesenchymal stem cells are widely used in many pre-clinical and clinical settings. Despite advances in molecular technology, the migration and homing activities of these cells in in vivo systems are not well understood. Labelling mesenchymal stem cells with gold nanoparticles has no cytotoxic effect and may offer suitable indications for stem cell tracking. Here, we report a simple protocol to label mesenchymal stem cells using 80 nm gold nanoparticles. Once the cells and particles were incubated together for 24 h, the labelled products were injected into the rat subretinal layer. Micro-computed tomography was then conducted on the 15th and 30th day post-injection to track the movement of these cells, as visualized by an area of hyperdensity from the coronal section images of the rat head. In addition, we confirmed the cellular uptake of the gold nanoparticles by the mesenchymal stem cells using transmission electron microscopy. As opposed to other methods, the current protocol provides a simple, less labour-intensive and more efficient labelling mechanism for real-time cell tracking. Finally, we discuss the potential manipulations of gold nanoparticles in stem cells for cell replacement and cancer therapy in ocular disorders or diseases.
Andrady, Anthony; Aucamp, Pieter J; Bais, Alkiviadis; Ballaré, Carlos L; Björn, Lars Olof; Bornman, Janet F; Caldwell, Martyn; Cullen, Anthony P; Erickson, David J; de Gruijl, Frank R; Häder, Donat-P; Ilyas, Mohammad; Kulandaivelu, G; Kumar, H D; Longstreth, Janice; McKenzie, Richard L; Norval, Mary; Paul, Nigel; Redhwi, Halim Hamid; Smith, Raymond C; Solomon, Keith R; Sulzberger, Barbara; Takizawa, Yukio; Tang, Xiaoyan; Teramura, Alan H; Torikai, Ayako; van der Leun, Jan C; Wilson, Stephen R; Worrest, Robert C; Zepp, Richard G
2009-01-01
After the enthusiastic celebration of the 20th Anniversary of the Montreal Protocol on Substances that Deplete the Ozone Layer in 2007, the work for the protection of the ozone layer continues. The Environmental Effects Assessment Panel is one of the three expert panels within the Montreal Protocol. This EEAP deals with the increase of the UV irradiance on the Earth's surface and its effects on human health, animals, plants, biogeochemistry, air quality and materials. For the past few years, interactions of ozone depletion with climate change have also been considered. It has become clear that the environmental problems will be long-lasting. In spite of the fact that the worldwide production of ozone depleting chemicals has already been reduced by 95%, the environmental disturbances are expected to persist for about the next half a century, even if the protective work is actively continued, and completed. The latest full report was published in Photochem. Photobiol. Sci., 2007, 6, 201-332, and the last progress report in Photochem. Photobiol. Sci., 2008, 7, 15-27. The next full report on environmental effects is scheduled for the year 2010. The present progress report 2008 is one of the short interim reports, appearing annually.
Ren, Peng; Qian, Jiansheng
2016-01-01
This study proposes a novel power-efficient and anti-fading clustering scheme based on a cross-layer design, tailored to the time-varying fading characteristics of channels encountered when monitoring coal mine faces with wireless sensor networks. The number of active sensor nodes and a sliding window are set up such that the optimal number of cluster heads (CHs) is selected in each round. Based on a stable expected number of CHs, CH selection is assessed using the channel efficiency between nodes and the base station, estimated with a probe frame, jointly with the surplus energy of each node. Moreover, the transmit power of a node in different periods is regulated by a signal fade margin method. The simulation results demonstrate that, compared with several common algorithms, the power-efficient and fading-aware clustering with a cross-layer (PEAFC-CL) protocol features a stable network topology and adaptability under time-varying signal fading, which effectively prolongs the lifetime of the network and reduces packet loss, making it more applicable to the complex and variable environment characteristic of a coal mine face. PMID:27338380
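As a rough illustration of the two ingredients described, a cluster-head score combining residual energy with probe-frame channel quality and a fade-margin transmit-power rule, the following sketch uses an assumed weighted-sum score and assumed link-budget numbers; neither is the PEAFC-CL algorithm itself.

```python
def ch_score(residual_energy_j, probe_rssi_dbm, w_energy=0.5, w_channel=0.5,
             e_max_j=10.0, rssi_min_dbm=-100.0, rssi_max_dbm=-40.0):
    """Illustrative cluster-head score: weighted sum of normalized residual
    energy and normalized probe-frame channel quality (weights are assumptions)."""
    e_norm = min(max(residual_energy_j / e_max_j, 0.0), 1.0)
    c_norm = (probe_rssi_dbm - rssi_min_dbm) / (rssi_max_dbm - rssi_min_dbm)
    return w_energy * e_norm + w_channel * min(max(c_norm, 0.0), 1.0)


def tx_power_dbm(required_rx_dbm, path_loss_db, fade_margin_db):
    """Fade-margin rule: transmit just enough to cover path loss plus the
    margin estimated for the current period's fading."""
    return required_rx_dbm + path_loss_db + fade_margin_db


# Candidate nodes: (residual energy in J, probe-frame RSSI in dBm); values made up.
candidates = {"n1": (6.2, -62.0), "n2": (8.9, -78.0), "n3": (4.0, -55.0)}
best = max(candidates, key=lambda n: ch_score(*candidates[n]))
print(best, tx_power_dbm(required_rx_dbm=-90.0, path_loss_db=70.0, fade_margin_db=12.0))
```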
Definition and evaluation of the data-link layer of PACnet
NASA Astrophysics Data System (ADS)
Alsafadi, Yasser H.; Martinez, Ralph; Sanders, William H.
1991-07-01
PACnet is a 200-500 Mbps dual-ring fiber optic network designed to implement a picture archiving and communication system (PACS) in a hospital environment. The network consists of three channels: an image transfer channel, a command and control channel, and a real-time data channel. An initial network interface unit (NIU) design for PACnet consisted of a functional description of the protocols and NIU major components. In order to develop a demonstration prototype, additional definition of protocol algorithms of each channel is necessary. Using the International Standards Organization/Open Systems Interconnection (ISO/OSI) reference model as a guide, the definition of the data link layer is extended. This definition covers interface service specifications for the two constituent sublayers: logical link control (LLC) and medium access control (MAC). Furthermore, it describes procedures for data transfer, mechanisms of error detection and fault recovery. A performance evaluation study was then made to determine how the network performs under various application scenarios. The performance evaluation study was performed using stochastic activity networks, which can formally describe the network behavior. The results of the study demonstrate the feasibility of PACnet as an integrated image, data, and voice network for PACS.
Performance Characterization of Global Address Space Applications: A Case Study with NWChem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammond, Jeffrey R.; Krishnamoorthy, Sriram; Shende, Sameer
The use of global address space languages and one-sided communication for complex applications is gaining attention in the parallel computing community. However, lack of good evaluative methods to observe multiple levels of performance makes it difficult to isolate the cause of performance deficiencies and to understand the fundamental limitations of system and application design for future improvement. NWChem is a popular computational chemistry package which depends on the Global Arrays/ARMCI suite for partitioned global address space functionality to deliver high-end molecular modeling capabilities. A workload characterization methodology was developed to support NWChem performance engineering on large-scale parallel platforms. The research involved both the integration of performance instrumentation and measurement in the NWChem software, as well as the analysis of one-sided communication performance in the context of NWChem workloads. Scaling studies were conducted for NWChem on Blue Gene/P and on two large-scale clusters using different generation InfiniBand interconnects and x86 processors. The performance analysis and results show how subtle changes in the runtime parameters related to the communication subsystem could have significant impact on performance behavior. The tool has successfully identified several algorithmic bottlenecks which are already being tackled by computational chemists to improve NWChem performance.
Implementation and Optimization of miniGMG - a Compact Geometric Multigrid Benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Samuel; Kalamkar, Dhiraj; Singh, Amik
2012-12-01
Multigrid methods are widely used to accelerate the convergence of iterative solvers for linear systems used in a number of different application areas. In this report, we describe miniGMG, our compact geometric multigrid benchmark designed to proxy the multigrid solves found in AMR applications. We explore optimization techniques for geometric multigrid on existing and emerging multicore systems including the Opteron-based Cray XE6, Intel Sandy Bridge and Nehalem-based InfiniBand clusters, as well as manycore-based architectures including NVIDIA's Fermi and Kepler GPUs and Intel's Knights Corner (KNC) co-processor. This report examines a variety of novel techniques including communication-aggregation, threaded wavefront-based DRAM communication-avoiding, dynamic threading decisions, SIMDization, and fusion of operators. We quantify performance through each phase of the V-cycle for both single-node and distributed-memory experiments and provide detailed analysis for each class of optimization. Results show our optimizations yield significant speedups across a variety of subdomain sizes while simultaneously demonstrating the potential of multi- and manycore processors to dramatically accelerate single-node performance. However, our analysis also indicates that improvements in networks and communication will be essential to reap the potential of manycore processors in large-scale multigrid calculations.
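miniGMG itself is a 3D benchmark written in C; the following tiny 1D Poisson example in Python shows only the recursive smooth-restrict-correct structure of the V-cycle that the report's optimizations target (weighted Jacobi smoothing, full-weighting restriction, linear interpolation), with an assumed grid size and right-hand side.

```python
import numpy as np


def residual(u, f, h):
    """r = f - A u for the 1D Poisson operator A u = (-u[i-1] + 2 u[i] - u[i+1]) / h^2."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (-u[:-2] + 2 * u[1:-1] - u[2:]) / h**2
    return r


def jacobi(u, f, h, sweeps=3, omega=2 / 3):
    """Weighted Jacobi smoother with homogeneous Dirichlet boundaries."""
    for _ in range(sweeps):
        u_new = u.copy()
        u_new[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
        u = u_new
    return u


def v_cycle(u, f, h):
    u = jacobi(u, f, h)                       # pre-smooth
    if len(u) > 3:
        r = residual(u, f, h)
        r_coarse = np.zeros((len(u) - 1) // 2 + 1)
        r_coarse[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])  # full weighting
        e_coarse = v_cycle(np.zeros_like(r_coarse), r_coarse, 2 * h)   # coarse-grid solve
        e = np.zeros_like(u)
        e[::2] = e_coarse                                              # prolongation
        e[1::2] = 0.5 * (e_coarse[:-1] + e_coarse[1:])
        u = u + e                                                      # coarse-grid correction
    return jacobi(u, f, h)                    # post-smooth


n = 2**7 + 1                       # grid points, including both boundary points
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)   # exact solution of -u'' = f is sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))   # small (at the level of discretization error)
```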
Diagnosing the Causes and Severity of One-sided Message Contention
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tallent, Nathan R.; Vishnu, Abhinav; van Dam, Hubertus
Two trends suggest network contention for one-sided messages is poised to become a performance problem that concerns application developers: an increased interest in one-sided programming models and a rising ratio of hardware threads to network injection bandwidth. Unfortunately, it is difficult to reason about network contention and one-sided messages because one-sided tasks can either decrease or increase contention. We present effective and portable techniques for diagnosing the causes and severity of one-sided message contention. To detect that a message is affected by contention, we maintain statistics representing instantaneous (non-local) network resource demand. Using lightweight measurement and modeling, we identify the portion of a message's latency that is due to contention and whether contention occurs at the initiator or target. We attribute these metrics to program statements in their full static and dynamic context. We characterize contention for an important computational chemistry benchmark on InfiniBand, Cray Aries, and IBM Blue Gene/Q interconnects. We pinpoint the sources of contention, estimate their severity, and show that when message delivery time deviates from an ideal model, there are other messages contending for the same network links. With a small change to the benchmark, we reduce contention up to 50% and improve total runtime as much as 20%.
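The core idea, attributing to contention whatever part of a message's latency exceeds an ideal uncontended model, can be sketched as follows; the linear overhead-plus-serialization model, the tolerance factor, and the sample numbers are assumptions for illustration, not the paper's measurement apparatus.

```python
def ideal_latency_us(message_bytes, overhead_us=2.0, bandwidth_bytes_per_us=10_000.0):
    """Uncontended model: fixed software/injection overhead plus serialization time."""
    return overhead_us + message_bytes / bandwidth_bytes_per_us


def contention_delay_us(measured_us, message_bytes, tolerance=1.10):
    """Portion of a message's latency attributed to contention: anything beyond
    the ideal model (with a small tolerance for measurement noise)."""
    ideal = ideal_latency_us(message_bytes)
    return max(0.0, measured_us - tolerance * ideal)


# Illustrative samples: (message size in bytes, measured latency in microseconds).
samples = [(8_192, 3.1), (8_192, 9.7), (65_536, 8.9), (65_536, 31.4)]
for size, measured in samples:
    print(size, round(contention_delay_us(measured, size), 2))
```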
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, W Michael; Kohlmeyer, Axel; Plimpton, Steven J
The use of accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. In this paper, we present a continuation of previous work implementing algorithms for using accelerators into the LAMMPS molecular dynamics software for distributed memory parallel hybrid machines. In our previous work, we focused on acceleration for short-range models with an approach intended to harness the processing power of both the accelerator and (multi-core) CPUs. To augment the existing implementations, we present an efficient implementation of long-range electrostatic force calculation for molecular dynamics. Specifically, we present an implementation of the particle-particle particle-mesh method based on the work by Harvey and De Fabritiis. We present benchmark results on the Keeneland InfiniBand GPU cluster. We provide a performance comparison of the same kernels compiled with both CUDA and OpenCL. We discuss limitations to parallel efficiency and future directions for improving performance on hybrid or heterogeneous computers.
Design and performance evaluation of a distributed OFDMA-based MAC protocol for MANETs.
Park, Jaesung; Chung, Jiyoung; Lee, Hyungyu; Lee, Jung-Ryun
2014-01-01
In this paper, we propose a distributed MAC protocol for OFDMA-based wireless mobile ad hoc multihop networks, in which the resource reservation and data transmission procedures are operated in a distributed manner. A frame format is designed considering the characteristic of OFDMA that each node can transmit or receive data to or from multiple nodes simultaneously. Under this frame structure, we propose a distributed resource management method including network state estimation and resource reservation processes. We categorize five types of logical errors according to their root causes and show that two of the logical errors are inevitable while three of them are avoided under the proposed distributed MAC protocol. In addition, we provide a systematic method to determine the advertisement period of each node by presenting a clear relation between the accuracy of the estimated network states and the signaling overhead. We evaluate the performance of the proposed protocol with respect to the reservation success rate and the success rate of data transmission. Since our method focuses on avoiding logical errors, it could easily be placed on top of other resource allocation methods that focus on the physical-layer aspects of the resource management problem and interworked with them.
A holistic approach to ZigBee performance enhancement for home automation networks.
Betzler, August; Gomez, Carles; Demirkol, Ilker; Paradells, Josep
2014-08-14
Wireless home automation networks are gaining importance for smart homes. In this ambit, ZigBee networks play an important role. The ZigBee specification defines a default set of protocol stack parameters and mechanisms that is further refined by the ZigBee Home Automation application profile. In a holistic approach, we analyze how the network performance is affected with the tuning of parameters and mechanisms across multiple layers of the ZigBee protocol stack and investigate possible performance gains by implementing and testing alternative settings. The evaluations are carried out in a testbed of 57 TelosB motes. The results show that considerable performance improvements can be achieved by using alternative protocol stack configurations. From these results, we derive two improved protocol stack configurations for ZigBee wireless home automation networks that are validated in various network scenarios. In our experiments, these improved configurations yield a relative packet delivery ratio increase of up to 33.6%, a delay decrease of up to 66.6% and an improvement of the energy efficiency for battery powered devices of up to 48.7%, obtainable without incurring any overhead to the network.
van der Werf, Inez D; Calvano, Cosima D; Palmisano, Francesco; Sabbatini, Luigia
2012-03-09
A simple protocol, based on Bligh-Dyer (BD) extraction followed by MALDI-TOF-MS analysis, for fast identification of paint binders in single microsamples is proposed. For the first time it is demonstrated that the BD method is effective for the simultaneous extraction of lipids and proteins from complex and atypical matrices, such as pigmented paint layers. The protocol makes use of an alternative denaturing anionic detergent (RapiGest™) to improve the efficiency of the protein digestion and purification steps. Detection of various lipid classes, such as triacylglycerols (TAGs) and phospholipids (PLs), and their oxidation by-products was accomplished, whereas proteins could be identified by peptide mass fingerprinting. The effect of pigments on the ageing of lipids and proteins was also investigated. Finally, the proposed protocol was successfully applied to the study of a late-15th century Italian panel painting, allowing the identification of proteinaceous and lipid organic binders, such as egg yolk, egg white, animal glue, casein, and drying oil, in various sections. Copyright © 2011 Elsevier B.V. All rights reserved.
Zaidi, Shabi Abbas; Lee, Seung Mi; Cheong, Won Jo
2011-03-04
Some open tubular (OT) molecularly imprinted polymer (MIP) silica capillary columns have been prepared using atenolol, sulpiride, methyl benzylamine (MBA) and (1-naphthyl)-ethylamine (NEA) as templates by the pre-established generalized preparation protocol. The four MIP thin layers of different templates showed quite different morphologies. The racemic selectivity of each MIP column for the template enantiomers was optimized by changing the eluent composition and pH. The effects of template structure on chiral separation performance have been examined. This work verifies the versatility of the generalized preparation protocol for OT-MIP silica capillary columns by extending its boundary toward templates with basic functional group moieties. This study is the very first report to demonstrate a generalized MIP preparation protocol that is valid for both acidic and basic templates. The chiral separation performances of atenolol and sulpiride by the MIPs of this study were found to be better than or comparable to those of atenolol and sulpiride obtained by non-MIP separation techniques and those of some basic template enantiomers obtained by MIP-based techniques. Copyright © 2011 Elsevier B.V. All rights reserved.
de Gregorio, Cesar; Arias, Ana; Navarrete, Natalia; Cisneros, Rafael; Cohenca, Nestor
2015-07-01
The purpose of this study was to determine whether differences exist in disinfection protocols between endodontists and general dentists. The authors sent an invitation to participate in a Web-based survey to 950 dentists affiliated with the Spanish Board of Dentistry. Participants responded to 9 questions about irrigation protocols and other factors related to disinfection during root canal therapy. A total of 238 (25.05%) study participants successfully completed and returned the surveys. Among these participants, 50% were general dentists and 50% were endodontists. The authors found no statistically significant differences in respondents' first choice of an irrigant solution (that is, sodium hypochlorite), but they noted statistically significant differences in the protocols used by general dentists and by endodontists in relation to the concentration of sodium hypochlorite (P = .0003), the use and type of irrigant used to remove the smear layer (P = 5.39 × 10^-10), the use of adjuncts to irrigation (P = 5.98 × 10^-8), the enlargement of the apical preparation when shaping a necrotic tooth (P = .001), and the maintenance of apical patency throughout the debridement and shaping procedure (P = .04). General dentists and endodontists embrace different disinfection protocols. The results of the survey demonstrated that endodontists keep up to date with protocols published in the literature, whereas general dentists use protocols learned during their dental training. Both groups of clinicians should be aware of the importance of disinfection techniques and their relationship to treatment outcomes. Controlling microorganisms during a root canal treatment, especially in cases with necrotic pulp, is essential to improve treatment outcomes. Clinicians should update their protocols and also consider referring patients to a specialist when their protocols are based on traditional techniques, especially in those cases with necrotic pulp. Copyright © 2015 American Dental Association. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Shepard, Timothy J.; Partridge, Craig; Coulter, Robert
1997-01-01
The designers of the TCP/IP protocol suite explicitly included support of satellites in their design goals. The goal of the Internet Project was to design a protocol which could be layered over different networking technologies to allow them to be concatenated into an internet. The results of this project included two protocols, IP and TCP. IP is the protocol used by all elements in the network and it defines the standard packet format for IP datagrams. TCP is the end-to-end transport protocol commonly used between end systems on the Internet to derive a reliable bi-directional byte-pipe service from the underlying unreliable IP datagram service. Satellite links are explicitly mentioned in Vint Cerf's 2-page article which appeared in 1980 in CCR [2] to introduce the specifications for IP and TCP. In the past fifteen years, TCP has been demonstrated to work over many differing networking technologies, including paths that include satellite links. So if satellite links were in the minds of the designers from the beginning, what is the problem? The problem is that the performance of TCP has in some cases been disappointing. A goal of the authors of the original specification of TCP was to specify only enough behavior to ensure interoperability. The specification left a number of important decisions, in particular how much data is to be sent when, to the implementor. This was done deliberately. By leaving performance-related decisions to the implementor, this would allow the protocol TCP to be tuned and adapted to different networks and situations in the future without the need to revise the specification of the protocol, or break interoperability. Interoperability would continue while future implementations would be allowed flexibility to adapt to needs which could not be anticipated at the time of the original protocol design.
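One concrete way the "how much data to send when" decision bites over satellite paths is window-limited throughput: with a geostationary round-trip time, a classic 64 KiB window caps throughput far below typical link rates. The numbers below are illustrative.

```python
def window_limited_throughput_bps(window_bytes, rtt_s):
    """Steady-state TCP throughput when limited only by the window: window / RTT."""
    return 8 * window_bytes / rtt_s


def bandwidth_delay_product_bytes(link_bps, rtt_s):
    """Window needed to keep the satellite pipe full."""
    return link_bps * rtt_s / 8


rtt = 0.56                       # ~560 ms round trip via a geostationary satellite
print(window_limited_throughput_bps(64 * 1024, rtt) / 1e6)   # ~0.94 Mbit/s cap with a 64 KiB window
print(bandwidth_delay_product_bytes(10e6, rtt) / 1024)       # ~683 KiB window needed to fill a 10 Mbit/s link
```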
Rao, Harsha L; Venkatesh, Chirravuri R; Vidyasagar, Kelli; Yadav, Ravi K; Addepalli, Uday K; Jude, Aarthi; Senthil, Sirisha; Garudadri, Chandra S
2014-12-01
To evaluate the (i) effects of biological (age and axial length) and instrument-related [typical scan score (TSS) and corneal birefringence] parameters on the retinal nerve fiber layer (RNFL) measurements and (ii) repeatability of RNFL measurements with the enhanced corneal compensation (ECC) protocol of scanning laser polarimetry (SLP) in healthy subjects. In a cross-sectional study, 140 eyes of 73 healthy subjects underwent RNFL imaging with the ECC protocol of SLP. Linear mixed modeling methods were used to evaluate the effects of age, axial length, TSS, and corneal birefringence on RNFL measurements. One randomly selected eye of 48 subjects from the cohort underwent 3 serial scans during the same session to determine the repeatability. Age significantly influenced all RNFL measurements. RNFL measurements decreased by 1 µm for every decade increase in age. TSS affected the overall average RNFL measurement (β=-0.62, P=0.003), whereas residual anterior segment retardance affected the superior quadrant measurement (β=1.14, P=0.01). Axial length and corneal birefringence measurements did not influence RNFL measurements. Repeatability, as assessed by the coefficient of variation, ranged between 1.7% for the overall average RNFL measurement and 11.4% for the nerve fiber indicator. Age significantly affected all RNFL measurements with the ECC protocol of SLP, whereas TSS and residual anterior segment retardance affected the overall average and the superior average RNFL measurements, respectively. Axial length and corneal birefringence measurements did not influence any RNFL measurements. RNFL measurements had good intrasession repeatability. These results are important while evaluating the change in structural measurements over time in glaucoma patients.
Morais, Jéssika Mayhara Pereira; Victorino, Keli Regina; Escalante-Otárola, Wilfredo Gustavo; Jordão-Basso, Keren Cristina Fagundes; Palma-Dibb, Regina Guenka; Kuga, Milton Carlos
2018-06-15
The aim of the study was to evaluate the effects of performing acid etching on the dentin surface immediately (I) or 7 days (D) after calcium silicate-based sealer (MTA Fillapex) removal, using 95% ethanol (E) or xylol (X). In the first study, 60 bovine incisor dentin specimens were impregnated with sealer and divided into six groups (n = 10): (EI), E + I; (XI), X + I; (ED), E + D; (XD), X + D; (UN), untreated; and (MR), mechanical removal of sealer. Scanning electron microscopy (SEM) images (500×) were obtained from each specimen and scores assessed the persistence of sealer residues. In the second study, 60 specimens were similarly treated; however, the specimens were restored with composite resin after the removal protocols. Hybrid layer formation was evaluated using confocal laser microscopy (1,024×). In the third study, 60 specimens were similarly obtained and subjected to a micro-shear test to evaluate the effects of the removal protocols on the bond strength of an etch-and-rinse adhesive system to dentin. XI showed the highest persistence of sealer residues (p < .05), similar to MR (p > .05). EI showed the greatest hybrid layer extension, except in relation to UN (p < .05). XI and MR presented the lowest bond strength of the adhesive system to dentin (p < .05). Acid etching immediately after calcium silicate-based endodontic sealer removal using xylol presented the highest residue persistence and negatively affected the adhesive interface between dentin and the etch-and-rinse adhesive system. © 2018 Wiley Periodicals, Inc.
Effect of Intermediate Flush Using Different Devices to Prevent Chemical Smear Layer Formation.
Silva, Camilla Corrêa; Ferreira, Vivian Maria Durange; De-Deus, Gustavo; Herrera, Daniel Rodrigo; Prado, Maíra do; Silva, Emmanuel João Nogueira Leal da
2017-01-01
This study compared the effect of intermediate flush with distilled water delivered by conventional irrigation, EndoVac microcannula or Self-Adjusting File (SAF) system in the prevention of chemical smear layer (CSL) formation. Thirty human premolars were used. Canals were prepared with the Reciproc system and 5.25% NaOCl. After chemomechanical preparation, samples were divided into 3 groups (n=10) according to the intermediate irrigation protocol with distilled water using: conventional irrigation, EndoVac microcannula or SAF. A final flush with 2% chlorhexidine solution was used and scanning electron microscopy was performed to assess protocol effectiveness. Two calibrated evaluators attributed scores according to the presence or absence of CSL on the surface of the root canal walls at the coronal, middle and apical thirds, as follows: (1) no CSL; (2) small amounts of CSL; (3) moderate CSL; and (4) heavy CSL. Differences between protocols were analyzed with Kruskal-Wallis and Mann-Whitney U tests. Friedman and Wilcoxon signed rank tests were used for comparison between each root canal third. SAF resulted in less formation of CSL when compared with the conventional irrigation and EndoVac microcannula (p<0.05). When root canal thirds were analyzed, conventional irrigation and EndoVac groups showed less CSL formation at coronal and middle thirds in comparison to the apical third (p<0.05). In the SAF group, there was no difference among the thirds (p>0.05). It may be concluded that an intermediate flush of distilled water, delivered by the SAF system, resulted in a better reduction of CSL formation during chemomechanical preparation.
Shiravand, Fatemeh; Hutchinson, John M.; Calventus, Yolanda; Ferrando, Francesc
2014-01-01
Three different protocols for the preparation of polymer layered silicate nanocomposites based upon a tri-functional epoxy resin, triglycidyl para-amino phenol (TGAP), have been compared in respect of the cure kinetics, the nanostructure and their mechanical properties. The three preparation procedures involve 2 wt% and 5 wt% of organically modified montmorillonite (MMT), and are: isothermal cure at selected temperatures; pre-conditioning of the resin-clay mixture before isothermal cure; incorporation of an initiator of cationic homopolymerisation, a boron tri-fluoride methyl amine complex, BF3·MEA, within the clay galleries. It was found that features of the cure kinetics and of the nanostructure correlate with the measured impact strength of the cured nanocomposites, which increases as the degree of exfoliation of the MMT is improved. The best protocol for toughening the TGAP/MMT nanocomposites is by the incorporation of 1 wt% BF3·MEA into the clay galleries of nanocomposites containing 2 wt% MMT. PMID:28788672
NASA Astrophysics Data System (ADS)
Gabrielli, Alessandro; Loddo, Flavio; Ranieri, Antonio; De Robertis, Giuseppe
2008-10-01
This work is aimed at defining the architecture of a new digital ASIC, namely the Slow-Control Adapter (SCA), which will be designed in a commercial 130-nm CMOS technology. This chip will be embedded within a high-speed data acquisition optical link (GBT) to control and monitor the front-end electronics in future high-energy physics experiments. The GBT link provides a transparent transport layer between the SCA and the control electronics in the counting room. The proposed SCA supports a variety of common bus protocols to interface with end-user general-purpose electronics. Between the GBT and the SCA, a standard 100 Mb/s IEEE-802.3 compatible protocol will be implemented. This standard protocol allows off-line tests of the prototypes using commercial components that support the same standard. The project is justified because embedded applications in modern large HEP experiments require particular care to assure the lowest possible power consumption while still offering the highest reliability demanded by very large particle detectors.
Industrial WSN Based on IR-UWB and a Low-Latency MAC Protocol
NASA Astrophysics Data System (ADS)
Reinhold, Rafael; Underberg, Lisa; Wulf, Armin; Kays, Ruediger
2016-07-01
Wireless sensor networks for industrial communication require high reliability and low latency. As current wireless sensor networks do not entirely meet these requirements, novel system approaches need to be developed. Since ultra wideband communication systems seem to be a promising approach, this paper evaluates the performance of the IEEE 802.15.4 impulse-radio ultra-wideband physical layer and the IEEE 802.15.4 Low Latency Deterministic Network (LLDN) MAC for industrial applications. Novel approaches and system adaptions are proposed to meet the application requirements. In this regard, a synchronization approach based on circular average magnitude difference functions (CAMDF) and on a clean template (CT) is presented for the correlation receiver. An adapted MAC protocol titled aggregated low latency (ALL) MAC is proposed to significantly reduce the resulting latency. Based on the system proposals, a hardware prototype has been developed, which proves the feasibility of the system and visualizes the real-time performance of the MAC protocol.
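The CAMDF-based synchronization can be illustrated with a generic software sketch: compute the circular average magnitude difference between the received frame and the clean template at every candidate lag and take the minimizing lag as the timing offset. This is only the core computation, not the authors' receiver hardware; the signal length, noise level, and offset are made up.

```python
import numpy as np


def camdf(received: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Circular average magnitude difference function: for each candidate lag k,
    the mean of |received[n] - template[(n - k) mod N]|; the minimizing lag
    estimates the timing offset of the received frame."""
    n = len(template)
    return np.array([np.mean(np.abs(received[:n] - np.roll(template, k)))
                     for k in range(n)])


rng = np.random.default_rng(0)
template = rng.standard_normal(64)              # "clean template" known at the receiver
true_offset = 17
received = np.roll(template, true_offset) + 0.05 * rng.standard_normal(64)
scores = camdf(received, template)
print(int(np.argmin(scores)))                   # recovers 17 (the timing offset)
```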
Space Flight Middleware: Remote AMS over DTN for Delay-Tolerant Messaging
NASA Technical Reports Server (NTRS)
Burleigh, Scott
2011-01-01
This paper describes a technique for implementing scalable, reliable, multi-source multipoint data distribution in space flight communications -- Delay-Tolerant Reliable Multicast (DTRM) -- that is fully supported by the "Remote AMS" (RAMS) protocol of the Asynchronous Message Service (AMS) proposed for standardization within the Consultative Committee for Space Data Systems (CCSDS). The DTRM architecture enables applications to easily "publish" messages that will be reliably and efficiently delivered to an arbitrary number of "subscribing" applications residing anywhere in the space network, whether in the same subnet or in a subnet on a remote planet or vehicle separated by many light minutes of interplanetary space. The architecture comprises multiple levels of protocol, each included for a specific purpose and allocated specific responsibilities: "application AMS" traffic performs end-system data introduction and delivery subject to access control; underlying "remote AMS" directs this application traffic to populations of recipients at remote locations in a multicast distribution tree, enabling the architecture to scale up to large networks; further underlying Delay-Tolerant Networking (DTN) Bundle Protocol (BP) advances RAMS protocol data units through the distribution tree using delay-tolerant store-and-forward methods; and further underlying reliable "convergence-layer" protocols ensure successful data transfer over each segment of the end-to-end route. The result is scalable, reliable, delay-tolerant multi-source multicast that is largely self-configuring.
Aucamp, Pieter J
2007-03-01
The ozone molecule contains three atoms of oxygen and is mainly formed by the action of the ultraviolet rays of the sun on the diatomic oxygen molecules in the upper part of the Earth's atmosphere (called the stratosphere). Atmospheric pollution near the Earth's surface can form localized areas of ozone. The stratospheric ozone layer protects life on Earth by absorbing most of the harmful ultraviolet radiation from the sun. In the mid 1970s it was discovered that some manmade products destroy ozone molecules in the stratosphere. This destruction can result in damage to ecosystems and to materials such as plastics. It may cause an increase in human diseases such as skin cancers and cataracts. The discovery of the role of the synthetic ozone-depleting chemicals such as chlorofluorocarbons (CFCs) stimulated increased research and monitoring in this field. Computer models predicted a disaster if no action was taken to protect the ozone layer. Based on this research and monitoring, the nations of the world took action in 1985 with the Vienna Convention for the Protection of the Ozone Layer followed by the Montreal Protocol on Substances that Deplete the Ozone Layer in 1987. The Convention and Protocol were amended and adjusted several times as new knowledge was obtained. The Meetings of the Parties to the Montreal Protocol appointed three Assessment Panels to review the progress in scientific knowledge on their behalf. These panels are the Scientific Assessment Panel, the Technological and Economic Assessment Panel and the Environmental Effects Assessment Panel. Each panel covers a designated area and there is a natural level of overlap. The main reports of the Panels are published every four years as required by the Meeting of the Parties. All the reports have an executive summary that is distributed more widely than the main report itself. It became customary to add a set of questions and answers--mainly for non-expert readers--to the executive summaries. This document contains the questions and answers prepared by experts who comprise the Environmental Assessment Panel. It is based mainly on the 2006 report of the Panel but also contains information from previous assessments. Readers who need detailed information on any question should consult the full reports for a more complete scientific discussion. This set of questions refers mainly to the environmental effects of ozone depletion and climate change. The report of the Scientific Assessment Panel contains questions and answers related to the other scientific issues addressed by that Panel. All these reports can be found on the UNEP website (http://ozone.unep.org).
Low Latency MAC Protocol in Wireless Sensor Networks Using Timing Offset
NASA Astrophysics Data System (ADS)
Choi, Seung Sik
This paper proposes a low latency MAC protocol that can be used in sensor networks. To extend the lifetime of sensor nodes, the conventional solution is to synchronize the active/sleep periods of all sensor nodes. However, with these synchronized sensor nodes, a packet at an intermediate node must wait until the next node wakes up before it can be forwarded. This induces a large delay in sensor nodes. To solve this latency problem, a clustered sensor network which uses two types of sensor nodes and a layered architecture is considered. Cluster heads in each cluster are synchronized with different timing offsets to reduce the sleep delay. Using this concept, the latency problem can be solved and more efficient power usage can be obtained.
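A small sketch of the latency argument, under an assumed model in which each cluster head wakes once per cycle at its own offset and a packet waits at every hop for the next wake-up, is shown below; the cycle length, offsets, and per-hop transmission time are illustrative.

```python
def forwarding_delay(arrival_in_cycle, hop_offsets, cycle, tx_time=0.01):
    """Total delay over a chain of cluster heads, each waking once per cycle at
    its own offset; a packet waits for each hop's next wake-up, then takes
    tx_time to reach the following hop."""
    t = arrival_in_cycle % cycle
    total = 0.0
    for offset in hop_offsets:
        wait = (offset - t) % cycle
        total += wait + tx_time
        t = (offset + tx_time) % cycle
    return total


cycle = 1.0                                      # one active/sleep period (arbitrary units)
hops = 5
synchronized = [0.0] * hops                      # every cluster head wakes at the same phase
staggered = [(i + 1) * 0.02 for i in range(hops)]   # each next hop wakes slightly later
print(forwarding_delay(0.005, synchronized, cycle))  # ~5.0: almost a full cycle of sleep delay per hop
print(forwarding_delay(0.005, staggered, cycle))     # ~0.1: the packet rides the staggered wake-ups
```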
Photonics and other approaches to high speed communications
NASA Technical Reports Server (NTRS)
Maly, Kurt
1992-01-01
Our research group of 4 faculty and about 10-15 graduate students was actively involved (as a group) in the development of computer communication networks for the last five years. Many of its individuals have been involved in related research for a much longer period. The overall research goal is to extend network performance to higher data rates, to improve protocol performance at most ISO layers and to improve network operational performance. We briefly state our research goals, then discuss the research accomplishments and direct your attention to attached and/or published papers which cover the following topics: scalable parallel communications; high performance interconnection between high data rate networks; and a simple, effective media access protocol system for integrated, high data rate networks.
Gutiérrez-Cepeda, L; Fernández, A; Crespo, F; Gosálvez, J; Serres, C
2011-03-01
For many years in human assisted-reproduction procedures there have been special protocols to prepare and improve sperm quality. Colloidal centrifugation (CC) is a useful technique that has been proven to enhance semen quality by selection of the best spermatozoa for different species. Its use is recommended to improve the fertility of subfertile stallions, but current CC protocols are clinically complicated to apply in equine sperm processing due to economic and technical difficulties. The aim of this study was to determine the optimal processing procedures to adapt the use of a CC product (EquiPure™) in the equine reproduction industry. A total of nineteen ejaculates were collected from 10 Purebred Spanish Horses (P.R.E. horses) using a Missouri artificial vagina. Gel-free semen aliquots were analyzed prior to treatment (control). Semen was subjected to one of six CC protocols with EquiPure™ and centrifuged samples were statistically evaluated by ANOVA and Duncan tests (p<0.05) for sperm quality and recovery rate. Colloidal centrifugation yielded higher values for the LIN, STR and BCF variables, and the DNA fragmentation index tended to be lower in most of the CC protocols. The studied protocols were shown to be as efficient in improving equine sperm quality as the current commercial EquiPure™ protocol, with the added advantage of being much more economical and simple to use. According to these results, it seems possible to incorporate single-layer and/or high-volume colloidal centrifugation protocols, which would make them simple, economical and clinically viable for the equine sperm processing procedure. Copyright © 2011 Elsevier B.V. All rights reserved.
Time Synchronization and Distribution Mechanisms for Space Networks
NASA Technical Reports Server (NTRS)
Woo, Simon S.; Gao, Jay L.; Clare, Loren P.; Mills, David L.
2011-01-01
This work discusses research on the problems of synchronizing and distributing time information between spacecraft based on the Network Time Protocol (NTP), a standard time synchronization protocol widely used in terrestrial networks. The Proximity-1 Space Link Interleaved Time Synchronization (PITS) Protocol was designed and developed for synchronizing spacecraft that are in proximity, defined here as less than 100,000 km apart. A particular application is synchronization between a Mars orbiter and rover. Lunar scenarios as well as outer-planet deep space mother-ship-probe missions may also apply. The spacecraft with more accurate time information functions as a time server, and the other spacecraft functions as a time client. PITS can be easily integrated into and adapted to the CCSDS Proximity-1 Space Link Protocol with minor modifications. In particular, PITS can take advantage of the timestamping strategy that the underlying link layer functionality provides for accurate time offset calculation. The PITS algorithm achieves time synchronization with eight consecutive space network time packet exchanges between two spacecraft. PITS can detect and avoid possible errors from receiving duplicate and out-of-order packets by comparing them with the current state variables and timestamps. Further, PITS is able to detect error events and autonomously recover from unexpected events that can possibly occur during the time synchronization and distribution process. This capability achieves an additional level of protocol protection on top of CRC or Error Correction Codes. PITS is a lightweight and efficient protocol, eliminating the need for explicit frame sequence numbers and long buffer storage. The PITS protocol is capable of providing time synchronization and distribution services for a more general domain where multiple entities need to achieve time synchronization using a single point-to-point link.
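Since PITS is based on NTP, the core arithmetic of one timestamped request/response exchange is the standard NTP offset and delay calculation sketched below (the timestamp values are invented for illustration; this is not the PITS packet format or algorithm itself).

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Classic NTP calculation from one request/response exchange.

    t1: client transmit time   t2: server receive time
    t3: server transmit time   t4: client receive time
    Returns (clock offset of the server relative to the client, round-trip delay).
    """
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Hypothetical timestamps (seconds); here the server clock runs ~0.5 s ahead.
offset, delay = ntp_offset_delay(t1=100.000, t2=100.540, t3=100.560, t4=100.120)
print(f"offset ~ {offset:+.3f} s, round-trip delay ~ {delay:.3f} s")
```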
Umar, Amara; Javaid, Nadeem; Ahmad, Ashfaq; Khan, Zahoor Ali; Qasim, Umar; Alrajeh, Nabil; Hayat, Amir
2015-06-18
Performance enhancement of Underwater Wireless Sensor Networks (UWSNs) in terms of throughput maximization, energy conservation and Bit Error Rate (BER) minimization is a potential research area. However, limited available bandwidth, high propagation delay, highly dynamic network topology, and high error probability lead to performance degradation in these networks. In this regard, many cooperative communication protocols have been developed that investigate either the physical layer or the Medium Access Control (MAC) layer; however, the network layer is still unexplored. More specifically, cooperative routing has not yet been jointly considered with sink mobility. Therefore, this paper aims to enhance the network reliability and efficiency via dominating-set-based cooperative routing and sink mobility. The proposed work is validated via simulations, which show improved performance in terms of the selected performance metrics.
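The routing scheme builds on a dominating set of the network. As an illustration of that building block only (this is not the authors' algorithm, and the toy topology is assumed), a standard greedy approximation can be sketched as follows:

```python
def greedy_dominating_set(adjacency):
    """Greedy approximation of a dominating set: repeatedly pick the node that
    covers the most still-uncovered nodes (itself plus its neighbours).
    `adjacency` maps node -> set of neighbour nodes."""
    uncovered = set(adjacency)
    dominators = set()
    while uncovered:
        best = max(adjacency, key=lambda n: len(({n} | adjacency[n]) & uncovered))
        dominators.add(best)
        uncovered -= {best} | adjacency[best]
    return dominators

# Toy 6-node topology (assumed, for illustration only).
topology = {
    0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 4},
    3: {1, 5}, 4: {2, 5},    5: {3, 4},
}
print(greedy_dominating_set(topology))
```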
Aksel, Hacer; Serper, Ahmet
2017-01-01
The aim of this study was to compare the ability of 17% ethylenediaminetetraacetic acid (EDTA) and QMix, with different concentrations and exposure times of initial sodium hypochlorite (NaOCl), to remove the smear layer from the root canals. Eighty maxillary central incisors were used. After instrumentation, the teeth were divided into eight experimental groups according to the initial and final rinse. NaOCl at 2.5% or 5% was used during instrumentation and applied for 1 or 3 min as the post-instrumentation initial irrigant, and 17% EDTA or QMix was used as the final irrigant. The apical and middle parts of the specimens were observed by scanning electron microscopy. Data were analyzed using the Kruskal-Wallis, Mann-Whitney, and Friedman tests. Regardless of the type of final irrigant, QMix allowed more smear layer removal than EDTA after using 5% initial NaOCl for 3 min. In the apical part of the root canal walls, the smear layer was not completely removed. QMix and EDTA were similarly effective in smear layer removal at the middle parts of the root canal regardless of the concentration and exposure time of initial NaOCl, while none of the irrigation protocols was able to remove the smear layer at the apical parts.
TMN: Introduction and interpretation
NASA Astrophysics Data System (ADS)
Pras, Aiko
An overview of Telecommunications Management Network (TMN) status is presented. Its relation to Open System Interconnection (OSI) systems management is given and the commonalities and distinctions are identified. Those aspects that distinguish TMN from OSI management are introduced; TMN's functional and physical architectures and TMN's logical layered architecture are discussed. An analysis of the concepts used by these architectures (reference point, interface, function block, and building block) is given. The use of these concepts to express geographical distribution and functional layering is investigated. This aspect is of interest for understanding how OSI management protocols can be used in a TMN environment. A statement regarding the applicability of TMN as a model that helps the designers of (management) networks is given.
NASA Astrophysics Data System (ADS)
Kato, Riku; Frusawa, Hiroshi
2015-07-01
We investigated the individual properties of various polyion-coated bubbles with a mean diameter ranging from 300 to 500 nm. Dark field microscopy allows one to track the individual particles of the submicron bubbles (SBs) encapsulated by the layer-by-layer (LbL) deposition of cationic and anionic polyelectrolytes (PEs). Our focus is on the two-step charge reversals of PE-SB complexes: the first is a reversal from negatively charged bare SBs with no PEs added to positive SBs encapsulated by polycations (monolayer deposition), and the second is overcharging into negatively charged PE-SB complexes due to the subsequent addition of polyanions (double-layer deposition). The details of these phenomena have been clarified through the analysis of a number of trajectories of various PE-SB complexes that experience either Brownian motion or electrophoresis. The contrasting results obtained from the analysis were as follows: an amount in excess of the stoichiometric ratio of the cationic polymers was required for the first charge reversal, whereas the stoichiometric addition of the polyanions led to the electrical neutralization of the PE-SB complex particles. The recovery of the stoichiometry in the double-layer deposition paves the way for fabricating multi-layered SBs encapsulated solely with anionic and cationic PEs, which provides a simple protocol to create smart agents for either drug delivery or ultrasound contrast imaging.
Implementation Of Secure 6LoWPAN Communications For Tactical Wireless Sensor Networks
2016-09-01
Wireless sensor networks (WSN) consist of power-constrained devices spread throughout a region-of-interest to provide data extraction in real time... 6LoWPAN is a communication protocol for low power wireless personal area networks. Since the IEEE 802.15.4 standard only defines the first two layers of the Open Systems Interconnection model...
2002-09-01
RS232.java: serial communication port class to the Bluetooth module; HCI.java: Host Control Interface class; L2CAP.java: Logical Link Control and Adaptation class... a standard protocol for transporting IP datagrams over a point-to-point link, designed to run over RFCOMM to accomplish point-to-point connections... Figure 2 of the source report depicts the Bluetooth layers: Radio, Baseband/Link Controller, Link Manager, Host Controller Interface, and Logical Link Control and Adaptation (from Ref. [3]).
Design and FPGA implementation for MAC layer of Ethernet PON
NASA Astrophysics Data System (ADS)
Zhu, Zengxi; Lin, Rujian; Chen, Jian; Ye, Jiajun; Chen, Xinqiao
2004-04-01
Ethernet passive optical network (EPON), which combines low cost, high bandwidth and support for multiple services, appears to be one of the best candidates for the next-generation access network. The work of standardizing EPON as an access network solution is still underway in the IEEE 802.3ah Ethernet in the First Mile (EFM) task force. The final release is expected in 2004. Up to now, there has been no standard application specific integrated circuit (ASIC) chip available which fulfills the functions of the media access control (MAC) layer of EPON. The MAC layer in an EPON system has many functions, such as point-to-point emulation (P2PE), Ethernet MAC functionality, multi-point control protocol (MPCP), network operation, administration and maintenance (OAM) and link security. To implement the functions mentioned above, an embedded real-time operating system (RTOS) and a flexible programmable logic device (PLD) with an embedded processor are used. The software and hardware functions in the MAC layer are realized by programming the embedded microprocessor and a field programmable gate array (FPGA). Finally, some experimental results are given in this paper. The method stated here can provide a valuable reference for developing an EPON MAC layer ASIC.
Influence of curing protocol and ceramic composition on the degree of conversion of resin cement
Lanza, Marcos Daniel Septimio; Andreeta, Marcello Rubens Barsi; Pegoraro, Thiago Amadei; Pegoraro, Luiz Fernando; Carvalho, Ricardo Marins De
2017-01-01
Due to increasing aesthetic demand, ceramic crowns are widely used in different situations. However, to obtain a good long-term prognosis of restorations, good conversion of the resin cement is necessary. Objective: To evaluate the degree of conversion (DC) of one light-cure and two dual-cure resin cements under a simulated clinical cementation of ceramic crowns. Material and Methods: Prepared teeth were randomly split according to the ceramic material, resin cement and curing protocol. The crowns were cemented as per the manufacturer's directions and photoactivated either from the occlusal surface only for 60 s, or from the buccal, occlusal and lingual surfaces with an exposure time of 20 s on each aspect. After cementation, the specimens were stored in deionized water at 37°C for 7 days. Specimens were transversally sectioned from occlusal to cervical surfaces and the DC was determined along the cement line with three measurements taken and averaged from the buccal, lingual and approximal aspects using micro-Raman spectroscopy (Alpha 300R/WITec®). Data were analyzed by 3-way ANOVA and Tukey test at α=5%. Results: Statistical analysis showed significant differences among cements, curing protocols and ceramic type (p<0.001). The 3x20 curing protocol resulted in higher DC for all tested conditions; lower DC was observed for Zr ceramic crowns; Duolink resin cement resulted in higher DC regardless of ceramic composition and curing protocol. Conclusion: The DC of resin cement layers was dependent on the curing protocol and type of ceramic. PMID:29211292
Schindera, Sebastian T; Nelson, Rendon C; Toth, Thomas L; Nguyen, Giao T; Toncheva, Greta I; DeLong, David M; Yoshizumi, Terry T
2008-02-01
The purpose of this study was to evaluate in a phantom study the effect of patient size on radiation dose for abdominal MDCT with automatic tube current modulation. One or two 4-cm-thick circumferential layers of fat-equivalent material were added to the abdomen of an anthropomorphic phantom to simulate patients of three sizes: small (cross-sectional dimensions, 18 x 22 cm), average size (26 x 30 cm), and oversize (34 x 38 cm). Imaging was performed with a 64-MDCT scanner with combined z-axis and xy-axis tube current modulation according to two protocols: protocol A had a noise index of 12.5 H, and protocol B, 15.0 H. Radiation doses to three abdominal organs and the skin were assessed. Image noise also was measured. Despite increasing patient size, the image noise measured was similar for protocol A (range, 11.7-12.2 H) and protocol B (range, 13.9-14.8 H) (p > 0.05). With the two protocols, in comparison with the dose of the small patient, the abdominal organ doses of the average-sized patient and the oversized patient increased 161.5-190.6% and 426.9-528.1%, respectively (p < 0.001). The skin dose increased as much as 268.6% for the average-sized patient and 816.3% for the oversized patient compared with the small patient (p < 0.001). Oversized patients undergoing abdominal MDCT with tube current modulation receive significantly higher doses than do small patients. The noise index needs to be adjusted to the body habitus to ensure dose efficiency.
Single cell–resolution western blotting
Kang, Chi-Chih; Yamauchi, Kevin A; Vlassakis, Julea; Sinkala, Elly; Duncombe, Todd A; Herr, Amy E
2017-01-01
This protocol describes how to perform western blotting on individual cells to measure cell-to-cell variation in protein expression levels and protein state. Like conventional western blotting, single-cell western blotting (scWB) is particularly useful for protein targets that lack selective antibodies (e.g., isoforms) and in cases in which background signal from intact cells is confounding. scWB is performed on a microdevice that comprises an array of microwells molded in a thin layer of a polyacrylamide gel (PAG). The gel layer functions as both a molecular sieving matrix during PAGE and a blotting scaffold during immunoprobing. scWB involves five main stages: (i) gravity settling of cells into microwells; (ii) chemical lysis of cells in each microwell; (iii) PAGE of each single-cell lysate; (iv) exposure of the gel to UV light to blot (immobilize) proteins to the gel matrix; and (v) in-gel immunoprobing of immobilized proteins. Multiplexing can be achieved by probing with antibody cocktails and using antibody stripping/reprobing techniques, enabling detection of 10+ proteins in each cell. We also describe microdevice fabrication for both uniform and pore-gradient microgels. To extend in-gel immunoprobing to gels of small pore size, we describe an optional gel de-cross-linking protocol for more effective introduction of antibodies into the gel layer. Once the microdevice has been fabricated, the assay can be completed in 4–6 h by microfluidic novices and it generates high-selectivity, multiplexed data from single cells. The technique is relevant when direct measurement of proteins in single cells is needed, with applications spanning the fundamental biosciences to applied biomedicine. PMID:27466711
NASA Astrophysics Data System (ADS)
Romanin, Marco; Aleksandra Bitner, Maria; Brand, Uwe
2017-04-01
Brachiopods secrete low-Mg calcite shells in near equilibrium with the surrounding sea water, with respect to their secondary and tertiary layers. For this reason, in recent years they have been intensively studied as archives for oceanographic and environmental proxies. The primary layer has been shown not to be deposited in equilibrium with the ambient sea water, leading to a novel cleaning protocol proposed by Zaki et al. (2015). To improve on existing proxies, shell microstructure and growth have to be taken into account in their applications. The secretion of the primary layer is known to be external to the shell, but in SEM investigations of Liothyrella uva and L. neozelanica we discovered that the primary layer has its origin within the fibres of the secondary layer. Furthermore, the primary layer calcite is not a continuum but instead consists of a 'new' band for each major growth increment. There is overlap between the preceding and subsequent 'bands' (or shingles) of the primary layer, which may extend into the secondary/tertiary layer. This finding may lead to more comprehensive knowledge of shell microstructure processes in L. uva and L. neozelanica that may be applied and extended to other modern and fossil brachiopods, including age dating of brachiopods. This discovery may make brachiopod archives more reliable and consistent proxies when applied to and interpreting their geological record.
Shittu, Aminu; Raji, Abdullahi Abdullahi; Madugu, Shuaibu A; Hassan, Akinola Waheed; Fasina, Folorunso Oludayo
2014-09-12
Layer chickens are exposed to high risks of production losses and mortality with impact on farm profitability. The harsh tropical climate and severe disease outbreaks, poor biosecurity, sub-minimal vaccination and treatment protocols, poor management practices, poor chick quality, feed-associated causes, and unintended accidents oftentimes aggravate mortality and negatively affect egg production. The objectives of this study were to estimate the probability of survival and evaluate risk factors for death under different intensive housing conditions in a tropical climate, and to assess the production performance in the housing systems. Daily mean mortality percentages and egg production figures were significantly lower and higher in the sealed pens and open houses (P < 0.001), respectively. The total mean feed consumption/bird/day was similar for the open-sided and sealed pens, but the mean feed quantity per egg produced was significantly lower in the sealed pens (P < 0.005). Seasons impacted mortality differently, with the hot-dry season producing a significantly higher risk of mortality (61 times) and reduced egg production. Other parameters also differed except the egg production during the cold-dry season. Layers in sealed pens appear to have a higher probability of survival, and the Kaplan-Meier survival curves differed for each pen; layers ≥ 78 weeks old have a higher probability of survival compared with younger chickens, and the 19-38 weeks age category is at the highest risk of death (P < 0.001). The hazard ratio for mortality of layers raised in sealed pens was 0.568 (56.8%). Reasons for spiked mortality in layer chickens may not always be associated with disease. A hot-dry climatic environment is associated with heat stress, waning immunity and inefficient feed usage, and increases the probability of death with reduced egg production; usage of environmentally controlled buildings in conditions where the environmental temperature may rise significantly above 25°C will reduce this impact. Since younger birds (19-38 weeks) are at higher risk of death due to the stress of coming into production, management changes and diseases, critical implementation of protocols that will reduce death at this precarious period becomes mandatory. Whether older chickens' better protection from death is associated with the many prophylactic and metaphylactic regimens of medications/vaccination will need further investigation.
Hybrid evolutionary computing model for mobile agents of wireless Internet multimedia
NASA Astrophysics Data System (ADS)
Hortos, William S.
2001-03-01
The ecosystem is used as an evolutionary paradigm of natural laws for distributed information retrieval via mobile agents to allow the computational load to be added to server nodes of wireless networks, while reducing the traffic on communication links. Based on the Food Web model, a set of computational rules of natural balance forms the outer stage to control the evolution of mobile agents providing multimedia services with a wireless Internet protocol (WIP). The evolutionary model shows how mobile agents should behave with the WIP, in particular, how mobile agents can cooperate, compete and learn from each other, based on an underlying competition for radio network resources to establish the wireless connections to support the quality of service (QoS) of user requests. Mobile agents are also allowed to clone themselves, propagate and communicate with other agents. A two-layer model is proposed for agent evolution: the outer layer is based on the law of natural balancing, and the inner layer is based on a discrete version of a Kohonen self-organizing feature map (SOFM) to distribute network resources to meet QoS requirements. The former is embedded in the higher OSI layers of the WIP, while the latter is used in the resource management procedures of Layers 2 and 3 of the protocol. Algorithms for the distributed computation of mobile agent evolutionary behavior are developed by adding a learning state to the agent evolution state diagram. When an agent is in an indeterminate state, it can communicate with other agents, and computing models can be replicated from other agents. The agent then transitions to the mutating state to wait for a new information-retrieval goal. When a wireless terminal or station lacks a network resource, an agent in the suspending state can change its policy to submit to the environment before it transitions to the searching state. The agents learn from the facts of agent state information entered into an external database. In the cloning process, two agents on a host station sharing a common goal can be merged or married to compose a new agent. Application of the two-layer set of algorithms for mobile agent evolution, performed in a distributed processing environment, is made to the QoS management functions of the IP multimedia (IM) sub-network of the third-generation (3G) Wideband Code-Division Multiple Access (W-CDMA) wireless network.
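A toy sketch of the agent life-cycle states named in the abstract (the transition events below are simplified assumptions for illustration, not the paper's formal state diagram):

```python
from enum import Enum, auto

class AgentState(Enum):
    SEARCHING = auto()       # pursuing an information-retrieval goal
    INDETERMINATE = auto()   # may exchange computing models with other agents
    MUTATING = auto()        # waiting for a new retrieval goal
    SUSPENDING = auto()      # lacking a radio/network resource

def next_state(state, event):
    """Simplified transition table for the agent life cycle sketched above."""
    table = {
        (AgentState.SEARCHING,     "goal_done"):        AgentState.INDETERMINATE,
        (AgentState.INDETERMINATE, "model_replicated"): AgentState.MUTATING,
        (AgentState.MUTATING,      "new_goal"):         AgentState.SEARCHING,
        (AgentState.SEARCHING,     "resource_missing"): AgentState.SUSPENDING,
        (AgentState.SUSPENDING,    "policy_adapted"):   AgentState.SEARCHING,
    }
    return table.get((state, event), state)

state = AgentState.SEARCHING
for event in ["resource_missing", "policy_adapted", "goal_done",
              "model_replicated", "new_goal"]:
    state = next_state(state, event)
    print(event, "->", state.name)
```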
Three-dimensional retinal imaging with high-speed ultrahigh-resolution optical coherence tomography.
Wojtkowski, Maciej; Srinivasan, Vivek; Fujimoto, James G; Ko, Tony; Schuman, Joel S; Kowalczyk, Andrzej; Duker, Jay S
2005-10-01
To demonstrate high-speed, ultrahigh-resolution, 3-dimensional optical coherence tomography (3D OCT) and new protocols for retinal imaging. Ultrahigh-resolution OCT using broadband light sources achieves axial image resolutions of approximately 2 μm, compared with the standard 10-μm resolution of current commercial OCT instruments. High-speed OCT using spectral/Fourier domain detection enables dramatic increases in imaging speeds. Three-dimensional OCT retinal imaging is performed in normal human subjects using high-speed ultrahigh-resolution OCT. Three-dimensional OCT data of the macula and optic disc are acquired using a dense raster scan pattern. New processing and display methods for generating virtual OCT fundus images; cross-sectional OCT images with arbitrary orientations; quantitative maps of retinal, nerve fiber layer, and other intraretinal layer thicknesses; and optic nerve head topographic parameters are demonstrated. Three-dimensional OCT imaging enables new imaging protocols that improve visualization and mapping of retinal microstructure. An OCT fundus image can be generated directly from the 3D OCT data, which enables precise and repeatable registration of cross-sectional OCT images and thickness maps with fundus features. Optical coherence tomography images with arbitrary orientations, such as circumpapillary scans, can be generated from 3D OCT data. Mapping of total retinal thickness and thicknesses of the nerve fiber layer, photoreceptor layer, and other intraretinal layers is demonstrated. Measurement of optic nerve head topography and disc parameters is also possible. Three-dimensional OCT enables measurements that are similar to those of standard instruments, including the StratusOCT, GDx, HRT, and RTA. Three-dimensional OCT imaging can be performed using high-speed ultrahigh-resolution OCT. Three-dimensional OCT provides comprehensive visualization and mapping of retinal microstructures. The high data acquisition speeds enable high-density data sets with large numbers of transverse positions on the retina, which reduces the possibility of missing focal pathologies. In addition to providing image information such as OCT cross-sectional images, OCT fundus images, and 3D rendering, quantitative measurement and mapping of intraretinal layer thickness and topographic features of the optic disc are possible. We hope that 3D OCT imaging may help to elucidate the structural changes associated with retinal disease as well as improve early diagnosis and monitoring of disease progression and response to treatment.
Zhi, Zhongwei; Chao, Jennifer R.; Wietecha, Tomasz; Hudkins, Kelly L.; Alpers, Charles E.; Wang, Ruikang K.
2014-01-01
Purpose. To evaluate early diabetes-induced changes in retinal thickness and microvasculature in a type 2 diabetic mouse model by using optical coherence tomography (OCT)/optical microangiography (OMAG). Methods. Twenty-two-week-old obese (OB) BTBR mice (n = 10) and wild-type (WT) control mice (n = 10) were imaged. Three-dimensional (3D) data volumes were captured with spectral domain OCT using an ultrahigh-sensitive OMAG scanning protocol for 3D volumetric angiography of the retina and dense A-scan protocol for measurement of the total retinal blood flow (RBF) rate. The thicknesses of the nerve fiber layer (NFL) and that of the NFL to the inner plexiform layer (IPL) were measured and compared between OB and WT mice. The linear capillary densities within intermediate and deep capillary layers were determined by the number of capillaries crossing a 500-μm line. The RBF rate was evaluated using an en face Doppler approach. These quantitative measurements were compared between OB and WT mice. Results. The retinal thickness of the NFL to IPL was significantly reduced in OB mice (P < 0.01) compared to that in WT mice, whereas the NFL thickness between the two was unchanged. 3D depth-resolved OMAG angiography revealed the first in vivo 3D model of mouse retinal microcirculation. Although no obvious differences in capillary vessel densities of the intermediate and deep capillary layers were detected between normal and OB mice, the total RBF rate was significantly lower (P < 0.05) in OB mice than in WT mice. Conclusions. We conclude that OB BTBR mice have significantly reduced NFL–IPL thicknesses and total RBF rates compared with those of WT mice, as imaged by OCT/OMAG. OMAG provides an unprecedented capability for high-resolution depth-resolved imaging of mouse retinal vessels and blood flow that may play a pivotal role in providing a noninvasive method for detecting early microvascular changes in patients with diabetic retinopathy. PMID:24458155
Scriven, J. M.; Taylor, L. E.; Wood, A. J.; Bell, P. R.; Naylor, A. R.; London, N. J.
1998-01-01
This trial was undertaken to examine the safety and efficacy of four-layer compared with short stretch compression bandages for the treatment of venous leg ulcers within the confines of a prospective, randomised, ethically approved trial. Fifty-three patients were recruited from a dedicated venous ulcer assessment clinic and their individual ulcerated limbs were randomised to receive either a four-layer bandage (FLB)(n = 32) or a short stretch bandage (SSB)(n = 32). The endpoint was a completely healed ulcer. However, if after 12 weeks of compression therapy no healing had been achieved, that limb was withdrawn from the study and deemed to have failed to heal with the prescribed bandage. Leg volume was measured using the multiple disc model at the first bandaging visit, 4 weeks later, and on ulcer healing. Complications arising during the study were recorded. Data from all limbs were analysed on an intention to treat basis; thus the three limbs not completing the protocol were included in the analysis. Of the 53 patients, 50 completed the protocol. At 1 year the healing rate was FLB 55% and SSB 57% (chi 2 = 0.0, df = 1, P = 1.0). Limbs in the FLB arm of the study sustained one minor complication, whereas SSB limbs sustained four significant complications. Leg volumes reduced significantly after 4 weeks of compression, but subsequent volume changes were insignificant. Ulcer healing rates were not influenced by the presence of deep venous reflux, post-thrombotic deep vein changes nor by ulcer duration. Although larger ulcers took longer to heal, the overall healing rates for large (> 10 cm2) and small (10 cm2 or less) ulcers were comparable. Four-layer and short stretch bandages were equally efficacious in healing venous ulcers independent of pattern of venous reflux, ulcer area or duration. FLB limbs sustained fewer complications than SSB. PMID:9682649
Akbar, Muhammad Sajjad; Yu, Hongnian; Cang, Shuang
2017-01-01
In wireless body area sensor networks (WBASNs), Quality of Service (QoS) provision for patient monitoring systems in terms of time-critical deadlines, high throughput and energy efficiency is a challenging task. The periodic data from these systems generates a large number of small packets in a short time period, which calls for an efficient channel access mechanism. The IEEE 802.15.4 standard is recommended for low power devices and widely used for many wireless sensor network applications. It provides a hybrid channel access mechanism at the Media Access Control (MAC) layer which plays a key role in overall successful transmission in WBASNs. There are many WBASN MAC protocols that use this hybrid channel access mechanism in a variety of sensor applications. However, these protocols are less efficient for patient monitoring systems where life-critical data requires limited delay, high throughput and energy efficient communication simultaneously. To address these issues, this paper proposes a frame aggregation scheme by using the aggregated-MAC protocol data unit (A-MPDU) which works with the IEEE 802.15.4 MAC layer. To implement the scheme accurately, we develop a traffic patterns analysis mechanism to understand the requirements of the sensor nodes in patient monitoring systems, then model the channel access to find the performance gap on the basis of the obtained requirements, and finally propose the design based on the needs of patient monitoring systems. The mechanism is initially verified using numerical modelling and then simulation is conducted using NS2.29, Castalia 3.2 and OMNeT++. The proposed scheme provides the optimal performance considering the required QoS. PMID:28134853
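The core idea of A-MPDU aggregation is to pack several small MPDUs into one aggregated frame so that many sensor readings share a single channel access. A minimal sketch follows, with the packet sizes and the aggregate size cap assumed for illustration (the real IEEE 802.15.4 frame limits and the authors' scheme details are not reproduced here):

```python
def aggregate_mpdus(packets, max_ampdu_bytes):
    """Pack small MPDUs into A-MPDUs no larger than max_ampdu_bytes.
    `packets` is a list of payload sizes in bytes; returns a list of A-MPDUs,
    each a list of the packet sizes it carries (order preserved)."""
    ampdus, current, size = [], [], 0
    for p in packets:
        if size + p > max_ampdu_bytes and current:
            ampdus.append(current)          # close the current aggregate
            current, size = [], 0
        current.append(p)
        size += p
    if current:
        ampdus.append(current)
    return ampdus

# Example: 12 small vital-sign packets of 30 bytes each, 127-byte aggregate cap
# (both numbers assumed for illustration).
frames = aggregate_mpdus([30] * 12, max_ampdu_bytes=127)
print(len(frames), "channel accesses instead of 12:", frames)
```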
NASA Astrophysics Data System (ADS)
Pleros, N.; Kalfas, G.; Mitsolidou, C.; Vagionas, C.; Tsiokos, D.; Miliou, A.
2017-01-01
Future broadband access networks in the 5G framework will need to be bilateral, exploiting both optical and wireless technologies. This paper deals with new approaches and synergies in radio-over-fiber (RoF) technologies and how these can be leveraged to seamlessly converge wireless technology, for agility and mobility, with passive optical network (PON)-based backhauling. The proposed convergence paradigm is based upon a holistic network architecture mixing mm-wave wireless access with photonic integration, dynamic capacity allocation and network coding schemes to enable high bandwidth and low-latency fixed and 60GHz wireless personal area communications for gigabit rate per user, proposing and deploying on top a Medium-Transparent MAC (MT-MAC) protocol as a low-latency bandwidth allocation mechanism. We have evaluated alternative network topologies between the central office (CO) and the access point module (APM) for data rates up to 2.5 Gb/s and SC frequencies up to 60 GHz. Optical network coding is demonstrated for SCM-based signaling to enhance bandwidth utilization and facilitate optical-wireless convergence in 5G applications, reporting medium-transparent network coding directly at the physical layer between end-users communicating over an RoF infrastructure. Towards equipping the physical layer with the appropriate agility to support MT-MAC protocols, a monolithic InP-based Remote Antenna Unit optoelectronic PIC interface is shown that ensures control over the optical resource allocation while at the same time supporting broadband wireless service. Finally, the MT-MAC protocol is analysed; simulation and analytical results are presented and found to be in good agreement, confirming latency values lower than 1 ms for small- to mid-load conditions.
Fellin, Francesco; Righetto, Roberto; Fava, Giovanni; Trevisan, Diego; Amelio, Dante; Farace, Paolo
2017-03-01
To investigate the range errors made in treatment planning due to the presence of the immobilization devices along the proton beam path. The water equivalent thickness (WET) of selected devices was measured with a high-energy spot and a multi-layer ionization chamber and compared with that predicted by the treatment planning system (TPS). Two treatment couches, two thermoplastic masks (both un-stretched and stretched) and one headrest were selected. In the TPS, every immobilization device was modelled as being part of the patient. The following parameters were assessed: CT acquisition protocol, dose-calculation grid sizes (1.5 and 3.0 mm) and beam entrance with respect to the devices (coplanar and non-coplanar). Finally, the potential errors produced by an incorrect manual separation between the treatment couch and the CT table (not present during treatment) were investigated. In the thermoplastic masks, there was a clear effect due to beam entrance, a moderate effect due to the CT protocols and almost no effect due to TPS grid size, with 1 mm errors observed only when thick un-stretched portions were crossed by non-coplanar beams. In the treatment couches the WET errors were negligible (<0.3 mm) regardless of the grid size and CT protocol. The potential range errors produced in the manual separation between the treatment couch and the CT table were small with a 1.5 mm grid size, but could be >0.5 mm with a 3.0 mm grid size. In the headrest, WET errors were negligible (0.2 mm). With only one exception (un-stretched mask, non-coplanar beams), the WET of all the immobilization devices was properly modelled by the TPS. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Self-organization of human embryonic stem cells on micropatterns
Deglincerti, Alessia; Etoc, Fred; Guerra, M. Cecilia; Martyn, Iain; Metzger, Jakob; Ruzo, Albert; Simunovic, Mijo; Yoney, Anna; Brivanlou, Ali H.; Siggia, Eric; Warmflash, Aryeh
2018-01-01
Fate allocation in the gastrulating embryo is spatially organized as cells differentiate to specialized cell types depending on their positions with respect to the body axes. There is a need for in vitro protocols that allow the study of spatial organization associated with this developmental transition. While embryoid bodies and organoids can exhibit some spatial organization of differentiated cells, these methods do not yield consistent and fully reproducible results. Here, we describe a micropatterning approach where human embryonic stem cells are confined to disk-shaped, sub-millimeter colonies. After 42 hours of BMP4 stimulation, cells form self-organized differentiation patterns in concentric radial domains, which express specific markers associated with the embryonic germ layers, reminiscent of gastrulating embryos. Our protocol takes 3 days; it uses commercial microfabricated slides (CYTOO), human laminin-521 (LN-521) as extra-cellular matrix coating, and either conditioned or chemically-defined medium (mTeSR). Differentiation patterns within individual colonies can be determined by immunofluorescence and analyzed with cellular resolution. Both the size of the micropattern and the type of medium affect the patterning outcome. The protocol is appropriate for personnel with basic stem cell culture training. This protocol describes a robust platform for quantitative analysis of the mechanisms associated with pattern formation at the onset of gastrulation. PMID:27735934
A highly efficient protocol for micropropagation of Begonia tuberous.
Duong, Tan Nhut; Nguyen, Thanh Hai; Mai, Xuan Phan
2010-01-01
A protocol for micropropagation of begonia was established utilizing a thin cell layer (TCL) system. This system has been employed to produce several thousand shoots per sample. Explant size and position, and plant growth regulators (PGRs) contribute to the tissue morphogenesis. By optimizing the size of the tissue and applying an improved selection procedure, shoots were elongated in 8 weeks of culture, with an average number of 210 +/- 9.7 shoots per segment. This system has facilitated a number of studies using TCL as a model for micropropagation and will enable the large-scale production of begonia. On an average, the best treatment would allow production of about 10,000 plantlets by the micropropagation of the axillary buds of one plant with five petioles, within a period of 8 months.
A reference model for space data system interconnection services
NASA Astrophysics Data System (ADS)
Pietras, John; Theis, Gerhard
1993-03-01
The widespread adoption of standard packet-based data communication protocols and services for spaceflight missions provides the foundation for other standard space data handling services. These space data handling services can be defined as increasingly sophisticated processing of data or information received from lower-level services, using a layering approach made famous in the International Organization for Standardization (ISO) Open System Interconnection Reference Model (OSI-RM). The Space Data System Interconnection Reference Model (SDSI-RM) incorporates the conventions of the OSI-RM to provide a framework within which a complete set of space data handling services can be defined. The use of the SDSI-RM is illustrated through its application to data handling services and protocols that have been defined by, or are under consideration by, the Consultative Committee for Space Data Systems (CCSDS).
Araujo, Pedro; Tilahun, Ephrem; Breivik, Joar Fjørtoft; Abdulkader, Bashir M; Frøyland, Livar; Zeng, Yingxu
2016-02-01
It is well-known that triacylglycerol (TAG) ions are suppressed by phospholipid (PL) ions in regiospecific analysis of TAG by mass spectrometry (MS). Hence, it is essential to remove the PL during sample preparation prior to MS analysis. The present article proposes a cost-effective liquid-liquid extraction (LLE) method to remove PL from TAG in different kinds of biological samples by using methanol, hexane and water. High performance thin layer chromatography confirmed the lack of PL in krill oil and salmon liver samples, submitted to the proposed LLE protocol, and liquid chromatography tandem MS confirmed that the identified TAG ions were highly enhanced after implementing the LLE procedure. Copyright © 2015 Elsevier B.V. All rights reserved.
Yousaf, Sidrah; Javaid, Nadeem; Qasim, Umar; Alrajeh, Nabil; Khan, Zahoor Ali; Ahmed, Mansoor
2016-02-24
In this study, we analyse incremental cooperative communication for wireless body area networks (WBANs) with different numbers of relays. Energy efficiency (EE) and the packet error rate (PER) are investigated for different schemes. We propose a new cooperative communication scheme with three-stage relaying and compare it to existing schemes. Our proposed scheme provides reliable communication with less PER at the cost of surplus energy consumption. Analytical expressions for the EE of the proposed three-stage cooperative communication scheme are also derived, taking into account the effect of PER. Later on, the proposed three-stage incremental cooperation is implemented in a network layer protocol; enhanced incremental cooperative critical data transmission in emergencies for static WBANs (EInCo-CEStat). Extensive simulations are conducted to validate the proposed scheme. Results of incremental relay-based cooperative communication protocols are compared to two existing cooperative routing protocols: cooperative critical data transmission in emergencies for static WBANs (Co-CEStat) and InCo-CEStat. It is observed from the simulation results that incremental relay-based cooperation is more energy efficient than the existing conventional cooperation protocol, Co-CEStat. The results also reveal that EInCo-CEStat proves to be more reliable with less PER and higher throughput than both of the counterpart protocols. However, InCo-CEStat has less throughput with a greater stability period and network lifetime. Due to the availability of more redundant links, EInCo-CEStat achieves a reduced packet drop rate at the cost of increased energy consumption.
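Incremental cooperation means a relay retransmits only when the preceding attempt fails. A toy Monte-Carlo sketch of that energy/reliability trade-off is given below (link error probabilities and energy units are assumed; this is not the EInCo-CEStat protocol itself):

```python
import random

def simulate(n_packets, p_direct, p_relay, n_relays, e_tx=1.0):
    """Incremental cooperation: the source transmits; each relay retransmits
    only if all previous attempts failed. Returns (PER, energy per packet)."""
    errors, energy = 0, 0.0
    for _ in range(n_packets):
        energy += e_tx
        delivered = random.random() > p_direct
        for _ in range(n_relays):
            if delivered:
                break
            energy += e_tx                      # extra transmission by a relay
            delivered = random.random() > p_relay
        errors += not delivered
    return errors / n_packets, energy / n_packets

random.seed(1)
for relays in (0, 1, 3):                        # 3 relays ~ a three-stage scheme
    per, epp = simulate(100_000, p_direct=0.3, p_relay=0.2, n_relays=relays)
    print(f"{relays} relay(s): PER={per:.4f}, energy/packet={epp:.2f} units")
```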
NASA Astrophysics Data System (ADS)
Johannessen, Sophia C.; Macdonald, Robie W.
2018-03-01
In their comment on the review paper, ‘Geoengineering with seagrasses: is credit due where credit is given?,’ Oreska et al 2018 state that some of the concerns raised in the review ‘warrant serious consideration by the seagrass research community,’ but they argue that these concerns are either not relevant to the Voluntary Carbon Standard protocol, VM0033, or are already addressed by specific provisions in the protocol. The VM0033 protocol is a strong and detailed document that includes much of merit, but the methodology for determining carbon sequestration in sediment is flawed, both in the carbon stock change method and in the carbon burial method. The main problem with the carbon stock change method is that the labile carbon in the surface layer of sediments is vulnerable to remineralization and resuspension; it is not sequestered on the 100 year timescale required for carbon credits. The problem with the carbon burial method is chiefly in its application. The protocol does not explain how to apply 210Pb-dating to a core, leaving project proponents to apply the inappropriate methods frequently reported in the blue carbon literature, which result in overestimated sediment accumulation rates. Finally, the default emission factors permitted by the protocol are based on literature values that are themselves too high. All of these problems can be addressed, which should result in clearer, more rigorous guidelines for awarding carbon credits for the protection or restoration of seagrass meadows.
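For context on the 210Pb-dating issue raised above, the simple constant-initial-concentration (CIC) age calculation is sketched below as an illustration only (the activity values are invented, and this calculation ignores exactly the surface mixing and compaction effects the comment warns about; it is not part of VM0033 or of the comment itself):

```python
import math

PB210_HALF_LIFE_YR = 22.3
LAMBDA = math.log(2) / PB210_HALF_LIFE_YR      # 210Pb decay constant, 1/yr

def cic_age(activity_surface, activity_depth):
    """Age of a sediment layer from excess 210Pb under the constant-initial-
    concentration (CIC) assumption: A(z) = A(0) * exp(-lambda * t)."""
    return math.log(activity_surface / activity_depth) / LAMBDA

# Assumed excess-210Pb activities (Bq/kg) at the surface and at 10 cm depth.
age = cic_age(activity_surface=120.0, activity_depth=40.0)
print(f"age at 10 cm ~ {age:.0f} yr -> accumulation ~ {10.0 / age:.2f} cm/yr")
```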
NASA Astrophysics Data System (ADS)
Morschheuser, Lena; Wessels, Hauke; Pille, Christina; Fischer, Judith; Hünniger, Tim; Fischer, Markus; Paschke-Kratzin, Angelika; Rohn, Sascha
2016-05-01
Protein analysis using high-performance thin-layer chromatography (HPTLC) is not commonly used but can complement traditional electrophoretic and mass spectrometric approaches in a unique way. Due to various detection protocols and possibilities for hyphenation, HPTLC protein analysis is a promising alternative for, e.g., investigating posttranslational modifications. As an example, this study focused on the investigation of lysozyme, an enzyme that occurs in eggs and is technologically added to foods and beverages such as wine. The detection of lysozyme is mandatory, as it might trigger allergenic reactions in sensitive individuals. To underline the advantages of HPTLC in protein analysis, the development of innovative, highly specific staining protocols leads to improved sensitivity for protein detection on HPTLC plates in comparison to universal protein derivatization reagents. This study aimed at developing a detection methodology for HPTLC-separated proteins using aptamers. Because aptamers show affinity and specificity towards a wide range of targets, an aptamer-based staining procedure on HPTLC (HPTLC-aptastaining) will enable manifold analytical possibilities. Besides proving its applicability for the very first time, we show that (i) aptamer-based staining of proteins is applicable on different stationary phase materials and (ii) it can furthermore be used as an approach for a semi-quantitative estimation of protein concentrations.
Comparison of protocols for measuring cosmetic ingredient distribution in human and pig skin.
Gerstel, D; Jacques-Jamin, C; Schepky, A; Cubberley, R; Eilstein, J; Grégoire, S; Hewitt, N; Klaric, M; Rothe, H; Duplan, H
2016-08-01
The Cosmetics Europe Skin Bioavailability and Metabolism Task Force aims to improve the measurement and prediction of the bioavailability of topically-exposed compounds for risk assessment. Key parameters of the experimental design of the skin penetration studies were compared. Penetration studies with frozen human and pig skin were conducted in two laboratories, according to the SCCS and OECD 428 guidelines. The disposition in skin was measured 24 h after finite topical doses of caffeine, resorcinol and 7-ethoxycoumarin. The bioavailability distributions in skin layers of cold and radiolabelled chemicals were comparable. Furthermore, the distribution of each chemical was comparable in human and pig skin. The protocol was reproducible across the two laboratories. There were small differences in the amount of chemical detected in the skin layers, which were attributed to differences in washing procedures and anatomical sites of the skin used. In conclusion, these studies support the use of pig skin as an alternative source of skin should the availability of human skin become a limiting factor. If radiolabelled chemicals are not available, cold chemicals can be used, provided that the influence of chemical stability, reactivity or metabolism on the experimental design and the relevance of the data obtained is considered. Copyright © 2016. Published by Elsevier Ltd.
Building distributed rule-based systems using the AI Bus
NASA Technical Reports Server (NTRS)
Schultz, Roger D.; Stobie, Iain C.
1990-01-01
The AI Bus software architecture was designed to support the construction of large-scale, production-quality applications in areas of high technology flux, running heterogeneous distributed environments, utilizing a mix of knowledge-based and conventional components. These goals led to its current development as a layered, object-oriented library for cooperative systems. This paper describes the concepts and design of the AI Bus and its implementation status as a library of reusable and customizable objects, structured by layers from operating system interfaces up to high-level knowledge-based agents. Each agent is a semi-autonomous process with specialized expertise, and consists of a number of knowledge sources (a knowledge base and inference engine). Inter-agent communication mechanisms are based on blackboards and Actors-style acquaintances. As a conservative first implementation, we used C++ on top of Unix, and wrapped an embedded Clips with methods for the knowledge source class. This involved designing standard protocols for communication and functions which use these protocols in rules. Embedding several CLIPS objects within a single process was an unexpected problem because of global variables, whose solution involved constructing and recompiling a C++ version of CLIPS. We are currently working on a more radical approach to incorporating CLIPS, by separating out its pattern matcher, rule and fact representations and other components as true object oriented modules.
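The AI Bus API itself is not given in this record, but the blackboard-style inter-agent communication it describes can be illustrated with a minimal sketch (Python here; all names, the rule and the fact format are hypothetical):

```python
class Blackboard:
    """Minimal blackboard: agents post facts and read what others have posted."""
    def __init__(self):
        self.facts = []
    def post(self, agent, fact):
        self.facts.append((agent, fact))
    def read(self):
        return list(self.facts)

class Agent:
    """Stand-in for an agent holding a single rule-like knowledge source."""
    def __init__(self, name, rule):
        self.name, self.rule = name, rule      # rule: fact -> derived fact or None
    def step(self, board):
        for source, fact in board.read():
            derived = self.rule(fact)
            if source != self.name and derived is not None:
                board.post(self.name, derived)

board = Blackboard()
board.post("sensor-agent", {"temp_c": 82})
monitor = Agent("monitor-agent",
                lambda f: "overheat-alert"
                if isinstance(f, dict) and f.get("temp_c", 0) > 80 else None)
monitor.step(board)
print(board.read())
```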
IP access networks with QoS support
NASA Astrophysics Data System (ADS)
Sargento, Susana; Valadas, Rui J. M. T.; Goncalves, Jorge; Sousa, Henrique
2001-07-01
The increasing demand for new services and applications is pushing for drastic changes in the design of access networks targeted mainly at residential and SOHO users. Future access networks will provide full service integration (including multimedia), resource sharing at the packet level and QoS support. It is expected that, using IP as the base technology, the ideal plug-and-play scenario, where the management actions of the access network operator are kept to a minimum, will be achieved easily. This paper proposes an architecture for access networks based on layer 2 or layer 3 multiplexers that allows a number of simplifications in the network elements and protocols (e.g. in the routing and addressing functions). We discuss two possible steps in the evolution of access networks towards a more efficient support of IP-based services. The first one still provides no QoS support and was designed with the goal of reusing current technologies as much as possible; it is based on tunneling to transport PPP sessions. The second one introduces QoS support through the use of emerging technologies and protocols. We illustrate the different phases of a multimedia Internet access session, using SIP for session initiation, COPS for the management of QoS policies (including the AAA functions) and RSVP for resource reservation.
Current state of the mass storage system reference model
NASA Technical Reports Server (NTRS)
Coyne, Robert
1993-01-01
IEEE SSSWG was chartered in May 1990 to abstract the hardware and software components of existing and emerging storage systems and to define the software interfaces between these components. The immediate goal is the decomposition of a storage system into interoperable functional modules which vendors can offer as separate commercial products. The ultimate goal is to develop interoperable standards which define the software interfaces, and in the distributed case, the associated protocols to each of the architectural modules in the model. The topics are presented in viewgraph form and include the following: IEEE SSSWG organization; IEEE SSSWG subcommittees & chairs; IEEE standards activity board; layered view of the reference model; layered access to storage services; IEEE SSSWG emphasis; and features for MSSRM version 5.
Sánchez, Antonio; Blanc, Sara; Yuste, Pedro; Perles, Angel; Serrano, Juan José
2012-01-01
This paper is focused on the description of the physical layer of a new acoustic modem called ITACA. The modem architecture includes as a major novelty an ultra-low power asynchronous wake-up system implementation for underwater acoustic transmission that is based on a low-cost off-the-shelf RFID peripheral integrated circuit. This feature enables a reduced power dissipation of 10 μW in stand-by mode and registers very low power values during reception and transmission. The modem also incorporates clear channel assessment (CCA) to support CSMA-based medium access control (MAC) layer protocols. The design is part of a compact platform for a long-life short/medium range underwater wireless sensor network. PMID:22969324
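A rough sketch of how a CCA-gated CSMA transmit path with a wake-up preamble might look, loosely following the behaviour described above; the timings, backoff scheme and hardware hooks (channel_busy, radio_send) are illustrative assumptions, not the ITACA firmware.

```python
import random
import time

# Illustrative CSMA transmit path with clear channel assessment (CCA) and a
# wake-up preamble. channel_busy() and radio_send() are hypothetical hooks
# standing in for the modem hardware.

def channel_busy():
    return random.random() < 0.3          # stand-in for a real CCA measurement

def radio_send(frame):
    print("TX", len(frame), "bytes")

def csma_send(frame, max_attempts=5, slot_s=0.05):
    for attempt in range(max_attempts):
        if not channel_busy():                    # CCA: medium is clear
            radio_send(b"WAKEUP" + frame)         # wake-up tone precedes data
            return True
        backoff = random.randint(1, 2 ** (attempt + 1)) * slot_s
        time.sleep(backoff)                       # binary exponential backoff
    return False

csma_send(b"sensor reading")
```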
Emergence of healing in the Antarctic ozone layer
NASA Astrophysics Data System (ADS)
Solomon, Susan; Ivy, Diane J.; Kinnison, Doug; Mills, Michael J.; Neely, Ryan R.; Schmidt, Anja
2016-07-01
Industrial chlorofluorocarbons that cause ozone depletion have been phased out under the Montreal Protocol. A chemically driven increase in polar ozone (or “healing”) is expected in response to this historic agreement. Observations and model calculations together indicate that healing of the Antarctic ozone layer has now begun to occur during the month of September. Fingerprints of September healing since 2000 include (i) increases in ozone column amounts, (ii) changes in the vertical profile of ozone concentration, and (iii) decreases in the areal extent of the ozone hole. Along with chemistry, dynamical and temperature changes have contributed to the healing but could represent feedbacks to chemistry. Volcanic eruptions have episodically interfered with healing, particularly during 2015, when a record October ozone hole occurred after the Calbuco eruption.
Modeling MAC layer for powerline communications networks
NASA Astrophysics Data System (ADS)
Hrasnica, Halid; Haidine, Abdelfatteh
2001-02-01
The usage of electrical power distribution networks for voice and data transmission, called Powerline Communications (PLC), is nowadays becoming more and more attractive, particularly in the telecommunications access area. The most important reasons for this are the deregulation of the telecommunication market and the fact that the access networks are still the property of the former monopoly companies. In this work, we first analyze the PLC network and system structure as well as the disturbance scenario in powerline networks. After that, we define a logical structure for the powerline MAC layer and propose reservation MAC protocols for use in PLC networks, which provide collision-free data transmission. This enables better network utilization and the realization of QoS guarantees, which can make PLC networks competitive with other access technologies.
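A toy sketch of the reservation idea: stations request capacity, a base station grants dedicated slots, so data transmissions cannot collide. The frame structure, slot count and names are assumptions for illustration, not the protocols proposed in the paper.

```python
# Toy reservation MAC: stations request capacity, the base station assigns
# dedicated slots, so data transmissions cannot collide. The frame layout is
# an assumption for illustration, not the PLC protocol from the paper.

def assign_slots(requests, slots_per_frame=16):
    """requests: {station: slots_wanted} -> {station: [granted slot indices]}"""
    schedule, next_slot = {}, 0
    for station, wanted in sorted(requests.items()):
        granted = list(range(next_slot, min(next_slot + wanted, slots_per_frame)))
        schedule[station] = granted
        next_slot += len(granted)
        if next_slot >= slots_per_frame:
            break                  # remaining requests wait for the next frame
    return schedule

print(assign_slots({"meter-7": 3, "modem-2": 5, "modem-9": 4}))
# Each station transmits only in its granted slots -> collision-free data phase.
```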
Delay and Disruption Tolerant Networking MACHETE Model
NASA Technical Reports Server (NTRS)
Segui, John S.; Jennings, Esther H.; Gao, Jay L.
2011-01-01
To verify satisfaction of communication requirements imposed by unique missions, as early as 2000, the Communications Networking Group at the Jet Propulsion Laboratory (JPL) saw the need for an environment to support interplanetary communication protocol design, validation, and characterization. JPL's Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE), described in Simulator of Space Communication Networks (NPO-41373) NASA Tech Briefs, Vol. 29, No. 8 (August 2005), p. 44, combines various commercial, non-commercial, and in-house custom tools for simulation and performance analysis of space networks. The MACHETE environment supports orbital analysis, link budget analysis, communications network simulations, and hardware-in-the-loop testing. As NASA is expanding its Space Communications and Navigation (SCaN) capabilities to support planned and future missions, building infrastructure to maintain services and developing enabling technologies, an important and broader role is seen for MACHETE in design-phase evaluation of future SCaN architectures. To support evaluation of the developing Delay Tolerant Networking (DTN) field and its applicability for space networks, JPL developed MACHETE models for DTN Bundle Protocol (BP) and Licklider/Long-haul Transmission Protocol (LTP). DTN is an Internet Research Task Force (IRTF) architecture providing communication in and/or through highly stressed networking environments such as space exploration and battlefield networks. Stressed networking environments include those with intermittent (predictable and unknown) connectivity, large and/or variable delays, and high bit error rates. To provide its services over existing domain specific protocols, the DTN protocols reside at the application layer of the TCP/IP stack, forming a store-and-forward overlay network. The key capabilities of the Bundle Protocol include custody-based reliability, the ability to cope with intermittent connectivity, the ability to take advantage of scheduled and opportunistic connectivity, and late binding of names to addresses.
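A compact sketch of the store-and-forward behaviour of a bundle node (store while no contact is available, forward when a link comes up, keep custody until acknowledged); all names and the control flow are illustrative, not the MACHETE BP/LTP models.

```python
from collections import deque

# Illustrative store-and-forward bundle node: bundles are queued while no
# contact is available and forwarded when a link to the next hop comes up.
# Custody is released only when the next hop acknowledges the bundle.
# This is a sketch of the concept, not the MACHETE BP/LTP models.

class BundleNode:
    def __init__(self, name):
        self.name = name
        self.stored = deque()              # bundles held in persistent storage

    def receive(self, bundle):
        self.stored.append(bundle)         # take custody of the bundle

    def contact(self, link_up, send, ack):
        """Called whenever connectivity state is (re)evaluated."""
        if not link_up:
            return                         # keep storing: delay/disruption tolerated
        for _ in range(len(self.stored)):
            bundle = self.stored.popleft()
            send(bundle)
            if not ack(bundle):            # no custody acknowledgement yet
                self.stored.append(bundle)

node = BundleNode("relay")
node.receive({"id": 1, "payload": b"telemetry"})
node.contact(link_up=False, send=print, ack=lambda b: False)   # stored
node.contact(link_up=True,  send=print, ack=lambda b: True)    # forwarded
```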
Transaction aware tape-infrastructure monitoring
NASA Astrophysics Data System (ADS)
Nikolaidis, Fotios; Kruse, Daniele Francesco
2014-06-01
Administering a large-scale, multi-protocol, hierarchical tape infrastructure like the CERN Advanced STORage manager (CASTOR)[2], which now stores 100 PB (growing by about 25 PB per year), requires an adequate monitoring system for quick spotting of malfunctions, easier debugging and on-demand report generation. The main challenges for such a system are: coping with CASTOR's log format diversity and its information scattered among several log files, the need for long-term information archival, the strict reliability requirements and group-based GUI visualization. For this purpose, we have designed, developed and deployed a centralized system consisting of four independent layers: the Log Transfer layer for collecting log lines from all tape servers onto a single aggregation server, the Data Mining layer for combining log data into transaction context, the Storage layer for archiving the resulting transactions and finally the Web UI layer for accessing the information. With flexibility, extensibility and maintainability in mind, each layer is designed to work as a message broker for the next layer, providing a clean and generic interface while ensuring consistency, redundancy and ultimately fault tolerance. This system unifies information previously dispersed over several monitoring tools into a single user interface, using Splunk, which also allows us to provide information visualization based on access control lists (ACL). Since its deployment, it has been successfully used by CASTOR tape operators for quick overview of transactions, performance evaluation and malfunction detection, and by managers for report generation.
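A schematic sketch of the four-layer pipeline described above, with each layer handing a clean, generic data structure to the next; the function names and the transaction-grouping key ("req_id") are assumptions, not the CASTOR implementation.

```python
# Schematic four-layer pipeline: log transfer -> data mining (group log lines
# into transactions) -> storage -> web UI. Names and the grouping key
# ("req_id") are assumptions, not the CASTOR code.

from collections import defaultdict

def transfer_layer(tape_servers):
    """Collect raw log lines from every tape server onto one aggregator."""
    for server, lines in tape_servers.items():
        for line in lines:
            yield {"server": server, **line}

def mining_layer(log_stream):
    """Combine individual log lines into per-transaction records."""
    transactions = defaultdict(list)
    for entry in log_stream:
        transactions[entry["req_id"]].append(entry)
    return transactions

def storage_layer(transactions, archive):
    archive.extend(transactions.values())      # long-term archival

archive = []
logs = {"tps001": [{"req_id": "a1", "msg": "mount"},
                   {"req_id": "a1", "msg": "read ok"}]}
storage_layer(mining_layer(transfer_layer(logs)), archive)
print(archive)    # the web UI layer would query this store, e.g. via Splunk
```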
NASA Astrophysics Data System (ADS)
Sportelli, M. C.; Picca, R. A.; Manoli, K.; Re, M.; Pesce, E.; Tapfer, L.; Di Franco, C.; Cioffi, N.; Torsi, L.
2017-10-01
The analytical performance of bioelectronic devices is highly influenced by their fabrication methods. In particular, the final architecture of field-effect transistor biosensors combining a spin-cast poly(3-hexylthiophene) (P3HT) film and a biomolecule interlayer deposited on a SiO2/Si substrate can lead to highly performing sensing systems, as in the case of streptavidin (SA) used for biotin sensing. To gain a better understanding of the quality of the interfacial area, it is critical to assess the morphological features characteristic of the adopted biolayer deposition protocol, namely the layer-by-layer (LbL) approach and the spin-coating technique. The present study relies on a combined surface spectroscopic and morphological characterization. Specifically, X-ray photoelectron spectroscopy operated in parallel angle-resolved mode allowed the non-destructive investigation of the in-depth chemical composition of the SA film, alone or in the presence of the P3HT overlayer. The spectroscopic data were supported and corroborated by scanning electron and helium ion microscope investigations of the SA layer, which provided relevant information on the protein structural arrangement and on its surface morphology. Clear differences emerged between the SA layers prepared by the two approaches, with the layer-by-layer deposition resulting in a smoother and better-defined bio-electronic interface. These findings support the superior analytical performance shown by bioelectronic devices based on LbL-deposited protein layers over spin-coated ones.
Choroidal Haller's and Sattler's Layers Thickness in Normal Indian Eyes.
Roy, Rupak; Saurabh, Kumar; Vyas, Chinmayi; Deshmukh, Kaustubh; Sharma, Preeti; Chandrasekharan, Dhileesh P; Bansal, Aditya
2018-01-01
This study aims to determine normative choroidal thickness (CT) and Haller's and Sattler's layers thickness in normal Indian eyes. Choroidal imaging of 73 eyes of 43 healthy Indian individuals was performed using the enhanced depth imaging feature of Spectralis optical coherence tomography. A raster scan protocol centered at the fovea was used for imaging, separately by two observers. CT was defined as the length of the perpendicular line drawn from the outer border of the hyperreflective RPE-Bruch's complex to the inner margin of the choroidoscleral junction. Choroidal vessel layer thickness was measured after identifying the largest choroidal vessel lumen within 750 μ on either side of the subfoveal CT vector. A perpendicular line was drawn to the innermost border of this lumen, and the distance between the perpendicular line and the innermost border of the choroidoscleral junction gave the large choroidal vessel layer thickness (LCVLT, Haller's layer). Medium choroidal vessel layer thickness (MCVLT, Sattler's layer) was measured as the distance between the same perpendicular line and the outer border of the hyperreflective RPE-Bruch's complex. The mean age of individuals was 28.23 ± 15.29 years (range 14-59 years). Overall, the mean subfoveal CT was 331.6 ± 63.9 μ. The mean LCVLT was 227.08 ± 51.24 μ and the mean MCVLT was 95.65 ± 23.62 μ. CT was maximum subfoveally, with a gradual reduction in thickness as the distance from the fovea increased. This is the first study describing the choroidal sublayer thickness, i.e., Haller's and Sattler's layer thickness, along with CT in a healthy Indian population.
NASA Astrophysics Data System (ADS)
Primo, Fernando L.; Rodrigues, Marcilene M. A.; Simioni, Andreza R.; Bentley, Maria V. L. B.; Morais, Paulo C.; Tedesco, Antonio C.
In this study, a new nano drug delivery system (NDDS) was developed based on the association of biodegradable surfactants with a biocompatible magnetic fluid of a maghemite citrate derivative. The formulation consists of a magnetic emulsion with nanostructured colloidal particles. Preliminary in vitro experiments showed that the formulation has great potential for synergistic application in the topical release of a photosensitizer drug (PS) and excellent target-tissue properties in photodynamic therapy (PDT) combined with hyperthermia (HPT) protocols. Physicochemical characterization and in vitro assays were carried out with the Zn(II) phthalocyanine (ZnPc) photosensitizer incorporated into the NDDS in the absence and presence of the magnetic fluid, and showed good results and high biocompatibility. In vitro experiments were performed using tape-stripping protocols to quantify drug association with the different skin tissue layers. This technique is a classical method for analyzing drug release in the stratum corneum and epidermis + dermis skin layers. The NDDS formulations were applied directly to pig skin (tissue model) mounted in a Franz diffusion cell, with the receptor compartment containing a PBS/EtOH 20% solution (10 mM, pH 7.4) at 37 °C. After 12 h of topical administration, the stratum corneum was removed with fifty tape strips and the retained ZnPc was evaluated by solvent extraction in dimethyl sulfoxide under an ultrasonic bath. These results indicate that the magnetic nanoemulsion (MNE) increases drug release into the deeper skin layers compared with a classical formulation without magnetic particles. This could be related to the increased biocompatibility of the NDDS, owing to its high affinity for the polar extracellular matrix of the skin, and to increased drug partitioning into the corneocyte walls.
Integration of the White Sands Complex into a Wide Area Network
NASA Technical Reports Server (NTRS)
Boucher, Phillip Larry; Horan, Sheila B.
1996-01-01
The NASA White Sands Complex (WSC) satellite communications facility consists of two main ground stations, an auxiliary ground station, a technical support facility, and a power plant building located on White Sands Missile Range. When constructed, terrestrial communication access to these facilities was limited to copper telephone circuits. There was no local or wide area communications network capability. This project incorporated a baseband local area network (LAN) topology at WSC and connected it to NASA's wide area network using the Program Support Communications Network-Internet (PSCN-I). A campus-style LAN is configured in conformance with the International Standards Organization (ISO) Open Systems Interconnection (OSI) model. Ethernet provides the physical and data link layers. Transmission Control Protocol and Internet Protocol (TCP/IP) are used for the network and transport layers. The session, presentation, and application layers employ commercial software packages. Copper-based Ethernet collision domains are constructed in each of the primary facilities and these are interconnected by routers over optical fiber links. The network and each of its collision domains are shown to meet IEEE technical configuration guidelines. The optical fiber links are analyzed for optical power budget and bandwidth allocation and are found to provide sufficient margin for this application. Personal computers and workstations attached to the LAN communicate with and apply a wide variety of local and remote administrative software tools. The Internet connection provides wide area network (WAN) electronic access to other NASA centers and the World Wide Web (WWW). The WSC network reduces and simplifies the administrative workload while providing enhanced and advanced inter-communications capabilities among White Sands Complex departments and with other NASA centers.
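A small worked example of the kind of optical power-budget check mentioned above; all numbers here are hypothetical placeholders, not the actual White Sands link parameters.

```python
# Hypothetical optical link power-budget check (illustrative numbers only,
# not the actual White Sands fiber-link parameters).

def link_margin_db(tx_dbm, rx_sens_dbm, length_km,
                   fiber_loss_db_per_km, n_connectors, connector_loss_db,
                   n_splices, splice_loss_db):
    total_loss = (length_km * fiber_loss_db_per_km
                  + n_connectors * connector_loss_db
                  + n_splices * splice_loss_db)
    return (tx_dbm - total_loss) - rx_sens_dbm

margin = link_margin_db(tx_dbm=-15.0, rx_sens_dbm=-31.0, length_km=2.0,
                        fiber_loss_db_per_km=3.5, n_connectors=4,
                        connector_loss_db=0.75, n_splices=2, splice_loss_db=0.1)
print(f"link margin: {margin:.1f} dB")   # positive margin -> the budget closes
```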
Electron hole tracking PIC simulation
NASA Astrophysics Data System (ADS)
Zhou, Chuteng; Hutchinson, Ian
2016-10-01
An electron hole is a coherent BGK mode solitary wave. Electron holes are observed to travel at high velocities relative to bulk plasmas. The kinematics of a 1-D electron hole is studied using a novel Particle-In-Cell simulation code with fully kinetic ions. A hole tracking technique enables us to follow the trajectory of a fast-moving solitary hole and study quantitatively hole acceleration and coupling to ions. The electron hole signal is detected and the simulation domain moves by a carefully designed feedback control law to follow its propagation. This approach has the advantage that the length of the simulation domain can be significantly reduced to several times the hole width, which makes high resolution simulations tractable. We observe a transient at the initial stage of hole formation when the hole accelerates to several times the cold-ion sound speed. Artificially imposing slow ion speed changes on a fully formed hole causes its velocity to change even when the ion stream speed in the hole frame greatly exceeds the ion thermal speed, so there are no reflected ions. The behavior that we observe in numerical simulations agrees very well with our analytic theory of hole momentum conservation and energization effects we call "jetting". The work was partially supported by the NSF/DOE Basic Plasma Science Partnership under Grant DE-SC0010491. Computer simulations were carried out on the MIT PSFC parallel AMD Opteron/Infiniband cluster Loki.
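A minimal sketch of the hole-tracking idea: a proportional feedback law nudges the moving simulation window so the detected hole stays near the window centre. The detection routine, gain and units are illustrative assumptions, not the actual hole-tracking PIC code.

```python
# Illustrative proportional feedback law for a moving simulation window that
# keeps a detected solitary structure near the window centre. The detection
# routine and gain are assumptions, not the hole-tracking PIC code itself.

import numpy as np

def detect_hole_position(x, potential):
    return x[np.argmax(potential)]           # hole shows up as a potential peak

def track_step(window_velocity, window_center, x, potential, gain=0.2, dt=1.0):
    error = detect_hole_position(x, potential) - window_center
    window_velocity += gain * error * dt      # feedback: accelerate window toward hole
    window_center += window_velocity * dt     # shift the (short) simulation domain
    return window_velocity, window_center

x = np.linspace(-10.0, 10.0, 401)
phi = np.exp(-(x - 2.5) ** 2)                 # synthetic hole displaced from centre
v, c = track_step(0.0, 0.0, x, phi)
print(v, c)
```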
Automatic Energy Schemes for High Performance Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sundriyal, Vaibhav
Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers significantly affect their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), the power consumption may be controlled in software. Additionally, the network interconnect, such as InfiniBand, may be exploited to maximize energy savings, while the application performance loss and frequency switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy saving strategies on a per-call basis. Next, it targets point-to-point communications to group them into phases and apply frequency scaling to them to save energy by exploiting the architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases, and applies throttling to them apart from DVFS to maximize energy savings. The experimental results are presented for NAS parallel benchmark problems as well as for the realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.
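A simplified sketch of the per-phase decision described above: pick a lower CPU frequency for a communication phase when the predicted slowdown stays within an allowed performance-loss bound. The timing/energy model, frequencies and threshold are assumptions, not the runtime system from the work.

```python
# Simplified per-phase DVFS decision: choose the lowest frequency whose
# predicted slowdown of the communication phase stays within the allowed
# performance-loss bound. The simple timing model is an assumption, not the
# runtime system described above.

def choose_frequency(freqs_ghz, f_max_ghz, comm_stall_fraction, max_loss=0.05):
    best = f_max_ghz
    for f in sorted(freqs_ghz):
        # Only the non-stalled (CPU-bound) part of the phase slows down.
        slowdown = (1 - comm_stall_fraction) * (f_max_ghz / f - 1.0)
        if slowdown <= max_loss:
            best = f
            break
    return best

# A phase that is 90% network-stalled tolerates a deep frequency reduction.
print(choose_frequency([1.2, 1.6, 2.0, 2.4], f_max_ghz=2.4,
                       comm_stall_fraction=0.9))
```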
A Simple XML Producer-Consumer Protocol
NASA Technical Reports Server (NTRS)
Smith, Warren; Gunter, Dan; Quesnel, Darcy; Biegel, Bryan (Technical Monitor)
2001-01-01
There are many different projects from government, academia, and industry that provide services for delivering events in distributed environments. The problem with these event services is that they are not general enough to support all uses and they speak different protocols, so they cannot interoperate. We require such interoperability when we, for example, wish to analyze the performance of an application in a distributed environment. Such an analysis might require performance information from the application, computer systems, networks, and scientific instruments. In this work we propose and evaluate a standard XML-based protocol for the transmission of events in distributed systems. One recent trend in government and academic research is the development and deployment of computational grids. Computational grids are large-scale distributed systems that typically consist of high-performance compute, storage, and networking resources. Examples of such computational grids are the DOE Science Grid, the NASA Information Power Grid (IPG), and the NSF Partnerships for Advanced Computing Infrastructure (PACIs). The major effort to deploy these grids is in the area of developing the software services to allow users to execute applications on these large and diverse sets of resources. These services include security, execution of remote applications, managing remote data, access to information about resources and services, and so on. There are several toolkits for providing these services, such as Globus, Legion, and Condor. As part of these efforts to develop computational grids, the Global Grid Forum is working to standardize the protocols and APIs used by various grid services. This standardization will allow interoperability between the client and server software of the toolkits that are providing the grid services. The goal of the Performance Working Group of the Grid Forum is to standardize protocols and representations related to the storage and distribution of performance data. These standard protocols and representations must support tasks such as profiling parallel applications, monitoring the status of computers and networks, and monitoring the performance of services provided by a computational grid. This paper describes a proposed protocol and data representation for the exchange of events in a distributed system. The protocol exchanges messages formatted in XML and it can be layered atop any low-level communication protocol such as TCP or UDP. Further, we describe Java and C++ implementations of this protocol and discuss their performance. The next section provides some further background information. Section 3 describes the main communication patterns of our protocol. Section 4 describes how we represent events and related information using XML. Section 5 describes our protocol, and Section 6 discusses the performance of two implementations of the protocol. Finally, an appendix provides the XML Schema definition of our protocol and event information.
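A minimal sketch of the producer side of an XML event protocol layered over TCP, in the spirit of the proposal above; the element and attribute names, the length-prefix framing and the port are invented for illustration and are not the schema defined in the paper's appendix.

```python
# Minimal producer side of an XML event protocol over TCP. The element and
# attribute names are invented for illustration; the real schema is defined in
# the paper's appendix. Messages are length-prefixed so the consumer can
# delimit individual events on the byte stream.

import socket
import struct
import time
import xml.etree.ElementTree as ET

def make_event(source, name, value):
    event = ET.Element("event", {"source": source,
                                 "time": f"{time.time():.6f}"})
    metric = ET.SubElement(event, "metric", {"name": name})
    metric.text = str(value)
    return ET.tostring(event)

def send_event(host, port, payload):
    with socket.create_connection((host, port)) as sock:
        sock.sendall(struct.pack("!I", len(payload)) + payload)

if __name__ == "__main__":
    payload = make_event("node42.cluster", "cpu_load", 0.73)
    print(payload.decode())
    # send_event("localhost", 9099, payload)   # requires a listening consumer
```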
Photoreceptor layer map using spectral-domain optical coherence tomography.
Lee, Ji Eun; Lim, Dae Won; Bae, Han Yong; Park, Hyun Jin
2009-12-01
To develop a novel method for analysis of the photoreceptor layer map (PLM) generated using spectral-domain optical coherence tomography (OCT). OCT scans were obtained from 20 eyes, 10 with macular holes (MH) and 10 with central serous chorioretinopathy (CSC) using the Macular Cube (512 x 128) protocol of the Cirrus HD-OCT (Carl Zeiss). The scanned data were processed using embedded tools of the advanced visualization. A partial thickness OCT fundus image of the photoreceptor layer was generated by setting the region of interest to a 50-μm-thick layer that was parallel and adjacent to the retinal pigment epithelium. The resulting image depicted the photoreceptor layer as a map of the reflectivity in OCT. The PLM was compared with fundus photography, auto-fluorescence, tomography, and retinal thickness map. The signal from the photoreceptor layer of every OCT scan in each case was demonstrated as a single image of PLM in a fundus photograph fashion. In PLM images, detachment of the sensory retina is depicted as a hypo-reflective area, which represents the base of MH and serous detachment in CSC. Relative hypo-reflectivity, which was also noted at closed MH and at recently reattached retina in CSC, was associated with reduced signal from the junction between the inner and outer segments of photoreceptors in OCT images. Using PLM, changes in the area of detachment and reflectivity of the photoreceptor layer could be efficiently monitored. The photoreceptor layer can be analyzed as a map using spectral-domain OCT. In the treatment of both MH and CSC, PLM may provide new pathological information about the photoreceptor layer to expand our understanding of these diseases.
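A rough numpy sketch of the partial-thickness projection described above: locate the RPE in each A-scan and average the reflectivity in a thin slab just inner to it to produce a 2-D map. The simple argmax RPE detector, the fixed slab thickness in pixels and the synthetic cube are simplifying assumptions, not the Cirrus HD-OCT algorithm.

```python
# Rough sketch of a partial-thickness "photoreceptor layer map": for every
# A-scan, find the RPE depth and average the reflectivity in a thin slab just
# inner to it. The argmax RPE detector and the fixed slab thickness are
# simplifying assumptions, not the Cirrus HD-OCT algorithm.

import numpy as np

def photoreceptor_layer_map(volume, slab_px):
    """volume: (n_bscans, n_ascans, depth) reflectivity array."""
    n_b, n_a, _ = volume.shape
    plm = np.zeros((n_b, n_a))
    for b in range(n_b):
        for a in range(n_a):
            ascan = volume[b, a]
            rpe = int(np.argmax(ascan))               # brightest layer ~ RPE
            top = max(rpe - slab_px, 0)
            plm[b, a] = ascan[top:rpe].mean() if rpe > top else 0.0
    return plm    # low values mark detached / hyporeflective photoreceptors

volume = np.random.rand(8, 32, 256)                   # small synthetic cube scan
print(photoreceptor_layer_map(volume, slab_px=13).shape)
```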
Simioni, Andreza Ribeiro; de Jesus, Priscila Costa Carvalho; Tedesco, Antonio Claudio
2018-06-01
Microcapsules fabricated using layer-by-layer self-assembly have unique properties, making them attractive for drug delivery applications. The technique has been improved, allowing the deposition of multiple layers of oppositely charged polyelectrolytes on spherical, colloidal templates. These templates can be decomposed after coating with multiple layers, resulting in hollow shells. In this paper, we describe a novel drug delivery system for loading photosensitizer drugs into hollow multilayered microcapsules for photoprocess applications. Manganese carbonate particles were prepared by mixing NH4HCO3 and MnSO4, and consecutive polyelectrolyte adsorption processes were performed onto these templates using poly-(sodium 4-styrene sulfonate) and poly-(allylamine hydrochloride). A photosensitizer was also incorporated into the layers. Hollow spheres were fabricated by removing the cores in acidic solution. The hollow, multilayered microcapsules were studied by scanning electron microscopy and by steady-state and time-resolved techniques. Their biological activity was evaluated in vitro with cancer cells using a conventional MTT assay. The synthesized CaCO3 microparticles were uniform, non-aggregated, and highly porous spheres. The phthalocyanine derivatives loaded in the microcapsules maintained their photophysical behaviour after encapsulation. The spectroscopic results presented here showed excellent photophysical behaviour of the studied drug. We observed a desirable increase in singlet oxygen production, which is favourable for the PDT protocol. Cell viability after treatment was determined, and the proposed microcapsules caused 80% cell death compared to the control. The results demonstrate that photosensitizer adsorption into the CaCO3 microparticle voids, together with the layer-by-layer assembly of biopolymers, provides a method for the fabrication of biocompatible microcapsules for use as biomaterials. Copyright © 2018 Elsevier B.V. All rights reserved.
Corrosion and mechanical performance of AZ91 exposed to simulated inflammatory conditions.
Brooks, Emily K; Der, Stephanie; Ehrensberger, Mark T
2016-03-01
Magnesium (Mg) and its alloys, including Mg-9%Al-1%Zn (AZ91), are biodegradable metals with potential use as temporary orthopedic implants. Invasive orthopedic procedures can provoke an inflammatory response that produces hydrogen peroxide (H2O2) and an acidic environment near the implant. This study assessed the influence of inflammation on both the corrosion and mechanical properties of AZ91. The AZ91 samples in the inflammatory protocol were immersed for three days in a complex biologically relevant electrolyte (AMEM culture media) that contained serum proteins (FBS), 150 mM of H2O2, and was titrated to a pH of 5. The control protocol immersed AZ91 samples in the same biologically relevant electrolyte (AMEM & FBS) but without H2O2 and the acid titration. After 3 days all samples were switched into fresh AMEM & FBS for an additional 3-day immersion. During the initial immersion, inflammatory protocol samples showed increased corrosion rate determined by mass loss testing, increased Mg and Al ion released to solution, and a completely corroded surface morphology as compared to the control protocol. Although corrosion in both protocols slowed once the test electrolyte solution was replaced at 3 days, the samples originally exposed to the simulated inflammatory conditions continued to display enhanced corrosion rates as compared to the control protocol. These lingering effects may indicate the initial inflammatory corrosion processes modified components of the surface oxide and corrosion film or initiated aggressive localized processes that subsequently left the interface more vulnerable to continued enhanced corrosion. The electrochemical properties of the interfaces were also evaluated by EIS, which found that the corrosion characteristics of the AZ91 samples were potentially influenced by the role of intermediate adsorption layer processes. The increased corrosion observed for the inflammatory protocol did not affect the flexural mechanical properties of the AZ91 at any time point assessed. Copyright © 2015 Elsevier B.V. All rights reserved.
Overview Environmental Assessment for the Space Based Infrared System (SBIRS)
1996-12-01
to HCl at various concentrations and durations (NASA, 1980). These insects were the honey bee, corn earworm, and lacewing. At the concentrations...bodily injury regardless of age, gender, or child-bearing status. Air Force Occupational Safety and Health Standard 48-9 establishes PELs for RF...contribute to harmful effects on the O3 layer. The actions detailed in Title VI carry out the United States obligations under the “Montreal Protocol on
An Analysis of the Computer Security Ramifications of Weakened Asymmetric Cryptographic Algorithms
2012-06-01
OpenVPN (Yonan). TLS (and by extension SSL) obviously rely on encryption to provide the confidentiality, integrity and authentication services it...Secure Shell (SSH) Transport Layer Protocol.” IETF, Jan. 2006. <tools.ietf.org/html/rfc4253> Yonan, James, and Mattock. "OpenVPN." SourceForge...11 May 2012. <http://sourceforge.net/projects/openvpn/>
A Scalable and Dynamic Testbed for Conducting Penetration-Test Training in a Laboratory Environment
2015-03-01
entry point through which to execute a payload to accomplish a higher-level goal: executing arbitrary code, escalating privileges, pivoting...Mobile Ad Hoc Network Emulator (EMANE) can emulate the entire network stack (physical- to application-layer protocols). 2. Methodology To build a...to host Windows, Linux, MacOS, Android, and other operating systems without much effort. E. A simple and automatic "restore" function: Many
2001-09-30
microscopic imaging techniques, and microscopic video-cinematography protocols for both phytoplankton and zooplankton for use in current laboratory...phytoplankton, zooplankton and bioluminescence papers, and examined data/figures for layered structures. Imaging and Cinematography: Off-the-shelf...to preview it as a work-in-progress, email me (jrines@gso.uri.edu), and I will provide you with a temporary URL. Imaging and Cinematography
ECHO Services: Foundational Middleware for a Science Cyberinfrastructure
NASA Technical Reports Server (NTRS)
Burnett, Michael
2005-01-01
This viewgraph presentation describes ECHO, an interoperability middleware solution. It uses open, XML-based APIs and supports net-centric architectures and solutions. ECHO has a set of interoperable registries for both data (metadata) and services, and provides user accounts and a common infrastructure for the registries. It is built upon a layered architecture with an extensible infrastructure for supporting community-unique protocols. It has been operational since November 2002 and is available as open source.
NASA Astrophysics Data System (ADS)
Muhi, Daniel; Dulai, Tibor; Jaskó, Szilárd
2008-11-01
SIP is a general-purpose application-layer protocol that is able to establish sessions between two or more parties. These sessions are mainly telephone calls and multimedia conferences; however, it can also be used for other purposes, such as instant messaging and presence services. SIP has a very important role in mobile communication, as more and more communicating applications are going mobile. In this paper we show how SIP can be used for instant messaging purposes.
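A bare-bones sketch of a SIP MESSAGE request (the SIP extension commonly used for instant messaging) sent over UDP; the addresses, tags and Call-ID below are placeholder values, and the destination is not a real proxy.

```python
# Bare-bones SIP MESSAGE request over UDP (the SIP extension used for instant
# messaging). Addresses, tags and the Call-ID below are placeholder values.

import socket
import uuid

def build_sip_message(from_uri, to_uri, text, local_ip, local_port):
    body = text.encode()
    return ("\r\n".join([
        f"MESSAGE {to_uri} SIP/2.0",
        f"Via: SIP/2.0/UDP {local_ip}:{local_port};branch=z9hG4bK{uuid.uuid4().hex[:8]}",
        "Max-Forwards: 70",
        f"From: <{from_uri}>;tag={uuid.uuid4().hex[:8]}",
        f"To: <{to_uri}>",
        f"Call-ID: {uuid.uuid4()}@{local_ip}",
        "CSeq: 1 MESSAGE",
        "Content-Type: text/plain",
        f"Content-Length: {len(body)}",
        "", ""])).encode() + body

request = build_sip_message("sip:alice@example.org", "sip:bob@example.org",
                            "hello from a mobile client", "127.0.0.1", 5060)
print(request.decode())
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    # Placeholder destination; point this at a real SIP proxy or user agent.
    sock.sendto(request, ("127.0.0.1", 5060))
```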
Cortical Isolation from Xenopus laevis Oocytes and Eggs.
Sive, Hazel L; Grainger, Robert M; Harland, Richard M
2007-06-01
INTRODUCTIONIn Xenopus laevis, the cortex is the layer of gelatinous cytoplasm that lies just below the plasma membrane of the egg. Rotation of the cortex relative to the deeper cytoplasm soon after fertilization is intimately linked to normal dorsal axis specification. The cortex can be dissected from the egg to analyze its composition and activity or to clone associated RNAs. This protocol describes a procedure for isolating the vegetal cortex of the fertilized egg.
Development of an e-VLBI Data Transport Software Suite with VDIF
NASA Technical Reports Server (NTRS)
Sekido, Mamoru; Takefuji, Kazuhiro; Kimura, Moritaka; Hobiger, Thomas; Kokado, Kensuke; Nozawa, Kentarou; Kurihara, Shinobu; Shinno, Takuya; Takahashi, Fujinobu
2010-01-01
We have developed a software library (KVTP-lib) for VLBI data transmission over the network with the VDIF (VLBI Data Interchange Format), the newly proposed standard VLBI data format designed for electronic data transfer over the network. The software package keeps the application layer (VDIF frame) and the transmission layer separate, so that each layer can be developed efficiently. The real-time VLBI data transmission tool sudp-send is built on the KVTP-lib library. sudp-send captures the VLBI data stream from the VSI-H interface with the K5/VSI PC-board and writes the data to file in standard Linux file format or transmits it to the network using the simple-UDP (SUDP) protocol. Another tool, sudp-recv, receives the data stream from the network and writes the data to file in a specific VLBI format (K5/VSSP, VDIF, or Mark 5B). This software system has been implemented on the Wettzell-Tsukuba baseline; evaluation before operational employment is under way.
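A compact sketch of pushing fixed-size data frames over UDP in the spirit of sudp-send; the 8-byte header used here (sequence number plus payload length), the frame size and the port are stand-ins for illustration, not the actual VDIF or SUDP header layout.

```python
# Compact sketch of pushing fixed-size VLBI data frames over UDP, in the
# spirit of sudp-send. The 8-byte header (sequence number + payload length)
# is a stand-in for illustration, not the actual VDIF or SUDP header layout.

import socket
import struct

FRAME_BYTES = 8192

def stream_file(path, host, port):
    seq = 0
    with open(path, "rb") as f, \
         socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        while True:
            payload = f.read(FRAME_BYTES)
            if not payload:
                break
            header = struct.pack("!II", seq, len(payload))   # seq, length
            sock.sendto(header + payload, (host, port))
            seq += 1

# Example (placeholder path and destination):
# stream_file("observation.vdif", "correlator.example.org", 46227)
```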
Evaluation for Practical Application of HFC Refrigerants
NASA Astrophysics Data System (ADS)
Uemura, Shigehiro; Noguchi, Masahiro; Inagaki, Sadayasu; Teraoka, Takuya
Production restriction of CFCs, which are used for refrigerators and air conditioners, has been implemented through the international mutual agreement approved by the Montreal Protocol. Because of their smaller impact on ozone layer depletion, alternative refrigerants for CFCs have included HCFC-123 and HCFC-22. However, HCFC-123 and HCFC-22 do not completely prevent ozone layer depletion. This paper presents the investigation results for HFC-125, HFC-143a, HFC-152a, and HFC-32, which prevent ozone layer depletion and are candidates as alternatives to CFCs and HCFCs. The test results for the thermal stability of these refrigerants are similar to those of CFC-12 and HCFC-22. The test results show that each refrigerant has different material compatibility. The test results of lubricant solubility show that synthetic oils are soluble in these refrigerants, but the mineral oils currently in use for CFCs and HCFCs are not. The refrigeration performance based on the calculated thermodynamic properties corresponds with that of the experimental results.
Tuning the density profile of surface-grafted hyaluronan and the effect of counter-ions.
Berts, Ida; Fragneto, Giovanna; Hilborn, Jöns; Rennie, Adrian R
2013-07-01
The present paper investigates the structure and composition of grafted sodium hyaluronan at a solid-liquid interface using neutron reflection. The solvated polymer at the surface could be described with a density profile that decays exponentially towards the bulk solution. The density profile of the polymer varied depending on the deposition protocol. A single-stage deposition resulted in denser polymer layers, while layers created with a two-stage deposition process were more diffuse and had an overall lower density. Despite the diffuse density profile, two-stage deposition leads to a higher surface excess. Addition of calcium ions causes a strong collapse of the sodium hyaluronan chains, increasing the polymer density near the surface. This effect is more pronounced on the sample prepared by two-stage deposition due to the initial less dense profile. This study provides an understanding at a molecular level of how surface functionalization alters the structure and how surface layers respond to changes in calcium ions in the solvent.