Science.gov

Sample records for alc reliable multicast

  1. A reliable multicast for XTP

    NASA Technical Reports Server (NTRS)

    Dempsey, Bert J.; Weaver, Alfred C.

    1990-01-01

    Multicast services needed for current distributed applications on LAN's fall generally into one of three categories: datagram, semi-reliable, and reliable. Transport layer multicast datagrams represent unreliable service in which the transmitting context 'fires and forgets'. XTP executes these semantics when the MULTI and NOERR mode bits are both set. Distributing sensor data and other applications in which application-level error recovery strategies are appropriate benefit from the efficiency in multidestination delivery offered by datagram service. Semi-reliable service refers to multicasting in which the control algorithms of the transport layer--error, flow, and rate control--are used in transferring the multicast distribution to the set of receiving contexts, the multicast group. The multicast defined in XTP provides semi-reliable service. Since, under a semi-reliable service, joining a multicast group means listening on the group address and entails no coordination with other members, a semi-reliable facility can be used for communication between a client and a server group as well as true peer-to-peer group communication. Resource location in a LAN is an important application domain. The term 'semi-reliable' refers to the fact that group membership changes go undetected. No attempt is made to assess the current membership of the group at any time--before, during, or after--the data transfer.
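The MULTI and NOERR mode-bit combination described above can be sketched as a bitmask check. The bit positions below are hypothetical for illustration; the real XTP header layout differs.

```python
# Sketch of selecting XTP's unreliable ("fire and forget") multicast
# datagram semantics by setting mode bits. Bit values are illustrative,
# not XTP's actual header encoding.
MULTI = 0x01   # enable multicast delivery
NOERR = 0x02   # disable error control (no retransmissions)

def make_mode(multicast_datagram: bool) -> int:
    """Return a mode word; MULTI and NOERR together give datagram multicast."""
    return (MULTI | NOERR) if multicast_datagram else 0

mode = make_mode(True)
assert mode & MULTI and mode & NOERR  # both bits set: unreliable multicast
```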

  2. WWW media distribution via Hopwise reliable multicast

    SciTech Connect

    Donnelley, J.E.

    1994-12-01

    Repeated access to WWW pages currently makes inefficient use of available network bandwidth. A Distribution Point Model is proposed where large and relatively static sets of pages (e.g. magazines or other such media) are distributed via bulk multicast to LAN distribution points for local access. Some access control issues are discussed. Hopwise Reliable Multicast (HRM) is proposed to simplify reliable multicast of non real time bulk data between LANs. HRM uses TCP for reliability and flow control on a hop by hop basis throughout a multicast distribution tree created by today's Internet MBone.
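HRM's hop-by-hop idea, with each tree node receiving a block reliably from its parent and relaying it to its children, can be sketched as a tree walk. In the real protocol each hop is a TCP connection; plain method calls stand in for them here.

```python
# Sketch of HRM-style hop-by-hop forwarding down a multicast
# distribution tree. Each hop is reliable in isolation (TCP in the
# actual proposal), so end-to-end reliability falls out of the relay.
class Node:
    def __init__(self, name):
        self.name = name
        self.children = []
        self.received = []

    def deliver(self, data):
        self.received.append(data)   # reliable hop from parent completed
        for child in self.children:  # relay over per-child reliable channel
            child.deliver(data)

root = Node("root")
lan_a, lan_b = Node("lan_a"), Node("lan_b")
leaf = Node("leaf")
root.children = [lan_a, lan_b]
lan_a.children = [leaf]
root.deliver(b"page-bundle")
assert leaf.received == [b"page-bundle"]
```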

  3. Issues in providing a reliable multicast facility

    NASA Technical Reports Server (NTRS)

    Dempsey, Bert J.; Strayer, W. Timothy; Weaver, Alfred C.

    1990-01-01

    Issues involved in point-to-multipoint communication are presented and the literature for proposed solutions and approaches surveyed. Particular attention is focused on the ideas and implementations that align with the requirements of the environment of interest. The attributes of multicast receiver groups that might lead to useful classifications, what the functionality of a management scheme should be, and how the group management module can be implemented are examined. The services that multicasting facilities can offer are presented, followed by mechanisms within the communications protocol that implements these services. The metrics of interest when evaluating a reliable multicast facility are identified and applied to four transport layer protocols that incorporate reliable multicast.

  4. The reliable multicast protocol application programming interface

    NASA Technical Reports Server (NTRS)

    Montgomery, Todd; Whetten, Brian

    1995-01-01

    The Application Programming Interface for the Berkeley/WVU implementation of the Reliable Multicast Protocol is described. This transport layer protocol is implemented as a user library that applications and software buses link against.

  5. Reliable multicasting in the Xpress Transport Protocol

    SciTech Connect

    Atwood, J.W.; Catrina, O.; Fenton, J.; Strayer, W.T.

    1996-12-01

    The Xpress Transport Protocol (XTP) is designed to meet the needs of distributed, real-time, and multimedia systems. This paper describes the genesis of recent improvements to XTP that provide mechanisms for reliable management of multicast groups, and gives details of the mechanisms used.

  6. Fault recovery in the reliable multicast protocol

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.; Whetten, Brian

    1995-01-01

    The Reliable Multicast Protocol (RMP) provides a unique, group-based model for distributed programs that need to handle reconfiguration events at the application layer. This model, called membership views, provides an abstraction in which events such as site failures, network partitions, and normal join-leave events are viewed as group reformations. RMP provides access to this model through an application programming interface (API) that notifies an application when a group is reformed as the result of some event. RMP provides applications with reliable delivery of messages over an underlying IP Multicast (12, 5) medium to other group members in a distributed environment, even in the case of reformations. A distributed application can use various Quality of Service (QoS) levels provided by RMP to tolerate group reformations. This paper explores the implementation details of the mechanisms in RMP that provide distributed applications with membership view information and fault recovery capabilities.
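A membership-view notification API of the kind the abstract describes can be sketched as a callback fired on each group reformation. The class and method names below are illustrative, not RMP's actual API.

```python
# Sketch of a membership-view callback: the application registers a
# handler that runs whenever the group reforms (join, leave, site
# failure, partition heal). Names are hypothetical, not RMP's.
class Group:
    def __init__(self, members):
        self.view_id = 0
        self.members = set(members)
        self.on_view_change = None   # application-supplied callback

    def reform(self, members):
        self.view_id += 1            # every reformation yields a new view
        self.members = set(members)
        if self.on_view_change:
            self.on_view_change(self.view_id, self.members)

views = []
g = Group({"a", "b", "c"})
g.on_view_change = lambda vid, m: views.append((vid, sorted(m)))
g.reform({"a", "b"})                 # site "c" failed: new view installed
assert views == [(1, ["a", "b"])]
```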

  7. Reliable multicast protocol specifications protocol operations

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd; Whetten, Brian

    1995-01-01

    This appendix contains the complete state tables for Reliable Multicast Protocol (RMP) Normal Operation, Multi-RPC Extensions, Membership Change Extensions, and Reformation Extensions. First the event types are presented. Afterwards, each RMP operation state, normal and extended, is presented individually and its events shown. Events in the RMP specification are one of several things: (1) arriving packets, (2) expired alarms, (3) user events, (4) exceptional conditions.
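A state-table specification of the kind listed above is naturally driven by indexing transitions on a (state, event) pair. The states and transitions below are illustrative placeholders, not RMP's actual tables; the event kinds mirror the four categories in the abstract.

```python
# Sketch of driving a protocol from state tables: a (state, event)
# pair selects the next state. Real RMP tables also attach actions;
# this sketch keeps only the state transition.
TRANSITIONS = {
    ("idle",    "user_send"):     "sending",   # user event
    ("sending", "packet_ack"):    "idle",      # arriving packet
    ("sending", "alarm_expired"): "sending",   # expired alarm: retransmit
}

def step(state, event):
    # Unlisted (state, event) pairs leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

s = step("idle", "user_send")
s = step(s, "alarm_expired")
s = step(s, "packet_ack")
assert s == "idle"
```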

  8. The Specification-Based Validation of Reliable Multicast Protocol

    NASA Technical Reports Server (NTRS)

    Wu, Yunqing

    1995-01-01

    Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP multicasting. In this report, we develop formal models for RMP using existing automated verification systems, and perform validation on the formal RMP specifications. The validation analysis helped identify some minor specification and design problems. We also use the formal models of RMP to generate a test suite for conformance testing of the implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress of the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding for the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.

  9. The Verification-based Analysis of Reliable Multicast Protocol

    NASA Technical Reports Server (NTRS)

    Wu, Yunqing

    1996-01-01

    Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP Multicasting. In this paper, we develop formal models for RMP using existing automatic verification systems, and perform verification-based analysis on the formal RMP specifications. We also use the formal models of the RMP specifications to generate a test suite for conformance testing of the RMP implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress between the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding for the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.

  10. Design, Implementation, and Verification of the Reliable Multicast Protocol. Thesis

    NASA Technical Reports Server (NTRS)

    Montgomery, Todd L.

    1995-01-01

    This document describes the Reliable Multicast Protocol (RMP) design, first implementation, and formal verification. RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service. RMP is fully and symmetrically distributed so that no site bears an undue portion of the communications load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority resilient, and totally resilient atomic delivery. These guarantees are selectable on a per-message basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, a client/server model of delivery, mutually exclusive handlers for messages, and mutually exclusive locks. It has been commonly believed that total ordering of messages can only be achieved at great performance expense. RMP refutes this belief. The first implementation of RMP has been shown to provide high throughput performance on Local Area Networks (LAN). For two or more destinations on a single LAN, RMP provides higher throughput than any other protocol that does not use multicast or broadcast technology. The design, implementation, and verification activities of RMP have occurred concurrently. This has allowed the verification to maintain a high fidelity between the design model, implementation model, and verification model. The restrictions of implementation have influenced the design earlier than in normal sequential approaches. The protocol as a whole has matured more smoothly through the inclusion of several different perspectives in product development.

  11. High-Performance, Reliable Multicasting: Foundations for Future Internet Groupware Applications

    NASA Technical Reports Server (NTRS)

    Callahan, John; Montgomery, Todd; Whetten, Brian

    1997-01-01

    Network protocols that provide efficient, reliable, and totally-ordered message delivery to large numbers of users will be needed to support many future Internet applications. The Reliable Multicast Protocol (RMP) is implemented on top of IP multicast to facilitate reliable transfer of data for replicated databases and groupware applications that will emerge on the Internet over the next decade. This paper explores some of the basic questions and applications of reliable multicasting in the context of the development and analysis of RMP.

  12. Reliable multicast protocol specifications flow control and NACK policy

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.; Whetten, Brian

    1995-01-01

    This appendix presents the flow and congestion control schemes recommended for RMP and a NACK policy based on the whiteboard tool. Because RMP uses a primarily NACK-based error detection scheme, there is no direct feedback path through which receivers can signal losses caused by low buffer space or congestion. Reliable multicast protocols also suffer from the fact that throughput for a multicast group must be divided among the members of the group. This division is usually very dynamic in nature and therefore does not lend itself well to a priori determination. These facts have led the flow and congestion control schemes of RMP to be made completely orthogonal to the protocol specification. This allows several differing schemes to be used in different environments to produce the best results. As a default, a modified sliding window scheme based on previous algorithms is suggested and described below.
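The NACK-based loss detection mentioned above rests on the receiver noticing gaps in the sequence-number stream and reporting only what is missing. A minimal sketch, with illustrative function names:

```python
# Sketch of NACK-based loss detection: the receiver tracks the next
# expected sequence number and emits a NACK listing any gap it sees,
# so the sender gets feedback only when something is lost.
def detect_losses(expected, seq):
    """Return (new_expected, missing) for an arriving sequence number."""
    missing = list(range(expected, seq))  # gap, if any, is a loss
    return max(expected, seq + 1), missing

expected, nacks = 0, []
for seq in [0, 1, 4, 5]:          # packets 2 and 3 were lost in transit
    expected, missing = detect_losses(expected, seq)
    nacks.extend(missing)
assert nacks == [2, 3]            # NACK names exactly the missing packets
```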

  13. A Loss Tolerant Rate Controller for Reliable Multicast

    NASA Technical Reports Server (NTRS)

    Montgomery, Todd

    1997-01-01

    This paper describes the design, specification, and performance of a Loss Tolerant Rate Controller (LTRC) for use in controlling reliable multicast senders. The purpose of this rate controller is not to adapt to congestion (or loss) on a per loss report basis (such as per received negative acknowledgment), but instead to use loss report information and perceived state to decide more prudent courses of action for both the short and long term. The goal of this controller is to be responsive to congestion, but not overly reactive to spurious independent loss. Performance of the controller is verified through simulation results.
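The abstract's distinction between reacting per loss report and reacting to sustained loss can be sketched as a controller that backs off only when losses persist across a window of reports. The thresholds and adjustment factors below are illustrative, not LTRC's actual parameters.

```python
# Sketch of a loss-tolerant rate controller: a single spurious loss
# report is ignored; only a full window of consecutive losses cuts
# the send rate. Factors are illustrative, not LTRC's.
class RateController:
    def __init__(self, rate, window=3, cut=0.5, grow=1.1):
        self.rate, self.window = rate, window
        self.cut, self.grow = cut, grow
        self.recent = []

    def report(self, lost: bool):
        self.recent = (self.recent + [lost])[-self.window:]
        if len(self.recent) == self.window and all(self.recent):
            self.rate *= self.cut      # sustained loss: back off
            self.recent = []
        elif not lost:
            self.rate *= self.grow     # clean report: probe upward

rc = RateController(rate=100.0)
rc.report(True)                        # one spurious loss: rate unchanged
assert rc.rate == 100.0
```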

  14. Verification and validation of a reliable multicast protocol

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.

    1995-01-01

    This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.

  15. Reliable on-demand multicast routing with congestion control in wireless ad hoc networks

    NASA Astrophysics Data System (ADS)

    Tang, Ken; Gerla, Mario

    2001-07-01

    In this paper, we address the congestion control multicast routing problem in wireless ad hoc networks through the medium access control (MAC) layer. We first introduce the Broadcast Medium Window (BMW) MAC protocol, which provides reliable delivery to broadcast packets at the MAC layer. We then extend the wireless On-Demand Multicast Routing Protocol (ODMRP) to facilitate congestion control in ad hoc networks using BMW. Through simulation, we show that ODMRP with congestion control adapts well to multicast sources that are aggressive in data transmissions.

  16. The multidriver: A reliable multicast service using the Xpress Transfer Protocol

    NASA Technical Reports Server (NTRS)

    Dempsey, Bert J.; Fenton, John C.; Weaver, Alfred C.

    1990-01-01

    A reliable multicast facility extends traditional point-to-point virtual circuit reliability to one-to-many communication. Such services can provide more efficient use of network resources, a powerful distributed name binding capability, and reduced latency in multidestination message delivery. These benefits will be especially valuable in real-time environments where reliable multicast can enable new applications and increase the availability and the reliability of data and services. We present a unique multicast service that exploits features in the next-generation, real-time transfer layer protocol, the Xpress Transfer Protocol (XTP). In its reliable mode, the service offers error, flow, and rate-controlled multidestination delivery of arbitrary-sized messages, with provision for the coordination of reliable reverse channels. Performance measurements on a single-segment Proteon ProNET-4 4 Mbps 802.5 token ring with heterogeneous nodes are discussed.

  17. An FEC Adaptive Multicast MAC Protocol for Providing Reliability in WLANs

    NASA Astrophysics Data System (ADS)

    Basalamah, Anas; Sato, Takuro

    For wireless multicast applications like multimedia conferencing, voice over IP and video/audio streaming, reliable transmission of packets within a short delivery delay is needed. Moreover, reliability is crucial to the performance of error-intolerant applications like file transfer, distributed computing, chat and whiteboard sharing. Forward Error Correction (FEC) is frequently used in wireless multicast to enhance Packet Error Rate (PER) performance, but cannot assure full reliability unless coupled with Automatic Repeat Request, forming what is known as Hybrid-ARQ. While reliable FEC can be deployed at different levels of the protocol stack, it cannot be deployed on the MAC layer of the unreliable IEEE802.11 WLAN due to its inability to exchange ACKs with multiple recipients. In this paper, we propose a Multicast MAC protocol that enhances WLAN reliability by using Adaptive FEC and study its performance through mathematical analysis and simulation. Our results show that our protocol can deliver high reliability and throughput performance.
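The simplest FEC scheme of the kind the abstract builds on is a single XOR parity packet per block: any one lost packet in the block can be rebuilt locally, and ARQ (the "hybrid" part) handles blocks with more than one loss. A minimal sketch:

```python
# Sketch of packet-level XOR FEC: one parity packet per block lets a
# receiver rebuild any single lost packet without a retransmission.
def xor_parity(block):
    """XOR a list of equal-length byte strings together."""
    parity = bytes(len(block[0]))
    for pkt in block:
        parity = bytes(a ^ b for a, b in zip(parity, pkt))
    return parity

block = [b"aaaa", b"bbbb", b"cccc"]
parity = xor_parity(block)
# Receiver got packets 0 and 2 plus parity; packet 1 is recoverable,
# because XORing the survivors with the parity cancels them out:
recovered = xor_parity([block[0], block[2], parity])
assert recovered == b"bbbb"
```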

  18. Performance Evaluation of Reliable Multicast Protocol for Checkout and Launch Control Systems

    NASA Technical Reports Server (NTRS)

    Shu, Wei Wennie; Porter, John

    2000-01-01

    The overall objective of this project is to study reliability and performance of Real Time Critical Network (RTCN) for checkout and launch control systems (CLCS). The major tasks include reliability and performance evaluation of Reliable Multicast (RM) package and fault tolerance analysis and design of dual redundant network architecture.

  19. The specification-based validation of reliable multicast protocol: Problem Report. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Wu, Yunqing

    1995-01-01

    Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP multicasting. In this report, we develop formal models for RMP using existing automated verification systems, and perform validation on the formal RMP specifications. The validation analysis helped identify some minor specification and design problems. We also use the formal models of RMP to generate a test suite for conformance testing of the implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress of the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding for the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.

  20. Analyzing and designing of reliable multicast based on FEC in distributed switch

    NASA Astrophysics Data System (ADS)

    Luo, Ting; Yu, Shaohua; Wang, Xueshun

    2008-11-01

    As businesses become more dependent on IP networks, many real-time services are being adopted, and high availability in networks has become increasingly critical. With the development of carrier-grade Ethernet, the requirements for high-speed metro Ethernet devices are increasingly urgent. In order to reach capacities of hundreds of Gbps or Tbps, most core Ethernet switches adopt a distributed control architecture and a large-capacity forwarding fabric. A distributed switch typically has one control element (CE) and many forwarding elements (FEs), so its internal communication exhibits a multicast pattern with one sender and many receivers. How to apply reliable multicast to the internal communication system of a distributed switch is therefore worth investigating. In this paper, we present the general architecture of a distributed Ethernet switch, focusing on a model of its internal communication subsystem. Based on this model's characteristics, a novel reliable multicast communication mechanism based on an FEC recovery algorithm is applied and evaluated experimentally.

  1. PVM and IP multicast

    SciTech Connect

    Dunigan, T.H.; Hall, K.A.

    1996-12-01

    This report describes a 1994 demonstration implementation of PVM that uses IP multicast. PVM's one-to-many unicast implementation of its pvm_mcast() function is replaced with reliable IP multicast. Performance of PVM using IP multicast over local and wide-area networks is measured and compared with the original unicast implementation. Current limitations of IP multicast are noted.

  2. Specification and Design of a Fault Recovery Model for the Reliable Multicast Protocol

    NASA Technical Reports Server (NTRS)

    Montgomery, Todd; Callahan, John R.; Whetten, Brian

    1996-01-01

    The Reliable Multicast Protocol (RMP) provides a unique, group-based model for distributed programs that need to handle reconfiguration events at the application layer. This model, called membership views, provides an abstraction in which events such as site failures, network partitions, and normal join-leave events are viewed as group reformations. RMP provides access to this model through an application programming interface (API) that notifies an application when a group is reformed as the result of some event. RMP provides applications with reliable delivery of messages over an underlying IP Multicast medium to other group members in a distributed environment, even in the case of reformations. A distributed application can use various Quality of Service (QoS) levels provided by RMP to tolerate group reformations. This paper explores the implementation details of the mechanisms in RMP that provide distributed applications with membership view information and fault recovery capabilities.

  3. SM_TCP: a new reliable multicast transport protocol for satellite IP networks

    NASA Astrophysics Data System (ADS)

    Liu, Gongliang; Gu, Xuemai; Li, Shizhong

    2005-11-01

    A new reliable multicast transport protocol SM_TCP is proposed for satellite IP networks in this paper. In SM_TCP, the XOR scheme with the aid of on-board buffering and processing is used for error recovery and an optimal retransmission algorithm is designed, which can reduce the recovery time by half of the RTT and minimize the number of retransmissions. In order to avoid the unnecessary decrease of congestion window in the high BER satellite channels, the occupied buffer sizes at bottlenecks are measured in adjusting the congestion window, instead of depending on the packet loss information. The average session rate of TCP sessions and of multicast sessions passing through the satellite are also measured and compared in adjusting the congestion window, which contributes to bandwidth fairness. Analysis and simulation results show fairness with TCP flows and scalability.
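The abstract's idea of adjusting the congestion window from measured bottleneck buffer occupancy, rather than from packet loss, can be sketched as a simple band controller. The thresholds below are illustrative, not SM_TCP's actual parameters.

```python
# Sketch of buffer-occupancy-based congestion control: random channel
# errors (common on high-BER satellite links) cause loss but no queue
# buildup, so they do not shrink the window. Thresholds are illustrative.
def adjust_cwnd(cwnd, buffer_occupancy, low=0.3, high=0.8):
    if buffer_occupancy > high:
        return max(1, cwnd // 2)   # real queue buildup: back off
    if buffer_occupancy < low:
        return cwnd + 1            # queue nearly empty: grow
    return cwnd                    # in band: hold steady

assert adjust_cwnd(10, 0.9) == 5   # congestion at the bottleneck: halve
assert adjust_cwnd(10, 0.1) == 11  # headroom available: grow
assert adjust_cwnd(10, 0.5) == 10  # loss without queueing: unchanged
```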

  4. An approach to verification and validation of a reliable multicasting protocol

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.

    1995-01-01

    This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test was different between the model and implementation, then the differences helped identify inconsistencies between the model and implementation. The dialogue between both teams drove the co-evolution of the model and implementation. Testing served as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP.

  5. An Approach to Verification and Validation of a Reliable Multicasting Protocol

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.

    1994-01-01

    This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or offnominal behaviors predicted by the current model. If the execution of a test was different between the model and implementation, then the differences helped identify inconsistencies between the model and implementation. The dialogue between both teams drove the co-evolution of the model and implementation. Testing served as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP.

  6. An approach to verification and validation of a reliable multicasting protocol: Extended Abstract

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.

    1995-01-01

    This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. This initial version did not handle off-nominal cases such as network partitions or site failures. Meanwhile, the V&V team concurrently developed a formal model of the requirements using a variant of SCR-based state tables. Based on these requirements tables, the V&V team developed test cases to exercise the implementation. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test in the model and implementation agreed, then the test either found a potential problem or verified a required behavior. However, if the execution of a test was different in the model and implementation, then the differences helped identify inconsistencies between the model and implementation. In either case, the dialogue between both teams drove the co-evolution of the model and implementation. We have found that this

  7. Unidata LDM-7: a Hybrid Multicast/unicast System for Highly Efficient and Reliable Real-Time Data Distribution

    NASA Astrophysics Data System (ADS)

    Emmerson, S. R.; Veeraraghavan, M.; Chen, S.; Ji, X.

    2015-12-01

    Results of a pilot deployment of a major new version of the Unidata Local Data Manager (LDM-7) are presented. The Unidata LDM was developed by the University Corporation for Atmospheric Research (UCAR) and comprises a suite of software for the distribution and local processing of data in near real-time. It is widely used in the geoscience community to distribute observational data and model output, most notably as the foundation of the Unidata Internet Data Distribution (IDD) system run by UCAR, but also in private networks operated by NOAA, NASA, USGS, etc. The current version, LDM-6, uses at least one unicast TCP connection per receiving host. With over 900 connections, the bit-rate of total outgoing IDD traffic from UCAR averages approximately 3.0 Gbps, with peak data rates exceeding 6.6 Gbps. Expected increases in data volume suggest that a more efficient distribution mechanism will be required in the near future. LDM-7 greatly reduces the outgoing bandwidth requirement by incorporating a recently-developed "semi-reliable" IP multicast protocol while retaining the unicast TCP mechanism for reliability. During the summer of 2015, UCAR and the University of Virginia conducted a pilot deployment of the Unidata LDM-7 among U.S. university participants with access to the Internet2 network. Results of this pilot program, along with comparisons to the existing Unidata LDM-6 system, are presented.

  8. Fast causal multicast

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth; Schiper, Andre; Stephenson, Pat

    1990-01-01

    A new protocol is presented that efficiently implements a reliable, causally ordered multicast primitive and is easily extended into a totally ordered one. Intended for use in the ISIS toolkit, it offers a way to bypass the most costly aspects of ISIS while benefiting from virtual synchrony. The facility scales with bounded overhead. Measured speedups of more than an order of magnitude were obtained when the protocol was implemented within ISIS. One conclusion is that systems such as ISIS can achieve performance competitive with the best existing multicast facilities--a finding contradicting the widespread concern that fault-tolerance may be unacceptably costly.
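The causally ordered delivery the abstract describes is conventionally implemented with vector clocks: each message carries the sender's clock, and a receiver delays delivery until all causally prior messages have arrived. A minimal sketch of the standard delivery test (not the actual ISIS implementation):

```python
# Sketch of the standard causal-delivery condition for vector clocks:
# a message from `sender` is deliverable when it is the next message
# from that sender and everything it causally depends on has arrived.
def deliverable(msg_vc, sender, local_vc):
    return (msg_vc[sender] == local_vc[sender] + 1 and
            all(msg_vc[k] <= local_vc[k]
                for k in msg_vc if k != sender))

local = {"p": 0, "q": 0}
m1 = {"p": 1, "q": 0}            # p's first message
m2 = {"p": 1, "q": 1}            # q's message, sent after q saw m1
assert not deliverable(m2, "q", local)   # must wait: m1 not yet delivered
assert deliverable(m1, "p", local)       # m1 is the next message from p
```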

  9. MTP: An atomic multicast transport protocol

    NASA Technical Reports Server (NTRS)

    Freier, Alan O.; Marzullo, Keith

    1990-01-01

    Multicast transport protocol (MTP); a reliable transport protocol that utilizes the multicast strategy of applicable lower layer network architectures is described. In addition to transporting data reliably and efficiently, MTP provides the client synchronization necessary for agreement on the receipt of data and the joining of the group of communicants.

  10. Issues in designing transport layer multicast facilities

    NASA Technical Reports Server (NTRS)

    Dempsey, Bert J.; Weaver, Alfred C.

    1990-01-01

    Multicasting denotes a facility in a communications system for providing efficient delivery from a message's source to some well-defined set of locations using a single logical address. While modern network hardware supports multidestination delivery, first generation Transport Layer protocols (e.g., the DoD Transmission Control Protocol (TCP) (15) and ISO TP-4 (41)) did not anticipate the changes over the past decade in underlying network hardware, transmission speeds, and communication patterns that have enabled and driven the interest in reliable multicast. Much recent research has focused on integrating the underlying hardware multicast capability with the reliable services of Transport Layer protocols. Here, we explore the communication issues surrounding the design of such a reliable multicast mechanism. Approaches and solutions from the literature are discussed, and four experimental Transport Layer protocols that incorporate reliable multicast are examined.

  11. Research on loss recovery of application layer multicast

    NASA Astrophysics Data System (ADS)

    Li, Xinfeng; Shi, Huiling; Niu, Zhenghao; Lei, Wenqing; Chen, Jin

    2014-04-01

    As an alternative to IP multicast, ALM implements multicast functionality at the application layer instead of the IP layer, which addresses the problem of non-ubiquitous deployment of IP multicast. However, the reliability of ALM is low because data is forwarded by dynamic end hosts. This paper analyzes the error and delivery features of ALM trees and presents a data loss recovery solution (called HBHLR) for application layer multicast.

  12. Evaluation Study of a Broadband Multicasting Service over a Gigabit Ethernet Delivery Network

    NASA Astrophysics Data System (ADS)

    Stergiou, E.; Meletiou, G.; Vasiliadis, D. C.; Rizos, G. E.; Margariti, S. V.

    2008-11-01

    Multicasting networks are usually implemented for delivering audio and video. Consequently, the performance evaluation of a reliable multicasting architecture is useful for such delivery systems. In this paper we analyze and present a broadband multicasting system in an Internet environment using a typical IP multicasting mechanism. The test-bed multicasting scheme was based on both the IGMP and MCOP protocols, with Gigabit Ethernet used as the delivery network at the client's segment. The evaluation study provides measurements for the two most significant performance metrics, the required bandwidth and the round-trip time (RTT) of a packet, versus the number of multicasting clients over a 2.4 Mbps multicasting service rate.

  13. A high performance totally ordered multicast protocol

    NASA Technical Reports Server (NTRS)

    Montgomery, Todd; Whetten, Brian; Kaplan, Simon

    1995-01-01

    This paper presents the Reliable Multicast Protocol (RMP). RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service such as IP Multicasting. RMP is fully and symmetrically distributed so that no site bears an undue portion of the communication load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority resilient, and totally resilient atomic delivery. These QoS guarantees are selectable on a per packet basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, an implicit naming service, mutually exclusive handlers for messages, and mutually exclusive locks. It has commonly been held that a large performance penalty must be paid in order to implement total ordering -- RMP discounts this. On SparcStation 10s on a 1250 KB/sec Ethernet, RMP provides totally ordered packet delivery to one destination at 842 KB/sec throughput and with 3.1 ms packet latency. The performance stays roughly constant independent of the number of destinations. For two or more destinations on a LAN, RMP provides higher throughput than any protocol that does not use multicast or broadcast.
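    Totally ordered delivery of the kind RMP provides can be illustrated with a receiver that releases packets only in global sequence order. This sketch assumes a fixed sequencer stamps the packets; RMP itself rotates the ordering role among members with a token, which is omitted here:

```python
import heapq

class TotalOrderReceiver:
    """Reorders multicast packets by a global sequence number."""

    def __init__(self):
        self.next_seq = 0
        self.pending = []   # min-heap of (seq, payload)

    def receive(self, seq, payload):
        """Buffer a packet; return whatever can now be delivered in order."""
        heapq.heappush(self.pending, (seq, payload))
        delivered = []
        while self.pending and self.pending[0][0] == self.next_seq:
            delivered.append(heapq.heappop(self.pending)[1])
            self.next_seq += 1
        return delivered

r = TotalOrderReceiver()
r.receive(1, "b")            # held: seq 0 not yet seen
out = r.receive(0, "a")      # releases both packets, in order
```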

  14. A proposed group management scheme for XTP multicast

    NASA Technical Reports Server (NTRS)

    Dempsey, Bert J.; Weaver, Alfred C.

    1990-01-01

    The purpose of a group management scheme is to enable its associated transfer layer protocol to be responsive to user determined reliability requirements for multicasting. Group management (GM) must assist the client process in coordinating multicast group membership, allow the user to express the subset of the multicast group that a particular multicast distribution must reach in order to be successful (reliable), and provide the transfer layer protocol with the group membership information necessary to guarantee delivery to this subset. GM provides services and mechanisms that respond to the need of the client process or process level management protocols to coordinate, modify, and determine attributes of the multicast group, especially membership. XTP GM provides a link between process groups and their multicast groups by maintaining a group membership database that identifies members in a name space understood by the underlying transfer layer protocol. Other attributes of the multicast group useful to both the client process and the data transfer protocol may be stored in the database. Examples include the relative dispersion, most recent update, and default delivery parameters of a group.

  15. Integrating concast and multicast communication models

    NASA Astrophysics Data System (ADS)

    Wen, Su; Griffioen, James; Yavatkar, Rajendra

    1998-12-01

    This paper defines a new group communication model called concast communication. Being the counterpart to multicast, concast involves multiple senders transmitting to a single receiver. Concast communication is used in a wide range of applications including collaborative applications, report-in style applications, or just end-to-end acknowledgements in a reliable multicast protocol. This paper explores the issues involved in designing concast communication services. We examine various message combination methods including concatenation, compression, and reduction to reduce the traffic loads imposed on the network and packet implosion at the receiver. Group management operations such as group creation/deletion, joining/leaving, and concast routing are discussed. We also address transmission issues such as reliable delivery, flow control, congestion control, and QoS. We conclude the paper by presenting a concast communication model that we have been developing in the context of TMTP5. The model uses concast communication to implement reliable multicast and it shares concast trees with the multicast group whenever possible to reduce overhead costs.
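    The ACK-aggregation use of concast mentioned above can be sketched as a reduction step at an interior node; the message format here is hypothetical, not the paper's:

```python
def combine_acks(acks):
    """Reduction step at an interior concast node.

    Instead of forwarding every receiver's ACK toward the single
    concast receiver (the sender, in a reliable multicast), the
    node merges them into one message carrying the lowest
    acknowledged sequence number, avoiding ACK implosion.
    Each ack is assumed to be (receiver_id, highest_seq_received).
    """
    return ("merged", min(seq for _, seq in acks))

acks = [("r1", 42), ("r2", 40), ("r3", 42)]
merged = combine_acks(acks)   # one upstream message: ("merged", 40)
```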

  16. VMCast: A VM-Assisted Stability Enhancing Solution for Tree-Based Overlay Multicast

    PubMed Central

    Gu, Weidong; Zhang, Xinchang; Gong, Bin; Zhang, Wei; Wang, Lu

    2015-01-01

    Tree-based overlay multicast is an effective group communication method for media streaming applications. However, a group member’s departure disconnects all of its descendants from the multicast tree for some time, which results in poor performance. This problem is difficult to address because an overlay multicast tree is inherently unstable. In this paper, we propose a novel stability-enhancing solution, VMCast, for tree-based overlay multicast. This solution uses two types of on-demand cloud virtual machines (VMs): multicast VMs (MVMs) and compensation VMs (CVMs). MVMs disseminate the multicast data, whereas CVMs offer streaming compensation. The VMs in the same cloud datacenter constitute a VM cluster. Each VM cluster is responsible for a service domain (VMSD), and each group member belongs to a specific VMSD. The data source delivers the multicast data to MVMs through a reliable path, and MVMs further disseminate the data to group members along domain overlay multicast trees. This approach structurally improves the stability of the overlay multicast tree. We further utilize CVM-based streaming compensation to enhance the stability of data distribution in the VMSDs. VMCast can be used as an extension to existing tree-based overlay multicast solutions to provide better services for media streaming applications. We applied VMCast to two application instances (HMTP and HCcast). The results show that it markedly enhances the stability of the data distribution. PMID:26562152

  17. VMCast: A VM-Assisted Stability Enhancing Solution for Tree-Based Overlay Multicast.

    PubMed

    Gu, Weidong; Zhang, Xinchang; Gong, Bin; Zhang, Wei; Wang, Lu

    2015-01-01

    Tree-based overlay multicast is an effective group communication method for media streaming applications. However, a group member's departure disconnects all of its descendants from the multicast tree for some time, which results in poor performance. This problem is difficult to address because an overlay multicast tree is inherently unstable. In this paper, we propose a novel stability-enhancing solution, VMCast, for tree-based overlay multicast. This solution uses two types of on-demand cloud virtual machines (VMs): multicast VMs (MVMs) and compensation VMs (CVMs). MVMs disseminate the multicast data, whereas CVMs offer streaming compensation. The VMs in the same cloud datacenter constitute a VM cluster. Each VM cluster is responsible for a service domain (VMSD), and each group member belongs to a specific VMSD. The data source delivers the multicast data to MVMs through a reliable path, and MVMs further disseminate the data to group members along domain overlay multicast trees. This approach structurally improves the stability of the overlay multicast tree. We further utilize CVM-based streaming compensation to enhance the stability of data distribution in the VMSDs. VMCast can be used as an extension to existing tree-based overlay multicast solutions to provide better services for media streaming applications. We applied VMCast to two application instances (HMTP and HCcast). The results show that it markedly enhances the stability of the data distribution. PMID:26562152

  18. A Scalable Media Multicasting Scheme

    NASA Astrophysics Data System (ADS)

    Youwei, Zhang

    IP multicast has proved infeasible to deploy; Application Layer Multicast (ALM), based on end-host multicast, is practical and more scalable than IP multicast on the Internet. In this paper, an ALM protocol called Scalable multicast for High Definition streaming media (SHD) is proposed, in which end-to-end transmission capability is fully exploited for HD media transmission without adding much control overhead. Similar to the transmission style of BitTorrent, hosts forward only part of each data piece according to their available bandwidth, which greatly improves bandwidth usage. On the other hand, some novel strategies are adopted to overcome the disadvantages of the BitTorrent protocol for streaming media transmission. Data transmission between hosts is implemented in a many-to-one style within a hierarchical architecture in most circumstances. Simulations on an Internet-like topology indicate that SHD achieves low link stress, low end-to-end latency, and good stability.

  19. Web Portal for Multicast Delivery Management.

    ERIC Educational Resources Information Center

    Mannaert, H.; De Gruyter, B.; Adriaenssens, P.

    2003-01-01

    Presents a Web portal for multicast communication management, which provides fully automatic service management with integrated provisioning of hardware equipment. Describes the software architecture, the implementation, and the application usage of the Web portal for multicast delivery. (Author/AEF)

  20. Mitochondrial phylogeography of moose (Alces alces) in North America

    USGS Publications Warehouse

    Hundertmark, Kris J.; Bowyer, R. Terry; Shields, Gerald F.; Schwartz, Charles C.

    2003-01-01

    Nucleotide variation was assessed from the mitochondrial control region of North American moose (Alces alces) to test predictions of a model of range expansion by stepping-stone dispersal and to determine whether patterns of genetic variation support the current recognition of 4 subspecies. Haplotypes formed a star phylogeny indicative of a recent expansion of populations. Values of nucleotide and haplotype diversity were low continentwide but were greatest in the central part of the continent and lowest in peripheral populations. Despite low mitochondrial diversity, moose exhibited a high degree of differentiation regionally, which was not explained by isolation by distance. Our data indicate a pattern of colonization consistent with a large central population that supplied founders to peripheral populations (other than Alaska), perhaps through rare, long-distance dispersal events (leptokurtic dispersal) rather than mass dispersal by a stepping-stone model. The colonization scenario does not account for the low haplotype diversity observed in Alaska, which may be derived from a postcolonization bottleneck. Establishment of peripheral populations by leptokurtic dispersal and subsequent local adaptation may have been sufficient for development of morphological differentiation among extant subspecies.

  1. Anaplasma phagocytophilum infection in moose (Alces alces) in Norway.

    PubMed

    Pūraitė, Irma; Rosef, Olav; Paulauskas, Algimantas; Radzijevskaja, Jana

    2015-01-01

    Anaplasma phagocytophilum is a tick-borne bacterium that infects a wide range of animal species. The aim of our study was to investigate the prevalence of A. phagocytophilum in Norwegian moose Alces alces and to characterize the bacteria by sequencing of partial msp4 and 16S rRNA genes. Hunters collected spleen samples from 99 moose of different ages during 2013 and 2014 in two areas: Aust-Agder County (n = 70), where Ixodes ricinus ticks are abundant, and Oppland County (n = 29), where ticks were either absent or their abundance was very low. A. phagocytophilum was detected only in moose from the I. ricinus-abundant area. The overall prevalence of infection according to 16S rRNA and msp4 gene-based PCR was 41.4% and 31.4%, respectively. Sequence analysis of the partial 16S rRNA and msp4 genes revealed two and eight different sequence types, respectively. Four of the eight msp4 sequence types determined in this study were unique, while the others were identical to sequences derived from other ruminants and ticks. The present study indicates that moose could be a potential wildlife reservoir of A. phagocytophilum in Norway. PMID:26428857

  2. Multicast Routing of Hierarchical Data

    NASA Technical Reports Server (NTRS)

    Shacham, Nachum

    1992-01-01

    The issue of multicast of broadband, real-time data in a heterogeneous environment, in which the data recipients differ in their reception abilities, is considered. Traditional multicast schemes, which are designed to deliver all the source data to all recipients, offer limited performance in such an environment, since they must either force the source to overcompress its signal or restrict the destination population to those who can receive the full signal. We present an approach for resolving this issue by combining hierarchical source coding techniques, which allow recipients to trade off reception bandwidth for signal quality, and sophisticated routing algorithms that deliver to each destination the maximum possible signal quality. The field of hierarchical coding is briefly surveyed and new multicast routing algorithms are presented. The algorithms are compared in terms of network utilization efficiency, lengths of paths, and the required mechanisms for forwarding packets on the resulting paths.
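    The layered-coding idea, in which each recipient trades reception bandwidth for signal quality, reduces at the receiver to choosing the deepest prefix of layers that fits its bandwidth. An illustrative sketch (the rates are made up, not from the paper):

```python
def best_layer(cumulative_rates, bandwidth):
    """Highest hierarchical layer a receiver can subscribe to.

    `cumulative_rates[i]` is the total rate of layers 0..i, where
    each layer refines the ones below it. A receiver takes the
    deepest prefix of layers that fits its reception bandwidth.
    Returns -1 if even the base layer does not fit.
    """
    best = -1
    for i, rate in enumerate(cumulative_rates):
        if rate <= bandwidth:
            best = i
    return best

rates = [64, 192, 512]               # kb/s for layers 0, 0-1, 0-2
assert best_layer(rates, 200) == 1   # base + one enhancement layer
```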

  3. Optical multicast system for data center networks.

    PubMed

    Samadi, Payman; Gupta, Varun; Xu, Junjie; Wang, Howard; Zussman, Gil; Bergman, Keren

    2015-08-24

    We present the design and experimental evaluation of an Optical Multicast System for Data Center Networks, a hardware-software system architecture that uniquely integrates passive optical splitters in a hybrid network architecture for faster and simpler delivery of multicast traffic flows. An application-driven control plane manages the integrated optical and electronic switched traffic routing in the data plane layer. The control plane includes a resource allocation algorithm to optimally assign optical splitters to the flows. The hardware architecture is built on a hybrid network with both Electronic Packet Switching (EPS) and Optical Circuit Switching (OCS) networks to aggregate Top-of-Rack switches. The OCS is also the connectivity substrate of splitters to the optical network. The optical multicast system implementation requires only commodity optical components. We built a prototype and developed a simulation environment to evaluate the performance of the system for bulk multicasting. Experimental and numerical results show simultaneous delivery of multicast flows to all receivers with steady throughput. Compared to IP multicast, its electronic counterpart, optical multicast performs with less protocol complexity and reduced energy consumption. Compared to peer-to-peer multicast methods, it achieves at minimum an order of magnitude higher throughput for flows under 250 MB with significantly less connection overhead. Furthermore, for delivering 20 TB of data containing only 15% multicast flows, it reduces the total delivery energy consumption by 50% and improves latency by 55% compared to a data center with a sole non-blocking EPS network. PMID:26368190

  4. Key management approach of multicast

    NASA Astrophysics Data System (ADS)

    Jiang, Zhen; Wang, Xi-lian; Zhang, Hong-ke; Zhang, Li-yong

    2002-09-01

    A key management approach for multicast is presented in this paper. It is based on assigning a key to every group member through a key service center. In schemes where members join, leave, or are deleted, the key service center must redistribute a new key by unicast each time; the larger the membership, the greater the expense. In the approach presented here, when membership changes, the key service center still distributes the new key through multicast, and an ID is assigned to every member to identify its transmitted messages, implementing data-origin authentication. The essential principle of this approach is to distribute a key generator to each member, for example a random-number generator based on a certain algorithm, with every member storing a seed table. Keys can be renewed automatically as time goes by or immediately on demand. Replacing unicast with multicast for key renewal decreases the cost. The approach is suitable not only for centralized key management schemes with few members but also for separated key management schemes with large and frequently changing groups.
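    The core idea of distributing a key generator rather than the keys themselves can be sketched as follows; the hash construction and epoch encoding are assumptions for illustration, not the paper's algorithm:

```python
import hashlib

def next_key(seed, epoch):
    """Derive the group key for a given epoch from a member's seed.

    With a shared seed distributed once, every member can derive
    the new key locally when the key center multicasts only the
    epoch number, instead of unicasting a fresh key to each
    member individually.
    """
    return hashlib.sha256(seed + epoch.to_bytes(8, "big")).hexdigest()

k1 = next_key(b"member-seed", 1)
k2 = next_key(b"member-seed", 2)   # keys differ across epochs
```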

  5. Meningoencephalitis associated with disseminated sarcocystosis in a free-ranging moose (Alces alces) calf.

    PubMed

    Ravi, Madhu; Patel, Jagdish; Pybus, Margo; Coleman, James K; Childress, April L; Wellehan, James F X

    2015-08-01

    A wild moose (Alces alces) calf was presented for necropsy due to severe neurologic signs. Histopathologic examination revealed multisystemic inflammation with intralesional mature and immature schizonts. Schizonts in the brain reacted positively to Sarcocystis spp. polyclonal antibodies. Gene sequencing of PCR-amplified DNA identified the species as Sarcocystis alceslatrans. PMID:26246636

  6. Meningoencephalitis associated with disseminated sarcocystosis in a free-ranging moose (Alces alces) calf

    PubMed Central

    Ravi, Madhu; Patel, Jagdish; Pybus, Margo; Coleman, James K.; Childress, April L.; Wellehan, James F.X.

    2015-01-01

    A wild moose (Alces alces) calf was presented for necropsy due to severe neurologic signs. Histopathologic examination revealed multisystemic inflammation with intralesional mature and immature schizonts. Schizonts in the brain reacted positively to Sarcocystis spp. polyclonal antibodies. Gene sequencing of PCR-amplified DNA identified the species as Sarcocystis alceslatrans. PMID:26246636

  7. A geographical cluster of malignant catarrhal fever in Moose (Alces alces)in Norway

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Three cases of lethal sheep-associated malignant catarrhal fever (SA-MCF) in free-ranging moose (Alces alces) were diagnosed in Lesja, Norway, December 2008 – February 2010. The diagnosis was based on PCR identification of ovine herpesvirus 2 DNA (n=3) and typical histopathological lesions (n=1). To...

  8. Resurrection and redescription of Varestrongylus alces (Nematoda; Protostrongylidae), a lungworm of Eurasian elk (Alces alces), with a report on associated pathology

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Varestrongylus alces Demidova & Naumitscheva, 1953 is resurrected for protostrongylid nematodes of Eurasian elk in Europe. Descriptions of males (11.36-16.95 mm) and females (16.25- 21.52 mm) are based on specimens collected from the terminal bronchioles in the lungs of Eurasian elk, Alces alces (L...

  9. Many-to-many multicast routing schemes under a fixed topology.

    PubMed

    Ding, Wei; Wang, Hongfa; Wei, Xuerui

    2013-01-01

    Many-to-many multicast routing can be extensively applied in computer or communication networks supporting various continuous multimedia applications. The paper focuses on the case where all users share a common communication channel while each user is both a sender and a receiver of messages in multicasting as well as an end user. In this case, the multicast tree appears as a terminal Steiner tree (TeST). The problem of finding a TeST with a quality-of-service (QoS) optimization is frequently NP-hard; however, finding a many-to-many multicast tree with QoS optimization becomes tractable under a fixed topology. In this paper, we are concerned with three QoS optimization objectives for the multicast tree: minimum cost, minimum diameter, and maximum reliability. Each of the three optimization problems is considered in two versions, centralized and decentralized. The paper uses dynamic programming to devise an exact algorithm for the centralized and decentralized versions of each optimization problem. PMID:23589706

  10. Multicast Reduction Network Source Code

    SciTech Connect

    Lee, G.

    2006-12-19

    MRNet is a software tree-based overlay network developed at the University of Wisconsin, Madison that provides a scalable communication mechanism for parallel tools. MRNet uses a tree topology of networked processes between a user tool and distributed tool daemons. This tree topology allows scalable multicast communication from the tool to the daemons. The internal nodes of the tree can be used to distribute computation and analysis on data sent from the tool daemons to the tool. This release covers minor implementation changes to port this software to the BlueGene/L architecture and for use with a new implementation of the Dynamic Probe Class Library.
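    The aggregation performed at the internal nodes of such an overlay can be illustrated with a recursive reduction over a process tree; the topology and combine function here are illustrative, not MRNet's API:

```python
def tree_reduce(tree, node, leaf_data, combine):
    """Aggregate daemon data up an overlay tree.

    `tree` maps each internal node to its children; leaves carry
    values in `leaf_data`. Internal nodes apply `combine`, so the
    root tool receives one aggregated result instead of one
    message per daemon.
    """
    children = tree.get(node, [])
    if not children:
        return leaf_data[node]
    return combine(tree_reduce(tree, c, leaf_data, combine) for c in children)

# Two internal nodes fan in results from three tool daemons.
topology = {"root": ["a", "b"], "a": ["d1", "d2"], "b": ["d3"]}
total = tree_reduce(topology, "root", {"d1": 5, "d2": 7, "d3": 3}, sum)
```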

  11. Point-to-Point Multicast Communications Protocol

    NASA Technical Reports Server (NTRS)

    Byrd, Gregory T.; Nakano, Russell; Delagi, Bruce A.

    1987-01-01

    This paper describes a protocol to support point-to-point interprocessor communications with multicast. Dynamic, cut-through routing with local flow control is used to provide a high-throughput, low-latency communications path between processors. In addition, multicast transmissions are available, in which copies of a packet are sent to multiple destinations using common resources as much as possible. Special packet terminators and selective buffering are introduced to avoid deadlock during multicasts. A simulated implementation of the protocol is also described.

  12. WDM Network and Multicasting Protocol Strategies

    PubMed Central

    Zaim, Abdul Halim

    2014-01-01

    Optical technology is gaining extensive attention and ever-increasing improvement because of the huge amount of network traffic caused by the growing number of internet users and their rising demands. With wavelength division multiplexing (WDM) it is easier to take advantage of optical networks, and together with optical burst switching (OBS), these technologies are the best choices for constructing WDM networks with low delay rates and better data transparency. Furthermore, multicasting in WDM is an urgent solution for bandwidth-intensive applications. In this paper, a new multicasting protocol with OBS is proposed. The protocol depends on a leaf-initiated structure. The network is composed of a source, ingress switches, intermediate switches, edge switches, and client nodes. The performance of the protocol is examined with the Just Enough Time (JET) and Just In Time (JIT) reservation protocols. The paper also surveys recent advances in WDM multicasting in optical networks under three common headings: broadcast-and-select networks, wavelength-routed networks, and OBS networks. Multicast routing protocols are briefly summarized, and optical burst switched WDM networks are investigated with the proposed multicast schemes. PMID:24744683

  13. WDM network and multicasting protocol strategies.

    PubMed

    Kirci, Pinar; Zaim, Abdul Halim

    2014-01-01

    Optical technology is gaining extensive attention and ever-increasing improvement because of the huge amount of network traffic caused by the growing number of internet users and their rising demands. With wavelength division multiplexing (WDM) it is easier to take advantage of optical networks, and together with optical burst switching (OBS), these technologies are the best choices for constructing WDM networks with low delay rates and better data transparency. Furthermore, multicasting in WDM is an urgent solution for bandwidth-intensive applications. In this paper, a new multicasting protocol with OBS is proposed. The protocol depends on a leaf-initiated structure. The network is composed of a source, ingress switches, intermediate switches, edge switches, and client nodes. The performance of the protocol is examined with the Just Enough Time (JET) and Just In Time (JIT) reservation protocols. The paper also surveys recent advances in WDM multicasting in optical networks under three common headings: broadcast-and-select networks, wavelength-routed networks, and OBS networks. Multicast routing protocols are briefly summarized, and optical burst switched WDM networks are investigated with the proposed multicast schemes. PMID:24744683

  14. Compressed sensing based video multicast

    NASA Astrophysics Data System (ADS)

    Schenkel, Markus B.; Luo, Chong; Frossard, Pascal; Wu, Feng

    2010-07-01

    We propose a new scheme for wireless video multicast based on compressed sensing. It has the property of graceful degradation and, unlike systems adhering to traditional separate coding, it does not suffer from a cliff effect. Compressed sensing is applied to generate measurements of equal importance from a video such that a receiver with a better channel will naturally have more information at hand to reconstruct the content without penalizing others. We experimentally compare different random matrices at the encoder side in terms of their performance for video transmission. We further investigate how properties of natural images can be exploited to improve the reconstruction performance by transmitting a small amount of side information. And we propose a way of exploiting inter-frame correlation by extending only the decoder. Finally we compare our results with a different scheme targeting the same problem with simulations and find competitive results for some channel configurations.
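    The equal-importance property of compressed-sensing measurements comes from projecting the frame onto i.i.d. random rows. A toy sketch of the measurement side only (reconstruction, which requires an l1 solver, is omitted; the signal and matrix are illustrative):

```python
import random

def measure(frame, num_measurements, seed=0):
    """Generate compressed-sensing measurements of a flattened frame.

    Each measurement is the inner product of the frame with a
    pseudo-random Gaussian row. Because all rows are drawn from
    the same distribution, every measurement carries roughly equal
    information, so a receiver that obtains more of them simply
    reconstructs better -- the graceful degradation the scheme
    relies on.
    """
    rng = random.Random(seed)
    return [sum(rng.gauss(0, 1) * x for x in frame)
            for _ in range(num_measurements)]

frame = [0.0, 3.0, 0.0, 1.5]   # a sparse toy signal
y = measure(frame, 3)          # three equally important measurements
```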

  15. Multicast Reduction Network Source Code

    2006-12-19

    MRNet is a software tree-based overlay network developed at the University of Wisconsin, Madison that provides a scalable communication mechanism for parallel tools. MRNet uses a tree topology of networked processes between a user tool and distributed tool daemons. This tree topology allows scalable multicast communication from the tool to the daemons. The internal nodes of the tree can be used to distribute computation and analysis on data sent from the tool daemons to the tool. This release covers minor implementation changes to port this software to the BlueGene/L architecture and for use with a new implementation of the Dynamic Probe Class Library.

  16. IPTV multicast with peer-assisted lossy error control

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd

    2010-07-01

    Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over the error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate impulse noise in DSL links. In existing systems, the retransmission function is provided by the Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution where the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how the packet repairs can be delivered in a timely, reliable and decentralized manner using the combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves resistance to impulse noise.
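    The multicast FEC layer described above can be illustrated with the simplest erasure code, a single XOR parity per block, which repairs exactly one lost packet; real IPTV deployments typically use stronger codes, and anything beyond the code's correction capability falls to retransmission:

```python
def xor_parity(packets):
    """Parity packet for one FEC block: the XOR of all payloads."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

def recover(received, parity):
    """Rebuild the single lost packet of a block from the parity.

    XORing the parity with every received packet cancels them out,
    leaving exactly the missing payload.
    """
    missing = parity
    for p in received:
        missing = bytes(a ^ b for a, b in zip(missing, p))
    return missing

block = [b"abcd", b"wxyz", b"1234"]
par = xor_parity(block)
lost = recover([block[0], block[2]], par)   # recovers b"wxyz"
```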

  17. The alc-GR system: a modified alc gene switch designed for use in plant tissue culture.

    PubMed

    Roberts, Gethin R; Garoosi, G Ali; Koroleva, Olga; Ito, Masaki; Laufs, Patrick; Leader, David J; Caddick, Mark X; Doonan, John H; Tomsett, A Brian

    2005-07-01

    The ALCR/alcA (alc) two-component, ethanol-inducible gene expression system provides stringent control of transgene expression in genetically modified plants. ALCR is an ethanol-activated transcription factor that can drive expression from the ALCR-responsive promoter (alcA). However, the alc system has been shown to have constitutive expression when used in plant callus or cell suspension cultures, possibly resulting from endogenous inducer produced in response to lowered oxygen availability. To widen the use of the alc system in plant cell culture conditions, the receptor domain of the rat glucocorticoid receptor (GR) was translationally fused to the C terminus of ALCR to produce ALCR-GR, which forms the basis of a glucocorticoid-inducible system (alc-GR). The alc-GR switch system was tested in tobacco (Nicotiana tabacum) Bright Yellow-2 suspension cells using a constitutively expressed ALCR-GR with four alternative alcA promoter-driven reporter genes: beta-glucuronidase, endoplasmic reticulum-targeted green fluorescent protein, haemagglutinin, and green fluorescent protein-tagged Arabidopsis (Arabidopsis thaliana) Arath;CDKA;1 cyclin-dependent kinase. Gene expression was shown to be stringently dependent on the synthetic glucocorticoid dexamethasone and, in cell suspensions, no longer required ethanol for induction. Thus, the alc-GR system allows tight control of alcA-driven genes in cell culture and complements the conventional ethanol switch used in whole plants. PMID:16010000

  19. Network coding for quantum cooperative multicast

    NASA Astrophysics Data System (ADS)

    Xu, Gang; Chen, Xiu-Bo; Li, Jing; Wang, Cong; Yang, Yi-Xian; Li, Zongpeng

    2015-11-01

    Cooperative communication is attracting substantial research attention in quantum information theory. However, for a given network it is still unknown whether quantum cooperative communication can be performed successfully. In this paper, we investigate network coding for quantum cooperative multicast (QCM) over the classic butterfly network. We first introduce a definition of QCM that captures the basic idea of quantum cooperative communication and reflects the characteristics of classical multicast over a specific network structure. Next, we design a QCM protocol for two-level systems and generalize it to d-dimensional Hilbert space. Our protocols offer significant advantages in resource cost and in compatibility with classical multicast. In addition, the success probability, which depends only on the coefficients of the initial quantum states, is analyzed carefully. In particular, if the source nodes choose quantum equatorial states, the success probability reaches 1.

  20. Integrated nonlinear interferometer with wavelength multicasting functionality.

    PubMed

    Yang, Weili; Yu, Yu; Zhang, Xinliang

    2016-08-01

    Nonlinear interference based on four-wave mixing (FWM) is extremely attractive due to its phase sensitivity. On the other hand, wavelength multicasting, which supports point-to-multipoint data connections, is a key functionality for increasing network efficiency and simplifying the transmitter and receiver in wavelength-division multiplexing systems. We propose and experimentally demonstrate a nonlinear interferometer with wavelength multicasting functionality based on single-stage FWM in an integrated silicon waveguide. With a three-pump and dual-signal input, four phase-sensitive idlers are obtained at the interferometer output. For a proof-of-concept application, we further theoretically demonstrate a multicasting logic exclusive-OR (XOR) gate for both intensity- and phase-modulated signals. The proposed scheme could potentially be applied in various on-chip applications in future optical communication systems. PMID:27505786

  1. The infection of reintroduced ruminants - Bison bonasus and Alces alces - with Anaplasma phagocytophilum in northern Poland.

    PubMed

    Karbowiak, Grzegorz; Víchová, Bronislava; Werszko, Joanna; Demiaszkiewicz, Aleksander W; Pyziel, Anna M; Sytykiewicz, Hubert; Szewczyk, Tomasz; Peťko, Branislav

    2015-12-01

    The north-eastern part of Poland is considered an area of high risk for tick-borne diseases, including infection with agents of human granulocytic ehrlichiosis (HGE). The etiological agent of HGE is Anaplasma phagocytophilum, for which species of the families Cervidae and Bovidae serve as the animal reservoir in the environment. European bison (Bison bonasus) and elk (Alces alces) are large ruminant species reintroduced to the forests of Central Europe after many decades of absence. In foci of zoonotic diseases they can act as natural reservoirs of pathogens; however, their status as protected animals means that studies of them have been rare and fragmentary. B. bonasus was studied in Białowieża Primeval Forest and A. alces in Biebrza National Park. PCR amplifications were performed using primers amplifying the end of the groES gene, the intergenic spacer, and approximately two-thirds of the groEL gene in the first round, and primers spanning a 395-bp region of the groEL gene in the second round. Positive results were obtained in B. bonasus and A. alces, with infection prevalences of 66.7% and 20.0%, respectively. Randomly selected samples were sequenced; the sequences were compared with GenBank entries using BLASTN 2.2.13 and determined to be A. phagocytophilum. The results presented here are the first record of the presence of Anaplasma phagocytophilum in A. alces, and at the same time confirm previous observations of infection of B. bonasus with A. phagocytophilum. PMID:26408585

  2. Multicast Routing in Structured Overlays and Hybrid Networks

    NASA Astrophysics Data System (ADS)

    Wählisch, Matthias; Schmidt, Thomas C.

    Key-based routing has enabled efficient group communication on the application or service-middleware layer, stimulated by the need of applications to access multicast. These developments follow a continuing debate about network-layer multicast that has lasted for about 30 years of the Internet's history. The IP host group model today still faces a strongly divergent state of deployment. In this chapter, we first review the key concepts of multicast and broadcast data distribution on structured overlays. Second, we perform a comprehensive theoretical analysis of the different distribution trees constructed on top of a key-based routing layer. Characteristic performance measures of the multicast approaches are compared in detail and major structural differences are identified. Overlay multicast overcomes deployment problems at the price of a performance penalty. Hybrid approaches, which dynamically combine multicast in overlay and underlay, adaptively optimize group communication. We discuss current schemes, along with their integration into common multicast routing protocols, in the third part of this chapter. Finally, we reconsider and enhance approaches to a common API for group communication, which serves the requirements of data distribution and maintenance for multicast and broadcast on a middleware abstraction layer and, in particular, facilitates hybrid multicast schemes.

  3. Harvest-induced phenotypic selection in an island population of moose, Alces alces.

    PubMed

    Kvalnes, Thomas; Saether, Bernt-Erik; Haanes, Hallvard; Røed, Knut H; Engen, Steinar; Solberg, Erling J

    2016-07-01

    Empirical evidence strongly indicates that human exploitation has frequently led to rapid evolutionary changes in wild populations, yet the mechanisms involved are often poorly understood. Here, we applied a recently developed demographic framework for analyzing selection to data from a 20-year study of a wild population of moose, Alces alces. In this population, a genetic pedigree has been established all the way back to founders. We demonstrate harvest-induced directional selection for delayed birth dates in males and reduced calf body mass in females. During the study period, birth date was delayed by 0.81 days per year for both sexes, whereas no significant changes occurred in calf body mass. Quantitative genetic analyses indicated that both traits harbored significant additive genetic variance. These results show that selective harvesting can induce strong selection that opposes natural selection. This may cause evolution of less favorable phenotypes that become maladaptive once harvesting ceases. PMID:27174031

  4. Lightweight causal and atomic group multicast

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Schiper, Andre; Stephenson, Pat

    1991-01-01

    The ISIS toolkit is a distributed programming environment based on support for virtually synchronous process groups and group communication. A suite of protocols is presented to support this model. The approach revolves around a multicast primitive, called CBCAST, which implements a fault-tolerant, causally ordered message delivery. This primitive can be used directly or extended into a totally ordered multicast primitive, called ABCAST. It normally delivers messages immediately upon reception, and imposes a space overhead proportional to the size of the groups to which the sender belongs, usually a small number. It is concluded that process groups and group communication can achieve performance and scaling comparable to that of a raw message transport layer. This finding contradicts the widespread concern that this style of distributed computing may be unacceptably costly.
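
    The causally ordered delivery that CBCAST provides can be sketched with vector clocks (a simplified illustration of the delivery rule, not the ISIS implementation): a message from sender s carrying vector timestamp V is deliverable at a process with local clock L iff V[s] == L[s] + 1 and V[k] <= L[k] for every k != s; otherwise it is buffered until its causal predecessors arrive.

```python
# Sketch of CBCAST-style causal delivery with vector clocks (simplified
# illustration of the idea, not the ISIS toolkit implementation).

class Process:
    def __init__(self, pid, n):
        self.pid = pid
        self.clock = [0] * n          # one entry per group member
        self.buffer = []              # received but not yet deliverable
        self.delivered = []

    def send(self, payload):
        self.clock[self.pid] += 1
        return (self.pid, list(self.clock), payload)

    def _deliverable(self, sender, ts):
        # next message from sender, and no missing causal predecessors
        return ts[sender] == self.clock[sender] + 1 and all(
            ts[k] <= self.clock[k] for k in range(len(ts)) if k != sender)

    def receive(self, msg):
        self.buffer.append(msg)
        progress = True
        while progress:               # deliver everything that became ready
            progress = False
            for m in list(self.buffer):
                sender, ts, payload = m
                if self._deliverable(sender, ts):
                    self.buffer.remove(m)
                    self.clock[sender] = ts[sender]
                    self.delivered.append(payload)
                    progress = True

# m2 causally follows m1, so a receiver that sees m2 first buffers it.
p0, p1, p2 = Process(0, 3), Process(1, 3), Process(2, 3)
m1 = p0.send("m1")
p1.receive(m1)
m2 = p1.send("m2")                    # sent after p1 delivered m1
p2.receive(m2)                        # out of causal order: buffered
p2.receive(m1)                        # both become deliverable, in order
print(p2.delivered)                   # ['m1', 'm2']
```

    A totally ordered ABCAST can then be layered on top, e.g. by funneling ordering decisions through a sequencer, which is one reason the causal primitive is the cheaper default.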

  5. Multi-area layered multicast scheme for MPLS networks

    NASA Astrophysics Data System (ADS)

    Ma, Yajie; Yang, Zongkai; Wang, Yuming; Chen, Jingwen

    2005-02-01

    Multi-protocol label switching (MPLS) spans both layer 2 and layer 3. It was proposed to overcome the cost of performing complex longest-prefix matching in layer-3 routing by using short, fixed-length labels. The MPLS community has put more effort into label switching of unicast IP traffic than into MPLS multicast mechanisms. The reasons are the higher label consumption, the dynamic mapping of L3 multicast trees to L2 LSPs, and the 20-bit label of the shim header, which offers a much smaller address space than IPv4 addresses. On the other hand, heterogeneity of node capability degrades the overall performance of a multicast group. To achieve both scalability and support for heterogeneity in MPLS networks, a novel MPLS-based Multi-area Layered Multicast Scheme (MALM) is proposed. Unlike existing schemes, which focus on aggregating multicast streams, we construct the multicast tree based on virtual topology aggregation. The MPLS area is divided into sub-areas to form a hierarchical virtual topology, and the multicast group is reconstructed into multiple layers according to node capability. At the same time, the label stack is used to save label space. For stability of the MALM protocol, a multi-layer protection scheme is also discussed. The experimental results show that the proposed scheme saves label space and considerably reduces the size of the multicast forwarding table.
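
    The label-stack mechanism that MALM exploits can be sketched simply (generic MPLS push/swap/pop behavior, not MALM's specific encoding; the label values are invented): an ingress router pushes an outer label shared by many flows, transit routers rewrite only that outer label, and the egress pops it to expose the inner, sub-area label again.

```python
# Generic MPLS label-stack sketch: a packet carries a stack of 20-bit labels;
# ingress pushes, transit swaps the top label, egress pops. A shared outer
# label can aggregate many inner flows, which is how stacking saves labels.

def push(stack, label):
    assert 0 <= label < 2 ** 20        # shim-header label field is 20 bits
    return [label] + stack

def swap(stack, label):
    return [label] + stack[1:]

def pop(stack):
    return stack[1:]

stack = push([], 18)        # inner label: flow within the sub-area
stack = push(stack, 301)    # outer label: shared tunnel across the backbone
stack = swap(stack, 417)    # transit LSR rewrites only the outer label
stack = pop(stack)          # backbone egress exposes the inner label again
print(stack)                # [18]
```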

  6. Extending ACNET communication types to include multicast semantics

    SciTech Connect

    Neswold, R.; King, C.; /Fermilab

    2009-10-01

    In Fermilab's accelerator control system, multicast communication wasn't properly incorporated into ACNET's transport layer or its programming API. We present recent work that makes multicasts fit naturally into the ACNET network environment. We also show how these additions provide high availability for ACNET services.
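
    ACNET's own wire protocol is not shown here, but the socket-level mechanics of joining an IP multicast group that any such transport layer builds on are standard; a minimal sketch (the group address and port are arbitrary example values, not ACNET's):

```python
# Minimal IP multicast receiver: join a group address and listen for datagrams.
# The group/port below are arbitrary example values, not ACNET's.
import socket
import struct

GROUP, PORT = "239.128.4.1", 6802

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP takes the group address plus the local interface address.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    joined = True
except OSError:
    joined = False   # e.g. no multicast-capable interface on this host

# data, addr = sock.recvfrom(2048)   # would block until a multicast arrives
sock.close()
```

    Senders need no membership at all; they simply transmit datagrams to the group address, which is what makes multicast attractive for one-to-many control-system traffic.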

  7. Adaptive power-controllable orbital angular momentum (OAM) multicasting

    PubMed Central

    Li, Shuhui; Wang, Jian

    2015-01-01

    We report feedback-assisted adaptive multicasting from a single Gaussian mode to multiple orbital angular momentum (OAM) modes using a single phase-only spatial light modulator loaded with a complex phase pattern. By designing and optimizing the complex phase pattern through the adaptive correction of feedback coefficients, the power of each multicast OAM channel can be arbitrarily controlled. We experimentally demonstrate power-controllable multicasting from a single Gaussian mode to two and six OAM modes with different target power distributions. Equalized power multicasting, “up-down” power multicasting and “ladder” power multicasting are realized in the experiment. The difference between measured power distributions and target power distributions is assessed to be less than 1 dB. Moreover, we demonstrate data-carrying OAM multicasting by employing orthogonal frequency-division multiplexing 64-ary quadrature amplitude modulation (OFDM 64-QAM) signal. The measured bit-error rate curves and observed optical signal-to-noise ratio penalties show favorable operation performance of the proposed adaptive power-controllable OAM multicasting. PMID:25989251
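
    The complex-phase-pattern idea can be sketched numerically (an illustration of the principle, not the authors' adaptive optimization): a phase-only hologram that multicasts a Gaussian input to OAM modes l1, l2, ... can be approximated by taking the argument of a weighted superposition of helical phases exp(i·l·θ), with the weights playing the role of the feedback-adjusted coefficients that set each channel's power.

```python
# Sketch: phase-only pattern for multicasting to several OAM modes, formed by
# keeping only the phase of a weighted sum of helical wavefronts exp(i*l*theta).
# The mode set and weights are illustrative; in the experiment the weights are
# tuned adaptively from power-measurement feedback.
import cmath
import math

def multicast_phase(theta, modes, weights):
    """Phase (radians) of the combined hologram at azimuthal angle theta."""
    field = sum(w * cmath.exp(1j * l * theta) for l, w in zip(modes, weights))
    return cmath.phase(field)

modes = [-3, 1, 4]            # target OAM topological charges
weights = [1.0, 1.0, 1.0]     # equalized-power multicasting
pattern = [multicast_phase(2 * math.pi * k / 256, modes, weights)
           for k in range(256)]

# Degenerate check: with a single mode l the pattern is just the helix l*theta.
single = multicast_phase(0.5, [4], [1.0])
assert abs(single - 2.0) < 1e-9
```

    Discarding the amplitude of the superposition diverts some power into unwanted diffraction orders, which is precisely why feedback-based correction of the weights is needed to hit target power distributions within ~1 dB.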

  9. Demonstration of optical multicasting using Kerr frequency comb lines.

    PubMed

    Bao, Changjing; Liao, Peicheng; Kordts, Arne; Karpov, Maxim; Pfeiffer, Martin H P; Zhang, Lin; Yan, Yan; Xie, Guodong; Cao, Yinwen; Almaiman, Ahmed; Ziyadi, Morteza; Li, Long; Zhao, Zhe; Mohajerin-Ariaei, Amirhossein; Wilkinson, Steven R; Tur, Moshe; Fejer, Martin M; Kippenberg, Tobias J; Willner, Alan E

    2016-08-15

    We experimentally demonstrate optical multicasting using Kerr frequency combs generated from a Si3N4 microresonator. We obtain Kerr combs in two states with different noise properties by varying the pump wavelength in the resonator and investigate the effect of Kerr combs on multicasting. Seven-fold multicasting of 20 Gbaud quadrature phase-shift-keyed signals and four-fold multicasting of 16-quadrature amplitude modulation signals have been achieved when low-phase-noise combs are input into a periodically poled lithium niobate waveguide. In addition, we find that the wavelength conversion efficiency in the PPLN waveguide for chaotic combs with high noise is similar to that for low-noise combs, while the signal quality of the multicast copy is significantly degraded. PMID:27519112

  10. Effects of simulated moose Alces alces browsing on the morphology of rowan Sorbus aucuparia

    USGS Publications Warehouse

    Jager, N.R.D.; Pastor, J.

    2010-01-01

    In much of northern Sweden moose Alces alces browse rowan Sorbus aucuparia heavily and commonly revisit previously browsed plants. Repeated browsing of rowan by moose has created some concern for its long-term survival in heavily browsed areas. We therefore measured how four years of simulated moose browsing at four population densities (0, 10, 30 and 50 moose/1,000 ha) changed plant height, crown width, available bite mass, the number of bites per plant and per plant forage biomass of rowan saplings. Increased biomass removal led to a significant decline in plant height (P < 0.001), but a significant increase in the number of bites per plant (P = 0.012). Increases in the number of bites per plant more than compensated for weak decreases in bite mass, leading to a weak increase in per plant forage biomass (P = 0.072). With the decline in plant height and increase in the number of stems per plant, a greater number of bites remain within the height reach of moose relative to unbrowsed controls. Moose therefore stand to benefit from revisiting previously browsed plants, which may result in feeding loops between moose and previously browsed rowan saplings. © 2010 Wildlife Biology, NKV.

  11. The first detection of species of Babesia Starcovici, 1893 in moose, Alces alces (Linnaeus), in Norway.

    PubMed

    Puraite, Irma; Rosef, Olav; Radzijevskaja, Jana; Lipatova, Indre; Paulauskas, Algimantas

    2016-01-01

    Babesiosis is an emerging zoonotic disease and various wildlife species are reservoir hosts for zoonotic species of Babesia Starcovici, 1893. The objective of the present study was to investigate the presence and prevalence of Babesia spp. in moose Alces alces (Linnaeus) in two regions of Norway. A total of 99 spleen samples were collected from animals of various ages from an area with the occurrence of the tick Ixodes ricinus (Linnaeus, 1758), and from an area where ticks are known to be absent. Infection was detected by the amplification of different regions of the 18S rRNA gene using two different PCR primer sets specific to Babesia. Babesia spp. were found in the spleen samples of four moose. All Babesia-infected animals were from an area where ticks occur, giving an infection rate of 6% (4 of 70). Babesia-positive samples were obtained from a five-month-old moose calf and three adults. Two Babesia species, Babesia capreoli (Enigk et Friedhoff, 1962) and a B. odocoilei-like species, were identified. Co-infection with Anaplasma phagocytophilum was detected in two animals. This is the first report of the occurrence of B. capreoli and B. odocoilei-like species in moose. PMID:27188749

  12. Population genetic structure of moose (Alces Alces) of South-central Alaska.

    USGS Publications Warehouse

    Wilson, Robert E.; McDonough, John T.; Barboza, Perry S.; Talbot, Sandra L.; Farley, Sean D.

    2015-01-01

    The location of a population can influence its genetic structure and diversity by impacting the degree of isolation and connectivity to other populations. Populations at range margins are often thought to have less genetic variation and increased genetic structure, and a reduction in genetic diversity can have negative impacts on the health of a population. We explored the genetic diversity and connectivity between 3 peripheral populations of moose (Alces alces) with differing potential for connectivity to other areas within interior Alaska. Populations on the Kenai Peninsula and from the Anchorage region were found to be significantly differentiated (FST = 0.071, P < 0.0001), with lower levels of genetic diversity observed within the Kenai population. Bayesian analyses employing assignment methodologies uncovered little evidence of contemporary gene flow between Anchorage and Kenai, suggesting regional isolation. Although gene flow outside the peninsula is restricted, high levels of gene flow were detected within the Kenai, explained by male-biased dispersal. Furthermore, gene flow estimates differed across time scales on the Kenai Peninsula, which may have been influenced by demographic fluctuations correlated, at least in part, with habitat change.
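
    The FST differentiation statistic reported above can be illustrated with a minimal Wright-style calculation (a textbook sketch using invented allele frequencies, not the study's microsatellite data): FST = (HT - HS)/HT, where HS is the mean expected heterozygosity within subpopulations and HT the expected heterozygosity of the pooled population.

```python
# Textbook biallelic F_ST sketch: F_ST = (H_T - H_S) / H_T.
# The allele frequencies below are invented for illustration, not moose data.

def expected_het(p):
    """Expected heterozygosity at a biallelic locus with allele frequency p."""
    return 2 * p * (1 - p)

def fst(subpop_freqs):
    hs = sum(expected_het(p) for p in subpop_freqs) / len(subpop_freqs)
    p_bar = sum(subpop_freqs) / len(subpop_freqs)   # pooled frequency
    ht = expected_het(p_bar)
    return (ht - hs) / ht

# Diverged subpopulations give positive F_ST; identical ones give 0.
print(round(fst([0.2, 0.8]), 3))   # 0.36
print(fst([0.5, 0.5]))             # 0.0
```

    Multi-locus estimators used in practice (e.g. Weir-Cockerham) correct for sample size, but the intuition is the same: the larger the share of total heterozygosity lost to between-population divergence, the higher FST.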

  13. GEOGRAPHIC DISTRIBUTION AND MOLECULAR DIVERSITY OF BARTONELLA SPP. INFECTIONS IN MOOSE (ALCES ALCES) IN FINLAND.

    PubMed

    Pérez Vera, Cristina; Aaltonen, Kirsi; Spillmann, Thomas; Vapalahti, Olli; Sironen, Tarja

    2016-04-28

    Moose, Alces alces (Artiodactyla: Cervidae), in Finland are heavily infested with deer keds, Lipoptena cervi (Diptera: Hippoboscidae). The deer ked, which carries species of the genus Bartonella, has been proposed as a vector for the transmission of bartonellae to animals and humans. Previously, bartonella DNA was found in deer keds as well as in moose blood collected in Finland. We investigated the prevalence and molecular diversity of Bartonella spp. infection in blood samples collected from free-ranging moose. Given that the deer ked is not present in northernmost Finland, we also investigated whether there were geographic differences in the prevalence of bartonella infection in moose. The overall prevalence of bartonella infection was 72.9% (108/148). Geographically, the prevalence was highest in the south (90.6%) and lowest in the north (55.9%). At least two species of bartonellae were identified by multilocus sequence analysis. Based on logistic regression analysis, there was no significant association between bartonella infection and either age or sex; however, moose from outside the deer ked zone were significantly less likely to be infected (P<0.015) than were moose hunted within the deer ked zone. PMID:26967131

  14. A genetic discontinuity in moose (Alces alces) in Alaska corresponds with fenced transportation infrastructure

    USGS Publications Warehouse

    Wilson, Robert E.; Farley, Sean D.; McDonough, Thomas J.; Talbot, Sandra L.; Barboza, Perry S.

    2015-01-01

    The strength and arrangement of movement barriers can impact the connectivity among habitat patches. Anthropogenic barriers (e.g. roads) are a source of habitat fragmentation that can disrupt these resource networks and can influence the spatial genetic structure of populations. Using microsatellite data, we evaluated whether the observed genetic structure of moose (Alces alces) populations was associated with human activities (e.g. roads) in the urban habitat of Anchorage and rural habitat on the Kenai Peninsula, Alaska. We found evidence of a recent genetic subdivision among moose in Anchorage that corresponds to a major highway and associated infrastructure. This subdivision is most likely due to restrictions in gene flow caused by alterations to the highway (e.g. moose-resistant fencing with one-way gates) and a significant increase in traffic volume over the past 30 years; genetic subdivision was not detected on the Kenai Peninsula in an area not bisected by a major highway. This study illustrates that anthropogenic barriers can substructure wildlife populations within a few generations and highlights the value of genetic assessments to determine the effects on connectivity among habitat patches in conjunction with behavioral and ecological data.

  15. Bartonella Infections in Deer Keds (Lipoptena cervi) and Moose (Alces alces) in Norway

    PubMed Central

    Duodu, Samuel; Madslien, Knut; Hjelm, Eva; Molin, Ylva; Paziewska-Harris, Anna; Harris, Philip D.; Colquhoun, Duncan J.

    2013-01-01

    Infections with Bartonella spp. have been recognized as emerging zoonotic diseases in humans. Large knowledge gaps exist, however, relating to reservoirs, vectors, and transmission of these bacteria. We describe identification by culture, PCR, and housekeeping gene sequencing of Bartonella spp. in fed, wingless deer keds (Lipoptena cervi), deer ked pupae, and blood samples collected from moose, Alces alces, sampled within the deer ked distribution range in Norway. Direct sequencing from moose blood sampled in a deer ked-free area also indicated Bartonella infection but at a much lower prevalence. The sequencing data suggested the presence of mixed infections involving two species of Bartonella within the deer ked range, while moose outside the range appeared to be infected with a single species. Bartonella were not detected or cultured from unfed winged deer keds. The results may indicate that long-term bacteremia in the moose represents a reservoir of infection and that L. cervi acts as a vector for the spread of infection of Bartonella spp. Further research is needed to evaluate the role of L. cervi in the transmission of Bartonella to animals and humans and the possible pathogenicity of these bacteria for humans and animals. PMID:23104416

  16. A geographic cluster of malignant catarrhal fever in moose (Alces alces) in Norway.

    PubMed

    Vikøren, Turid; Klevar, Siv; Li, Hong; Hauge, Anna Germundsson

    2015-04-01

    Three cases of lethal sheep-associated malignant catarrhal fever (SA-MCF) in free-ranging moose (Alces alces) were diagnosed in Lesja, Norway, December 2008-February 2010. The diagnosis was based on PCR identification of ovine herpesvirus 2 (OvHV-2) DNA (n = 3) and typical histopathologic lesions (n = 1). To study the possibility of subclinical or latent MCF virus (MCFV) infection in this moose population and in red deer (Cervus elaphus), we examined clinically normal animals sampled during hunting in Lesja 2010 by serology and PCR. Sera from 63 moose and 33 red deer were tested for antibodies against MCFV by competitive-inhibition enzyme-linked immunosorbent assay. To test for MCFVs, a consensus PCR for herpesviral DNA was run on spleen samples from 23 moose and 17 red deer. All samples were antibody and PCR negative. Thus, there is no evidence of previous exposure, subclinical infection, or latent infection in this sample. This seasonal cluster of SA-MCF cases (2008-10) may be attributable to exposure of moose to lambs when OvHV-2 shedding is presumed to be maximal, compounded by an unusual extended grazing period by sheep in the autumn. PMID:25574807

  17. Detection of antibodies to Neospora caninum in moose (Alces alces): the first report in Europe.

    PubMed

    Moskwa, Bozena; Goździk, Katarzyna; Bień, Justyna; Kornacka, Aleksandra; Cybulska, Aleksandra; Reiterová, Katarína; Cabaj, Władysław

    2014-02-01

    Neospora caninum Dubey, Carpenter, Speer, Topper et Uggla, 1988 is a protozoan parasite originally reported as a major cause of bovine abortions worldwide. It is documented that the parasite is widely spread among non-carnivorous cervids. The purpose of this study was to investigate the seroprevalence of N. caninum in moose (Alces alces Linnaeus). Blood samples collected in 2010 and 2012 in the northeastern Poland were tested for antibodies to N. caninum by agglutination test (NAT), a commercial competitive screening enzyme-linked immunosorbent assay (cELISA) and enzyme-linked immunoassay (ELISA). Sera that gave a positive result were further investigated by western blot (WB) analysis to verify the presence of antibodies. Antibodies to N. caninum were detected in one of seven moose. The antibody titer was confirmed by NAT (1 : 1 280), cELISA (I = 91%) and ELISA (OD = 0.736). The main immunodominant antigens detected by WB were 120, 70, 55, 35 and 16 kDa proteins. This is the first evidence of N. caninum seropositivity in moose living in a natural environment in Europe. PMID:24684051

  18. Insight into the bacterial gut microbiome of the North American moose (Alces alces)

    PubMed Central

    2012-01-01

    Background The work presented here provides the first intensive insight into the bacterial populations in the digestive tract of the North American moose (Alces alces). Eight free-range moose on natural pasture were sampled, producing eight rumen samples and six colon samples. Second generation (G2) PhyloChips were used to determine the presence of hundreds of operational taxonomic units (OTUs), representing multiple closely related species/strains (>97% identity), found in the rumen and colon of the moose. Results A total of 789 unique OTUs were used for analysis, which passed the fluorescence and the positive fraction thresholds. There were 73 OTUs, representing 21 bacterial families, which were found exclusively in the rumen samples: Lachnospiraceae, Prevotellaceae and several unclassified families, whereas there were 71 OTUs, representing 22 bacterial families, which were found exclusively in the colon samples: Clostridiaceae, Enterobacteriaceae and several unclassified families. Overall, there were 164 OTUs that were found in 100% of the samples. The Firmicutes were the most dominant bacteria phylum in both the rumen and the colon. Microarray data available at ArrayExpress, accession number E-MEXP-3721. Conclusions Using PhyloTrac and UniFrac computer software, samples clustered into two distinct groups: rumen and colon, confirming that the rumen and colon are distinct environments. There was an apparent correlation of age to cluster, which will be validated by a larger sample size in future studies, but there were no detectable trends based upon gender. PMID:22992344

  19. Multicast traffic grooming in flexible optical WDM networks

    NASA Astrophysics Data System (ADS)

    Patel, Ankitkumar N.; Ji, Philip N.; Jue, Jason P.; Wang, Ting

    2012-12-01

    In Metropolitan Area Networks (MANs), point-to-multipoint applications, such as IPTV, video-on-demand, distance learning, and content distribution, can be efficiently supported through light-tree-based multicast communications instead of lightpath-based unicast communications. The application of multicasting for such traffic is justified by its inherent benefits of reduced control and management overhead and elimination of redundant resource provisioning. Supporting such multicast traffic in Flexible optical WDM (FWDM) networks, which can provision light-trees using an optimal amount of spectrum within flexible channel spacing, leads to higher wavelength and spectral efficiencies compared to conventional ITU-T fixed grid networks. However, in spite of such flexibility, the residual capacity of stranded channels may go unutilized if the network does not offer channels with arbitrary line rates. Additionally, the spectrum allocated to guard bands used to isolate finer-granularity channels remains unutilized. These limitations can be addressed by traffic grooming, in which low-rate multicast connections are aggregated and switched over high-capacity light-trees. In this paper, we address the multicast traffic grooming problem in FWDM networks and propose a novel auxiliary-graph-based algorithm for the first time. The performance of multicast traffic grooming is evaluated in terms of spectral, cost, and energy efficiencies compared to lightpath-based transparent FWDM networks, lightpath-based traffic-grooming-capable FWDM networks, multicast-enabled transparent FWDM networks, and multicast traffic-grooming-capable fixed grid networks. Simulation results demonstrate that multicast traffic grooming in FWDM networks improves not only spectral efficiency but also cost and energy efficiency compared to other multicast traffic provisioning approaches in FWDM and fixed grid networks.

  20. An Economic Case for End System Multicast

    NASA Astrophysics Data System (ADS)

    Analoui, Morteza; Rezvani, Mohammad Hossein

    This paper presents a non-strategic model for end-system multicast networks based on the concept of a replica exchange economy. We believe that microeconomics is a good candidate for investigating the selfishness of end-users (peers) with the goal of maximizing aggregate throughput. In this solution concept, the decisions that a peer makes do not affect the actions of the other peers at all. The proposed mechanism tunes the price of the service in such a way that general equilibrium holds.
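
    The general-equilibrium intuition can be sketched with a toy tâtonnement price adjustment (a standard textbook mechanism, not the authors' replica-exchange model; the linear demand and supply curves are invented): raise the service price while aggregate demand exceeds supply, lower it when supply exceeds demand, until the market clears.

```python
# Toy tatonnement sketch: adjust price toward the level where aggregate demand
# equals supply. The linear demand/supply curves are invented for illustration;
# they are not the paper's replica-exchange model.

def demand(price):
    return max(0.0, 10.0 - price)   # peers request less service as price rises

def supply(price):
    return price                     # peers offer more upload at higher price

def tatonnement(price=1.0, step=0.1, iters=2000):
    for _ in range(iters):
        excess = demand(price) - supply(price)
        price += step * excess       # raise price under excess demand
    return price

p_star = tatonnement()
print(round(p_star, 6))   # ~5.0: demand(5) == supply(5), the market clears
```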

  1. Optimal multicast communication in wormhole-routed torus networks

    SciTech Connect

    Robinson, D.F.; McKinley, P.K.; Cheng, B.H.C.

    1994-12-31

    This paper presents efficient algorithms that implement one-to-many, or multicast, communication in wormhole-routed torus networks. By exploiting the properties of the switching technology and the use of virtual channels, a minimum-time multicast algorithm is presented for n-dimensional torus networks that use deterministic, dimension-ordered routing of unicast messages. The algorithm can deliver a multicast message to m-1 destinations in ⌈log2 m⌉ message-passing steps, while avoiding contention among the constituent unicast messages. Performance results of a simulation study on torus networks are also given.
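
    The ⌈log2 m⌉ bound follows from recursive doubling: in each step, every node that already holds the message forwards it to one node that does not, so the set of holders doubles. A minimal schedule sketch (ignoring the torus topology and the contention analysis that the paper's algorithm handles):

```python
def binomial_multicast_schedule(nodes):
    """Schedule unicast sends so a message starting at nodes[0]
    reaches all other nodes in ceil(log2(len(nodes))) steps.
    Returns a list of steps, each a list of (sender, receiver) pairs."""
    steps = []
    have = 1  # nodes[0:have] already hold the message
    while have < len(nodes):
        # every current holder forwards to one node that lacks it
        sends = [(nodes[i], nodes[have + i])
                 for i in range(min(have, len(nodes) - have))]
        steps.append(sends)
        have += len(sends)
    return steps

sched = binomial_multicast_schedule(list(range(8)))
print(len(sched))  # 3 steps == ceil(log2 8)
```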

  2. Efficient multicast routing in wavelength-division-multiplexing networks with light splitting and wavelength conversion

    NASA Astrophysics Data System (ADS)

    Zheng, Sheng; Tian, Jinwen; Liu, Jian

    2005-04-01

    We propose wavelength-division-multiplexing (WDM) networks with light splitting and wavelength conversion that can efficiently support multicast routing between nodes. Our iterative algorithm analyzes the original multicast routing network by decomposing it into multicast subgroups. Each subgroup uses the same wavelength, and the individual subgroups are combined to build a multicast tree, from which short multicast paths can be computed efficiently. Numerical results obtained for the ARPANET show that our algorithm can greatly reduce the optical blocking probability and the number of required wavelength conversions.

  3. Multilayer multicast key management with threshold cryptography

    NASA Astrophysics Data System (ADS)

    Dexter, Scott D.; Belostotskiy, Roman; Eskicioglu, Ahmet M.

    2004-06-01

    The problem of distributing multimedia securely over the Internet is often viewed as an instance of secure multicast communication, in which multicast messages are protected by a group key shared among the group of clients. One important class of key management schemes makes use of a hierarchical key distribution tree. Constructing a hierarchical tree based on secret shares rather than keys yields a scheme that is both more flexible and provably secure. Both the key-based and share-based hierarchical key distribution tree techniques are designed for managing keys for a single data stream. Recent work shows how redundancies that arise when this scheme is extended to multi-stream (e.g. scalable video) applications may be exploited in the key-based system by viewing the set of clients as a "multi-group". In this paper, we present results from an adaptation of a multi-group key management scheme using threshold cryptography. We describe how the multi-group scheme is adapted to work with secret shares, and compare this scheme with a naïve multi-stream key-management solution by measuring performance across several critical parameters, including tree degree, multi-group size, and number of shares stored at each node.
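
    Share-based key trees rest on threshold secret sharing: any k of n shares reconstruct the key, while fewer reveal nothing. A minimal Shamir (k, n) sketch over a prime field (illustrative only; the multi-group scheme in the paper layers a full key hierarchy on top):

```python
import random

P = 2**61 - 1  # Mersenne prime defining the arithmetic field

def make_shares(secret, k, n):
    """Split `secret` into n Shamir shares; any k reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the secret."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
print(reconstruct(shares[:3]))  # 123456789
```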

  4. Multimedia Multicast Based on Multiterminal Source Coding

    NASA Astrophysics Data System (ADS)

    Aghagolzadeh, Ali; Nooshyar, Mahdi; Rabiee, Hamid R.; Mikaili, Elhameh

    Multimedia multicast with two servers based on multiterminal source coding has been studied in some previous research. Due to the possibility of providing an approach to practical code design for more than two correlated sources in the IMTSC/CEO setup, in this paper the framework of Slepian-Wolf coded quantization is extended and a practical code design is presented for IMTSC/CEO with more than two encoders. The multicast system based on IMTSC/CEO is then applied to cases with three, four, and five servers. Since the underlying code design approach for the IMTSC/CEO problem can be applied to an arbitrary number of active encoders, the proposed MMBMSC method can easily be used with an arbitrary number of servers. Explicit expressions for the expected distortion with an arbitrary number of servers in the MMBMSC system are also presented. Experimental results with data, image, and video signals show the superiority of our proposed method over conventional solutions and over the MMBMSC system with two servers.

  5. The Nutritional Balancing Act of a Large Herbivore: An Experiment with Captive Moose (Alces alces L).

    PubMed

    Felton, Annika M; Felton, Adam; Raubenheimer, David; Simpson, Stephen J; Krizsan, Sophie J; Hedwall, Per-Ola; Stolter, Caroline

    2016-01-01

    The nutrient balancing hypothesis proposes that, when sufficient food is available, the primary goal of animal diet selection is to obtain a nutritionally balanced diet. This hypothesis can be tested using the Geometric Framework for nutrition (GF). The GF enables researchers to study patterns of nutrient intake (e.g. macronutrients; protein, carbohydrates, fat), interactions between the different nutrients, and how an animal resolves the potential conflict between over-eating one or more nutrients and under-eating others during periods of dietary imbalance. Using the moose (Alces alces L.), a model species in the development of herbivore foraging theory, we conducted a feeding experiment guided by the GF, combining continuous observations of six captive moose with analysis of the macronutritional composition of foods. We identified the moose's self-selected macronutrient target by allowing them to compose a diet by mixing two nutritionally complementary pellet types plus limited access to Salix browse. Such periods of free choice were intermixed with periods when they were restricted to one of the two pellet types plus Salix browse. Our observations of food intake by moose given free choice lend support to the nutrient balancing hypothesis, as the moose combined the foods in specific proportions that provided a particular ratio and amount of macronutrients. When restricted to either of two diets comprising a single pellet type, the moose i) maintained a relatively stable intake of non-protein energy while allowing protein intakes to vary with food composition, and ii) increased their intake of the food item that most closely resembled the self-selected macronutrient intake from the free choice periods, namely Salix browse. We place our results in the context of the nutritional strategy of the moose, ruminant physiology and the categorization of food quality. PMID:26986618

  6. The Nutritional Balancing Act of a Large Herbivore: An Experiment with Captive Moose (Alces alces L)

    PubMed Central

    Felton, Annika M.; Felton, Adam; Raubenheimer, David; Simpson, Stephen J.; Krizsan, Sophie J.; Hedwall, Per-Ola; Stolter, Caroline

    2016-01-01

    The nutrient balancing hypothesis proposes that, when sufficient food is available, the primary goal of animal diet selection is to obtain a nutritionally balanced diet. This hypothesis can be tested using the Geometric Framework for nutrition (GF). The GF enables researchers to study patterns of nutrient intake (e.g. macronutrients; protein, carbohydrates, fat), interactions between the different nutrients, and how an animal resolves the potential conflict between over-eating one or more nutrients and under-eating others during periods of dietary imbalance. Using the moose (Alces alces L.), a model species in the development of herbivore foraging theory, we conducted a feeding experiment guided by the GF, combining continuous observations of six captive moose with analysis of the macronutritional composition of foods. We identified the moose’s self-selected macronutrient target by allowing them to compose a diet by mixing two nutritionally complementary pellet types plus limited access to Salix browse. Such periods of free choice were intermixed with periods when they were restricted to one of the two pellet types plus Salix browse. Our observations of food intake by moose given free choice lend support to the nutrient balancing hypothesis, as the moose combined the foods in specific proportions that provided a particular ratio and amount of macronutrients. When restricted to either of two diets comprising a single pellet type, the moose i) maintained a relatively stable intake of non-protein energy while allowing protein intakes to vary with food composition, and ii) increased their intake of the food item that most closely resembled the self-selected macronutrient intake from the free choice periods, namely Salix browse. We place our results in the context of the nutritional strategy of the moose, ruminant physiology and the categorization of food quality. PMID:26986618

  7. Mitochondrial phylogeography of moose (Alces alces): Late Pleistocene divergence and population expansion

    USGS Publications Warehouse

    Hundertmark, Kris J.; Shields, Gerald F.; Udina, Irina G.; Bowyer, R. Terry; Danilkin, Alexei A.; Schwartz, Charles C.

    2002-01-01

    We examined phylogeographic relationships of moose (Alces alces) worldwide to test the proposed existence of two geographic races and to infer the timing and extent of demographic processes underpinning the expansion of this species across the Northern Hemisphere in the late Pleistocene. Sequence variation within the left hypervariable domain of the control region occurred at low or moderate levels worldwide and was structured geographically. Partitioning of genetic variance among regions indicated that isolation by distance was the primary agent for differentiation of moose populations but does not support the existence of distinct eastern and western races. Levels of genetic variation and structure of phylogenetic trees identify Asia as the origin of all extant mitochondrial lineages. A recent coalescence is indicated, with the most recent common ancestor dating to the last ice age. Moose have undergone two episodes of population expansion, likely corresponding to the final interstade of the most recent ice age and the onset of the current interglacial. Timing of expansion for the population in the Yakutia–Manchuria region of eastern Asia indicates that it is one of the oldest populations of moose and may represent the source of founders of extant populations in North America, which were colonized within the last 15,000 years. Our data suggest an extended period of low population size or a severe bottleneck prior to the divergence and expansion of extant lineages and a recent, less-severe bottleneck among European lineages. Climate change during the last ice age, acting through contraction and expansion of moose habitat and the flooding of the Bering land bridge, undoubtedly was a key factor influencing the divergence and expansion of moose populations.

  8. Factors affecting deer ked (Lipoptena cervi) prevalence and infestation intensity in moose (Alces alces) in Norway

    PubMed Central

    2012-01-01

    Background The deer ked (Lipoptena cervi), a hematophagous ectoparasite of Cervids, is currently spreading in Scandinavia. In Norway, keds are now invading the south-eastern part of the country and the abundant and widely distributed moose (Alces alces) is the definitive host. However, key factors for ked abundance are poorly elucidated. The objectives of our study were to (i) determine deer ked infestation prevalence and intensity on moose and (ii) evaluate if habitat characteristics and moose population density are determinants of deer ked abundance on moose. Methods In order to identify key factors for deer ked abundance, a total of 350 skin samples from the neck of hunted moose were examined and deer keds counted. Infestation intensity was analyzed in relation to moose age and sex, moose population density and landscape characteristics surrounding the killing site. Results Deer ked infestation prevalence was 100%, but infestation intensity varied from 0.001 to 1.405 keds/cm2. Ked intensity was highest in male yearlings (~1.5 years) and positively associated with longitude and Scots pine (Pinus sylvestris) dominated habitat and negatively associated with bogs and latitude. Moose population density during autumn showed a tendency to be positively associated, while altitude tended to be negatively associated with ked intensity. Conclusions Deer keds exploit the whole moose population within our study area, but are most prevalent in areas dominated by Scots pine. This is probably a reflection of Scots pine being the preferred winter browse for moose in areas with highest moose densities in winter. Ked intensity decreases towards the northwest and partly with increasing altitude, probably explained by the direction of dispersal and reduced temperature, respectively. Abundant deer ked harm humans and domestic animals. Moose management authorities should therefore be aware of the close relationship between moose, deer ked and habitat, using the knowledge as a

  9. NMR study of the ternary carbides M2 AlC (M=Ti,V,Cr)

    NASA Astrophysics Data System (ADS)

    Lue, C. S.; Lin, J. Y.; Xie, B. X.

    2006-01-01

    We have performed a systematic study of the layered ternary carbides Ti2AlC, V2AlC, and Cr2AlC using 27Al NMR spectroscopy. The quadrupole splittings, Knight shifts, as well as spin-lattice relaxation times on each material have been identified. The sign of the isotropic Knight shift varies from positive for Ti2AlC and V2AlC to negative for Cr2AlC, attributed to the enhancement of hybridization with increasing valence electron count in the transition metal. Universally long relaxation times are found for these alloys. Results provide a measure of the Al-s Fermi-level density of states Ns(EF) for Ti2AlC and V2AlC. In addition, the evidence that Ns(EF) correlates with the transition-metal d-electron count has been explored in the present NMR investigation.

  10. Horizontal protection for multicast optical virtual private networks

    NASA Astrophysics Data System (ADS)

    Peng, Yunfeng; Long, Keping

    2008-11-01

    New concepts of horizontal protection and half-mesh topology for survivability design in multicast optical virtual private networks are presented. All issues mentioned in this paper are expected to form targets for further investigations.

  11. Mobile Multicast in Hierarchical Proxy Mobile IPV6

    NASA Astrophysics Data System (ADS)

    Hafizah Mohd Aman, Azana; Hashim, Aisha Hassan A.; Mustafa, Amin; Abdullah, Khaizuran

    2013-12-01

    Mobile Internet Protocol Version 6 (MIPv6) environments have been developing very rapidly. Many challenges arise with the fast progress of MIPv6 technologies and its environment. Therefore the importance of improving the existing architecture and operations increases. One of the many challenges which need to be addressed is the need for performance improvement to support mobile multicast. Numerous approaches have been proposed to improve mobile multicast performance. This includes Context Transfer Protocol (CXTP), Hierarchical Mobile IPv6 (HMIPv6), Fast Mobile IPv6 (FMIPv6) and Proxy Mobile IPv6 (PMIPv6). This document describes multicast context transfer in hierarchical proxy mobile IPv6 (H-PMIPv6) to provide better multicasting performance in PMIPv6 domain.

  12. Authenticated IGMP for Controlling Access to Multicast Distribution Tree

    NASA Astrophysics Data System (ADS)

    Park, Chang-Seop; Kang, Hyun-Sun

    A receiver access control scheme is proposed to protect the multicast distribution tree from DoS attack induced by unauthorized use of IGMP, by extending the security-related functionality of IGMP. Based on a specific network and business model adopted for commercial deployment of IP multicast applications, a key management scheme is also presented for bootstrapping the proposed access control as well as accounting and billing for CP (Content Provider), NSP (Network Service Provider), and group members.

  13. Multicast Services over Structured P2P Networks

    NASA Astrophysics Data System (ADS)

    Manzanares-Lopez, Pilar; Malgosa-Sanahuja, Josemaria; Muñoz-Gea, Juan Pedro; Sanchez-Aarnoutse, Juan Carlos

    IP multicast functionality was defined as an efficient method to transmit datagrams to a group of receivers. However, although a lot of research work has been done on this technology, IP multicast has not spread over the Internet as much as expected, restricting its use to local environments (i.e., LANs). The peer-to-peer network paradigm can be used to overcome the IP multicast limitations. In this new scenario (called Application Layer Multicast, or ALM), the multicast functionality is moved from the network to the application layer. Although ALM solutions can be classified into unstructured and structured solutions, the latter are the best option for offering multicast services due to their effectiveness in node discovery, their mathematical definition, and their totally decentralized management. In this chapter we offer a tutorial on the main structured ALM solutions, introducing two novelties with respect to past surveys: first, a systematic description of the most representative structured ALM solutions in OverSim (one of the most popular p2p simulation frameworks); second, some simulation comparisons between flooding-based and tree-based structured ALM solutions.

  14. Multicast traffic grooming in WDM networks

    NASA Astrophysics Data System (ADS)

    Kamal, Ahmed E.; Ul-Mustafa, Raza

    2003-10-01

    This paper considers the problem of grooming multicast traffic in WDM networks, with arbitrary mesh topologies. The problem is different from grooming of unicast traffic, since traffic can be delivered to destinations through other destinations in the same set, or through branching points. The paper presents an optimal Integer Linear Programming (ILP) formulation in order to minimize the cost of the network in terms of the number of SONET Add/Drop Multiplexers (ADM). The formulation also minimizes the number of wavelength channels used in the network, and does not allow bifurcation of traffic. Since the ILP formulation is able to solve limited size problems, the paper also introduces a heuristic approach to solve the problem.

  15. Digital Multicasting of Multiple Audio Streams

    NASA Technical Reports Server (NTRS)

    Macha, Mitchell; Bullock, John

    2007-01-01

    The Mission Control Center Voice Over Internet Protocol (MCC VOIP) system (see figure) comprises hardware and software that effect simultaneous, nearly real-time transmission of as many as 14 different audio streams to authorized listeners via the MCC intranet and/or the Internet. The original version of the MCC VOIP system was conceived to enable flight-support personnel located in offices outside a spacecraft mission control center to monitor audio loops within the mission control center. Different versions of the MCC VOIP system could be used for a variety of public and commercial purposes - for example, to enable members of the general public to monitor one or more NASA audio streams through their home computers, to enable air-traffic supervisors to monitor communication between airline pilots and air-traffic controllers in training, and to monitor conferences among brokers in a stock exchange. At the transmitting end, the audio-distribution process begins with feeding the audio signals to analog-to-digital converters. The resulting digital streams are sent through the MCC intranet, using a user datagram protocol (UDP), to a server that converts them to encrypted data packets. The encrypted data packets are then routed to the personal computers of authorized users by use of multicasting techniques. The total data-processing load on the portion of the system upstream of and including the encryption server is the total load imposed by all of the audio streams being encoded, regardless of the number of listeners or the number of streams being monitored concurrently by the listeners. The personal computer of a user authorized to listen is equipped with special-purpose MCC audio-player software. When the user launches the program, the user is prompted to provide identification and a password. In one of two access-control provisions, the program is hard-coded to validate the user's identity and password against a list maintained on a domain-controller computer.
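
    The delivery step — one UDP datagram reaching every joined listener — uses standard IP multicast sockets. A minimal sketch (the group address and port are hypothetical; the system's encryption and access-control layers are omitted):

```python
import socket
import struct

GROUP, PORT = "239.1.1.1", 5004  # hypothetical multicast group and port

def make_receiver():
    """Socket bound to PORT that has joined the multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # join the group on all interfaces
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def send_chunk(data):
    """Send one audio chunk to every member of the group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(data, (GROUP, PORT))
    sock.close()
```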

  16. A Stateful Multicast Access Control Mechanism for Future Metro-Area-Networks.

    ERIC Educational Resources Information Center

    Sun, Wei-qiang; Li, Jin-sheng; Hong, Pei-lin

    2003-01-01

    Multicasting is a necessity for a broadband metro-area-network; however security problems exist with current multicast protocols. A stateful multicast access control mechanism, based on MAPE, is proposed. The architecture of MAPE is discussed, as well as the states maintained and messages exchanged. The scheme is flexible and scalable. (Author/AEF)

  17. IP over optical multicasting for large-scale video delivery

    NASA Astrophysics Data System (ADS)

    Jin, Yaohui; Hu, Weisheng; Sun, Weiqiang; Guo, Wei

    2007-11-01

    In IPTV systems, multicasting will play a crucial role in the delivery of high-quality video services, since it can significantly improve bandwidth efficiency. However, the scalability and the signal quality of current IPTV can barely compete with existing broadcast digital TV systems, since it is difficult to implement large-scale multicasting with end-to-end guaranteed quality of service (QoS) in a packet-switched IP network. The China 3TNet project aimed to build a high-performance broadband trial network to support large-scale concurrent streaming media and interactive multimedia services. The innovative idea of 3TNet is that an automatically switched optical network (ASON) with the capability of dynamic point-to-multipoint (P2MP) connections replaces the conventional IP multicasting network in the transport core, while the edge remains an IP multicasting network. In this paper, we introduce the network architecture and discuss challenges in such IP-over-optical multicasting for video delivery.

  18. Broadcast/multicast MPEG-2 video over wireless channels using header redundancy FEC strategies

    NASA Astrophysics Data System (ADS)

    Ma, Hairuo; El Zarki, Magda

    1999-01-01

    In this paper, we address the issue of error control in transmitting MPEG-2 encoded video streams over broadband fixed wireless access networks for broadcast or multicast services. Because of the error-prone nature of wireless channels, error control is mandatory when MPEG-2 video streams are transported over wireless access networks to end users. To prevent overloading the reliable wireline networks, error control has to be applied locally. FEC is a must for broadcast or multicast services. Because of the important role of MPEG-2 control information in the decoding process, it must be given priority service in the form of extra error protection in order to achieve the desired QoS. In this paper, a header redundancy FEC (HRFEC) strategy is introduced and an implementation of it (the type-I HRFEC scheme) is described. The overhead and delay jitter associated with type-I HRFEC are also estimated. Simulation results on the performance of type-I HRFEC indicate that it improves the reception statistics of MPEG-2 control information. As a direct result, the quality, measured in terms of objective grade point and PSNR of the reconstructed video sequence, is improved.
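
    The simplest form of FEC redundancy for protecting critical packets is an XOR parity packet over a group of equal-length packets: a receiver can recover any single lost packet from the survivors plus the parity (a generic erasure-coding sketch, not the paper's type-I HRFEC):

```python
def xor_parity(packets):
    """Compute one XOR parity packet over equal-length packets."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """Recover a single missing packet: XOR of survivors and parity."""
    return xor_parity(received + [parity])

packets = [b'abcd', b'efgh', b'ijkl']
parity = xor_parity(packets)
# Suppose packets[1] is lost in transit; it is recoverable:
print(recover([packets[0], packets[2]], parity))  # b'efgh'
```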

  19. Age structure of moose (Alces alces) killed by gray wolves (Canis lupus) in northeastern Minnesota, 1967-2011

    USGS Publications Warehouse

    Mech, L. David; Nelson, Michael E.

    2013-01-01

    The age structure of Moose (Alces alces) killed by Gray Wolves (Canis lupus) is available from only two national parks in the United States where hunting by people is not allowed and from three areas in Alaska where Moose are hunted (Mech 1966; Peterson et al. 1984; Ballard et al. 1987; Mech et al. 1998). The samples of Moose killed by Gray Wolves from each hunted area are relatively small (47–117), given that Moose live to 20 or more years (Passmore et al. 1955). This article adds age data from another 77 Moose killed by Gray Wolves from a fourth (lightly) human-hunted area and assesses the age structure of all the samples.

  20. Effect of Dermacentor albipictus (Acari:Ixodidae) on blood composition, weight gain and hair coat of moose, Alces alces.

    PubMed

    Glines, M V; Samuel, W M

    1989-04-01

    The physiological effects of the winter tick, Dermacentor albipictus, on moose, Alces alces, were investigated. Blood composition, weight gain, food intake and change in the hair coat of moose calves, four infested with D. albipictus larvae, and eight uninfested, were monitored. Infested moose groomed extensively, apparently in response to feeding nymphal and adult ticks, and developed alopecia. Other clinical signs included: chronic weight loss, anemia, hypoalbuminemia, hypophosphatemia, and transient decreases in serum aspartate transaminase and calcium during the period of nymphal and adult female tick engorgement. Infested animals did not become anorexic. Two moose with severe hair loss had increases in gamma globulin shortly after the onset of female tick engorgement. Results suggest that alopecia is associated with tick resistance. Animals that groom and develop hair loss likely carry fewer ticks and therefore suffer less severely from blood loss. PMID:2714121

  1. Optimal software multicast in wormhole-routed multistage networks

    SciTech Connect

    Xu, H.; Gui, Y.D.; Ni, L.M.

    1994-12-31

    Multistage interconnection networks are a popular class of interconnection architecture for constructing scalable parallel computers (SPCs). The focus of this paper is on wormhole-routed multistage networks supporting turnaround routing. Existing machines characterized by such a system model include the IBM SP-1, TMC CM-5, and Meiko CS-2. Efficient collective communication among processor nodes is critical to the performance of SPCs. A system-level multicast service, in which the same message is delivered from a source node to an arbitrary number of destination nodes, is fundamental in supporting collective communication primitives including application-level broadcast, reduction, and barrier synchronization. This paper addresses how to efficiently implement multicast services in wormhole-routed multistage networks, in the absence of hardware multicast support, by exploiting the properties of the switching technology. An optimal multicast algorithm is proposed. The results of implementations on a 64-node SP-1 show that the proposed algorithm significantly outperforms the application-level broadcast primitives provided by currently existing collective communication libraries, including the public-domain MPI.

  2. WDM Multicast Tree Construction Algorithms and Their Comparative Evaluations

    NASA Astrophysics Data System (ADS)

    Makabe, Tsutomu; Mikoshi, Taiju; Takenaka, Toyofumi

    We propose novel tree construction algorithms for multicast communication in photonic networks. Since multicast communications consume many more link resources than unicast communications, effective algorithms for route selection and wavelength assignment are required. We propose a novel tree construction algorithm, called the Weighted Steiner Tree (WST) algorithm, and a variation of it, called the Composite Weighted Steiner Tree (CWST) algorithm. Because these algorithms are based on the Steiner Tree algorithm, link resources among source and destination pairs tend to be commonly used and link utilization ratios are improved. Because of this, these algorithms can accept many more multicast requests than other multicast tree construction algorithms based on the Dijkstra algorithm. However, under certain delay constraints, the blocking characteristics of the proposed Weighted Steiner Tree algorithm deteriorate, since some light paths between source and destinations use many hops and cannot satisfy the delay constraint. In order to adapt the approach to delay-sensitive environments, we have devised the Composite Weighted Steiner Tree algorithm, comprising the Weighted Steiner Tree algorithm and the Dijkstra algorithm, for use in a delay-constrained environment such as an IPTV application. In this paper, we also give the results of simulation experiments that demonstrate the superiority of the proposed Composite Weighted Steiner Tree algorithm over the Distributed Minimum Hop Tree (DMHT) algorithm from the viewpoint of light-tree request blocking.
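
    Steiner-based tree construction can be illustrated with the classic Takahashi-Matsuyama heuristic, which repeatedly attaches the destination nearest to the current tree via a shortest path (a sketch only: wavelength assignment and the delay constraint that motivates the CWST variant are omitted):

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path costs and predecessors from src over a weighted
    adjacency dict {node: {neighbor: weight}}."""
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def steiner_tree(adj, source, dests):
    """Takahashi-Matsuyama heuristic: grow the tree by repeatedly
    joining the destination closest to any node already in the tree."""
    tree_nodes, tree_edges = {source}, set()
    remaining = set(dests)
    while remaining:
        best = None
        for t in tree_nodes:
            dist, prev = dijkstra(adj, t)
            for d in remaining:
                if d in dist and (best is None or dist[d] < best[0]):
                    best = (dist[d], d, prev)
        _, d, prev = best
        node = d                      # splice the shortest path in
        while node not in tree_nodes:
            tree_edges.add((prev[node], node))
            tree_nodes.add(node)
            node = prev[node]
        remaining.discard(d)
    return tree_edges
```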

  3. Tests of anaerobic alactacid and lactacid capacities: description and reliability.

    PubMed

    Simoneau, J A; Lortie, G; Boulay, M R; Bouchard, C

    1983-12-01

    The purpose of this paper is 1) to describe maximal anaerobic alactacid (AAC) and lactacid (ALC) capacity tests and 2) to determine their reliability in men and women. The subjects were submitted to either a 10-s (2 trials) or a 90-s (1 trial) all-out ergocycle test for AAC and ALC respectively. Thirty-four male and 24 female subjects were tested for AAC, while 21 males and 19 females took part in the ALC test. A modified bicycle ergometer allowed the exact measurement of the distance and the work load for the computation of the work performed. Each subject was tested and retested within 7 days. In both AAC and ALC, male subjects performed more work than women. AAC was 108 +/- 16 (mean +/- SD) and 90 +/- 14 J/kg for males and females respectively, while ALC was 486 +/- 50 and 377 +/- 34 J/kg. Although the work load was designed to be 0.09 kp/kg for the AAC and 0.05 kp/kg for the ALC tests, there were wide variations between subjects with respect to the optimal load (AAC: from 0.05 to 0.11; ALC: from 0.03 to 0.06 kp/kg). Reproducibility was consistently high, with intraclass correlations of 0.98 and 0.99 for AAC (AAC-Max) and ALC respectively, with no difference between the male and female subgroups. It is concluded that these AAC and ALC tests, designed under assumptions of face validity, allow for differences between males and females and are highly repeatable. PMID:6652864
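
    The work computation on a friction-braked cycle ergometer is simply load times pedalled distance, with 1 kp·m ≈ 9.81 J. A sketch (the 122 m distance is a hypothetical value chosen to reproduce the reported male AAC mean of about 108 J/kg):

```python
def work_per_kg(load_kp_per_kg, distance_m):
    """Mechanical work in J/kg: load (kp per kg body mass)
    times pedalled distance (m), converted at 1 kp*m = 9.81 J."""
    return load_kp_per_kg * distance_m * 9.81

# 0.09 kp/kg over a hypothetical 122 m in 10 s gives roughly the
# reported male AAC mean.
print(round(work_per_kg(0.09, 122)))  # 108
```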

  4. Morphological and molecular characteristics of four Sarcocystis spp. in Canadian moose (Alces alces), including Sarcocystis taeniata n. sp.

    PubMed

    Gjerde, Bjørn

    2014-04-01

    Individual sarcocysts were isolated from fresh or alcohol-fixed muscle samples of two moose from Alberta, Canada, and examined by light (LM) and scanning electron microscopy (SEM) and molecular methods, comprising polymerase chain reaction (PCR) amplification and sequencing of the complete 18S rRNA gene and the partial cytochrome c oxidase subunit I gene (cox1). By LM, four sarcocyst types were recognized, and the sequencing results showed that each type represented a distinct species, i.e. Sarcocystis alces, Sarcocystis alceslatrans, Sarcocystis ovalis and Sarcocystis taeniata n. sp. The finding of S. alceslatrans and S. ovalis has been reported briefly previously, but further details are provided here, including the ultrastructure of sarcocysts of S. alceslatrans as seen by SEM. The species S. alces was found for the first time in Canadian moose, whereas the finding of S. taeniata is the first record of this species in any host. The sarcocysts of S. taeniata were sac-like and about 1,000-1,100 × 60-80 μm in size. By LM, the cysts had a thin and smooth wall with no visible protrusions, whereas SEM revealed that the cyst surface had sparsely but regularly distributed, thin ribbon-like protrusions, about 2 μm long and 0.2 μm wide, lying flat against the surface and leaving most of the cyst surface naked. Similar protrusions have previously been reported from Sarcocystis grueneri in reindeer, which was found by sequence comparisons and phylogenetic analyses to be the species most closely related to S. taeniata. The phylogenetic analyses further suggested that S. taeniata, like S. alces and S. alceslatrans, uses canids as definitive hosts, whereas corvid birds are known definitive hosts for S. ovalis. In contrast to the three other species found, S. taeniata displayed considerable intra-specific and intra-isolate sequence variation (substitutions, insertions/deletions) in certain regions of the 18S rRNA gene. PMID:24535735

  5. MECHANISMS UNDERLYING AlCl3 INHIBITION OF AGONIST-STIMULATED INOSITOL PHOSPHATE ACCUMULATION

    EPA Science Inventory

    Possible mechanisms of AlCl3-induced inhibition of agonist-stimulated inositol phosphate (IP) accumulation were investigated using rat brain cortex slices, synaptosomes or homogenates. Under conditions in which AlCl3 inhibits carbachol (CARB) stimulated IP accumulation (Gp-mediate...

  6. Control of interfaces in Al-C fibre composites

    NASA Technical Reports Server (NTRS)

    Warrier, S. G.; Blue, C. A.; Lin, R. Y.

    1993-01-01

    The interface of Al-C fiber composites was modified by coating a silver layer on the surface of carbon fibers prior to making composites, in an attempt to improve the wettability between molten aluminum and carbon fibers during infiltration. An electroless plating technique was adopted and perfected to provide a homogeneous silver coating on the carbon fiber surface. Al-C fiber composites were prepared using a liquid infiltration technique in a vacuum. It was found that silver coating promoted the wetting between aluminum and carbon fibers, particularly with polyacrylonitrile-based carbon fibers. However, due to rapid dissolution of silver in molten aluminum, it was believed that the improved infiltration was not due to the wetting behavior between molten aluminum and silver. The cleaning of the fiber surface and the preservation of the cleaned carbon surface with the silver coating were considered to be the prime reasons for the improved wettability. Interfacial reactions between aluminum and carbon fibers were observed. Amorphous carbon was found to react more with aluminum than graphitic carbon. This is believed to be because of the inertness of the graphitic basal planes.

  7. Physiological evaluation of free-ranging moose (Alces alces) immobilized with etorphine-xylazine-acepromazine in Northern Sweden

    PubMed Central

    2012-01-01

    Background Evaluation of physiology during capture and anesthesia of free-ranging wildlife is useful for determining the effect that capture methods have on both ecological research results and animal welfare. This study evaluates capture and anesthesia of moose (Alces alces) with etorphine-xylazine-acepromazine in Northern Sweden. Methods Fifteen adult moose aged 3–15 years were darted from a helicopter with a combination of 3.37 mg etorphine, 75 mg xylazine, and 15 mg acepromazine. Paired arterial blood samples were collected 15 minutes apart with the first sample at 15–23 minutes after darting and were analyzed immediately with an i-STAT®1 Portable Clinical Analyzer. Results All animals developed hypoxemia (PaO2 <10 kPa) with nine animals having marked hypoxemia (PaO2 5.5-8 kPa). All moose were acidemic (pH<7.35) with nine moose having marked acidemia (pH<7.20). For PaCO2, 14 moose had mild hypercapnia (PaCO2 6-8 kPa) and two had marked hypercapnia (PaCO2>8 kPa). Pulse, respiratory rate, pH and HCO3 increased significantly over time from darting whereas lactate decreased. Conclusions The hypoxemia found in this study is a strong indication for investigating alternative drug doses or combinations or treatment with supplemental oxygen. PMID:23276208

  8. Reproductive characteristics in female Swedish moose (Alces alces), with emphasis on puberty, timing of oestrus, and mating

    PubMed Central

    2014-01-01

    Background The moose (Alces alces) is an intensively managed keystone species in Fennoscandia. Several aspects of reproduction in moose have not been fully elucidated, including puberty, timing of mating and oestrus, and the length of the oestrus period. These aspects are relevant for an adaptive management of moose with respect to harvest, population size, demography and environmental conditions. Therefore, an investigation of female moose reproduction was conducted during the moose-hunting period in southern Sweden from 2008 to 2011. Results A total of 250 reproductive organs and information on carcass weight and age was collected from four different hunting areas (provinces of Öland, Småland, Södermanland, and Västergötland) in southern Sweden. The results showed that puberty in female moose varied with carcass weight, age, and time of season. The period for oestrus/mating lasted from about mid September to the beginning of November. Conclusions The oestrus period (predominantly for heifers) is longer than previously reported and was not finished when the hunting period started. Sampling the uterine cervix to detect spermatozoa was a useful method to determine if mating had occurred. To avoid hunting of moose during oestrus, we suggest that the hunting period should be postponed by at least 14 days in southern Sweden. PMID:24735953

  9. Adaptive live multicast video streaming of SVC with UEP FEC

    NASA Astrophysics Data System (ADS)

    Lev, Avram; Lasry, Amir; Loants, Maoz; Hadar, Ofer

    2014-09-01

    Ideally, video streaming systems should provide the best quality video a user's device can handle without compromising on downloading speed. In this article, an improved video transmission system is presented which dynamically enhances the video quality based on a user's current network state and repairs errors from data lost in the video transmission. The system incorporates three main components: Scalable Video Coding (SVC) with three layers, multicast based on Receiver Layered Multicast (RLM), and an unequal error protection (UEP) forward error correction (FEC) algorithm. The SVC provides an efficient method for providing different levels of video quality, stored as enhancement layers. In the presented system, a proportional-integral-derivative (PID) controller was implemented to dynamically adjust the video quality, adding or subtracting quality layers as appropriate. In addition, an FEC algorithm was added to compensate for data lost in transmission. A two-dimensional FEC from the Pro-MPEG Code of Practice #3 release 2 was used. Several bit-error scenarios (step function, cosine wave) were simulated with different bandwidths and error values. The suggested scheme, which includes SVC video encoding with three layers over IP multicast with an unequal FEC algorithm, was investigated under different channel conditions, variable bandwidths and different bit error rates. The results indicate improvement of the video quality in terms of PSNR over previous transmission schemes.
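    The PID-driven layer adaptation described above can be sketched in a few lines. The following is an illustrative toy controller, not the authors' implementation; the class name, gains, and layer bitrates are assumptions made for the example.

```python
# Toy PID-style SVC layer selector: the controller tracks the headroom
# between measured bandwidth and the bitrate of the currently subscribed
# layers, and adds or drops enhancement layers accordingly.

class LayerController:
    def __init__(self, layer_rates, kp=0.5, ki=0.1, kd=0.05):
        self.layer_rates = layer_rates      # cumulative bitrate for 1..N layers
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0
        self.layers = 1                     # start with the base layer only

    def update(self, available_bw):
        # Error: headroom between measured bandwidth and current rate.
        error = available_bw - self.layer_rates[self.layers - 1]
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        signal = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Subscribe to one more layer when headroom is ample, drop one otherwise.
        if (signal > 0 and self.layers < len(self.layer_rates)
                and available_bw >= self.layer_rates[self.layers]):
            self.layers += 1
        elif signal < 0 and self.layers > 1:
            self.layers -= 1
        return self.layers
```

    In use, the controller would be fed periodic bandwidth estimates and its output mapped to RLM join/leave operations, one multicast group per layer.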

  10. Group-multicast capable optical virtual private ring with contention avoidance

    NASA Astrophysics Data System (ADS)

    Peng, Yunfeng; Du, Shu; Long, Keping

    2008-11-01

    A ring based optical virtual private network (OVPN) employing contention sensing and avoidance is proposed to deliver multiple-to-multiple group-multicast traffic. The network architecture is presented and its operation principles as well as performance are investigated. The main contribution of this article is the presentation of an innovative group-multicast capable OVPN architecture with technologies available today.

  11. 40-Gb/s FSK modulated WDM-PON with variable-rate multicast overlay.

    PubMed

    Xin, Xiangjun; Liu, Bo; Zhang, Lijia; Yu, Jianjun

    2011-06-20

    This paper proposes a novel conjugate-driven frequency shift keying (FSK) modulated wavelength division multiplexing passive optical network (WDM-PON) with variable-rate multicast services. Optical orthogonal frequency division multiplexing (OFDM) is adopted for multicast overlay services with different rate requirements. Differential detection is used for the demodulation of the FSK signal, which can eliminate the crosstalk from the OFDM signal. A total of 40-Gb/s FSK point-to-point (P2P) signal and 6.3-Gb/s OFDM overlay with three kinds of variable-rate multicast services are experimentally demonstrated. A physical-layer adaptive identification is proposed for the variable-rate multicast services. After 25 km single mode fiber (SMF) transmission, the power penalties of the FSK P2P signal and the OFDM multicast overlay are 1.3 dB and 1.7 dB, respectively. PMID:21716492

  12. Simultaneous multichannel wavelength multicasting and XOR logic gate multicasting for three DPSK signals based on four-wave mixing in quantum-dot semiconductor optical amplifier.

    PubMed

    Qin, Jun; Lu, Guo-Wei; Sakamoto, Takahide; Akahane, Kouichi; Yamamoto, Naokatsu; Wang, Danshi; Wang, Cheng; Wang, Hongxiang; Zhang, Min; Kawanishi, Tetsuya; Ji, Yuefeng

    2014-12-01

    In this paper, we experimentally demonstrate simultaneous multichannel wavelength multicasting (MWM) and exclusive-OR logic gate multicasting (XOR-LGM) for three 10-Gbps non-return-to-zero differential phase-shift-keying (NRZ-DPSK) signals in a quantum-dot semiconductor optical amplifier (QD-SOA) by exploiting the four-wave mixing (FWM) process. No additional pump is needed in the scheme. Through the interaction of the three input 10-Gbps DPSK signal lights in the QD-SOA, each channel is successfully multicasted to three wavelengths (1-to-3 for each), i.e., 3-to-9 MWM in total, and at the same time, three-output XOR-LGM is obtained at three different wavelengths. All the newly generated channels have a power penalty of less than 1.2 dB at a BER of 10^-9. Degenerate and non-degenerate FWM components are fully used in the experiment for data and logic multicasting. PMID:25606876

  13. Improvement of arterial oxygenation in free-ranging moose (Alces alces) immobilized with etorphine-acepromazine-xylazine

    PubMed Central

    2014-01-01

    Background The effect of intranasal oxygen and/or early reversal of xylazine with atipamezole on arterial oxygenation in free-ranging moose (Alces alces) immobilized with etorphine-acepromazine-xylazine was evaluated in a cross-sectional clinical study of 33 adult moose. Moose were darted from a helicopter with 3.37 mg etorphine, 15 mg acepromazine and 75 mg xylazine. Intranasal oxygen at a flow rate of 4 L/min and/or early reversal of xylazine with 7.5 mg atipamezole to improve oxygenation was evaluated using four treatment regimens: intranasal oxygen (n = 10), atipamezole intramuscularly (n = 6), atipamezole intravenously (n = 10), or a combination of atipamezole intravenously and intranasal oxygen (n = 7). Arterial blood was collected 7–30 minutes (min) after darting, and again 15 min after institution of treatment, and immediately analyzed using an i-STAT®1 Portable Clinical Analyzer. Results Before treatment the mean ± SD (range) partial pressure of arterial oxygen (PaO2) was 62 ± 17 (26–99) mmHg. Twenty-six animals had a PaO2 < 80 mmHg. Ten had a PaO2 of 40–60 mmHg and three animals had a PaO2 < 40 mmHg. Intranasal oxygen and intravenous administration of atipamezole significantly increased the mean PaO2, as did the combination of the two. In contrast, atipamezole administered intramuscularly at the evaluated dose had no significant effect on arterial oxygenation. Conclusions This study shows that intranasal oxygen effectively improved arterial oxygenation in immobilized moose, and that early intravenous reversal of the sedative component, in this case xylazine, in an opioid-based immobilization drug-protocol significantly improves arterial oxygenation. PMID:25124367

  14. Programming with process groups: Group and multicast semantics

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Cooper, Robert; Gleeson, Barry

    1991-01-01

    Process groups are a natural tool for distributed programming and are increasingly important in distributed computing environments. Discussed here is a new architecture that arose from an effort to simplify Isis process group semantics. The findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. A system based on this architecture is now being implemented in collaboration with the Chorus and Mach projects.

  15. Compensation of a distorted N-fold orbital angular momentum multicasting link using adaptive optics.

    PubMed

    Li, Shuhui; Wang, Jian

    2016-04-01

    By using an adaptive feedback correction technique, we experimentally demonstrate turbulence compensation for free-space four-fold and eight-fold 16-ary quadrature amplitude modulation (16-QAM) carrying orbital angular momentum (OAM) multicasting links. The performance of multicasted OAM beams through emulated atmospheric turbulence and adaptive optics assisted compensation loop is investigated. The experimental results show that the scheme can efficiently compensate for the atmospheric turbulence induced distortions, i.e., reducing power fluctuation of multicasted OAM channels, suppressing inter-channel crosstalk, and improving the bit-error rate (BER) performance. PMID:27192267

  16. Layered Multicast Encryption of Motion JPEG2000 Code Streams for Flexible Access Control

    NASA Astrophysics Data System (ADS)

    Nakachi, Takayuki; Toyoshima, Kan; Tonomura, Yoshihide; Fujii, Tatsuya

    In this paper, we propose a layered multicast encryption scheme that provides flexible access control to motion JPEG2000 code streams. JPEG2000 generates layered code streams and offers flexible scalability in characteristics such as resolution and SNR. The layered multicast encryption proposal allows a sender to multicast the encrypted JPEG2000 code streams such that only designated groups of users can decrypt the layered code streams. While keeping the layering functionality, the proposed method offers useful properties such as 1) video quality control using only one private key, 2) guaranteed security, and 3) low computational complexity comparable to conventional non-layered encryption. Simulation results show the usefulness of the proposed method.
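    The "video quality control using only one private key" property listed above is commonly realized with a hash-chain key derivation, where the key for each lower-quality layer is derived by hashing the key above it. The sketch below shows that generic construction only; it is not taken from the paper, and all names are illustrative.

```python
import hashlib
from typing import List

def layer_keys(master_key: bytes, n_layers: int) -> List[bytes]:
    """Derive one key per quality layer from a single master key.
    key[i+1] = SHA-256(key[i]), so a user holding the key for layer i
    can derive the keys for all lower-quality layers below it, but
    cannot recover the key for any higher-quality layer."""
    keys = [hashlib.sha256(master_key).digest()]
    for _ in range(n_layers - 1):
        keys.append(hashlib.sha256(keys[-1]).digest())
    return keys
```

    A sender then encrypts each JPEG2000 layer with its own key and hands each user group only the key for its highest permitted layer.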

  17. Abacus switch: a new scalable multicast ATM switch

    NASA Astrophysics Data System (ADS)

    Chao, H. Jonathan; Park, Jin-Soo; Choe, Byeong-Seog

    1995-10-01

    This paper describes a new architecture for a scalable multicast ATM switch that scales from a few tens to thousands of input ports. The switch, called the Abacus switch, has a nonblocking memoryless switch fabric followed by small switch modules at the output ports; the switch has input and output buffers. Cell replication, cell routing, output contention resolution, and cell addressing are all performed distributedly in the Abacus switch so that it can be scaled up to thousands of input and output ports. A novel algorithm has been proposed to resolve output port contention while achieving input buffer sharing, fairness among the input ports, and multicast call splitting. The channel grouping concept is also adopted in the switch to reduce the hardware complexity and improve the switch's throughput. The Abacus switch has a regular structure and thus has the advantages of: 1) easy expansion, 2) relaxed synchronization for data and clock signals, and 3) building the switch fabric using existing CMOS technology.

  18. Time slot assignment in TDM multicast switching systems

    NASA Astrophysics Data System (ADS)

    Chen, Wen-Tsuen; Sheu, Pi-Rong; Yu, Jiunn-Hwa

    1994-01-01

    The time slot assignment problem in time-division multiplexed switching systems which can support multicast transmissions is studied. It is shown that this problem is NP-complete, i.e., computationally intractable. Two effective heuristic algorithms are proposed, and computer simulations are also performed to evaluate the performance of both algorithms. The results of the simulations indicate that the solutions generated by these heuristic algorithms are very close to the optimal on the average. In addition, this problem is also examined under a more restrictive condition that the destination sets of any two multicast packets are either identical or disjoint, a situation often encountered in many practical applications. It is proved that this special problem is still NP-complete. Two fast heuristic algorithms are given which can find solutions not greater than twice the optimal solution. Computer simulations for evaluating these two heuristic algorithms are also performed. Experimental results demonstrate that the solutions of the two algorithms are almost equal to the optimal.
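    For NP-complete assignment problems of this kind, a first-fit greedy heuristic is a standard baseline: place each multicast packet into the earliest time slot where it conflicts with nothing already scheduled. The sketch below is an illustrative baseline only, not one of the paper's algorithms; the packet representation and names are assumptions.

```python
def first_fit_schedule(packets):
    """Greedy first-fit heuristic for TDM multicast slot assignment.
    packets: list of (input_port, dest_set). Each packet goes into the
    earliest slot whose packets share neither its input port nor any
    destination; a new slot is opened when no existing slot fits."""
    slots = []  # each slot: list of (input_port, frozenset of destinations)
    for inp, dests in packets:
        for slot in slots:
            if all(inp != i and not (dests & d) for i, d in slot):
                slot.append((inp, frozenset(dests)))
                break
        else:
            slots.append([(inp, frozenset(dests))])
    return slots
```

    The number of slots returned is the frame length; the heuristic is fast but, as with the algorithms studied in the paper, only approximately optimal.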

  19. Game and Balance Multicast Architecture Algorithms for Sensor Grid

    PubMed Central

    Fan, Qingfeng; Wu, Qiongli; Magoulés, Frèdèric; Xiong, Naixue; Vasilakos, Athanasios V.; He, Yanxiang

    2009-01-01

    We propose a scheme to attain shorter multicast delay and higher efficiency in the data transfer of a sensor grid. Our scheme, in one cluster, seeks the central node and calculates the space and the data weight vectors. Then we try to find a new vector composed as a linear combination of the two old ones. We use the equal correlation coefficient between the new and old vectors to find the point of game and balance of the space and data factors, build a binary simple equation, seek linear parameters, and generate a least-weight path tree. We handled the issue in a quantitative way instead of a qualitative way. Based on this idea, we considered the scheme from both the space and the data factor, built the mathematical model, set up the game and balance relationship and finally resolved the linear indexes, according to which we improved the transmission efficiency of the sensor grid. Extended simulation results indicate that our scheme attains a lower average multicast delay and number of links used compared with other well-known existing schemes. PMID:22399992

  20. High-pressure x-ray diffraction study of Ta4AlC3

    NASA Astrophysics Data System (ADS)

    Manoun, Bouchaib; Saxena, S. K.; El-Raghy, T.; Barsoum, M. W.

    2006-05-01

    Using a synchrotron radiation source and a diamond anvil cell, we measured the pressure dependence of the lattice parameters of a recently discovered phase, Ta4AlC3. This phase adopts a hexagonal structure with the space group P63/mmc; at room conditions, the a and c parameters are 3.087(5) and 23.70(4) Å, respectively. Up to a pressure of 47 GPa, no phase transformations were observed. Like Ta2AlC, but unlike many related phases such as Ti4AlN3, Ti3SiC2, Ti3GeC2, and Zr2InC, the compressibilities of Ta4AlC3 along the c and a axes are almost identical. The bulk modulus of Ta4AlC3, 261±2 GPa, is ≈4% greater than that of Ta2AlC. Both, however, are ≈37% lower than the 345±9 GPa of TaC.

  1. Structure of V{sub 2}AlC studied by theory and experiment

    SciTech Connect

    Schneider, Jochen M.; Mertens, Raphael; Music, Denis

    2006-01-01

    We have studied V{sub 2}AlC (space group P6{sub 3}/mmc, prototype Cr{sub 2}AlC) by ab initio calculations. The density of states (DOS) of V{sub 2}AlC for antiferromagnetic, ferromagnetic, and paramagnetic configurations have been discussed. According to the analysis of DOS and cohesive energy, no significant stability differences between spin-polarized and non-spin-polarized configurations were found. Based on the partial DOS analysis, V{sub 2}AlC can be classified as a strongly coupled nanolaminate according to our previous work [Z. Sun, D. Music, R. Ahuja, S. Li, and J. M. Schneider, Phys. Rev. B 70, 092102 (2004)]. Furthermore, this phase has been synthesized in the form of thin films by magnetron sputtering. The equilibrium volume, determined by x-ray diffraction, is in good agreement with the theoretical data, implying that ab initio calculations provide an accurate description of V{sub 2}AlC.

  2. Research on the multicast mechanism based on physical-layer-impairment awareness model for OpenFlow optical network

    NASA Astrophysics Data System (ADS)

    Bai, Hui-feng; Zhou, Zi-guan; Song, Yan-bin

    2016-05-01

    A physical-layer-impairment (PLI)-aware optical multicast mechanism is proposed for OpenFlow-controlled optical networks. The proposed approach incorporates PLI models, including both linear and non-linear factors, into optical multicast controlled by the OpenFlow protocol. Thus, the proposed scheme is able to cover nearly all PLI factors of each optical link and to conduct optical multicast with better communication quality. Simulation results show that the proposed scheme achieves better performance for OpenFlow-controlled optical multicast services.

  3. On the heat capacities of M2AlC (M=Ti,V,Cr) ternary carbides

    NASA Astrophysics Data System (ADS)

    Drulis, Monika K.; Drulis, H.; Gupta, S.; Barsoum, M. W.; El-Raghy, T.

    2006-05-01

    In this paper, we report on the heat capacities cp of bulk polycrystalline samples of Ti2AlC, V2AlC, and Cr2AlC in the 3-260 K temperature range. Given the structural and chemical similarities of these compounds it is not surprising that the cp's and their temperature dependencies were quite similar. Nevertheless, at all temperatures the heat capacity of Cr2AlC was higher than that of the other two. The densities of states at the Fermi level were 3.9, 7.5, and 14.6 (eV unit cell)^-1 for Ti2AlC, V2AlC, and Cr2AlC, respectively. The results obtained are analyzed using the Debye and Einstein model approximations for cp. A good description of cp is obtained if one assumes that nine phonon modes vibrate according to the Debye model approximation, whereas the remaining 3 of the 12 modes expected for the M2AlC formula unit follow an Einstein-like phonon vibration pattern. Debye temperatures θD describing acoustic phonon contributions and Einstein temperatures θE describing optical phonon contributions have been estimated for the studied compounds. The Debye temperatures are reasonably high and fall in the range of 600-700 K. A linear dependence was found between the number of d electrons along the row Ti, V, and Cr and the density of states at the Fermi level.
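    The nine-Debye-plus-three-Einstein model form described above can be evaluated numerically. The sketch below is illustrative only (it is not the authors' fitting code): it computes the standard Debye integral by midpoint quadrature and adds the Einstein term, with θD and θE left as inputs. Each mode contributes R in the high-temperature limit, so the model approaches 12R per mole of formula units.

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def debye_cv(T, theta_D, modes):
    """Heat capacity of `modes` phonon modes in the Debye approximation,
    via midpoint integration of the standard Debye integral."""
    x_max = theta_D / T
    n = 2000
    dx = x_max / n
    integral = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        integral += x ** 4 * math.exp(x) / (math.exp(x) - 1) ** 2 * dx
    return 3.0 * modes * R * (T / theta_D) ** 3 * integral

def einstein_cv(T, theta_E, modes):
    """Heat capacity of `modes` Einstein oscillators at temperature T."""
    x = theta_E / T
    return modes * R * x ** 2 * math.exp(x) / (math.exp(x) - 1) ** 2

def cp_model(T, theta_D, theta_E):
    """Nine Debye modes plus three Einstein modes per M2AlC formula unit,
    the partition assumed in the fit described above."""
    return debye_cv(T, theta_D, 9) + einstein_cv(T, theta_E, 3)
```

    Fitting θD and θE to the measured cp(T) curves would recover the 600-700 K Debye temperatures reported above.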

  4. 40 CFR 721.10457 - 1,2-Benzenedicarboxylic acid, mixed esters with benzyl alc., cyclohexanol, 2-ethyl-1-hexanol...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... esters with benzyl alc., cyclohexanol, 2-ethyl-1-hexanol, fumaric acid and propylene glycol. 721.10457...-hexanol, fumaric acid and propylene glycol. (a) Chemical substance and significant new uses subject to... alc., cyclohexanol, 2-ethyl-1-hexanol, fumaric acid and propylene glycol (PMN P-03-154; CAS No....

  5. 40 CFR 721.10457 - 1,2-Benzenedicarboxylic acid, mixed esters with benzyl alc., cyclohexanol, 2-ethyl-1-hexanol...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... esters with benzyl alc., cyclohexanol, 2-ethyl-1-hexanol, fumaric acid and propylene glycol. 721.10457...-hexanol, fumaric acid and propylene glycol. (a) Chemical substance and significant new uses subject to... alc., cyclohexanol, 2-ethyl-1-hexanol, fumaric acid and propylene glycol (PMN P-03-154; CAS No....

  6. Short-Term Evaluation of a Web-Based College Alcohol Misuse and Harm Prevention Course (College Alc)

    ERIC Educational Resources Information Center

    Paschal, Mallie J.; Bersamin, Melina; Fearnow-Kenney, Melodie; Wyrick, David; Currey, David

    2006-01-01

    This study examined the short-term effects of a web-based alcohol misuse and harm prevention course (College Alc) among incoming freshmen at a California public university. Analysis results indicated that at the end of the fall semester, students randomly assigned to College Alc (n = 173) had a higher level of alcohol-related knowledge and less…

  7. MDP: Reliable File Transfer for Space Missions

    NASA Technical Reports Server (NTRS)

    Rash, James; Criscuolo, Ed; Hogie, Keith; Parise, Ron; Hennessy, Joseph F. (Technical Monitor)

    2002-01-01

    This paper presents work being done at NASA/GSFC by the Operating Missions as Nodes on the Internet (OMNI) project to demonstrate the application of the Multicast Dissemination Protocol (MDP) to space missions to reliably transfer files. This work builds on previous work by the OMNI project to apply Internet communication technologies to space communication. The goal of this effort is to provide an inexpensive, reliable, standard, and interoperable mechanism for transferring files in the space communication environment. Limited bandwidth, noise, delay, intermittent connectivity, link asymmetry, and one-way links are all possible issues for space missions. Although these are link-layer issues, they can have a profound effect on the performance of transport and application level protocols. MDP, a UDP-based reliable file transfer protocol, was designed for multicast environments which have to address these same issues, and it has done so successfully. Developed by the Naval Research Lab in the mid-1990s, MDP is now in daily use by both the US Post Office and the DoD. This paper describes the use of MDP to provide automated end-to-end data flow for space missions. It examines the results of a parametric study of MDP in a simulated space link environment and discusses the results in terms of their implications for space missions. Lessons learned are addressed, which suggest minor enhancements to the MDP user interface to add specific features for space mission requirements, such as dynamic control of data rate, and a checkpoint/resume capability. These are features that are provided for in the protocol, but are not implemented in the sample MDP application that was provided. A brief look is also taken at the status of standardization. A version of MDP known as NORM (Nack Oriented Reliable Multicast) is in the process of becoming an IETF standard.

  8. MDP: Reliable File Transfer for Space Missions

    NASA Technical Reports Server (NTRS)

    Rash, James; Criscuolo, Ed; Hogie, Keith; Parise, Ron; Hennessy, Joseph F. (Technical Monitor)

    2002-01-01

    This paper presents work being done at NASA/GSFC (Goddard Space Flight Center) by the Operating Missions as Nodes on the Internet (OMNI) project to demonstrate the application of the Multicast Dissemination Protocol (MDP) to space missions to reliably transfer files. This work builds on previous work by the OMNI project to apply Internet communication technologies to space communication. The goal of this effort is to provide an inexpensive, reliable, standard, and interoperable mechanism for transferring files in the space communication environment. Limited bandwidth, noise, delay, intermittent connectivity, link asymmetry, and one-way links are all possible issues for space missions. Although these are link-layer issues, they can have a profound effect on the performance of transport and application level protocols. MDP, a UDP (User Datagram Protocol)-based reliable file transfer protocol, was designed for multicast environments which have to address these same issues, and it has done so successfully. Developed by the Naval Research Lab in the mid 1990s, MDP is now in daily use by both the US Post Office and the DoD (Department of Defense). This paper describes the use of MDP to provide automated end-to-end data flow for space missions. It examines the results of a parametric study of MDP in a simulated space link environment and discusses the results in terms of their implications for space missions. Lessons learned are addressed, which suggest minor enhancements to the MDP user interface to add specific features for space mission requirements, such as dynamic control of data rate, and a checkpoint/resume capability. These are features that are provided for in the protocol, but are not implemented in the sample MDP application that was provided. A brief look is also taken at the status of standardization. A version of MDP known as NORM (Nack Oriented Reliable Multicast) is in the process of becoming an IETF (Internet Engineering Task Force) standard.
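    The NACK-oriented repair principle behind MDP and NORM can be illustrated with a toy receiver that stays silent while data arrives in order and reports only the gaps. This is a sketch of the general idea, not the actual protocol state machine; names and the sequence-number model are illustrative.

```python
# Toy NACK-based reliable-multicast receiver: instead of ACKing every
# packet (which would implode at the sender for large groups), the
# receiver requests retransmission only of sequence numbers it is missing.

class NackReceiver:
    def __init__(self):
        self.received = set()
        self.highest = -1

    def on_data(self, seq):
        """Record a data packet; return the sequence numbers to NACK,
        i.e. every gap exposed below the highest packet seen so far."""
        self.received.add(seq)
        self.highest = max(self.highest, seq)
        return [s for s in range(self.highest + 1) if s not in self.received]
```

    In a real protocol the NACKs would additionally be delayed and suppressed when another receiver has already reported the same gap, which is what keeps the scheme scalable over the bandwidth-limited links described above.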

  9. ALC50 values for some polymeric materials. [Apparent Lethal Concentration fire toxicity]

    NASA Technical Reports Server (NTRS)

    Hilado, C. J.; Cumming, H. J.; Schneider, J. E.; Kourtides, D. A.; Parker, J. A.

    1978-01-01

    Apparent lethal concentrations for 50 per cent of the test animals within a 30-min exposure period (ALC50) were determined for seventeen samples of polymeric materials, using the screening test method. The materials evaluated included resin-glass composites, film composites, and miscellaneous resins. ALC50 values, based on weight of original sample charged, ranged from 24 to 110 mg/l. Modified phenolic resins seemed to exhibit less toxicity than the baseline epoxy resins. Among the film composites evaluated, only flame-modified polyvinyl fluoride appeared to exhibit less toxicity than the baseline polyvinyl fluoride film.

  10. A one-shot access scheme for a multicast switch

    NASA Astrophysics Data System (ADS)

    Chen, Xing; Hayes, Jeremiah F.; Ali, M. K. Mehmet

    The capability of handling multipoint connections is essential for many communication needs. A study of the performance of a one-shot access scheme for a multicast packet switch is presented. The analysis is based on an assumption of random traffic, modeled by a Bernoulli process of packet arrivals and Bernoulli trials of copy distribution patterns. Input port queueing along with a random-selection policy is used to resolve output request conflicts. The primary performance measure is the packet delay. A key assumption is that all copies of the same packet must be switched in the same slot. In small or medium-sized switches, this one-shot discipline is easier to implement than one which disperses transmission over several time slots. Simulation results agree almost perfectly with the analysis.
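    The one-shot discipline under Bernoulli traffic is easy to explore with a small Monte-Carlo simulation. The sketch below is illustrative only (function name, parameters, and defaults are assumptions, not the paper's model details): a head-of-line packet is served only if it wins every output it requested, with each contested output resolved by random selection.

```python
import random

def simulate(slots=10000, inputs=8, outputs=8, p_arrival=0.3,
             p_copy=0.3, seed=1):
    """Fraction of offered multicast packets served under the one-shot
    discipline: all copies must be switched in the same slot."""
    rng = random.Random(seed)
    served = offered = 0
    for _ in range(slots):
        # Bernoulli arrivals; each arriving packet draws a Bernoulli
        # copy-distribution pattern over the output ports.
        requests = []
        for _ in range(inputs):
            if rng.random() < p_arrival:
                dests = {o for o in range(outputs) if rng.random() < p_copy}
                if dests:
                    requests.append(dests)
        offered += len(requests)
        # Random-selection policy: each output picks one requesting input.
        winners = {}
        for o in range(outputs):
            contenders = [i for i, d in enumerate(requests) if o in d]
            if contenders:
                winners[o] = rng.choice(contenders)
        # One-shot: packet i is served only if it won every output it asked for.
        served += sum(1 for i, d in enumerate(requests)
                      if all(winners.get(o) == i for o in d))
    return served / offered if offered else 0.0
```

    Sweeping p_arrival and p_copy in such a simulation reproduces the qualitative behavior analyzed in the paper: throughput falls as fanout grows, since a packet blocked on any one copy is blocked entirely.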

  11. Potential Vertical Transmission of Winter Ticks (Dermacentor albipictus) from Moose (Alces americanus) Dams to Neonates.

    PubMed

    Severud, William J; DelGiudice, Glenn D

    2016-01-01

    North American moose (Alces americanus) frequently become infested with winter ticks (Dermacentor albipictus). During capture of neonatal moose in northeastern Minnesota, US, in May-June 2013 and 2014, we recovered adult ticks from neonates, presumably vertically transferred from dams, heretofore, not documented. Infestations on neonates may have population-level implications. PMID:26555113

  12. Asymmetric Directional Multicast for Capillary Machine-to-Machine Using mmWave Communications

    PubMed Central

    Kwon, Jung-Hyok; Kim, Eui-Jik

    2016-01-01

    The huge demand for high data rate machine-to-machine (M2M) services has led to the use of millimeter Wave (mmWave) band communications with support for a multi-Gbps data rate through the use of directional antennas. However, unnecessary sector switching in multicast transmissions with directional antennas results in a long delay, and consequently a low throughput. We propose asymmetric directional multicast (ADM) for capillary M2M to address this problem in mmWave communications. ADM provides asymmetric sectorization that is optimized for the irregular deployment pattern of mulicast group members. In ADM, an M2M gateway builds up asymmetric sectors with a beamwidth of a different size to cover all multicast group members with the minimum number of directional transmissions. The performance of ADM under various simulation environments is evaluated through a comparison with legacy mmWave multicast. The results of the simulation indicate that ADM achieves a better performance in terms of the transmission sectors, the transmission time, and the aggregate throughput when compared with the legacy multicast method. PMID:27077859
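    The asymmetric-sectorization idea can be sketched as a simple clustering of member bearings: one directional transmission covers each angular cluster, and a new sector of a different width starts wherever the gap between consecutive members is large. This is an illustrative sketch only; the gap threshold and names are assumptions, not ADM's actual beamwidth optimization.

```python
def build_sectors(bearings, max_gap=30.0):
    """Group member bearings (degrees) into asymmetric sectors.
    Consecutive members closer than max_gap share one sector; a larger
    gap starts a new sector, so sector widths adapt to the deployment."""
    pts = sorted(b % 360.0 for b in bearings)
    sectors = [[pts[0], pts[0]]]
    for b in pts[1:]:
        if b - sectors[-1][1] <= max_gap:
            sectors[-1][1] = b          # widen the current sector
        else:
            sectors.append([b, b])      # start a new sector
    return [tuple(s) for s in sectors]
```

    Fewer, tighter sectors mean fewer directional transmissions per multicast round, which is the source of ADM's delay and throughput gains over fixed symmetric sectors.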

  13. Mobility Based Key Management Technique for Multicast Security in Mobile Ad Hoc Networks

    PubMed Central

    Madhusudhanan, B.; Chitra, S.; Rajan, C.

    2015-01-01

    In MANET multicasting, forward and backward secrecy result in an increased packet drop rate owing to mobility. Frequent rekeying causes large message overhead, which increases energy consumption and end-to-end delay. In particular, the prevailing group key management techniques cope poorly with frequent mobility and disconnections. So there is a need to design a multicast key management technique to overcome these problems. In this paper, we propose a mobility-based key management technique for multicast security in MANETs. Initially, the nodes are categorized according to their stability index, which is estimated based on link availability and mobility. A multicast tree is constructed such that for every weak node there is a strong parent node. A session key-based encryption technique is utilized to transmit multicast data. The rekeying process is performed periodically by the initiator node. The rekeying interval is fixed depending on the node category, so this technique greatly minimizes the rekeying overhead. Simulation results show that our proposed approach reduces the packet drop rate and improves data confidentiality. PMID:25834838
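    The stability-driven rekeying idea above can be sketched in a few lines. The index formula, the base interval, and the scaling factor below are illustrative assumptions, not values from the paper; the point is only that stable nodes earn longer rekeying intervals, reducing overhead.

```python
def stability_index(link_availability: float, mobility: float) -> float:
    """Toy stability index in [0, 1]: higher link availability and
    lower mobility make a node more stable (both inputs in [0, 1])."""
    return link_availability * (1.0 - mobility)

def rekey_interval(index: float, base=60.0) -> float:
    """Map the stability index to a rekeying period in seconds:
    weak nodes rekey at the base interval, strong nodes far less often."""
    return base * (1.0 + 4.0 * index)
```

    The initiator node would periodically recompute each member's index and category, then schedule rekeying per category rather than per membership change.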

  14. Asymmetric Directional Multicast for Capillary Machine-to-Machine Using mmWave Communications.

    PubMed

    Kwon, Jung-Hyok; Kim, Eui-Jik

    2016-01-01

    The huge demand for high-data-rate machine-to-machine (M2M) services has led to the use of millimeter wave (mmWave) band communications, which support multi-Gbps data rates through the use of directional antennas. However, unnecessary sector switching in multicast transmissions with directional antennas results in a long delay and, consequently, low throughput. We propose asymmetric directional multicast (ADM) for capillary M2M to address this problem in mmWave communications. ADM provides asymmetric sectorization that is optimized for the irregular deployment pattern of multicast group members. In ADM, an M2M gateway builds up asymmetric sectors with beamwidths of different sizes to cover all multicast group members with the minimum number of directional transmissions. The performance of ADM under various simulation environments is evaluated through a comparison with legacy mmWave multicast. The simulation results indicate that ADM achieves better performance in terms of the number of transmission sectors, the transmission time, and the aggregate throughput when compared with the legacy multicast method. PMID:27077859
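
    The sector-minimization idea in ADM can be illustrated with a simple greedy sweep over member angles. This is a hedged sketch only: the function name, the degree representation, and the greedy rule are assumptions, and wrap-around at 360° is deliberately ignored for brevity.

```python
def greedy_sectors(member_angles, max_beamwidth):
    """Greedy heuristic: sweep the sorted member angles and open a new
    sector whenever the next member falls outside the current one.
    Returns a list of (start_angle, width) sectors, each trimmed to the
    span its members actually occupy (the 'asymmetric' widths)."""
    angles = sorted(a % 360 for a in member_angles)
    sectors = []
    i, n = 0, len(angles)
    while i < n:
        start = angles[i]
        j = i
        # extend the sector while members stay within max_beamwidth
        while j + 1 < n and angles[j + 1] - start <= max_beamwidth:
            j += 1
        sectors.append((start, angles[j] - start))
        i = j + 1
    return sectors
```

    Each sector's width shrinks to the angular span its members occupy, which is the "asymmetric" aspect; a real design would also respect the antenna's codebook constraints.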

  15. Noise performance of phase-insensitive multicasting in multi-stage parametric mixers.

    PubMed

    Huynh, Christopher K; Tong, Zhi; Myslivets, Evgeny; Wiberg, Andreas O J; Adleman, James R; Zlatanovic, Sanja; Jacobs, Everett W; Radic, Stojan

    2013-01-14

    The noise properties of large-count spectral multicasting in a phase-insensitive parametric mixer were investigated. Scalable multicasting was achieved using two-tone continuous-wave seeded mixers capable of generating more than 20 frequency non-degenerate copies. The mixer was constructed using a multistage architecture to simultaneously manage high figure-of-merit frequency generation and suppress noise generation. The performance was characterized by measuring the conversion efficiency and noise figure of all signal copies. A minimum noise figure of 8.09 dB was measured. Experimental findings confirm that the noise of the multicast signal does not grow linearly with copy count and that it can be suppressed below this limit. PMID:23388973

  16. Wavelength Requirements for a Scalable Multicast Single-Hop WDM Network

    NASA Astrophysics Data System (ADS)

    Yousif, Rabi W.; Ali, Borhanuddin Mohd; Abdullah, Mohd Khazani; Seman, Kamaruzzaman Bin; Baba, Mohd Dani

    2010-06-01

    In this paper, we present a method for designing a passive-optical single-hop wavelength division multiplexing (WDM) multicast architecture that can achieve a scalable structure and form the basis of a wavelength-efficient single-hop WDM network. The proposed architecture minimizes the number of wavelengths required for efficient multicast service and also minimizes the tunability requirements of the transceivers. Network size scalability is achieved by adding transmitters and receivers to the designated groups. We show that the proposed system can accommodate large tuning delays and maintain suitable throughput when the number of wavelengths is equal to the number of nodes. We also show that the design can lead to a scalable structure while minimizing the number of wavelengths and the transceiver tunability required for an efficient multicast service, resulting in improved system throughput and delay performance.

  17. Event-driven approach of layered multicast to network adaptation in RED-based IP networks

    NASA Astrophysics Data System (ADS)

    Nahm, Kitae; Li, Qing; Kuo, C.-C. J.

    2003-11-01

    In this work, we investigate the congestion control problem for layered video multicast in IP networks with active queue management (AQM), using a simple random early detection (RED) queue model. AQM support from the network improves the visual quality of video streaming but makes network adaptation more difficult for existing layered video multicast protocols that use the event-driven timer-based approach. We perform a simplified analysis of the response of the RED algorithm to burst traffic. The analysis shows that the primary problem lies in the weak correlation between the network feedback and the actual network congestion status when the RED queue is driven by burst traffic. Finally, a design guideline for layered multicast protocols is proposed to overcome this problem.

  18. Fragmentation-aware service provisioning for advance reservation multicast in SD-EONs.

    PubMed

    Li, Shengru; Lu, Wei; Liu, Xiahe; Zhu, Zuqing

    2015-10-01

    In this paper, we study the service provisioning schemes for dynamic advance reservation (AR) multicast requests in elastic optical networks (EONs). We first propose several algorithms that can handle the service scheduling and routing and spectrum assignment (RSA) of AR multicast requests jointly, including an integrated two-dimensional fragmentation-aware RSA (2D-FMA) that can alleviate the 2D fragmentation caused by light-tree provisioning. Then, we leverage the idea of software-defined EONs (SD-EONs) that utilizes OpenFlow (OF) in the control plane to demonstrate and evaluate the proposed algorithms. Specifically, we build an SD-EON control plane testbed, implement the algorithms in it, and perform control plane experiments on dynamic AR multicast provisioning. The results indicate that 2D-FMA achieves the best blocking performance and provides the shortest average setup delay. PMID:26480094

  19. Digital multi-channel stabilization of four-mode phase-sensitive parametric multicasting.

    PubMed

    Liu, Lan; Tong, Zhi; Wiberg, Andreas O J; Kuo, Bill P P; Myslivets, Evgeny; Alic, Nikola; Radic, Stojan

    2014-07-28

    A stable four-mode phase-sensitive (4MPS) process was investigated as a means to enhance the conversion efficiency (CE) and signal-to-noise ratio (SNR) of two-pump-driven parametric multicasting. The instability of a multi-beam phase-sensitive (PS) device, which inherently behaves as an interferometer whose output is subject to ambient-induced fluctuations, was addressed theoretically and experimentally. A new stabilization technique that controls the phases of the three input waves of the 4MPS multicaster and maximizes CE was developed and described. The stabilization relies on a digital phase-locked loop (DPLL) specifically developed to control the pump phases and guarantee stable 4MPS operation that is independent of environmental fluctuations. The technique also controls a single (signal) input phase to optimize the PS-induced improvement of the CE and SNR. The new, continuous-operation DPLL has allowed fully stabilized PS parametric broadband multicasting, demonstrating a CE improvement in excess of 10 dB over 20 signal copies. PMID:25089457

  20. Scalable Multicast Protocols for Overlapped Groups in Broker-Based Sensor Networks

    NASA Astrophysics Data System (ADS)

    Kim, Chayoung; Ahn, Jinho

    In sensor networks, there are many overlapped multicast groups because numerous subscribers, each with potentially varying specific interests, query every event to sensors/publishers. Gossip-based communication protocols are promising as one potential solution providing scalability in the publish/subscribe (P/S) paradigm in sensor networks. Moreover, despite the importance of both guaranteeing message delivery order and supporting overlapped multicast groups in sensor or P2P networks, little work exists on gossip-based protocols that satisfy all of these requirements. In this paper, we present two versions of a protocol guaranteeing causally ordered delivery for overlapped multicast groups: one based on sensor-brokers as delegates, and the other based on local views and delegates representing subscriber subgroups. In the sensor-broker-based protocol, the sensor-broker organizes overlapped multicast networks according to subscribers' interests; the message delivery order is guaranteed consistently, and all multicast messages are delivered to overlapped subscribers by gossip-based protocols through the sensor-broker. These features make the sensor-broker-based protocol significantly more scalable than protocols based on hierarchical membership lists of dedicated groups, like traditional committee protocols. The subscriber-delegate-based protocol is stronger than fully decentralized protocols that guarantee causally ordered delivery based only on local views, because the message delivery order is guaranteed consistently by all corresponding members of the groups, including delegates. This makes the subscriber-delegate protocol a hybrid approach that improves the inherent scalability of multicast by applying gossip-based techniques in all communications.
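
    Causally ordered delivery, which both protocols in this record aim to guarantee, is commonly implemented with vector clocks. The sketch below shows only the standard delivery condition, not the paper's broker or delegate machinery; the dictionary representation and function name are assumptions.

```python
def can_deliver(msg_vc, sender, local_vc):
    """Standard causal-delivery test with vector clocks: deliver a
    message once it is the sender's next message and every causal
    dependency from other processes has already been seen locally."""
    if msg_vc[sender] != local_vc[sender] + 1:
        return False  # an earlier message from the sender is missing
    return all(msg_vc[p] <= local_vc[p]
               for p in msg_vc if p != sender)
```

    A message that fails the test is buffered and retried after later gossip rounds fill in the missing dependencies.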

  1. All-optical UWB signal generation and multicasting using a nonlinear optical loop mirror.

    PubMed

    Huang, Tianye; Li, Jia; Sun, Junqiang; Chen, Lawrence R

    2011-08-15

    An all-optical scheme for ultra-wideband (UWB) signal generation (positive and negative monocycle and doublet pulses) and multicasting using a nonlinear optical loop mirror (NOLM) is proposed and demonstrated. Five UWB signals (1 monocycle and 4 doublet pulses) are generated simultaneously from a single Gaussian optical pulse. The fractional bandwidths of the monocycle pulses are approximately 100% while those of the doublet pulses range from 100% to 133%. The UWB signals are then modulated using a 2^15-1 pseudorandom bit sequence (PRBS) and error-free performance for each multicast channel is obtained. PMID:21934951

  2. All-optical UWB signal generation and multicasting using a nonlinear optical loop mirror

    NASA Astrophysics Data System (ADS)

    Huang, Tianye; Li, Jia; Sun, Junqiang; Chen, Lawrence R.

    2011-08-01

    An all-optical scheme for ultra-wideband (UWB) signal generation (positive and negative monocycle and doublet pulses) and multicasting using a nonlinear optical loop mirror (NOLM) is proposed and demonstrated. Five UWB signals (1 monocycle and 4 doublet pulses) are generated simultaneously from a single Gaussian optical pulse. The fractional bandwidths of the monocycle pulses are approximately 100% while those of the doublet pulses range from 100% to 133%. The UWB signals are then modulated using a 2^15-1 pseudorandom bit sequence (PRBS) and error-free performance for each multicast channel is obtained.

  3. Rate allocation protocol using competitive pricing for improving performance of multicast sessions

    NASA Astrophysics Data System (ADS)

    Levy, Zohar; Dolev, Danny

    1998-10-01

    Rate allocation using the max-min fairness criterion may strongly discriminate against multicast and long unicast sessions and may lead to severe network underutilization. In this paper, we present a solution for rate allocation that is based on competitive pricing. The resultant allocation increases fairness towards multicast sessions and improves network utilization considerably. The solution requires no re-routing of sessions. The economy on which we base our solution is simple enough to enable its implementation for practical use. We present a distributed asynchronous protocol suitable for the ATM ABR service, which achieves the economy's allocation efficiently and with short convergence time.
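
    The max-min discrimination the abstract refers to is easy to reproduce with the standard progressive-filling (water-filling) algorithm: a multicast session that spans more links is frozen at the first bottleneck it crosses. The sketch below is generic textbook water-filling, not the authors' pricing mechanism; all names are illustrative, and every link a session uses is assumed to appear in `links`.

```python
def max_min_allocation(links, sessions):
    """links: dict link -> capacity; sessions: dict sid -> list of links.
    Progressive filling: raise all active rates together until some
    link saturates, freeze the sessions crossing it, and repeat."""
    rates = {s: 0.0 for s in sessions}
    active = set(sessions)
    cap = dict(links)  # remaining capacity per link
    while active:
        # smallest uniform increment that saturates some link
        increments = [c / n for l, c in cap.items()
                      if (n := sum(1 for s in active if l in sessions[s]))]
        delta = min(increments)
        for s in active:
            rates[s] += delta
        for l in cap:
            cap[l] -= delta * sum(1 for s in active if l in sessions[s])
        saturated = {l for l in cap if cap[l] <= 1e-12}
        active = {s for s in active
                  if not any(l in saturated for l in sessions[s])}
    return rates
```

    With links L1 (capacity 10) and L2 (capacity 6), a unicast session on L1 alone ends up with rate 7, while a multicast session spanning both links is capped at 3 by the L2 bottleneck, the kind of outcome a pricing-based allocation is designed to soften.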

  4. Design alternatives for process group membership and multicast

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Cooper, Robert; Gleeson, Barry

    1991-01-01

    Process groups are a natural tool for distributed programming, and are increasingly important in distributed computing environments. However, there is little agreement on the most appropriate semantics for process group membership and group communication. These issues are of special importance in the Isis system, a toolkit for distributed programming. Isis supports several styles of process group, and a collection of group communication protocols spanning a range of atomicity and ordering properties. This flexibility makes Isis adaptable to a variety of applications, but is also a source of complexity that limits performance. This paper reports on a new architecture that arose from an effort to simplify Isis process group semantics. Our findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. As an illustration, we apply the architecture to the problem of converting processes into fault-tolerant process groups in a manner that is 'transparent' to other processes in the system.

  5. Caching multicast protocol for on-demand video delivery

    NASA Astrophysics Data System (ADS)

    Hua, Kien A.; Tran, Duc A.; Villafane, Roy

    1999-12-01

    Despite advances in networking technology, the limitation of the server bandwidth prevents multimedia applications from taking full advantage of next-generation networks. This constraint sets a hard limit on the number of users the server is able to support simultaneously. To address this bottleneck, we propose a Caching Multicast Protocol (CMP) to leverage the in-network bandwidth. Our solution caches video streams in the routers to facilitate regional services in the immediate future. In other words, the network storage is managed as a huge `video server' to allow the application to scale far beyond the physical limitation of its video server. The tremendous increase in the service bandwidth also enables the system to provide true on-demand services. To assess the effectiveness of this technique, we developed a detailed simulator to compare its performance with that of our earlier scheme called Chaining. The simulation results indicate that CMP is substantially better, with many desirable properties as follows: (1) it is optimized to reduce traffic congestion; (2) it uses much less caching space; (3) client workstations are not involved in the caching protocol; (4) it can work on the network layer to leverage modern routers.

  6. Protocol design for scalable and reliable group rekeying

    NASA Astrophysics Data System (ADS)

    Zhang, Xincheng B.; Lam, Simon S.; Lee, Dong Y.; Yang, Yang R.

    2001-07-01

    We present the design and specification of a scalable and reliable protocol for group rekeying together with performance evaluation results. The protocol is based upon the use of key trees for secure groups and periodic batch rekeying. At the beginning of each rekey period, the key server sends a rekey message to all users consisting of encrypted new keys (encryptions, in short) carried in a sequence of packets. We present a simple strategy for identifying keys, encryptions, and users, and a key assignment algorithm which ensures that the encryptions needed by a user are in the same packet. Our protocol provides reliable delivery of new keys to all users eventually. It also attempts to deliver new keys to all users with a high probability by the end of the rekeying period. For each rekey message, the protocol runs in two steps: a multicast step followed by a unicast step. Proactive FEC multicast is used to control NACK implosion and reduce delivery latency. Our experiments show that a small FEC block size can be used to reduce encoding time at the server without increasing server bandwidth overhead. Early transition to unicast, after at most two multicast rounds, further reduces the worst-case delivery latency as well as user bandwidth requirement. The key server adaptively adjusts the proactivity factor based upon past feedback information; our experiments show that the number of NACKs after a multicast round can be effectively controlled around a target number. Throughout the protocol design, we strive to minimize processing and bandwidth requirements for both the key server and users.
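
    The key-tree arithmetic behind this style of rekeying protocol can be sketched in a few lines. This is the generic logical-key-hierarchy layout (a complete binary tree in heap numbering), not the paper's specific key identification strategy; the function names and the rough 2·log2(n) encryption count are illustrative assumptions.

```python
def key_path(leaf_index, n_leaves):
    """Key identifiers (heap numbering, root = 1) that one user holds:
    its leaf key plus every key on the path up to the group key.
    Assumes n_leaves is a power of two, so leaves occupy indices
    n_leaves .. 2*n_leaves - 1."""
    node = n_leaves + leaf_index
    path = []
    while node >= 1:
        path.append(node)
        node //= 2
    return path

def rekey_encryptions(leaving_leaf, n_leaves):
    """After `leaving_leaf` departs, every key on its path (except its
    own leaf key) must change; each changed key is re-encrypted under
    the keys of its two children, so a binary tree needs roughly
    2 * log2(n_leaves) encryptions."""
    changed = key_path(leaving_leaf, n_leaves)[1:]  # drop the leaf key
    return 2 * len(changed)
```

    Each user stores only O(log n) keys, which is why the rekey message stays small and why the server can try to pack all encryptions a given user needs into a single packet, as the protocol's key assignment algorithm does.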

  7. Alcadein Cleavages by Amyloid β-Precursor Protein (APP) α- and γ-Secretases Generate Small Peptides, p3-Alcs, Indicating Alzheimer Disease-related γ-Secretase Dysfunction*

    PubMed Central

    Hata, Saori; Fujishige, Sayaka; Araki, Yoichi; Kato, Naoko; Araseki, Masahiko; Nishimura, Masaki; Hartmann, Dieter; Saftig, Paul; Fahrenholz, Falk; Taniguchi, Miyako; Urakami, Katsuya; Akatsu, Hiroyasu; Martins, Ralph N.; Yamamoto, Kazuo; Maeda, Masahiro; Yamamoto, Tohru; Nakaya, Tadashi; Gandy, Sam; Suzuki, Toshiharu

    2009-01-01

    Alcadeins (Alcs) constitute a family of neuronal type I membrane proteins, designated Alcα, Alcβ, and Alcγ. Alcs are expressed predominantly in neurons and largely colocalize with the Alzheimer amyloid precursor protein (APP) in the brain. Alcs and APP serve an identical function as cargo receptors for kinesin-1. Moreover, the proteolytic processing of Alc proteins appears highly similar to that of APP. We found that the APP α-secretases ADAM 10 and ADAM 17 primarily cleave Alc proteins and trigger the subsequent secondary intramembranous cleavage of Alc C-terminal fragments by a presenilin-dependent γ-secretase complex, thereby generating “APP p3-like” and non-aggregative Alc peptides (p3-Alcs). We determined the complete amino acid sequences of p3-Alcα, p3-Alcβ, and p3-Alcγ, whose major species comprise 35, 37, and 31 amino acids, respectively, in human cerebrospinal fluid. We demonstrate here that variant p3-Alc C termini are modulated by FAD-linked presenilin 1 mutations that increase the minor β-amyloid species Aβ42, and that these mutations alter the levels of minor p3-Alc species. However, the magnitudes of the C-terminal alterations of p3-Alcα, p3-Alcβ, and p3-Alcγ were not equivalent, suggesting that a given γ-secretase dysfunction does not manifest equivalently in the cleavage of different type I membrane proteins. Because these C-terminal alterations are detectable in human cerebrospinal fluid, the use of a substrate panel including Alcs and APP may be effective for detecting γ-secretase dysfunction in the prepathogenic state of Alzheimer disease subjects. PMID:19864413

  8. Design of On-Chip N-Fold Orbital Angular Momentum Multicasting Using V-Shaped Antenna Array

    PubMed Central

    Du, Jing; Wang, Jian

    2015-01-01

    We design a V-shaped antenna array to realize on-chip multicasting from a single Gaussian beam to four orbital angular momentum (OAM) beams. A pattern-search-assisted iterative (PSI) algorithm is used to design an optimized continuous phase pattern, which is further discretized to generate collinearly superimposed multiple OAM beams. Replacing the designed discrete phase pattern with corresponding V-shaped antennas, on-chip N-fold OAM multicasting is achieved. The designed on-chip 4-fold OAM multicasting exploiting a V-shaped antenna array shows favorable operation performance with low crosstalk of less than -15 dB. PMID:25951325

  9. Anti-Brucella Antibodies in Moose (Alces alces gigas), Muskoxen (Ovibos moschatus), and Plains Bison (Bison bison bison) in Alaska, USA.

    PubMed

    Nymo, Ingebjørg Helena; Beckmen, Kimberlee; Godfroid, Jacques

    2016-01-01

    We used an indirect enzyme-linked immunosorbent assay (iELISA) and the rose bengal test (RBT) to test for anti-Brucella antibodies in moose (Alces alces gigas), muskoxen (Ovibos moschatus), and plains bison (Bison bison bison) from various game management units (GMUs) in Alaska, US, sampled from 1982 to 2010. A portion of the sera had previously been tested with the standard plate test (SPT), the buffered Brucella antigen (BBA) card test, and the card test (CARD). No antibody-positive plains bison were identified. Anti-Brucella antibodies were detected in moose (iELISA, n=4/87; RBT, n=4/87; SPT, n=4/5; BBA, n=4/4) from GMU 23 captured in 1992, 1993, and 1995 and in muskoxen (iELISA, n=4/52; RBT, n=4/52; CARD, n=4/35) from GMUs 26A and 26B captured in 2004, 2006, and 2007. A negative effect of infection on the health of individuals of these species is probable. The presence of antibody-positive animals from 1992 to 2007 suggests presence of brucellae over time. The antibody-positive animals were found in northern Alaska, an area with a historically higher prevalence of Brucella-positive caribou, and a spillover of Brucella suis biovar 4 from caribou may have occurred. Brucella suis biovar 4 causes human brucellosis, and transmission from consumption of moose and muskoxen is possible. PMID:26540335

  10. Lattice instability of V2AlC at high pressure

    NASA Astrophysics Data System (ADS)

    Yang, ZeJin; Liu, Qiang; Li, Jin; Wang, Zhao; Guo, AiMin; Linghu, RongFeng; Cheng, XinLu; Yang, XiangDong

    2013-05-01

    We investigate the elastic and thermodynamic properties of nanolaminate V2AlC by using the ab initio pseudopotential total energy method. The axial compressibility shows that the c axis is always stiffer than the a axis. The elastic constants reveal structural instability at about 500 and 732 GPa. Furthermore, the elastic constant C44 reaches its maximum at about 550 GPa, differing from the other four constants C11, C12, C13, and C33. The Poisson's ratio investigations demonstrate a higher ionic (or weaker covalent) contribution to interatomic bonding, and the degree of ionicity increases with pressure. The G/B and B/C44 investigations reveal that V2AlC is brittle and that its brittleness decreases with pressure. We also find that V2AlC is an elastically anisotropic material and that the degree of anisotropy rises rapidly with pressure. The Debye temperature and Grüneisen parameter show weak temperature and strong pressure responses, whereas the thermal expansion coefficient and Helmholtz free energy depend sensitively on both.

  11. Broadcast and multicast for customer communications and distribution automation. Final report

    SciTech Connect

    Ennis, G.; Lala, T.K.

    1996-06-01

    This document presents the results of a study undertaken by First Pacific Networks as part of EPRI Project RP-3567-01 regarding the support of broadcast services within the EPRI Utility Communications Architecture (UCA) protocols and the use of such services by UCA applications. This report has focused on the requirements and architectural implications of broadcast within UCA. A subsequent phase of this project is to develop specific recommendations for extending UCA so as to support broadcast. The conclusions of this report are presented in Section 5. The authors summarize the major conclusions as follows: broadcast and multicast support would be very useful within UCA, not only for utility-specific applications but also to support the network engineering of a large-scale communications system; in this regard, UCA is no different from other large network systems, which have found broadcast and multicast to be of substantial benefit for a variety of system management purposes. The primary architectural impact of broadcast and multicast falls on the UCA network level (which would need to be enhanced) and the UCA application level (which would be the user of broadcast). There is a useful subset of MMS services that could take advantage of broadcast. The UCA network level would need to be enhanced in both addressing and routing so as to properly support broadcast. A subsequent analysis will be required to define the specific enhancements to UCA required to support broadcast and multicast.

  12. Performance investigation of optical multicast overlay system using orthogonal modulation format

    NASA Astrophysics Data System (ADS)

    Singh, Simranjit; Singh, Sukhbir; Kaur, Ramandeep; Kaler, R. S.

    2015-03-01

    We propose a bandwidth-efficient wavelength division multiplexed passive optical network (WDM-PON) to simultaneously transmit 60 Gb/s unicast and 10 Gb/s multicast services with a 10 Gb/s upstream. The differential phase shift keying (DPSK) multicast signal is superimposed onto multiplexed non-return-to-zero/polarization shift keying (NRZ/PolSK) orthogonally modulated data signals. Upstream amplitude shift keying (ASK) signals are formed without the use of any additional light source and are superimposed onto the received unicast NRZ/PolSK signal before being transmitted back to the optical line terminal (OLT). We also investigate the proposed WDM-PON system for variable optical input power and single-mode fiber transmission distance, in both multicast-enabled and multicast-disabled modes. The measured quality factor for all unicast and multicast signals is in the acceptable range (>6). The original contribution of this paper is to propose a bandwidth-efficient WDM-PON system that can be extended to high-speed scenarios at reduced channel spacing and is expected to be more technically viable due to the use of optical orthogonal modulation formats.

  13. A Secure Multicast Framework in Large and High-Mobility Network Groups

    NASA Astrophysics Data System (ADS)

    Lee, Jung-San; Chang, Chin-Chen

    With the widespread use of Internet applications such as teleconferencing, Pay-TV, collaborative tasks, and message services, how to construct and distribute the group session key to all group members securely is becoming more and more important. Instead of adopting point-to-point packet delivery, these emerging applications are based upon the mechanism of multicast communication, which allows group members to communicate with multiple parties efficiently. There are two main issues in the mechanism of multicast communication: key distribution and scalability. The first issue is how to distribute the group session key to all group members securely; the second is how to maintain high performance in large network groups. Group members in conventional multicast systems have to keep numerous secret keys in their databases, which is very inconvenient for them. Furthermore, in case a member joins or leaves the communication group, many involved participants have to change their own secret keys to preserve forward secrecy and backward secrecy. We consequently propose a novel scheme for providing secure multicast communication in large network groups. Our proposed framework not only preserves forward secrecy and backward secrecy but also achieves better performance than existing alternatives. Specifically, simulation results demonstrate that our scheme is suitable for high-mobility environments.

  14. Energy-Efficient Algorithm for Multicasting in Duty-Cycled Sensor Networks.

    PubMed

    Chen, Quan; Cheng, Siyao; Gao, Hong; Li, Jianzhong; Cai, Zhipeng

    2015-01-01

    Multicasting is a fundamental network service for one-to-many communications in wireless sensor networks. However, when the sensor nodes work in an asynchronous duty-cycled way, the sender may need to transmit the same message several times to one group of its neighboring nodes, which complicates the minimum energy multicasting problem. Thus, in this paper, we study the problem of minimum energy multicasting with adjusted power (the MEMAP problem) in the duty-cycled sensor networks, and we prove it to be NP-hard. To solve such a problem, the concept of an auxiliary graph is proposed to integrate the scheduling problem of the transmitting power and transmitting time slot and the constructing problem of the minimum multicast tree in MEMAP, and a greedy algorithm is proposed to construct such a graph. Based on the proposed auxiliary graph, an approximate scheduling and constructing algorithm with an approximation ratio of 4 ln K is proposed, where K is the number of destination nodes. Finally, the theoretical analysis and experimental results verify the efficiency of the proposed algorithm in terms of the energy cost and transmission redundancy. PMID:26690446
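
    The ln K-style approximation ratio in this record is characteristic of the greedy weighted set-cover argument: each candidate (power level, wake-up slot) pair covers a subset of the destination neighbors at some energy cost, and the algorithm repeatedly picks the most economical pair. The sketch below is that generic greedy rule under assumed names, not the paper's auxiliary-graph construction.

```python
def greedy_cover(targets, options):
    """targets: set of neighbor ids to reach.
    options: dict (power, slot) -> (energy_cost, set_of_covered_nodes).
    Greedy weighted set cover: repeatedly pick the option with the most
    newly covered nodes per unit energy."""
    uncovered = set(targets)
    chosen, total_energy = [], 0.0
    while uncovered:
        best_key, (cost, covered) = max(
            options.items(),
            key=lambda kv: len(kv[1][1] & uncovered) / kv[1][0])
        gain = covered & uncovered
        if not gain:
            raise ValueError("targets cannot all be covered")
        chosen.append(best_key)
        total_energy += cost
        uncovered -= gain
    return chosen, total_energy
```

    In a duty-cycled network, a higher power level may cover more neighbors but cost more energy per transmission, and each slot only reaches the neighbors awake in it; the greedy ratio balances the two.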

  15. Energy-Efficient Algorithm for Multicasting in Duty-Cycled Sensor Networks

    PubMed Central

    Chen, Quan; Cheng, Siyao; Gao, Hong; Li, Jianzhong; Cai, Zhipeng

    2015-01-01

    Multicasting is a fundamental network service for one-to-many communications in wireless sensor networks. However, when the sensor nodes work in an asynchronous duty-cycled way, the sender may need to transmit the same message several times to one group of its neighboring nodes, which complicates the minimum energy multicasting problem. Thus, in this paper, we study the problem of minimum energy multicasting with adjusted power (the MEMAP problem) in the duty-cycled sensor networks, and we prove it to be NP-hard. To solve such a problem, the concept of an auxiliary graph is proposed to integrate the scheduling problem of the transmitting power and transmitting time slot and the constructing problem of the minimum multicast tree in MEMAP, and a greedy algorithm is proposed to construct such a graph. Based on the proposed auxiliary graph, an approximate scheduling and constructing algorithm with an approximation ratio of 4 ln K is proposed, where K is the number of destination nodes. Finally, the theoretical analysis and experimental results verify the efficiency of the proposed algorithm in terms of the energy cost and transmission redundancy. PMID:26690446

  16. High-temperature neutron diffraction and first-principles study of temperature-dependent crystal structures and atomic vibrations in Ti3AlC2, Ti2AlC, and Ti5Al2C3

    NASA Astrophysics Data System (ADS)

    Lane, Nina J.; Vogel, Sven C.; Caspi, El'ad N.; Barsoum, Michel W.

    2013-05-01

    Herein we report on the thermal expansions and temperature-dependent crystal structures of select ternary carbide Mn+1AXn (MAX) phases in the Ti-Al-C phase diagram over the 100-1000 °C temperature range. A bulk sample containing 38(±1) wt. % Ti5Al2C3 ("523"), 32(±1) wt. % Ti2AlC ("211"), 18(±1) wt. % Ti3AlC2 ("312"), and 12(±1) wt. % (Ti0.5Al0.5)Al is studied by Rietveld analysis of high-temperature neutron diffraction data. We also report on the same for a single-phase sample of Ti3AlC2 for comparison. The thermal expansions of all the MAX phases studied are higher in the c direction than in the a direction. The bulk expansion coefficients, 9.3(±0.1)×10^-6 K^-1 for Ti5Al2C3, 9.2(±0.1)×10^-6 K^-1 for Ti2AlC, and 9.0(±0.1)×10^-6 K^-1 for Ti3AlC2, are comparable within one standard deviation of each other. In Ti5Al2C3, the dimensions of the Ti-C octahedra in the 211-like and 312-like regions are comparable to those of the Ti-C octahedra in Ti2AlC and Ti3AlC2, respectively. The isotropic mean-squared atomic displacement parameters are highest for the Al atoms in all three phases, and the values predicted from first-principles phonon calculations agree well with those measured.
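
    The reported coefficients can be sanity-checked with the defining relation for a mean linear thermal expansion coefficient, alpha = (1/L0)·(ΔL/ΔT). The lattice-parameter numbers in the usage example below are made up to match the ~9×10^-6 K^-1 scale in the abstract, not measured values.

```python
def mean_expansion_coefficient(dim_low, dim_high, t_low, t_high):
    """Mean linear thermal expansion coefficient over [t_low, t_high]:
    alpha = (1/L0) * (dL/dT), using the low-temperature value as L0.
    Temperatures may be in °C or K, since only the difference enters."""
    return (dim_high - dim_low) / (dim_low * (t_high - t_low))
```

    For example, a lattice parameter growing from 3.0000 Å at 100 °C to 3.0243 Å at 1000 °C gives alpha = 9.0×10^-6 K^-1; for a hexagonal cell the volumetric coefficient is then approximately 2·alpha_a + alpha_c.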

  17. Demonstration of obstruction-free data-carrying N-fold Bessel modes multicasting from a single Gaussian mode.

    PubMed

    Zhu, Long; Wang, Jian

    2015-12-01

    By designing and optimizing a complex phase pattern combined with an axicon phase distribution, we report data multicasting from a single Gaussian mode to multiple Bessel modes using a single phase-only spatial light modulator. Under obstructed-path conditions, obstruction-free data-carrying N-fold Bessel-mode multicasting is demonstrated in the experiment. We also experimentally study N-fold multicasting of a 20 Gbit/s quadrature phase-shift keying signal from a single Gaussian mode to multiple Bessel modes and measure the link performance. All the multicast Bessel modes show relatively low crosstalk from their neighboring modes and achieve a bit-error rate of less than 1×10^-3. PMID:26625026

  18. Polyamine metabolism in ripening tomato fruit. II. Polyamine metabolism and synthesis in relation to enhanced putrescine content and storage life of alc tomato fruit

    SciTech Connect

    Rastogi, R.; Davies, P.J. )

    1991-01-01

    The fruit of the Alcobaca landrace of tomato (Lycopersicon esculentum Mill.) have prolonged keeping qualities (determined by the allele alc) and contain three times as much putrescine as the standard Rutgers variety (Alc) at the ripe stage. Polyamine metabolism and biosynthesis were compared in fruit from Rutgers and Rutgers-alc (a near-isogenic line possessing the allele alc) at four different stages of ripening. The levels of soluble polyamine conjugates as well as wall-bound polyamines in the pericarp tissue and jelly were very low or nondetectable in both genotypes. The increase in putrescine content in alc pericarp is not related to normal ripening, as it occurred with time whether or not the fruit ripened. Pericarp discs of both normal and alc fruit showed a decrease in the metabolism of [1,4-14C]putrescine and [terminal-labeled 3H]spermidine with ripening, but there were no significant differences between the two genotypes. The activity of ornithine decarboxylase was similar in the fruit pericarp of the two lines. Arginine decarboxylase activity decreased during ripening in Rutgers but decreased and rose again in Rutgers-alc fruit, and as a result it was significantly higher in alc fruit than in normal fruit at the ripe stage. The elevated putrescine levels in alc fruit therefore appear to be due to an increase in the activity of arginine decarboxylase.

  19. Mercury, lead and lead isotope ratios in the teeth of moose (Alces alces) from Isle Royale, U.S. Upper Midwest, from 1952 to 2002.

    PubMed

    Vucetich, John A; Outridge, P M; Peterson, Rolf O; Eide, Rune; Isrenn, Rolf

    2009-07-01

    Assessing the effect of recent reductions in atmospheric pollution on metal concentrations in wildlife in North America has been difficult because of the sparse availability of historical samples with which to establish a "pre-regulation" baseline, and because many ecosystems may be affected by local point sources which could obscure broader-scale trends. Here we report a recent 50 yr annual record of Hg, Pb and Pb isotope ratios in the teeth of a resident population of moose (Alces alces) in Isle Royale National Park, a relatively remote island in Lake Superior, Michigan, USA. During the early 1980s, concentrations of tooth Hg abruptly declined by approximately 65% compared to the previous 30 years (p<0.001), similar to a previous study of Hg in herring gull eggs in the Great Lakes region. Lead declined at the same time, and by 2002 Pb in adult moose teeth was approximately 80% lower than it had been prior to the early 1980s (p<0.001). These trends were unaffected by normalization against the geogenic elements La and Sr, which indicates that the trends in Hg and Pb had an anthropogenic cause. Temporal patterns of Pb isotope ratios suggested that the primary sources of Pb in the moose at different times were combustion of U.S. coal and leaded gasoline. Reductions in emissions from coal combustion might explain the coincident reductions of Hg and Pb in Isle Royale moose, with elimination of alkyl Pb additives also playing a role in the continued tooth Pb reductions after 1983. PMID:20449224

  20. Fibrolytic Bacteria Isolated from the Rumen of North American Moose (Alces alces) and Their Use as a Probiotic in Neonatal Lambs

    PubMed Central

    Ishaq, Suzanne L.; Kim, Christina J.; Reis, Doug; Wright, André-Denis G.

    2015-01-01

    Fibrolytic bacteria were isolated from the rumen of North American moose (Alces alces), which eat a high-fiber diet of woody browse. It was hypothesized that fibrolytic bacteria isolated from the moose rumen could be used as probiotics to improve fiber degradation and animal production. Thirty-one isolates (Bacillus, n = 26; Paenibacillus, n = 1; and Staphylococcus, n = 4) were cultured from moose rumen digesta samples collected in Vermont. Using Sanger sequencing of the 16S rRNA gene, culturing techniques, and optical densities, isolates were identified and screened for biochemical properties important to plant carbohydrate degradation. Five isolates were selected as candidates for use as a probiotic, which was administered daily to neonate lambs for 9 weeks. It was hypothesized that regular administration of a fibrolysis-improving probiotic to neonate animals through weaning would increase the developing rumen bacterial diversity, increase animal production, and allow for long-term colonization of the probiotic species. Neither weight gain nor wool quality was improved in lambs given a probiotic; however, dietary efficiency increased, as evidenced by reduced feed intake (and rearing costs) without a loss in weight gain. Experimental lambs had a lower acetate to propionate ratio than control lambs, which was previously shown to indicate increased dietary efficiency. Fibrolytic bacteria made up the majority of sequences, mainly Prevotella, Butyrivibrio, and Ruminococcus. While protozoal densities increased over time and then remained stable, methanogen densities varied greatly in the first six months of life for lambs. This is likely due to the changing diet and bacterial populations in the developing rumen. PMID:26716685

  1. Novel nuclear protein ALC-INTERACTING PROTEIN1 is expressed in vascular and mesocarp cells in Arabidopsis.

    PubMed

    Wang, Fang; Shi, Dong-Qiao; Liu, Jie; Yang, Wei-Cai

    2008-07-01

    Pod shattering is an agronomical trait that is a result of the coordinated action of cell differentiation and separation. In Arabidopsis, pod shattering is controlled by a complex genetic network in which ALCATRAZ (ALC), a member of the basic helix-loop-helix family, is critical for cell separation during fruit dehiscence. Herein, we report the identification of ALC-INTERACTING PROTEIN1 (ACI1) via the yeast two-hybrid screen. ACI1 encodes a nuclear protein with a lysine-rich domain and a C-terminal serine-rich domain. ACI1 is mainly expressed in the vascular system throughout the plant and mesocarp of the valve in siliques. Our data showed that ACI1 interacts strongly with the N-terminal portion of ALC in yeast cells and in plant cells in the nucleus as demonstrated by bimolecular fluorescence complementation assay. Both ACI1 and ALC share an overlapping expression pattern, suggesting that they likely function together in planta. However, no detectable phenotype was found in plants with reduced ACI1 expression by RNA interference technology, suggesting that ACI1 may be redundant. Taken together, these data indicate that ALC may interact with ACI1 and its homologs to control cell separation during fruit dehiscence in Arabidopsis. PMID:18713402

  2. High-pressure powder x-ray diffraction experiments and ab initio calculation of Ti3AlC2

    NASA Astrophysics Data System (ADS)

    Zhang, Haibin; Wu, Xiang; Nickel, Klaus Georg; Chen, Jixin; Presser, Volker

    2009-07-01

    The structural stability of the layered ternary carbide Ti3AlC2 was studied up to 35 GPa using x-ray diffraction with a Merrill-Bassett-type diamond anvil cell and ab initio calculations. The structure (P63/mmc) was stable over the present pressure range without any phase transition. The Birch-Murnaghan equation of state was employed to fit the experimental pressure-volume data, from which the isothermal bulk modulus of Ti3AlC2 was determined as 156±5 GPa, which was also supported by theoretical results. In addition, theoretical calculations described the anisotropic pressure dependences of the lattice parameters, electronic structure, and bonding properties of Ti3AlC2.
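    The bulk modulus quoted above comes from fitting the third-order Birch-Murnaghan equation of state to pressure-volume data. A minimal sketch of such a fit on synthetic data (V0, B0', and the noise level are illustrative assumptions, not the paper's values):

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, V0, B0, B0p):
    """Third-order Birch-Murnaghan equation of state, P(V) in GPa."""
    eta = (V0 / V) ** (1.0 / 3.0)
    return 1.5 * B0 * (eta**7 - eta**5) * (1.0 + 0.75 * (B0p - 4.0) * (eta**2 - 1.0))

# Synthetic P-V data generated with B0 = 156 GPa (the value reported above),
# normalized volume, and a small amount of measurement noise.
V = np.linspace(0.85, 1.0, 20)
rng = np.random.default_rng(0)
P = birch_murnaghan(V, 1.0, 156.0, 4.5) + rng.normal(0.0, 0.2, V.size)

(V0_fit, B0_fit, B0p_fit), _ = curve_fit(birch_murnaghan, V, P, p0=(1.0, 100.0, 4.0))
print(f"B0 = {B0_fit:.0f} GPa")
```

The fit recovers the bulk modulus as the curvature of P(V) near the zero-pressure volume V0; with real diffraction data the residual noise is larger and the uncertainty on B0 correspondingly wider.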

  3. Effect of neutron irradiation on defect evolution in Ti3SiC2 and Ti2AlC

    NASA Astrophysics Data System (ADS)

    Tallman, Darin J.; He, Lingfeng; Garcia-Diaz, Brenda L.; Hoffman, Elizabeth N.; Kohse, Gordon; Sindelar, Robert L.; Barsoum, Michel W.

    2016-01-01

    Herein we report on the characterization of defects formed in polycrystalline Ti3SiC2 and Ti2AlC samples exposed to neutron irradiation - up to 0.1 displacements per atom (dpa) at 350 ± 40 °C or 695 ± 25 °C, and up to 0.4 dpa at 350 ± 40 °C. Black spots are observed in both Ti3SiC2 and Ti2AlC after irradiation to both 0.1 and 0.4 dpa at 350 °C. After irradiation to 0.1 dpa at 695 °C, small basal dislocation loops, with a Burgers vector of b = 1/2 [0001] are observed in both materials. At 9 ± 3 and 10 ± 5 nm, the loop diameters in the Ti3SiC2 and Ti2AlC samples, respectively, were comparable. At 1 × 10²³ loops/m³, the dislocation loop density in Ti2AlC was ≈1.5 orders of magnitude greater than in Ti3SiC2, at 3 × 10²¹ loops/m³. After irradiation at 350 °C, extensive microcracking was observed in Ti2AlC, but not in Ti3SiC2. The room temperature electrical resistivities increased as a function of neutron dose for all samples tested, and appear to saturate in the case of Ti3SiC2. The MAX phases are unequivocally more neutron radiation tolerant than the impurity phases TiC and Al2O3. Based on these results, Ti3SiC2 appears to be a more promising MAX phase candidate for high temperature nuclear applications than Ti2AlC.

  4. A WDM-OFDM-PON architecture with centralized lightwave and PolSK-modulated multicast overlay.

    PubMed

    Liu, Bo; Xin, Xiangjun; Zhang, Lijia; Yu, Jianjun; Zhang, Qi; Yu, Chongxiu

    2010-02-01

    We propose and demonstrate a novel wavelength-division-multiplexing orthogonal-frequency-division-multiplexing passive-optical-network (WDM-OFDM-PON) architecture with centralized lightwave sources and polarization shift keying (PolSK) multicast overlay. The 10-Gb/s 16QAM-OFDM point-to-point (P2P) signal, 2.5-Gb/s multicast PolSK signal and 2.5-Gb/s on-off keying (OOK) upstream signal are experimentally demonstrated. After transmission over 25 km standard single-mode fiber (SMF), the 1.5 dB crosstalk between the downstream signals is eliminated by employing a low-pass electrical filter at the PolSK receiver. The power penalty of the upstream OOK signal at a BER of 10⁻⁹ is less than 0.1 dB. PMID:20174042

  5. Efficient Transmission of Multicast MAPs in IEEE 802.16e

    NASA Astrophysics Data System (ADS)

    Yeom, Jae-Heung; Lee, Yong-Hwan

    The Institute of Electrical and Electronics Engineers (IEEE) 802.16e standard is designed to support a wide range of applications with various quality-of-service requirements. Since MAP signaling overhead can be unacceptably large for voice traffic, IEEE 802.16e suggests the use of multicast sub-MAPs whose messages are encoded according to the channel condition. In this case, it is desirable for the base station to properly choose a modulation and coding set suited to the channel condition. In this letter, we consider the use of an adaptive modulation and coding scheme for the multicast sub-MAPs without explicit information on the channel condition. The proposed scheme can achieve the same MAP coverage as the broadcast MAP while minimizing the signaling overhead. Simulation results show that when it is applied to voice-over-internet protocol (VoIP) services, the proposed scheme can significantly enhance the VoIP capacity.

  6. MQUAKE multicast software early warning demonstrated for 31 October 2001 Anza Ml5.1 earthquake

    NASA Astrophysics Data System (ADS)

    Eakins, J. A.; Hansen, T.; Vernon, F. L.; Braun, H.

    2003-12-01

    MQUAKE distributes real-time multicast parametric information from individual sensors, as well as a summarized location and magnitude based on the data recorded from sensors of the ANZA seismic network, with the goal of providing event notification prior to arrival of the actual shock wave at the client's location. The program gathers detection and triggering information from an operational Antelope real-time data collection system and sends it to clients via multicast and unicast UDP packets. Multicast packets are preferred as they allow multiple people to receive event packets in the fastest time possible (however, a unicast mode is available since most IP networks do not support multicast). These packets are decrypted by client software, which then produces a list of triggers/events that will be used in future versions of the code to generate wavefront estimate plots and approximate maximum shock wave travel times based on the client's location and limited current information. This system works in both wired and wireless environments, such as HPWREN, the High Performance Wireless Research and Education Network. A real-time example of this system was obtained during the Ml5.1 31 October 2001 earthquake that occurred directly under the ANZA seismic network, approximately 70 km away from an MQUAKE client. The MQUAKE program was able to deliver a warning of a significant "event" 10 seconds after the initial ground motion was recorded and about 4 seconds prior to ground motion reaching the client. An actual event location and magnitude approximation was received 71 seconds after the local ground shaking at the client's location (85 seconds after the event). Had the client been located along the coast of San Diego, they would have had additional warning time prior to the shaking. Clients in San Diego, the closest major metropolitan area to this event, could have received up to 12 seconds of early warning.
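    The warning margin in such a system is roughly the S-wave travel time to the client minus the end-to-end alert latency. A back-of-envelope sketch (the wave speed and latency values are illustrative assumptions, not MQUAKE measurements):

```python
def warning_time_s(distance_km: float, s_wave_kms: float = 3.5,
                   processing_delay_s: float = 14.0) -> float:
    """Seconds of warning before S-wave arrival at a client, given the
    time from event origin to alert delivery (processing_delay_s).
    Both default values are illustrative assumptions."""
    return distance_km / s_wave_kms - processing_delay_s

# A client 70 km from the epicenter, as in the Anza example:
print(f"{warning_time_s(70.0):.1f} s of warning")  # 6.0 s of warning
```

The margin grows linearly with distance, which is why a coastal San Diego client would have had more lead time than the 70-km client in the abstract.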

  7. Bulk data transfer distributer: a high performance multicast model in ALMA ACS

    NASA Astrophysics Data System (ADS)

    Cirami, R.; Di Marcantonio, P.; Chiozzi, G.; Jeram, B.

    2006-06-01

    A high performance multicast model for the bulk data transfer mechanism in the ALMA (Atacama Large Millimeter Array) Common Software (ACS) is presented. The ALMA astronomical interferometer will consist of at least 50 12-m antennas operating at millimeter wavelength. The whole software infrastructure for ALMA is based on ACS, which is a set of application frameworks built on top of CORBA. To cope with the very strong requirements for the amount of data that needs to be transported by the software communication channels of the ALMA subsystems (a typical output data rate expected from the Correlator is of the order of 64 MB per second) and with the potential CORBA bottleneck due to parameter marshalling/de-marshalling, usage of the IIOP protocol, etc., a transfer mechanism based on the ACE/TAO CORBA Audio/Video (A/V) Streaming Service has been developed. The ACS Bulk Data Transfer architecture bypasses the CORBA protocol with an out-of-band connection for the data streams (transmitting data directly in TCP or UDP format), using CORBA only for handshaking and leveraging the benefits of ACS middleware. Such a mechanism has proven capable of high performance, of the order of 800 Mbit/s on a 1 Gbit Ethernet network. Besides a point-to-point communication model, the ACS Bulk Data Transfer provides a multicast model. Since the TCP protocol does not support multicasting and all the data must be correctly delivered to all ALMA subsystems, a distributer mechanism has been developed. This paper focuses on the ACS Bulk Data Distributer, which mimics multicast behaviour by managing data dispatching to all receivers willing to get data from the same sender.
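    The distributer pattern described above (one sender, many reliable point-to-point receivers) can be sketched in-process as a simple fan-out. Class and method names here are hypothetical, not the actual ACS API:

```python
from typing import Callable, List

class BulkDataDistributer:
    """Minimal sketch of a distributer that mimics multicast over a
    point-to-point transport: every chunk received from the single
    sender is re-dispatched to all registered receivers."""

    def __init__(self) -> None:
        self._receivers: List[Callable[[bytes], None]] = []

    def subscribe(self, receiver: Callable[[bytes], None]) -> None:
        """Register a receiver callback (stands in for a TCP connection)."""
        self._receivers.append(receiver)

    def on_data(self, chunk: bytes) -> None:
        # Reliable "multicast": deliver the same chunk to every receiver.
        for receiver in self._receivers:
            receiver(chunk)

# Two receivers collecting the same stream:
a, b = [], []
d = BulkDataDistributer()
d.subscribe(a.append)
d.subscribe(b.append)
for chunk in (b"hdr", b"payload"):
    d.on_data(chunk)
print(a == b == [b"hdr", b"payload"])  # True
```

In the real system each callback would be a per-receiver TCP stream, so one slow receiver can stall the fan-out unless the distributer buffers per connection.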

  8. Trace elements in moose (Alces alces) found dead in Northwestern Minnesota, USA

    USGS Publications Warehouse

    Custer, T.W.; Cox, E.; Gray, B.

    2004-01-01

    The moose (Alces alces) population in bog and forest areas of Northwestern Minnesota has declined for more than 25 years, and more recently the decline is throughout Northwestern Minnesota. Both deficiencies and elevations in trace elements have been linked to the health of moose worldwide. The objective of this study was to evaluate whether trace element toxicity or deficiency may have contributed to the decline of moose in Northwestern Minnesota. Livers of 81 moose found dead in Northwestern Minnesota in 1998 and 1999 were analyzed for trace elements. With the exception of selenium (Se) and copper (Cu), trace elements were not at toxic or deficient levels based on criteria set for cattle. Selenium concentrations in moose livers based on criteria set for cattle were deficient in 3.7% of livers and at a chronic toxicity level in 16% of livers. Copper concentrations based on criteria set for cattle were deficient in 39.5% of livers, marginally deficient in 29.5% of livers and adequate in 31% of livers. Moose from agricultural areas had higher concentrations, on average, of Cd, Cu, Mo and Se in their livers than moose from bog and forest areas. Older moose had higher concentrations of Cd and Zn, and lower concentrations of Cu than younger moose. Copper deficiency, which has been associated with population declines of moose in Alaska and Sweden, may be a factor contributing to the decline of moose in Northwestern Minnesota.

  9. Ab initio study of basal slip in Nb(2)AlC.

    PubMed

    Music, Denis; Sun, Zhimei; Voevodin, Andrey A; Schneider, Jochen M

    2006-05-01

    Using ab initio calculations, we have studied shearing in Nb(2)AlC, where NbC and Al layers are interleaved. The stress-strain analysis of this deformation mode reveals Nb-Al bond breaking, while the Nb-C bond length decreases by 4.1%. Furthermore, there is no evidence for phase transformation during deformation. This is consistent with basal slip and may be understood on the basis of the electronic structure: bands below the Fermi level are responsible for the dd bonding between NbC basal planes and only a single band with a weak dd interaction is not resistant to shearing, while all other bands are unaffected. The Al-Nb bonding character can be described as mainly metallic with weak covalent-ionic contributions. Our study demonstrates that Al layers move with relative ease under shear strain. Phase conservation upon shearing is unusual for carbides and may be due to the layered nature of the phase studied. Here, we describe the electronic origin of basal slip in Nb(2)AlC, the atomic mechanism which enables reversible plasticity in this class of materials. PMID:21690790

  10. Proxy-assisted multicasting of video streams over mobile wireless networks

    NASA Astrophysics Data System (ADS)

    Nguyen, Maggie; Pezeshkmehr, Layla; Moh, Melody

    2005-03-01

    This work addresses the challenge of providing seamless multimedia services to mobile users by proposing a proxy-assisted multicast architecture for delivery of video streams. We propose a hybrid system of streaming proxies, interconnected by an application-layer multicast tree, where each proxy acts as a cluster head to stream out content to its stationary and mobile users. The architecture is based on our previously proposed Enhanced-NICE protocol, which uses an application-layer multicast tree to deliver layered video streams to multiple heterogeneous receivers. We focused the study on the placement of streaming proxies to enable efficient delivery of live and on-demand video, supporting both stationary and mobile users. The simulation results are evaluated and compared with two other baseline scenarios: one with a centralized proxy system serving the entire population and one with mini-proxies each serving its local users. The simulations are implemented using the J-SIM simulator. The results show that even though proxies in the hybrid scenario experienced a slightly longer delay, they had the lowest drop rate of video content. This finding illustrates the significance of task sharing among multiple proxies. The resulting load balancing among proxies provided better video quality delivered to a larger audience.

  11. Oxidation Resistance of Materials Based on Ti3AlC2 Nanolaminate at 600 °C in Air

    NASA Astrophysics Data System (ADS)

    Ivasyshyn, Andrij; Ostash, Orest; Prikhna, Tatiana; Podhurska, Viktoriya; Basyuk, Tatiana

    2016-08-01

    The oxidation behavior of Ti3AlC2-based materials was investigated at 600 °C in static air for 1000 h. It was shown that the intense increase in weight gain per unit surface area for the sintered material with 22% porosity was attributed to oxidation of the outer surface of the specimen and the surfaces of pores in the bulk material. The oxidation rate of the hot-pressed Ti3AlC2-based material with 1% porosity increased markedly for the first 15 h and then slowly decreased. The weight gain per unit surface area for this material was 1.0 mg/cm2 after exposure for 1000 h. The intense initial oxidation of Ti3AlC2-based materials can be eliminated by a pre-oxidation treatment at 1200 °C in air for 2 h. As a result, the weight gain per unit surface area for the pre-oxidized material did not exceed 0.11 mg/cm2 after 1000 h of exposure at 600 °C in air. It was demonstrated that the oxidation resistance of Ti3AlC2-based materials can be significantly improved by niobium addition.

  12. Effect of neutron irradiation on defect evolution in Ti3SiC2 and Ti2AlC

    DOE PAGESBeta

    Tallman, Darin J.; He, Lingfeng; Garcia-Diaz, Brenda L.; Hoffman, Elizabeth N.; Kohse, Gordon; Sindelar, Robert L.; Barsoum, Michel W.

    2015-10-23

    Here, we report on the characterization of defects formed in polycrystalline Ti3SiC2 and Ti2AlC samples exposed to neutron irradiation – up to 0.1 displacements per atom (dpa) at 350 ± 40 °C or 695 ± 25 °C, and up to 0.4 dpa at 350 ± 40 °C. Black spots are observed in both Ti3SiC2 and Ti2AlC after irradiation to both 0.1 and 0.4 dpa at 350 °C. After irradiation to 0.1 dpa at 695 °C, small basal dislocation loops, with a Burgers vector of b = 1/2 [0001] are observed in both materials. At 9 ± 3 and 10 ± 5 nm, the loop diameters in the Ti3SiC2 and Ti2AlC samples, respectively, were comparable. At 1 × 10²³ loops/m³, the dislocation loop density in Ti2AlC was ≈1.5 orders of magnitude greater than in Ti3SiC2, at 3 × 10²¹ loops/m³. After irradiation at 350 °C, extensive microcracking was observed in Ti2AlC, but not in Ti3SiC2. The room temperature electrical resistivities increased as a function of neutron dose for all samples tested, and appear to saturate in the case of Ti3SiC2. The MAX phases are unequivocally more neutron radiation tolerant than the impurity phases TiC and Al2O3. Based on these results, Ti3SiC2 appears to be a more promising MAX phase candidate for high temperature nuclear applications than Ti2AlC.

  13. Carbon diffusion in alumina from carbon and Ti{sub 2}AlC thin films

    SciTech Connect

    Guenette, Mathew C.; Tucker, Mark D.; Bilek, Marcela M. M.; McKenzie, David R.; Ionescu, Mihail

    2011-04-15

    Carbon diffusion is observed in single-crystal α-Al2O3 substrates from carbon and Ti2AlC thin films synthesized via pulsed cathodic arc deposition. Diffusion was found to occur at substrate temperatures of 570 °C and above. The diffusion coefficient of carbon in α-Al2O3 is estimated to be of the order of 3×10⁻¹³ cm²/s for deposition temperatures in the 570-770 °C range, by examining elastic recoil detection analysis (ERDA) elemental depth profiles. It is suggested that an appropriate diffusion barrier may be useful when depositing carbon-containing thin films on α-Al2O3 substrates at high temperatures.
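    Given the quoted diffusion coefficient, the characteristic diffusion depth over a deposition run follows L = sqrt(D*t). A quick sketch (the one-hour deposition time is a hypothetical example, not from the paper):

```python
import math

def diffusion_length_nm(D_cm2_s: float, t_s: float) -> float:
    """Characteristic diffusion depth L = sqrt(D*t), returned in nm."""
    L_cm = math.sqrt(D_cm2_s * t_s)
    return L_cm * 1e7  # 1 cm = 1e7 nm

# D ~ 3e-13 cm^2/s (the abstract's estimate); a hypothetical 1-hour deposition:
print(f"{diffusion_length_nm(3e-13, 3600):.0f} nm")  # ≈ 329 nm
```

A depth of a few hundred nanometers per hour is comparable to typical film thicknesses, which is why the authors suggest a diffusion barrier for high-temperature deposition.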

  14. Cold Spraying of Ti2AlC MAX-Phase Coatings

    NASA Astrophysics Data System (ADS)

    Gutzmann, H.; Gärtner, F.; Höche, D.; Blawert, C.; Klassen, T.

    2013-03-01

    Cold spraying was applied to deposit Ti2AlC on different substrate materials. The study of single impacts by scanning electron microscopy indicates that bonding of the first layer is mainly attributed to the deformation and shear instabilities occurring at substrate sites. Nevertheless, as compared to the feedstock particles, the splats appear flattened by the impact. This deformation seems to be attributed not only to local, internal shear but also to internal fracture. By applying up to five passes under optimized spray parameters, Ti2AlC coatings with thicknesses of about 110-155 μm were achieved. XRD analysis of the coating proved that the crystallographic structure of the feedstock was retained during cold spraying. The coating microstructures show rather low porosity of less than about 2%, but several cracks between spray layers. Successful build-up of more than one layer can probably be attributed to local deformation of the highly anisotropic Ti2AlC phase.

  15. Silicon-organic hybrid slot waveguide based three-input multicasted optical hexadecimal addition/subtraction

    PubMed Central

    Gui, Chengcheng; Wang, Jian

    2014-01-01

    By exploiting multiple non-degenerate four-wave mixing in a silicon-organic hybrid slot waveguide and 16-ary phase-shift keying signals, we propose and simulate three-input (A, B, C) multicasted 40-Gbaud (160-Gbit/s) optical hexadecimal addition/subtraction (A + B − C, A + C − B, B + C − A, A + B + C, A − B − C, B − A − C). The error vector magnitude (EVM) and dynamic range of signal power are analyzed to evaluate the performance of optical hexadecimal addition/subtraction. PMID:25502618
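    The underlying trick is that four-wave mixing multiplies optical fields, so the phases of 16-PSK symbols add and subtract modulo 2π, i.e. hexadecimal digits add and subtract modulo 16. A numerical sketch of that mapping (ideal noiseless symbols, not a simulation of the waveguide itself):

```python
import cmath
import math

def psk16(digit: int) -> complex:
    """Map a hex digit (0-15) to a unit-amplitude 16-PSK symbol."""
    return cmath.exp(1j * 2 * math.pi * digit / 16)

def decode(sym: complex) -> int:
    """Recover the hex digit from a 16-PSK symbol's phase."""
    return round(cmath.phase(sym) / (2 * math.pi / 16)) % 16

# Mixing products multiply the fields, so the phases add/subtract:
A, B, C = 0x9, 0x7, 0x3
result = decode(psk16(A) * psk16(B) / psk16(C))  # A + B - C (mod 16)
print(hex(result))  # 0xd
```

Each of the six output combinations in the abstract (A + B − C, A + C − B, etc.) corresponds to one non-degenerate mixing product, so all six results are multicast onto separate wavelengths simultaneously.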

  16. Implementation of Both High-Speed Transmission and Quality of System for Internet Protocol Multicasting Services

    NASA Astrophysics Data System (ADS)

    Son, Byounghee; Park, Youngchoong; Nahm, Euiseok

    The paper introduces both high-speed transmission and quality of system for offering Internet services on an HFC (Hybrid Fiber Coaxial) network. This is achieved by modulating the phase and the amplitude of the IPMS (Internet Protocol Multicasting Service) signal. An IP-cable transmitter, IP-cable modem, and IP-cable management servers that support 30-Mbps IPMS on the HFC network were developed. The system provides a 21-Mbps HDTV transport stream on a cable TV network and can sustain a clear picture for a long time.

  17. BlueGene/L Specific Modification to MRNet: Multicast/Reduction Network

    SciTech Connect

    Lee, G. L.; Ahn, D.

    2007-05-21

    MRNet is a software tree-based overlay network developed at the University of Wisconsin, Madison that provides a scalable communication mechanism for parallel tools. MRNet uses a tree topology of networked processes between a user tool and distributed tool daemons. This tree topology allows scalable multicast communication from the tool to the daemons. The internal nodes of the tree can be used to distribute computation and analysis on data sent from the tool daemons to the tool. This release covers modifications that we have made to the Wisconsin implementation to port this software to the BlueGene/L architecture. It also covers some additional files that were created for this port.

  18. New broadcasting protocols for video-on-demand systems in multicast environment

    NASA Astrophysics Data System (ADS)

    Feng, Jian; Poon, Wing-Fai; Lo, Kwok-Tung

    2002-09-01

    In this paper, two new broadcasting protocols are proposed for providing video-on-demand (VoD) services in a multicast environment. The two protocols are developed by introducing a new first-segment delivery scheme for the skyscraper and staggered protocols. With our approach, the first segment of a video is further divided into a number of small pieces so that customers can download the requested data and start watching the video in a shorter time. The results show that the start-up latency for users is greatly reduced when using our new protocols.
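    The intuition behind the first-segment scheme: a viewer must wait, at worst, one broadcast period of the first segment before playback can start, so splitting that segment into k pieces broadcast k times as often cuts the worst-case wait by roughly a factor of k. A toy model of that bound (an illustrative simplification, not the exact protocol schedules):

```python
def worst_case_startup_s(first_segment_s: float, pieces: int = 1) -> float:
    """Worst-case wait before playback when the first segment is split
    into `pieces` sub-segments, each rebroadcast `pieces` times as often
    on the same channel bandwidth. Illustrative model only."""
    return first_segment_s / pieces

# A 60-second first segment, whole vs. split into 6 pieces:
print(worst_case_startup_s(60.0))     # 60.0
print(worst_case_startup_s(60.0, 6))  # 10.0
```

The trade-off is receiver complexity: the client must buffer and reassemble the sub-segments arriving on their staggered schedules.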

  19. (Nbx, Zr1-x)4AlC3 MAX Phase Solid Solutions: Processing, Mechanical Properties, and Density Functional Theory Calculations.

    PubMed

    Lapauw, Thomas; Tytko, Darius; Vanmeensel, Kim; Huang, Shuigen; Choi, Pyuck-Pa; Raabe, Dierk; Caspi, El'ad N; Ozeri, Offir; To Baben, Moritz; Schneider, Jochen M; Lambrinou, Konstantina; Vleugels, Jozef

    2016-06-01

    The solubility of zirconium (Zr) in the Nb4AlC3 host lattice was investigated by combining the experimental synthesis of (Nbx, Zr1-x)4AlC3 solid solutions with density functional theory calculations. High-purity solid solutions were prepared by reactive hot pressing of NbH0.89, ZrH2, Al, and C starting powder mixtures. The crystal structure of the produced solid solutions was determined using X-ray and neutron diffraction. The limited Zr solubility (maximum of 18.5% of the Nb content in the host lattice) in Nb4AlC3 observed experimentally is consistent with the calculated minimum in the energy of mixing. The lattice parameters and microstructure were evaluated over the entire solubility range, while the chemical composition of (Nb0.85, Zr0.15)4AlC3 was mapped using atom probe tomography. The hardness, Young's modulus, and fracture toughness at room temperature as well as the high-temperature flexural strength and E-modulus of (Nb0.85, Zr0.15)4AlC3 were investigated and compared to those of pure Nb4AlC3. Quite remarkably, an appreciable increase in fracture toughness was observed from 6.6 ± 0.1 MPa·m^(1/2) for pure Nb4AlC3 to 10.1 ± 0.3 MPa·m^(1/2) for the (Nb0.85, Zr0.15)4AlC3 solid solution. PMID:27159119

  20. Discovery of carbon-vacancy ordering in Nb4AlC3-x under the guidance of first-principles calculations

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Hu, Tao; Wang, Xiaohui; Li, Zhaojin; Hu, Minmin; Wu, Erdong; Zhou, Yanchun

    2015-09-01

    The conventional wisdom of tailoring the properties of binary transition metal carbides by order-disorder phase transformation has been inapplicable to the machinable ternary carbides (MTCs) due to the absence of an ordered phase in bulk samples. Here, the presence of an ordered phase with structural carbon vacancies in Nb4AlC3-x (x ≈ 0.3) ternary carbide is predicted by first-principles calculations, and experimentally identified for the first time by transmission electron microscopy and micro-Raman spectroscopy. Consistent with the first-principles prediction, the ordered phase, o-Nb4AlC3, crystallizes in P63/mcm with a = 5.423 Å, c = 24.146 Å. Coexistence of the ordered (o-Nb4AlC3) and disordered (Nb4AlC3-x) phases brings about abundant domains with irregular shapes in the bulk sample. Both heating and electron irradiation can induce the transformation from o-Nb4AlC3 to Nb4AlC3-x. Our findings may offer substantial insights into the roles of carbon vacancies in the structure stability and order-disorder phase transformation in MTCs.

  1. Discovery of carbon-vacancy ordering in Nb4AlC3–x under the guidance of first-principles calculations

    PubMed Central

    Zhang, Hui; Hu, Tao; Wang, Xiaohui; Li, Zhaojin; Hu, Minmin; Wu, Erdong; Zhou, Yanchun

    2015-01-01

    The conventional wisdom of tailoring the properties of binary transition metal carbides by order-disorder phase transformation has been inapplicable to the machinable ternary carbides (MTCs) due to the absence of an ordered phase in bulk samples. Here, the presence of an ordered phase with structural carbon vacancies in Nb4AlC3–x (x ≈ 0.3) ternary carbide is predicted by first-principles calculations, and experimentally identified for the first time by transmission electron microscopy and micro-Raman spectroscopy. Consistent with the first-principles prediction, the ordered phase, o-Nb4AlC3, crystallizes in P63/mcm with a = 5.423 Å, c = 24.146 Å. Coexistence of the ordered (o-Nb4AlC3) and disordered (Nb4AlC3–x) phases brings about abundant domains with irregular shapes in the bulk sample. Both heating and electron irradiation can induce the transformation from o-Nb4AlC3 to Nb4AlC3–x. Our findings may offer substantial insights into the roles of carbon vacancies in the structure stability and order-disorder phase transformation in MTCs. PMID:26388153

  2. A high-temperature neutron diffraction study of Nb2AlC and TiNbAlC

    DOE PAGESBeta

    Bentzel, Grady W.; Lane, Nina J.; Vogel, Sven C.; An, Ke; Barsoum, Michel W.; Caspi, El'ad N.

    2014-12-16

    In this paper, we report on the crystal structures of Nb2AlC and TiNbAlC (actual composition (Ti0.45,Nb0.55)2AlC) compounds, determined from Rietveld analysis of neutron diffraction patterns in the 300-1173 K temperature range. The average linear thermal expansion coefficients of a Nb2AlC sample in the a and c directions are, respectively, 7.9(5)×10⁻⁶ K⁻¹ and 7.7(5)×10⁻⁶ K⁻¹ on one neutron diffractometer and 7.3(3)×10⁻⁶ K⁻¹ and 7.0(2)×10⁻⁶ K⁻¹ on a second diffractometer. The respective values for the (Ti0.45,Nb0.55)2AlC composition - only tested on one diffractometer - are 8.5(3)×10⁻⁶ K⁻¹ and 7.5(5)×10⁻⁶ K⁻¹. These values are relatively low compared to other MAX phases. Like other MAX phases, however, the atomic displacement parameters show that the Al atoms vibrate with higher amplitudes than the Ti and C atoms, and more along the basal planes than normal to them. In addition, when the predictions of the atomic displacement parameters obtained from density functional theory are compared to the experimental results, good quantitative agreement is found for the Al atoms. In the case of the Nb and C atoms, the agreement was more qualitative.
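    An expansion coefficient of this size implies only a small lattice change across the full temperature range, via a(T) = a0(1 + α·ΔT). A quick check (the room-temperature a0 for Nb2AlC is an approximate literature value, used here only for illustration):

```python
def expanded_lattice(a0_nm: float, alpha_per_K: float, dT: float) -> float:
    """Linear thermal expansion: a(T) = a0 * (1 + alpha * dT)."""
    return a0_nm * (1.0 + alpha_per_K * dT)

# Nb2AlC a-axis: a0 ~ 0.3106 nm (approximate literature value),
# alpha_a = 7.9e-6 K^-1 from the abstract, heating 300 K -> 1173 K:
print(f"{expanded_lattice(0.3106, 7.9e-6, 873):.4f} nm")  # 0.3127 nm
```

That is well under a 1% change over nearly 900 K, which is what "relatively low compared to other MAX phases" means in practice.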

  3. Experimental Evaluation of Unicast and Multicast CoAP Group Communication.

    PubMed

    Ishaq, Isam; Hoebeke, Jeroen; Moerman, Ingrid; Demeester, Piet

    2016-01-01

    The Internet of Things (IoT) is expanding rapidly to new domains in which embedded devices play a key role and gradually outnumber traditionally-connected devices. These devices are often constrained in their resources and are thus unable to run standard Internet protocols. The Constrained Application Protocol (CoAP) is a new alternative standard protocol that implements the same principles as the Hypertext Transfer Protocol (HTTP), but is tailored towards constrained devices. In many IoT application domains, devices need to be addressed in groups in addition to being addressable individually. Two main approaches are currently being proposed in the IoT community for CoAP-based group communication. The main difference between the two approaches lies in the underlying communication type: multicast versus unicast. In this article, we experimentally evaluate those two approaches using two wireless sensor testbeds and under different test conditions. We highlight the pros and cons of each of them and propose combining these approaches in a hybrid solution to better suit certain use case requirements. Additionally, we provide a solution for multicast-based group membership management using CoAP. PMID:27455262
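    The core trade-off between the two approaches can be counted directly: one multicast request reaches the whole group, while unicast needs one request per member, and each member replies either way. A simplified message-count model (no retransmissions or proxying, unlike the paper's testbed experiments):

```python
def message_count(group_size: int, mode: str) -> int:
    """Total datagrams on the wire to query every member of a CoAP group:
    requests plus one response per member. Illustrative model only."""
    if mode == "multicast":
        requests = 1            # one datagram reaches all members
    elif mode == "unicast":
        requests = group_size   # one request per member
    else:
        raise ValueError(f"unknown mode: {mode}")
    responses = group_size      # every member answers either way
    return requests + responses

for n in (5, 50):
    print(f"{n} members: unicast={message_count(n, 'unicast')}, "
          f"multicast={message_count(n, 'multicast')}")
```

The saving grows with group size on the request side, but the response flood is identical, which is one reason the paper considers a hybrid unicast/multicast solution.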

  4. Collaborative work during interventional radiological procedures based on a multicast satellite-terrestrial network.

    PubMed

    Gortzis, Lefteris G; Papadopoulos, Homer; Roelofs, Theo A; Rakowsky, Stefan; Karnabatidis, Dimitris; Siablis, Dimitris; Makropoulos, Constantinos; Nikiforidis, George; Graschew, Georgi

    2007-09-01

    Collaboration is a key requirement in several contemporary interventional radiology procedures (IRPs). This work proposes a multicast hybrid satellite system capable of supporting advanced IRP collaboration, and evaluates its feasibility and applicability. Following a detailed IRP requirements study, we have developed a system which supports IRP collaboration through the employment of a hybrid satellite-terrestrial network, a prototype multicast version of the wavelet-based interactive communication system (WinVicos) application, and a partition aggregation and conditional coding (PACC) wavelet codec. A semistructured questionnaire was also used to receive evaluative feedback from the collaborating participants. The departments of interventional radiology of the University Hospital of Patras, Greece, and of the Charite Hospital of Berlin, Germany, were connected through the system. Eight interventional radiologists and a vascular surgeon participated periodically in three satellite-terrestrial "fully collaborative" IRPs (average time 90 min) of high complexity and in four terrestrial educational sessions, with great success evidenced by considerable improvement of the IRP outcomes (clinical and educational). In cases of high complexity, where the simultaneous presence of a remote interventional expert and/or surgeon is required, advanced collaboration among the staff of geographically dispersed international centers is feasible via integration of existing networking and other technologies. PMID:17912978

  5. Comparative Study of Multicast Protection Algorithms Using Shared Links in 100GET Transport Network

    NASA Astrophysics Data System (ADS)

    Sulaiman, Samer; Haidine, Abdelfattah; Lehnert, Ralf; Tuerk, Stefan

    In recent years, new challenges have emerged in the telecommunications market resulting from the increase of network traffic and strong competition. Because of that, service providers feel compelled to replace expensive and complex IP-routers with a cheap and simple solution which guarantees the requested quality of service (QoS) at low cost. One of these solutions is to use Ethernet technology as a switching layer, which allows offering the cheap Ethernet services (E-Line, E-LAN and E-Tree) while replacing the expensive IP-routers. To achieve this migration step, new algorithms that support current as well as future services have to be developed. In this paper, we investigate the multicast protection issue. Three multicast protection algorithms based on sharing capacity between primary and backup solutions are proposed and evaluated. The blocking probability is used to evaluate the performance of the proposed algorithms. The sub-path algorithm achieved a lower blocking probability than the other algorithms.
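
    Blocking probability, the metric used above, has a classical closed form in the simplest setting. As general background (my illustration, not the paper's shared-protection model), the Erlang-B formula gives the probability that a request arrives when all capacity units are busy:

```python
# Erlang-B blocking probability: the chance that an arriving request finds
# all m capacity units busy, given offered load a (in Erlangs). Shown as
# background for blocking-probability evaluation; the paper's model for
# shared-protection multicast trees is more elaborate than this.

def erlang_b(a: float, m: int) -> float:
    """Iterative form: B(a,0)=1; B(a,k) = a*B(a,k-1) / (k + a*B(a,k-1))."""
    b = 1.0
    for k in range(1, m + 1):
        b = a * b / (k + a * b)
    return b

# An offered load of 2 Erlangs on 2 channels blocks 40% of requests.
print(erlang_b(2.0, 2))  # 0.4
```

    The iterative recurrence avoids the factorials of the textbook ratio form and stays numerically stable for large m.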

  6. On the Relationship between Multicast/Broadcast Throughput and Resource Utilizations in Wireless Mesh Networks

    PubMed Central

    Valaee, Shahrokh

    2013-01-01

    This paper deals with the problem of multicast/broadcast throughput in multi-channel multi-radio wireless mesh networks that suffer from resource constraints. We provide a formulation to capture the utilization of the network resources and derive analytical relationships for the network's throughput in terms of the node utilization, the channel utilization, and the number of transmissions. Our model relies on on-demand quality-of-service multicast/broadcast sessions, where each admitted session creates a unique tree with a specific bandwidth. As an advantage, the derived relationships are independent of the type of tree built for each session and can be used for different protocols. The proposed formulation considers the channel assignment strategy and reflects both the wireless broadcast advantage and the interference constraint. We also offer a comprehensive discussion to evaluate the effects of load-balancing and the number of transmissions on the network's throughput. Numerical results confirm the accuracy of the presented analysis. PMID:24348188

  7. Design of a Multicast Optical Packet Switch Based on Fiber Bragg Grating Technology for Future Networks

    NASA Astrophysics Data System (ADS)

    Cheng, Yuh-Jiuh; Yeh, Tzuoh-Chyau; Cheng, Shyr-Yuan

    2011-09-01

    In this paper, a non-blocking multicast optical packet switch based on fiber Bragg grating technology with optical output buffers is proposed. Only the header of each optical packet is converted to electronic signals, to control the fiber Bragg grating array of the input ports, while the packet payloads are transparently forwarded to their output ports, so that the proposed switch reduces both the number of electronic interfaces and the electronic bit rate. The modulation and the format of the packet payloads may be non-standard, and the payloads could also include different wavelengths to increase the traffic volume. The advantage is obvious: the proposed switch can transport various types of traffic. An easily implemented architecture which can provide multicast services is also presented. An optical output buffer is designed to queue packets when more than one incoming packet is destined to the same output port, or when packets already waiting in the optical output buffer are to be sent to that port in the same time slot. To preserve service-packet sequencing and fairness of the routing sequence, a priority scheme and a round-robin algorithm are adopted at the optical output buffer. The fiber Bragg grating arrays of both the input ports and the output ports are designed to route incoming packets using optical code division multiple access technology.

  9. Isolation of pristine MXene from Nb4AlC3 MAX phase: a first-principles study.

    PubMed

    Mishra, Avanish; Srivastava, Pooja; Mizuseki, Hiroshi; Lee, Kwang-Ryeol; Singh, Abhishek K

    2016-04-20

    Synthesis of pristine MXene sheets from the MAX phase is one of the foremost challenges in gaining a complete understanding of the properties of this new technologically important 2D-material. Efforts to exfoliate the Nb4AlC3 MAX phase always lead to Nb4C3 MXene sheets, which are functionalized and have several Al atoms attached. Using first-principles calculations, we perform an intensive study on the chemical transformation of the MAX phase into MXene sheets by inserting HF, alkali atoms and LiF in the Nb4AlC3 MAX phase. The calculated bond-dissociation energy (BDE) shows that the presence of HF in the MAX phase always results in functionalized MXene, as the binding of H with MXene is quite strong while that with F is weak. Insertion of alkali atoms does not facilitate pristine MXene isolation due to the presence of chemical bonds of almost equal strength. In contrast, weak Li-MXene and strong Li-F bonding in Nb4AlC3 with LiF ensured strong anisotropy in the BDE, which will result in the dissociation of the Li-MXene bond. Ab initio molecular dynamics calculations capture these features and show that at 500-650 K, the Li-MXene bond indeed breaks, leaving a pristine MXene sheet behind. The approach and insights developed here for the chemical exfoliation of layered materials bonded by chemical bonds instead of van der Waals forces can promote their experimental realization. PMID:27045339

  10. Low-power-penalty wavelength multicasting for 36  Gbit/s 16-QAM coherent optical signals in a silicon waveguide.

    PubMed

    Wang, Xiaoyan; Huang, Lingchen; Gao, Shiming

    2014-12-15

    All-optical wavelength multicasting has been experimentally demonstrated for 36 Gbit/s 16-quadrature amplitude modulation signals based on four-wave mixing processes in a silicon waveguide with multiple pumps. In our experiment, dual pumps are injected together with the signal into the waveguide and nine idlers are generated, involving five wavelength multicasting channels. Coherent detection and advanced digital signal processing are employed, and the recovered constellation diagrams of the multicasting idlers show a root-mean-square error vector magnitude degradation as small as 2.74%. The bit error rate (BER) results are measured for these multicasting idlers, and the power penalties are all lower than 0.96 dB at the BER of 3.8x10-3 (corresponding to the forward error correction threshold). PMID:25503027
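
    The count of nine idlers from two pumps plus a signal follows from enumerating the four-wave-mixing products f_i + f_j - f_k. A sketch with arbitrary example frequencies (chosen with generic, non-degenerate spacing so no products coincide):

```python
# Enumerate four-wave-mixing products f_i + f_j - f_k generated from two
# pumps and one signal. With generic (unevenly spaced) input frequencies
# this yields exactly the nine new idlers mentioned in the abstract.
# The frequency values below are arbitrary illustrative integers.
from itertools import combinations_with_replacement

def fwm_idlers(waves):
    """All distinct f_i + f_j - f_k that do not coincide with an input."""
    inputs = set(waves)
    idlers = set()
    for i, j in combinations_with_replacement(waves, 2):
        for k in waves:
            f = i + j - k
            if f not in inputs:
                idlers.add(f)
    return sorted(idlers)

pump1, pump2, signal = 19310, 19311, 19315  # generic, unevenly spaced
idlers = fwm_idlers([pump1, pump2, signal])
print(len(idlers))  # 9
```

    With harmonically related spacings some products would overlap and the idler count would drop, which is why real multicasting experiments choose pump and signal spacings carefully.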

  11. Heavy metal contents of paddy fields of Alcácer do Sal, Portugal.

    PubMed

    Fernandes, J C; Henriques, F S

    1990-01-01

    Recent claims of metal contamination in the lower reaches of the Sado River, in the Alcácer do Sal region, Portugal, a major rice-producing area, were investigated by carrying out metal surveys in the area. The elements Fe, Mn, Zn, Cu and Pb were measured in the soil and in rice plant parts--roots, shoots and grain--as well as in some weeds growing on the Sado banks, near the paddy fields. Results showed that the metal contents of the paddy soils were similar to background concentrations, with the exception of Zn and Cu, which were above those concentrations and reached their highest levels at Vale de Guizo, the monitored station located furthest upstream in the Sado River. At some sites, plant roots accumulated relatively large amounts of Fe, Mn, Zn and Cu, but the shoot levels of these metals were within the normal range for rice plants. It is possible that varying, but significant, amounts of Fe associated with the roots were in the form of a ferric hydroxide plaque covering their surfaces. Copper levels in the shoots of rice were below the normal contents cited for this plant in the literature. Metal levels of river sediments collected near Vale de Guizo seem to corroborate the possibility of some metal contamination in the Sado River, most probably derived from pyrites mining activity in the upper zone of the Sado basin. PMID:2305246

  12. On the small angle twist sub-grain boundaries in Ti3AlC2

    PubMed Central

    Zhang, Hui; Zhang, Chao; Hu, Tao; Zhan, Xun; Wang, Xiaohui; Zhou, Yanchun

    2016-01-01

    Tilt-dominated grain boundaries have been investigated in depth in the deformation of MAX phases. In stark contrast, another important type of grain boundaries, twist grain boundaries, have long been overlooked. Here, we report on the observation of small angle twist sub-grain boundaries in a typical MAX phase Ti3AlC2 compressed at 1200 °C, which comprise hexagonal screw dislocation networks formed by basal dislocation reactions. By first-principles investigations on atomic-scale deformation and general stacking fault energy landscapes, it is unequivocally demonstrated that the twist sub-grain boundaries are most likely located between Al and Ti4f (Ti located at the 4f Wyckoff sites of P63/mmc) layers, with breaking of the weakly bonded Al–Ti4f. The twist angle increases with the increase of deformation and is estimated to be around 0.5° for a deformation of 26%. This work may shed light on sub-grain boundaries of MAX phases, and provide fundamental information for future atomic-scale simulations. PMID:27034075

  13. Cold spray deposition of Ti2AlC coatings for improved nuclear fuel cladding

    NASA Astrophysics Data System (ADS)

    Maier, Benjamin R.; Garcia-Diaz, Brenda L.; Hauch, Benjamin; Olson, Luke C.; Sindelar, Robert L.; Sridharan, Kumar

    2015-11-01

    Coatings of the Ti2AlC MAX phase compound have been successfully deposited on Zircaloy-4 (Zry-4) test flats, with the goal of enhancing the accident tolerance of LWR fuel cladding. A low-temperature powder spray process, also known as cold spray, has been used to deposit coatings ∼90 μm in thickness using powder particles of <20 μm. X-ray diffraction analysis showed the phase content of the deposited coatings to be identical to that of the powders, indicating that no phase transformation or oxidation had occurred during the coating deposition process. The coating exhibited a high hardness of about 800 HK, and pin-on-disk wear tests using an abrasive ruby ball counter-surface showed the wear resistance of the coating to be significantly superior to that of the Zry-4 substrate. Scratch tests revealed the coatings to be well-adhered to the Zry-4 substrate. Such mechanical integrity is required for claddings from the standpoint of fretting wear resistance and resisting wear during handling and insertion. Air oxidation tests at 700 °C and simulated LOCA tests at 1005 °C in a steam environment showed the coatings to be significantly more oxidation resistant compared to Zry-4, suggesting that such coatings can potentially provide accident tolerance to nuclear fuel cladding.

  16. Coordinated increase of γ-secretase reaction products in the plasma of some female Japanese sporadic Alzheimer's disease patients: quantitative analysis of p3-Alcα with a new ELISA system

    PubMed Central

    2011-01-01

    Background Aggregatable amyloid β-peptide (Aβ) and non-aggregatable p3-Alcα are metabolic products of the γ-secretase cleavage of amyloid β-protein precursor (APP) and Alcadeinα (Alcα), respectively. Familial AD (FAD) -linked mutations in the presenilin 1 or 2 (PS1 or PS2) component of γ-secretase can cause alternative intramembranous processing of APP and Alcα, leading to a coordinated generation of variants of both Aβ and p3-Alcα. Variant Alcα peptides have been observed in the cerebrospinal fluid (CSF) of patients with mild cognitive impairment and sporadic Alzheimer's disease (AD). Since, like APP, Alcα is largely expressed in brain, one might predict that alternative processing of Alcα would be reflected in body fluids of some AD patients. These patients with misprocessing of multiple γ-secretase substrates might define an endophenotype of p3-Alcα, in whom AD is due either to dysfunction of γ-secretase or to a disorder of the clearance of hydrophobic peptides such as those derived from transmembrane domains. Results We developed a simple procedure for extraction of p3-Alcα from plasma and for analyzing this extract in a sensitive, p3-Alcα-specific sandwich enzyme-linked immunosorbent assay (ELISA) system. Plasma p3-Alcα levels and Aβ40 levels were examined in sporadic AD subjects from two independent Japanese cohorts. In some of these patients, levels of plasma p3-Alcα were significantly higher, and were accompanied by parallel changes in Aβ40 levels. This AD-related difference was more marked in female subjects, but this phenomenon was not observed in subjects with frontotemporal lobar degeneration (FTLD). Conclusion Reagents and procedures have been established that enable extraction of p3-Alcα from plasma and for quantification of plasma p3-Alcα levels by ELISA. Some populations of AD subjects apparently show increased levels of both p3-Alcα and Aβ40. Quantification of p3-Alcα level may be useful as a readily accessible biomarker

  17. Brief Report: Genetics of Alcoholic Cirrhosis - GenomALC multinational Study

    PubMed Central

    Whitfield, John B.; Rahman, Khairunnessa; Haber, Paul S.; Day, Christopher P.; Masson, Steven; Daly, Ann K.; Cordell, Heather J.; Mueller, Sebastian; Seitz, Helmut K.; Liangpunsakul, Suthat; Westerhold, Chi; Liang, Tiebing; Lumeng, Lawrence; Foroud, Tatiana; Nalpas, Bertrand; Mathurin, Philippe; Stickel, Felix; Soyka, Michael; Botwin, Gregory J.; Morgan, Timothy R.; Seth, Devanshi

    2015-01-01

    Background The risk of alcohol-related liver cirrhosis increases with increasing alcohol consumption, but many people with very high intake escape liver disease. We postulate that susceptibility to alcoholic cirrhosis has a complex genetic component, and propose that this can be dissected through a large and sufficiently-powered genome-wide association study (GWAS). Methods The GenomALC Consortium comprises researchers from Australia, France, Germany, Switzerland, United Kingdom and United States, with a joint aim of exploring the genetic and genomic basis of alcoholic cirrhosis. For this NIH/NIAAA funded study, we are recruiting high-risk drinkers who are either cases (with alcoholic cirrhosis) or controls (drinking comparable amounts over similar time, but free of significant liver disease). Extensive phenotypic data are obtained using semi-structured interviews and patient records, and blood samples are collected. Results We have successfully recruited 859 participants including 538 matched case-control samples as of September 2014, using study specific inclusion-exclusion criteria and data collection protocols. Of these, 580 are cases (442 men, 138 women) and 279 are controls (205 men, 74 women). Duration of excessive drinking was slightly greater in cases than controls and was significantly less in women than men. Cases had significantly lower lifetime alcohol intake than controls. Both cases and controls had a high prevalence of reported parental alcohol problems, but cases were significantly more likely to report that a father with alcohol problems had died from liver disease (Odds Ratio 2.53, 95% CI 1.31–4.87, p = 0.0055). Conclusions Recruitment of participants for a GWAS of alcoholic cirrhosis has proved feasible across countries with multiple sites. Affected patients often consume less alcohol than unaffected ones, emphasising the existence of individual vulnerability factors. Cases are more likely to report liver disease in a father with alcohol

  18. BlueGene/L Specific Modification to MRNet: Multicast/Reduction Network

    2007-05-21

    MRNet is a software tree-based overlay network developed at the University of Wisconsin, Madison, that provides a scalable communication mechanism for parallel tools. MRNet uses a tree topology of networked processes between a user tool and distributed tool daemons. This tree topology allows scalable multicast communication from the tool to the daemons. The internal nodes of the tree can be used to distribute computation and analysis on data sent from the tool daemons to the tool. This release covers modifications that we have made to the Wisconsin implementation to port this software to the BlueGene/L architecture. It also covers some additional files that were created for this port.
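
    The scalability of a tree overlay like the one described comes from the fact that, at a fixed fan-out, the number of hops between the tool and its daemons grows only logarithmically with the daemon count. A small illustration (this helper is my own, not part of the MRNet API):

```python
# Why tree-based overlays scale: with fan-out k, the number of levels
# between a root tool and N leaf daemons grows like log_k(N).
# Integer arithmetic is used to avoid floating-point log rounding.

def tree_depth(n_leaves: int, fanout: int) -> int:
    """Levels needed so a k-ary tree reaches at least n_leaves leaves."""
    depth = 0
    reach = 1
    while reach < n_leaves:
        reach *= fanout
        depth += 1
    return depth

# 65,536 daemons (a BlueGene/L-scale job) behind fan-out-16 internal nodes:
print(tree_depth(65536, 16))  # 4
```

    Four levels of internal nodes suffice for tens of thousands of daemons, which is why the internal nodes are also attractive places to aggregate and reduce tool data on the way up.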

  19. Noise performance of phase-insensitive frequency multicasting in parametric mixer with finite dispersion.

    PubMed

    Tong, Zhi; Wiberg, Andreas O J; Myslivets, Evgeny; Huynh, Chris K; Kuo, Bill P P; Alic, Nikola; Radic, Stojan

    2013-07-29

    The noise performance of a dual-pump, multi-sideband parametric mixer operated in phase-insensitive mode is investigated theoretically and experimentally. It is shown that, in the case when a large number of multicasting idlers is generated, the noise performance is strictly dictated by the dispersion characteristics of the mixer. We find that the sideband noise performance is significantly degraded in the anomalous dispersion region, which permits nonlinear noise amplification. In contrast, in the normal dispersion region, the noise performance converges to the level of the four-sideband parametric process, rather than deteriorating with increased sideband creation. Low-noise generation mandates a precise dispersion-induced phase mismatch among the pump and sideband waves in order to control the noise coupling. We measure the noise performance improvement for a many-sideband, multi-stage mixer incorporating a new design technique. PMID:23938638

  20. Context-based user grouping for multi-casting in heterogeneous radio networks

    NASA Astrophysics Data System (ADS)

    Mannweiler, C.; Klein, A.; Schneider, J.; Schotten, H. D.

    2011-08-01

    Along with the rise of sophisticated smartphones and smart spaces, the availability of both static and dynamic context information has steadily been increasing in recent years. Due to the popularity of social networks, these data are complemented by profile information about individual users. Making use of this information by classifying users in wireless networks enables targeted content and advertisement delivery as well as optimizing network resources, in particular bandwidth utilization, by facilitating group-based multi-casting. In this paper, we present the design and implementation of a web service for advanced user classification based on user, network, and environmental context information. The service employs simple and advanced clustering algorithms for forming classes of users. Available service functionalities include group formation, context-aware adaptation, and deletion as well as the exposure of group characteristics. Moreover, the results of a performance evaluation, where the service has been integrated in a simulator modeling user behavior in heterogeneous wireless systems, are presented.
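
    Group formation from context data, as offered by the service above, can be done with standard clustering. A from-scratch one-dimensional k-means sketch (the feature, a per-user signal-quality score, and the data are hypothetical; the paper's service supports richer context vectors and algorithms):

```python
# Toy illustration of context-based user grouping: cluster users by a
# single context feature (a hypothetical signal-quality score) with a
# from-scratch k-means, the simplest of the clustering algorithms such
# a grouping service might employ.
import random

def kmeans_1d(values, k, iters=100, seed=0):
    """Return k cluster centers and a label per value."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value joins its nearest center.
        labels = [min(range(k), key=lambda c: abs(v - centers[c]))
                  for v in values]
        # Update step: move each center to the mean of its members.
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return centers, labels

scores = [0.1, 0.2, 0.15, 0.9, 0.95, 0.85]  # two obvious user groups
centers, labels = kmeans_1d(scores, k=2)
print(sorted(round(c, 3) for c in centers))
```

    On this well-separated data the centers converge to the two group means (0.15 and 0.9), splitting the users into the two multicast groups one would expect.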

  1. OC-ALC hazardous waste minimization strategy: Reduction of industrial biological sludge from industrial wastewater treatment facilities

    SciTech Connect

    Hall, F.E. Jr.

    1997-12-31

    Oklahoma City Air Logistics Center (OC-ALC) is one of five US Air Force Logistic Centers that perform depot level maintenance of aircraft. As part of the maintenance process, aircraft are cleaned, chemically depainted, repainted, and electroplated. These repair/maintenance processes generate large quantities of dilute liquid effluent which are collected and treated in the Industrial Waste Treatment Plant (IWTP) prior to hazardous waste disposal. OC-ALC is committed to reducing the use of hazardous materials in the repair and maintenance of aircraft and ancillary components. A major Air Force initiative is to reduce the amount of hazardous waste discharged off-site by 25% by the end of CY96 and 50% by CY99 end. During maintenance and repair operations, organic chemicals are employed. These organics are discharged to the IWTP for biological degradation. During the biological digestion process, a biological sludge is generated. OC-ALC engineers are evaluating the applicability of a biosludge acid/heat treatment process. In the acid hydrolysis process, an acid is added to the biosludge and processed through a hot, pressurized reactor where the majority of the biosolids are broken down and solubilized. The resulting aqueous product stream is then recycled back to the traditional biotreatment process for digestion of the solubilized organics. The solid waste stream is dewatered prior to disposal. The objective of the subsequent effort is to achieve a reduction in hazardous waste generation and disposal by focusing primarily on end-of-the-pipe treatment at the IWTP. Acid hydrolysis of biosludge is proving to be a practical process for use in industrial and municipal wastewater biotreatment systems that will lower environmental and economic costs by minimizing the production and disposal of biosludge.

  2. Performance Analysis of Multicast Video Streaming in IEEE 802.11 b/g/n Testbed Environment

    NASA Astrophysics Data System (ADS)

    Kostuch, Aleksander; Gierłowski, Krzysztof; Wozniak, Jozef

    The aim of the work is to analyse the capabilities and limitations of different IEEE 802.11 technologies (IEEE 802.11 b/g/n), utilized for both multicast and unicast video streaming transmissions directed to mobile devices. Our preliminary research showed that results obtained with currently popular simulation tools can be drastically different from those achievable in a real-world environment, so, in order to correctly evaluate the performance of video streaming, a simple wireless test-bed infrastructure has been created. The results show a strong dependence of the quality of video streaming on the chosen transmission technology. At the same time there are significant differences in perception quality between multicast (1:n) and unicast (1:1) streams, and also between devices offered by different manufacturers. The overall results seem to demonstrate that, while multicast support quality in different products is still varied and often requires additional configuration, it is possible to select a WiFi access point model and determine the best system parameters to ensure good video transfer conditions in terms of acceptable QoP/E (Quality of Perception/Excellence).

  3. Hybrid digital-analog video transmission in wireless multicast and multiple-input multiple-output system

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Lin, Xiaocheng; Fan, Nianfei; Zhang, Lin

    2016-01-01

    Wireless video multicast has become one of the key technologies in wireless applications. But the main challenge of conventional wireless video multicast, i.e., the cliff effect, remains unsolved. To overcome the cliff effect, a hybrid digital-analog (HDA) video transmission framework based on SoftCast, which transmits the digital bitstream with the quantization residuals, is proposed. With an effective power allocation algorithm and appropriate parameter settings, the residual gains can be maximized; meanwhile, the digital bitstream can assure transmission of a basic video to the multicast receiver group. In the multiple-input multiple-output (MIMO) system, since nonuniform noise interference on different antennas can be regarded as the cliff effect problem, ParCast, which is a variation of SoftCast, is also applied to video transmission to solve it. The HDA scheme with corresponding power allocation algorithms is also applied to improve video performance. Simulations show that the proposed HDA scheme can overcome the cliff effect completely with the transmission of residuals. What is more, it outperforms the compared WSVC scheme by more than 2 dB when transmitting under the same bandwidth, and it can further improve performance by nearly 8 dB in MIMO when compared with the ParCast scheme.

  4. Reliability training

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.

    1992-01-01

    Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.
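
    The basic reliability theory surveyed above rests on a few standard formulas. A sketch of the textbook cases: exponential component reliability R(t) = exp(-lambda*t), and the series and parallel (redundant) system combination rules (my illustration of the standard math, not material from the training document itself):

```python
# A few textbook formulas from basic reliability theory, as covered in
# training material like the above: exponential component reliability
# and the series/parallel system combination rules.
import math

def reliability(failure_rate: float, t: float) -> float:
    """R(t) = exp(-lambda * t) for a constant-failure-rate component."""
    return math.exp(-failure_rate * t)

def series(*rs: float) -> float:
    """A series system works only if every component works."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(*rs: float) -> float:
    """A redundant (parallel) system fails only if all components fail."""
    out = 1.0
    for r in rs:
        out *= (1.0 - r)
    return 1.0 - out

# Two 0.9-reliable units: series drops to 0.81, redundancy lifts to 0.99.
print(series(0.9, 0.9), parallel(0.9, 0.9))
```

    The asymmetry between the two combination rules is the core argument for redundancy: chaining components multiplies reliabilities down, while duplicating them multiplies failure probabilities down.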

  5. All-Optical 1-to-8 Wavelength Multicasting at 20 Gbit/s Exploiting Self-Phase Modulation in Dispersion Flattened Highly Nonlinear Photonic Crystal Fiber

    PubMed Central

    Hui, Zhan-Qiang

    2014-01-01

    All-optical multicasting, which performs data routing from a single node to multiple destinations in the optical domain, is promising for next-generation ultrahigh-speed photonic networks. Based on self-phase modulation in a dispersion-flattened highly nonlinear photonic crystal fiber followed by spectral filtering, simultaneous 1-to-8 all-optical wavelength multicasting of a return-to-zero (RZ) signal at 20 Gbit/s with 100 GHz channel spacing is achieved. The wavelength tunable range and dynamic characteristics of the proposed wavelength multicasting scheme are further investigated. The results show our designed scheme achieves an operating wavelength range of 25 nm, an OSNR of 32.01 dB and a Q factor of 12.8. Moreover, the scheme has a simple structure as well as high tolerance to signal power fluctuation. PMID:24711738

  6. Substrate orientation effects on the nucleation and growth of the M{sub n+1}AX{sub n} phase Ti{sub 2}AlC

    SciTech Connect

    Tucker, Mark D.; Guenette, Mathew C.; Bilek, Marcela M. M.; McKenzie, David R.; Persson, Per O. A.; Rosen, Johanna

    2011-01-01

The M{sub n+1}AX{sub n} (MAX) phases are ternary compounds comprising alternating layers of a transition metal carbide or nitride and a third ''A-group'' element. The effect of substrate orientation on the growth of Ti{sub 2}AlC MAX phase films was investigated by studying pulsed cathodic arc deposited samples grown on sapphire cut along the (0001), (1010), and (1102) crystallographic planes. The samples were characterized by x-ray diffraction, atomic force microscopy, and cross-sectional transmission electron microscopy. On the (1010) substrate, tilted (1018) growth of Ti{sub 2}AlC was found, such that the TiC octahedra of the MAX phase structure have the same orientation as a spontaneously formed epitaxial TiC sublayer, preserving the typical TiC-Ti{sub 2}AlC epitaxial relationship and confirming the importance of this relationship in determining MAX phase film orientation. An additional component of Ti{sub 2}AlC with tilted fiber texture was observed in this sample; tilted fiber texture, or axiotaxy, has not previously been seen in MAX phase films.

  7. Oxidation Resistance of Materials Based on Ti3AlC2 Nanolaminate at 600 °C in Air.

    PubMed

    Ivasyshyn, Andrij; Ostash, Orest; Prikhna, Tatiana; Podhurska, Viktoriya; Basyuk, Tatiana

    2016-12-01

The oxidation behavior of Ti3AlC2-based materials was investigated at 600 °C in static air for 1000 h. The intense increase of weight gain per unit surface area for the sintered material with 22 % porosity is attributed to oxidation of the outer surface of the specimen and of the pore surfaces in the bulk material. The hot-pressed Ti3AlC2-based material with 1 % porosity oxidized rapidly during the first 15 h, after which the oxidation rate slowly decreased. The weight gain per unit surface area for this material was 1.0 mg/cm² after exposure for 1000 h. The intense initial oxidation of Ti3AlC2-based materials can be eliminated by a pre-oxidation treatment at 1200 °C in air for 2 h. As a result, the weight gain per unit surface area for the pre-oxidized material did not exceed 0.11 mg/cm² after 1000 h of exposure at 600 °C in air. It was also demonstrated that the oxidation resistance of Ti3AlC2-based materials can be significantly improved by niobium addition. PMID:27506531

  8. Measuring the spectrum of mutation induced by nitrogen ions and protons in the human-hamster hybrid cell line A(L)C

    NASA Technical Reports Server (NTRS)

    Kraemer, S. M.; Kronenberg, A.; Ueno, A.; Waldren, C. A.; Chatterjee, A. (Principal Investigator)

    2000-01-01

    Astronauts can be exposed to charged particles, including protons, alpha particles and heavier ions, during space flights. Therefore, studying the biological effectiveness of these sparsely and densely ionizing radiations is important to understanding the potential health effects for astronauts. We evaluated the mutagenic effectiveness of sparsely ionizing 55 MeV protons and densely ionizing 32 MeV/nucleon nitrogen ions using cells of two human-hamster cell lines, A(L) and A(L)C. We have previously characterized a spectrum of mutations, including megabase deletions, in human chromosome 11, the sole human chromosome in the human-hamster hybrid cell lines A(L)C and A(L). CD59(-) mutants have lost expression of a human cell surface antigen encoded by the CD59 gene located at 11p13. Deletion of genes located on the tip of the short arm of 11 (11p15.5) is lethal to the A(L) hybrid, so that CD59 mutants that lose the entire chromosome 11 die and escape detection. In contrast, deletion of the 11p15.5 region is not lethal in the hybrid A(L)C, allowing for the detection of chromosome loss or other chromosomal mutations involving 11p15.5. The 55 MeV protons and 32 MeV/nucleon nitrogen ions were each about 10 times more mutagenic per unit dose at the CD59 locus in A(L)C cells than in A(L) cells. In the case of nitrogen ions, the mutations observed in A(L)C cells were predominantly due to chromosome loss events or 11p deletions, often containing a breakpoint in the pericentromeric region. The increase in the CD59(-) mutant fraction for A(L)C cells exposed to protons was associated with either translocation of portions of 11q onto a hamster chromosome, or discontinuous or "skipping" mutations. We demonstrate here that A(L)C cells are a powerful tool that will aid in the understanding of the mutagenic effects of different types of ionizing radiation.

  9. Moose (Alces alces) reacts to high summer temperatures by utilizing thermal shelters in boreal forests - an analysis based on airborne laser scanning of the canopy structure at moose locations.

    PubMed

    Melin, Markus; Matala, Juho; Mehtätalo, Lauri; Tiilikainen, Raisa; Tikkanen, Olli-Pekka; Maltamo, Matti; Pusenius, Jyrki; Packalen, Petteri

    2014-04-01

    The adaptation of different species to warming temperatures has been increasingly studied. Moose (Alces alces) is the largest of the ungulate species occupying the northern latitudes across the globe, and in Finland it is the most important game species. It is very well adapted to severe cold temperatures, but has a relatively low tolerance to warm temperatures. Previous studies have documented changes in habitat use by moose due to high temperatures. In many of these studies, the used areas have been classified according to how much thermal cover they were assumed to offer based on satellite/aerial imagery data. Here, we identified the vegetation structure in the areas used by moose under different thermal conditions. For this purpose, we used airborne laser scanning (ALS) data extracted from the locations of GPS-collared moose. This provided us with detailed information about the relationships between moose and the structure of forests it uses in different thermal conditions and we were therefore able to determine and differentiate between the canopy structures at locations occupied by moose during different thermal conditions. We also discovered a threshold beyond which moose behaviour began to change significantly: as day temperatures began to reach 20 °C and higher, the search for areas with higher and denser canopies during daytime became evident. The difference was clear when compared to habitat use at lower temperatures, and was so strong that it provides supporting evidence to previous studies, suggesting that moose are able to modify their behaviour to cope with high temperatures, but also that the species is likely to be affected by warming climate. PMID:24115403

  10. Reliability physics

    NASA Technical Reports Server (NTRS)

    Cuddihy, E. F.; Ross, R. G., Jr.

    1984-01-01

    Speakers whose topics relate to the reliability physics of solar arrays are listed and their topics briefly reviewed. Nine reports are reviewed ranging in subjects from studies of photothermal degradation in encapsulants and polymerizable ultraviolet stabilizers to interface bonding stability to electrochemical degradation of photovoltaic modules.

  11. Reliable communication in the presence of failures

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Joseph, Thomas A.

    1987-01-01

The design and correctness of a communication facility for a distributed computer system are reported on. The facility provides support for fault-tolerant process groups in the form of a family of reliable multicast protocols that can be used in both local- and wide-area networks. These protocols attain high levels of concurrency, while respecting application-specific delivery ordering constraints, and have varying cost and performance that depend on the degree of ordering desired. In particular, a protocol that enforces causal delivery orderings is introduced and shown to be a valuable alternative to conventional asynchronous communication protocols. The facility also ensures that the processes belonging to a fault-tolerant process group will observe consistent orderings of events affecting the group as a whole, including process failures, recoveries, migration, and dynamic changes to group properties like member rankings. A review of several uses for the protocols in the ISIS system, which supports fault-tolerant resilient objects and bulletin boards, illustrates the significant simplification of higher-level algorithms made possible by this approach.
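Causal delivery ordering of the kind this record describes is commonly implemented with vector clocks: a receiver buffers a multicast message until every message that causally precedes it has been delivered. The sketch below is our own minimal illustration of that buffering rule, not code from the ISIS system; all class and method names are invented.

```python
# Minimal causal-order multicast sketch using vector clocks (illustrative only).
class Member:
    def __init__(self, pid, group_size):
        self.pid = pid
        self.clock = [0] * group_size   # one entry per group member
        self.pending = []               # messages buffered until deliverable
        self.delivered = []

    def send(self, payload):
        # The sender increments its own clock entry before multicasting.
        self.clock[self.pid] += 1
        return (self.pid, list(self.clock), payload)

    def receive(self, msg):
        self.pending.append(msg)
        self._deliver_ready()

    def _deliverable(self, msg):
        sender, ts, _ = msg
        # Next-in-sequence from the sender, and no missing causal predecessors.
        if ts[sender] != self.clock[sender] + 1:
            return False
        return all(ts[k] <= self.clock[k] for k in range(len(ts)) if k != sender)

    def _deliver_ready(self):
        progress = True
        while progress:
            progress = False
            for msg in list(self.pending):
                if self._deliverable(msg):
                    sender, ts, payload = msg
                    self.clock[sender] = ts[sender]
                    self.delivered.append(payload)
                    self.pending.remove(msg)
                    progress = True
```

If a receiver sees a sender's second message before its first, the second is buffered and both are delivered in causal order once the first arrives.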

  12. Network connectivity enhancement by exploiting all optical multicast in semiconductor ring laser

    NASA Astrophysics Data System (ADS)

    Siraj, M.; Memon, M. I.; Shoaib, M.; Alshebeili, S.

    2015-03-01

Smart phone and tablet applications will provide troops with the means to execute, control, and analyze sophisticated operations, with commanders delivering crucial documents directly to troops wherever and whenever needed. Wireless mesh networks (WMNs) are a cutting-edge networking technology capable of supporting the Joint Tactical Radio System (JTRS). WMNs can provide the bandwidth needed for applications such as hand-held radios and communication for airborne and ground vehicles, and routing management tasks can be handled efficiently through WMNs via a central command and control center. As the spectrum space is congested, cognitive radios (CRs) are a welcome technology that provides much-needed bandwidth: they can self-configure, adapt to user requirements, provide dynamic spectrum access to minimize interference, and deliver optimal power output. Indoor environments, however, sometimes suffer from poor signal quality and reduced coverage. In this paper, a solution utilizing CR WMNs over an optical network is presented by creating picocells (PCs) inside the indoor environment. The phenomenon of four-wave mixing (FWM) is exploited to generate all-optical multicast using a semiconductor ring laser (SRL), so that the same signal is transmitted at different wavelengths and every PC is assigned a unique wavelength. Using CR technology in conjunction with PCs will not only solve the network coverage issue but also provide good bandwidth to secondary users.

  13. Anisotropic swelling and microcracking of neutron irradiated Ti3AlC2-Ti5Al2C3 materials

    DOE PAGESBeta

    Ang, Caen K.; Silva, Chinthaka M.; Shih, Chunghao Phillip; Koyanagi, Takaaki; Katoh, Yutai; Zinkle, Steven J.

    2015-12-17

Mn + 1AXn (MAX) phase materials based on Ti–Al–C have been irradiated at 400 °C (673 K) with fission neutrons to a fluence of 2 × 1025 n/m2 (E > 0.1 MeV), corresponding to ~ 2 displacements per atom (dpa). We report preliminary results of microcracking in the Al-containing MAX phase, which contained the phases Ti3AlC2 and Ti5Al2C3. Equibiaxial ring-on-ring tests of irradiated coupons showed that samples retained 10% of pre-irradiated strength. Volumetric swelling of up to 4% was observed. Phase analysis and microscopy suggest that anisotropic lattice parameter swelling caused microcracking. Lastly, variants of titanium aluminum carbide may be unsuitable materials for irradiation at light water reactor-relevant temperatures.

  14. The Cretaceous (Cenomanian) continental record of the Laje do Coringa flagstone (Alcântara Formation), northeastern South America

    NASA Astrophysics Data System (ADS)

    Medeiros, Manuel Alfredo; Lindoso, Rafael Matos; Mendes, Ighor Dienes; Carvalho, Ismar de Souza

    2014-08-01

    The fossil taxa of the Cenomanian continental flora and fauna of São Luís Basin are observed primarily in the bone bed of the Laje do Coringa, Alcântara Formation. Many of the disarticulated fish and tetrapod skeletal and dental elements are remarkably similar to the chronocorrelate fauna of Northern Africa. In this study, we present a summary of the continental flora and fauna of the Laje do Coringa bone-bed. The record emphasizes the existence of a trans-oceanic typical fauna, at least until the early Cenomanian, which may be interpreted as minor evolutionary changes after a major vicariant event or as a result of a land bridge across the equatorial Atlantic Ocean, thereby allowing interchanges between South America and Africa. The paleoenvironmental conditions in the northern Maranhão State coast during that time were inferred as forested humid areas surrounded by an arid to semi-arid landscape.

  15. Network reliability

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory J.

    1985-01-01

Network control (or network management) functions are essential for efficient and reliable operation of a network. Some control functions are currently included as part of the Open System Interconnection model. For local area networks, it is widely recognized that there is a need for additional control functions, including fault isolation functions, monitoring functions, and configuration functions. These functions can be implemented in either a central or distributed manner. The Fiber Distributed Data Interface Medium Access Control and Station Management protocols provide an example of distributed implementation. Relevant information is presented here in outline form.

  16. Chymotrypsins from the deer (Cervidae) family. Isolation, partial characterization and primary-structure studies of chymotrypsins A and B from both moose (Alces alces) and elk (Cervus elaphus) pancreas.

    PubMed Central

    Lindsay, R M; Stevenson, K J

    1976-01-01

1. An anionic and a cationic chymotrypsin (EC 3.4.21.1) were isolated from the pancreas glands of the moose (Alces alces) and elk (Cervus elaphus). The A and B chymotrypsins from each species were purified to homogeneity by (NH4)2SO4 fractionation, affinity chromatography on 4-phenylbutylamine-Sepharose and ion-exchange chromatography on DEAE- and CM-cellulose. 2. The molecular weight and pH optimum of each chymotrypsin were similar to those of the corresponding ox A and B chymotrypsins. 3. The substrate specificities of the chymotrypsins were investigated by digestion of glucagon and the oxidized B chain of insulin. The primary specificity of each chymotrypsin for aromatic amino acid residues was further established by determining the Km and kcat for the hydrolysis of a number of synthetic amino acid ester substrates. 4. The amino acid composition and total number of residues of moose and elk chymotrypsin A were similar to those of ox chymotrypsin A. An even greater similarity was observed among the B chymotrypsins of the three species. 5. The A chymotrypsins of moose and elk were fragmented to their constituent 'A', 'B' and 'C' polypeptide chains by succinylation (3-carboxypropionylation), reduction and alkylation of the native enzymes. In each case, the two major chains ('B' and 'C') were separated and isolated. By comparison of the amino acid compositions of moose, elk and ox 'B' and 'C' chains, a greater difference was observed among the three A chymotrypsins than was suggested by the amino acid compositions of the native enzymes alone. 6. Peptides were isolated from the disulphide bridge and active-site regions of the A and B chymotrypsins of moose and elk by diagonal peptide-'mapping' techniques. From the amino acid compositions of the isolated peptides (assuming maximum homology) and from a comparison of diagonal peptide 'maps', there was established a high degree of primary-structure identity among the moose, elk and ox chymotrypsins. Tentative sequences…

  17. Thermopower of the 312 MAX phases Ti3SiC2 , Ti3GeC2 , and Ti3AlC2

    NASA Astrophysics Data System (ADS)

    Chaput, L.; Hug, G.; Pécheur, P.; Scherrer, H.

    2007-01-01

    The electronic structure and the thermoelectric tensor are calculated for the 312 MAX phases Ti3SiC2 , Ti3GeC2 , and Ti3AlC2 . The thermoelectric tensor is shown to be anisotropic in all cases. However, for Ti3SiC2 and Ti3GeC2 we find the components of the thermoelectric tensor to be negative along the z direction, Sz<0 , and positive in the basal plane, Sx>0 , whereas Sz>0 and Sx>0 over a large temperature range for Ti3AlC2 . This accounts for the different behavior experimentally observed. Moreover, the calculated thermopower as a function of temperature is in good agreement with experiments on polycrystals.

  18. New insight into the helium-induced damage in MAX phase Ti3AlC2 by first-principles studies.

    PubMed

    Xu, Yiguo; Bai, Xiaojing; Zha, Xianhu; Huang, Qing; He, Jian; Luo, Kan; Zhou, Yuhong; Germann, Timothy C; Francisco, Joseph S; Du, Shiyu

    2015-09-21

In the present work, the behavior of He in the MAX phase Ti3AlC2 material is investigated using first-principles methods. It is found that, according to the predicted formation energies, a single He atom favors residing near the Al plane in Ti3AlC2. The results also show that Al vacancies are better able to trap He atoms than either Ti or C vacancies. The formation energies for the secondary vacancy defects near an Al vacancy or a C vacancy are strongly influenced by He impurity content. According to the present results, the existence of trapped He atoms in a primary Al vacancy can promote secondary vacancy formation, and a He bubble trapped by Al vacancies has a higher tendency to grow in the Al plane of Ti3AlC2. The diffusion of He in Ti3AlC2 is also investigated. The energy barriers are approximately 2.980 eV and 0.294 eV along the c-axis and in the ab plane, respectively, which means that He atoms migrate much faster parallel to the Al plane. Hence, the formation of platelet-like bubbles nucleated from the Al vacancies is favored both energetically and kinetically. Our calculations also show that the conventional spherical bubbles may originate from He atoms trapped by C vacancies. Taken together, these results are able to explain the observed formation of bubbles in various shapes in recent experiments. This study is expected to provide new insight into the behavior of MAX phases under irradiation at the electronic-structure level, in order to improve the design of MAX-phase-based materials. PMID:26395728
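The strength of the migration anisotropy implied by the two barriers quoted above can be estimated with a simple Arrhenius factor. The sketch below is a back-of-envelope illustration of ours, not a calculation from the paper; in particular, the assumption of equal attempt frequencies for the two paths is an assumption we introduce.

```python
import math

# Boltzmann constant in eV/K.
K_B = 8.617333e-5

def arrhenius_ratio(e_high, e_low, temperature_k):
    """Jump-rate ratio (low barrier / high barrier), assuming equal
    attempt frequencies for both paths."""
    return math.exp((e_high - e_low) / (K_B * temperature_k))

# Barriers from the abstract: 2.980 eV along c, 0.294 eV in the ab plane.
# 673 K is an illustrative temperature choice (used in another record here).
ratio = arrhenius_ratio(2.980, 0.294, 673.0)
```

At 673 K the in-plane jump rate exceeds the c-axis rate by a factor on the order of 10^20, which is why migration is effectively confined to the Al plane and platelet-like bubbles are kinetically favored.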

  19. New insight into the helium-induced damage in MAX phase Ti3AlC2 by first-principles studies

    NASA Astrophysics Data System (ADS)

    Xu, Yiguo; Bai, Xiaojing; Zha, Xianhu; Huang, Qing; He, Jian; Luo, Kan; Zhou, Yuhong; Germann, Timothy C.; Francisco, Joseph S.; Du, Shiyu

    2015-09-01

In the present work, the behavior of He in the MAX phase Ti3AlC2 material is investigated using first-principles methods. It is found that, according to the predicted formation energies, a single He atom favors residing near the Al plane in Ti3AlC2. The results also show that Al vacancies are better able to trap He atoms than either Ti or C vacancies. The formation energies for the secondary vacancy defects near an Al vacancy or a C vacancy are strongly influenced by He impurity content. According to the present results, the existence of trapped He atoms in a primary Al vacancy can promote secondary vacancy formation, and a He bubble trapped by Al vacancies has a higher tendency to grow in the Al plane of Ti3AlC2. The diffusion of He in Ti3AlC2 is also investigated. The energy barriers are approximately 2.980 eV and 0.294 eV along the c-axis and in the ab plane, respectively, which means that He atoms migrate much faster parallel to the Al plane. Hence, the formation of platelet-like bubbles nucleated from the Al vacancies is favored both energetically and kinetically. Our calculations also show that the conventional spherical bubbles may originate from He atoms trapped by C vacancies. Taken together, these results are able to explain the observed formation of bubbles in various shapes in recent experiments. This study is expected to provide new insight into the behavior of MAX phases under irradiation at the electronic-structure level, in order to improve the design of MAX-phase-based materials.

  20. Deformation modes and ideal strengths of ternary layered Ti{sub 2}AlC and Ti{sub 2}AlN from first-principles calculations

    SciTech Connect

    Liao Ting; Wang Jingyang; Zhou Yanchun

    2006-06-01

    Deformation and failure modes were studied for Ti{sub 2}AlC and Ti{sub 2}AlN by deforming the materials from elasticity to structural instability using the first-principles density functional calculations. We found that the TiC{sub 0.5}/TiN{sub 0.5} slabs remain structurally stable under deformations, whereas the weak Ti-Al bonds accommodate deformation by softening and breaking at large strains. The structural stability of the ternary compound is determined by the strength of Ti-Al bond, which is demonstrated to be less resistive to shear deformation than to tension. The ideal stress-strain relationships of ternary compounds are presented and compared with those of the binary materials, TiC and TiN, respectively. For Ti{sub 2}AlC and Ti{sub 2}AlN, their ideal tensile strengths are comparable to those of the binary counterparts, while the ideal shear strengths yield much smaller values. Based on electronic structure analyses, the low shear deformation resistance is well interpreted by the response of weak Ti-Al bonds to shear deformations. We propose that the low shear strengths of Ti{sub 2}AlC and Ti{sub 2}AlN originate from low slip resistance of Al atomic planes along the basal plane, and furthermore suggest that this is the mechanism for low hardness, damage tolerance, and intrinsic toughness of ternary layered carbides and nitrides.

  1. A novel optical path routing network that combines coarse granularity optical multicast with fine granularity add/drop and block

    NASA Astrophysics Data System (ADS)

    Soares, Mauro M.; Mori, Yojiro; Hasegawa, Hiroshi; Sato, Ken-ichi

    2015-01-01

We propose a novel optical path routing mechanism that combines coarse-granularity optical multicast with fine-granularity add/drop and block. We implement the proposal in an optical cross-connect node with broadcast-and-select functionality that offers high cost-effectiveness, since no equipment beyond that of conventional ROADMs is needed. The proposed method, called branching, enhances the routing capabilities over the original grouped routing networks by enabling wavelength paths to be established through different GRE pipes. We also present a novel path/GRE routing and wavelength/GRE index assignment algorithm that supports the new routing function. Numerical experiments using real network topologies verify the improved routing performance and the superior efficiency of the proposed control algorithm over original GRE-based networks.

  2. Reliability and Confidence.

    ERIC Educational Resources Information Center

    Test Service Bulletin, 1952

    1952-01-01

    Some aspects of test reliability are discussed. Topics covered are: (1) how high should a reliability coefficient be?; (2) two factors affecting the interpretation of reliability coefficients--range of talent and interval between testings; (3) some common misconceptions--reliability of speed tests, part vs. total reliability, reliability for what…

  3. Chromosomal mutations and chromosome loss measured in a new human-hamster hybrid cell line, ALC: studies with colcemid, ultraviolet irradiation, and 137Cs gamma-rays

    NASA Technical Reports Server (NTRS)

    Kraemer, S. M.; Waldren, C. A.; Chatterjee, A. (Principal Investigator)

    1997-01-01

Small mutations, megabase deletions, and aneuploidy are involved in carcinogenesis and genetic defects, so it is important to be able to quantify these mutations and understand the mechanisms of their creation. We have previously quantified a spectrum of mutations, including megabase deletions, in human chromosome 11, the sole human chromosome in the hamster-human hybrid cell line AL. S1- mutants have lost expression of a human cell surface antigen, S1, which is encoded by the M1C1 gene at 11p13, so that mutants can be detected via a complement-mediated cytotoxicity assay in which S1+ cells are killed and S1- cells survive. But loss of genes located on the tip of the short arm of chromosome 11 (11p15.5) is lethal to the AL hybrid, so that mutants that have lost the entire chromosome 11 die and escape detection. To circumvent this, we fused AL with Chinese hamster ovary (CHO) cells to produce a new hybrid, ALC, in which the requirement for maintaining 11p15.5 is relieved, allowing us to detect mutation events involving loss of 11p15.5. We evaluated the usefulness of this hybrid by conducting mutagenesis studies with colcemid, 137Cs gamma-radiation and UV 254 nm light. Colcemid induced 1000-fold more S1- mutants per unit dose in ALC than in AL; the increase for UV 254 nm light was only two-fold; and the increase for 137Cs gamma-rays was 12-fold. The increase in the S1- mutant fraction in ALC cells treated with colcemid and 137Cs gamma-rays was largely due to chromosome loss and 11p deletions, often containing a breakpoint within the centromeric region.

  4. Positive Family History, Infection, Low Absolute Lymphocyte Count (ALC) and Absent Thymic Shadow: Diagnostic Clues for all Molecular Forms of Severe Combined Immunodeficiency (SCID)

    PubMed Central

    McWilliams, Laurie M; Railey, Mary Dell; Buckley, Rebecca H

    2015-01-01

Background: Severe combined immunodeficiency (SCID) is a syndrome uniformly fatal during infancy unless recognized and treated successfully by bone marrow transplantation or gene therapy. Because SCID infants have no abnormal physical appearance, diagnosis is usually delayed unless newborn screening is performed. Objective: In this study, we sought to evaluate the presenting features of all 172 SCID patients transplanted at this institution over the past 31 years. Methods: We reviewed original charts from 172 consecutive classic SCID patients who received either T cell-depleted HLA-haploidentical (N=154) or HLA-identical (N=18) non-ablative related marrow transplants at Duke University Medical Center from 1982–2013. Results: The mean age at presentation was 4.87 months. When there was a family history of early infant death or known SCID (63/172, 37%), the mean presentation age was much earlier: 2.0 months compared to 6.6 months. Failure to thrive was common, with 84 patients (50%) having a weight less than the 5th percentile. The leading infections included oral moniliasis (43%), viral infections (61/172, 35.5%) and Pneumocystis jiroveci pneumonia (26%). The group mean ALC was 1454/cmm; 88% of the infants had an ALC less than 3000/cmm. An absent thymic shadow was seen in 92% of infants with electronic radiographic data available. An absence of T cell function was found in all patients. Conclusions: SCID infants appear normal at birth but later present with failure to thrive and/or recurrent fungal, viral and bacterial infections. Low ALCs and an absent thymic shadow on chest x-ray are key diagnostic clues. The absence of T cell function confirms the diagnosis. PMID:25824440

  5. First-principles phonon calculations of thermal expansion in Ti3SiC2 , Ti3AlC2 , and Ti3GeC2

    NASA Astrophysics Data System (ADS)

    Togo, Atsushi; Chaput, Laurent; Tanaka, Isao; Hug, Gilles

    2010-05-01

Thermal properties of ternary carbides with composition Ti3SiC2 , Ti3AlC2 , and Ti3GeC2 were studied using first-principles phonon calculations. The thermal expansions, the heat capacities at constant pressure, and the isothermal bulk moduli at finite temperatures were obtained under the quasiharmonic approximation. Comparisons were made with the available experimental data and excellent agreement was obtained. Phonon band structures and partial densities of states were investigated. These compounds present unusual localized phonon states at low frequencies, which are due to atomic-like vibrations of the Si, Al, or Ge elements parallel to the basal plane.
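The quasiharmonic procedure this record refers to amounts to minimizing, at each temperature, a free energy built from the static lattice energy and the volume-dependent phonon spectrum; in standard notation (our summary, not formulas taken verbatim from the paper):

```latex
F(V,T) = E_{\mathrm{el}}(V)
       + \frac{1}{2}\sum_{\mathbf{q}j}\hbar\omega_{\mathbf{q}j}(V)
       + k_{B}T\sum_{\mathbf{q}j}\ln\!\left[1 - e^{-\hbar\omega_{\mathbf{q}j}(V)/k_{B}T}\right],
\qquad
V(T) = \operatorname*{arg\,min}_{V} F(V,T)
```

The equilibrium volume $V(T)$ then yields the volumetric thermal expansion $\alpha_V(T) = \frac{1}{V}\frac{dV}{dT}$ and the isothermal bulk modulus $B_T = V\,\partial^2 F/\partial V^2$, the quantities reported in the abstract.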

  6. Flexible and re-configurable optical three-input XOR logic gate of phase-modulated signals with multicast functionality for potential application in optical physical-layer network coding.

    PubMed

    Lu, Guo-Wei; Qin, Jun; Wang, Hongxiang; Ji, XuYuefeng; Sharif, Gazi Mohammad; Yamaguchi, Shigeru

    2016-02-01

Optical logic gates, especially the exclusive-or (XOR) gate, play an important role in accomplishing photonic computing and various network functionalities in future optical networks. Optical multicast, on the other hand, is another indispensable functionality for delivering information efficiently in optical networks. In this paper, for the first time, we propose and experimentally demonstrate a flexible optical three-input XOR gate scheme for multiple phase-modulated input signals with a 1-to-2 multicast functionality for each XOR operation, using the four-wave mixing (FWM) effect in a single piece of highly nonlinear fiber (HNLF). Through FWM in the HNLF, all of the possible XOR operations among the input signals can be realized simultaneously by sharing a single piece of HNLF. By selecting the obtained XOR components with a subsequent wavelength-selective component, the number of XOR gates and the participating light in the XOR operations can be flexibly configured. The re-configurability of the proposed XOR gate and the integration of the optical logic gate and multicast functions in a single device offer flexibility in network design and improve network efficiency. We experimentally demonstrate a flexible 3-input XOR gate for four 10-Gbaud binary phase-shift keying signals with a multicast scale of 2. Error-free operation for the obtained XOR results is achieved. A potential application of the integrated XOR and multicast function in network coding is also discussed. PMID:26906806
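The reason FWM can compute XOR on phase-modulated signals is that an idler generated at ω1 + ω2 − ω3 carries the phase φ1 + φ2 − φ3; when each phase is restricted to {0, π} (BPSK), this phase arithmetic is modulo-2 addition of the bits. The following is a numerical sketch of ours illustrating that phase relation, not the paper's experimental setup.

```python
import math

# BPSK maps bit 0 -> phase 0 and bit 1 -> phase pi. An FWM idler at
# w1 + w2 - w3 carries phi1 + phi2 - phi3, so the idler phase modulo
# 2*pi encodes the three-input XOR of the bits.
def xor_via_fwm_phase(b1, b2, b3):
    idler_phase = (b1 + b2 - b3) * math.pi   # phase algebra of the idler
    return round(idler_phase / math.pi) % 2  # map phase back to a bit

# Full truth table of the simulated gate.
truth_table = {(a, b, c): xor_via_fwm_phase(a, b, c)
               for a in (0, 1) for b in (0, 1) for c in (0, 1)}
```

Every entry of the table matches the logical XOR, which is the property that lets a single nonlinear mixing process implement the gate.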

  7. Reliability model generator

    NASA Technical Reports Server (NTRS)

    McMann, Catherine M. (Inventor); Cohen, Gerald C. (Inventor)

    1991-01-01

    An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
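The aggregation step this record describes, combining low-level component reliability models according to a system architecture description, can be illustrated with textbook series/parallel composition. The sketch below is our own construction, not the patented generator; the architecture encoding and names are invented for illustration.

```python
# Aggregate low-level component reliabilities into a system-level value
# from a nested series/parallel architecture description (illustrative).
def reliability(node, components):
    """node is either a component name (str) or a tuple
    ('series' | 'parallel', [child nodes])."""
    if isinstance(node, str):
        return components[node]          # low-level model: a probability
    kind, children = node
    child_r = [reliability(c, components) for c in children]
    if kind == "series":                 # all children must work
        r = 1.0
        for x in child_r:
            r *= x
        return r
    if kind == "parallel":               # at least one child must work
        q = 1.0
        for x in child_r:
            q *= (1.0 - x)
        return 1.0 - q
    raise ValueError(f"unknown block kind: {kind}")

# Hypothetical architecture: a sensor feeding two redundant processors.
components = {"sensor": 0.99, "cpuA": 0.95, "cpuB": 0.95}
system = ("series", ["sensor", ("parallel", ["cpuA", "cpuB"])])
r = reliability(system, components)
```

Here the redundant pair raises the processing stage to 1 − 0.05², and the series composition with the sensor gives 0.99 × 0.9975 ≈ 0.9875.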

  8. Reliability Generalization: "Lapsus Linguae"

    ERIC Educational Resources Information Center

    Smith, Julie M.

    2011-01-01

    This study examines the proposed Reliability Generalization (RG) method for studying reliability. RG employs the application of meta-analytic techniques similar to those used in validity generalization studies to examine reliability coefficients. This study explains why RG does not provide a proper research method for the study of reliability,…

  9. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    PubMed

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862
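The assignment problem described above, choosing one MCS per SVC layer so that total delivered rate is maximized over users with heterogeneous channels, can be illustrated on a toy instance. The paper formulates this as an ILP; the exhaustive search below is our own sketch (feasible only because the instance is tiny), and all rates, SNR thresholds, and user channel values are hypothetical.

```python
from itertools import product

# MCS id -> (per-layer rate, minimum SNR needed to decode). Hypothetical.
MCS = {0: (1.0, 5.0), 1: (2.0, 10.0), 2: (4.0, 20.0)}
USERS = [4.0, 8.0, 12.0, 25.0]   # hypothetical per-user SNRs
LAYERS = 2                        # base layer + one enhancement layer

def throughput(assignment):
    """Total rate delivered: each user decodes layers in order and stops
    at the first layer whose MCS its channel cannot support (SVC property)."""
    total = 0.0
    for snr in USERS:
        for layer in range(LAYERS):
            rate, need = MCS[assignment[layer]]
            if snr < need:
                break
            total += rate
    return total

# Exhaustive search over all MCS assignments to the layers.
best = max(product(MCS, repeat=LAYERS), key=throughput)
```

On this instance the optimum trades off robustness against rate: the most robust MCS reaches every user but wastes capacity on strong channels, which is exactly the tension the ILP in the paper resolves at scale.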

  10. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    PubMed Central

    Lee, Chaewoo

    2014-01-01

    Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances overall system throughput compared to an existing algorithm. PMID:25276862

  11. Can There Be Reliability without "Reliability?"

    ERIC Educational Resources Information Center

    Mislevy, Robert J.

    2004-01-01

    An "Educational Researcher" article by Pamela Moss (1994) asks the title question, "Can there be validity without reliability?" Yes, she answers, if by reliability one means "consistency among independent observations intended as interchangeable" (Moss, 1994, p. 7), quantified by internal consistency indices such as KR-20 coefficients and…

  12. HELIOS Critical Design Review: Reliability

    NASA Technical Reports Server (NTRS)

    Benoehr, H. C.; Herholz, J.; Prem, H.; Mann, D.; Reichert, L.; Rupp, W.; Campbell, D.; Boettger, H.; Zerwes, G.; Kurvin, C.

    1972-01-01

    This paper presents the Helios Critical Design Review on reliability from October 16-20, 1972. The topics include: 1) Reliability Requirement; 2) Reliability Apportionment; 3) Failure Rates; 4) Reliability Assessment; 5) Reliability Block Diagram; and 6) Reliability Information Sheet.

  13. Reliability computation from reliability block diagrams

    NASA Technical Reports Server (NTRS)

    Chelson, P. O.; Eckstein, R. E.

    1971-01-01

    A method and a computer program are presented to calculate probability of system success from an arbitrary reliability block diagram. The class of reliability block diagrams that can be handled include any active/standby combination of redundancy, and the computations include the effects of dormancy and switching in any standby redundancy. The mechanics of the program are based on an extension of the probability tree method of computing system probabilities.
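
    As a minimal illustration of the block-diagram computation the record describes, the sketch below composes independent component reliabilities through series and parallel (active-redundancy) blocks. The numbers are illustrative, and the dormancy and standby-switching effects handled by the actual program are not modeled here.

```python
# Success probability of a reliability block diagram built from series and
# parallel compositions; component reliabilities are assumed independent.

def series(*blocks):
    """All blocks must succeed: probabilities multiply."""
    p = 1.0
    for b in blocks:
        p *= b
    return p

def parallel(*blocks):
    """At least one block must succeed: complement of all failing."""
    q = 1.0
    for b in blocks:
        q *= (1.0 - b)
    return 1.0 - q

# Example diagram: A in series with a 2-way active-redundant pair (B1 || B2),
# followed by C.
system = series(0.99, parallel(0.9, 0.9), 0.95)
```

For arbitrary (non-series-parallel) diagrams, the probability-tree method mentioned in the record conditions on each component's state in turn rather than composing closed forms like these.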

  14. Power electronics reliability analysis.

    SciTech Connect

    Smith, Mark A.; Atcitty, Stanley

    2009-12-01

    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.

  15. Human Reliability Program Overview

    SciTech Connect

    Bodin, Michael

    2012-09-25

    This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

  16. Structure par RMN d'un complexe AlcR(1-60)-ADN: Reconnaissance du petit sillon par la partie N-terminale

    NASA Astrophysics Data System (ADS)

    Cahuzac, B.; Félenbok, B.; Guittet, E.

    1999-10-01

    Aspergillus nidulans is a filamentous fungus able to use ethanol as its sole energy source. Activation of the expression of the ethanol regulon genes is mediated by the AlcR protein. Its DNA-binding domain is located in the N-terminus (residues 1 to 60), and its NMR solution structure shows a global zinc binuclear cluster fold, with two helices in addition to the basic binuclear motif. Only a small number of crystallographic structures of DNA complexes of binuclear cluster proteins are known; they point to the major groove and the first helix as the principal sites of interaction on the DNA and the protein, respectively. In this article we show evidence that the N-terminus of the protein (residues 5 and 6) is involved in binding to the minor groove of the DNA.

  17. Reliable Design Versus Trust

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Kenneth A.

    2016-01-01

    This presentation focuses on reliability and trust for the user's portion of the FPGA design flow. It is assumed that the manufacturer tests the FPGA's internal components prior to hand-off to the user. The objective is to present the challenges of creating reliable and trusted designs. The following questions will be addressed: What makes a design vulnerable to functional flaws (reliability) or attackers (trust)? What are the challenges of verifying a reliable design versus a trusted design?

  18. Predicting software reliability

    NASA Technical Reports Server (NTRS)

    Littlewood, B.

    1989-01-01

    A detailed look is given to software reliability techniques. A conceptual model of the failure process is examined, and some software reliability growth models are discussed. Problems for which no current solutions exist are addressed, emphasizing the very difficult problem of safety-critical systems for which the reliability requirements can be enormously demanding.

  19. Reliability model generator specification

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C.; Mccann, Catherine

    1990-01-01

    The Reliability Model Generator (RMG) is described: a program that produces reliability models from block diagrams for ASSIST, the interface to the reliability evaluation tool SURE. An account is given of the motivation for RMG, and the implemented algorithms are discussed. The appendices contain the algorithms and two detailed traces of examples.

  20. Utilizing Joint Routing and Capacity Assignment Algorithms to Achieve Inter- and Intra-Group Delay Fairness in Multi-Rate Multicast Wireless Sensor Networks

    PubMed Central

    Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Lin, Leo Shih-Chang; Wen, Yean-Fu

    2013-01-01

    Recent advances in wireless sensor network (WSN) applications such as the Internet of Things (IoT) have attracted a lot of attention. Sensor nodes have to monitor and cooperatively pass their data, such as temperature, sound, and pressure, through the network under constrained physical or environmental conditions. Quality of Service (QoS) is very sensitive to network delays. When resources are constrained and the number of receivers increases rapidly, how the sensor network can provide good QoS (measured as end-to-end delay) becomes a critical problem. In this paper, a solution to the wireless sensor network multicasting problem is proposed in which a mathematical model is constructed that provides services to accommodate delay fairness for each subscriber. Granting equal consideration to both network link capacity assignment and routing strategies for each multicast group guarantees intra-group and inter-group fairness of end-to-end delay. Minimizing delay and achieving fairness is ultimately accomplished through the Lagrangean relaxation method and the subgradient optimization technique. Test results indicate that the new system runs with greater effectiveness and efficiency. PMID:23493123
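
    The Lagrangean relaxation and subgradient optimization mentioned above can be sketched on a toy problem (not the paper's actual routing/capacity model): dualize a single coverage constraint of a 0-1 selection problem and update the multiplier with a diminishing step size. All numbers are illustrative.

```python
# Toy Lagrangean relaxation: choose x_i in {0,1} to minimize cost c.x subject
# to a coverage constraint a.x >= b. The constraint is dualized with a
# multiplier lam >= 0, making the relaxed problem separable per item.
c = [4.0, 3.0, 2.0]   # item costs (illustrative)
a = [2.0, 1.0, 1.0]   # item contributions toward the requirement
b = 3.0               # coverage requirement

def solve_relaxed(lam):
    # minimize sum_i (c_i - lam*a_i) * x_i + lam*b over x_i in {0,1}:
    # pick x_i = 1 exactly when its reduced cost is negative.
    x = [1 if c[i] - lam * a[i] < 0 else 0 for i in range(len(c))]
    value = sum((c[i] - lam * a[i]) * x[i] for i in range(len(c))) + lam * b
    return x, value

lam, best_bound = 0.0, float("-inf")
for k in range(1, 200):
    x, value = solve_relaxed(lam)
    best_bound = max(best_bound, value)               # dual value: lower bound
    g = b - sum(a[i] * x[i] for i in range(len(c)))   # subgradient of the dual
    lam = max(0.0, lam + (1.0 / k) * g)               # diminishing step size
```

Here the dual bound converges to 6.0, which equals the true optimum (select items 1 and 3); in general the method yields a lower bound plus a heuristic primal solution, which is how the paper uses it.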

  1. Utilizing joint routing and capacity assignment algorithms to achieve inter- and intra-group delay fairness in multi-rate multicast wireless sensor networks.

    PubMed

    Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Lin, Leo Shih-Chang; Wen, Yean-Fu

    2013-01-01

    Recent advances in wireless sensor network (WSN) applications such as the Internet of Things (IoT) have attracted a lot of attention. Sensor nodes have to monitor and cooperatively pass their data, such as temperature, sound, and pressure, through the network under constrained physical or environmental conditions. Quality of Service (QoS) is very sensitive to network delays. When resources are constrained and the number of receivers increases rapidly, how the sensor network can provide good QoS (measured as end-to-end delay) becomes a critical problem. In this paper, a solution to the wireless sensor network multicasting problem is proposed in which a mathematical model is constructed that provides services to accommodate delay fairness for each subscriber. Granting equal consideration to both network link capacity assignment and routing strategies for each multicast group guarantees intra-group and inter-group fairness of end-to-end delay. Minimizing delay and achieving fairness is ultimately accomplished through the Lagrangean relaxation method and the subgradient optimization technique. Test results indicate that the new system runs with greater effectiveness and efficiency. PMID:23493123

  2. Human reliability analysis

    SciTech Connect

    Dougherty, E.M.; Fragola, J.R.

    1988-01-01

    The authors present a treatment of human reliability analysis, incorporating an introduction to probabilistic risk assessment for nuclear power generating stations, and organize the subject according to the framework established for general systems theory. The treatment draws upon reliability analysis, psychology, human factors engineering, and statistics, integrating elements of these fields within a systems framework. It also provides a history of human reliability analysis and includes examples of the application of the systems approach.

  3. Recalibrating software reliability models

    NASA Technical Reports Server (NTRS)

    Brocklehurst, Sarah; Chan, P. Y.; Littlewood, Bev; Snell, John

    1989-01-01

    In spite of much research effort, there is no universally applicable software reliability growth model which can be trusted to give accurate predictions of reliability in all circumstances. Further, it is not even possible to decide a priori which of the many models is most suitable in a particular context. In an attempt to resolve this problem, techniques were developed whereby, for each program, the accuracy of various models can be analyzed. A user is thus enabled to select that model which is giving the most accurate reliability predictions for the particular program under examination. One of these ways of analyzing predictive accuracy, called the u-plot, in fact allows a user to estimate the relationship between the predicted reliability and the true reliability. It is shown how this can be used to improve reliability predictions in a completely general way by a process of recalibration. Simulation results show that the technique gives improved reliability predictions in a large proportion of cases. However, a user does not need to trust the efficacy of recalibration, since the new reliability estimates produced by the technique are truly predictive and so their accuracy in a particular application can be judged using the earlier methods. The generality of this approach would therefore suggest that it be applied as a matter of course whenever a software reliability model is used.
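
    The u-plot idea described above can be sketched as follows, under assumed distributions: evaluate each one-step-ahead predictive CDF at the observed inter-failure time; if the model is well calibrated, the resulting u-values should be uniform on [0,1], and their Kolmogorov distance from the diagonal measures the miscalibration that recalibration then corrects. Here a deliberately biased exponential model is checked against synthetic data.

```python
import math
import random

random.seed(1)

# Synthetic experiment: true inter-failure times are Exponential(rate=2),
# but the model predicts Exponential(rate=1), so it is systematically biased.
true_rate, model_rate = 2.0, 1.0
times = [random.expovariate(true_rate) for _ in range(200)]

# u_i = F_model(t_i): the model's predictive CDF at each observed time.
u = sorted(1.0 - math.exp(-model_rate * t) for t in times)

# Kolmogorov distance between the u-plot (empirical CDF of the u_i) and the
# unit diagonal; near 0 for a calibrated model, large for a biased one.
n = len(u)
ks = max(max(abs((i + 1) / n - ui), abs(i / n - ui)) for i, ui in enumerate(u))
```

For this bias the u-plot deviation approaches 0.25 as the sample grows; recalibration, as described in the record, warps future predictive CDFs through the empirical distribution of the u-values.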

  4. Software Reliability 2002

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

    In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is 'What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric method?" Two approaches to software reliability engineering appear somewhat promising. The first study, begin in FY01, is based in hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement, These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.

  5. Recalibrating software reliability models

    NASA Technical Reports Server (NTRS)

    Brocklehurst, Sarah; Chan, P. Y.; Littlewood, Bev; Snell, John

    1990-01-01

    In spite of much research effort, there is no universally applicable software reliability growth model which can be trusted to give accurate predictions of reliability in all circumstances. Further, it is not even possible to decide a priori which of the many models is most suitable in a particular context. In an attempt to resolve this problem, techniques were developed whereby, for each program, the accuracy of various models can be analyzed. A user is thus enabled to select that model which is giving the most accurate reliability predictions for the particular program under examination. One of these ways of analyzing predictive accuracy, called the u-plot, in fact allows a user to estimate the relationship between the predicted reliability and the true reliability. It is shown how this can be used to improve reliability predictions in a completely general way by a process of recalibration. Simulation results show that the technique gives improved reliability predictions in a large proportion of cases. However, a user does not need to trust the efficacy of recalibration, since the new reliability estimates produced by the technique are truly predictive and so their accuracy in a particular application can be judged using the earlier methods. The generality of this approach would therefore suggest that it be applied as a matter of course whenever a software reliability model is used.

  6. Reliability of fluid systems

    NASA Astrophysics Data System (ADS)

    Kopáček, Jaroslav; Fojtášek, Kamil; Dvořák, Lukáš

    2016-03-01

    This paper focuses on the importance of reliability assessment, especially in complex fluid systems for demanding production technology. The initial criterion for assessing reliability is the failure of an object (element), which is treated as a random variable whose data (values) can be processed using the mathematical methods of probability theory and statistics. The basic indicators of reliability are defined, along with their application in calculations for serial, parallel, and backed-up systems. For illustration, calculation examples of reliability indicators are given for various elements of the system and for a selected pneumatic circuit.
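
    As a minimal illustration of such indicator calculations, the sketch below evaluates the standard closed forms for exponentially distributed component lifetimes in series, active-parallel, and cold-standby (backed-up) configurations of two identical elements. The failure rate and mission time are illustrative, and perfect standby switching is assumed.

```python
import math

lam = 1e-4    # constant failure rate, failures per hour (illustrative)
t = 1000.0    # mission time, hours (illustrative)

# Survival probabilities at time t
r = math.exp(-lam * t)                           # single component
r_series = r * r                                 # both must survive
r_parallel = 1 - (1 - r) ** 2                    # at least one survives
r_standby = math.exp(-lam * t) * (1 + lam * t)   # Erlang(2): spare starts on failure

# Mean times to failure for the same configurations
mttf = 1 / lam
mttf_series = 1 / (2 * lam)
mttf_parallel = 3 / (2 * lam)
mttf_standby = 2 / lam
```

The ordering r_series < r < r_parallel < r_standby shows why backed-up (standby) redundancy outperforms active parallel redundancy when switching is reliable: the spare accumulates no operating time before it is needed.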

  7. Hawaii electric system reliability.

    SciTech Connect

    Silva Monroy, Cesar Augusto; Loose, Verne William

    2012-09-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability "worth" and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  8. [Assay of three kinds of aluminum fractions (Al(a), Al(b) and Al(c)) in polynuclear aluminum solutions by Al-Ferron timed spectrophotometry and demarcation of their time limits].

    PubMed

    Wang, Chen-yi; Zhang, Cai-hua; Bi, Shu-ping; Zhang, Zhen-chao; Yang, Wei-hua

    2005-02-01

    Al-Ferron timed spectrophotometry is a basic method for studying the formation of polynuclear hydroxyl aluminum species and their transformation laws in aqueous systems. In practice, however, the methodology involves subjectivity and arbitrariness in demarcating the time limits of the three kinds of aluminum fractions (Al(a), Al(b) and Al(c)) in polynuclear aluminum solutions, which makes the classification rough and the experimental results non-reproducible. The reason for this difference is that the specific species within Al(a), Al(b) and Al(c) follow different reaction mechanisms and kinetics, and that Al(b) species with different OH/Al ratios react with ferron at different rates. In this paper, the ExpAssoc distribution was applied to quantitatively fit the Al-Ferron reaction kinetics curve, and extrapolation was used to estimate the 1-min measured value [Al(a)] of monomeric Al, which is hard to obtain by manual manipulation. The time demarcation between Al(b) and Al(c) should be set at the point where the experimental data curve reaches its horizontal plateau. Microwave radiation was used for fast assay of the total aluminum concentration [Al(T)]. With these methods, the contents of monomeric Al(a), polynuclear Al(b) and gel Al(c) can be conveniently and quantitatively measured, offering a novel way to overcome the arbitrariness in measuring the three kinds of aluminum fractions and the repetitive calculation of Al(a) and Al(b). PMID:15852869

  9. Electronic structure investigation of Ti3 AlC2 , Ti3 SiC2 , and Ti3 GeC2 by soft x-ray emission spectroscopy

    NASA Astrophysics Data System (ADS)

    Magnuson, M.; Palmquist, J.-P.; Mattesini, M.; Li, S.; Ahuja, R.; Eriksson, O.; Emmerlich, J.; Wilhelmsson, O.; Eklund, P.; Högberg, H.; Hultman, L.; Jansson, U.

    2005-12-01

    The electronic structures of epitaxially grown films of Ti3AlC2 , Ti3SiC2 , and Ti3GeC2 have been investigated by bulk-sensitive soft x-ray emission spectroscopy. The measured high-resolution Ti L , C K , Al L , Si L , and Ge M emission spectra are compared with ab initio density-functional theory including core-to-valence dipole matrix elements. A qualitative agreement between experiment and theory is obtained. A weak covalent Ti-Al bond is manifested by a pronounced shoulder in the Ti L emission of Ti3AlC2 . As Al is replaced with Si or Ge, the shoulder disappears. For the buried Al and Si layers, strongly hybridized spectral shapes are detected in Ti3AlC2 and Ti3SiC2 , respectively. As a result of relaxation of the crystal structure and the increased charge-transfer from Ti to C, the Ti-C bonding is strengthened. The differences between the electronic structures are discussed in relation to the bonding in the nanolaminates and the corresponding change of materials properties.

  10. Photovoltaic system reliability

    SciTech Connect

    Maish, A.B.; Atcitty, C.; Greenberg, D.

    1997-10-01

    This paper discusses the reliability of several photovoltaic projects including SMUD's PV Pioneer project, various projects monitored by Ascension Technology, and the Colorado Parks project. System times-to-failure range from 1 to 16 years, and maintenance costs range from 1 to 16 cents per kilowatt-hour. Factors contributing to the reliability of these systems are discussed, and practices are recommended that can be applied to future projects. This paper also discusses the methodology used to collect and analyze PV system reliability data.

  11. Hyperfine rather than spin splittings dominate the fine structure of the B 4Σ--X 4Σ- bands of AlC

    NASA Astrophysics Data System (ADS)

    Clouthier, Dennis J.; Kalume, Aimable

    2016-01-01

    Laser-induced fluorescence and wavelength resolved emission spectra of the B 4Σ--X 4Σ- band system of the gas phase cold aluminum carbide free radical have been obtained using the pulsed discharge jet technique. The radical was produced by electron bombardment of a precursor mixture of trimethylaluminum in high pressure argon. High resolution spectra show that each rotational line of the 0-0 and 1-1 bands of AlC is split into at least three components, with very similar splittings and intensities in both the P- and R-branches. The observed structure was reproduced by assuming bβS magnetic hyperfine coupling in the excited state, due to a substantial Fermi contact interaction of the unpaired electron in the aluminum 3s orbital. Rotational analysis has yielded ground and excited state equilibrium bond lengths in good agreement with the literature and our own ab initio values. Small discrepancies in the calculated intensities of the hyperfine lines suggest that the upper state spin-spin constant λ' is of the order of ≈0.025-0.030 cm-1.

  12. H Ly-alpha transmittance of thin foils of C, Si/C, and Al/C for keV particle detectors

    NASA Technical Reports Server (NTRS)

    Drake, V. A.; Sandel, B. R.; Jenkins, D. G.; Hsieh, K. C.

    1992-01-01

    A class of instruments designed for remote sensing of space plasmas by measuring energetic neutral atoms (ENA) uses a thin foil as both a signal generator and a light shield. An ENA imager must look directly at the ENA source region, which is also usually an intense source of H Ly-alpha (1216 A) photons. It is desirable to minimize the energy threshold for ENA detectors, at the same time maximizing the blocking of H Ly-alpha. Optimizing filter design to meet these two contrary requirements has led us to measure the transmittance of thin C, Si/C, and Al/C foils at H Ly-alpha. Our results indicate that (1) transmittance of less than 0.0007 can be achieved with 7 micro-g/sq cm Si on 1.7 micro-g/sq cm C; (2) an Si/C composite foil with a thin carbon layer is more effective in blocking UV radiation while having the lowest energy threshold of all the foils measured; and (3) transmittance of Si/C foils of known Si and C thicknesses cannot be accurately predicted, but must be measured.

  13. The effect of M (M=Ti,Cr,V,Nb) on the transport and elastic properties of nanolayered ternary carbides M2AlC

    NASA Astrophysics Data System (ADS)

    Hettinger, J.; Barsoum, M.

    2005-03-01

    We report a systematic investigation of the electronic, magneto-transport, thermal and elastic properties of the family of materials M2AlC where M is Ti, V, Cr or Nb in the temperature range 4 to 300K. The elastic constants were measured for all compounds ultrasonically. The bulk moduli and anisotropic Young's moduli were found to vary in these compounds depending on the transition metal M. The Debye temperatures were in the 640-710 K range for all materials investigated. The Seebeck coefficients for these four materials were small with differing temperature dependences. All but the Nb containing material have Seebeck coefficients that change sign. The electrical conductivity, Hall coefficient and magnetoresistances are analyzed within a two-band framework assuming a temperature-independent charge carrier concentration. We concluded that there is little correlation between the Seebeck voltage and Hall number. As with other MAX-phase materials, all these materials are nearly compensated. Comparisons between these results will be presented. Results will be discussed in relation to theoretical work and recent measurements on related systems.

  14. The effect of M (M=Ti, Cr, V, Nb) on transport and elastic properties of nanolayered ternary carbides M2AlC

    NASA Astrophysics Data System (ADS)

    Hettinger, Jeff; Finkel, Peter; Lofland, Sam; Barsoum, Michel; Gupta, Adrish

    2006-03-01

    We report on a systematic investigation of the electronic, magneto-transport, thermal, and elastic properties of the family of materials M2AlC, where M is Ti, V, Cr or Nb, in the temperature range 4 to 300 K. The elastic constants were measured for all compounds using an ultrasonic technique. The bulk moduli and anisotropic Young's moduli were found to vary with the transition metal M. The Debye temperatures were high, in the 640-710 K range, and quite insensitive to composition. The Seebeck coefficient was a non-monotonic function of temperature: small at the lowest temperatures, it increases with increasing temperature, saturates at 60-80 K, and goes through zero again, manifesting a change in the dominant charge carrier type. The electrical conductivity, Hall coefficient and magnetoresistances are analyzed within a two-band framework assuming a temperature-independent charge carrier concentration. We conclude that there is little correlation between the Seebeck voltage and Hall number. As with other MAX-phase materials, all these materials are nearly compensated. Comparisons of these results will be presented and discussed in relation to theoretical work and recent measurements on related systems.

  15. Software reliability studies

    NASA Technical Reports Server (NTRS)

    Wilson, Larry W.

    1989-01-01

    The longterm goal of this research is to identify or create a model for use in analyzing the reliability of flight control software. The immediate tasks addressed are the creation of data useful to the study of software reliability and production of results pertinent to software reliability through the analysis of existing reliability models and data. The completed data creation portion of this research consists of a Generic Checkout System (GCS) design document created in cooperation with NASA and Research Triangle Institute (RTI) experimenters. This will lead to design and code reviews with the resulting product being one of the versions used in the Terminal Descent Experiment being conducted by the Systems Validations Methods Branch (SVMB) of NASA/Langley. An appended paper details an investigation of the Jelinski-Moranda and Geometric models for software reliability. The models were given data from a process that they have correctly simulated and asked to make predictions about the reliability of that process. It was found that either model will usually fail to make good predictions. These problems were attributed to randomness in the data and replication of data was recommended.
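
    The Jelinski-Moranda model investigated in the appended paper can be sketched as follows, with simulated data and a simple grid-search maximum-likelihood fit; the grid search and the parameter values are assumptions for illustration, not the paper's own estimation procedure.

```python
import math
import random

random.seed(7)

# Jelinski-Moranda: N initial faults, each contributing hazard phi, so after
# i fixes the failure rate is phi * (N - i). Simulate 20 inter-failure times.
N_true, phi_true = 30, 0.02
data = [random.expovariate(phi_true * (N_true - i)) for i in range(20)]

def log_lik(N, phi):
    """Log-likelihood of the observed inter-failure times under (N, phi)."""
    if N < len(data):
        return float("-inf")   # model requires at least as many faults as failures
    ll = 0.0
    for i, t in enumerate(data):
        rate = phi * (N - i)
        ll += math.log(rate) - rate * t
    return ll

# Grid-search MLE over plausible (N, phi) pairs.
N_hat, phi_hat = max(
    ((N, phi) for N in range(20, 100)
     for phi in (p / 1000 for p in range(1, 100))),
    key=lambda pair: log_lik(*pair))

# Model-based prediction for the rate of the next (21st) failure.
next_rate = phi_hat * (N_hat - len(data))
```

The record's finding, that such models often predict poorly even on data they generated, can be reproduced by comparing next_rate against the true value over many simulated runs: the estimate of N is notoriously noisy.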

  16. Multidisciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits, etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  17. A reliable transmission protocol for ZigBee-based wireless patient monitoring.

    PubMed

    Chen, Shyr-Kuen; Kao, Tsair; Chan, Chia-Tai; Huang, Chih-Ning; Chiang, Chih-Yen; Lai, Chin-Yu; Tung, Tse-Hua; Wang, Pi-Chung

    2012-01-01

    Patient monitoring systems are gaining their importance as the fast-growing global elderly population increases demands for caretaking. These systems use wireless technologies to transmit vital signs for medical evaluation. In a multihop ZigBee network, the existing systems usually use broadcast or multicast schemes to increase the reliability of signals transmission; however, both the schemes lead to significantly higher network traffic and end-to-end transmission delay. In this paper, we present a reliable transmission protocol based on anycast routing for wireless patient monitoring. Our scheme automatically selects the closest data receiver in an anycast group as a destination to reduce the transmission latency as well as the control overhead. The new protocol also shortens the latency of path recovery by initiating route recovery from the intermediate routers of the original path. On the basis of a reliable transmission scheme, we implement a ZigBee device for fall monitoring, which integrates fall detection, indoor positioning, and ECG monitoring. When the triaxial accelerometer of the device detects a fall, the current position of the patient is transmitted to an emergency center through a ZigBee network. In order to clarify the situation of the fallen patient, 4-s ECG signals are also transmitted. Our transmission scheme ensures the successful transmission of these critical messages. The experimental results show that our scheme is fast and reliable. We also demonstrate that our devices can seamlessly integrate with the next generation technology of wireless wide area network, worldwide interoperability for microwave access, to achieve real-time patient monitoring. PMID:21997287
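
    The anycast selection step described above, choosing the closest member of the group as the destination, can be sketched with a breadth-first hop count over a small mesh; the node names and topology are assumptions for illustration, not from the paper.

```python
from collections import deque

# Hypothetical ZigBee mesh, adjacency by node id.
topology = {
    "sensor": ["r1"],
    "r1": ["sensor", "r2", "r3"],
    "r2": ["r1", "sinkA"],
    "r3": ["r1", "r4"],
    "r4": ["r3", "sinkB"],
    "sinkA": ["r2"],
    "sinkB": ["r4"],
}
anycast_group = {"sinkA", "sinkB"}   # interchangeable data receivers

def nearest_member(src):
    """BFS from src; the first anycast member reached has the fewest hops."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node in anycast_group:
            return node, dist
        for nxt in topology[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None, None

dest, hops = nearest_member("sensor")
```

Sending to the nearest group member rather than broadcasting to all of them is what cuts both latency and control overhead in the record's scheme; path recovery then only needs to re-run this selection from the router where the original path broke.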

  18. Statistical modelling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1991-01-01

    During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.

  19. Photovoltaic module reliability workshop

    NASA Astrophysics Data System (ADS)

    Mrig, L.

The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986 to 1990. The reliability of photovoltaic (PV) modules/systems is exceedingly important, along with the initial cost and efficiency of modules, if PV technology is to make a major impact in the power generation market and compete with conventional electricity-producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years, as evidenced by warranties available on commercial modules of as long as 12 years. However, substantial research and testing are still required to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the U.S., PV manufacturers, DOE laboratories, electric utilities, and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in this field were brought together under SERI/DOE sponsorship to exchange technical knowledge and field experience related to current information in this important field. The papers presented here reflect this effort.

  20. Proposed reliability cost model

    NASA Technical Reports Server (NTRS)

    Delionback, L. M.

    1973-01-01

The research investigations involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model building. This is a classic example of an interdisciplinary problem, since the model-building requirements include the need for understanding and communication between technical disciplines on one hand, and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach depends upon the use of a series of subsystem-oriented CERs and, where possible, CTRs, in devising a suitable cost-effective policy.

  1. Orbiter Autoland reliability analysis

    NASA Technical Reports Server (NTRS)

    Welch, D. Phillip

    1993-01-01

The Space Shuttle Orbiter is the only space reentry vehicle in which the crew is seated upright. This position presents some physiological effects requiring countermeasures to prevent a crewmember from becoming incapacitated. This also introduces a potential need for automated vehicle landing capability. Autoland is a primary procedure that was identified as a requirement for landing following an extended-duration Orbiter mission. This report documents the results of the reliability analysis performed on the hardware required for an automated landing. A reliability block diagram was used to evaluate system reliability. The analysis considers the manual and automated landing modes currently available on the Orbiter. (Autoland is presently a backup system only.) Results of this study indicate a +/- 36 percent probability of successfully extending a nominal mission to 30 days. Enough variations were evaluated to verify that the reliability could be altered with mission planning and procedures. If the crew is modeled as being fully capable after 30 days, the probability of a successful manual landing is comparable to that of Autoland, because much of the hardware is used for both manual and automated landing modes. The analysis indicates that the reliability for the manual mode is limited by the hardware and depends greatly on crew capability. Crew capability for a successful landing after 30 days has not yet been determined.
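A reliability block diagram of the kind mentioned reduces to simple series/parallel arithmetic once each block's reliability is known; a minimal sketch with invented reliability values (not the Orbiter's actual figures):

```python
# Reliability-block-diagram arithmetic (illustrative values only).

def series(*rs):
    """All blocks must work: multiply reliabilities."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(*rs):
    """Redundant blocks: the group fails only if every block fails."""
    q = 1.0
    for r in rs:
        q *= (1.0 - r)
    return 1.0 - q

# Example: two redundant flight computers in series with one sensor string.
r_system = series(parallel(0.95, 0.95), 0.99)
```

Redundancy raises the computer pair well above either unit alone, while the single-string sensor caps the overall figure, which is the qualitative pattern such an analysis exposes.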

  2. Photovoltaic module reliability workshop

    SciTech Connect

    Mrig, L.

    1990-01-01

The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986-1990. The reliability of photovoltaic (PV) modules/systems is exceedingly important, along with the initial cost and efficiency of modules, if PV technology is to make a major impact in the power generation market and compete with conventional electricity-producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years, as evidenced by warranties available on commercial modules of as long as 12 years. However, substantial research and testing are still required to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the US, PV manufacturers, DOE laboratories, electric utilities, and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in this field were brought together under SERI/DOE sponsorship to exchange technical knowledge and field experience related to current information in this important field. The papers presented here reflect this effort.

  3. Reliability Centered Maintenance - Methodologies

    NASA Technical Reports Server (NTRS)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  4. Software reliability perspectives

    NASA Technical Reports Server (NTRS)

    Wilson, Larry; Shen, Wenhui

    1987-01-01

Software which is used in life-critical functions must be known to be highly reliable before installation. This requires a strong testing program to estimate the reliability, since neither formal methods, software engineering, nor fault-tolerant methods can guarantee perfection. Prior to final testing, software goes through a debugging period, and many models have been developed to try to estimate reliability from the debugging data. However, the existing models are poorly validated and often give poor performance. This paper emphasizes the fact that part of their failures can be attributed to the random nature of the debugging data given to these models as input, and it poses the problem of correcting this defect as an area of future research.

  5. Gearbox Reliability Collaborative Update (Presentation)

    SciTech Connect

    Sheng, S.

    2013-10-01

    This presentation was given at the Sandia Reliability Workshop in August 2013 and provides information on current statistics, a status update, next steps, and other reliability research and development activities related to the Gearbox Reliability Collaborative.

  6. Materials reliability issues in microelectronics

    SciTech Connect

Lloyd, J.R.; Yost, F.G.; Ho, P.S.

    1991-01-01

    This book covers the proceedings of a MRS symposium on materials reliability in microelectronics. Topics include: electromigration; stress effects on reliability; stress and packaging; metallization; device, oxide and dielectric reliability; new investigative techniques; and corrosion.

  7. Substrate-Driven Convergence of the Microbial Community in Lignocellulose-Amended Enrichments of Gut Microflora from the Canadian Beaver (Castor canadensis) and North American Moose (Alces americanus)

    PubMed Central

    Wong, Mabel T.; Wang, Weijun; Lacourt, Michael; Couturier, Marie; Edwards, Elizabeth A.; Master, Emma R.

    2016-01-01

    Strategic enrichment of microcosms derived from wood foragers can facilitate the discovery of key microbes that produce enzymes for the bioconversion of plant fiber (i.e., lignocellulose) into valuable chemicals and energy. In this study, lignocellulose-degrading microorganisms from the digestive systems of Canadian beaver (Castor canadensis) and North American moose (Alces americanus) were enriched under methanogenic conditions for over 3 years using various wood-derived substrates, including (i) cellulose (C), (ii) cellulose + lignosulphonate (CL), (iii) cellulose + tannic acid (CT), and (iv) poplar hydrolysate (PH). Substantial improvement in the conversion of amended organic substrates into biogas was observed in both beaver dropping and moose rumen enrichment cultures over the enrichment phases (up to 0.36–0.68 ml biogas/mg COD added), except for enrichments amended with tannic acid where conversion was approximately 0.15 ml biogas/mg COD added. Multiplex-pyrosequencing of 16S rRNA genes revealed systematic shifts in the population of Firmicutes, Bacteroidetes, Chlorobi, Spirochaetes, Chloroflexi, and Elusimicrobia in response to the enrichment. These shifts were predominantly substrate driven, not inoculum driven, as revealed by both UPGMA clustering pattern and OTU distribution. Additionally, the relative abundance of multiple OTUs from poorly defined taxonomic lineages increased from less than 1% to 25–50% in microcosms amended with lignocellulosic substrates, including OTUs from classes SJA-28, Endomicrobia, orders Bacteroidales, OPB54, and family Lachnospiraceae. This study provides the first direct comparison of shifts in microbial communities that occurred in different environmental samples in response to multiple relevant lignocellulosic carbon sources, and demonstrates the potential of enrichment to increase the abundance of key lignocellulolytic microorganisms and encoded activities. PMID:27446004

  8. Substrate-Driven Convergence of the Microbial Community in Lignocellulose-Amended Enrichments of Gut Microflora from the Canadian Beaver (Castor canadensis) and North American Moose (Alces americanus).

    PubMed

    Wong, Mabel T; Wang, Weijun; Lacourt, Michael; Couturier, Marie; Edwards, Elizabeth A; Master, Emma R

    2016-01-01

    Strategic enrichment of microcosms derived from wood foragers can facilitate the discovery of key microbes that produce enzymes for the bioconversion of plant fiber (i.e., lignocellulose) into valuable chemicals and energy. In this study, lignocellulose-degrading microorganisms from the digestive systems of Canadian beaver (Castor canadensis) and North American moose (Alces americanus) were enriched under methanogenic conditions for over 3 years using various wood-derived substrates, including (i) cellulose (C), (ii) cellulose + lignosulphonate (CL), (iii) cellulose + tannic acid (CT), and (iv) poplar hydrolysate (PH). Substantial improvement in the conversion of amended organic substrates into biogas was observed in both beaver dropping and moose rumen enrichment cultures over the enrichment phases (up to 0.36-0.68 ml biogas/mg COD added), except for enrichments amended with tannic acid where conversion was approximately 0.15 ml biogas/mg COD added. Multiplex-pyrosequencing of 16S rRNA genes revealed systematic shifts in the population of Firmicutes, Bacteroidetes, Chlorobi, Spirochaetes, Chloroflexi, and Elusimicrobia in response to the enrichment. These shifts were predominantly substrate driven, not inoculum driven, as revealed by both UPGMA clustering pattern and OTU distribution. Additionally, the relative abundance of multiple OTUs from poorly defined taxonomic lineages increased from less than 1% to 25-50% in microcosms amended with lignocellulosic substrates, including OTUs from classes SJA-28, Endomicrobia, orders Bacteroidales, OPB54, and family Lachnospiraceae. This study provides the first direct comparison of shifts in microbial communities that occurred in different environmental samples in response to multiple relevant lignocellulosic carbon sources, and demonstrates the potential of enrichment to increase the abundance of key lignocellulolytic microorganisms and encoded activities. PMID:27446004

  9. Designing reliability into accelerators

    SciTech Connect

    Hutton, A.

    1992-08-01

For the next generation of high-performance, high-average-luminosity colliders, the "factories," reliability engineering must be introduced right at the inception of the project and maintained as a central theme throughout the project. There are several aspects which will be addressed separately: concept; design; motivation; management techniques; and fault diagnosis.

  10. Designing reliability into accelerators

    SciTech Connect

    Hutton, A.

    1992-08-01

For the next generation of high-performance, high-average-luminosity colliders, the "factories," reliability engineering must be introduced right at the inception of the project and maintained as a central theme throughout the project. There are several aspects which will be addressed separately: concept; design; motivation; management techniques; and fault diagnosis.

  11. Software reliability report

    NASA Technical Reports Server (NTRS)

    Wilson, Larry

    1991-01-01

There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Unfortunately, the models appear to be unable to account for the random nature of the data. If the same code is debugged multiple times and one of the models is used to make predictions, intolerable variance is observed in the resulting reliability predictions. It is believed that data replication can remove this variance in lab-type situations and that it is less than scientific to talk about validating a software reliability model without considering replication. It is also believed that data replication may prove to be cost effective in the real world; thus the research centered on verification of the need for replication and on methodologies for generating replicated data in a cost-effective manner. The concept of the debugging graph was pursued by simulation and experimentation. Simulation was done for the Basic model and the Log-Poisson model. Reasonable values of the parameters were assigned and used to generate simulated data, which was then processed by the models in order to determine limitations on their accuracy. These experiments exploit the existing software and program specimens in AIR-LAB to measure the performance of reliability models.
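The two models named above are commonly characterized by their mean-value functions (the expected cumulative number of failures by time t). A sketch with illustrative parameter values, assuming the "Basic" model refers to the exponential (finite-failures) form and the Log-Poisson to the logarithmic Poisson form:

```python
import math

def basic_mu(t, a, b):
    """Basic (exponential) model: a = total faults, b = per-fault detection rate."""
    return a * (1.0 - math.exp(-b * t))

def log_poisson_mu(t, lam0, theta):
    """Logarithmic Poisson model: lam0 = initial intensity, theta = decay."""
    return (1.0 / theta) * math.log(1.0 + lam0 * theta * t)

# The Basic model saturates at its fault total a, so simulated failure data
# eventually stops; the Log-Poisson model grows without bound, only slowing.
mu_early = basic_mu(10.0, a=100, b=0.05)   # roughly 39 expected failures
mu_late = basic_mu(1e6, a=100, b=0.05)     # approaches 100: all faults found
```

Simulated interfailure times drawn from either intensity can then be fed back into the models, as in the experiments described, to see how much the predictions vary across replications.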

  12. Software Reliability Measurement Experience

    NASA Technical Reports Server (NTRS)

    Nikora, A. P.

    1993-01-01

In this chapter, we describe a recent study of software reliability measurement methods that was conducted at the Jet Propulsion Laboratory. The first section of the chapter, section 8.1, summarizes the study, characterizes the participating projects, describes the available data, and summarizes the study's results.

  13. Reliable solar cookers

    SciTech Connect

    Magney, G.K.

    1992-12-31

    The author describes the activities of SERVE, a Christian relief and development agency, to introduce solar ovens to the Afghan refugees in Pakistan. It has provided 5,000 solar cookers since 1984. The experience has demonstrated the potential of the technology and the need for a durable and reliable product. Common complaints about the cookers are discussed and the ideal cooker is described.

  14. Parametric Mass Reliability Study

    NASA Technical Reports Server (NTRS)

    Holt, James P.

    2014-01-01

The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass, such as computer housings, pump casings, and the silicon boards of PCBs, typically are the most reliable. Meanwhile, components that tend to fail the earliest, such as seals or gaskets, typically have a small mass. To better understand the problem, my project is to create a parametric model that relates the mass of ORUs to reliability, as well as the mass of ORU subcomponents to reliability.

  15. Nonparametric Methods in Reliability

    PubMed Central

    Hollander, Myles; Peña, Edsel A.

    2005-01-01

    Probabilistic and statistical models for the occurrence of a recurrent event over time are described. These models have applicability in the reliability, engineering, biomedical and other areas where a series of events occurs for an experimental unit as time progresses. Nonparametric inference methods, in particular, the estimation of a relevant distribution function, are described. PMID:16710444
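One such nonparametric estimate of a relevant distribution function is the empirical CDF of observed failure times; a minimal sketch for complete (uncensored) data, with invented failure times:

```python
# Empirical distribution function: no parametric form is assumed for the
# failure-time distribution; the data speak for themselves.

def ecdf(failure_times):
    """Return a step function F_hat(t) = fraction of failures at or before t."""
    data = sorted(failure_times)
    n = len(data)
    def F_hat(t):
        return sum(1 for x in data if x <= t) / n
    return F_hat

F = ecdf([12.0, 30.0, 30.0, 45.0, 80.0])
# F(29) = 0.2 and F(30) = 0.6; reliability at time t is R(t) = 1 - F(t)
```

With censored or recurrent-event data of the kind the article discusses, this simple estimator would be replaced by a product-limit (Kaplan-Meier-style) estimator, but the nonparametric principle is the same.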

  16. Space Shuttle Propulsion System Reliability

    NASA Technical Reports Server (NTRS)

    Welzyn, Ken; VanHooser, Katherine; Moore, Dennis; Wood, David

    2011-01-01

This session includes the following presentations: (1) External Tank (ET) System Reliability and Lessons; (2) Space Shuttle Main Engine (SSME) Reliability Validated by a Million Seconds of Testing; (3) Reusable Solid Rocket Motor (RSRM) Reliability via Process Control; and (4) Solid Rocket Booster (SRB) Reliability via Acceptance and Testing.

  17. Reliable broadcast protocols

    NASA Technical Reports Server (NTRS)

    Joseph, T. A.; Birman, Kenneth P.

    1989-01-01

    A number of broadcast protocols that are reliable subject to a variety of ordering and delivery guarantees are considered. Developing applications that are distributed over a number of sites and/or must tolerate the failures of some of them becomes a considerably simpler task when such protocols are available for communication. Without such protocols the kinds of distributed applications that can reasonably be built will have a very limited scope. As the trend towards distribution and decentralization continues, it will not be surprising if reliable broadcast protocols have the same role in distributed operating systems of the future that message passing mechanisms have in the operating systems of today. On the other hand, the problems of engineering such a system remain large. For example, deciding which protocol is the most appropriate to use in a certain situation or how to balance the latency-communication-storage costs is not an easy question.
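A toy sketch of one of the guarantees discussed, totally ordered ("atomic") delivery via a central sequencer, follows; real protocols must also handle sequencer and member failures and are far more involved, and all class names here are invented for illustration:

```python
# Sequencer-based total order: every message gets a global sequence number,
# and receivers deliver strictly in sequence-number order, holding back
# messages that arrive early.

class Sequencer:
    def __init__(self):
        self.next_seq = 0
    def stamp(self, msg):
        s = (self.next_seq, msg)
        self.next_seq += 1
        return s

class Receiver:
    def __init__(self):
        self.expected = 0
        self.pending = {}      # out-of-order messages held back
        self.delivered = []
    def receive(self, seq, msg):
        self.pending[seq] = msg
        while self.expected in self.pending:   # deliver in sequence order
            self.delivered.append(self.pending.pop(self.expected))
            self.expected += 1

seq = Sequencer()
a, b = Receiver(), Receiver()
stamped = [seq.stamp(m) for m in ("m1", "m2", "m3")]
for r, order in ((a, [0, 1, 2]), (b, [2, 0, 1])):   # b sees reordering
    for i in order:
        r.receive(*stamped[i])
# Both receivers deliver m1, m2, m3 despite different arrival orders.
```

The engineering trade-off noted in the abstract shows up even here: the sequencer adds a round of latency and becomes a single point of failure, which is exactly the kind of cost-balancing question the authors raise.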

  18. Data networks reliability

    NASA Astrophysics Data System (ADS)

    Gallager, Robert G.

    1988-10-01

The research from 1984 to 1986 on Data Network Reliability had the objective of developing general principles governing the reliable and efficient control of data networks. The research was centered around three major areas: congestion control, multiaccess networks, and distributed asynchronous algorithms. The major topics within congestion control were the use of flow control to reduce congestion and the use of routing to reduce congestion. The major topics within multiaccess networks were the communication properties of multiaccess channels, collision resolution, and packet radio networks. The major topics within asynchronous distributed algorithms were failure recovery, time vs. communication tradeoffs, and the general theory of distributed algorithms.

  19. Human Reliability Program Workshop

    SciTech Connect

    Landers, John; Rogers, Erin; Gerke, Gretchen

    2014-05-18

A Human Reliability Program (HRP) is designed to protect national security as well as worker and public safety by continuously evaluating the reliability of those who have access to sensitive materials, facilities, and programs. Some elements of a site HRP include systematic (1) supervisory reviews, (2) medical and psychological assessments, (3) management evaluations, (4) personnel security reviews, and (5) training of HRP staff and critical positions. Over the years of implementing an HRP, the Department of Energy (DOE) has faced various challenges and overcome obstacles. During this 4-day activity, participants will examine programs that mitigate threats to nuclear security and the insider threat, including HRP, Nuclear Security Culture (NSC) Enhancement, and Employee Assistance Programs. The focus will be to develop an understanding of the need for a systematic HRP and to discuss challenges and best practices associated with mitigating the insider threat.

  20. Reliability of photovoltaic modules

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1986-01-01

    In order to assess the reliability of photovoltaic modules, four categories of known array failure and degradation mechanisms are discussed, and target reliability allocations have been developed within each category based on the available technology and the life-cycle-cost requirements of future large-scale terrestrial applications. Cell-level failure mechanisms associated with open-circuiting or short-circuiting of individual solar cells generally arise from cell cracking or the fatigue of cell-to-cell interconnects. Power degradation mechanisms considered include gradual power loss in cells, light-induced effects, and module optical degradation. Module-level failure mechanisms and life-limiting wear-out mechanisms are also explored.

  1. Compact, Reliable EEPROM Controller

    NASA Technical Reports Server (NTRS)

    Katz, Richard; Kleyner, Igor

    2010-01-01

A compact, reliable controller for an electrically erasable, programmable read-only memory (EEPROM) has been developed specifically for a space-flight application. The design may be adaptable to other applications in which there are requirements for reliability in general and, in particular, for prevention of inadvertent writing of data in EEPROM cells. Inadvertent writes pose risks of loss of reliability in the original space-flight application and could pose such risks in other applications. Prior EEPROM controllers are large and complex and do not provide all reasonable protections (in many cases, few or no protections) against inadvertent writes. In contrast, the present controller provides several layers of protection against inadvertent writes. The controller also incorporates a write-time monitor, enabling determination of trends in the performance of an EEPROM through all phases of testing. The controller has been designed as an integral subsystem of a system that includes not only the controller and the controlled EEPROM aboard a spacecraft but also computers in a ground control station, relatively simple onboard support circuitry, and an onboard communication subsystem that utilizes the MIL-STD-1553B protocol. (MIL-STD-1553B is a military standard that encompasses a method of communication and electrical-interface requirements for digital electronic subsystems connected to a data bus. MIL-STD-1553B is commonly used in defense and space applications.) The intent was to maximize reliability while minimizing the size and complexity of onboard circuitry. In operation, control of the EEPROM is effected via the ground computers, the MIL-STD-1553B communication subsystem, and the onboard support circuitry, all of which, in combination, provide the multiple layers of protection against inadvertent writes. There is no controller software, unlike in many prior EEPROM controllers; software can be a major contributor to unreliability, particularly in fault

  2. Spacecraft transmitter reliability

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A workshop on spacecraft transmitter reliability was held at the NASA Lewis Research Center on September 25 and 26, 1979, to discuss present knowledge and to plan future research areas. Since formal papers were not submitted, this synopsis was derived from audio tapes of the workshop. The following subjects were covered: users' experience with space transmitters; cathodes; power supplies and interfaces; and specifications and quality assurance. A panel discussion ended the workshop.

  3. Reliability and testing

    NASA Technical Reports Server (NTRS)

    Auer, Werner

    1996-01-01

Reliability and its interdependence with testing are important topics for the development and manufacturing of successful products. This generally accepted fact is not only a technical statement, but must also be seen in the light of 'Human Factors.' While the background for this paper is the experience gained with electromechanical/electronic space products, including control and system considerations, it is believed that the content could also be of interest for other fields.

  4. Software reliability studies

    NASA Technical Reports Server (NTRS)

    Hoppa, Mary Ann; Wilson, Larry W.

    1994-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.
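The debugging-graph idea can be sketched by enumerating fault-recovery orders and the program failure rate observed along each path; the fault names and per-fault rates below are invented for illustration, not taken from the study:

```python
from itertools import permutations

# Invented per-fault failure rates; the program's rate at any debugging stage
# is taken here as the sum over faults still present (no fault interaction).
fault_rates = {"A": 0.5, "B": 0.25, "C": 0.125}

def path_rates(order):
    """Failure rate observed at each debugging stage along one recovery order."""
    remaining = dict(fault_rates)
    rates = []
    for fault in order:
        rates.append(sum(remaining.values()))  # rate before removing `fault`
        remaining.pop(fault)
    return rates

# One path per permutation: every path starts at the full rate 0.875, but the
# intermediate rates differ, which is why model predictions can vary with the
# fault recovery order.
paths = {order: path_rates(order) for order in permutations(fault_rates)}
```

Feeding each path's rate sequence to a reliability model, as the experiment above does, exposes how strongly its predictions depend on which debugging path the data happened to follow; modeling fault interactions, also observed in the study, would require stage-dependent rates rather than this simple additive rule.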

  5. General Aviation Aircraft Reliability Study

    NASA Technical Reports Server (NTRS)

    Pettit, Duane; Turnbull, Andrew; Roelant, Henk A. (Technical Monitor)

    2001-01-01

    This reliability study was performed in order to provide the aviation community with an estimate of Complex General Aviation (GA) Aircraft System reliability. To successfully improve the safety and reliability for the next generation of GA aircraft, a study of current GA aircraft attributes was prudent. This was accomplished by benchmarking the reliability of operational Complex GA Aircraft Systems. Specifically, Complex GA Aircraft System reliability was estimated using data obtained from the logbooks of a random sample of the Complex GA Aircraft population.

  6. Perceptions of environmental change and use of traditional knowledge to plan riparian forest restoration with relocated communities in Alcântara, Eastern Amazon

    PubMed Central

    2014-01-01

    Background Riparian forests provide ecosystem services that are essential for human well-being. The Pepital River is the main water supply for Alcântara (Brazil) and its forests are disappearing. This is affecting water volume and distribution in the region. Promoting forest restoration is imperative. In deprived regions, restoration success depends on the integration of ecology, livelihoods and traditional knowledge (TEK). In this study, an interdisciplinary research framework is proposed to design riparian forest restoration strategies based on ecological data, TEK and social needs. Methods This study takes place in a region presenting a complex history of human relocation and land tenure. Local populations from seven villages were surveyed to document livelihood (including ‘free-listing’ of agricultural crops and homegarden tree species). Additionally, their perceptions toward environmental changes were explored through semi-structured interviews (n = 79). Ethnobotanical information on forest species and their uses were assessed by local-specialists (n = 19). Remnants of conserved forests were surveyed to access ecological information on tree species (three plots of 1,000 m2). Results included descriptive statistics, frequency and Smith’s index of salience of the free-list results. Results The local population depends primarily on slash-and-burn subsistence agriculture to meet their needs. Interviewees showed a strong empirical knowledge about the environmental problems of the river, and of their causes, consequences and potential solutions. Twenty-four tree species (dbh > 10 cm) were found at the reference sites. Tree density averaged 510 individuals per hectare (stdv = 91.6); and 12 species were considered the most abundant (density > 10ind/ha). There was a strong consensus among plant-specialists about the most important trees. The species lists from reference sites and plant-specialists presented an important convergence

  7. Ultimately Reliable Pyrotechnic Systems

    NASA Technical Reports Server (NTRS)

    Scott, John H.; Hinkel, Todd

    2015-01-01

This paper presents the methods by which NASA has designed, built, tested, and certified pyrotechnic devices for high-reliability operation in extreme environments and illustrates the potential applications in the oil and gas industry. NASA's extremely successful application of pyrotechnics is built upon documented procedures and test methods that have been maintained and developed since the Apollo Program. Standards are managed and rigorously enforced for performance margins, redundancy, lot sampling, and personnel safety. The pyrotechnics utilized in spacecraft include such devices as small initiators and detonators with the power of a shotgun shell, detonating cord systems for explosive energy transfer across many feet, precision linear shaped charges for breaking structural membranes, and booster charges to actuate valves and pistons. NASA's pyrotechnics program is one of the more successful in the history of Human Spaceflight. No pyrotechnic device developed in accordance with NASA's Human Spaceflight standards has ever failed in flight use. NASA's pyrotechnic initiators work reliably in temperatures as low as -420 F. Each of the 135 Space Shuttle flights fired 102 of these initiators, some setting off multiple pyrotechnic devices, with never a failure. The recent landing on Mars of the Curiosity rover fired 174 of NASA's pyrotechnic initiators to complete the famous '7 minutes of terror.' Even after traveling through extreme radiation and thermal environments on the way to Mars, every one of them worked. These initiators have fired on the surface of Titan. NASA's design controls, procedures, and processes produce the most reliable pyrotechnics in the world. 
Application of pyrotechnics designed and procured in this manner could enable the energy industry's emergency equipment, such as shutoff valves and deep-sea blowout preventers, to be left in place for years in extreme environments and still be relied upon to function when needed, thus greatly enhancing

  8. CR reliability testing

    NASA Astrophysics Data System (ADS)

    Honeyman-Buck, Janice C.; Rill, Lynn; Frost, Meryll M.; Staab, Edward V.

    1998-07-01

    The purpose of this work was to develop a method for systematically testing the reliability of a CR system under realistic daily loads in a non-clinical environment prior to its clinical adoption. Once digital imaging replaces film, it will be very difficult to revert should the digital system become unreliable. Prior to the beginning of the test, a formal evaluation was performed to set the benchmarks for performance and functionality. A formal protocol was established that included all 62 imaging plates in the inventory for each 24-hour period in the study. Imaging plates were exposed using different combinations of collimation, orientation, and SID. Anthropomorphic phantoms were used to acquire images of different sizes. Each combination was chosen randomly to simulate the differences that could occur in clinical practice. The tests were performed over a wide range of times with batches of plates processed to simulate the temporal constraints required by the nature of portable radiographs taken in the Intensive Care Unit (ICU). Current patient demographics were used for the test studies so automatic routing algorithms could be tested. During the test, only three minor reliability problems occurred, two of which were not directly related to the CR unit. One plate was discovered to cause a segmentation error that essentially reduced the image to only black and white with no gray levels. This plate was removed from the inventory to be replaced. Another problem was a PACS routing problem that occurred when the DICOM server with which the CR was communicating had a problem with disk space. The final problem was a network printing failure to the laser cameras. Although the units passed the reliability test, problems with interfacing to workstations were discovered. The two issues that were identified were the interpretation of what constitutes a study for CR and the construction of the look-up table for a proper gray scale display.

  9. Reliable VLSI sequential controllers

    NASA Technical Reports Server (NTRS)

    Whitaker, S.; Maki, G.; Shamanna, M.

    1990-01-01

    A VLSI architecture for synchronous sequential controllers is presented that has attractive qualities for producing reliable circuits. In these circuits, one hardware implementation can realize any flow table with a maximum of 2(exp n) internal states and m inputs. Also all design equations are identical. A real-time fault detection mechanism is presented along with a strategy for verifying the correctness of the checking hardware. This self-check feature can be employed with no increase in hardware. The architecture can be modified to achieve fail safe designs. With no increase in hardware, an adaptable circuit can be realized that allows replacement of faulty transitions with fault free transitions.

  10. Ferrite logic reliability study

    NASA Technical Reports Server (NTRS)

    Baer, J. A.; Clark, C. B.

    1973-01-01

    Development and use of digital circuits called all-magnetic logic are reported. In these circuits the magnetic elements and their windings comprise the active circuit devices in the logic portion of a system. The ferrite logic (FLO) device belongs to the all-magnetic class of logic circuits. The FLO device is novel in that it makes use of a dual or bimaterial ferrite composition in one physical ceramic body. This bimaterial feature, coupled with its potential for relatively high speed operation, makes it attractive for high reliability applications. (Maximum speed of operation approximately 50 kHz.)

  11. Fault Tree Reliability Analysis and Design-for-reliability

    1998-05-05

    WinR provides a fault tree analysis capability for performing systems reliability and design-for-reliability analyses. The package includes capabilities for sensitivity and uncertainty analysis, field failure data analysis, and optimization.

  12. On Component Reliability and System Reliability for Space Missions

    NASA Technical Reports Server (NTRS)

    Chen, Yuan; Gillespie, Amanda M.; Monaghan, Mark W.; Sampson, Michael J.; Hodson, Robert F.

    2012-01-01

    This paper addresses the basics, the limitations and the relationship between component reliability and system reliability through a study of flight computing architectures and related avionics components for NASA future missions. Component reliability analysis and system reliability analysis need to be evaluated at the same time, and the limitations of each analysis and the relationship between the two analyses need to be understood.
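The component-to-system relationship noted above can be illustrated with the simplest structural case: in a series string, every component must work, so system reliability is the product of the component reliabilities and the weakest component dominates. A minimal sketch (the component values below are illustrative, not from the paper):

```python
# Series-system reliability: the system works only if every component works,
# so the system reliability is the product of the component reliabilities.

def series_reliability(component_reliabilities):
    r = 1.0
    for ri in component_reliabilities:
        r *= ri  # each component must survive independently
    return r

# Three hypothetical avionics components (e.g. processor, memory, I/O)
components = [0.999, 0.995, 0.99]
print(round(series_reliability(components), 6))  # 0.984065
```

Note how three individually "good" components already pull the system below 0.99, which is why component-level and system-level analyses must be evaluated together.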

  13. Integrated circuit reliability testing

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G. (Inventor); Sayah, Hoshyar R. (Inventor)

    1990-01-01

    A technique is described for use in determining the reliability of microscopic conductors deposited on an uneven surface of an integrated circuit device. A wafer containing integrated circuit chips is formed with a test area having regions of different heights. At the time the conductors are formed on the chip areas of the wafer, an elongated serpentine assay conductor is deposited on the test area so the assay conductor extends over multiple steps between regions of different heights. Also, a first test conductor is deposited in the test area upon a uniform region of first height, and a second test conductor is deposited in the test area upon a uniform region of second height. The occurrence of high resistances at the steps between regions of different height is indicated by deriving the measured length of the serpentine conductor using the resistance measured between the ends of the serpentine conductor, and comparing that to the design length of the serpentine conductor. The percentage by which the measured length exceeds the design length, at which the integrated circuit will be discarded, depends on the required reliability of the integrated circuit.
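The length-from-resistance check described in this record reduces to a small computation: convert the serpentine conductor's measured end-to-end resistance into an implied length, then compare the excess over the design length against a discard threshold. A hedged sketch follows; the resistance-per-unit-length value and the 10% threshold are illustrative assumptions, not figures from the patent:

```python
# Sketch of the serpentine assay-conductor check: high resistance at the
# steps between regions inflates the "measured length" implied by the
# end-to-end resistance. If the excess over the design length is too
# large, the wafer's conductors are deemed unreliable.

def passes_step_coverage_test(measured_resistance_ohm,
                              design_length_um,
                              resistance_per_um=0.05,   # assumed sheet property
                              max_excess_pct=10.0):     # assumed discard threshold
    """Return True if the implied conductor length is within tolerance."""
    measured_length_um = measured_resistance_ohm / resistance_per_um
    excess_pct = 100.0 * (measured_length_um - design_length_um) / design_length_um
    return excess_pct <= max_excess_pct

# A serpentine reading 525 ohms against a 10,000 um design length at
# 0.05 ohm/um implies 10,500 um -- a 5% excess, so the wafer passes.
print(passes_step_coverage_test(525.0, 10_000.0))  # True
```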

  14. Integrated circuit reliability testing

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G. (Inventor); Sayah, Hoshyar R. (Inventor)

    1988-01-01

    A technique is described for use in determining the reliability of microscopic conductors deposited on an uneven surface of an integrated circuit device. A wafer containing integrated circuit chips is formed with a test area having regions of different heights. At the time the conductors are formed on the chip areas of the wafer, an elongated serpentine assay conductor is deposited on the test area so the assay conductor extends over multiple steps between regions of different heights. Also, a first test conductor is deposited in the test area upon a uniform region of first height, and a second test conductor is deposited in the test area upon a uniform region of second height. The occurrence of high resistances at the steps between regions of different height is indicated by deriving the measured length of the serpentine conductor using the resistance measured between the ends of the serpentine conductor, and comparing that to the design length of the serpentine conductor. The percentage by which the measured length exceeds the design length, at which the integrated circuit will be discarded, depends on the required reliability of the integrated circuit.

  15. Load Control System Reliability

    SciTech Connect

    Trudnowski, Daniel

    2015-04-03

    This report summarizes the results of the Load Control System Reliability project (DOE Award DE-FC26-06NT42750). The original grant was awarded to Montana Tech April 2006. Follow-on DOE awards and expansions to the project scope occurred August 2007, January 2009, April 2011, and April 2013. In addition to the DOE monies, the project also consisted of matching funds from the states of Montana and Wyoming. Project participants included Montana Tech; the University of Wyoming; Montana State University; NorthWestern Energy, Inc., and MSE. Research focused on two areas: real-time power-system load control methodologies; and, power-system measurement-based stability-assessment operation and control tools. The majority of effort was focused on area 2. Results from the research includes: development of fundamental power-system dynamic concepts, control schemes, and signal-processing algorithms; many papers (including two prize papers) in leading journals and conferences and leadership of IEEE activities; one patent; participation in major actual-system testing in the western North American power system; prototype power-system operation and control software installed and tested at three major North American control centers; and, the incubation of a new commercial-grade operation and control software tool. Work under this grant certainly supported the DOE-OE goals in the area of “Real Time Grid Reliability Management.”

  16. Understanding the Elements of Operational Reliability: A Key for Achieving High Reliability

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.

    2010-01-01

    This viewgraph presentation reviews operational reliability and its role in achieving high reliability through design and process reliability. The topics include: 1) Reliability Engineering Major Areas and interfaces; 2) Design Reliability; 3) Process Reliability; and 4) Reliability Applications.

  17. Formal methods and software reliability

    NASA Technical Reports Server (NTRS)

    Holzmann, Gerard J.

    2004-01-01

    In this position statement I briefly describe how the software reliability problem has changed over the years, and the primary reasons for the recent creation of the Laboratory for Reliable Software at JPL.

  18. Further discussion on reliability: the art of reliability estimation.

    PubMed

    Yang, Yanyun; Green, Samuel B

    2015-01-01

    Sijtsma and van der Ark (2015) focused in their lead article on three frameworks for reliability estimation in nursing research: classical test theory (CTT), factor analysis (FA), and generalizability theory. We extend their presentation with particular attention to CTT and FA methods. We first consider the potential of yielding an overly negative or an overly positive assessment of reliability based on coefficient alpha. Next, we discuss other CTT methods for estimating reliability and how the choice of methods affects the interpretation of the reliability coefficient. Finally, we describe FA methods, which not only permit an understanding of a measure's underlying structure but also yield a variety of reliability coefficients with different interpretations. On a more general note, we discourage reporting reliability as a two-choice outcome--unsatisfactory or satisfactory; rather, we recommend that nursing researchers make a conceptual and empirical argument about when a measure might be more or less reliable, depending on its use. PMID:25738627

  19. Making Reliability Arguments in Classrooms

    ERIC Educational Resources Information Center

    Parkes, Jay; Giron, Tilia

    2006-01-01

    Reliability methodology needs to evolve, as validity methodology has, into an argument supported by theory and empirical evidence. Nowhere is the inadequacy of current methods more visible than in classroom assessment. Reliability arguments would also permit additional methodologies for evidencing reliability in classrooms. It would liberalize methodology…

  20. Testing for PV Reliability (Presentation)

    SciTech Connect

    Kurtz, S.; Bansal, S.

    2014-09-01

    The DOE SUNSHOT workshop is seeking input from the community about PV reliability and how the DOE might address gaps in understanding. This presentation describes the types of testing that are needed for PV reliability and introduces a discussion to identify gaps in our understanding of PV reliability testing.

  1. Business of reliability

    NASA Astrophysics Data System (ADS)

    Engel, Pierre

    1999-12-01

    The presentation is organized around three themes: (1) The decrease of reception equipment costs allows non-remote-sensing organizations to access a technology until recently reserved for a scientific elite. What this means is the rise of 'operational' executive agencies considering space-based technology and operations as a viable input to their daily tasks. This is possible thanks to totally dedicated ground receiving entities focusing on one application for themselves, rather than serving a vast community of users. (2) The multiplication of earth observation platforms will form the base for reliable technical and financial solutions. One obstacle to the growth of the earth observation industry is the variety of policies (commercial versus non-commercial) ruling the distribution of the data and value-added products. In particular, the high volume of data sales required for the return on investment conflicts with traditional low-volume data use for most applications. Constant access to data sources supposes monitoring needs as well as technical proficiency. (3) Large volume use of data coupled with low-cost equipment is only possible when the technology has proven reliable, in terms of application results, financial risks and data supply. Each of these factors is reviewed. The expectation is that international cooperation between agencies and private ventures will pave the way for future business models. As an illustration, the presentation proposes to use some recent non-traditional monitoring applications, that may lead to significant use of earth observation data, value added products and services: flood monitoring, ship detection, marine oil pollution deterrent systems and rice acreage monitoring.

  2. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
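The importance-sampling idea summarized above can be shown in its plainest form: sample from a density shifted toward the failure domain and reweight by the likelihood ratio, so that rare failures are observed often. This is a non-adaptive sketch under assumed densities, not the paper's AIS algorithm:

```python
# Importance-sampling estimate of a rare failure probability P[X > beta]
# for X ~ N(0,1). Sampling from N(beta, 1) concentrates samples near the
# failure boundary; the likelihood-ratio weight corrects the estimate.
import math
import random

def normal_pdf(x, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def failure_probability(beta, n=200_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(beta, 1.0)          # importance density centered on the boundary
        if x > beta:                      # failure indicator
            total += normal_pdf(x) / normal_pdf(x, mu=beta)  # likelihood ratio
    return total / n

# Exact value is 1 - Phi(3) ~= 1.35e-3; crude Monte Carlo would need
# millions of samples to see enough failures, importance sampling does not.
print(failure_probability(3.0))
```

The adaptive refinement in the paper goes further, growing the sampling domain incrementally to avoid over-sampling the safe region.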

  3. A Performance Evaluation of NACK-Oriented Protocols as the Foundation of Reliable Delay- Tolerant Networking Convergence Layers

    NASA Technical Reports Server (NTRS)

    Iannicca, Dennis; Hylton, Alan; Ishac, Joseph

    2012-01-01

    Delay-Tolerant Networking (DTN) is an active area of research in the space communications community. DTN uses a standard layered approach with the Bundle Protocol operating on top of transport layer protocols known as convergence layers that actually transmit the data between nodes. Several different common transport layer protocols have been implemented as convergence layers in DTN implementations including User Datagram Protocol (UDP), Transmission Control Protocol (TCP), and Licklider Transmission Protocol (LTP). The purpose of this paper is to evaluate several stand-alone implementations of negative-acknowledgment based transport layer protocols to determine how they perform in a variety of different link conditions. The transport protocols chosen for this evaluation include Consultative Committee for Space Data Systems (CCSDS) File Delivery Protocol (CFDP), Licklider Transmission Protocol (LTP), NACK-Oriented Reliable Multicast (NORM), and Saratoga. The test parameters that the protocols were subjected to are characteristic of common communications links ranging from terrestrial to cis-lunar and apply different levels of delay, line rate, and error.
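The protocols compared above (NORM, LTP, CFDP, Saratoga) share one core mechanism: the receiver reports only the sequence numbers it is missing, and the sender retransmits those gaps. A toy in-memory simulation of that repair loop, not any one protocol's wire format:

```python
# Toy NACK-oriented repair: the receiver detects gaps in the sequence
# space and "sends" NACKs listing them; the sender retransmits only the
# missing blocks. Loop until the receiver holds the full set.

def nack_repair(blocks, received):
    """blocks: dict seq -> payload at the sender; received: seqs that arrived."""
    store = {seq: blocks[seq] for seq in received}  # receiver's buffer
    while len(store) < len(blocks):
        nacks = sorted(set(blocks) - set(store))    # gap report back to sender
        for seq in nacks:                           # sender repairs the gaps
            store[seq] = blocks[seq]
    return [store[seq] for seq in sorted(store)]

data = {0: b"DTN ", 1: b"bundle ", 2: b"payload"}
print(b"".join(nack_repair(data, received=[0, 2])))  # b'DTN bundle payload'
```

NACK-based schemes suit long-delay links because the sender need not wait for per-packet acknowledgments; feedback flows only when something is lost.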

  4. Reliability of wireless sensor networks.

    PubMed

    Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo

    2014-01-01

    Wireless Sensor Networks (WSNs) consist of hundreds or thousands of sensor nodes with limited processing, storage, and battery capabilities. There are several strategies to reduce the power consumption of WSN nodes (by increasing the network lifetime) and increase the reliability of the network (by improving the WSN Quality of Service). However, there is an inherent conflict between power consumption and reliability: an increase in reliability usually leads to an increase in power consumption. For example, routing algorithms can send the same packet through different paths (multipath strategy), which is important for reliability but significantly increases the WSN power consumption. In this context, this paper proposes a model for evaluating the reliability of WSNs considering the battery level as a key factor. Moreover, this model is based on routing algorithms used by WSNs. In order to evaluate the proposed models, three scenarios were considered to show the impact of the power consumption on the reliability of WSNs. PMID:25157553
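The multipath trade-off described here is easy to quantify: independent paths act as a parallel system, so the packet is lost only if every path fails, while the energy cost grows with the number of copies sent. A sketch with illustrative numbers (not the paper's model):

```python
# Multipath delivery as a parallel system: the packet is lost only when
# all paths fail, so k copies raise reliability at k times the energy.

def multipath_delivery_prob(path_probs):
    fail = 1.0
    for p in path_probs:
        fail *= (1.0 - p)   # every path must fail to lose the packet
    return 1.0 - fail

single = [0.9]              # one transmission
multi = [0.9, 0.9, 0.9]     # three copies: triple the energy spent
print(round(multipath_delivery_prob(single), 3))  # 0.9
print(round(multipath_delivery_prob(multi), 3))   # 0.999
```

Going from 0.9 to 0.999 costs three times the transmission energy, which is exactly the reliability-versus-battery conflict the paper models.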

  5. Nuclear weapon reliability evaluation methodology

    SciTech Connect

    Wright, D.L.

    1993-06-01

    This document provides an overview of those activities that are normally performed by Sandia National Laboratories to provide nuclear weapon reliability evaluations for the Department of Energy. These reliability evaluations are first provided as a prediction of the attainable stockpile reliability of a proposed weapon design. Stockpile reliability assessments are provided for each weapon type as the weapon is fielded and are continuously updated throughout the weapon stockpile life. The reliability predictions and assessments depend heavily on data from both laboratory simulation and actual flight tests. An important part of the methodology is the set of reviews that occur throughout the entire process, which assure a consistent approach and appropriate use of the data for reliability evaluation purposes.

  6. A fourth generation reliability predictor

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Martensen, Anna L.

    1988-01-01

    A reliability/availability predictor computer program has been developed and is currently being beta-tested by over 30 US companies. The computer program is called the Hybrid Automated Reliability Predictor (HARP). HARP was developed to fill an important gap in reliability assessment capabilities. This gap was manifested through the use of its third-generation cousin, the Computer-Aided Reliability Estimation (CARE III) program, over a six-year development period and an additional three-year period during which CARE III has been in the public domain. The accumulated experience of the over 30 establishments now using CARE III was used in the development of the HARP program.

  7. US electric power system reliability

    NASA Astrophysics Data System (ADS)

    Electric energy supply, transmission and distribution systems are investigated in order to determine priorities for legislation. The status and the outlook for electric power reliability are discussed.

  8. Stirling Convertor Fasteners Reliability Quantification

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Kovacevich, Tiodor; Schreiber, Jeffrey G.

    2006-01-01

    Onboard Radioisotope Power Systems (RPS) being developed for NASA's deep-space science and exploration missions require reliable operation for up to 14 years and beyond. Stirling power conversion is a candidate for use in an RPS because it offers a multifold increase in the conversion efficiency of heat to electric power and reduced inventory of radioactive material. Structural fasteners are responsible for maintaining structural integrity of the Stirling power convertor, which is critical to ensure reliable performance during the entire mission. Design of fasteners involves variables related to the fabrication, manufacturing, behavior of fasteners and joining parts material, structural geometry of the joining components, size and spacing of fasteners, mission loads, boundary conditions, etc. These variables have inherent uncertainties, which need to be accounted for in the reliability assessment. This paper describes these uncertainties along with a methodology to quantify the reliability, and provides results of the analysis in terms of quantified reliability and sensitivity of Stirling power conversion reliability to the design variables. Quantification of the reliability includes both structural and functional aspects of the joining components. Based on the results, the paper also describes guidelines to improve the reliability and verification testing.

  9. Avionics design for reliability bibliography

    NASA Technical Reports Server (NTRS)

    1976-01-01

    A bibliography with abstracts was presented in support of AGARD lecture series No. 81. The following areas were covered: (1) program management, (2) design for high reliability, (3) selection of components and parts, (4) environment consideration, (5) reliable packaging, (6) life cycle cost, and (7) case histories.

  10. Computer-Aided Reliability Estimation

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.; Stiffler, J. J.; Bryant, L. A.; Petersen, P. L.

    1986-01-01

    CARE III (Computer-Aided Reliability Estimation, Third Generation) helps estimate reliability of complex, redundant, fault-tolerant systems. Program specifically designed for evaluation of fault-tolerant avionics systems. However, CARE III is general enough for use in evaluation of other systems as well.

  11. The Reliability of Density Measurements.

    ERIC Educational Resources Information Center

    Crothers, Charles

    1978-01-01

    Data from a land-use study of small- and medium-sized towns in New Zealand are used to ascertain the relationship between official and effective density measures. It was found that the reliability of official measures of density is very low overall, although reliability increases with community size. (Author/RLV)

  12. Photovoltaic performance and reliability workshop

    SciTech Connect

    Mrig, L.

    1993-12-01

    This workshop was the sixth in a series of workshops sponsored by NREL/DOE under the general subject of photovoltaic testing and reliability during the period 1986--1993. PV performance and PV reliability are at least as important as PV cost, if not more. In the US, PV manufacturers, DOE laboratories, electric utilities, and others are engaged in the photovoltaic reliability research and testing. This group of researchers and others interested in the field were brought together to exchange the technical knowledge and field experience as related to current information in this evolving field of PV reliability. The papers presented here reflect this effort since the last workshop held in September, 1992. The topics covered include: cell and module characterization, module and system testing, durability and reliability, system field experience, and standards and codes.

  13. Photovoltaic performance and reliability workshop

    NASA Astrophysics Data System (ADS)

    Mrig, L.

    1993-12-01

    This workshop was the sixth in a series of workshops sponsored by NREL/DOE under the general subject of photovoltaic testing and reliability during the period 1986-1993. PV performance and PV reliability are at least as important as PV cost, if not more. In the U.S., PV manufacturers, DOE laboratories, electric utilities, and others are engaged in the photovoltaic reliability research and testing. This group of researchers and others interested in the field were brought together to exchange the technical knowledge and field experience as related to current information in this evolving field of PV reliability. The papers presented here reflect this effort since the last workshop held in September, 1992. The topics covered include: cell and module characterization, module and system testing, durability and reliability, system field experience, and standards and codes.

  14. Statistical modeling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1992-01-01

    This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.

  15. 18 CFR 39.5 - Reliability Standards.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Reliability Standards... RELIABILITY STANDARDS § 39.5 Reliability Standards. (a) The Electric Reliability Organization shall file each Reliability Standard or modification to a Reliability Standard that it proposes to be made effective...

  16. 18 CFR 39.5 - Reliability Standards.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Reliability Standards... RELIABILITY STANDARDS § 39.5 Reliability Standards. (a) The Electric Reliability Organization shall file each Reliability Standard or modification to a Reliability Standard that it proposes to be made effective...

  17. Calculating system reliability with SRFYDO

    SciTech Connect

    Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.

  18. Reliability analysis of interdependent lattices

    NASA Astrophysics Data System (ADS)

    Limiao, Zhang; Daqing, Li; Pengju, Qin; Bowen, Fu; Yinan, Jiang; Zio, Enrico; Rui, Kang

    2016-06-01

    Network reliability analysis has drawn much attention recently due to the risks of catastrophic damage in networked infrastructures. These infrastructures are dependent on each other as a result of various interactions. However, most of the reliability analyses of these interdependent networks do not consider spatial constraints, which are found important for robustness of infrastructures including power grid and transport systems. Here we study the reliability properties of interdependent lattices with different ranges of spatial constraints. Our study shows that interdependent lattices with strong spatial constraints are more resilient than interdependent Erdös-Rényi networks. There exists an intermediate range of spatial constraints, at which the interdependent lattices have minimal resilience.
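The resilience measurements described above can be mimicked with a toy percolation experiment: remove a random fraction of nodes from a square lattice and measure the largest connected cluster that survives. This is an illustrative single-lattice sketch, without the interdependence couplings studied in the paper:

```python
# Toy lattice-robustness experiment: knock out a random fraction of
# nodes in an n x n grid and report the largest surviving connected
# cluster as a fraction of all sites. Parameters are illustrative.
import random

def largest_cluster_fraction(n, removal_fraction, seed=0):
    rng = random.Random(seed)
    alive = {(i, j) for i in range(n) for j in range(n)
             if rng.random() >= removal_fraction}
    best, seen = 0, set()
    for start in alive:                 # flood-fill each unvisited cluster
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            i, j = stack.pop()
            size += 1
            for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if nb in alive and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        best = max(best, size)
    return best / (n * n)

# Mild damage leaves the lattice largely connected; heavy damage fragments it.
print(largest_cluster_fraction(50, 0.1))
print(largest_cluster_fraction(50, 0.6))
```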

  19. Reliability analysis in intelligent machines

    NASA Technical Reports Server (NTRS)

    Mcinroy, John E.; Saridis, George N.

    1990-01-01

    Given an explicit task to be executed, an intelligent machine must be able to find the probability of success, or reliability, of alternative control and sensing strategies. By using concepts from information theory and reliability theory, new techniques for finding the reliability corresponding to alternative subsets of control and sensing strategies are proposed such that a desired set of specifications can be satisfied. The analysis is straightforward, provided that a set of Gaussian random state variables is available. An example problem illustrates the technique, and general reliability results are presented for visual servoing with a computed torque-control algorithm. Moreover, the example illustrates the principle of increasing precision with decreasing intelligence at the execution level of an intelligent machine.

  20. Reliability and Maintainability (RAM) Training

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Packard, Michael H. (Editor)

    2000-01-01

    The theme of this manual is failure physics: the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low-cost reliable products. In a broader sense the manual should do more. It should underscore the urgent need for mature attitudes toward reliability. Five of the chapters were originally presented as a classroom course to over 1000 Martin Marietta engineers and technicians. Another four chapters and three appendixes have been added. We begin with a view of reliability from the years 1940 to 2000. Chapter 2 starts the training material with a review of mathematics and a description of what elements contribute to product failures. The remaining chapters elucidate basic reliability theory and the disciplines that allow us to control and eliminate failures.

  1. An experiment in software reliability

    NASA Technical Reports Server (NTRS)

    Dunham, J. R.; Pierce, J. L.

    1986-01-01

    The results of a software reliability experiment conducted in a controlled laboratory setting are reported. The experiment was undertaken to gather data on software failures and is one in a series of experiments being pursued by the Fault Tolerant Systems Branch of NASA Langley Research Center to find a means of credibly performing reliability evaluations of flight control software. The experiment tests a small sample of implementations of radar tracking software having ultra-reliability requirements and uses n-version programming for error detection, and repetitive run modeling for failure and fault rate estimation. The experiment results agree with those of Nagel and Skrivan in that the program error rates suggest an approximate log-linear pattern and the individual faults occurred with significantly different error rates. Additional analysis of the experimental data raises new questions concerning the phenomenon of interacting faults. This phenomenon may provide one explanation for software reliability decay.

  2. Failure Analysis for Improved Reliability

    NASA Technical Reports Server (NTRS)

    Sood, Bhanu

    2016-01-01

    Outline: Section 1 - What is reliability and root cause? Section 2 - Overview of failure mechanisms. Section 3 - Failure analysis techniques (1. Non destructive analysis techniques, 2. Destructive Analysis, 3. Materials Characterization). Section 4 - Summary and Closure

  3. GaAs Reliability Database

    NASA Technical Reports Server (NTRS)

    Sacco, T.; Gonzalez, S.; Kayali, S.

    1993-01-01

    The database consists of two main sections, the data references and the device reliability records. The reference section contains 8 fields: reference number, date of publication, authors, article title, publisher, volume, and page numbers.

  4. Photovoltaics Performance and Reliability Workshop

    NASA Astrophysics Data System (ADS)

    Mrig, L.

    This document consists of papers and viewgraphs compiled from the proceedings of a workshop held in September 1992. This workshop was the fifth in a series sponsored by NREL/DOE under the general subject areas of photovoltaic module testing and reliability. PV manufacturers, DOE laboratories, electric utilities, and others exchanged technical knowledge and field experience. The topics of cell and module characterization, module and system performance, materials and module durability/reliability research, solar radiation, and applications are discussed.

  5. Accelerator Availability and Reliability Issues

    SciTech Connect

    Steve Suhring

    2003-05-01

    Maintaining reliable machine operations for existing machines as well as planning for future machines' operability present significant challenges to those responsible for system performance and improvement. Changes to machine requirements and beam specifications often reduce overall machine availability in an effort to meet user needs. Accelerator reliability issues from around the world will be presented, followed by a discussion of the major factors influencing machine availability.

  6. Robust fusion with reliabilities weights

    NASA Astrophysics Data System (ADS)

    Grandin, Jean-Francois; Marques, Miguel

    2002-03-01

    Reliability is a measure of the degree of trust in a given measurement. We analyze and compare: ML (classical Maximum Likelihood), MLE (Maximum Likelihood weighted by Entropy), MLR (Maximum Likelihood weighted by Reliability), MLRE (Maximum Likelihood weighted by Reliability and Entropy), DS (Credibility Plausibility), and DSR (DS weighted by reliabilities). The analysis is based on a model of a dynamical fusion process composed of three sensors, each with its own discriminatory capacity, reliability rate, unknown bias, and measurement noise. The knowledge of uncertainties is also severely corrupted in order to analyze the robustness of the different fusion operators. Two sensor models are used: the first type of sensor estimates the probability of each elementary hypothesis (probabilistic masses); the second type delivers masses on unions of elementary hypotheses (DS masses). In the second case, probabilistic reasoning abusively shares the mass between elementary hypotheses. Compared to the classical ML or DS, which achieve just 50% correct classification in some experiments, DSR, MLE, MLR, and MLRE show very good performance in all experiments (more than 80% correct classification). The experiments were performed with large variations of the reliability coefficients for each sensor (from 0 to 1) and with large variations in the knowledge of these coefficients (from 0 to 0.8). All four operators show good robustness, but MLR proves uniformly dominant across the experiments in the Bayesian case and achieves the best mean performance under incomplete a priori information.
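    The reliability-weighted likelihood fusion (MLR) compared above can be sketched in a few lines; the sensor probabilities and reliability coefficients below are illustrative values, not data from the paper, and `mlr_fuse` is a hypothetical helper name.

    ```python
    import math

    def mlr_fuse(sensor_probs, reliabilities):
        """Maximum Likelihood weighted by Reliability (MLR): each sensor's
        likelihood is raised to the power of its reliability coefficient,
        so an unreliable sensor (r near 0) contributes almost nothing."""
        n_hyp = len(sensor_probs[0])
        log_l = [sum(r * math.log(max(p[h], 1e-12))
                     for p, r in zip(sensor_probs, reliabilities))
                 for h in range(n_hyp)]
        # normalize back to probabilities (stable log-sum-exp form)
        m = max(log_l)
        w = [math.exp(v - m) for v in log_l]
        s = sum(w)
        return [v / s for v in w]

    # Two sensors over three hypotheses; sensor 2 is barely reliable.
    probs = [[0.7, 0.2, 0.1],   # reliable sensor favors hypothesis 0
             [0.1, 0.1, 0.8]]   # unreliable sensor favors hypothesis 2
    fused = mlr_fuse(probs, reliabilities=[0.9, 0.1])
    print(max(range(3), key=lambda h: fused[h]))  # hypothesis 0 wins
    ```

    Down-weighting the unreliable sensor keeps its confident but untrustworthy vote for hypothesis 2 from overturning the reliable sensor's preference.
    
    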

  7. Reliability measure for segmenting algorithms

    NASA Astrophysics Data System (ADS)

    Alvarez, Robert E.

    2004-05-01

    Segmenting is a key initial step in many computer-aided detection (CAD) systems. Our purpose is to develop a method to estimate the reliability of segmenting algorithm results. We use a statistical shape model computed using principal component analysis. The model retains a small number of eigenvectors, or modes, that represent a large fraction of the variance. The residuals between the segmenting result and its projection into the space of retained modes are computed. The sum of the squares of residuals is transformed to a zero-mean, unit-standard-deviation Gaussian random variable. We also use the standardized scale parameter. The reliability measure is the probability that the transformed residuals and scale parameter are greater than the absolute value of the observed values. We tested the reliability measure with thirty chest x-ray images using "leave-one-out" testing. The Gaussian assumption was verified using normal probability plots. For each image, a statistical shape model was computed from the hand-digitized data of the rest of the images in the training set. The residuals and scale parameter from the automated segmenting results for the image were used to compute the reliability measure in each case. The reliability measure was significantly lower for two images in the training set with unusual lung fields or processing errors. The data and Matlab scripts for reproducing the figures are at http://www.aprendtech.com/papers/relmsr.zip. Errors detected by the new reliability measure can be used to adjust processing or warn the user.
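    The residual computation underlying this measure can be sketched as follows, assuming the retained modes are orthonormal; the toy contour model and shape vectors are invented for illustration, and `residual_score` is a hypothetical helper, not the paper's Matlab code.

    ```python
    def residual_score(x, mean, modes):
        """Project a segmented shape vector onto the retained PCA modes
        and return the sum of squared residuals: a large residual means
        the result lies outside the learned shape space, so the
        segmentation is suspect."""
        d = [xi - mi for xi, mi in zip(x, mean)]
        recon = [0.0] * len(x)
        for mode in modes:  # modes are orthonormal eigenvectors
            coeff = sum(di * ei for di, ei in zip(d, mode))
            recon = [ri + coeff * ei for ri, ei in zip(recon, mode)]
        return sum((di - ri) ** 2 for di, ri in zip(d, recon))

    # Toy 4-point contour model with one retained mode.
    mean  = [0.0, 0.0, 1.0, 1.0]
    modes = [[0.5, 0.5, 0.5, 0.5]]          # unit-norm mode
    good  = [0.2, 0.2, 1.2, 1.2]            # lies along the mode
    bad   = [1.0, -1.0, 1.0, 1.0]           # off-model shape
    print(residual_score(good, mean, modes) < residual_score(bad, mean, modes))  # True
    ```
    
    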

  8. MEMS reliability: coming of age

    NASA Astrophysics Data System (ADS)

    Douglass, Michael R.

    2008-02-01

    In today's high-volume semiconductor world, one could easily take reliability for granted. As the MOEMS/MEMS industry continues to establish itself as a viable alternative to conventional manufacturing in the macro world, reliability can be of high concern. Currently, there are several emerging market opportunities in which MOEMS/MEMS is gaining a foothold. Markets such as mobile media, consumer electronics, biomedical devices, and homeland security are all showing great interest in microfabricated products. At the same time, these markets are among the most demanding when it comes to reliability assurance. To be successful, each company developing a MOEMS/MEMS device must consider reliability on an equal footing with cost, performance and manufacturability. What can this maturing industry learn from the successful development of DLP technology, air bag accelerometers and inkjet printheads? This paper discusses some basic reliability principles which any MOEMS/MEMS device development must use. Examples from the commercially successful and highly reliable Digital Micromirror Device complement the discussion.

  9. Reliability of BGA Packages for Highly Reliable Application and Chip Scale Package Board Level Reliability

    NASA Technical Reports Server (NTRS)

    Ghaffarian, Reza

    1997-01-01

    Different aspects of advanced surface mount package technology have been investigated for aerospace applications. Three key areas included understanding the assembly reliability behavior of conventional surface mount, ball grid array (BGA), and chip scale packages.

  10. 18 CFR 39.5 - Reliability Standards.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Reliability Standards... RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE ESTABLISHMENT, APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.5 Reliability Standards. (a) The Electric Reliability Organization shall file...

  11. 18 CFR 39.11 - Reliability reports.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Reliability reports. 39... RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE ESTABLISHMENT, APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.11 Reliability reports. (a) The Electric Reliability Organization shall...

  12. Assessment of NDE reliability data

    NASA Technical Reports Server (NTRS)

    Yee, B. G. W.; Couchman, J. C.; Chang, F. H.; Packman, D. F.

    1975-01-01

    Twenty sets of relevant nondestructive test (NDT) reliability data were identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations was formulated, and a model to grade the quality and validity of the data sets was developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, were formulated for each NDE method. A comprehensive computer program was written and debugged to calculate the probability of flaw detection at several confidence limits by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. An example of the calculated reliability of crack detection in bolt holes by an automatic eddy current method is presented.
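    The binomial probability-of-detection calculation described here can be sketched as a one-sided lower confidence bound on POD; this bisection-based version is a minimal stand-in for the report's actual program, and the crack-detection counts are illustrative.

    ```python
    from math import comb

    def pod_lower_bound(detections, trials, confidence=0.95, tol=1e-9):
        """One-sided lower confidence bound on probability of detection
        (POD) from binomial data: the largest p such that observing
        `detections` or more successes in `trials` still has probability
        at least (1 - confidence)."""
        if detections == 0:
            return 0.0
        alpha = 1.0 - confidence
        def tail(p):  # P(X >= detections) for X ~ Binomial(trials, p)
            return sum(comb(trials, i) * p**i * (1 - p)**(trials - i)
                       for i in range(detections, trials + 1))
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if tail(mid) < alpha:
                lo = mid
            else:
                hi = mid
        return lo

    # 29 detections in 29 trials: the classic zero-miss demonstration
    # of roughly 90% POD at 95% confidence.
    print(round(pod_lower_bound(29, 29, 0.95), 3))  # 0.902
    ```
    
    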

  13. Electronics reliability and measurement technology

    NASA Technical Reports Server (NTRS)

    Heyman, Joseph S. (Editor)

    1987-01-01

    A summary is presented of the Electronics Reliability and Measurement Technology Workshop. The meeting examined the U.S. electronics industry with particular focus on reliability and state-of-the-art technology. A general consensus of the approximately 75 attendees was that "the U.S. electronics industries are facing a crisis that may threaten their existence". The workshop had specific objectives to discuss mechanisms to improve areas such as reliability, yield, and performance while reducing failure rates, delivery times, and cost. The findings of the workshop addressed various aspects of the industry from wafers to parts to assemblies. Key problem areas that were singled out for attention are identified, and action items necessary to accomplish their resolution are recommended.

  14. A Review of Score Reliability: Contemporary Thinking on Reliability Issues

    ERIC Educational Resources Information Center

    Rosen, Gerald A.

    2004-01-01

    Bruce Thompson's edited volume begins with a basic principle, one might call it a basic truth: "reliability is a property that applies to scores, and not immutably across all conceivable uses everywhere of a given measure" (p. 3). The author claims that this principle is little known and/or little understood. While that is an arguable point, the…

  15. Reliability in the design phase

    SciTech Connect

    Siahpush, A.S.; Hills, S.W.; Pham, H.; Majumdar, D.

    1991-12-01

    A study was performed to determine the common methods and tools that are available to calculate or predict a system's reliability. A literature review and software survey are included. The desired product of this developmental work is a tool for the system designer to use in the early design phase so that the final design will achieve the desired system reliability without lengthy testing and rework. Three computer programs were written which provide the first attempt at fulfilling this need. The programs are described and a case study is presented for each one. This is a continuing effort which will be furthered in FY-1992. 10 refs.

  17. Photovoltaic power system reliability considerations

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.

    1980-01-01

    An example of how modern engineering and safety techniques can be used to assure the reliable and safe operation of photovoltaic power systems is presented. This particular application is for a solar cell power system demonstration project designed to provide electric power requirements for remote villages. The techniques utilized involve a definition of the power system natural and operating environment, use of design criteria and analysis techniques, an awareness of potential problems via the inherent reliability and FMEA methods, and use of fail-safe and planned spare parts engineering philosophy.

  18. Metrological Reliability of Medical Devices

    NASA Astrophysics Data System (ADS)

    Costa Monteiro, E.; Leon, L. F.

    2015-02-01

    The prominent development of health technologies of the 20th century triggered demands for metrological reliability of physiological measurements comprising physical, chemical and biological quantities, essential to ensure accurate and comparable results of clinical measurements. In the present work, aspects concerning metrological reliability in premarket and postmarket assessments of medical devices are discussed, pointing out challenges to be overcome. In addition, considering the social relevance of the biomeasurements results, Biometrological Principles to be pursued by research and innovation aimed at biomedical applications are proposed, along with the analysis of their contributions to guarantee the innovative health technologies compliance with the main ethical pillars of Bioethics.

  19. Reliability growth models for NASA applications

    NASA Technical Reports Server (NTRS)

    Taneja, Vidya S.

    1991-01-01

    The objective of any reliability growth study is prediction of reliability at some future instant. Another objective is statistical inference, estimation of reliability for reliability demonstration. A cause of concern for the development engineer and management is that reliability demands an excessive number of tests for reliability demonstration. For example, the Space Transportation Main Engine (STME) program requirements call for .99 reliability at 90 pct. confidence for demonstration. This requires running 230 tests with zero failure if a classical binomial model is used. It is therefore also an objective to explore the reliability growth models for reliability demonstration and tracking and their applicability to NASA programs. A reliability growth model is an analytical tool used to monitor the reliability progress during the development program and to establish a test plan to demonstrate an acceptable system reliability.
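    The 230-test figure quoted above follows directly from the classical binomial model with zero failures: the smallest n such that R^n falls to 1 - C. A minimal sketch (the function name is mine):

    ```python
    import math

    def zero_failure_tests(reliability, confidence):
        """Number of consecutive failure-free tests needed to demonstrate
        `reliability` at `confidence` under a classical binomial model:
        the smallest n with reliability**n <= 1 - confidence."""
        return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

    # STME requirement: 0.99 reliability at 90% confidence.
    print(zero_failure_tests(0.99, 0.90))  # 230
    ```

    The rapid growth of this count as the reliability target rises is exactly why the abstract turns to reliability growth models instead of pure demonstration testing.
    
    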

  20. The Reliability of College Grades

    ERIC Educational Resources Information Center

    Beatty, Adam S.; Walmsley, Philip T.; Sackett, Paul R.; Kuncel, Nathan R.; Koch, Amanda J.

    2015-01-01

    Little is known about the reliability of college grades relative to how prominently they are used in educational research, and the results to date tend to be based on small sample studies or are decades old. This study uses two large databases (N > 800,000) from over 200 educational institutions spanning 13 years and finds that both first-year…

  1. Web Awards: Are They Reliable?

    ERIC Educational Resources Information Center

    Everhart, Nancy; McKnight, Kathleen

    1997-01-01

    School library media specialists recommend quality Web sites to children based on evaluations and Web awards. This article examines three types of Web awards and who grants them, suggests ways to determine their reliability, and discusses specific award sites. Includes a bibliography of Web sites. (PEN)

  2. Reliability Analysis of Money Habitudes

    ERIC Educational Resources Information Center

    Delgadillo, Lucy M.; Bushman, Brittani S.

    2015-01-01

    Use of the Money Habitudes exercise has gained popularity among various financial professionals. This article reports on the reliability of this resource. A survey administered to young adults at a western state university was conducted, and each Habitude or "domain" was analyzed using Cronbach's alpha procedures. Results showed all six…

  3. Wind turbine reliability database update.

    SciTech Connect

    Peters, Valerie A.; Hill, Roger Ray; Stinebaugh, Jennifer A.; Veers, Paul S.

    2009-03-01

    This report documents the status of the Sandia National Laboratories' Wind Plant Reliability Database. Included in this report are updates on the form and contents of the Database, which stems from a five-step process of data partnerships, data definition and transfer, data formatting and normalization, analysis, and reporting. Selected observations are also reported.

  4. Averaging Internal Consistency Reliability Coefficients

    ERIC Educational Resources Information Center

    Feldt, Leonard S.; Charter, Richard A.

    2006-01-01

    Seven approaches to averaging reliability coefficients are presented. Each approach starts with a unique definition of the concept of "average," and no approach is more correct than the others. Six of the approaches are applicable to internal consistency coefficients. The seventh approach is specific to alternate-forms coefficients. Although the…

  5. Photovoltaic performance and reliability workshop

    SciTech Connect

    Kroposki, B

    1996-10-01

    This proceedings is the compilation of papers presented at the ninth PV Performance and Reliability Workshop held at the Sheraton Denver West Hotel on September 4-6, 1996. This year's workshop included presentations from 25 speakers and had over 100 attendees. All of the presentations that were given are included in this proceedings. Topics of the papers included: defining service lifetime and developing models for PV module lifetime; examining and determining failure and degradation mechanisms in PV modules; combining IEEE/IEC/UL testing procedures; AC module performance and reliability testing; inverter reliability/qualification testing; standardization of utility interconnect requirements for PV systems; needed activities to separate variables by testing individual components of PV systems (e.g., cells, modules, batteries, inverters, charge controllers) for individual reliability and then testing them in actual system configurations; more results reported from field experience on modules, inverters, batteries, and charge controllers from field-deployed PV systems; and system certification and standardized testing for stand-alone and grid-tied systems.

  6. Wanted: A Solid, Reliable PC

    ERIC Educational Resources Information Center

    Goldsborough, Reid

    2004-01-01

    This article discusses PC reliability, one of the most pressing issues regarding computers. Nearly a quarter century after the introduction of the first IBM PC and the outset of the personal computer revolution, PCs have largely become commodities, with little differentiating one brand from another in terms of capability and performance. Most of…

  7. Discourse Analysis Procedures: Reliability Issues.

    ERIC Educational Resources Information Center

    Hux, Karen; And Others

    1997-01-01

    A study evaluated and compared four methods of assessing reliability on one discourse analysis procedure--a modified version of Damico's Clinical Discourse Analysis. The methods were Pearson product-moment correlations; interobserver agreement; Cohen's kappa; and generalizability coefficients. The strengths and weaknesses of the methods are…

  8. Compound estimation procedures in reliability

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1990-01-01

    At NASA, components and subsystems of components in the Space Shuttle and Space Station generally go through a number of redesign stages. While data on failures for various design stages are sometimes available, the classical procedures for evaluating reliability only utilize the failure data on the present design stage of the component or subsystem. Often, few or no failures have been recorded on the present design stage. Previously, Bayesian estimators for the reliability of a single component, conditioned on the failure data for the present design, were developed. These new estimators permit NASA to evaluate the reliability, even when few or no failures have been recorded. Point estimates for the latter evaluation were not possible with the classical procedures. Since different design stages of a component (or subsystem) generally have a good deal in common, the development of new statistical procedures for evaluating the reliability, which consider the entire failure record for all design stages, has great intuitive appeal. A typical subsystem consists of a number of different components and each component has evolved through a number of redesign stages. The present investigations considered compound estimation procedures and related models. Such models permit the statistical consideration of all design stages of each component and thus incorporate all the available failure data to obtain estimates for the reliability of the present version of the component (or subsystem). A number of models were considered to estimate the reliability of a component conditioned on its total failure history from two design stages. It was determined that reliability estimators for the present design stage, conditioned on the complete failure history for two design stages have lower risk than the corresponding estimators conditioned only on the most recent design failure data. Several models were explored and preliminary models involving bivariate Poisson distribution and the

  9. Power Quality and Reliability Project

    NASA Technical Reports Server (NTRS)

    Attia, John O.

    2001-01-01

    One area where universities and industry can link is in the area of power systems reliability and quality, key concepts in the commercial, industrial, and public-sector engineering environments. Prairie View A&M University (PVAMU) has established a collaborative relationship with the University of Texas at Arlington (UTA), NASA/Johnson Space Center (JSC), and EP&C Engineering and Technology Group (EP&C), a small disadvantaged business that specializes in power quality and engineering services. The primary goal of this collaboration is to facilitate the development and implementation of a Strategic Integrated Power Systems Reliability and Curriculum Enhancement Program. The objectives of the first phase of this work are: (a) to develop a course in power quality and reliability, (b) to use the campus of Prairie View A&M University as a laboratory for the study of systems reliability and quality issues, and (c) to provide students with NASA/EP&C shadowing and internship experience. In this work, a course titled "Reliability Analysis of Electrical Facilities" was developed and taught for two semesters. About thirty-seven students have benefited directly from this course. A laboratory accompanying the course was also developed. Four facilities at Prairie View A&M University were surveyed. Some tests that were performed are (i) earth-ground testing, (ii) voltage, amperage, and harmonics of various panels in the buildings, (iii) checking the wire sizes to see if they were the right size for the load they were carrying, (iv) vibration tests to assess the status of the engines or chillers and water pumps, and (v) infrared testing to test for arcing or misfiring of electrical or mechanical systems.

  10. Reliability Evaluation of Passive Systems Through Functional Reliability Assessment

    SciTech Connect

    Burgazzi, Luciano

    2003-11-15

    A methodology to quantify the reliability of passive safety systems proposed for use in advanced reactor design is developed. Passive systems are identified as systems that do not need any external input or energy to operate, relying only upon natural physical laws (e.g., gravity, natural circulation, heat conduction, internally stored energy) and/or intelligent use of the energy inherently available in the system (e.g., chemical reaction, decay heat). The reliability of a passive system refers to the ability of the system to carry out the required function under the prevailing conditions when required: the passive system may fail its mission, in addition to the classical mechanical failure of its components, through deviation from the expected behavior due to physical phenomena or to different boundary and initial conditions. The present research activity is aimed at the reliability estimation of type B passive systems (i.e., those implementing moving working fluids; see IAEA); the selected system is a loop operating in natural circulation, including a heat source and a heat sink. The functional reliability concept, defined as the probability of performing the required mission, is introduced, and the R-S (Resistance-Stress) model taken from fracture mechanics is adopted. R and S are coined as expressions of functional Requirement and system State. The water mass flow circulating through the system is taken as the parameter defining the passive system performance, and probability distribution functions (pdfs) are assigned to both the R and S quantities; the mission of the passive system then defines which parameter values are considered a failure by comparing the corresponding pdfs according to a defined safety criterion. The methodology, its application, and the results of the analysis are presented and discussed.
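    Under the simplifying assumption that the functional Requirement R and the system State S (here, circulating mass flow) are independent normal variables, the functional reliability P(S > R) has a closed form; the numbers below are illustrative, not taken from the paper.

    ```python
    import math

    def functional_reliability(mu_s, sigma_s, mu_r, sigma_r):
        """P(S > R) for independent normal State S and Requirement R:
        the probability that delivered performance meets the mission
        requirement, Phi((mu_S - mu_R) / sqrt(sigma_S^2 + sigma_R^2))."""
        z = (mu_s - mu_r) / math.sqrt(sigma_s**2 + sigma_r**2)
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    # Illustrative numbers: required mass flow 80 +/- 5 kg/s vs.
    # delivered natural-circulation flow 100 +/- 8 kg/s.
    print(round(functional_reliability(100, 8, 80, 5), 3))
    ```

    In the general case the two pdfs need not be normal, and the overlap integral is evaluated numerically; the normal case above just makes the R-S comparison concrete.
    
    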

  11. 77 FR 26686 - Transmission Planning Reliability Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-07

    ... Standard TPL-002-0b, submitted by the North American Electric Reliability Corporation (NERC), the...) Reliability Standard TPL-002-0b, submitted by the North American Electric Reliability Corporation (NERC), the... Reliability Standards, Notice of Proposed Rulemaking, 76 FR 66229 (Oct. 20, 2011), FERC Stats. & Regs. ...

  12. Gearbox Reliability Collaborative Bearing Calibration

    SciTech Connect

    van Dam, J.

    2011-10-01

    NREL has initiated the Gearbox Reliability Collaborative (GRC) to investigate the root cause of the low wind turbine gearbox reliability. The GRC follows a multi-pronged approach based on a collaborative of manufacturers, owners, researchers and consultants. The project combines analysis, field testing, dynamometer testing, condition monitoring, and the development and population of a gearbox failure database. At the core of the project are two 750kW gearboxes that have been redesigned and rebuilt so that they are representative of the multi-megawatt gearbox topology currently used in the industry. These gearboxes are heavily instrumented and are tested in the field and on the dynamometer. This report discusses the bearing calibrations of the gearboxes.

  13. On-orbit spacecraft reliability

    NASA Technical Reports Server (NTRS)

    Bloomquist, C.; Demars, D.; Graham, W.; Henmi, P.

    1978-01-01

    Operational and historic data for 350 spacecraft from 52 U.S. space programs were analyzed for on-orbit reliability. Failure rate estimates are made for on-orbit operation of spacecraft subsystems, components, and piece parts, as well as estimates of failure probability for the same elements during launch. Confidence intervals for both parameters are also given. The results indicate that: (1) the success of spacecraft operation is only slightly affected by most reported incidents of anomalous behavior; (2) the occurrence of the majority of anomalous incidents could have been prevented prior to launch; (3) no detrimental effect of spacecraft dormancy is evident; (4) cycled components in general are not demonstrably less reliable than uncycled components; and (5) application of product assurance elements is conducive to spacecraft success.

  14. Three approaches to reliability analysis

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.

    1989-01-01

    It is noted that current reliability analysis tools differ not only in their solution techniques, but also in their approach to model abstraction. The analyst must be satisfied with the constraints that are intrinsic to any combination of solution technique and model abstraction. To get a better idea of the nature of these constraints, three reliability analysis tools (HARP, ASSIST/SURE, and CAME) were used to model portions of the Integrated Airframe/Propulsion Control System architecture. When presented with the example problem, all three tools failed to produce correct results. In all cases, either the tool or the model had to be modified. It is suggested that most of the difficulty is rooted in the large model size and long computational times which are characteristic of Markov model solutions.

  15. Assessment of NDE Reliability Data

    NASA Technical Reports Server (NTRS)

    Yee, B. G. W.; Chang, F. H.; Couchman, J. C.; Lemon, G. H.; Packman, P. F.

    1976-01-01

    Twenty sets of relevant Nondestructive Evaluation (NDE) reliability data have been identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations has been formulated. A model to grade the quality and validity of the data sets has been developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, have been formulated for each NDE method. A comprehensive computer program has been written to calculate the probability of flaw detection at several confidence levels by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. Probability of detection curves at 95 and 50 percent confidence levels have been plotted for individual sets of relevant data as well as for several sets of merged data with common sets of NDE parameters.

  16. What makes a family reliable?

    NASA Technical Reports Server (NTRS)

    Williams, James G.

    1992-01-01

    Asteroid families are clusters of asteroids in proper element space which are thought to be fragments from former collisions. Studies of families promise to improve understanding of large collision events and a large event can open up the interior of a former parent body to view. While a variety of searches for families have found the same heavily populated families, and some searches have found the same families of lower population, there is much apparent disagreement between proposed families of lower population of different investigations. Indicators of reliability, factors compromising reliability, an illustration of the influence of different data samples, and a discussion of how several investigations perceived families in the same region of proper element space are given.

  17. Reliability Research for Photovoltaic Modules

    NASA Technical Reports Server (NTRS)

    Ross, Ronald J., Jr.

    1986-01-01

    Report describes research approach used to improve reliability of photovoltaic modules. Aimed at raising useful module lifetime to 20 to 30 years. Development of cost-effective solutions to module-lifetime problem requires compromises between degradation rates, failure rates, and lifetimes, on one hand, and costs of initial manufacture, maintenance, and lost energy, on other hand. Life-cycle costing integrates disparate economic terms, allowing cost effectiveness to be quantified, allowing comparison of different design alternatives.

  18. Defining Requirements for Improved Photovoltaic System Reliability

    SciTech Connect

    Maish, A.B.

    1998-12-21

    Reliable systems are an essential ingredient of any technology progressing toward commercial maturity and large-scale deployment. This paper defines reliability as meeting system functional requirements, and then develops a framework to understand and quantify photovoltaic system reliability based on initial and ongoing costs and system value. The core elements necessary to achieve reliable PV systems are reviewed. These include appropriate system design, satisfactory component reliability, and proper installation and servicing. Reliability status, key issues, and present needs in system reliability are summarized for four application sectors.

  19. Reliable and robust entanglement witness

    NASA Astrophysics Data System (ADS)

    Yuan, Xiao; Mei, Quanxin; Zhou, Shan; Ma, Xiongfeng

    2016-04-01

    Entanglement, a critical resource for quantum information processing, needs to be witnessed in many practical scenarios. Theoretically, witnessing entanglement is done by measuring a special Hermitian observable, called an entanglement witness (EW), which has non-negative expected outcomes for all separable states but can have negative expectations for certain entangled states. In practice, an EW implementation may suffer from two problems. The first one is reliability. Due to unreliable realization devices, a separable state could be falsely identified as an entangled one. The second problem relates to robustness. A witness may not be optimal for a target state and fail to identify its entanglement. To overcome the reliability problem, we employ a recently proposed measurement-device-independent entanglement witness scheme, in which the correctness of the conclusion is independent of the implemented measurement devices. In order to overcome the robustness problem, we optimize the EW to draw a better conclusion given certain experimental data. With the proposed EW scheme, where only data postprocessing needs to be modified compared to the original measurement-device-independent scheme, one can efficiently take advantage of the measurement results to draw maximally reliable conclusions.
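
    As a hedged illustration of the basic witness idea (not the measurement-device-independent scheme of the paper), the sketch below builds the textbook witness W = I/2 - |Phi+><Phi+| and shows that its expectation is negative on the Bell state but non-negative on a separable state.

```python
import numpy as np

# Textbook entanglement witness W = I/2 - |Phi+><Phi+|: its expectation
# Tr(W rho) is >= 0 for every separable two-qubit state (the maximum
# overlap of a separable state with a Bell state is 1/2), but negative
# for states close to |Phi+>.

phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # Bell state |Phi+>
W = np.eye(4) / 2 - np.outer(phi_plus, phi_plus)          # witness operator

def expectation(witness, rho):
    """Expected outcome Tr(W rho) of measuring the witness on state rho."""
    return float(np.real(np.trace(witness @ rho)))

rho_bell = np.outer(phi_plus, phi_plus)    # maximally entangled state
ket00 = np.array([1.0, 0.0, 0.0, 0.0])
rho_sep = np.outer(ket00, ket00)           # separable product state |00><00|

print(expectation(W, rho_bell))   # -0.5 -> entanglement witnessed
print(expectation(W, rho_sep))    #  0.0 -> no violation, as required
```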

  20. Reliability in individual monitoring service.

    PubMed

    Mod Ali, N

    2011-03-01

    As a laboratory certified to ISO 9001:2008 and accredited to ISO/IEC 17025, the Secondary Standard Dosimetry Laboratory (SSDL)-Nuclear Malaysia has incorporated an overall comprehensive system for technical and quality management in promoting a reliable individual monitoring service (IMS). Faster identification and resolution of issues regarding dosemeter preparation and issuing of reports, personnel enhancement, improved customer satisfaction and overall efficiency of laboratory activities are all results of the implementation of an effective quality system. Review of these measures and responses to observed trends provide continuous improvement of the system. By having these mechanisms, reliability of the IMS can be assured in the promotion of safe behaviour at all levels of the workforce utilising ionising radiation facilities. Upgradation of in the reporting program through a web-based e-SSDL marks a major improvement in Nuclear Malaysia's IMS reliability on the whole. The system is a vital step in providing a user friendly and effective occupational exposure evaluation program in the country. It provides a higher level of confidence in the results generated for occupational dose monitoring of the IMS, thus, enhances the status of the radiation protection framework of the country. PMID:21147789

  1. A critical evaluation of GGA + U modeling for atomic, electronic and magnetic structure of Cr2AlC, Cr2GaC and Cr2GeC

    NASA Astrophysics Data System (ADS)

    Dahlqvist, M.; Alling, B.; Rosen, J.

    2015-03-01

    In this work we critically evaluate methods for treating electron correlation effects in multicomponent carbides using a GGA + U framework, addressing doubts from previous works on the usability of density functional theory in the design of magnetic MAX phases. We have studied the influence of the Hubbard U-parameter, applied to Cr 3d orbitals, on the calculated lattice parameters, magnetic moments, magnetic order, bulk modulus and electronic density of states of Cr2AlC, Cr2GaC and Cr2GeC. By considering non-, ferro-, and five different antiferromagnetic spin configurations, we show the importance of including a broad range of magnetic orders in the search for MAX phases with finite magnetic moments in the ground state. We show that when electron correlation is treated on the level of the generalized gradient approximation (U = 0 eV), the magnetic ground state of Cr2AC (A = Al, Ga, Ge) is in-plane antiferromagnetic with finite Cr local moments, and calculated lattice parameters and bulk modulus close to experimentally reported values. By comparing GGA and GGA + U results with experimental data we find that using a U-value larger than 1 eV results in structural parameters deviating strongly from experimentally observed values. Comparisons are also done with hybrid functional calculations (HSE06) resulting in an exchange splitting larger than what is obtained for a U-value of 2 eV. Our results suggest caution: investigations need to involve several different magnetic orders before a lack of magnetism in calculations is blamed on the exchange-correlation approximations in this class of magnetic MAX phases.

  2. Reliability-based casing design

    SciTech Connect

    Maes, M.A.; Gulati, K.C.; Johnson, R.C.; McKenna, D.L.; Brand, P.R.; Lewis, D.B.

    1995-06-01

    The present paper describes the development of reliability-based design criteria for oil and/or gas well casing/tubing. The approach is based on the fundamental principles of limit state design. Limit states for tubulars are discussed and specific techniques for the stochastic modeling of loading and resistance variables are described. Zonation methods and calibration techniques are developed which are geared specifically to the characteristic tubular design for both hydrocarbon drilling and production applications. The application of quantitative risk analysis to the development of risk-consistent design criteria is shown to be a major and necessary step forward in achieving more economic tubular design.
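
    The limit-state principle the paper builds on can be sketched with a minimal Monte Carlo: failure occurs when the load L reaches the resistance R, i.e. when g = R - L <= 0. The distributions and parameters below are invented for illustration, not taken from the paper.

```python
import math
import random

# Hedged Monte Carlo sketch of a tubular limit state: the casing fails
# when pressure load L exceeds burst resistance R. The lognormal
# parameters are invented illustration values, not field data.
random.seed(42)

def prob_failure(trials=200_000):
    failures = 0
    for _ in range(trials):
        R = random.lognormvariate(math.log(100), 0.08)  # burst rating, MPa
        L = random.lognormvariate(math.log(70), 0.15)   # pressure load, MPa
        if R - L <= 0:                                  # limit state g = R - L
            failures += 1
    return failures / trials

pf = prob_failure()   # estimated probability of failure, roughly 2%
```

    Reliability-based design then works backwards: pick safety factors (here, the ratio of median resistance to median load) so that this probability of failure meets a target.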

  3. Coding for reliable satellite communications

    NASA Technical Reports Server (NTRS)

    Gaarder, N. T.; Lin, S.

    1986-01-01

    This research project was set up to study various kinds of coding techniques for error control in satellite and space communications for NASA Goddard Space Flight Center. During the project period, researchers investigated the following areas: (1) decoding of Reed-Solomon codes in terms of dual basis; (2) concatenated and cascaded error control coding schemes for satellite and space communications; (3) use of hybrid coding schemes (error correction and detection incorporated with retransmission) to improve system reliability and throughput in satellite communications; (4) good codes for simultaneous error correction and error detection, and (5) error control techniques for ring and star networks.

  4. Reliable vision-guided grasping

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1992-01-01

    Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems. It differs from other hand-in-eye visual servoing problems, such as tracking, in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent,' and increasingly fast. Speed is achieved primarily by information reduction. This reduction exploits the use of region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image. Therefore, at a higher level, these assumptions are verified using slower, more reliable methods. This hierarchy provides for robust error recovery in that when a lower-level routine fails, the next-higher routine will be called and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.

  5. Reliability of pyuria detection method.

    PubMed

    Saito, A; Kawada, Y

    1994-01-01

    The reliability of two methods for the detection of pyuria was studied in a total of 106 urine samples obtained from patients with identifiable underlying urinary tract disease. The coefficient of variation (CV) was significantly higher in the microscopic method than in the counting chamber method. The CV obtained with the use of the KOVA slide 10 grid, a disposable and less expensive counting chamber, was identical to that obtained with the Bürker-Türk counting chamber. Only 50% of the patients who were proven to have pyuria of > or = 5 WBCs/HPF by the microscopic method had significant bacteriuria of > or = 10(4) bacteria per ml of urine. On the other hand, 95% and 90% of the patients who were proven to have pyuria of > or = 10 WBCs/mm3 with the Bürker-Türk and Fuchs-Rosenthal counting chambers had significant bacteriuria. It was concluded that the counting chamber provides a reliable method for the detection of pyuria and is highly predictive for the presence of significant bacteriuria. The KOVA slide 10 grid is an acceptable alternative to the regular counting chamber. PMID:7519582
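
    The comparison rests on the coefficient of variation (CV), the standard deviation expressed as a percentage of the mean. A small sketch with made-up replicate counts (not the study's measurements):

```python
import statistics

# Coefficient of variation: relative scatter of repeat measurements.
# The replicate counts below are invented illustration data.

def cv(counts):
    """CV in percent: sample standard deviation relative to the mean."""
    return 100 * statistics.stdev(counts) / statistics.mean(counts)

microscopic = [3, 8, 5, 12, 6]       # WBCs/HPF, repeat reads of one sample
chamber     = [52, 55, 50, 54, 53]   # WBCs/mm3, counting-chamber reads

# The scattered microscopic counts give a much larger CV than the chamber.
```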

  6. Demonstration of reliability centered maintenance

    SciTech Connect

    Schwan, C.A.; Morgan, T.A.

    1991-04-01

    Reliability centered maintenance (RCM) is an approach to preventive maintenance planning and evaluation that has been used successfully by other industries, most notably the airlines and military. Now EPRI is demonstrating RCM in the commercial nuclear power industry. Just completed are large-scale, two-year demonstrations at Rochester Gas & Electric (Ginna Nuclear Power Station) and Southern California Edison (San Onofre Nuclear Generating Station). Both demonstrations were begun in the spring of 1988. At each plant, RCM was performed on 12 to 21 major systems. Both demonstrations determined that RCM is an appropriate means to optimize a PM program and improve nuclear plant preventive maintenance on a large scale. Such favorable results had been suggested by three earlier EPRI pilot studies at Florida Power & Light, Duke Power, and Southern California Edison. EPRI selected the Ginna and San Onofre sites because, together, they represent a broad range of utility and plant size, plant organization, plant age, and histories of availability and reliability. Significant steps in each demonstration included: selecting and prioritizing plant systems for RCM evaluation; performing the RCM evaluation steps on selected systems; evaluating the RCM recommendations by a multi-disciplinary task force; implementing the RCM recommendations; establishing a system to track and verify the RCM benefits; and establishing procedures to update the RCM bases and recommendations with time (a living program). 7 refs., 1 tab.

  7. [Trauma scores: reproducibility and reliability].

    PubMed

    Waydhas, C; Nast-Kolb, D; Trupka, A; Kerim-Sade, C; Kanz, G; Zoller, J; Schweiberer, L

    1992-02-01

    The inter-rater reliability of the Injury Severity Score (ISS) and the Polytraumaschlüssel (PTS) [multiple trauma code] was studied using diagnosis sheets filled in for 107 multiple injured patients. The scoring was performed by eight physicians with different levels of qualification. The scores for individual patients varied widely depending on the scorer, with extremes differing from the mean by about 80% and 70% for the ISS and PTS, respectively. The mean ISS and PTS for the whole study population also varied significantly between the scorers (P less than 0.0001, one-way analysis of variance). Raters with experience in trauma scoring calculated significantly higher scores (P less than 0.01, t-test). Neither the ISS nor the PTS seems reliable enough to describe injury severity in an individual patient. Treatment decisions must not be based on such grounds. Even for larger groups, caution must be exercised in comparison of different populations of multiple traumatized patients. PMID:1570531

  8. Reliability of steam generator tubing

    SciTech Connect

    Kadokami, E.

    1997-02-01

    The author presents results on studies made of the reliability of steam generator (SG) tubing. The basis for this work is that in Japan the issue of defects in SG tubing is addressed by the approach that any detected defect should be repaired, either by plugging the tube or sleeving it. However, this leaves open the issue that there is a detection limit in practice, and what is the effect of nondetectable cracks on the performance of tubing. These studies were commissioned to look at the safety issues involved in degraded SG tubing. The program has looked at a number of different issues. First was an assessment of the penetration and opening behavior of tube flaws due to internal pressure in the tubing. They have studied: penetration behavior of the tube flaws; primary water leakage from through-wall flaws; opening behavior of through-wall flaws. In addition they have looked at the question of the reliability of tubing with flaws during normal plant operation. Also there have been studies done on the consequences of tube rupture accidents on the integrity of neighboring tubes.

  9. Reliable vision-guided grasping

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1992-01-01

    Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems. It differs from other hand-in-eye visual servoing problems such as tracking in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent', and increasingly fast. Speed is achieved primarily by information reduction. This reduction exploits the use of region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image. Therefore, at a higher level, these assumptions are verified using slower, more reliable methods. This hierarchy provides for robust error recovery in that when a lower-level routine fails, the next-higher routine will be called and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.

  10. Reliability of fiber optic emitters

    NASA Astrophysics Data System (ADS)

    Twu, B.; Kung, H.

    1982-08-01

    Over the past few years a number of fiber optic links were introduced by an American company. Various transmitter-fiber-receiver combinations were studied to satisfy different link performance and reliability requirements. Light emitting diodes (LEDs) were generally used in the transmitter mode. Attention is given to the characteristics of four types of LEDs which had been developed. GaAsP LEDs were made from epi-layers grown by vapor phase epitaxy on GaAs substrate. The composition of GaAs and GaP was adjusted to achieve light emission at the desired wavelength. The p-n junction was formed by diffusing zinc into n type epi-layers. GaAlAs LEDs were made from epi-layers grown by liquid phase epitaxy on GaAs substrate. Long term reliability of four LEDs was evaluated. GaAsP diodes showed gradual degradation as a whole. GaAlAs emitters showed insignificant gradual degradation, but they exhibited dark line or dark spot related catastrophic degradation.

  11. 78 FR 38851 - Electric Reliability Organization Proposal To Retire Requirements in Reliability Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-28

    ... Federal Energy Regulatory Commission 18 CFR Part 40 Electric Reliability Organization Proposal To Retire... Electric Reliability Corporation (NERC), the Commission-certified Electric Reliability Organization. The... 20426, Telephone: (202) 502-6840. Michael Gandolfo (Technical Information), Office of...

  12. 76 FR 66229 - Transmission Planning Reliability Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-26

    ... Energy Regulatory Commission 18 CFR Part 40 Transmission Planning Reliability Standards AGENCY: Federal Energy Regulatory Commission, DOE. ACTION: Notice of proposed rulemaking. SUMMARY: Transmission Planning (TPL) Reliability Standards are intended to ensure that the transmission system is planned and...

  13. Reliability Estimation Methods for Liquid Rocket Engines

    NASA Astrophysics Data System (ADS)

    Hirata, Kunio; Masuya, Goro; Kamijo, Kenjiro

    Reliability estimation using the dispersive, binomial distribution method has traditionally been used to certify the reliability of liquid rocket engines, but its estimation sometimes disagreed with the failure rates of flight engines. To obtain better results, the reliability growth model and the failure distribution method are applied to estimate the reliability of LE-7A engines, which have propelled the first stage of H-2A launch vehicles.
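
    The traditional binomial certification approach can be sketched as a one-sided lower confidence bound on per-firing reliability. The Clopper-Pearson-style bound below (pure standard library, solved by bisection) is an illustration of that general idea, not the authors' procedure.

```python
import math

# Binomial reliability demonstration: after n test firings with f failures,
# find the largest per-firing reliability R such that observing <= f
# failures would still occur with probability 1 - confidence.

def binom_tail(n, f, R):
    """P(at most f failures in n trials) when per-trial reliability is R."""
    return sum(math.comb(n, k) * (1 - R) ** k * R ** (n - k)
               for k in range(f + 1))

def reliability_lcb(n, f, confidence=0.90):
    """One-sided lower confidence bound on R, found by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(60):                 # binom_tail is increasing in R
        mid = (lo + hi) / 2
        if binom_tail(n, f, mid) > 1 - confidence:
            hi = mid
        else:
            lo = mid
    return lo

lcb = reliability_lcb(50, 1)   # e.g. 50 firings, 1 failure
```

    The record's point is visible here too: a demonstrated lower bound from a small test series can sit well below the reliability actually achieved in flight, which motivates growth models and failure-distribution methods.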

  14. Software reliability modeling and analysis

    NASA Technical Reports Server (NTRS)

    Scholz, F.-W.

    1986-01-01

    A discrete and, as approximation to it, a continuous model for the software reliability growth process are examined. The discrete model is based on independent multinomial trials and concerns itself with the joint distribution of the first occurrence time of its underlying events (bugs). The continuous model is based on the order statistics of N independent nonidentically distributed exponential random variables. It is shown that the spacings between bugs are not necessarily independent or exponentially (geometrically) distributed. However, there is a statistical rationale for viewing them so conditionally. Some identifiability problems are pointed out and resolved. In particular, it appears that the number of bugs in a program is not identifiable. Estimated upper bounds and confidence bounds for the residual program error content are given based on the spacings of the first k bugs removed.
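
    The continuous model can be simulated directly: each bug is detected after an exponential time with its own rate, and the observed failure log is the order statistics of those times. The per-bug rates below are hypothetical.

```python
import random

# Simulation sketch of the order-statistics model: bug i has its own
# exponential detection rate, and failures are observed in sorted order.
# Rates are invented for illustration.
random.seed(1)

N = 20
rates = [0.5 / (i + 1) for i in range(N)]            # per-bug detection rates
detect_times = sorted(random.expovariate(r) for r in rates)

# Spacings between successive detections: not independent and not
# exponential in general, since each spacing mixes contributions from
# all still-undetected bugs.
spacings = [b - a for a, b in zip(detect_times, detect_times[1:])]
```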

  15. The SURE Reliability Analysis Program

    NASA Technical Reports Server (NTRS)

    Butler, R. W.

    1986-01-01

    The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.

  16. Space transportation architecture: Reliability sensitivities

    NASA Technical Reports Server (NTRS)

    Williams, A. M.

    1992-01-01

    A sensitivity analysis is given of the benefits and drawbacks associated with a proposed Earth to orbit vehicle architecture. The architecture represents a fleet of six vehicles (two existing, four proposed) that would be responsible for performing various missions as mandated by NASA and the U.S. Air Force. Each vehicle has a prescribed flight rate per year for a period of 31 years. By exposing this fleet of vehicles to a probabilistic environment where the fleet experiences failures, downtimes, setbacks, etc., the analysis involves determining the resiliency and costs associated with the fleet for specific vehicle/subsystem reliabilities. The resources required were actual observed data on the failures and downtimes associated with existing vehicles, data based on engineering judgement for proposed vehicles, and the development of a sensitivity analysis program.
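
    The kind of probabilistic fleet exposure described above can be sketched with a toy Monte Carlo; every parameter below is invented for illustration, not drawn from the study.

```python
import random

# Toy Monte Carlo of fleet resiliency: each scheduled flight can fail,
# and a failure stands the vehicle down for a random spell, costing
# later flight slots. All numbers are hypothetical.
random.seed(7)

def flights_lost(years=31, flights_per_year=8, p_fail=0.02,
                 downtime_days=(90, 365), trials=2000):
    """Mean number of scheduled flights lost to failure downtime."""
    total_lost = 0
    days_between = 365 / flights_per_year
    for _ in range(trials):
        downtime, lost = 0.0, 0
        for _ in range(years * flights_per_year):
            if downtime > 0:            # still standing down: slot is lost
                lost += 1
                downtime -= days_between
            elif random.random() < p_fail:
                downtime = random.uniform(*downtime_days)
        total_lost += lost
    return total_lost / trials

mean_lost = flights_lost()
```

    Re-running this with different per-flight failure probabilities or downtime distributions is the essence of a reliability sensitivity analysis: it shows how steeply fleet throughput degrades as subsystem reliability slips.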

  17. Reliability on ISS Talk Outline

    NASA Technical Reports Server (NTRS)

    Misiora, Mike

    2015-01-01

    1. Overview of ISS. 2. Space environment and its effects: a. Radiation; b. Microgravity. 3. How we ensure reliability: a. Requirements; b. Component selection (i. Note: I plan to stay away from talk about Rad Hardened components and talk about why we use older processors because they are less susceptible to SEUs); c. Testing; d. Redundancy / failure tolerance; e. Sparing strategies. 4. Operational examples: a. Multiple MDM failures on 6A due to hard drive failure. In general, my plan is to only talk about data that is currently available via normal internet sources to ensure that I stay away from any topics that would be Export Controlled, ITAR, or NDA-controlled. The operational example has been well reported in the media, and those are the details that I plan to cover. Additionally, I am not planning on using any slides or showing any photos during the talk.

  18. Confidence bounds on structural reliability

    NASA Technical Reports Server (NTRS)

    Mehta, S. R.; Cruse, T. A.; Mahadevan, S.

    1993-01-01

    Different approaches for quantifying physical, statistical, and model uncertainties associated with the distribution parameters which are aimed at determining structural reliability are described. Confidence intervals on the distribution parameters of the input random variables are estimated using four algorithms to evaluate uncertainty of the response. Design intervals are evaluated using either Monte Carlo simulation or an iterative approach. A first order approach can be used to compute a first approximation of the design interval, but its accuracy is not satisfactory. The regression approach which combines the iterative approach with Monte Carlo simulation is capable of providing good results if the performance function can be accurately represented using regression analysis. It is concluded that the design interval-based approach seems to be quite general and takes into account distribution and model uncertainties.
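
    One simple way to quantify statistical uncertainty in a distribution parameter, shown here as a stand-in for the four algorithms compared in the paper, is a percentile bootstrap. The sample data below are synthetic.

```python
import random
import statistics

# Percentile-bootstrap sketch: put a confidence interval on a distribution
# parameter (here the mean of synthetic strength measurements).
random.seed(0)

strengths = [random.gauss(100, 10) for _ in range(40)]   # synthetic tests

def bootstrap_ci(data, stat=statistics.mean, reps=2000, alpha=0.10):
    """Percentile bootstrap (1 - alpha) confidence interval for stat(data)."""
    estimates = sorted(
        stat([random.choice(data) for _ in data]) for _ in range(reps))
    return (estimates[int(reps * alpha / 2)],
            estimates[int(reps * (1 - alpha / 2))])

lo, hi = bootstrap_ci(strengths)
```

    Propagating such parameter intervals through a limit-state calculation is what turns a point estimate of reliability into the design intervals the paper evaluates.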

  19. Reliability factors in gas lasers

    NASA Astrophysics Data System (ADS)

    Malk, E. G.; Ramsay, I. A.

    1982-07-01

    Two types of gas lasers, the helium-neon laser and the sealed off, waveguide carbon dioxide laser, are discussed. The beneficial influence of hard seals on the HeNe laser is briefly described, and the resulting improved mean time between failures is described and discussed, showing a summary of lifetest data. Rejection percentages at 80 percent of the rated power in 18 months of elapsed time are determined to be 10 percent for one family of HeNe lasers and 7.6 percent for another family. An optical failure mode for HeNe lasers and the scientific investigation leading to its elimination are described. Finally, CO2 waveguide laser reliability is discussed in terms of the lifetime degradation factors involved in the operation of these lasers.

  20. Making statistical inferences about software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1988-01-01

    Failure times of software undergoing random debugging can be modelled as order statistics of independent but nonidentically distributed exponential random variables. Using this model inferences can be made about current reliability and, if debugging continues, future reliability. This model also shows the difficulty inherent in statistical verification of very highly reliable software such as that used by digital avionics in commercial aircraft.

  1. Making statistical inferences about software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1986-01-01

    Failure times of software undergoing random debugging can be modeled as order statistics of independent but nonidentically distributed exponential random variables. Using this model inferences can be made about current reliability and, if debugging continues, future reliability. This model also shows the difficulty inherent in statistical verification of very highly reliable software such as that used by digital avionics in commercial aircraft.

  2. Space transportation main engine reliability and safety

    NASA Technical Reports Server (NTRS)

    Monk, Jan C.

    1991-01-01

    Viewgraphs are used to illustrate the reliability engineering and aerospace safety of the Space Transportation Main Engine (STME). A technology developed is called Total Quality Management (TQM). The goal is to develop a robust design. Reducing process variability produces a product with improved reliability and safety. Some engine system design characteristics are identified which improve reliability.

  3. 40 CFR 75.42 - Reliability criteria.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Reliability criteria. 75.42 Section 75...) CONTINUOUS EMISSION MONITORING Alternative Monitoring Systems § 75.42 Reliability criteria. To demonstrate reliability equal to or better than the continuous emission monitoring system, the owner or operator...

  4. Reliability reporting practices in rape myth research.

    PubMed

    Buhi, Eric R

    2005-02-01

    A number of school-based programs address sexual violence by focusing on adolescents' attitudes about rape or acceptance of rape myths. However, many problems exist in the literature regarding measurement of rape myth acceptance, including issues of reliability and validity. This paper addresses measurement reliability issues and reviews reliability reporting practices of studies using the Burt Rape Myth Acceptance Scale. Less than one-half of the 68 articles examined reported reliability coefficients for the data collected. Almost one-third of the studies did not mention reliability. Examples of acceptable reliability reporting are provided. It is argued that reliability coefficients for the data actually analyzed should always be assessed and reported when interpreting program results. Implications for school health research and practice are discussed. PMID:15929595
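
    The reliability coefficient whose reporting the review tracks is typically an internal-consistency statistic such as Cronbach's alpha. The sketch below computes it for made-up Likert responses to a hypothetical four-item scale.

```python
import statistics

# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/total variance).
# The 5-point responses below are invented illustration data (8 respondents,
# 4 items), not from any study the review covers.

def cronbach_alpha(items):
    """items: list of k lists, each holding one item's scores across people."""
    k = len(items)
    item_vars = sum(statistics.variance(item) for item in items)
    totals = [sum(person) for person in zip(*items)]   # per-person scale totals
    return (k / (k - 1)) * (1 - item_vars / statistics.variance(totals))

items = [
    [4, 5, 3, 4, 2, 5, 4, 3],
    [4, 4, 3, 5, 2, 5, 4, 2],
    [3, 5, 2, 4, 1, 4, 5, 3],
    [5, 4, 3, 4, 2, 5, 4, 2],
]
alpha = cronbach_alpha(items)   # high, since the items move together
```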

  5. Software reliability models for critical applications

    SciTech Connect

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second place. 407 refs., 4 figs., 2 tabs.

  6. Software reliability models for critical applications

    SciTech Connect

    Pham, H.; Pham, M.

    1991-12-01

    This report presents the results of the first phase of the ongoing EG&G Idaho, Inc. Software Reliability Research Program. The program is studying the existing software reliability models and proposes a state-of-the-art software reliability model that is relevant to the nuclear reactor control environment. This report consists of three parts: (1) summaries of the literature review of existing software reliability and fault tolerant software reliability models and their related issues, (2) proposed technique for software reliability enhancement, and (3) general discussion and future research. The development of this proposed state-of-the-art software reliability model will be performed in the second place. 407 refs., 4 figs., 2 tabs.

  7. Theory of reliable systems. [reliability analysis and on-line fault diagnosis

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1974-01-01

    Research is reported in the program to refine the current notion of system reliability by identifying and investigating attributes of a system which are important to reliability considerations, and to develop techniques which facilitate analysis of system reliability. Reliability analysis and on-line fault diagnosis are discussed.

  8. Fast estimation of reboiler reliability

    SciTech Connect

    Durand, A.A.; Bonilla, M.A.O.

    1995-08-01

    The problems one faces in evaluating the reliability of a reboiler design, or in judging the effect of modifications of process conditions on reboiler operation can be complex. To carry out such evaluations, it is necessary for engineers to perform some calculations to determine: heat transfer coefficients in convection boiling; temperature difference for the onset of nucleate boiling; heat transfer coefficients in the nucleate boiling region; critical heat flux or critical temperature difference; minimum ΔT for film boiling; and heat transfer coefficients for the film boiling region. There are a number of correlations, graphs, and computer programs that can be used to make these calculations. However, besides being laborious, it is still difficult to get a suitable picture of the overall problem from just this data. To simplify the process, and to have a better understanding of the problem, a map of the different boiling regions and their boundaries is presented here. With this map it is possible to locate the design or operating point of a specific kettle reboiler among all the boiling regions, enabling one to make a clearer analysis of its behavior. The parameters used to develop this map are described.

  9. Creep-rupture reliability analysis

    NASA Technical Reports Server (NTRS)

    Peralta-Duran, A.; Wirsching, P. H.

    1984-01-01

    A probabilistic approach to the correlation and extrapolation of creep-rupture data is presented. Time temperature parameters (TTP) are used to correlate the data, and an analytical expression for the master curve is developed. The expression provides a simple model for the statistical distribution of strength and fits neatly into a probabilistic design format. The analysis focuses on the Larson-Miller and on the Manson-Haferd parameters, but it can be applied to any of the TTP's. A method is developed for evaluating material dependent constants for TTP's. It is shown that optimized constants can provide a significant improvement in the correlation of the data, thereby reducing modelling error. Attempts were made to quantify the performance of the proposed method in predicting long term behavior. Uncertainty in predicting long term behavior from short term tests was derived for several sets of data. Examples are presented which illustrate the theory and demonstrate the application of state of the art reliability methods to the design of components under creep.
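
    The Larson-Miller time-temperature parameter mentioned above has the standard form LMP = T(C + log10 t_r), with T in kelvin, t_r the rupture time in hours, and C a material constant. The sketch below uses it to extrapolate a hypothetical accelerated test to a lower service temperature; C = 20 is a common textbook value, not the optimized constant the paper derives.

```python
import math

# Larson-Miller parameter: points with equal LMP are assumed to lie on the
# same master curve, so a short hot test predicts a long cooler life.
# Temperatures and times below are invented illustration values.

def larson_miller(T_kelvin, hours, C=20.0):
    return T_kelvin * (C + math.log10(hours))

def rupture_hours(T_kelvin, lmp, C=20.0):
    """Invert the parameter: predicted rupture life at a new temperature."""
    return 10 ** (lmp / T_kelvin - C)

lmp = larson_miller(T_kelvin=922, hours=100)   # accelerated test point
life = rupture_hours(T_kelvin=866, lmp=lmp)    # extrapolated service life
```

    The paper's point is that treating C (and the scatter about the master curve) statistically, rather than as a fixed constant, is what turns this correlation into a reliability statement.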

  10. Creep-rupture reliability analysis

    NASA Technical Reports Server (NTRS)

    Peralta-Duran, A.; Wirsching, P. H.

    1985-01-01

    A probabilistic approach to the correlation and extrapolation of creep-rupture data is presented. Time temperature parameters (TTP) are used to correlate the data, and an analytical expression for the master curve is developed. The expression provides a simple model for the statistical distribution of strength and fits neatly into a probabilistic design format. The analysis focuses on the Larson-Miller and on the Manson-Haferd parameters, but it can be applied to any of the TTP's. A method is developed for evaluating material dependent constants for TTP's. It is shown that optimized constants can provide a significant improvement in the correlation of the data, thereby reducing modelling error. Attempts were made to quantify the performance of the proposed method in predicting long term behavior. Uncertainty in predicting long term behavior from short term tests was derived for several sets of data. Examples are presented which illustrate the theory and demonstrate the application of state of the art reliability methods to the design of components under creep.

  11. A reliable dual Blumlein device

    NASA Astrophysics Data System (ADS)

    Noggle, R. C.; Adler, R. J.; Hendricks, K. J.

    1989-11-01

    In this article, we describe the 500 kV, 100 kA (each arm) GEMINI dual Blumlein accelerator at the Air Force Weapons Laboratory. Novel isolation, trigger, and trigger timing techniques are utilized in this device in order to allow two Blumleins to be charged by one Marx generator, and discharged at different times. The timing circuits are unique in that they make use of saturable magnetic circuits to provide the relative timing of the two outputs. We demonstrate that this technique is reliable, reproducible, and straightforward. We isolate the two Blumlein switches using an inductor wound so that it has minimal inductance when both Blumleins are charging, but maximum inductance when one is fired and the second Blumlein is delayed. Utilization of this technique allows us to isolate the two Blumleins without using resistive isolation with its associated energy loss, or simple inductive isolation with the associated cost in increasing the water line charge time. Complete operational and design data are presented, along with detailed data from the switch trigger arms.

  12. Evaluation of MHTGR fuel reliability

    SciTech Connect

    Wichner, R.P.; Barthold, W.P.

    1992-07-01

    Modular High-Temperature Gas-Cooled Reactor (MHTGR) concepts that house the reactor vessel in a tight but unsealed reactor building place heightened importance on the reliability of the fuel particle coatings as fission product barriers. Though accident consequence analyses continue to show favorable results, the increased dependence on one type of barrier, in addition to a number of other factors, has caused the Nuclear Regulatory Commission (NRC) to consider conservative assumptions regarding fuel behavior. For this purpose, the concept termed "weak fuel" has been proposed on an interim basis. "Weak fuel" is a penalty imposed on consequence analyses whereby the fuel is assumed to respond less favorably to environmental conditions than predicted by behavioral models. The rationale for adopting this penalty, as well as conditions that would permit its reduction or elimination, are examined in this report. The evaluation includes an examination of possible fuel-manufacturing defects, quality-control procedures for defect detection, and the mechanisms by which fuel defects may lead to failure.

  13. Identifying a reliable boredom induction.

    PubMed

    Markey, Amanda; Chin, Alycia; Vanepps, Eric M; Loewenstein, George

    2014-08-01

    None of the tasks used to induce boredom have undergone rigorous psychometric validation, which creates potential problems for operational equivalence, comparisons across studies, statistical power, and confounding results. This methodological concern was addressed by testing and comparing the effectiveness of six 5-min. computerized boredom inductions (peg turning, audio, video, signature matching, one-back, and an air traffic control task). The tasks were evaluated using standard criteria for emotion inductions: intensity and discreteness. Intensity, the amount of boredom elicited, was measured using a subset of the Multidimensional State Boredom Scale. Discreteness, the extent to which the task elicited boredom and did not elicit other emotions, was measured using a modification of the Differential Emotion Scale. In both a laboratory setting (Study 1; N = 241) and an online setting with Amazon Mechanical Turk workers (Study 2; N = 416), participants were randomly assigned to one of seven tasks (six boredom tasks or a comparison task, a clip from Planet Earth) before rating their boredom using the MSBS and other emotions using the modified DES. In both studies, each task had significantly higher intensity and discreteness than the comparison task, with moderate to large effect sizes. The peg-turning task outperformed the other tasks in both intensity and discreteness, making it the recommended induction. Identification of reliable and valid boredom inductions and systematic comparison of their relative results should help advance state boredom research. PMID:25153752

  14. Time-Dependent Reliability Analysis

    1999-10-27

    FRANTIC-3 was developed to evaluate system unreliability using time-dependent techniques. The code provides two major options: to evaluate standby system unavailability or, in addition to the unavailability, to calculate the total system failure probability by including both the unavailability of the system on demand and the probability that it will operate for an arbitrary time period following the demand. The FRANTIC-3 time-dependent reliability models provide a large selection of repair and testing policies applicable to standby or continuously operating systems consisting of periodically tested, monitored, and non-repairable (non-testable) components. Time-dependent and test-frequency-dependent failures, as well as demand stress related failure, test-caused degradation and wear-out, test-associated human errors, test deficiencies, test override, unscheduled and scheduled maintenance, component renewal and replacement policies, and test strategies can be prescribed. The conditional system unavailabilities associated with the downtimes of user-specified failed components are also evaluated. Optionally, the code can perform a sensitivity study of system unavailability or total failure probability with respect to the failure characteristics of the standby components.
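
    The standby-component unavailability that FRANTIC-3 evaluates can be illustrated with the textbook model for a periodically tested component: between tests, q(t) = 1 - exp(-λt), so the average over a test interval T is 1 - (1 - exp(-λT))/(λT), approximately λT/2 for small λT. A sketch with invented numbers (this is the elementary model only, not FRANTIC-3's full treatment of maintenance, human error, and wear-out):

```python
import math

def avg_standby_unavailability(lam, T):
    """Average unavailability of a periodically tested standby component:
    q(t) = 1 - exp(-lam*t) between tests, averaged over one test interval T.
    Equals 1 - (1 - exp(-lam*T)) / (lam*T)."""
    return 1.0 - (1.0 - math.exp(-lam * T)) / (lam * T)

lam = 1e-4   # failure rate per hour (invented)
T = 720.0    # monthly test interval, hours
q = avg_standby_unavailability(lam, T)
print(q)            # close to the familiar lam*T/2 approximation
print(lam * T / 2)  # 0.036
```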

  15. System and Software Reliability (C103)

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores

    2003-01-01

    Within the last decade, better reliability models (hardware, software, system) than those currently used have been theorized and developed, but not implemented in practice. Previous research on software reliability has shown that while some existing software reliability models are practical, they are not accurate enough. New paradigms of development (e.g., OO) have appeared, and associated reliability models have been proposed but not investigated. Hardware models have been extensively investigated but not integrated into a system framework. System reliability modeling is the weakest of the three. NASA engineers need better methods and tools to demonstrate that the products meet NASA requirements for reliability measurement. For the new models for the software component of the last decade, there is a great need to bring them into a form in which they can be used on software intensive systems. The Statistical Modeling and Estimation of Reliability Functions for Systems (SMERFS'3) tool is an existing vehicle that may be used to incorporate these new modeling advances. Adapting some existing software reliability modeling changes to accommodate major changes in software development technology may also show substantial improvement in prediction accuracy. With some additional research, the next step is to identify and investigate system reliability. System reliability models could then be incorporated in a tool such as SMERFS'3. This tool with better models would greatly add value in assessing GSFC projects.

  16. Managing Reliability in the 21st Century

    SciTech Connect

    Dellin, T.A.

    1998-11-23

    The rapid pace of change at the end of the 20th Century should continue unabated well into the 21st Century. The driver will be the marketplace imperative of "faster, better, cheaper." This imperative has already stimulated a revolution-in-engineering in design and manufacturing. In contrast, to date, reliability engineering has not undergone a similar level of change. It is critical that we implement a corresponding revolution-in-reliability-engineering as we enter the new millennium. If we are still using 20th Century reliability approaches in the 21st Century, then reliability issues will be the limiting factor in faster, better, and cheaper. At the heart of this reliability revolution will be a science-based approach to reliability engineering. Science-based reliability will enable building-in reliability, application-specific products, virtual qualification, and predictive maintenance. The purpose of this paper is to stimulate a dialogue on the future of reliability engineering. We will try to gaze into the crystal ball and predict some key issues that will drive reliability programs in the new millennium. In the 21st Century, we will demand more of our reliability programs. We will need the ability to make accurate reliability predictions that will enable optimizing cost, performance and time-to-market to meet the needs of every market segment. We will require that all of these new capabilities be in place prior to the start of a product development cycle. The management of reliability programs will be driven by quantifiable metrics of value added to the organization's business objectives.

  17. Fundamental mechanisms of micromachine reliability

    SciTech Connect

    DE BOER,MAARTEN P.; SNIEGOWSKI,JEFFRY J.; KNAPP,JAMES A.; REDMOND,JAMES M.; MICHALSKE,TERRY A.; MAYER,THOMAS K.

    2000-01-01

    Due to extreme surface to volume ratios, adhesion and friction are critical properties for reliability of Microelectromechanical Systems (MEMS), but are not well understood. In this LDRD the authors established test structures, metrology and numerical modeling to conduct studies on adhesion and friction in MEMS. They then concentrated on measuring the effect of environment on MEMS adhesion. Polycrystalline silicon (polysilicon) is the primary material of interest in MEMS because of its integrated circuit process compatibility, low stress, high strength and conformal deposition nature. A plethora of useful micromachined device concepts have been demonstrated using Sandia National Laboratories' sophisticated in-house capabilities. One drawback to polysilicon is that in air the surface oxidizes, is high energy and is hydrophilic (i.e., it wets easily). This can lead to catastrophic failure because surface forces can cause MEMS parts that are brought into contact to adhere rather than perform their intended function. A fundamental concern is how environmental constituents such as water will affect adhesion energies in MEMS. The authors first demonstrated an accurate method to measure adhesion as reported in Chapter 1. In Chapter 2 through 5, they then studied the effect of water on adhesion depending on the surface condition (hydrophilic or hydrophobic). As described in Chapter 2, they find that adhesion energy of hydrophilic MEMS surfaces is high and increases exponentially with relative humidity (RH). Surface roughness is the controlling mechanism for this relationship. Adhesion can be reduced by several orders of magnitude by silane coupling agents applied via solution processing. They decrease the surface energy and render the surface hydrophobic (i.e. does not wet easily). However, only a molecular monolayer coats the surface. In Chapters 3-5 the authors map out the extent to which the monolayer reduces adhesion versus RH. They find that adhesion is independent of

  18. Experimental oral transmission of chronic wasting disease to red deer (Cervus elaphus elaphus): Early detection and late stage distribution of protease-resistant prion protein

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Chronic wasting disease CWD is the transmissible spongiform encephalopathy or prion disease of wild and farmed cervid ruminants, including Rocky Mountain elk (Cervus elaphus nelsoni), white tailed deer (Odocoileus virginianus), mule deer (Odocoileus hemionus), or moose (Alces alces). Reliable data ...

  19. Infants track the reliability of potential informants.

    PubMed

    Tummeltshammer, Kristen Swan; Wu, Rachel; Sobel, David M; Kirkham, Natasha Z

    2014-09-01

    Across two eye-tracking experiments, we showed that infants are sensitive to the statistical reliability of informative cues and selective in their use of information generated by such cues. We familiarized 8-month-olds with faces (Experiment 1) or arrows (Experiment 2) that cued the locations of animated animals with different degrees of reliability. The reliable cue always cued a box containing an animation, whereas the unreliable cue cued a box that contained an animation only 25% of the time. At test, infants searched longer in the boxes that were reliably cued, but did not search longer in the boxes that were unreliably cued. At generalization, when boxes were cued that never contained animations before, only infants in the face experiment followed the reliable cue. These results provide the first evidence that even young infants can track the reliability of potential informants and use this information judiciously to modify their future behavior. PMID:25022277

  20. Reliability Growth in Space Life Support Systems

    NASA Technical Reports Server (NTRS)

    Jones, Harry W.

    2014-01-01

    A hardware system's failure rate often increases over time due to wear and aging, but not always. Some systems instead show reliability growth, a decreasing failure rate with time, due to effective failure analysis and remedial hardware upgrades. Reliability grows when failure causes are removed by improved design. A mathematical reliability growth model allows the reliability growth rate to be computed from the failure data. The space shuttle was extensively maintained, refurbished, and upgraded after each flight and it experienced significant reliability growth during its operational life. In contrast, the International Space Station (ISS) is much more difficult to maintain and upgrade and its failure rate has been constant over time. The ISS Carbon Dioxide Removal Assembly (CDRA) reliability has slightly decreased. Failures on ISS and with the ISS CDRA continue to be a challenge.
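
    A common mathematical reliability growth model of the kind the abstract refers to is the Duane/Crow-AMSAA power law, N(t) ≈ λt^β, where β < 1 means the failure intensity is decreasing. A sketch that estimates β from cumulative failure times with a log-log (Duane plot) least-squares fit; the failure times below are invented for illustration:

```python
import math

def fit_duane(failure_times):
    """Fit the power-law growth model N(t) = lam * t**beta by least squares
    on log N versus log t (a Duane plot). beta < 1 means the failure
    intensity beta*lam*t**(beta-1) is decreasing, i.e. reliability growth."""
    xs = [math.log(t) for t in failure_times]
    ys = [math.log(i + 1) for i in range(len(failure_times))]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    lam = math.exp(my - beta * mx)
    return lam, beta

# Invented cumulative failure times (hours); widening gaps suggest growth.
times = [10, 40, 120, 300, 700, 1500]
lam, beta = fit_duane(times)
print(beta)   # < 1, so the failure rate is falling over time
```

A maximum-likelihood fit (as in MIL-HDBK-189) would be used in practice; the least-squares Duane plot keeps the sketch short.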

  1. Fatigue reliability of wind turbine components

    SciTech Connect

    Veers, P.S.

    1990-01-01

    Fatigue life estimates for wind turbine components can be extremely variable due to both inherently random and uncertain parameters. A structural reliability analysis is used to quantify the probability that the fatigue life will fall short of a selected target. Reliability analysis also produces measures of the relative importance of the various sources of uncertainty and the sensitivity of the reliability to each input parameter. The process of obtaining reliability estimates is briefly outlined. An example fatigue reliability calculation for a blade joint is formulated; reliability estimates, importance factors, and sensitivities are produced. Guidance in selecting distribution functions for the random variables used to model the random and uncertain parameters is also provided. 5 refs., 9 figs., 1 tab.
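
    The probability that fatigue life falls short of a target can be estimated by Monte Carlo sampling of the random inputs; a minimal sketch with an assumed lognormal life distribution (all parameters are invented, not taken from the report):

```python
import random

def fatigue_failure_probability(target_life, mu=5.0, sigma=0.5,
                                n=100_000, seed=1):
    """Monte Carlo estimate of P(fatigue life < target_life).
    Life (in cycles) is modeled as lognormal: log10(life) ~ Normal(mu, sigma).
    All distribution parameters here are invented for illustration."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n)
                   if 10.0 ** rng.gauss(mu, sigma) < target_life)
    return failures / n

# Probability that life falls short of 10**4 cycles.
p = fatigue_failure_probability(target_life=10.0 ** 4)
print(p)   # near Phi(-2), about 0.023, for these parameters
```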

  2. Understanding Reliability: A Review for Veterinary Educators.

    PubMed

    Royal, Kenneth D; Hecker, Kent G

    2016-01-01

    Veterinary medical faculty and administrators routinely administer student assessments and conduct surveys to make decisions regarding student performance and to assess their courses/curricula. The decisions that are made are a result of the scores generated. However, how reliable are the scores and how confident can we be about these decisions? Reliability is one of the hallmarks of validity evidence, but what does this mean and what affects the reliability of scores? The purpose of this article is to provide veterinary medical educators and administrators with fundamental information regarding the concept of reliability. Specifically, we review what sources of error reduce the reliability of scores and we describe the different types of reliability coefficients that are reported. PMID:26560547
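
    One of the reliability coefficients such reviews typically cover is Cronbach's alpha for internal consistency; a minimal sketch with invented item scores, for illustration only:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    items: one list of scores per item, respondents in the same order.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Invented scores: 3 items, 5 respondents.
scores = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 5],
]
alpha = cronbach_alpha(scores)
print(round(alpha, 2))   # about 0.89
```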

  3. The Reliability of Psychiatric Diagnosis Revisited

    PubMed Central

    Rankin, Eric; France, Cheryl; El-Missiry, Ahmed; John, Collin

    2006-01-01

    Background: The authors reviewed the topic of reliability of psychiatric diagnosis from the turn of the 20th century to present. The objectives of this paper are to explore the reasons of unreliability of psychiatric diagnosis and propose ways to improve the reliability of psychiatric diagnosis. Method: The authors reviewed the literature on the concept of reliability of psychiatric diagnosis with emphasis on the impact of interviewing skills, use of diagnostic criteria, and structured interviews on the reliability of psychiatric diagnosis. Results: Causes of diagnostic unreliability are attributed to the patient, the clinician and psychiatric nomenclature. The reliability of psychiatric diagnosis can be enhanced by using diagnostic criteria, defining psychiatric symptoms and structuring the interviews. Conclusions: The authors propose the acronym 'DR.SED', which stands for diagnostic criteria, reference definitions, structuring the interview, clinical experience, and data. The authors recommend that clinicians use the DR.SED paradigm to improve the reliability of psychiatric diagnoses. PMID:21103149

  4. Reliability assurance for regulation of advanced reactors

    SciTech Connect

    Fullwood, R.; Lofaro, R.; Samanta, P.

    1991-01-01

    The advanced nuclear power plants must achieve higher levels of safety than the first generation of plants. Showing that this is indeed true provides new challenges to reliability and risk assessment methods in the analysis of the designs employing passive and semi-passive protection. Reliability assurance of the advanced reactor systems is important for determining the safety of the design and for determining the plant operability. Safety is the primary concern, but operability is considered indicative of good and safe operation. This paper discusses several concerns for reliability assurance of the advanced design encompassing reliability determination, level of detail required in advanced reactor submittals, data for reliability assurance, systems interactions and common cause effects, passive component reliability, PRA-based configuration control system, and inspection, training, maintenance and test requirements. Suggested approaches are provided for addressing each of these topics.

  5. Reliability assurance for regulation of advanced reactors

    SciTech Connect

    Fullwood, R.; Lofaro, R.; Samanta, P.

    1991-12-31

    The advanced nuclear power plants must achieve higher levels of safety than the first generation of plants. Showing that this is indeed true provides new challenges to reliability and risk assessment methods in the analysis of the designs employing passive and semi-passive protection. Reliability assurance of the advanced reactor systems is important for determining the safety of the design and for determining the plant operability. Safety is the primary concern, but operability is considered indicative of good and safe operation. This paper discusses several concerns for reliability assurance of the advanced design encompassing reliability determination, level of detail required in advanced reactor submittals, data for reliability assurance, systems interactions and common cause effects, passive component reliability, PRA-based configuration control system, and inspection, training, maintenance and test requirements. Suggested approaches are provided for addressing each of these topics.

  6. An asymptotic approach for assessing fatigue reliability

    SciTech Connect

    Tang, J.

    1996-12-01

    By applying the cumulative fatigue damage theory to the random process reliability problem, and the introduction of a new concept of unified equivalent stress level in fatigue life prediction, a technical reliability model for the random process reliability problem under fatigue failure is proposed. The technical model emphasizes efficiency in the design choice and also focuses on the accuracy of the results. Based on this model, an asymptotic method for fatigue reliability under stochastic process loadings is developed. The proposed method uses the recursive iteration algorithm to achieve results which include reliability and corresponding life. The method reconciles the requirement of accuracy and efficiency for the random process reliability problems under fatigue failure. The accuracy and analytical and numerical efforts required are compared. Through numerical example, the advantage of the proposed method is demonstrated.

  7. Combination of structural reliability and interval analysis

    NASA Astrophysics Data System (ADS)

    Qiu, Zhiping; Yang, Di; Elishakoff, Isaac

    2008-02-01

    In engineering applications, probabilistic reliability theory is presently the most important method; however, in many cases precise probabilistic reliability theory cannot be considered an adequate and credible model of the actual state of affairs. In this paper, we developed a hybrid of probabilistic and non-probabilistic reliability theory, which describes the structural uncertain parameters as interval variables when statistical data are found insufficient. By using interval analysis, a new method for calculating the interval of the structural reliability as well as the reliability index is introduced in this paper, and the traditional probabilistic theory is incorporated with the interval analysis. Moreover, the new method preserves the useful part of the traditional probabilistic reliability theory, but removes the restriction of its strict requirement on data acquisition. An example is presented to demonstrate the feasibility and validity of the proposed theory.
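
    The interval version of a reliability computation can be illustrated with the first-order reliability index β = (μR - μS)/√(σR² + σS²): when the means are known only as intervals, β's monotonicity places the extremes at the interval endpoints. A sketch with invented numbers (not the paper's formulation):

```python
import math

def reliability_index_interval(muR, muS, sigR, sigS):
    """Interval of the first-order reliability index
    beta = (muR - muS) / sqrt(sigR**2 + sigS**2)
    when the resistance and load means are only known as (lo, hi) intervals.
    beta is increasing in muR and decreasing in muS, so the extremes sit
    at the interval endpoints."""
    denom = math.hypot(sigR, sigS)
    return (muR[0] - muS[1]) / denom, (muR[1] - muS[0]) / denom

# Invented numbers: resistance mean in [300, 320], load mean in [180, 200].
beta_lo, beta_hi = reliability_index_interval((300.0, 320.0), (180.0, 200.0),
                                              sigR=30.0, sigS=20.0)
print(beta_lo, beta_hi)
```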

  8. Combinatorial reliability analysis of multiprocessor computers

    SciTech Connect

    Hwang, K.; Tian-Pong Chang

    1982-12-01

    The authors propose a combinatorial method to evaluate the reliability of multiprocessor computers. Multiprocessor structures are classified as crossbar switch, time-shared buses, and multiport memories. Closed-form reliability expressions are derived via combinatorial path enumeration on the probabilistic-graph representation of a multiprocessor system. The method can analyze the reliability performance of real systems like C.mmp, Tandem 16, and Univac 1100/80. User-oriented performance levels are defined for measuring the performability of degradable multiprocessor systems. For a regularly structured multiprocessor system, it is fast and easy to use this technique for evaluating system reliability with statistically independent component reliabilities. System availability can be also evaluated by this reliability study. 6 references.
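
    Closed-form reliability expressions of the kind derived in the paper reduce, for statistically independent components, to series, parallel, and k-of-n building blocks; a sketch (the bus-and-processor numbers are invented, not from C.mmp or Tandem 16):

```python
from math import comb, prod

def series(rs):
    """All components must work."""
    return prod(rs)

def parallel(rs):
    """At least one component must work."""
    return 1.0 - prod(1.0 - r for r in rs)

def k_of_n(k, n, r):
    """At least k of n identical, independent components work."""
    return sum(comb(n, i) * r**i * (1.0 - r)**(n - i) for i in range(k, n + 1))

# Invented example: system is up if the shared bus works AND
# at least 2 of 4 processors work.
r_sys = series([0.99, k_of_n(2, 4, 0.95)])
print(r_sys)   # about 0.9895
```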

  9. NASCOM network: Ground communications reliability report

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A reliability performance analysis of the NASCOM Network circuits is reported. Network performance narrative summary is presented to include significant changes in circuit configurations, current figures, and trends in each trouble category with notable circuit totals specified. Lost time and interruption tables listing circuits which were affected by outages showing their totals category are submitted. A special analysis of circuits with low reliabilities is developed with tables depicting the performance and graphs for individual reliabilities.

  10. Evaluation of competing software reliability predictions

    NASA Technical Reports Server (NTRS)

    Abdel-Ghaly, A. A.; Chan, P. Y.; Littlewood, B.

    1986-01-01

    Different software reliability models can produce very different answers when called upon to predict future reliability in a reliability growth context. Users need to know which, if any, of the competing predictions are trustworthy. Some techniques are presented which form the basis of a partial solution to this problem. Rather than attempting to decide which model is generally best, the approach adopted here allows a user to decide upon the most appropriate model for each application.

  11. MEMS Reliability Assurance Activities at JPL

    NASA Technical Reports Server (NTRS)

    Kayali, S.; Lawton, R.; Stark, B.

    2000-01-01

    An overview of Microelectromechanical Systems (MEMS) reliability assurance and qualification activities at JPL is presented, along with a discussion of the characterization of MEMS structures implemented in single-crystal silicon, polycrystalline silicon, CMOS, and LIGA processes. Additionally, common failure modes and mechanisms affecting MEMS structures, including radiation effects, are discussed. Common reliability and qualification practices contained in the MEMS Reliability Assurance Guideline are also presented.

  12. High Reliability and Excellence in Staffing.

    PubMed

    Mensik, Jennifer

    2015-01-01

    Nurse staffing is a complex issue, with many facets and no one right answer. High-reliability organizations (HROs) strive and succeed in achieving a high degree of safety or reliability despite operating in hazardous conditions. HROs have systems in place that make them extremely consistent in accomplishing their goals and avoiding potential errors. However, the inability to resolve quality issues may very well be related to the lack of adoption of high-reliability principles throughout our organizations. PMID:26625582

  13. Reliability of the Deployment Resiliency Assessment.

    PubMed

    Simon, Samuel E; Stewart, Kate; Kloc, Michelle; Williams, Thomas V; Wilmoth, Margaret C

    2016-07-01

    This article describes the reliability of the instruments embedded in a mental health screening instrument designed to detect risky drinking, depression, and post-traumatic stress disorder among members of the Armed Forces. The instruments were generally reliable; however, the risky drinking screen (Alcohol Use Disorders Identification Test-Consumption) had unacceptable reliability (α = 0.58). This was the first attempt to assess the psychometric properties of a screening and assessment instrument widely used for members of the Armed Forces. PMID:27391616

  14. Signal verification can promote reliable signalling.

    PubMed

    Broom, Mark; Ruxton, Graeme D; Schaefer, H Martin

    2013-11-22

    The central question in communication theory is whether communication is reliable, and if so, which mechanisms select for reliability. The primary approach in the past has been to attribute reliability to strategic costs associated with signalling as predicted by the handicap principle. Yet, reliability can arise through other mechanisms, such as signal verification; but the theoretical understanding of such mechanisms has received relatively little attention. Here, we model whether verification can lead to reliability in repeated interactions that typically characterize mutualisms. Specifically, we model whether fruit consumers that discriminate among poor- and good-quality fruits within a population can select for reliable fruit signals. In our model, plants either signal or they do not; costs associated with signalling are fixed and independent of plant quality. We find parameter combinations where discriminating fruit consumers can select for signal reliability by abandoning unprofitable plants more quickly. This self-serving behaviour imposes costs upon plants as a by-product, rendering it unprofitable for unrewarding plants to signal. Thus, strategic costs to signalling are not a prerequisite for reliable communication. We expect verification to more generally explain signal reliability in repeated consumer-resource interactions that typify mutualisms but also in antagonistic interactions such as mimicry and aposematism. PMID:24068354

  15. Software For Computing Reliability Of Other Software

    NASA Technical Reports Server (NTRS)

    Nikora, Allen; Antczak, Thomas M.; Lyu, Michael

    1995-01-01

    Computer Aided Software Reliability Estimation (CASRE) is a computer program developed for use in measuring the reliability of other software. It is easier for non-specialists in reliability to use than many other currently available programs developed for the same purpose. CASRE incorporates the mathematical modeling capabilities of the public-domain Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) computer program and runs in the Windows software environment. It provides a menu-driven command interface; enabling and disabling of menu options guides the user through (1) selection of a set of failure data, (2) execution of a mathematical model, and (3) analysis of results from the model. Written in C language.

  16. Reliability analysis of continuous fiber composite laminates

    NASA Technical Reports Server (NTRS)

    Thomas, David J.; Wetherhold, Robert C.

    1991-01-01

    This paper describes two methods, the maximum distortion energy (MDE) and the principle of independent action (PIA), developed for the analysis of the reliability of a single continuous composite lamina. It is shown that, for the typical laminated plate structure, the individual lamina reliabilities can be combined in order to produce the upper and the lower bounds of reliability for the laminate, similar in nature to the bounds on properties produced from variational elastic methods. These limits were derived for both the interactive and the model failure considerations. Analytical expressions were also derived for the sensitivity of the reliability limits with respect to changes in the Weibull parameters and in loading conditions.
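
    The idea of bounding system reliability from per-lamina reliabilities can be illustrated with the elementary series-system bounds: with independent laminae the weakest-link product is attained, while under perfect positive dependence the weakest lamina alone governs. This sketch shows those elementary bounds with invented numbers, not the paper's MDE/PIA derivation:

```python
from math import prod

def laminate_reliability_bounds(lamina_rs):
    """Elementary series-system bounds on laminate reliability from
    per-lamina reliabilities: with independent laminae the weakest-link
    product prod(R_i) applies; under perfect positive dependence the
    weakest lamina alone governs, min(R_i).
    Illustrative only -- not the bounds derived in the paper."""
    return prod(lamina_rs), min(lamina_rs)

r_lo, r_hi = laminate_reliability_bounds([0.999, 0.995, 0.98])
print(r_lo, r_hi)   # about 0.9741 and 0.98
```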

  17. Signal verification can promote reliable signalling

    PubMed Central

    Broom, Mark; Ruxton, Graeme D.; Schaefer, H. Martin

    2013-01-01

    The central question in communication theory is whether communication is reliable, and if so, which mechanisms select for reliability. The primary approach in the past has been to attribute reliability to strategic costs associated with signalling as predicted by the handicap principle. Yet, reliability can arise through other mechanisms, such as signal verification; but the theoretical understanding of such mechanisms has received relatively little attention. Here, we model whether verification can lead to reliability in repeated interactions that typically characterize mutualisms. Specifically, we model whether fruit consumers that discriminate among poor- and good-quality fruits within a population can select for reliable fruit signals. In our model, plants either signal or they do not; costs associated with signalling are fixed and independent of plant quality. We find parameter combinations where discriminating fruit consumers can select for signal reliability by abandoning unprofitable plants more quickly. This self-serving behaviour imposes costs upon plants as a by-product, rendering it unprofitable for unrewarding plants to signal. Thus, strategic costs to signalling are not a prerequisite for reliable communication. We expect verification to more generally explain signal reliability in repeated consumer–resource interactions that typify mutualisms but also in antagonistic interactions such as mimicry and aposematism. PMID:24068354

  18. Integrated reliability program for Scout research vehicle.

    NASA Technical Reports Server (NTRS)

    Morris, B. V.; Welch, R. C.

    1967-01-01

    Integrated reliability program for Scout launch vehicle in terms of design specification, review functions, malfunction reporting, failed parts analysis, quality control, standardization and certification

  19. NEPP DDR Device Reliability FY13 Report

    NASA Technical Reports Server (NTRS)

    Guertin, Steven M.; Armbar, Mehran

    2014-01-01

    This document reports the status of the NEPP Double Data Rate (DDR) Device Reliability effort for FY2013. The task targeted the general reliability of more than 100 DDR2 devices from Hynix, Samsung, and Micron. Detailed characterization of some devices when stressed by several data storage patterns was studied, targeting the ability of the data cells to store the different data patterns without refresh and highlighting the weakest bits. Keywords: DDR2, reliability, data retention, temperature stress, test system evaluation, general reliability, IDD measurements, electronic parts, parts testing, microcircuits.

  20. Immodest Witnesses: Reliability and Writing Assessment

    ERIC Educational Resources Information Center

    Gallagher, Chris W.

    2014-01-01

    This article offers a survey of three reliability theories in writing assessment: positivist, hermeneutic, and rhetorical. Drawing on an interdisciplinary investigation of the notion of "witnessing," this survey emphasizes the kinds of readers and readings each theory of reliability produces and the epistemological grounds on which it…

  1. Evaluating Reliability: A Cost-Effectiveness Approach.

    ERIC Educational Resources Information Center

    Hogan, Andrew

    1986-01-01

    This study derives the economic costs of misclassification in nursing home patient classification systems. These costs are then used as weights to estimate the reliability of a functional assessment instrument. Results suggest that reliability must be redefined and remeasured with each substantively new application of an assessment instrument.…

  2. 78 FR 38311 - Reliability Technical Conference Agenda

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-26

    ... Energy Regulatory Commission Reliability Technical Conference Agenda Reliability Technical Docket No... Notice of Technical Conference issued on May 7, 2013, the Commission will hold a technical conference on... regarding the matters discussed at the technical conference. Any person or entity wishing to submit...

  3. 76 FR 71011 - Reliability Technical Conference Agenda

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-16

    ... Energy Regulatory Commission Reliability Technical Conference Agenda Reliability Technical Conference... Staff. Not consolidated. As announced in the Notice of Technical Conference issued on October 7, 2011, the Commission will hold a technical conference on Tuesday, November 29, 2011, from 1 p.m. to 5...

  4. Reliability Reporting Practices in Rape Myth Research.

    ERIC Educational Resources Information Center

    Buhi, Eric R.

    2005-01-01

    A number of school-based programs address sexual violence by focusing on adolescents' attitudes about rape or acceptance of rape myths. However, real problems exist in the literature regarding measurement of rape myth acceptance, including issues of reliability and validity. This paper addresses measurement reliability issues and reviews…

  5. Reliability measurement for operational avionics software

    NASA Technical Reports Server (NTRS)

    Thacker, J.; Ovadia, F.

    1979-01-01

    Quantitative measures of reliability for operational software in embedded avionics computer systems are presented. Analysis is carried out on data collected during flight testing and from both static and dynamic simulation testing. Failure rate is found to be a useful statistic for estimating software quality and recognizing reliability trends during the operational phase of software development.

  6. Coefficient Alpha and Reliability of Scale Scores

    ERIC Educational Resources Information Center

    Almehrizi, Rashid S.

    2013-01-01

    The majority of large-scale assessments develop various score scales that are either linear or nonlinear transformations of raw scores for better interpretations and uses of assessment results. The current formula for coefficient alpha (α; the commonly used reliability coefficient) only provides internal consistency reliability estimates of raw…
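
    For reference, the raw-score coefficient alpha this abstract refers to can be computed directly from an item-score matrix; the scores below are illustrative data only, not from the study:

    ```python
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Coefficient alpha for an (n_respondents, k_items) raw-score matrix."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
        return (k / (k - 1)) * (1 - item_vars / total_var)

    # Toy 4-item scale scored by 5 respondents (illustrative only).
    scores = np.array([
        [3, 4, 3, 4],
        [2, 2, 3, 2],
        [4, 5, 4, 5],
        [1, 2, 1, 2],
        [3, 3, 4, 3],
    ])
    print(round(cronbach_alpha(scores), 3))  # prints 0.949
    ```

    Note that this is the classical raw-score formula; the article's point is precisely that it does not carry over unchanged to nonlinear scale-score transformations.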

  7. Scale Reliability Evaluation with Heterogeneous Populations

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A latent variable modeling approach for scale reliability evaluation in heterogeneous populations is discussed. The method can be used for point and interval estimation of reliability of multicomponent measuring instruments in populations representing mixtures of an unknown number of latent classes or subpopulations. The procedure is helpful also…

  8. Simulation of reliability in multiserver computer networks

    NASA Astrophysics Data System (ADS)

    Minkevičius, Saulius

    2012-11-01

    The reliability performance of multiserver computer networks motivates this paper. A probability limit theorem on the extreme queue length in open multiserver queueing networks in heavy traffic is derived and applied to a reliability model for multiserver computer networks, relating the time of failure of a multiserver computer network to the system parameters.

  9. The Meaning and Consequences of "Reliability"

    ERIC Educational Resources Information Center

    Moss, Pamela A.

    2004-01-01

    The concern behind my question, "Can there be validity without reliability?" (Moss, 1994), was about the influence of measurement practices on the quality of education. I argued that conventional operationalizations of reliability in the measurement literature, which I summarized as "consistency, quantitatively defined, among independent…

  10. Approach to reliability when applying new technologies

    NASA Technical Reports Server (NTRS)

    Bear, J. C.

    1981-01-01

    Tactical weapon systems, while different in many respects from PTTI applications, face similar risks in achieving reliability in development. General principles derived from experience in achieving high reliability in tactical weapon systems are selectively summarized for application to new technologies in unusual environments.

  11. Stability Reliability of the Behavior Rating Profile.

    ERIC Educational Resources Information Center

    Ellers, Robert A.; And Others

    1989-01-01

    Examined test-retest stability of Behavior Rating Profile for students in grades 1-12 (N=198), parents (N=212), and teachers (N=176) on 3 norm-referenced scales. Found Teacher Rating scale reliable across all grades for screening and eligibility, Parent Rating scale reliable for grades 3-12 screening and grades 3-6, 11, and 12 eligibility. Found…

  12. Calculation reliability in vehicle accident reconstruction.

    PubMed

    Wach, Wojciech

    2016-06-01

    The reconstruction of vehicle accidents is subject to assessment in terms of the reliability of a specific system of engineering and technical operations. In the article [26] a formalized concept of the reliability of vehicle accident reconstruction, defined using Bayesian networks, was proposed. The current article is focused on the calculation reliability since that is the most objective section of this model. It is shown that calculation reliability in accident reconstruction is not another form of calculation uncertainty. The calculation reliability is made dependent on modeling reliability, adequacy of the model and relative uncertainty of calculation. All the terms are defined. An example is presented concerning the analytical determination of the collision location of two vehicles on the road in the absence of evidential traces. It has been proved that the reliability of this kind of calculations generally does not exceed 0.65, despite the fact that the calculation uncertainty itself can reach only 0.05. In this example special attention is paid to the analysis of modeling reliability and calculation uncertainty using sensitivity coefficients and weighted relative uncertainty. PMID:27061147
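
    The sensitivity-coefficient and weighted-relative-uncertainty analysis the abstract mentions can be sketched as a root-sum-square combination of input uncertainties weighted by sensitivity coefficients; the coefficients and uncertainty values below are hypothetical illustrations, not values from the paper:

    ```python
    import math

    def weighted_relative_uncertainty(sensitivities, rel_uncertainties):
        """Root-sum-square combination of relative input uncertainties,
        each weighted by its sensitivity coefficient."""
        return math.sqrt(sum((s * u) ** 2
                             for s, u in zip(sensitivities, rel_uncertainties)))

    # Hypothetical inputs (e.g. impact speed, friction coefficient, rest position).
    u = weighted_relative_uncertainty([1.0, 0.6, 0.3], [0.03, 0.05, 0.04])
    print(round(u, 4))  # prints 0.0441
    ```

    This illustrates the abstract's distinction: the combined calculation uncertainty can stay small (a few percent) even when the overall calculation reliability, which also depends on modeling reliability and model adequacy, is much lower.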

  13. 75 FR 71625 - System Restoration Reliability Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-24

    ... Reliability Standards for the Bulk-Power System, Order No. 693, 72 FR 16416 at P 297 (Apr. 4, 2007), FERC... No. 486, 52 FR 47897 (Dec. 17, 1987), FERC Stats. & Regs. ] 30,783 (1987). \\34\\ 18 CFR 380.4(a)(5... Energy Regulatory Commission 18 CFR Part 40 System Restoration Reliability Standards November 18,...

  14. Reliability of telescopes for the lunar surface

    NASA Astrophysics Data System (ADS)

    Benaroya, Haym

    1995-02-01

    The subject of risk and reliability for lunar structures, in particular lunar-based telescopes, is introduced and critical issues deliberated. General discussions are made more specific regarding the lunar telescope, but this paper provides a framework for further quantitative reliability studies.

  15. 76 FR 30341 - Reliable Storage 1 LLC;

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-25

    ... Reliable Storage 1 LLC; Notice of Preliminary Permit Application Accepted for Filing and Soliciting Comments, Motions to Intervene, and Competing Applications On March 25, 2011, Reliable Storage 1 LLC filed... permission. The proposed pumped storage project would consist of the following: (1) A 70-foot-high,...

  16. 46 CFR 169.619 - Reliability.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 7 2012-10-01 2012-10-01 false Reliability. 169.619 Section 169.619 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) NAUTICAL SCHOOLS SAILING SCHOOL VESSELS Machinery and Electrical Steering Systems § 169.619 Reliability. (a) Except where the OCMI judges it impracticable,...

  17. 46 CFR 169.619 - Reliability.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 7 2011-10-01 2011-10-01 false Reliability. 169.619 Section 169.619 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) NAUTICAL SCHOOLS SAILING SCHOOL VESSELS Machinery and Electrical Steering Systems § 169.619 Reliability. (a) Except where the OCMI judges it impracticable,...

  18. 46 CFR 169.619 - Reliability.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 7 2013-10-01 2013-10-01 false Reliability. 169.619 Section 169.619 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) NAUTICAL SCHOOLS SAILING SCHOOL VESSELS Machinery and Electrical Steering Systems § 169.619 Reliability. (a) Except where the OCMI judges it impracticable,...

  19. 46 CFR 169.619 - Reliability.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 7 2014-10-01 2014-10-01 false Reliability. 169.619 Section 169.619 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) NAUTICAL SCHOOLS SAILING SCHOOL VESSELS Machinery and Electrical Steering Systems § 169.619 Reliability. (a) Except where the OCMI judges it impracticable,...

  20. 46 CFR 169.619 - Reliability.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false Reliability. 169.619 Section 169.619 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) NAUTICAL SCHOOLS SAILING SCHOOL VESSELS Machinery and Electrical Steering Systems § 169.619 Reliability. (a) Except where the OCMI judges it impracticable,...

  1. Portfolio Assessment: Increasing Reliability and Validity.

    ERIC Educational Resources Information Center

    Griffee, Dale

    2002-01-01

    Addresses the traditional understanding of reliability as it pertains to writing portfolio assessments. Offers a list of practical actions that can be taken to increase assessment reliability, including explicit definitions of what a portfolio holds, rater training, rater burnout, and consistent rating procedures. (Contains 26 references.) (NB)

  2. Software reliability experiments data analysis and investigation

    NASA Technical Reports Server (NTRS)

    Walker, J. Leslie; Caglayan, Alper K.

    1991-01-01

    The objectives are to investigate the fundamental reasons which cause independently developed software programs to fail dependently, and to examine fault tolerant software structures which maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently from the software components are available. The authors present a theory of general program checkers that have potential application for acceptance tests.

  3. Relating design and environmental variables to reliability

    NASA Astrophysics Data System (ADS)

    Kolarik, William J.; Landers, Thomas L.

    The combination of space application and nuclear power source demands high-reliability hardware. The possibilities of failure, either an inability to provide power or a catastrophic accident, must be minimized. Nuclear power experiences on the ground have led to highly sophisticated probabilistic risk assessment procedures, most of which require quantitative information to adequately assess such risks. In the area of hardware risk analysis, reliability information plays a key role. One of the lessons learned from the Three Mile Island experience is that thorough analyses of critical components are essential. Nuclear-grade equipment shows some reliability advantages over commercial; however, no statistically significant difference has been found. A recent study pertaining to spacecraft electronics reliability examined some 2500 malfunctions on more than 300 aircraft. The study classified the equipment failures into seven general categories. Design deficiencies and lack of environmental protection accounted for about half of all failures. Within each class, limited reliability modeling was performed using a Weibull failure model.

  4. Estimating the Reliability of a Crewed Spacecraft

    NASA Astrophysics Data System (ADS)

    Lutomski, M. G.; Garza, J.

    2012-01-01

    Now that the Space Shuttle Program has been retired, the Russian Soyuz launcher and Soyuz spacecraft are the only means for crew transportation to and from the International Space Station (ISS). Are the astronauts and cosmonauts safer on the Soyuz than the Space Shuttle system? How do you estimate the reliability of such a crewed spacecraft? The recent loss of the 44P Progress resupply flight to the ISS has put these questions front and center. The Soyuz launcher has been in operation for over 40 years. There have been only two Loss of Crew (LOC) incidents and two Loss of Mission (LOM) incidents involving crew missions. Given that the most recent crewed Soyuz launcher incident took place in 1983, how do we determine the current reliability of such a system? How do the failures of unmanned Soyuz family launchers such as the 44P impact the reliability of the currently operational crewed launcher? Does the Soyuz exhibit characteristics that demonstrate reliability growth, and how would that be reflected in future estimates of success? In addition, NASA has begun development of the Orion, or Multi-Purpose Crew Vehicle, and started an initiative to purchase Commercial Crew services from private firms. The reliability targets are currently several times higher than the last Shuttle reliability estimate. Can these targets be compared to the reliability of the Soyuz, arguably the most reliable crewed spacecraft and launcher in the world, to determine whether they are realistic and achievable? To help answer these questions this paper will explore how to estimate the reliability of the Soyuz launcher/spacecraft system over its mission to give a benchmark for other human spaceflight vehicles and their missions. Specifically this paper will look at estimating the Loss of Mission (LOM) and Loss of Crew (LOC) probability for an ISS crewed Soyuz launcher/spacecraft mission using historical data, reliability growth, and Probabilistic Risk Assessment (PRA) techniques.
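
    One generic PRA-style way to turn the historical record the abstract describes into a per-flight estimate is a beta-binomial update on success/failure counts. This is only a sketch of the general technique, not the paper's method; the flight count below is a hypothetical illustration, not the actual Soyuz manifest:

    ```python
    def beta_posterior_mean(failures: int, flights: int,
                            a: float = 0.5, b: float = 0.5) -> float:
        """Posterior mean of the per-flight failure probability under a
        Jeffreys Beta(0.5, 0.5) prior updated with observed counts."""
        return (failures + a) / (flights + a + b)

    # Two LOC incidents per the abstract; the flight total is assumed
    # for illustration only.
    flights, loc_incidents = 140, 2
    p_loc = beta_posterior_mean(loc_incidents, flights)
    print(f"estimated per-flight LOC probability = {p_loc:.4f}")  # prints 0.0177
    ```

    A simple count-based update like this deliberately ignores reliability growth; the paper's point is that growth models and PRA refine such a naive benchmark.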

  5. Scaled CMOS Technology Reliability Users Guide

    NASA Technical Reports Server (NTRS)

    White, Mark

    2010-01-01

    The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this relevant field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect to the high reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly-reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology on how to accomplish this and techniques for deriving the expected product-level reliability on commercial memory products are provided. Competing mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level of several scaled memory products to assess the performance degradation and product reliability. Acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope (beta)=1, and a main population breakdown with an increasing failure rate. Retention time soft error rates are calculated and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation. A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm2 for several scaled SDRAM generations is
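
    The two soft-error populations described here (random weak bits with Weibull slope β=1, plus a main wear-out population with increasing failure rate) can be sketched as a mixture of two Weibull distributions; all parameters below are illustrative, not measured SDRAM values:

    ```python
    import math

    def weibull_cdf(t: float, beta: float, eta: float) -> float:
        """Fraction of bits failed by time t in a Weibull(beta, eta) population."""
        return 1.0 - math.exp(-((t / eta) ** beta))

    def two_population_failures(t, frac_early, beta_early, eta_early,
                                beta_main, eta_main):
        """Mixture of an early population (beta=1: random weak bits) and a
        main wear-out population (beta>1: increasing failure rate)."""
        return (frac_early * weibull_cdf(t, beta_early, eta_early)
                + (1 - frac_early) * weibull_cdf(t, beta_main, eta_main))

    # Illustrative retention-time parameters only.
    f = two_population_failures(t=64.0, frac_early=1e-4,
                                beta_early=1.0, eta_early=50.0,
                                beta_main=3.0, eta_main=500.0)
    print(f"{f:.2e}")  # prints 2.17e-03
    ```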

  6. Mission Reliability Estimation for Repairable Robot Teams

    NASA Technical Reports Server (NTRS)

    Trebi-Ollennu, Ashitey; Dolan, John; Stancliff, Stephen

    2010-01-01

    A mission reliability estimation method has been designed to translate mission requirements into choices of robot modules in order to configure a multi-robot team to have high reliability at minimal cost. In order to build cost-effective robot teams for long-term missions, one must be able to compare alternative design paradigms in a principled way by comparing the reliability of different robot models and robot team configurations. Core modules have been created including: a probabilistic module with reliability-cost characteristics, a method for combining the characteristics of multiple modules to determine an overall reliability-cost characteristic, and a method for the generation of legitimate module combinations based on mission specifications and the selection of the best of the resulting combinations from a cost-reliability standpoint. The developed methodology can be used to predict the probability of a mission being completed, given information about the components used to build the robots, as well as information about the mission tasks. In the research for this innovation, sample robot missions were examined and compared to the performance of robot teams with different numbers of robots and different numbers of spare components. Data that a mission designer would need was factored in, such as whether it would be better to have a spare robot versus an equivalent number of spare parts, or if mission cost can be reduced while maintaining reliability using spares. This analytical model was applied to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Particularly scrutinized were teams using either redundancy (spare robots) or repairability (spare components). Using conservative estimates of the cost-reliability relationship, results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares. This suggests that the
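
    The redundancy-versus-repairability comparison described above can be illustrated with a simple k-of-n reliability model; the per-robot reliability values are assumptions for illustration, not the study's module data:

    ```python
    from math import comb

    def k_of_n(k: int, n: int, p: float) -> float:
        """Probability that at least k of n independent units,
        each with reliability p, survive the mission."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))

    p_robot = 0.90   # assumed single-robot mission reliability
    need = 2         # robots required to complete the mission

    redundant = k_of_n(need, 3, p_robot)   # redundancy: carry a spare robot
    # Repairability: a spare for the weakest module raises per-robot
    # reliability (the improved value is assumed, not derived).
    repairable = k_of_n(need, 2, 0.97)

    print(f"spare robot: {redundant:.4f}, spare parts: {repairable:.4f}")
    # prints spare robot: 0.9720, spare parts: 0.9409
    ```

    A cost-per-unit-reliability comparison on top of numbers like these is the kind of tradeoff the methodology formalizes.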

  7. Inventing the future of reliability: FERC's recent orders and the consolidation of reliability authority

    SciTech Connect

    Skees, J. Daniel

    2010-06-15

    The Energy Policy Act of 2005 established mandatory reliability standard enforcement under a system in which the Federal Energy Regulatory Commission and the Electric Reliability Organization would have their own spheres of responsibility and authority. Recent orders, however, reflect the Commission's frustration with the reliability standard drafting process and suggest that the Electric Reliability Organization's discretion is likely to receive less deference in the future. (author)

  8. 76 FR 66055 - North American Electric Reliability Corporation; Order Approving Interpretation of Reliability...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-25

    ... intervene serves to make AMP a party to this proceeding. \\9\\ 76 FR 52,325 (2011). \\10\\ 18 CFR 385.214 (2011... Energy Regulatory Commission North American Electric Reliability Corporation; Order Approving... Electric Reliability Corporation (NERC), the Commission-certified Electric Reliability Organization...

  9. Towards cost-effective reliability through visualization of the reliability option space

    NASA Technical Reports Server (NTRS)

    Feather, Martin S.

    2004-01-01

    In planning a complex system's development there can be many options to improve its reliability. Typically their sum total cost exceeds the budget available, so it is necessary to select judiciously from among them. Reliability models can be employed to calculate the cost and reliability implications of a candidate selection.

  10. 76 FR 23222 - Electric Reliability Organization Interpretation of Transmission Operations Reliability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-26

    ..., 52 FR 47897 (Dec. 17, 1987), FERC Stats. & Regs. Preambles 1986-1990 ] 30,783 (1987). \\23\\ 18 CFR 380...) proposed interpretation of Reliability Standard, TOP-001-1, Requirement R8. DATES: Comments are due June 27... Requirement R8 in Commission-approved NERC Reliability Standard TOP-001-1 -- Reliability Responsibilities...

  11. Reliability Analysis and Reliability-Based Design Optimization of Circular Composite Cylinders Under Axial Compression

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2001-01-01

    This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
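
    The Monte Carlo side of such an analysis can be sketched with a toy limit state (failure when applied load exceeds buckling strength); the distributions and parameters below are assumptions, not the cylinder data from the report:

    ```python
    import random

    random.seed(0)

    def mc_failure_probability(n_samples: int = 100_000) -> float:
        """Monte Carlo estimate of P(load > strength) for a toy limit state
        with normally distributed strength and load (illustrative values)."""
        failures = 0
        for _ in range(n_samples):
            strength = random.gauss(100.0, 8.0)   # axial buckling strength
            load = random.gauss(70.0, 10.0)       # applied axial load
            if load > strength:
                failures += 1
        return failures / n_samples

    print(f"P_f = {mc_failure_probability():.4f}")
    ```

    In practice, as in the report, the expensive strength evaluation is often replaced by a response-surface model so that sampling at this volume stays tractable.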

  12. Probabilistic fatigue methodology for six nines reliability

    NASA Technical Reports Server (NTRS)

    Everett, R. A., Jr.; Bartlett, F. D., Jr.; Elber, Wolf

    1990-01-01

    Fleet readiness and flight safety strongly depend on the degree of reliability that can be designed into rotorcraft flight critical components. The current U.S. Army fatigue life specification for new rotorcraft is the so-called six nines reliability, or a probability of failure of one in a million. The progress of a round robin which was established by the American Helicopter Society (AHS) Subcommittee for Fatigue and Damage Tolerance is reviewed to investigate reliability-based fatigue methodology. The participants in this cooperative effort are from the U.S. Army Aviation Systems Command (AVSCOM) and the rotorcraft industry. One phase of the joint activity examined fatigue reliability under uniquely defined conditions for which only one answer was correct. The other phases were set up to learn how the different industry methods in defining fatigue strength affected the mean fatigue life and reliability calculations. Hence, constant amplitude and spectrum fatigue test data were provided so that each participant could perform their standard fatigue life analysis. As a result of this round robin, the probabilistic logic which includes both fatigue strength and spectrum loading variability in developing a consistent reliability analysis was established. In this first study, the reliability analysis was limited to the linear cumulative damage approach. However, it is expected that superior fatigue life prediction methods will ultimately be developed through this open AHS forum. To that end, these preliminary results were useful in identifying some topics for additional study.

  13. Integrating Reliability Analysis with a Performance Tool

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Ulrey, Michael

    1995-01-01

    A large number of commercial simulation tools support performance oriented studies of complex computer and communication systems. Reliability of these systems, when desired, must be obtained by remodeling the system in a different tool. This has obvious drawbacks: (1) substantial extra effort is required to create the reliability model; (2) through modeling error the reliability model may not reflect precisely the same system as the performance model; (3) as the performance model evolves one must continuously reevaluate the validity of assumptions made in that model. In this paper we describe an approach, and a tool that implements this approach, for integrating a reliability analysis engine into a production quality simulation based performance modeling tool, and for modeling within such an integrated tool. The integrated tool allows one to use the same modeling formalisms to conduct both performance and reliability studies. We describe how the reliability analysis engine is integrated into the performance tool, describe the extensions made to the performance tool to support the reliability analysis, and consider the tool's performance.

  14. A Reliability Evaluation System of Association Rules

    NASA Astrophysics Data System (ADS)

    Chen, Jiangping; Feng, Wanshu; Luo, Minghai

    2016-06-01

    In mining association rules, evaluation of the rules is highly important because it directly affects the usability and applicability of the mining output. In this paper, the concept of reliability was imported into association rule evaluation. The reliability of association rules was defined as the degree to which the rules accord with the mined data set. This degree comprises three levels of measurement: accuracy, completeness, and consistency of rules. To show its effectiveness, the "accuracy-completeness-consistency" reliability evaluation system was applied to two extremely different data sets, namely, a basket simulation data set and a multi-source lightning data fusion. Results show that the reliability evaluation system works well on both the simulation data set and the actual problem. The three-dimensional reliability evaluation can effectively detect useless rules to be screened out and add missing rules, thereby improving the reliability of mining results. Furthermore, the proposed reliability evaluation system is applicable to many research fields; using the system in analysis can facilitate obtaining more accurate, complete, and consistent association rules.

  15. Multisite Reliability of Cognitive BOLD Data

    PubMed Central

    Brown, Gregory G.; Mathalon, Daniel H.; Stern, Hal; Ford, Judith; Mueller, Bryon; Greve, Douglas N.; McCarthy, Gregory; Voyvodic, Jim; Glover, Gary; Diaz, Michele; Yetter, Elizabeth; Burak Ozyurt, I.; Jorgensen, Kasper W.; Wible, Cynthia G.; Turner, Jessica A.; Thompson, Wesley K.; Potkin, Steven G.

    2010-01-01

    Investigators perform multi-site functional magnetic resonance imaging studies to increase statistical power, to enhance generalizability, and to improve the likelihood of sampling relevant subgroups. Yet undesired site variation in imaging methods could offset these potential advantages. We used variance components analysis to investigate sources of variation in the blood oxygen level dependent (BOLD) signal across four 3T magnets in voxelwise and region of interest (ROI) analyses. Eighteen participants traveled to four magnet sites to complete eight runs of a working memory task involving emotional or neutral distraction. Person variance was more than 10 times larger than site variance for five of six ROIs studied. Person-by-site interactions, however, contributed sizable unwanted variance to the total. Averaging over runs increased between-site reliability, with many voxels showing good to excellent between-site reliability when eight runs were averaged and regions of interest showing fair to good reliability. Between-site reliability depended on the specific functional contrast analyzed in addition to the number of runs averaged. Although median effect size was correlated with between-site reliability, dissociations were observed for many voxels. Brain regions where the pooled effect size was large but between-site reliability was poor were associated with reduced individual differences. Brain regions where the pooled effect size was small but between-site reliability was excellent were associated with a balance of participants who displayed consistently positive or consistently negative BOLD responses. Although between-site reliability of BOLD data can be good to excellent, acquiring highly reliable data requires robust activation paradigms, ongoing quality assurance, and careful experimental control. PMID:20932915
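
    The variance-components view of between-site reliability described here can be sketched as an intraclass-correlation-style ratio in which averaging over runs shrinks the error term. This is a generic decomposition, not the authors' exact model, and the variance values are illustrative:

    ```python
    def between_site_reliability(var_person: float, var_site: float,
                                 var_interaction: float, var_error: float,
                                 n_runs: int = 1) -> float:
        """ICC-style reliability of run-averaged measures: person variance
        over person variance plus site, person-by-site, and (run-averaged)
        error variance."""
        noise = var_site + var_interaction + var_error / n_runs
        return var_person / (var_person + noise)

    # Person variance >10x site variance, per the abstract (numbers illustrative).
    print(round(between_site_reliability(10.0, 0.8, 1.5, 8.0, n_runs=8), 3))
    # prints 0.752
    ```

    The formula makes the abstract's two findings concrete: averaging runs only reduces the error component, so a sizable person-by-site interaction caps the achievable reliability no matter how many runs are averaged.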

  16. Reliability Degradation Due to Stockpile Aging

    SciTech Connect

    Robinson, David G.

    1999-04-01

    The objective of this research is the investigation of alternative methods for characterizing the reliability of systems with time dependent failure modes associated with stockpile aging. Reference to 'reliability degradation' has, unfortunately, come to be associated with all types of aging analyses: both deterministic and stochastic. In this research, in keeping with the true theoretical definition, reliability is defined as a probabilistic description of system performance as a function of time. Traditional reliability methods used to characterize stockpile reliability depend on the collection of a large number of samples or observations. Clearly, after the experiments have been performed and the data has been collected, critical performance problems can be identified. A major goal of this research is to identify existing methods and/or develop new mathematical techniques and computer analysis tools to anticipate stockpile problems before they become critical issues. One of the most popular methods for characterizing the reliability of components, particularly electronic components, assumes that failures occur in a completely random fashion, i.e. uniformly across time. This method is based primarily on the use of constant failure rates for the various elements that constitute the weapon system, i.e. the systems do not degrade while in storage. Experience has shown that predictions based upon this approach should be regarded with great skepticism since the relationship between the life predicted and the observed life has been difficult to validate. In addition to this fundamental problem, the approach does not recognize that there are time dependent material properties and variations associated with the manufacturing process and the operational environment. To appreciate the uncertainties in predicting system reliability a number of alternative methods are explored in this report. All of the methods are very different from those currently used to assess stockpile
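
    The contrast the abstract draws between the constant-failure-rate assumption and a time-dependent (aging) model can be made concrete with the exponential and Weibull reliability functions; the parameter values below are illustrative, not stockpile data:

    ```python
    import math

    def reliability_constant(t: float, lam: float) -> float:
        """Constant-failure-rate (exponential) model: R(t) = exp(-lambda * t).
        Implies no degradation in storage."""
        return math.exp(-lam * t)

    def reliability_weibull(t: float, beta: float, eta: float) -> float:
        """Weibull model: R(t) = exp(-(t/eta)**beta); beta > 1 gives an
        increasing failure rate, i.e. aging."""
        return math.exp(-((t / eta) ** beta))

    # Same 20-year characteristic life, with and without aging (illustrative).
    t = 15.0
    print(round(reliability_constant(t, 1 / 20), 3),
          round(reliability_weibull(t, beta=2.5, eta=20.0), 3))
    # prints 0.472 0.614
    ```

    The two curves cross as t grows: the aging model is more optimistic early in life and sharply more pessimistic late in life, which is exactly why the constant-rate assumption misleads for stockpile-aging questions.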

  17. Reliability techniques in the petroleum industry

    NASA Technical Reports Server (NTRS)

    Williams, H. L.

    1971-01-01

    Quantitative reliability evaluation methods used in the Apollo Spacecraft Program are translated into petroleum industry requirements with emphasis on offsetting reliability demonstration costs and limited production runs. Described are the qualitative disciplines applicable, the definitions and criteria that accompany the disciplines, and the generic application of these disciplines to the chemical industry. The disciplines are then translated into proposed definitions and criteria for the industry, into a base-line reliability plan that includes these disciplines, and into application notes to aid in adapting the base-line plan to a specific operation.

  18. MEMS reliability: The challenge and the promise

    SciTech Connect

    Miller, W.M.; Tanner, D.M.; Miller, S.L.; Peterson, K.A.

    1998-05-01

    MicroElectroMechanical Systems (MEMS) that think, sense, act and communicate will open up a broad new array of cost-effective solutions only if they prove to be sufficiently reliable. A valid reliability assessment of MEMS has three prerequisites: (1) statistical significance; (2) a technique for accelerating fundamental failure mechanisms; and (3) valid physical models to allow prediction of failures during actual use. These already exist for the microelectronics portion of such integrated systems. The challenge lies in the less well understood micromachine portions and their synergistic effects with microelectronics. This paper presents a methodology addressing these prerequisites and a description of the underlying physics of reliability for micromachines.

  19. Apollo experience report: Reliability and quality assurance

    NASA Technical Reports Server (NTRS)

    Sperber, K. P.

    1973-01-01

    The reliability of the Apollo spacecraft resulted from the application of proven reliability and quality techniques and from sound management, engineering, and manufacturing practices. Continual assessment of these techniques and practices was made during the program, and, when deficiencies were detected, adjustments were made and the deficiencies were effectively corrected. The most significant practices, deficiencies, adjustments, and experiences during the Apollo Program are described in this report. These experiences can be helpful in establishing an effective base on which to structure an efficient reliability and quality assurance effort for future space-flight programs.

  20. B-52 stability augmentation system reliability

    NASA Technical Reports Server (NTRS)

    Bowling, T. C.; Key, L. W.

    1976-01-01

    The B-52 SAS (Stability Augmentation System) was developed and retrofitted to nearly 300 aircraft. It actively controls B-52 structural bending, provides improved yaw and pitch damping through sensors and electronic control channels, and puts complete reliance on hydraulic control power for rudder and elevators. The system has experienced over 300,000 flight hours and has exhibited service reliability comparable to the results of the reliability test program. Development experience points out numerous lessons with potential application in the mechanization and development of advanced technology control systems of high reliability.

  1. Reliability of a Series Pipe Network

    NASA Technical Reports Server (NTRS)

    Harris, Rodney; Chamis, Christopher (Technical Monitor)

    2001-01-01

    The goal of this NASA-funded research is to advance research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction methods for improved aerospace and aircraft propulsion system components. Reliability methods are used to quantify response uncertainties due to inherent uncertainties in design variables. In this report, several reliability methods are applied to a series pipe network. The observed responses are the head delivered by a main pump and the line head at a desired flow rate. The probability that the flow rate in the line will be less than a specified minimum will be discussed.
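    The kind of question this report answers, the probability that flow in the line falls below a specified minimum given uncertain design variables, can be illustrated with a crude Monte Carlo sketch. The hydraulic model (quadratic head loss, additive resistances in series) and the distributions below are invented for illustration and are not the report's.

```python
import random

def series_flow(h_pump, ks):
    # Series pipes: resistance coefficients add; with head loss h = K * Q^2
    # the delivered flow is Q = sqrt(h_pump / sum(K_i)).
    return (h_pump / sum(ks)) ** 0.5

def prob_flow_below(q_min, n=100_000, seed=1):
    # Monte Carlo estimate of P(Q < q_min) under assumed input distributions.
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        h = rng.gauss(100.0, 5.0)                      # pump head (assumed)
        ks = [rng.gauss(2.0, 0.2) for _ in range(3)]   # pipe resistances (assumed)
        if series_flow(max(h, 0.0), [max(k, 1e-6) for k in ks]) < q_min:
            failures += 1
    return failures / n
```

    Tightening `q_min` toward the nominal flow drives the failure probability from near 0 toward 1, which is the trade-off a reliability method quantifies.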

  2. Reliability of a Parallel Pipe Network

    NASA Technical Reports Server (NTRS)

    Herrera, Edgar; Chamis, Christopher (Technical Monitor)

    2001-01-01

    The goal of this NASA-funded research is to advance research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction methods for improved aerospace and aircraft propulsion system components. Reliability methods are used to quantify response uncertainties due to inherent uncertainties in design variables. In this report, several reliability methods are applied to a parallel pipe network. The observed responses are the head delivered by a main pump and the head values of two parallel lines at certain flow rates. The probability that the flow rates in the lines will be less than their specified minimums will be discussed.

  3. Measurement Practices for Reliability and Power Quality

    SciTech Connect

    Kueck, JD

    2005-05-06

    This report provides a distribution reliability measurement "toolkit" that is intended to be an asset to regulators, utilities and power users. The metrics and standards discussed range from simple reliability, to power quality, to the new blend of reliability and power quality analysis that is now developing. This report was sponsored by the Office of Electric Transmission and Distribution, U.S. Department of Energy (DOE). Inconsistencies presently exist in commonly agreed-upon practices for measuring the reliability of the distribution systems. However, efforts are being made by a number of organizations to develop solutions. In addition, there is growing interest in methods or standards for measuring power quality, and in defining power quality levels that are acceptable to various industries or user groups. The problems and solutions vary widely among geographic areas and among large investor-owned utilities, rural cooperatives, and municipal utilities; but there is still a great degree of commonality. Industry organizations such as the National Rural Electric Cooperative Association (NRECA), the Electric Power Research Institute (EPRI), the American Public Power Association (APPA), and the Institute of Electrical and Electronics Engineers (IEEE) have made tremendous strides in preparing self-assessment templates, optimization guides, diagnostic techniques, and better definitions of reliability and power quality measures. In addition, public utility commissions have developed codes and methods for assessing performance that consider local needs. There is considerable overlap among these various organizations, and we see real opportunity and value in sharing these methods, guides, and standards in this report. This report provides a "toolkit" containing synopses of noteworthy reliability measurement practices. The toolkit has been developed to address the interests of three groups: electric power users, utilities, and regulators. The report will also serve

  4. Trends in electronic structures and structural properties of MAX phases: a first-principles study on M(2)AlC (M = Sc, Ti, Cr, Zr, Nb, Mo, Hf, or Ta), M(2)AlN, and hypothetical M(2)AlB phases.

    PubMed

    Khazaei, Mohammad; Arai, Masao; Sasaki, Taizo; Estili, Mehdi; Sakka, Yoshio

    2014-12-17

    MAX phases are a large family of layered ceramics with many potential structural applications. A set of first-principles calculations was performed for M(2)AlC and M(2)AlN (M = Sc, Ti, Cr, Zr, Nb, Mo, Hf, or Ta) MAX phases as well as for hypothetical M(2)AlB to investigate trends in their electronic structures, formation energies, and various mechanical properties. Analysis of the calculated data is used to extend the idea that the elastic properties of MAX phases can be controlled according to the valence electron concentration. The valence electron concentration can be tuned through the various combinations of transition metal and nonmetal elements. PMID:25419878

  5. Demand Response For Power System Reliability: FAQ

    SciTech Connect

    Kirby, Brendan J

    2006-12-01

    Demand response is the most underutilized power system reliability resource in North America. Technological advances now make it possible to tap this resource to both reduce costs and improve reliability. Misconceptions concerning response capabilities tend to force loads to provide responses that they are less able to provide and often prohibit them from providing the most valuable reliability services. Fortunately this is beginning to change, with some ISOs making more extensive use of load response. This report is structured as a series of short questions and answers that address load response capabilities and power system reliability needs. Its objective is to further the use of responsive load as a bulk power system reliability resource in providing the fastest and most valuable ancillary services.

  6. Amorphous-silicon cell reliability testing

    NASA Technical Reports Server (NTRS)

    Lathrop, J. W.

    1985-01-01

    The work on reliability testing of solar cells is discussed. Results are given on initial temperature and humidity tests of amorphous silicon devices. Calibration and measurement procedures for amorphous and crystalline cells are given. Temperature stress levels are diagrammed.

  7. PV Module Reliability Research (Fact Sheet)

    SciTech Connect

    Not Available

    2013-06-01

    This National Center for Photovoltaics sheet describes the capabilities of its PV module reliability research. The scope and core competencies and capabilities are discussed and recent publications are listed.

  8. Flight control electronics reliability/maintenance study

    NASA Technical Reports Server (NTRS)

    Dade, W. W.; Edwards, R. H.; Katt, G. T.; Mcclellan, K. L.; Shomber, H. A.

    1977-01-01

    Collection and analysis of data are reported that concern the reliability and maintenance experience of flight control system electronics currently in use on passenger-carrying jet aircraft. Two airlines' B-747 airplane fleets were analyzed to assess the component reliability, system functional reliability, and achieved availability of the CAT II configuration flight control system. Also assessed were the costs generated by this system in the categories of spare equipment, schedule irregularity, and line and shop maintenance. The results indicate that although there is a marked difference in the geographic location and route pattern between the airlines studied, there is a close similarity in the reliability and the maintenance costs associated with the flight control electronics.

  9. Semiconductor Reliability--Another Field for Physicists.

    ERIC Educational Resources Information Center

    Derman, Samuel; Anderson, Wallace T.

    1994-01-01

    Stresses that an important industrial area is product reliability, especially for semiconductors. Suggests that physics students would benefit from training in semiconductors: the many modes of failure, radiation effects, and electrical contact problems. (MVL)

  10. MOV reliability evaluation and periodic verification scheduling

    SciTech Connect

    Bunte, B.D.

    1996-12-01

    The purpose of this paper is to establish a periodic verification testing schedule based on the expected long-term reliability of gate or globe motor-operated valves (MOVs). The methodology in this position paper determines the nominal (best estimate) design margin for any MOV based on the best available information pertaining to the MOV's design requirements, design parameters, existing hardware design, and present setup. The uncertainty in this margin is then determined using statistical means. By comparing the nominal margin to the uncertainty, the reliability of the MOV is estimated. The methodology is appropriate for evaluating the reliability of MOVs in the GL 89-10 program. It may be used following periodic testing to evaluate and trend MOV performance and reliability. It may also be used to evaluate the impact of proposed modifications and maintenance activities such as packing adjustments. In addition, it may be used to assess the impact of new information of a generic nature which impacts safety-related MOVs.
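    The paper's core comparison, a nominal design margin against its statistically determined uncertainty, amounts to evaluating P(margin > 0). Assuming a normally distributed margin (an assumption of this sketch, not necessarily the paper's method), that probability is a single Gaussian tail:

```python
from math import erf, sqrt

def margin_reliability(nominal_margin, margin_std):
    # Reliability as P(margin > 0) for a normally distributed margin:
    # Phi(nominal / std), the standard normal CDF at the margin ratio.
    return 0.5 * (1.0 + erf(nominal_margin / (margin_std * sqrt(2.0))))

# A 3-sigma nominal margin (e.g. 30 units of thrust margin, std 10)
print(margin_reliability(30.0, 10.0))
```

    A 3-sigma margin corresponds to roughly 99.87% reliability; trending the margin and its uncertainty over successive periodic tests is what lets the schedule be adjusted.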

  11. An interactive program for software reliability modeling

    NASA Technical Reports Server (NTRS)

    Farr, W. H.; Smith, O. D.

    1984-01-01

    With the tremendous growth in computer software, the demand has arisen for producing cost effective reliable software. Over the last 10 years an area of research has developed which attempts to address this problem by estimating a program's current reliability by modeling either the times between error detections or the error counts in past testing periods. A new tool for interactive software reliability analysis using the computer is described. This computer program allows the user to perform a complete reliability analysis using any of eight well-known models appearing in the literature. Some of the capabilities of the program are illustrated by means of an analysis of a set of simulated error data.

  12. Reliability for the 21st century

    SciTech Connect

    Keller-McNulty, Sallie; Wilson, A. G.

    2002-01-01

    The sophistication of science and technology is growing almost exponentially. Government and industry are relying more and more on science's advanced methods to assess reliability coupled with performance, safety, surety, cost, schedule, etc. Unfortunately, policy, cost, schedule, and other constraints imposed by the real world inhibit the ability of researchers to calculate these metrics efficiently and accurately using traditional methods. Because of such constraints, reliability must undergo an evolutionary change. The first step in this evolution is to reinterpret the concepts and responsibilities of scientists responsible for reliability calculations to meet the new century's needs. The next step is to mount a multidisciplinary approach to the quantification of reliability and its associated metrics using both empirical methods and auxiliary data sources, such as expert knowledge, corporate memory, and mathematical modeling and simulation.

  13. Operational reliability of standby safety systems

    SciTech Connect

    Grant, G.M.; Atwood, C.L.; Gentillon, C.D.

    1995-04-01

    The Idaho National Engineering Laboratory (INEL) is evaluating the operational reliability of several risk-significant standby safety systems based on the operating experience at US commercial nuclear power plants from 1987 through 1993. The reliability assessed is the probability that the system will perform its Probabilistic Risk Assessment (PRA) defined safety function. The quantitative estimates of system reliability are expected to be useful in risk-based regulation. This paper is an overview of the analysis methods and the results of the high pressure coolant injection (HPCI) system reliability study. Key characteristics include (1) descriptions of the data collection and analysis methods, (2) the statistical methods employed to estimate operational unreliability, (3) a description of how the operational unreliability estimates were compared with typical PRA results, both overall and for each dominant failure mode, and (4) a summary of results of the study.

  14. Reliability modelling and analysis of thermal MEMS

    NASA Astrophysics Data System (ADS)

    Muratet, Sylvaine; Lavu, Srikanth; Fourniols, Jean-Yves; Bell, George; Desmulliez, Marc P. Y.

    2006-04-01

    This paper presents a MEMS reliability study methodology based on the novel concept of 'virtual prototyping'. This methodology can be used for the development of reliable sensors or actuators and also to characterize their behaviour in specific use conditions and applications. The methodology is demonstrated on the U-shaped micro electro thermal actuator used as a test vehicle. To demonstrate this approach, a 'virtual prototype' has been developed with the modeling tools MATLAB and VHDL-AMS. A best-practice FMEA (Failure Mode and Effect Analysis) is applied to the thermal MEMS to investigate and assess the failure mechanisms. The reliability study is performed by injecting the identified defects into the 'virtual prototype'. The reliability characterization methodology predicts the evolution of the behavior of these MEMS as a function of the number of cycles of operation and specific operational conditions.

  15. An Ultra Reliability Project for NASA

    NASA Technical Reports Server (NTRS)

    Shapiro, Andrew A.

    2005-01-01

    NASA has embarked on a new program designed to improve the reliability of NASA systems. In this context, the goal for ultra reliability is to ultimately improve the systems by an order of magnitude. The approach outlined in this presentation involves five steps: 1. Divide NASA systems into seven sectors; 2. Establish sector champions and representatives from each NASA center; 3. Develop a challenge list for each sector using a team of NASA experts in each area with the sector champion facilitating the effort; 4. Develop mitigation strategies for each of the sectors' challenge lists and rank their importance by holding a workshop with area experts from government (NASA and non-NASA), universities and industry; 5. Develop a set of tasks for each sector in order of importance for improving the reliability of NASA systems. Several NASA-wide workshops have been held, identifying issues for reliability improvement and providing mitigation strategies for these issues.

  16. Reliability Studies for Fatigue-Crack Detection

    NASA Technical Reports Server (NTRS)

    Christner, B. K.; Rummel, W. D.; Knadler, J.

    1985-01-01

    Reusable test panels available to assess reliability of techniques that use fluorescent penetrant to detect fatigue cracks. Ultrasonic cleaning method developed for removing penetrant from panels prior to reuse.

  17. Reliability Issues for Photovoltaic Modules (Presentation)

    SciTech Connect

    Kurtz, S.

    2009-10-01

    Si modules good in field; new designs need reliability testing. CdTe & CIGS modules sensitive to moisture; carefully seal. CPV in product development stage; benefits from expertise in other industries.

  18. Reliability of chemical analyses of water samples

    SciTech Connect

    Beardon, R.

    1989-11-01

    Ground-water quality investigations require reliable chemical analyses of water samples. Unfortunately, laboratory analytical results are often unreliable. The Uranium Mill Tailings Remedial Action (UMTRA) Project's solution to this problem was to establish a two-phase quality assurance program for the analysis of water samples. In the first phase, eight laboratories analyzed three solutions of known composition. The analytical accuracy of each laboratory was ranked and three laboratories were awarded contracts. The second phase consists of on-going monitoring of the reliability of the selected laboratories. The following conclusions are based on two years' experience with the UMTRA Project's Quality Assurance Program. The reliability of laboratory analyses should not be taken for granted. Analytical reliability may be independent of the prices charged by laboratories. Quality assurance programs benefit both the customer and the laboratory.

  19. Reliability and risk assessment of structures

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.

    1991-01-01

    Development of reliability and risk assessment of structural components and structures is a major activity at Lewis Research Center. It consists of five program elements: (1) probabilistic loads; (2) probabilistic finite element analysis; (3) probabilistic material behavior; (4) assessment of reliability and risk; and (5) probabilistic structural performance evaluation. Recent progress includes: (1) the evaluation of the various uncertainties in terms of cumulative distribution functions for various structural response variables based on known or assumed uncertainties in primitive structural variables; (2) evaluation of the failure probability; (3) reliability and risk-cost assessment; and (4) an outline of an emerging approach for eventual certification of man-rated structures by computational methods. Collectively, the results demonstrate that the structural durability/reliability of man-rated structural components and structures can be effectively evaluated by using formal probabilistic methods.

  20. An Introduction to Reliability and Maintainability.

    ERIC Educational Resources Information Center

    Berridge, C. R.

    1984-01-01

    Discusses the need to include studies of reliability and maintainability during the design of any system. Topic areas addressed include availability calculations, complex systems and standby redundancy, availability and malfunction levels, design techniques, fault trees, functional maintenance, and others. (DH)

  1. Techniques for improving reliability of computers

    NASA Technical Reports Server (NTRS)

    Cater, W. C.; Mccarthy, C. E.; Jessep, D. C.; Wadia, A. B.; Milligan, F. G.; Bouricius, W. G.

    1972-01-01

    Modular design techniques improve methods of error detection, diagnosis, and recovery. A theoretical computer study, MARCS (Modular Architecture for Reliable Computer Systems), deals with postulated and modeled technology indigenous to 1975-1980. Study developments are discussed.

  2. Analyzing network reliability using structural motifs

    NASA Astrophysics Data System (ADS)

    Khorramzadeh, Yasamin; Youssef, Mina; Eubank, Stephen; Mowlaei, Shahir

    2015-04-01

    This paper uses the reliability polynomial, introduced by Moore and Shannon in 1956, to analyze the effect of network structure on diffusive dynamics such as the spread of infectious disease. We exhibit a representation for the reliability polynomial in terms of what we call structural motifs that is well suited for reasoning about the effect of a network's structural properties on diffusion across the network. We illustrate by deriving several general results relating graph structure to dynamical phenomena.
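    For a small graph, the Moore-Shannon reliability polynomial described here can be evaluated exactly by enumerating edge states; this sketch computes two-terminal reliability, one common variant, for an invented triangle example (the graph and terminals are not from the paper).

```python
from itertools import product

def two_terminal_reliability(n_nodes, edges, s, t, p):
    # Exact two-terminal reliability by enumerating the 2^m edge states:
    # R(p) = sum over working subsets S of p^|S| (1-p)^(m-|S|) [s ~ t in S].
    m = len(edges)
    total = 0.0
    for states in product([0, 1], repeat=m):
        parent = list(range(n_nodes))          # union-find over working edges
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for (u, v), up in zip(edges, states):
            if up:
                parent[find(u)] = find(v)
        if find(s) == find(t):
            k = sum(states)
            total += p**k * (1 - p)**(m - k)
    return total

# Triangle with terminals 0 and 1: R(p) = p + p^2 - p^3
print(two_terminal_reliability(3, [(0, 1), (1, 2), (0, 2)], 0, 1, 0.5))
```

    The enumeration is exponential in the number of edges, which is why structural decompositions such as the motif representation the paper develops matter for realistic networks.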

  3. Thin-film reliability and engineering overview

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1984-01-01

    The reliability and engineering technology base required for thin film solar energy conversions modules is discussed. The emphasis is on the integration of amorphous silicon cells into power modules. The effort is being coordinated with SERI's thin film cell research activities as part of DOE's Amorphous Silicon Program. Program concentration is on temperature humidity reliability research, glass breaking strength research, point defect system analysis, hot spot heating assessment, and electrical measurements technology.

  4. Reliability of composite vessels and proof testing.

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.; Knoell, A. C.

    1972-01-01

    The reliability of anisotropic structures is formulated with the aid of the yield and fracture criteria, on the basis of recent studies of composite rocket motor cases in a state of plane stress. The vessel reliability is estimated in terms of the safety factor, thus permitting a rational interpretation of the design safety factor in terms of the vessel reliability. The formulation is consistent with the finite element approach, and is being coded into a computer program for practical design evaluation.

  5. Hardware and software reliability estimation using simulations

    NASA Technical Reports Server (NTRS)

    Swern, Frederic L.

    1994-01-01

    The simulation technique is used to explore the validation of both hardware and software. It was concluded that simulation is a viable means for validating both hardware and software and associating a reliability number with each. This is useful in determining the overall probability of system failure of an embedded processor unit, and improving both the code and the hardware where necessary to meet reliability requirements. The methodologies were proved using some simple programs, and simple hardware models.

  6. Developing Confidence Limits For Reliability Of Software

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.

    1991-01-01

    Technique developed for estimating reliability of software by use of Moranda geometric de-eutrophication model. Pivotal method enables straightforward construction of exact bounds with associated degree of statistical confidence about reliability of software. Confidence limits thus derived provide precise means of assessing quality of software. Limits take into account number of bugs found while testing and effects of sampling variation associated with random order of discovering bugs.

  7. Gearbox Reliability Collaborative (GRC) Description and Loading

    SciTech Connect

    Oyague, F.

    2011-11-01

    This document describes simulated turbine load cases in accordance with the IEC 61400-1 Ed. 3 standard, which is representative of the typical wind turbine design process. The information presented herein is intended to provide a broad understanding of the gearbox reliability collaborative 750kW drivetrain and turbine configuration. In addition, fatigue and ultimate strength drivetrain loads resulting from simulations are presented. This information provides the basis for the analytical work of the gearbox reliability collaborative effort.

  8. System Reliability for LED-Based Products

    SciTech Connect

    Davis, J Lynn; Mills, Karmann; Lamvik, Michael; Yaga, Robert; Shepherd, Sarah D; Bittle, James; Baldasaro, Nick; Solano, Eric; Bobashev, Georgiy; Johnson, Cortina; Evans, Amy

    2014-04-07

    Results from accelerated life tests (ALT) on mass-produced commercially available 6” downlights are reported along with results from commercial LEDs. The luminaires capture many of the design features found in modern luminaires. In general, a systems perspective is required to understand the reliability of these devices since LED failure is rare. In contrast, components such as drivers, lenses, and reflectors are more likely to impact luminaire reliability than LEDs.

  9. Production Facility System Reliability Analysis Report

    SciTech Connect

    Dale, Crystal Buchanan; Klein, Steven Karl

    2015-10-06

    This document describes the reliability, maintainability, and availability (RMA) modeling of the Los Alamos National Laboratory (LANL) design for the Closed Loop Helium Cooling System (CLHCS) planned for the NorthStar accelerator-based 99Mo production facility. The current analysis incorporates a conceptual helium recovery system, beam diagnostics, and prototype control system into the reliability analysis. The results from the 1000 hr blower test are addressed.

  10. Multi-Disciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song

    1997-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines are investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  11. A Bayesian approach to reliability and confidence

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1989-01-01

    The historical evolution of NASA's interest in quantitative measures of reliability assessment is outlined. The introduction of some quantitative methodologies into the Vehicle Reliability Branch of the Safety, Reliability and Quality Assurance (SR and QA) Division at Johnson Space Center (JSC) was noted along with the development of the Extended Orbiter Duration--Weakest Link study which will utilize quantitative tools for a Bayesian statistical analysis. Extending the earlier work of NASA sponsor, Richard Heydorn, researchers were able to produce a consistent Bayesian estimate for the reliability of a component and hence by a simple extension for a system of components in some cases where the rate of failure is not constant but varies over time. Mechanical systems in general have this property since the reliability usually decreases markedly as the parts degrade over time. While they have been able to reduce the Bayesian estimator to a simple closed form for a large class of such systems, the form for the most general case needs to be attacked by the computer. Once a table is generated for this form, researchers will have a numerical form for the general solution. With this, the corresponding probability statements about the reliability of a system can be made in the most general setting. Note that the utilization of uniform Bayesian priors represents a worst case scenario in the sense that as researchers incorporate more expert opinion into the model, they will be able to improve the strength of the probability calculations.
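    The conjugate Beta-Binomial update is the simplest concrete instance of the Bayesian reliability estimate described here; the uniform prior below corresponds to the 'worst case' flat prior the abstract mentions, while the demonstration counts are invented for illustration.

```python
def beta_posterior(successes, trials, a=1.0, b=1.0):
    # Conjugate update: Beta(a, b) prior on component reliability,
    # Binomial demonstration data -> Beta(a + s, b + f) posterior.
    a_post = a + successes
    b_post = b + (trials - successes)
    mean = a_post / (a_post + b_post)
    return a_post, b_post, mean

# Uniform prior (a = b = 1), then 48 successes in 50 demonstrations:
a2, b2, mean = beta_posterior(48, 50)
print(a2, b2, mean)
```

    Incorporating expert opinion, as the abstract suggests, amounts to choosing a more informative prior (larger a and b), which sharpens the resulting probability statements; the uniform prior is the weakest starting point.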

  12. Reliability evaluation methodology for NASA applications

    NASA Technical Reports Server (NTRS)

    Taneja, Vidya S.

    1992-01-01

    Liquid rocket engine technology has been characterized by the development of complex systems containing a large number of subsystems, components, and parts. The trend toward even larger and more complex systems is continuing. Liquid rocket engineers have been focusing mainly on performance-driven designs to increase the payload delivery of a launch vehicle for a given mission. In other words, although the failure of a single inexpensive part or component may cause the failure of the system, reliability in general has not been considered as one of the system parameters like cost or performance. Until now, quantification of reliability has not been a consideration during system design and development in the liquid rocket industry. Engineers and managers have long been aware of the fact that the reliability of the system increases during development, but no serious attempts have been made to quantify reliability. As a result, a method to quantify reliability during design and development is needed. This includes application of probabilistic models which utilize both engineering analysis and test data. Classical methods require the use of operating data for reliability demonstration. In contrast, the method described in this paper is based on similarity, analysis, and testing combined with Bayesian statistical analysis.

  13. Reliability in perceptual analysis of voice quality.

    PubMed

    Bele, Irene Velsvik

    2005-12-01

    This study focuses on speaking voice quality in male teachers (n = 35) and male actors (n = 36), who represent untrained and trained voice users, because we wanted to investigate normal and supranormal voices. In this study, both substantial and methodologic aspects were considered. It includes a method for perceptual voice evaluation, and a basic issue was rater reliability. A listening group of 10 listeners, 7 experienced speech-language therapists, and 3 speech-language therapist students evaluated the voices by 15 vocal characteristics using VA scales. Two sets of voice signals were investigated: text reading (2 loudness levels) and sustained vowel (3 levels). The results indicated a high interrater reliability for most perceptual characteristics. Connected speech was evaluated more reliably, especially at the normal level, but both types of voice signals were evaluated reliably, although the reliability for connected speech was somewhat higher than for vowels. Experienced listeners tended to be more consistent in their ratings than did the student raters. Some vocal characteristics achieved acceptable reliability even with a smaller panel of listeners. The perceptual characteristics grouped in 4 factors reflected perceptual dimensions. PMID:16301102

  14. Reliability improvement of distribution systems using SSVR.

    PubMed

    Hosseini, Mehdi; Shayanfar, Heidar Ali; Fotuhi-Firuzabad, Mahmoud

    2009-01-01

    This paper presents a reliability assessment algorithm for distribution systems using a Static Series Voltage Regulator (SSVR). The algorithm also considers the effects of Distributed Generation (DG) units, alternative sources, system reconfiguration, load shedding, and load adding on distribution system reliability indices. Load points are classified into 8 types, with a separate restoration time considered for each class. Comparative studies are conducted to investigate the impacts of DG and alternative-source unavailability on distribution system reliability. For the reliability assessment, customer-oriented indices such as SAIFI, SAIDI, CAIDI, and ASUI, as well as load- and energy-oriented indices such as ENS and AENS, are evaluated. The effectiveness of the proposed algorithm is examined on two standard distribution systems consisting of 33 and 69 nodes. The best location of the SSVR in a distribution system is determined on the basis of the different reliability indices separately. Results show that the proposed algorithm is efficient for large-scale radial distribution systems and can accommodate the effects of fault isolation and load restoration. PMID:19006802
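    The indices evaluated in the paper have standard definitions (per IEEE Std 1366) computable from per-load-point failure rates, annual outage times, customer counts, and average loads. A sketch with illustrative field names and made-up data, not the paper's test systems:

```python
def reliability_indices(load_points):
    """Each load point: failure rate lam (1/yr), annual outage time u (h/yr),
    customers n, and average load la (kW)."""
    total_n = sum(p["n"] for p in load_points)
    saifi = sum(p["lam"] * p["n"] for p in load_points) / total_n  # int./cust./yr
    saidi = sum(p["u"] * p["n"] for p in load_points) / total_n    # h/cust./yr
    caidi = saidi / saifi                                          # h/interruption
    asui = saidi / 8760                                            # unavailability
    ens = sum(p["la"] * p["u"] for p in load_points)               # kWh/yr
    aens = ens / total_n                                           # kWh/cust./yr
    return saifi, saidi, caidi, asui, ens, aens

# Two illustrative load points.
points = [{"lam": 0.2, "u": 1.0, "n": 100, "la": 50},
          {"lam": 0.4, "u": 2.0, "n": 300, "la": 100}]
saifi, saidi, caidi, asui, ens, aens = reliability_indices(points)
print(saifi, saidi)     # 0.35 1.75
print(round(caidi, 3))  # 5.0
```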

  15. Improving Reliability of a Residency Interview Process

    PubMed Central

    Serres, Michelle L.; Gundrum, Todd E.

    2013-01-01

    Objective. To improve the reliability and discrimination of a pharmacy resident interview evaluation form, and thereby improve the reliability of the interview process. Methods. In phase 1 of the study, the authors used a Many-Facet Rasch Measurement model to optimize an existing evaluation form for reliability and discrimination. In phase 2, interviewer pairs used the modified evaluation form within 4 separate interview stations. In phase 3, 8 interviewers individually evaluated each candidate in one-on-one interviews. Results. In phase 1, the evaluation form had a reliability of 0.98 with a person separation of 6.56; the form reproducibly separated applicants into 6 distinct groups. Using that form in phases 2 and 3, the largest source of variation was the candidates, with content specificity the next largest. The phase 2 g-coefficient was 0.787, and the confirmatory phase 3 g-coefficient was 0.922. Process reliability improved with more stations despite fewer interviewers per station; the impact of content specificity was greatly reduced with more interview stations. Conclusion. A more reliable, discriminating evaluation form was developed to evaluate candidates during resident interviews, and a process was designed that reduced the impact of content specificity. PMID:24159209
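    The g-coefficients reported in phases 2 and 3 are generalizability coefficients: candidate (true-score) variance divided by observed-score variance, with error variance averaged across stations. A minimal sketch; the variance components below are illustrative values chosen so that 4 stations yield roughly the reported 0.787, not the study's actual G-study output:

```python
def g_coefficient(var_candidate, var_error, n_stations):
    """Generalizability coefficient: candidate variance over candidate
    variance plus error variance averaged over the number of stations."""
    return var_candidate / (var_candidate + var_error / n_stations)

# Illustrative variance components (hypothetical, not from the study).
print(round(g_coefficient(1.0, 1.08, 4), 3))  # 0.787
print(round(g_coefficient(1.0, 1.08, 8), 3))  # 0.881
```

    The second call shows why reliability improved with more stations: the station-linked error term shrinks as stations are added.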

  16. Test-retest reliability of cognitive EEG

    NASA Technical Reports Server (NTRS)

    McEvoy, L. K.; Smith, M. E.; Gevins, A.

    2000-01-01

    OBJECTIVE: Task-related EEG is sensitive to changes in cognitive state produced by increased task difficulty and by transient impairment. If task-related EEG has high test-retest reliability, it could be used as part of a clinical test to assess changes in cognitive function. The aim of this study was to determine the reliability of the EEG recorded during the performance of a working memory (WM) task and a psychomotor vigilance task (PVT). METHODS: EEG was recorded while subjects rested quietly and while they performed the tasks. Within-session (test-retest interval of approximately 1 h) and between-session (test-retest interval of approximately 7 days) reliability was calculated for four EEG components: frontal midline theta at Fz, posterior theta at Pz, and slow and fast alpha at Pz. RESULTS: Task-related EEG was highly reliable within and between sessions (r > 0.9 for all components in the WM task, and r > 0.8 for all components in the PVT). Resting EEG also showed high reliability, although the magnitude of the correlation was somewhat smaller than that of the task-related EEG (r > 0.7 for all 4 components). CONCLUSIONS: These results suggest that, under appropriate conditions, task-related EEG has sufficient retest reliability for use in assessing clinical changes in cognitive status.

  17. Multisensory calibration is independent of cue reliability

    PubMed Central

    Zaidel, Adam; Turner, Amanda H.; Angelaki, Dora E.

    2011-01-01

    Multisensory calibration is fundamental for proficient interaction within a changing environment. Initial studies suggested a visual-dominant mechanism. More recently, a cue-reliability based model, similar to optimal cue-integration, has been proposed. However, a more general, reliability-independent model of fixed-ratio adaptation (of which visual-dominance is a special case) has never been tested. Here, we studied behavior of both humans and monkeys performing a heading-discrimination task. Subjects were presented with either visual (optic-flow), vestibular (motion-platform) or combined (visual/vestibular) stimuli, and required to report whether self-motion was to the right/left of straight ahead. A systematic heading-discrepancy was introduced between the visual and vestibular cues, without external feedback. Cue-calibration was measured by the resulting sensory adaptation. Both visual and vestibular cues significantly adapted in the direction required to reduce cue-conflict. However, unlike multisensory cue-integration, cue-calibration was not reliability-based. Rather, a model of fixed-ratio adaptation best described the data, whereby vestibular adaptation was greater than visual adaptation, irrespective of relative cue-reliability. The average ratio of vestibular to visual adaptation was 1.75 and 2.30 for the human and monkey data, respectively. Furthermore, only by modeling fixed-ratio adaptation (using the ratio extracted from the data) were we able to account for reliability-based cue-integration during the adaptation process. The finding that cue-calibration does not depend on cue-reliability is consistent with the notion that it follows an underlying estimate of cue-accuracy. Cue-accuracy is generally independent of cue-reliability and its estimate may change with a much slower time-constant. Thus, greater vestibular vs. visual (fixed-ratio) adaptation suggests lower vestibular vs. visual cue-accuracy. PMID:21957256
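    The fixed-ratio model described above can be sketched as an iterative update in which both cues shift to close the conflict, with the vestibular estimate taking a fixed share of every step regardless of cue reliability. The learning rate and step count below are arbitrary choices; the 1.75 default echoes the human ratio reported in the abstract:

```python
def fixed_ratio_calibration(vest, vis, ratio=1.75, rate=0.05, steps=200):
    """Shift both heading estimates toward each other; the vestibular
    estimate adapts `ratio` times as much as the visual one per step,
    independent of the relative reliability of the two cues."""
    for _ in range(steps):
        step = rate * (vis - vest)          # remaining cue conflict
        vest += step * ratio / (1 + ratio)  # vestibular takes the larger share
        vis -= step * 1.0 / (1 + ratio)
    return vest, vis

# A 10 degree discrepancy: both estimates converge near 6.36 degrees,
# i.e., vestibular moves 1.75 times as far as visual.
v_vest, v_vis = fixed_ratio_calibration(0.0, 10.0)
print(round(v_vest, 2), round(v_vis, 2))
```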

  18. Multiphase reliability analysis of complex systems

    NASA Astrophysics Data System (ADS)

    Azam, Mohammad S.; Tu, Fang; Pattipati, Krishna R.

    2003-08-01

    Modern industrial systems assume different configurations to accomplish multiple objectives during different phases of operation, and the component parameters may also vary from one phase to the next. Consequently, reliability evaluation of complex multi-phased systems is a vital and challenging issue. Maximization of mission reliability of a multi-phase system via optimal asset selection is another key demand; incorporation of optimization issues adds to the complexities of reliability evaluation processes. Introduction of components having self-diagnostics and self-recovery capabilities, along with increased complexity and phase-dependent configuration variations in network architectures, requires new approaches for reliability evaluation. This paper considers the problem of evaluating the reliability of a complex multi-phased system with self-recovery/fault-protection options. The reliability analysis is based on a colored digraph (i.e., multi-functional) model that subsumes fault trees and digraphs as special cases. These models enable system designers to decide on system architecture modifications and to determine the optimum levels of redundancy. A sum of disjoint products (SDP) approach is employed to compute system reliability. We also formulate the problem of optimal asset selection in a multi-phase system as one of maximizing the probability of mission success under random load profiles on components. Different methods (e.g., ordinal optimization, robust design, and nonparametric statistical testing) are explored to solve the problem. The resulting analytical expressions and the software tool are demonstrated on a generic programmable software-controlled switchgear, a data bus controller system, and a multi-phase mission involving helicopters.
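    The paper's sum-of-disjoint-products computation operates on the model's path sets. As a hedged illustration of the quantity being computed, the sketch below obtains the same answer by brute-force enumeration of component states, which is feasible only for small systems (SDP exists precisely to avoid this exponential enumeration):

```python
from itertools import product

def system_reliability(paths, p):
    """Exact system reliability from minimal path sets by enumerating all
    component up/down states; `p` maps component -> success probability."""
    comps = sorted({c for path in paths for c in path})
    total = 0.0
    for states in product((0, 1), repeat=len(comps)):
        up = {c for c, s in zip(comps, states) if s}
        prob = 1.0
        for c, s in zip(comps, states):
            prob *= p[c] if s else 1.0 - p[c]
        if any(set(path) <= up for path in paths):  # some path fully up
            total += prob
    return total

# Two redundant series branches, each with two 0.9-reliable components:
# R = 1 - (1 - 0.9 * 0.9)**2 = 0.9639
paths = [(1, 2), (3, 4)]
p = {c: 0.9 for c in (1, 2, 3, 4)}
print(round(system_reliability(paths, p), 4))  # 0.9639
```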

  19. Bioharness™ Multivariable Monitoring Device: Part II: Reliability

    PubMed Central

    Johnstone, James A.; Ford, Paul A.; Hughes, Gerwyn; Watson, Tim; Garrett, Andrew T.

    2012-01-01

    The Bioharness™ monitoring system may provide physiological information on human performance, but the reliability of these data is fundamental for confidence in the equipment being used. The objective of this study was to assess the reliability of each of the 5 Bioharness™ variables using a treadmill-based protocol. 10 healthy males participated. A between- and within-subject design to assess the reliability of heart rate (HR), breathing frequency (BF), accelerometry (ACC), and infra-red skin temperature (ST) was completed via a repeated, discontinuous, incremental treadmill protocol. Posture (P) was assessed by a tilt table moved through 160°. Between-subject data reported a low coefficient of variation (CV) and strong correlations (r) for ACC and P (CV < 7.6; r = 0.99, p < 0.01). In contrast, HR and BF (CV ~19.4; r ~0.70, p < 0.01) and ST (CV 3.7; r = 0.61, p < 0.01) presented more variable data. Intra- and inter-device data presented strong relationships (r > 0.89, p < 0.01) and low CV (<10.1) for HR, ACC, P, and ST. BF produced weaker relationships (r < 0.72) and higher CV (<17.4). In comparison to the other variables, BF consistently presented lower reliability. Overall, the results suggest that the Bioharness™ is a reliable multivariable monitoring device during laboratory testing within the limits presented. Key points: Heart rate and breathing frequency data increased in variance at higher velocities (i.e., ≥ 10 km.h-1). In comparison to the between-subject testing, the intra- and inter-device testing presented good reliability, suggesting that the placement or position of the device relative to the performer could be important for data collection. Understanding a device's variability in measurement is important before it can be used within an exercise testing or monitoring setting. PMID:24149347
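    The coefficient of variation used throughout the study is simply the sample standard deviation expressed as a percentage of the mean. A minimal sketch with hypothetical repeated heart-rate readings, not the study's data:

```python
def cv_percent(values):
    """Coefficient of variation (%): 100 * sample SD / mean."""
    n = len(values)
    m = sum(values) / n
    sd = (sum((v - m) ** 2 for v in values) / (n - 1)) ** 0.5
    return 100 * sd / m

hr_trials = [142, 146, 150, 145]  # hypothetical repeated HR readings (bpm)
print(round(cv_percent(hr_trials), 1))  # 2.3
```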

  20. Design for reliability: NASA reliability preferred practices for design and test

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R.

    1994-01-01

    This tutorial summarizes reliability experience from both NASA and industry and reflects engineering practices that support current and future civil space programs. These practices were collected from various NASA field centers and were reviewed by a committee of senior technical representatives from the participating centers (members are listed at the end). The material for this tutorial was taken from the publication issued by the NASA Reliability and Maintainability Steering Committee (NASA Reliability Preferred Practices for Design and Test. NASA TM-4322, 1991). Reliability must be an integral part of the systems engineering process. Although both disciplines must be weighed equally with other technical and programmatic demands, the application of sound reliability principles will be the key to the effectiveness and affordability of America's space program. Our space programs have shown that reliability efforts must focus on the design characteristics that affect the frequency of failure. Herein, we emphasize that these identified design characteristics must be controlled by applying conservative engineering principles.