Sample records for multicasting stable throughput

  1. Maximization Network Throughput Based on Improved Genetic Algorithm and Network Coding for Optical Multicast Networks

    NASA Astrophysics Data System (ADS)

    Wei, Chengying; Xiong, Cuilian; Liu, Huanlin

    2017-12-01

    Maximal multicast stream algorithms based on network coding (NC) can improve throughput in wavelength-division multiplexing (WDM) networks, yet the throughput they achieve remains far below the network's theoretical maximum, and existing multicast stream algorithms do not determine the stream distribution pattern and the routing at the same time. In this paper, an improved genetic algorithm is proposed to maximize optical multicast throughput with NC and to determine the multicast stream distribution through a hybrid chromosome construction for multicast with a single source and multiple destinations. The proposed hybrid chromosomes combine binary chromosomes, which represent the optical multicast routing, with integer chromosomes, which indicate the multicast stream distribution. A fitness function is designed to guarantee that each destination can receive the maximum number of decodable multicast streams. Simulation results show that the proposed method is far superior to typical NC-based maximal multicast stream algorithms in terms of network throughput in WDM networks.
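
    To make the encoding concrete, here is a minimal, hypothetical sketch of a hybrid chromosome (a binary routing part plus an integer stream-distribution part) evolved with a toy fitness function; the gene sizes, the surrogate fitness, and all names are illustrative assumptions, not the authors' implementation.

      import random

      # Hybrid chromosome: binary genes select candidate links (routing),
      # integer genes give the number of streams requested per destination.
      N_LINKS = 10          # candidate links (assumed)
      N_DESTS = 3           # multicast destinations (assumed)
      MAX_STREAMS = 4       # upper bound on streams per destination (assumed)

      def random_chromosome():
          routing = [random.randint(0, 1) for _ in range(N_LINKS)]            # binary part
          streams = [random.randint(1, MAX_STREAMS) for _ in range(N_DESTS)]  # integer part
          return routing, streams

      def fitness(chromosome):
          # Toy surrogate: a destination can decode at most as many streams as
          # the selected links allow; reward the total number of decodable streams.
          routing, streams = chromosome
          capacity = sum(routing)
          return sum(min(s, capacity) for s in streams)

      def crossover(a, b):
          (ra, sa), (rb, sb) = a, b
          cut_r = random.randrange(1, N_LINKS)
          cut_s = random.randrange(1, N_DESTS)
          return ra[:cut_r] + rb[cut_r:], sa[:cut_s] + sb[cut_s:]

      def evolve(pop_size=30, generations=50):
          population = [random_chromosome() for _ in range(pop_size)]
          for _ in range(generations):
              population.sort(key=fitness, reverse=True)
              parents = population[: pop_size // 2]
              children = [crossover(random.choice(parents), random.choice(parents))
                          for _ in range(pop_size - len(parents))]
              population = parents + children
          return max(population, key=fitness)

      print("best fitness:", fitness(evolve()))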

  2. Minimum Interference Channel Assignment Algorithm for Multicast in a Wireless Mesh Network.

    PubMed

    Choi, Sangil; Park, Jong Hyuk

    2016-12-02

    Wireless mesh networks (WMNs) have been considered as one of the key technologies for the configuration of wireless machines since they emerged. In a WMN, wireless routers provide multi-hop wireless connectivity between hosts in the network and also allow them to access the Internet via gateway devices. Wireless routers are typically equipped with multiple radios operating on different channels to increase network throughput. Multicast is a form of communication that delivers data from a source to a set of destinations simultaneously. It is used in a number of applications, such as distributed games, distance education, and video conferencing. In this study, we address a channel assignment problem for multicast in multi-radio multi-channel WMNs. In a multi-radio multi-channel WMN, two nearby nodes will interfere with each other and cause a throughput decrease when they transmit on the same channel. Thus, an important goal for multicast channel assignment is to reduce the interference among networked devices. We have developed a minimum interference channel assignment (MICA) algorithm for multicast that accurately models the interference relationship between pairs of multicast tree nodes using the concept of the interference factor and assigns channels to tree nodes to minimize interference within the multicast tree. Simulation results show that MICA achieves higher throughput and lower end-to-end packet delay compared with an existing channel assignment algorithm named multi-channel multicast (MCM). In addition, MICA achieves much lower throughput variation among the destination nodes than MCM.
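
    The following short sketch illustrates the flavor of interference-factor-based channel assignment; the interference values, the greedy visiting order, and the small example are assumptions for illustration, not the published MICA algorithm.

      CHANNELS = [1, 6, 11]

      # Pairwise interference factors between multicast-tree nodes within
      # carrier-sense range (assumed example values).
      interference_factor = {
          ("a", "b"): 1.0,
          ("b", "c"): 0.7,
          ("a", "c"): 0.4,
          ("c", "d"): 1.0,
      }

      def factor(u, v):
          return interference_factor.get((u, v)) or interference_factor.get((v, u), 0.0)

      def assign_channels(tree_nodes):
          """Greedily give each node the channel that least interferes with nodes already assigned."""
          assignment = {}
          for node in tree_nodes:                    # e.g., visited top-down from the source
              best_channel, best_cost = None, float("inf")
              for ch in CHANNELS:
                  cost = sum(factor(node, other)
                             for other, och in assignment.items() if och == ch)
                  if cost < best_cost:
                      best_channel, best_cost = ch, cost
              assignment[node] = best_channel
          return assignment

      print(assign_channels(["a", "b", "c", "d"]))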

  3. Minimum Interference Channel Assignment Algorithm for Multicast in a Wireless Mesh Network

    PubMed Central

    Choi, Sangil; Park, Jong Hyuk

    2016-01-01

    Wireless mesh networks (WMNs) have been considered as one of the key technologies for the configuration of wireless machines since they emerged. In a WMN, wireless routers provide multi-hop wireless connectivity between hosts in the network and also allow them to access the Internet via gateway devices. Wireless routers are typically equipped with multiple radios operating on different channels to increase network throughput. Multicast is a form of communication that delivers data from a source to a set of destinations simultaneously. It is used in a number of applications, such as distributed games, distance education, and video conferencing. In this study, we address a channel assignment problem for multicast in multi-radio multi-channel WMNs. In a multi-radio multi-channel WMN, two nearby nodes will interfere with each other and cause a throughput decrease when they transmit on the same channel. Thus, an important goal for multicast channel assignment is to reduce the interference among networked devices. We have developed a minimum interference channel assignment (MICA) algorithm for multicast that accurately models the interference relationship between pairs of multicast tree nodes using the concept of the interference factor and assigns channels to tree nodes to minimize interference within the multicast tree. Simulation results show that MICA achieves higher throughput and lower end-to-end packet delay compared with an existing channel assignment algorithm named multi-channel multicast (MCM). In addition, MICA achieves much lower throughput variation among the destination nodes than MCM. PMID:27918438

  4. Optical multicast system for data center networks.

    PubMed

    Samadi, Payman; Gupta, Varun; Xu, Junjie; Wang, Howard; Zussman, Gil; Bergman, Keren

    2015-08-24

    We present the design and experimental evaluation of an Optical Multicast System for Data Center Networks, a hardware-software system architecture that uniquely integrates passive optical splitters in a hybrid network architecture for faster and simpler delivery of multicast traffic flows. An application-driven control plane manages the integrated optical and electronic switched traffic routing in the data plane layer. The control plane includes a resource allocation algorithm to optimally assign optical splitters to the flows. The hardware architecture is built on a hybrid network with both Electronic Packet Switching (EPS) and Optical Circuit Switching (OCS) networks to aggregate Top-of-Rack switches. The OCS also serves as the connectivity substrate that attaches the splitters to the optical network. The optical multicast system implementation requires only commodity optical components. We built a prototype and developed a simulation environment to evaluate the performance of the system for bulk multicasting. Experimental and numerical results show simultaneous delivery of multicast flows to all receivers with steady throughput. Compared to IP multicast, its electronic counterpart, optical multicast operates with less protocol complexity and reduced energy consumption. Compared to peer-to-peer multicast methods, it achieves at minimum an order of magnitude higher throughput for flows under 250 MB with significantly less connection overhead. Furthermore, for delivering 20 TB of data containing only 15% multicast flows, it reduces the total delivery energy consumption by 50% and improves latency by 55% compared to a data center with a sole non-blocking EPS network.
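
    As a rough illustration of the kind of decision such a resource allocator makes, the sketch below greedily assigns a limited pool of passive splitters to the multicast flows that would otherwise load the packet-switched network the most; the ranking rule and the flow data are assumptions, not the paper's algorithm.

      def allocate_splitters(flows, n_splitters):
          """flows: list of (flow_id, size_bytes, n_receivers); returns (optical, electronic)."""
          # Rank flows by the electronic traffic they would otherwise generate: size * receivers.
          ranked = sorted(flows, key=lambda f: f[1] * f[2], reverse=True)
          return ranked[:n_splitters], ranked[n_splitters:]

      flows = [("f1", 500e6, 20), ("f2", 50e6, 4), ("f3", 2e9, 8)]
      optical, electronic = allocate_splitters(flows, n_splitters=2)
      print("optical:", [f[0] for f in optical], "| electronic:", [f[0] for f in electronic])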

  5. Point-to-Point Multicast Communications Protocol

    NASA Technical Reports Server (NTRS)

    Byrd, Gregory T.; Nakano, Russell; Delagi, Bruce A.

    1987-01-01

    This paper describes a protocol to support point-to-point interprocessor communications with multicast. Dynamic, cut-through routing with local flow control is used to provide a high-throughput, low-latency communications path between processors. In addition, multicast transmissions are available, in which copies of a packet are sent to multiple destinations using common resources as much as possible. Special packet terminators and selective buffering are introduced to avoid deadlock during multicasts. A simulated implementation of the protocol is also described.

  6. A high performance totally ordered multicast protocol

    NASA Technical Reports Server (NTRS)

    Montgomery, Todd; Whetten, Brian; Kaplan, Simon

    1995-01-01

    This paper presents the Reliable Multicast Protocol (RMP). RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service such as IP Multicasting. RMP is fully and symmetrically distributed so that no site bears an undue portion of the communication load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority resilient, and totally resilient atomic delivery. These QoS guarantees are selectable on a per packet basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, an implicit naming service, mutually exclusive handlers for messages, and mutually exclusive locks. It has commonly been held that a large performance penalty must be paid in order to implement total ordering -- RMP discounts this. On SparcStation 10's on a 1250 KB/sec Ethernet, RMP provides totally ordered packet delivery to one destination at 842 KB/sec throughput and with 3.1 ms packet latency. The performance stays roughly constant independent of the number of destinations. For two or more destinations on a LAN, RMP provides higher throughput than any protocol that does not use multicast or broadcast.
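
    To illustrate only the basic idea of totally ordered delivery over an unordered multicast (RMP's actual distributed, NACK-based machinery is considerably more involved), here is a minimal sketch in which a single sequencer stamps every message with a global sequence number and every receiver delivers strictly in that order.

      class Sequencer:
          """Assigns a single global sequence number to every multicast message."""
          def __init__(self):
              self.next_seq = 0
          def order(self, msg):
              seq = self.next_seq
              self.next_seq += 1
              return seq, msg

      class Receiver:
          """Delivers messages strictly in global sequence order, buffering any gaps."""
          def __init__(self):
              self.expected = 0
              self.pending = {}
              self.delivered = []
          def receive(self, seq, msg):
              self.pending[seq] = msg
              while self.expected in self.pending:          # deliver the in-order prefix
                  self.delivered.append(self.pending.pop(self.expected))
                  self.expected += 1

      seqr = Sequencer()
      rx = Receiver()
      ordered = [seqr.order(m) for m in ["a", "b", "c"]]
      for seq, msg in reversed(ordered):                    # packets arrive out of order
          rx.receive(seq, msg)
      print(rx.delivered)   # ['a', 'b', 'c'] -- the same total order at every receiver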

  7. Design, Implementation, and Verification of the Reliable Multicast Protocol. Thesis

    NASA Technical Reports Server (NTRS)

    Montgomery, Todd L.

    1995-01-01

    This document describes the Reliable Multicast Protocol (RMP) design, first implementation, and formal verification. RMP provides a totally ordered, reliable, atomic multicast service on top of an unreliable multicast datagram service. RMP is fully and symmetrically distributed so that no site bears an undue portion of the communications load. RMP provides a wide range of guarantees, from unreliable delivery to totally ordered delivery, to K-resilient, majority resilient, and totally resilient atomic delivery. These guarantees are selectable on a per message basis. RMP provides many communication options, including virtual synchrony, a publisher/subscriber model of message delivery, a client/server model of delivery, mutually exclusive handlers for messages, and mutually exclusive locks. It has been commonly believed that total ordering of messages can only be achieved at great performance expense. RMP discounts this. The first implementation of RMP has been shown to provide high throughput performance on Local Area Networks (LANs). For two or more destinations on a single LAN, RMP provides higher throughput than any other protocol that does not use multicast or broadcast technology. The design, implementation, and verification activities of RMP have occurred concurrently. This has allowed the verification to maintain a high fidelity between the design model, the implementation model, and the verification model. The restrictions of implementation have influenced the design earlier than in normal sequential approaches. The protocol as a whole has matured more smoothly through the inclusion of several different perspectives in the product development.

  8. Digital multi-channel stabilization of four-mode phase-sensitive parametric multicasting.

    PubMed

    Liu, Lan; Tong, Zhi; Wiberg, Andreas O J; Kuo, Bill P P; Myslivets, Evgeny; Alic, Nikola; Radic, Stojan

    2014-07-28

    A stable four-mode phase-sensitive (4MPS) process was investigated as a means to enhance the conversion efficiency (CE) and signal-to-noise ratio (SNR) of two-pump-driven parametric multicasting. The instability of a multi-beam phase-sensitive (PS) device, which inherently behaves as an interferometer whose output is subject to ambient-induced fluctuations, was addressed theoretically and experimentally. A new stabilization technique that controls the phases of the three input waves of the 4MPS multicaster and maximizes the CE was developed and described. The stabilization relies on a digital phase-locked loop (DPLL) specifically developed to control the pump phases and guarantee stable 4MPS operation that is independent of environmental fluctuations. The technique also controls a single (signal) input phase to optimize the PS-induced improvement of the CE and SNR. The new, continuous-operation DPLL has allowed for fully stabilized PS parametric broadband multicasting, demonstrating a CE improvement in excess of 10 dB over 20 signal copies.

  9. An Economic Case for End System Multicast

    NASA Astrophysics Data System (ADS)

    Analoui, Morteza; Rezvani, Mohammad Hossein

    This paper presents a non-strategic model for end-system multicast networks based on the concept of a replica exchange economy. We believe that microeconomics is a good candidate for investigating the problem of the selfishness of the end-users (peers) in order to maximize the aggregate throughput. In this solution concept, the decisions that a peer might make do not affect the actions of the other peers at all. The proposed mechanism tunes the price of the service in such a way that general equilibrium holds.

  10. A Low-Complexity Subgroup Formation with QoS-Aware for Enhancing Multicast Services in LTE Networks

    NASA Astrophysics Data System (ADS)

    Algharem, M.; Omar, M. H.; Rahmat, R. F.; Budiarto, R.

    2018-03-01

    The high demand for multimedia services in Long Term Evolution (LTE) and beyond networks forces network operators to find a solution that can handle the huge traffic. Subgroup formation techniques have been introduced to overcome the limitations of the Conventional Multicast Scheme (CMS) by splitting the multicast users into several subgroups based on the users' channel quality. However, finding the best subgroup configuration with low complexity still needs further investigation. In this paper, efficient and simple subgroup formation mechanisms are proposed. The proposed mechanisms take the transmitter MAC queue into account. The effectiveness of the proposed mechanisms is evaluated and compared with CMS in terms of throughput, fairness, delay, and Block Error Rate (BLER).
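
    To see why subgrouping helps, the toy computation below compares CMS, which must serve the whole group at the rate of its worst user, with a split into rate-based subgroups; the per-user rates and the thresholds are illustrative assumptions, not the proposed mechanism.

      users_rate = [2.0, 2.5, 6.0, 7.0, 12.0, 14.0]   # sustainable per-user rates in Mb/s (assumed)

      def cms_throughput(rates):
          return min(rates) * len(rates)               # everyone served at the worst user's rate

      def subgroup_throughput(rates, boundaries):
          """Split users at the given rate thresholds; each subgroup runs at its worst member's rate."""
          total = 0.0
          edges = [0.0] + boundaries + [float("inf")]
          for lo, hi in zip(edges, edges[1:]):
              group = [r for r in rates if lo <= r < hi]
              if group:
                  total += min(group) * len(group)
          return total

      print("CMS aggregate throughput:", cms_throughput(users_rate))
      print("subgrouped aggregate throughput:", subgroup_throughput(users_rate, [5.0, 10.0]))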

  11. An FEC Adaptive Multicast MAC Protocol for Providing Reliability in WLANs

    NASA Astrophysics Data System (ADS)

    Basalamah, Anas; Sato, Takuro

    For wireless multicast applications like multimedia conferencing, voice over IP, and video/audio streaming, reliable transmission of packets within a short delivery delay is needed. Moreover, reliability is crucial to the performance of error-intolerant applications like file transfer, distributed computing, chat, and whiteboard sharing. Forward Error Correction (FEC) is frequently used in wireless multicast to enhance Packet Error Rate (PER) performance, but it cannot assure full reliability unless coupled with Automatic Repeat Request, forming what is known as Hybrid ARQ. While reliable FEC can be deployed at different levels of the protocol stack, it cannot be deployed on the MAC layer of the unreliable IEEE 802.11 WLAN due to its inability to exchange ACKs with multiple recipients. In this paper, we propose a multicast MAC protocol that enhances WLAN reliability by using adaptive FEC and study its performance through mathematical analysis and simulation. Our results show that our protocol can deliver high reliability and throughput performance.
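
    One common way to adapt FEC to the observed loss level is to pick, per block, the smallest number of repair packets that keeps the residual block-failure probability under a target. The sketch below illustrates that calculation under independent losses; the block size, target, and loss rates are assumptions, not values from the paper.

      import math

      def block_failure_prob(k, r, per):
          """Probability that more than r of the k + r packets are lost (i.i.d. losses at rate per)."""
          n = k + r
          return sum(math.comb(n, lost) * per**lost * (1 - per)**(n - lost)
                     for lost in range(r + 1, n + 1))

      def choose_redundancy(k, per, target=1e-3, r_max=32):
          """Smallest number of repair packets keeping the block-failure probability below target."""
          for r in range(r_max + 1):
              if block_failure_prob(k, r, per) <= target:
                  return r
          return r_max

      K = 16                                   # data packets per FEC block (assumed)
      for per in (0.01, 0.05, 0.10):
          print(f"PER={per:.2f} -> repair packets per {K}-packet block:", choose_redundancy(K, per))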

  12. Mobility based multicast routing in wireless mesh networks

    NASA Astrophysics Data System (ADS)

    Jain, Sanjeev; Tripathi, Vijay S.; Tiwari, Sudarshan

    2013-01-01

    There exist two fundamental approaches to multicast routing, namely minimum cost trees and shortest path trees. A minimum cost tree (MCT) is one that connects the receivers and sources using a minimum number of transmissions (MNTs); the MNT approach is generally used for energy-constrained sensor and mobile ad hoc networks. In this paper we consider node mobility and present a simulation-based comparison of shortest path trees (SPTs), minimum Steiner trees (MSTs), and minimum-number-of-transmissions trees in wireless mesh networks, using performance metrics such as end-to-end delay, average jitter, throughput, packet delivery ratio, and average unicast packet delivery ratio. We have also evaluated multicast performance in small and large wireless mesh networks. For multicast performance in small networks, we found that when the traffic load is moderate or high, the SPTs outperform the MSTs and MNTs in all cases. The SPTs have the lowest end-to-end delay and average jitter in almost all cases. For multicast performance in large networks, we observed that the MSTs provide the minimum total edge cost and the minimum number of transmissions. We also found one drawback of SPTs: when the group size is large and the multicast sending rate is high, SPTs cause more packet losses to other flows than MCTs.

  13. Reliable multicast protocol specifications flow control and NACK policy

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.; Whetten, Brian

    1995-01-01

    This appendix presents the flow and congestion control schemes recommended for RMP and a NACK policy based on the whiteboard tool. Because RMP uses a primarily NACK-based error detection scheme, there is no direct feedback path through which receivers can signal losses due to low buffer space or congestion. Reliable multicast protocols also suffer from the fact that the throughput for a multicast group must be divided among the members of the group. This division is usually very dynamic in nature and therefore does not lend itself well to a priori determination. These facts have led the flow and congestion control schemes of RMP to be made completely orthogonal to the protocol specification. This allows several differing schemes to be used in different environments to produce the best results. As a default, a modified sliding window scheme based on previous algorithms is suggested and described below.
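
    For reference, here is a minimal sender-side sliding-window skeleton of the kind such a default scheme builds on; the window size, sequencing, and NACK hook are illustrative only, and the appendix's actual modified scheme differs.

      class SlidingWindowSender:
          def __init__(self, window_size):
              self.window = window_size
              self.base = 0          # oldest unacknowledged sequence number
              self.next_seq = 0      # next sequence number to send

          def can_send(self):
              return self.next_seq < self.base + self.window

          def send(self):
              assert self.can_send()
              seq = self.next_seq
              self.next_seq += 1
              return seq                                    # packet with this sequence number goes out

          def on_ack(self, acked_up_to):
              self.base = max(self.base, acked_up_to + 1)   # slide the window forward

          def on_nack(self, seq):
              return seq                                    # retransmit the NACKed packet

      s = SlidingWindowSender(window_size=4)
      sent = [s.send() for _ in range(4)]                   # window now full
      print(s.can_send())                                   # False until an ACK slides the window
      s.on_ack(1)
      print(s.can_send())                                   # True again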

  14. Fixed-rate layered multicast congestion control

    NASA Astrophysics Data System (ADS)

    Bing, Zhang; Bing, Yuan; Zengji, Liu

    2006-10-01

    A new fixed-rate layered multicast congestion control algorithm called FLMCC is proposed. The sender of a multicast session transmits data packets at a fixed rate on each layer, while receivers each obtain different throughput by cumulatively subscribing to a different number of layers based on their expected rates. In order to provide TCP-friendliness and estimate the expected rate accurately, a window-based mechanism implemented at the receivers is presented. To achieve this, each receiver maintains a congestion window, adjusts it based on the GAIMD algorithm, and calculates an expected rate from the congestion window. To measure RTT, a new method is presented that combines an accurate measurement with a rough estimation. A feedback suppression scheme based on a random timer mechanism is used to avoid feedback implosion during the accurate measurement. The protocol is simple to implement. Simulations indicate that FLMCC shows good TCP-friendliness, responsiveness, and intra-protocol fairness, and provides high link utilization.
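
    The receiver-side idea can be sketched as follows: keep a GAIMD-style congestion window, turn it into an expected rate, and cumulatively join as many fixed-rate layers as that rate supports. The increase/decrease constants, packet size, layer rates, and RTT below are illustrative assumptions, not FLMCC's parameters.

      ALPHA = 1.0                                  # additive increase per round trip, in packets (assumed)
      BETA = 0.8                                   # multiplicative decrease factor on loss (assumed)
      PACKET_SIZE = 1000 * 8                       # bits per packet (assumed)
      LAYER_RATES = [64e3, 128e3, 256e3, 512e3]    # fixed per-layer rates in bit/s, joined cumulatively (assumed)

      class LayeredReceiver:
          def __init__(self, rtt):
              self.cwnd = 1.0
              self.rtt = rtt

          def on_rtt_without_loss(self):
              self.cwnd += ALPHA

          def on_loss(self):
              self.cwnd = max(1.0, self.cwnd * BETA)

          def expected_rate(self):
              return self.cwnd * PACKET_SIZE / self.rtt    # bits per second

          def subscribed_layers(self):
              rate, total, layers = self.expected_rate(), 0.0, 0
              for r in LAYER_RATES:                        # cumulative subscription, lowest layer first
                  if total + r <= rate:
                      total += r
                      layers += 1
              return layers

      rx = LayeredReceiver(rtt=0.1)
      for _ in range(20):
          rx.on_rtt_without_loss()
      rx.on_loss()
      print("expected rate (kb/s):", rx.expected_rate() / 1e3, "| layers:", rx.subscribed_layers())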

  15. Apply network coding for H.264/SVC multicasting

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Kuo, C.-C. Jay

    2008-08-01

    In a packet erasure network environment, video streaming benefits from error control in two ways to achieve graceful degradation. The first approach is application-level (or link-level) forward error correction (FEC) to provide erasure protection. The second error control approach is error concealment at the decoder end to compensate for lost packets. A large amount of research work has been done in the above two areas. More recently, network coding (NC) techniques have been proposed for efficient data multicast over networks. It was shown in our previous work that multicast video streaming benefits from NC through its throughput improvement. In this work, an algebraic model is given to analyze the performance. By exploiting the linear combination of video packets along nodes in a network and the SVC video format, the system achieves path diversity automatically and enables efficient video delivery to heterogeneous receivers over packet erasure channels. The application of network coding can protect video packets against the erasure network environment. However, the rank deficiency problem of random linear network coding makes error concealment inefficient. It is shown by computer simulation that the proposed NC video multicast scheme enables heterogeneous receivers to receive according to their capacity constraints, but special design is needed to improve the video transmission performance when applying network coding.
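
    The sketch below shows the core of random linear network coding and the rank-deficiency issue mentioned above: coded packets are random combinations of source packets, and a receiver can decode only if its collected coefficient vectors have full rank. The GF(2) field, the 4-bit toy packets, and all names are illustrative assumptions; practical systems usually code over GF(2^8).

      import random

      def encode(packets):
          """Return one coded packet: (random GF(2) coefficient vector, XOR of the selected packets)."""
          coeffs = [random.randint(0, 1) for _ in packets]
          if not any(coeffs):
              coeffs[random.randrange(len(coeffs))] = 1      # avoid the useless all-zero combination
          payload = 0
          for c, p in zip(coeffs, packets):
              if c:
                  payload ^= p
          return coeffs, payload

      def rank_gf2(rows):
          """Rank over GF(2) of a list of 0/1 coefficient vectors (Gaussian elimination)."""
          vecs = [int("".join(map(str, r)), 2) for r in rows]
          rank = 0
          for col in reversed(range(len(rows[0]))):
              pivot = next((i for i, v in enumerate(vecs) if (v >> col) & 1), None)
              if pivot is None:
                  continue
              pv = vecs.pop(pivot)
              vecs = [v ^ pv if (v >> col) & 1 else v for v in vecs]
              rank += 1
          return rank

      source_packets = [0b1010, 0b0110, 0b1111]              # three toy 4-bit "packets"
      received = [encode(source_packets) for _ in range(3)]  # three coded packets reach a receiver
      r = rank_gf2([c for c, _ in received])
      print("coefficient rank", r, "of", len(source_packets),
            "->", "decodable" if r == len(source_packets) else "rank deficient")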

  16. Evaluation of multicast schemes in optical burst-switched networks: the case with dynamic sessions

    NASA Astrophysics Data System (ADS)

    Jeong, Myoungki; Qiao, Chunming; Xiong, Yijun; Vandenhoute, Marc

    2000-10-01

    In this paper, we evaluate the performance of several multicast schemes in optical burst-switched WDM networks, taking into account the overheads due to control packets and guard bands (GBs) of bursts on separate channels (wavelengths). A straightforward scheme is called Separate Multicasting (S-MCAST), where each source node constructs separate bursts for its multicast traffic (per multicast session) and its unicast traffic. To reduce the overhead due to GBs (and control packets), one may piggyback the multicast traffic in bursts containing unicast traffic using a scheme called Multiple Unicasting (M-UCAST). The third scheme is called Tree-Shared Multicasting (TS-MCAST), whereby multicast traffic belonging to multiple multicast sessions can be mixed together in a burst, which is delivered via a shared multicast tree. In [1], we evaluated several multicast schemes with static sessions at the flow level. In this paper, we perform a simple analysis of the multicast schemes and evaluate the performance of the three schemes, focusing on the case with dynamic sessions, in terms of link utilization, bandwidth consumption, blocking (loss) probability, goodput, and processing load.

  17. Multicasting in Wireless Communications (Ad-Hoc Networks): Comparison against a Tree-Based Approach

    NASA Astrophysics Data System (ADS)

    Rizos, G. E.; Vasiliadis, D. C.

    2007-12-01

    We examine on-demand multicasting in ad hoc networks. The Core Assisted Mesh Protocol (CAMP) is a well-known protocol for multicast routing in ad-hoc networks, generalizing the notion of core-based trees employed for internet multicasting into multicast meshes that have much richer connectivity than trees. On the other hand, wireless tree-based multicast routing protocols use much simpler structures for determining route paths, using only parent-child relationships. In this work, we compare the performance of the CAMP protocol against the performance of wireless tree-based multicast routing protocols, in terms of two important factors, namely packet delay and ratio of dropped packets.

  18. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    PubMed Central

    Lee, Chaewoo

    2014-01-01

    Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem as an integer linear program (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862
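
    As a toy stand-in for the ILP, the brute-force search below picks one MCS per SVC layer to maximize the total delivered rate across multicast users, under the usual rule that a user receives a layer only if its channel supports that layer's MCS. The rates, user channel qualities, equal airtime split, and layer-ordering constraints are all illustrative assumptions.

      from itertools import product

      MCS_RATE = {0: 1.0, 1: 2.0, 2: 4.0}      # relative rate of each MCS index (assumed)
      users_best_mcs = [0, 0, 1, 2, 2]         # highest MCS each multicast user can decode (assumed)
      N_LAYERS = 2                             # base layer + one enhancement layer (assumed)

      def total_utility(assignment):
          """assignment[l] = MCS index of layer l; each layer gets an equal share of airtime."""
          if assignment[0] > min(users_best_mcs):
              return -1.0                      # base layer must stay decodable by every user
          share = 1.0 / len(assignment)
          utility = 0.0
          for layer, mcs in enumerate(assignment):
              if layer > 0 and mcs < assignment[layer - 1]:
                  return -1.0                  # enhancement layers use an MCS at least as high as the base
              receivers = sum(1 for u in users_best_mcs if u >= mcs)
              utility += receivers * MCS_RATE[mcs] * share
          return utility

      best = max(product(MCS_RATE, repeat=N_LAYERS), key=total_utility)
      print("best MCS per layer:", best, "| utility:", total_utility(best))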

  19. VMCast: A VM-Assisted Stability Enhancing Solution for Tree-Based Overlay Multicast

    PubMed Central

    Gu, Weidong; Zhang, Xinchang; Gong, Bin; Zhang, Wei; Wang, Lu

    2015-01-01

    Tree-based overlay multicast is an effective group communication method for media streaming applications. However, a group member's departure causes all of its descendants to be disconnected from the multicast tree for some time, which results in poor performance. This problem is difficult to address because an overlay multicast tree is intrinsically unstable. In this paper, we propose a novel stability enhancing solution, VMCast, for tree-based overlay multicast. This solution uses two types of on-demand cloud virtual machines (VMs), i.e., multicast VMs (MVMs) and compensation VMs (CVMs). MVMs are used to disseminate the multicast data, whereas CVMs are used to offer streaming compensation. The VMs used in the same cloud datacenter constitute a VM cluster. Each VM cluster is responsible for a service domain (VMSD), and each group member belongs to a specific VMSD. The data source delivers the multicast data to MVMs through a reliable path, and MVMs further disseminate the data to group members along domain overlay multicast trees. This approach structurally improves the stability of the overlay multicast tree. We further utilize CVM-based streaming compensation to enhance the stability of the data distribution in the VMSDs. VMCast can be used as an extension to existing tree-based overlay multicast solutions, to provide better services for media streaming applications. We applied VMCast to two application instances (i.e., HMTP and HCcast). The results show that it can noticeably enhance the stability of the data distribution. PMID:26562152

  20. VMCast: A VM-Assisted Stability Enhancing Solution for Tree-Based Overlay Multicast.

    PubMed

    Gu, Weidong; Zhang, Xinchang; Gong, Bin; Zhang, Wei; Wang, Lu

    2015-01-01

    Tree-based overlay multicast is an effective group communication method for media streaming applications. However, a group member's departure causes all of its descendants to be disconnected from the multicast tree for some time, which results in poor performance. This problem is difficult to address because an overlay multicast tree is intrinsically unstable. In this paper, we propose a novel stability enhancing solution, VMCast, for tree-based overlay multicast. This solution uses two types of on-demand cloud virtual machines (VMs), i.e., multicast VMs (MVMs) and compensation VMs (CVMs). MVMs are used to disseminate the multicast data, whereas CVMs are used to offer streaming compensation. The VMs used in the same cloud datacenter constitute a VM cluster. Each VM cluster is responsible for a service domain (VMSD), and each group member belongs to a specific VMSD. The data source delivers the multicast data to MVMs through a reliable path, and MVMs further disseminate the data to group members along domain overlay multicast trees. This approach structurally improves the stability of the overlay multicast tree. We further utilize CVM-based streaming compensation to enhance the stability of the data distribution in the VMSDs. VMCast can be used as an extension to existing tree-based overlay multicast solutions, to provide better services for media streaming applications. We applied VMCast to two application instances (i.e., HMTP and HCcast). The results show that it can noticeably enhance the stability of the data distribution.

  1. Demonstration of flexible and reconfigurable WDM multicast scheme supporting downstream emergency multicast communication for WDM optical access network

    NASA Astrophysics Data System (ADS)

    Li, Ze; Zhang, Min; Wang, Danshi; Cui, Yue

    2017-09-01

    We propose a flexible and reconfigurable wavelength-division multiplexing (WDM) multicast scheme supporting downstream emergency multicast communication for WDM optical access networks (WDM-OANs) via a multicast module (MM) based on four-wave mixing (FWM) in a semiconductor optical amplifier. It serves as an emergency measure to handle bursty, large-bandwidth, real-time multicast services with fast service provisioning and high resource efficiency. It also acts as a physical backup in cases of big data migration or network disasters caused by laser or modulator failures. It provides convenient and reliable multicast service and emergency protection for the WDM-OAN without modifying the WDM-OAN structure. Strategies for placing the MM at the optical line terminal and at the remote node are discussed in order to apply the scheme to passive optical networks and active optical networks, respectively. Using the proposed scheme, we demonstrate a proof-of-concept experiment in which one-to-six/eight 10-Gbps nonreturn-to-zero differential phase-shift keying WDM multicasts under both strategies are successfully transmitted over 20.2 km of single-mode fiber. One-to-many reconfigurable WDM multicasts handling higher data rates and other modulation formats of multicast service are possible with the proposed scheme. It can be applied to different WDM access technologies, e.g., time-wavelength-division multiplexing OAN and coherent WDM-OAN, and upgraded smoothly.

  2. Degree-constrained multicast routing for multimedia communications

    NASA Astrophysics Data System (ADS)

    Wang, Yanlin; Sun, Yugeng; Li, Guidan

    2005-02-01

    Multicast services have been increasingly used by many multimedia applications. As one of the key techniques to support multimedia applications, rational and effective multicast routing algorithms are very important to network performance. When switch nodes in a network have different multicast capabilities, the multicast routing problem is modeled as the degree-constrained Steiner problem. We present two heuristic algorithms, named BMSTA and BSPTA, for the degree-constrained case in multimedia communications. Both algorithms generate degree-constrained multicast trees with bandwidth and end-to-end delay bounds. Simulations over random networks were carried out to compare the performance of the two proposed algorithms. Experimental results show that the proposed algorithms have advantages in traffic load balancing, which can avoid link blocking and enhance network performance efficiently. BMSTA is better than BSPTA at finding unsaturated links and/or unsaturated nodes when generating multicast trees. The performance of BMSTA is affected by the variation of the degree constraints.
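
    A minimal illustration of a degree-constrained tree heuristic (not BMSTA or BSPTA themselves, whose link-selection rules are more elaborate): grow the tree from the source and always attach the next node through its cheapest link to a tree node that still has spare fan-out degree. The graph, costs, and degree limits are assumed example values.

      # Edge costs of a small undirected graph and per-node fan-out (degree) limits (assumed).
      cost = {("s", "a"): 1, ("s", "b"): 2, ("a", "c"): 1, ("b", "c"): 2,
              ("a", "d"): 3, ("c", "d"): 1}
      degree_limit = {"s": 2, "a": 2, "b": 2, "c": 2, "d": 2}

      def c(u, v):
          return cost.get((u, v), cost.get((v, u), float("inf")))

      def build_tree(source, destinations, nodes):
          tree_nodes = {source}
          tree_edges = []
          degree = {n: 0 for n in nodes}
          remaining = set(destinations)
          while remaining:
              # Cheapest feasible attachment: (cost, tree node with spare degree, new node).
              candidates = [(c(u, v), u, v) for u in tree_nodes for v in nodes - tree_nodes
                            if degree[u] < degree_limit[u] and c(u, v) < float("inf")]
              if not candidates:
                  raise RuntimeError("no feasible degree-constrained attachment")
              w, u, v = min(candidates)
              tree_nodes.add(v)
              tree_edges.append((u, v, w))
              degree[u] += 1
              degree[v] += 1
              remaining.discard(v)
          return tree_edges

      print(build_tree("s", {"c", "d"}, {"s", "a", "b", "c", "d"}))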

  3. A Stateful Multicast Access Control Mechanism for Future Metro-Area-Networks.

    ERIC Educational Resources Information Center

    Sun, Wei-qiang; Li, Jin-sheng; Hong, Pei-lin

    2003-01-01

    Multicasting is a necessity for a broadband metro-area-network; however security problems exist with current multicast protocols. A stateful multicast access control mechanism, based on MAPE, is proposed. The architecture of MAPE is discussed, as well as the states maintained and messages exchanged. The scheme is flexible and scalable. (Author/AEF)

  4. A proposed group management scheme for XTP multicast

    NASA Technical Reports Server (NTRS)

    Dempsey, Bert J.; Weaver, Alfred C.

    1990-01-01

    The purpose of a group management scheme is to enable its associated transfer layer protocol to be responsive to user determined reliability requirements for multicasting. Group management (GM) must assist the client process in coordinating multicast group membership, allow the user to express the subset of the multicast group that a particular multicast distribution must reach in order to be successful (reliable), and provide the transfer layer protocol with the group membership information necessary to guarantee delivery to this subset. GM provides services and mechanisms that respond to the need of the client process or process level management protocols to coordinate, modify, and determine attributes of the multicast group, especially membership. XTP GM provides a link between process groups and their multicast groups by maintaining a group membership database that identifies members in a name space understood by the underlying transfer layer protocol. Other attributes of the multicast group useful to both the client process and the data transfer protocol may be stored in the database. Examples include the relative dispersion, most recent update, and default delivery parameters of a group.

  5. A novel WDM passive optical network architecture supporting two independent multicast data streams

    NASA Astrophysics Data System (ADS)

    Qiu, Yang; Chan, Chun-Kit

    2012-01-01

    We propose a novel scheme to perform optical multicast overlay of two independent multicast data streams on a wavelength-division-multiplexed (WDM) passive optical network. By controlling a sinusoidal clock signal and shifting the wavelength at the optical line terminal (OLT), the delivery of the two multicast data, being carried by the generated optical tones, can be independently and flexibly controlled. Simultaneous transmission of 10-Gb/s unicast downstream and upstream data as well as two independent 10-Gb/s multicast data was successfully demonstrated.

  6. A novel unbalanced multiple description coder for robust video transmission over ad hoc wireless networks

    NASA Astrophysics Data System (ADS)

    Huang, Feng; Sun, Lifeng; Zhong, Yuzhuo

    2006-01-01

    Robust transmission of live video over ad hoc wireless networks presents new challenges: high bandwidth requirements are coupled with delay constraints; even a single packet loss causes error propagation until a complete video frame is coded in the intra-mode; ad hoc wireless networks suffer from bursty packet losses that drastically degrade the viewing experience. Accordingly, we propose a novel UMD coder capable of quickly recovering from losses and ensuring continuous playout. It uses 'peg' frames to prevent error propagation in the High-Resolution (HR) description and improve the robustness of key frames. The Low-Resolution (LR) coder works independent of the HR one, but they can also help each other recover from losses. Like many UMD coders, our UMD coder is drift-free, disruption-tolerant and able to make good use of the asymmetric available bandwidths of multiple paths. The simulation results under different conditions show that the proposed UMD coder has the highest decoded quality and lowest probability of pause when compared with concurrent UMDC techniques. The coder also has a comparable decoded quality, lower startup delay and lower probability of pause than a state-of-the-art FEC-based scheme. To provide robustness for video multicast applications, we propose non-end-to-end UMDC-based video distribution over a multi-tree multicast network. The multiplicity of parents decorrelates losses and the non-end-to-end feature increases the throughput of UMDC video data. We deploy an application-level service of LR description reconstruction in some intermediate nodes of the LR multicast tree. The principle behind this is to reconstruct the disrupted LR frames by the correctly received HR frames. As a result, the viewing experience at the downstream nodes benefits from the protection reconstruction at the upstream nodes.

  7. Enabling end-user network monitoring via the multicast consolidated proxy monitor

    NASA Astrophysics Data System (ADS)

    Kanwar, Anshuman; Almeroth, Kevin C.; Bhattacharyya, Supratik; Davy, Matthew

    2001-07-01

    The debugging of problems in IP multicast networks relies heavily on an eclectic set of stand-alone tools. These tools traditionally neither provide a consistent interface nor generate readily interpretable results. We propose the "Multicast Consolidated Proxy Monitor" (MCPM), an integrated system for collecting, analyzing, and presenting multicast monitoring results to both the end user and the network operator at the user's Internet Service Provider (ISP). The MCPM accesses network state information not normally visible to end users and acts as a proxy for disseminating this information. Functionally, through this architecture, we aim to a) provide a view of the multicast network at varying levels of granularity, b) provide end users with a limited ability to query the multicast infrastructure in real time, and c) protect the infrastructure from an overwhelming amount of monitoring load through load control. Operationally, our scheme allows scaling to the ISP's dimensions, adaptability to new protocols (introduced as multicast evolves), threshold detection for crucial parameters, and an access-controlled, customizable interface design. Although the multicast scenario is used to illustrate the benefits of consolidated monitoring, the ultimate aim is to scale the scheme to unicast IP networks.

  8. Issues in designing transport layer multicast facilities

    NASA Technical Reports Server (NTRS)

    Dempsey, Bert J.; Weaver, Alfred C.

    1990-01-01

    Multicasting denotes a facility in a communications system for providing efficient delivery from a message's source to some well-defined set of locations using a single logical address. While modern network hardware supports multidestination delivery, first generation Transport Layer protocols (e.g., the DoD Transmission Control Protocol (TCP) (15) and ISO TP-4 (41)) did not anticipate the changes over the past decade in underlying network hardware, transmission speeds, and communication patterns that have enabled and driven the interest in reliable multicast. Much recent research has focused on integrating the underlying hardware multicast capability with the reliable services of Transport Layer protocols. Here, we explore the communication issues surrounding the design of such a reliable multicast mechanism. Approaches and solutions from the literature are discussed, and four experimental Transport Layer protocols that incorporate reliable multicast are examined.

  9. Application Layer Multicast

    NASA Astrophysics Data System (ADS)

    Allani, Mouna; Garbinato, Benoît; Pedone, Fernando

    An increasing number of Peer-to-Peer (P2P) Internet applications rely today on data dissemination as their cornerstone, e.g., audio or video streaming, multi-party games. These applications typically depend on some support for multicast communication, where peers interested in a given data stream can join a corresponding multicast group. As a consequence, the efficiency, scalability, and reliability guarantees of these applications are tightly coupled with those of the underlying multicast mechanism.

  10. A reliable multicast for XTP

    NASA Technical Reports Server (NTRS)

    Dempsey, Bert J.; Weaver, Alfred C.

    1990-01-01

    Multicast services needed for current distributed applications on LANs fall generally into one of three categories: datagram, semi-reliable, and reliable. Transport layer multicast datagrams represent unreliable service in which the transmitting context 'fires and forgets'. XTP executes these semantics when the MULTI and NOERR mode bits are both set. Distributing sensor data and other applications in which application-level error recovery strategies are appropriate benefit from the efficiency in multidestination delivery offered by datagram service. Semi-reliable service refers to multicasting in which the control algorithms of the transport layer -- error, flow, and rate control -- are used in transferring the multicast distribution to the set of receiving contexts, the multicast group. The multicast defined in XTP provides semi-reliable service. Since, under a semi-reliable service, joining a multicast group means listening on the group address and entails no coordination with other members, a semi-reliable facility can be used for communication between a client and a server group as well as for true peer-to-peer group communication. Resource location in a LAN is an important application domain. The term 'semi-reliable' refers to the fact that group membership changes go undetected. No attempt is made to assess the current membership of the group at any time -- before, during, or after -- the data transfer.

  11. Issues in providing a reliable multicast facility

    NASA Technical Reports Server (NTRS)

    Dempsey, Bert J.; Strayer, W. Timothy; Weaver, Alfred C.

    1990-01-01

    Issues involved in point-to-multipoint communication are presented and the literature for proposed solutions and approaches surveyed. Particular attention is focused on the ideas and implementations that align with the requirements of the environment of interest. The attributes of multicast receiver groups that might lead to useful classifications, what the functionality of a management scheme should be, and how the group management module can be implemented are examined. The services that multicasting facilities can offer are presented, followed by mechanisms within the communications protocol that implements these services. The metrics of interest when evaluating a reliable multicast facility are identified and applied to four transport layer protocols that incorporate reliable multicast.

  12. Simultaneous Wireless Power Transfer and Secure Multicasting in Cooperative Decode-and-Forward Relay Networks.

    PubMed

    Lee, Jong-Ho; Sohn, Illsoo; Kim, Yong-Hwa

    2017-05-16

    In this paper, we investigate simultaneous wireless power transfer and secure multicasting via cooperative decode-and-forward (DF) relays in the presence of multiple energy receivers and eavesdroppers. Two scenarios are considered under a total power budget: maximizing the minimum harvested energy among the energy receivers under a multicast secrecy rate constraint; and maximizing the multicast secrecy rate under a minimum harvested energy constraint. For both scenarios, we solve the transmit power allocation and relay beamformer design problems by using semidefinite relaxation and bisection technique. We present numerical results to analyze the energy harvesting and secure multicasting performances in cooperative DF relay networks.
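
    For the second scenario, the overall search structure can be illustrated with a generic bisection skeleton: the multicast secrecy rate is bisected, and each candidate rate is checked for feasibility. In the paper that check involves solving the relaxed semidefinite beamforming problem; the stand-in feasibility oracle and all numbers below are assumptions.

      def bisect_max_rate(feasible, lo=0.0, hi=10.0, tol=1e-3):
          """Largest rate in [lo, hi] with feasible(rate) True, assuming monotone feasibility."""
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              if feasible(mid):
                  lo = mid          # mid achievable -> search higher
              else:
                  hi = mid          # mid not achievable -> search lower
          return lo

      # Toy stand-in: pretend secrecy rates up to 3.7 bit/s/Hz are feasible under the
      # power budget and the minimum harvested-energy constraint.
      print(round(bisect_max_rate(lambda r: r <= 3.7), 3))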

  13. Simultaneous Wireless Power Transfer and Secure Multicasting in Cooperative Decode-and-Forward Relay Networks

    PubMed Central

    Lee, Jong-Ho; Sohn, Illsoo; Kim, Yong-Hwa

    2017-01-01

    In this paper, we investigate simultaneous wireless power transfer and secure multicasting via cooperative decode-and-forward (DF) relays in the presence of multiple energy receivers and eavesdroppers. Two scenarios are considered under a total power budget: maximizing the minimum harvested energy among the energy receivers under a multicast secrecy rate constraint; and maximizing the multicast secrecy rate under a minimum harvested energy constraint. For both scenarios, we solve the transmit power allocation and relay beamformer design problems by using semidefinite relaxation and bisection technique. We present numerical results to analyze the energy harvesting and secure multicasting performances in cooperative DF relay networks. PMID:28509841

  14. Multicast routing for wavelength-routed WDM networks with dynamic membership

    NASA Astrophysics Data System (ADS)

    Huang, Nen-Fu; Liu, Te-Lung; Wang, Yao-Tzung; Li, Bo

    2000-09-01

    Future broadband networks must support integrated services and offer flexible bandwidth usage. In our previous work, we explored an optical link control layer on top of the optical layer that enables the possibility of bandwidth-on-demand service directly over wavelength division multiplexed (WDM) networks. Today, more and more applications and services, such as video-conferencing software and Virtual LAN services, require multicast support from the underlying networks. Currently, it is difficult to provide wavelength multicast in optical switches without optical/electronic conversions, although such conversion incurs extra cost. In this paper, based on the proposed wavelength router architecture (equipped with ATM switches to offer O/E and E/O conversions when necessary), a dynamic multicast routing algorithm is proposed to furnish multicast services over WDM networks. The goal is to join a new group member to the multicast tree so that the cost, including the link cost and the optical/electronic conversion cost, is kept as low as possible. The effectiveness of the proposed wavelength router architecture, as well as of the dynamic multicast algorithm, is evaluated by simulation.

  15. Mobile Multicast in Hierarchical Proxy Mobile IPV6

    NASA Astrophysics Data System (ADS)

    Hafizah Mohd Aman, Azana; Hashim, Aisha Hassan A.; Mustafa, Amin; Abdullah, Khaizuran

    2013-12-01

    Mobile Internet Protocol Version 6 (MIPv6) environments have been developing very rapidly. Many challenges arise with the fast progress of MIPv6 technologies and their environment. Therefore the importance of improving the existing architecture and operations increases. One of the many challenges which need to be addressed is the need for performance improvement to support mobile multicast. Numerous approaches have been proposed to improve mobile multicast performance. These include Context Transfer Protocol (CXTP), Hierarchical Mobile IPv6 (HMIPv6), Fast Mobile IPv6 (FMIPv6) and Proxy Mobile IPv6 (PMIPv6). This document describes multicast context transfer in hierarchical proxy mobile IPv6 (H-PMIPv6) to provide better multicasting performance in the PMIPv6 domain.

  16. Efficient Group Coordination in Multicast Trees

    DTIC Science & Technology

    2001-01-01

    describe a novel protocol to coordinate multipoint groupwork within the IP-multicast framework. The protocol supports Internet-wide coordination for large... and highly-interactive groupwork, relying on the dissemination of coordination directives among group members across a shared end-to-end multicast
  17. WDM Network and Multicasting Protocol Strategies

    PubMed Central

    Zaim, Abdul Halim

    2014-01-01

    Optical technology gains extensive attention and ever increasing improvement because of the huge amount of network traffic caused by the growing number of Internet users and their rising demands. Wavelength division multiplexing (WDM) makes it easier to take advantage of optical networks, and WDM combined with optical burst switching (OBS) is among the best choices for constructing WDM networks with low delay and good data transparency. Furthermore, multicasting in WDM is an urgent solution for bandwidth-intensive applications. In this paper, a new multicasting protocol with OBS is proposed. The protocol depends on a leaf-initiated structure. The network is composed of a source, ingress switches, intermediate switches, edge switches, and client nodes. The performance of the protocol is examined with the Just Enough Time (JET) and Just In Time (JIT) reservation protocols. The paper also covers most of the recent advances in WDM multicasting in optical networks. WDM multicasting in optical networks is reviewed under three common headings: broadcast-and-select networks, wavelength-routed networks, and OBS networks. In addition, multicast routing protocols are briefly summarized and optical burst switched WDM networks are investigated with the proposed multicast schemes. PMID:24744683

  18. High-Performance, Reliable Multicasting: Foundations for Future Internet Groupware Applications

    NASA Technical Reports Server (NTRS)

    Callahan, John; Montgomery, Todd; Whetten, Brian

    1997-01-01

    Network protocols that provide efficient, reliable, and totally-ordered message delivery to large numbers of users will be needed to support many future Internet applications. The Reliable Multicast Protocol (RMP) is implemented on top of IP multicast to facilitate reliable transfer of data for replicated databases and groupware applications that will emerge on the Internet over the next decade. This paper explores some of the basic questions and applications of reliable multicasting in the context of the development and analysis of RMP.

  19. Simultaneous multichannel wavelength multicasting and XOR logic gate multicasting for three DPSK signals based on four-wave mixing in quantum-dot semiconductor optical amplifier.

    PubMed

    Qin, Jun; Lu, Guo-Wei; Sakamoto, Takahide; Akahane, Kouichi; Yamamoto, Naokatsu; Wang, Danshi; Wang, Cheng; Wang, Hongxiang; Zhang, Min; Kawanishi, Tetsuya; Ji, Yuefeng

    2014-12-01

    In this paper, we experimentally demonstrate simultaneous multichannel wavelength multicasting (MWM) and exclusive-OR logic gate multicasting (XOR-LGM) for three 10 Gbps non-return-to-zero differential phase-shift-keying (NRZ-DPSK) signals in a quantum-dot semiconductor optical amplifier (QD-SOA) by exploiting the four-wave mixing (FWM) process. No additional pump is needed in the scheme. Through the interaction of the three input 10 Gbps DPSK signal lights in the QD-SOA, each channel is successfully multicasted to three wavelengths (1-to-3 for each), giving 3-to-9 MWM in total, and at the same time, three-output XOR-LGM is obtained at three different wavelengths. All the newly generated channels show a power penalty of less than 1.2 dB at a BER of 10^-9. Degenerate and non-degenerate FWM components are fully used in the experiment for data and logic multicasting.

  20. MTP: An atomic multicast transport protocol

    NASA Technical Reports Server (NTRS)

    Freier, Alan O.; Marzullo, Keith

    1990-01-01

    The Multicast Transport Protocol (MTP), a reliable transport protocol that utilizes the multicast strategy of applicable lower layer network architectures, is described. In addition to transporting data reliably and efficiently, MTP provides the client synchronization necessary for agreement on the receipt of data and the joining of the group of communicants.

  1. Link importance incorporated failure probability measuring solution for multicast light-trees in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Li, Xin; Zhang, Lu; Tang, Ying; Huang, Shanguo

    2018-03-01

    The light-tree-based optical multicasting (LT-OM) scheme provides a spectrum- and energy-efficient method to accommodate emerging multicast services. Some studies focus on survivability technologies that protect LTs against a fixed number of link failures, such as a single link failure. However, few studies involve failure probability constraints when building LTs. It is worth noting that each link of an LT plays a role of different importance under failure scenarios. When calculating the failure probability of an LT, the importance of each of its links should be considered. We design a link importance incorporated failure probability measuring solution (LIFPMS) for multicast LTs under the independent failure model and the shared risk link group failure model. Based on the LIFPMS, we formulate the minimum failure probability (MFP) problem for the LT-OM scheme. Heuristic approaches are developed to address the MFP problem in elastic optical networks. Numerical results show that the LIFPMS provides an accurate metric for calculating the failure probability of multicast LTs and enhances the reliability of the LT-OM scheme while accommodating multicast services.
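
    To make the link-importance idea concrete, the sketch below computes, for a toy light-tree under the independent failure model, both the plain probability that any link fails and a first-order estimate of the expected fraction of destinations disconnected, where each link's failure probability is weighted by the number of destinations downstream of it. The tree, the probabilities, and this particular weighting are illustrative assumptions; the paper's LIFPMS metric may differ.

      # Toy light-tree: edges (parent, child) with assumed per-link failure probabilities.
      tree = {("s", "a"): 0.01, ("a", "d1"): 0.02, ("a", "d2"): 0.015, ("s", "d3"): 0.01}
      destinations = {"d1", "d2", "d3"}

      def children(node):
          return [v for (u, v) in tree if u == node]

      def dests_below(node):
          """Destinations cut off from the source if the link into `node` fails."""
          found = {node} & destinations
          for ch in children(node):
              found |= dests_below(ch)
          return found

      def any_link_failure_prob():
          """Probability that at least one tree link fails (every link treated alike)."""
          p_ok = 1.0
          for p in tree.values():
              p_ok *= 1.0 - p
          return 1.0 - p_ok

      def importance_weighted_loss():
          """First-order (union-bound) estimate of the expected fraction of destinations lost,
          weighting each link by how many destinations depend on it."""
          return sum(p * len(dests_below(child)) / len(destinations)
                     for (parent, child), p in tree.items())

      print("P(any link fails):", round(any_link_failure_prob(), 4))
      print("importance-weighted expected loss:", round(importance_weighted_loss(), 4))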

  2. Mobility based key management technique for multicast security in mobile ad hoc networks.

    PubMed

    Madhusudhanan, B; Chitra, S; Rajan, C

    2015-01-01

    In MANET multicasting, maintaining forward and backward secrecy results in an increased packet drop rate owing to mobility. Frequent rekeying causes large message overhead, which increases energy consumption and end-to-end delay. In particular, the prevailing group key management techniques cope poorly with frequent mobility and disconnections. So there is a need to design a multicast key management technique to overcome these problems. In this paper, we propose a mobility based key management technique for multicast security in MANETs. Initially, the nodes are categorized according to their stability index, which is estimated based on link availability and mobility. A multicast tree is constructed such that for every weak node there is a strong parent node. A session key-based encryption technique is utilized to transmit multicast data. The rekeying process is performed periodically by the initiator node. The rekeying interval is fixed depending on the node category, so this technique greatly minimizes the rekeying overhead. Simulation results show that the proposed approach reduces the packet drop rate and improves data confidentiality.

  3. Many-to-Many Multicast Routing Schemes under a Fixed Topology

    PubMed Central

    Ding, Wei; Wang, Hongfa; Wei, Xuerui

    2013-01-01

    Many-to-many multicast routing can be extensively applied in computer or communication networks supporting various continuous multimedia applications. The paper focuses on the case where all users share a common communication channel while each user is both a sender and a receiver of messages in multicasting as well as an end user. In this case, the multicast tree appears as a terminal Steiner tree (TeST). The problem of finding a TeST with a quality-of-service (QoS) optimization is frequently NP-hard. However, we show that it is practical to find a many-to-many multicast tree with QoS optimization under a fixed topology. In this paper, we are concerned with three kinds of QoS optimization objectives for the multicast tree, namely minimum cost, minimum diameter, and maximum reliability. Each of the three optimization problems is considered in two versions, centralized and decentralized. This paper uses dynamic programming to devise an exact algorithm for the centralized and decentralized versions of each optimization problem. PMID:23589706

  4. Dynamic segment shared protection for multicast traffic in meshed wavelength-division-multiplexing optical networks

    NASA Astrophysics Data System (ADS)

    Liao, Luhua; Li, Lemin; Wang, Sheng

    2006-12-01

    We investigate the protection approach for dynamic multicast traffic under shared risk link group (SRLG) constraints in meshed wavelength-division-multiplexing optical networks. We present a shared protection algorithm called dynamic segment shared protection for multicast traffic (DSSPM), which can dynamically adjust the link cost according to the current network state and can establish a primary light-tree as well as corresponding SRLG-disjoint backup segments for a dependable multicast connection. A backup segment can efficiently share the wavelength capacity of its working tree and the common resources of other backup segments based on SRLG-disjoint constraints. The simulation results show that DSSPM not only can protect the multicast sessions against a single-SRLG breakdown, but can make better use of the wavelength resources and also lower the network blocking probability.

  5. Near-field self-interference cancellation and quality of service multicast beamforming in full-duplex

    NASA Astrophysics Data System (ADS)

    Wu, Fei; Shao, Shihai; Tang, Youxi

    2016-10-01

    We consider simultaneous multicast downlink transmit and receive operations on the same frequency band, also known as full-duplex links, between an access point and mobile users. The problem of minimizing the total power of multicast transmit beamforming is considered from the viewpoint of ensuring a required amount of suppression of the near-field line-of-sight self-interference and guaranteeing a prescribed minimum signal-to-interference-plus-noise ratio (SINR) at each receiver of the multicast groups. Based on earlier results for multicast group beamforming, the joint problem is easily shown to be NP-hard. A semidefinite relaxation (SDR) technique with a linear-programming power adjustment method is proposed to solve the NP-hard problem. Simulations show that the proposed method is feasible even when the local receive antenna in the near field and the mobile user in the far field are in the same direction.

  6. An efficient group multicast routing for multimedia communication

    NASA Astrophysics Data System (ADS)

    Wang, Yanlin; Sun, Yugen; Yan, Xinfang

    2004-04-01

    Group multicasting is a kind of communication mechanism whereby each member of a group sends messages to all the other members of the same group. Group multicast routing algorithms capable of satisfying quality of service (QoS) requirements of multimedia applications are essential for high-speed networks. We present a heuristic algorithm for group multicast routing with an end-to-end delay constraint. Source-specific routing trees for each member are generated in our algorithm, which satisfy each member's bandwidth and end-to-end delay requirements. Simulations over random networks were carried out to compare the proposed algorithm's performance with Low and Song's. The experimental results show that our proposed algorithm performs better in terms of network cost and ability to construct feasible multicast trees for group members. Moreover, our algorithm achieves good performance in balancing traffic, which can avoid link blocking and enhance the network behavior efficiently.

  7. Web server for priority ordered multimedia services

    NASA Astrophysics Data System (ADS)

    Celenk, Mehmet; Godavari, Rakesh K.; Vetnes, Vermund

    2001-10-01

    In this work, our aim is to provide finer priority levels in the design of a general-purpose Web multimedia server with provisions for continuous media (CM) services. The types of services provided include reading/writing a web page, downloading/uploading an audio/video stream, navigating the Web through browsing, and interactive video teleconferencing. The selected priority encoding levels for such operations follow the order of admin read/write, hot page CM and Web multicasting, CM read, Web read, CM write and Web write. Hot pages are the most requested CM streams (e.g., the newest movies, video clips, and HDTV channels) and Web pages (e.g., portal pages of the commercial Internet search engines). Maintaining a list of these hot Web pages and CM streams in a content addressable buffer enables a server to multicast hot streams with lower latency and higher system throughput. Cold Web pages and CM streams are treated as regular Web and CM requests. Interactive CM operations such as pause (P), resume (R), fast-forward (FF), and rewind (RW) have to be executed without allocation of extra resources. The proposed multimedia server model is a part of the distributed network with load balancing schedulers. The SM is connected to an integrated disk scheduler (IDS), which supervises an allocated disk manager. The IDS follows the same priority handling as the SM, and implements a SCAN disk-scheduling method for improved disk access and higher throughput. Different disks are used for the Web and CM services in order to meet the QoS requirements of CM services. The IDS output is forwarded to an Integrated Transmission Scheduler (ITS). The ITS creates a priority-ordered buffering of the retrieved Web pages and CM data streams that are fed into auto-regressive moving-average (ARMA) based traffic-shaping circuitry before being transmitted through the network.
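    The stated priority ordering can be illustrated with a simple priority queue. The sketch below is only a schematic of request ordering, not the proposed server; the numeric priority values are hypothetical and only their relative order follows the abstract.

    ```python
    # Minimal sketch (not the paper's server): serving requests in the stated
    # priority order using a heap, with FIFO order inside each priority level.
    import heapq
    import itertools

    PRIORITY = {
        "admin_rw": 0,          # admin read/write
        "hot_multicast": 1,     # hot-page CM and Web multicasting
        "cm_read": 2,
        "web_read": 3,
        "cm_write": 4,
        "web_write": 5,
    }

    counter = itertools.count()   # tie-break within a priority level
    queue = []

    def submit(kind, payload):
        heapq.heappush(queue, (PRIORITY[kind], next(counter), kind, payload))

    def next_request():
        _, _, kind, payload = heapq.heappop(queue)
        return kind, payload

    submit("web_read", "GET /index.html")
    submit("hot_multicast", "stream: newest-movie")
    print(next_request())   # the hot multicast request is served first
    ```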

  8. Group-multicast capable optical virtual private ring with contention avoidance

    NASA Astrophysics Data System (ADS)

    Peng, Yunfeng; Du, Shu; Long, Keping

    2008-11-01

    A ring based optical virtual private network (OVPN) employing contention sensing and avoidance is proposed to deliver multiple-to-multiple group-multicast traffic. The network architecture is presented and its operation principles as well as performance are investigated. The main contribution of this article is the presentation of an innovative group-multicast capable OVPN architecture with technologies available today.

  9. Multicast Routing and Wavelength Assignment with Shared Protection in Multi-Fiber WDM Mesh Networks: Optimal and Heuristic Solutions

    NASA Astrophysics Data System (ADS)

    Woradit, Kampol; Guyot, Matthieu; Vanichchanunt, Pisit; Saengudomlert, Poompat; Wuttisittikulkij, Lunchakorn

    While the problem of multicast routing and wavelength assignment (MC-RWA) in optical wavelength division multiplexing (WDM) networks has been investigated, relatively few researchers have considered network survivability for multicasting. This paper provides an optimization framework to solve the MC-RWA problem in a multi-fiber WDM network that can recover from a single-link failure with shared protection. Using the light-tree (LT) concept to support multicast sessions, we consider two protection strategies that try to reduce service disruptions after a link failure. The first strategy, called light-tree reconfiguration (LTR) protection, computes a new multicast LT for each session affected by the failure. The second strategy, called optical branch reconfiguration (OBR) protection, tries to restore a logical connection between two adjacent multicast members disconnected by the failure. To solve the MC-RWA problem optimally, we propose an integer linear programming (ILP) formulation that minimizes the total number of fibers required for both working and backup traffic. The ILP formulation takes into account joint routing of working and backup traffic, the wavelength continuity constraint, and the limited splitting degree of multicast-capable optical cross-connects (MC-OXCs). After showing some numerical results for optimal solutions, we propose heuristic algorithms that reduce the computational complexity and make the problem solvable for large networks. Numerical results suggest that the proposed heuristic yields efficient solutions compared to optimal solutions obtained from exact optimization.

  10. Distributed Ship Navigation Control System Based on Dual Network

    NASA Astrophysics Data System (ADS)

    Yao, Ying; Lv, Wu

    2017-10-01

    The navigation system is very important for a ship's normal operation. It contains many devices and sensors that guarantee the ship's regular work. In the past, these devices and sensors were usually connected via a CAN bus for high performance and reliability. However, as the related devices and sensors develop, the navigation system also needs high information throughput and remote data sharing. To meet these new requirements, we propose a communication method based on a dual network containing a CAN bus and industrial Ethernet. We also introduce multiple distributed control terminals with a cooperative strategy that synchronizes status by multicasting UDP messages containing operation timestamps, which makes the system more efficient and reliable.
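    A minimal sketch of the status-synchronization idea, assuming a hypothetical multicast group address, port, and message format, might look like this:

    ```python
    # Minimal sketch (not the paper's system): multicasting a status message that
    # carries an operation timestamp over UDP. Group address, port, and message
    # fields are hypothetical.
    import json
    import socket
    import time

    GROUP, PORT = "239.1.1.1", 5007   # example administratively scoped group

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

    status = {"terminal": "helm-1", "state": "heading=127", "timestamp": time.time()}
    sock.sendto(json.dumps(status).encode(), (GROUP, PORT))
    ```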

  11. Dynamic multicast routing scheme in WDM optical network

    NASA Astrophysics Data System (ADS)

    Zhu, Yonghua; Dong, Zhiling; Yao, Hong; Yang, Jianyong; Liu, Yibin

    2007-11-01

    In the Internet era, the World Wide Web and its services are developing rapidly, so ever wider bandwidth is required at ever lower cost, and the demands placed on the network have become diversified. Data, images, video, and other special transmission demands present both a challenge and an opportunity for service providers. At the same time, electrical equipment is approaching its limits, so optical communication based on wavelength-division multiplexing (WDM) and optical cross-connects (OXCs) shows great potential for building optical networks thanks to its unique technical advantages and multi-wavelength characteristics. In this paper, we propose a multi-layered graph model with inter-layer paths to solve the multicast routing and wavelength assignment (RWA) problem jointly, using an efficient graph-theoretic formulation. We also propose an efficient dynamic multicast algorithm named Distributed Message Copying Multicast (DMCM). With the proposed scheme, a multicast tree with minimum hops can be constructed dynamically.
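    For illustration only, a minimum-hop multicast tree of the kind described above can be obtained by merging BFS shortest paths from the source to each destination. This sketch is not the DMCM mechanism itself and uses a hypothetical topology.

    ```python
    # Illustrative sketch (not the DMCM algorithm): building a minimum-hop
    # multicast tree by taking BFS shortest paths from the source to each
    # destination and merging them.
    from collections import deque

    graph = {"s": ["a", "b"], "a": ["s", "c"], "b": ["s", "c", "d"],
             "c": ["a", "b", "d"], "d": ["b", "c"]}

    def bfs_parents(src):
        parent, frontier = {src: None}, deque([src])
        while frontier:
            u = frontier.popleft()
            for v in graph[u]:
                if v not in parent:
                    parent[v] = u
                    frontier.append(v)
        return parent

    def multicast_tree(src, destinations):
        parent, tree_edges = bfs_parents(src), set()
        for dst in destinations:
            node = dst
            while parent[node] is not None:      # walk back along a shortest path
                tree_edges.add((parent[node], node))
                node = parent[node]
        return tree_edges

    print(multicast_tree("s", ["c", "d"]))
    ```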

  12. Performance investigation of optical multicast overlay system using orthogonal modulation format

    NASA Astrophysics Data System (ADS)

    Singh, Simranjit; Singh, Sukhbir; Kaur, Ramandeep; Kaler, R. S.

    2015-03-01

    We propose a bandwidth-efficient wavelength-division-multiplexed passive optical network (WDM-PON) to simultaneously transmit 60 Gb/s unicast and 10 Gb/s multicast services with a 10 Gb/s upstream. The differential phase-shift keying (DPSK) multicast signal is superimposed onto multiplexed non-return-to-zero/polarization shift keying (NRZ/PolSK) orthogonally modulated data signals. Upstream amplitude-shift keying (ASK) signals are formed without the use of any additional light source and are superimposed onto the received unicast NRZ/PolSK signal before being transmitted back to the optical line terminal (OLT). We also investigate the proposed WDM-PON system for variable optical input power and single-mode-fiber transmission distance, with the multicast function enabled and disabled. The measured quality factor for all unicast and multicast signals is in the acceptable range (>6). The original contribution of this paper is to propose a bandwidth-efficient WDM-PON system that can be deployed even in high-speed scenarios at reduced channel spacing and is expected to be more technically viable due to the use of optical orthogonal modulation formats.

  13. Design and development of cell queuing, processing, and scheduling modules for the iPOINT input-buffered ATM testbed

    NASA Astrophysics Data System (ADS)

    Duan, Haoran

    1997-12-01

    This dissertation presents the concepts, principles, performance, and implementation of input queuing and cell-scheduling modules for the Illinois Pulsar-based Optical INTerconnect (iPOINT) input-buffered Asynchronous Transfer Mode (ATM) testbed. Input queuing (IQ) ATM switches are well suited to meet the requirements of current and future ultra-broadband ATM networks. The IQ structure imposes minimum memory bandwidth requirements for cell buffering, tolerates bursty traffic, and utilizes memory efficiently for multicast traffic. The lack of efficient cell queuing and scheduling solutions has been a major barrier to build high-performance, scalable IQ-based ATM switches. This dissertation proposes a new Three-Dimensional Queue (3DQ) and a novel Matrix Unit Cell Scheduler (MUCS) to remove this barrier. 3DQ uses a linked-list architecture based on Synchronous Random Access Memory (SRAM) to combine the individual advantages of per-virtual-circuit (per-VC) queuing, priority queuing, and N-destination queuing. It avoids Head of Line (HOL) blocking and provides per-VC Quality of Service (QoS) enforcement mechanisms. Computer simulation results verify the QoS capabilities of 3DQ. For multicast traffic, 3DQ provides efficient usage of cell buffering memory by storing multicast cells only once. Further, the multicast mechanism of 3DQ prevents a congested destination port from blocking other less- loaded ports. The 3DQ principle has been prototyped in the Illinois Input Queue (iiQueue) module. Using Field Programmable Gate Array (FPGA) devices, SRAM modules, and integrated on a Printed Circuit Board (PCB), iiQueue can process incoming traffic at 800 Mb/s. Using faster circuit technology, the same design is expected to operate at the OC-48 rate (2.5 Gb/s). MUCS resolves the output contention by evaluating the weight index of each candidate and selecting the heaviest. It achieves near-optimal scheduling and has a very short response time. The algorithm originates from a heuristic strategy that leads to 'socially optimal' solutions, yielding a maximum number of contention-free cells being scheduled. A novel mixed digital-analog circuit has been designed to implement the MUCS core functionality. The MUCS circuit maps the cell scheduling computation to the capacitor charging and discharging procedures that are conducted fully in parallel. The design has a uniform circuit structure, low interconnect counts, and low chip I/O counts. Using 2 μm CMOS technology, the design operates on a 100 MHz clock and finds a near-optimal solution within a linear processing time. The circuit has been verified at the transistor level by HSPICE simulation. During this research, a five-port IQ-based optoelectronic iPOINT ATM switch has been developed and demonstrated. It has been fully functional with an aggregate throughput of 800 Mb/s. The second-generation IQ-based switch is currently under development. Equipped with iiQueue modules and MUCS module, the new switch system will deliver a multi-gigabit aggregate throughput, eliminate HOL blocking, provide per-VC QoS, and achieve near-100% link bandwidth utilization. Complete documentation of input modules and trunk module for the existing testbed, and complete documentation of 3DQ, iiQueue, and MUCS for the second-generation testbed are given in this dissertation.
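    The MUCS selection rule described above (repeatedly take the heaviest candidate) can be mimicked in software as a greedy matching over a weight matrix. The sketch below is only a software analogue of that idea, not the mixed digital-analog circuit, and the weights are hypothetical.

    ```python
    # Illustrative software analogue of the scheduling idea: repeatedly pick the
    # heaviest input/output pair and remove its row and column, so each input
    # and each output is matched at most once per cell slot.
    import numpy as np

    weights = np.array([[3, 1, 0],
                        [2, 0, 4],
                        [0, 5, 1]], dtype=float)   # weight index per input/output pair

    def greedy_schedule(w):
        w = w.copy()
        matches = []
        while w.max() > 0:
            i, j = np.unravel_index(np.argmax(w), w.shape)
            matches.append((int(i), int(j)))
            w[i, :] = 0           # input i is now busy this slot
            w[:, j] = 0           # output j is now busy this slot
        return matches

    print(greedy_schedule(weights))   # e.g. [(2, 1), (1, 2), (0, 0)]
    ```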

  14. Scalable Multicast Protocols for Overlapped Groups in Broker-Based Sensor Networks

    NASA Astrophysics Data System (ADS)

    Kim, Chayoung; Ahn, Jinho

    In sensor networks, there are many overlapped multicast groups, because many subscribers, each with potentially different specific interests, query every event from sensors/publishers. Gossip-based communication protocols are promising as potential solutions for providing scalability in the publish/subscribe (P/S) paradigm in sensor networks. Moreover, despite the importance of both guaranteeing message delivery order and supporting overlapped multicast groups in sensor or P2P networks, little research has been done on developing gossip-based protocols that satisfy all of these requirements. In this paper, we present two versions of protocols guaranteeing causally ordered delivery for overlapped multicast groups. The first is based on sensor-brokers as delegates, and the second is based on local views and delegates representing subscriber subgroups. In the sensor-broker based protocol, the sensor-broker organizes the overlapped multicast networks according to subscribers' interests; the message delivery order is guaranteed consistently, and all multicast messages are delivered to overlapped subscribers by the sensor-broker using gossip-based protocols. These features make the sensor-broker based protocol significantly more scalable than protocols based on hierarchical membership lists of dedicated groups, such as traditional committee protocols. The subscriber-delegate based protocol is much stronger than fully decentralized protocols that guarantee causally ordered delivery based only on local views, because the message delivery order is guaranteed consistently by all corresponding members of the groups, including delegates. This makes the subscriber-delegate protocol a hybrid approach that improves the inherent scalability of multicast through gossip-based techniques in all communications.
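    As background for the gossip-based dissemination these protocols build on, a minimal round-based gossip sketch is shown below. It is a generic illustration of the technique, not the paper's broker or delegate protocols; the node count, fanout, and round count are hypothetical.

    ```python
    # Minimal sketch of gossip-style dissemination: each node that holds a
    # message forwards it to a few random peers per round.
    import random

    random.seed(1)
    nodes = list(range(20))
    FANOUT, ROUNDS = 3, 4

    infected = {0}                      # node 0 publishes the event
    for _ in range(ROUNDS):
        for node in list(infected):
            for peer in random.sample(nodes, FANOUT):
                infected.add(peer)

    print(f"{len(infected)}/{len(nodes)} nodes received the event")
    ```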

  15. Inertial Motion Tracking for Inserting Humans into a Networked Synthetic Environment

    DTIC Science & Technology

    2007-08-31

    tracking methods. One method requires markers on the tracked human body, and the other method does not use markers. OPTOTRAK from Northern Digital Inc. is a...of using multicasting protocols. Unfortunately, most routers on the Internet are not configured for multicasting. A technique called tunneling is...used to overcome this problem. Tunneling is a software solution that runs on the end-point routers/computers and allows multicast packets to traverse

  16. Authenticated IGMP for Controlling Access to Multicast Distribution Tree

    NASA Astrophysics Data System (ADS)

    Park, Chang-Seop; Kang, Hyun-Sun

    A receiver access control scheme is proposed to protect the multicast distribution tree from DoS attack induced by unauthorized use of IGMP, by extending the security-related functionality of IGMP. Based on a specific network and business model adopted for commercial deployment of IP multicast applications, a key management scheme is also presented for bootstrapping the proposed access control as well as accounting and billing for CP (Content Provider), NSP (Network Service Provider), and group members.

  17. A decentralized software bus based on IP multicasting

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd

    1995-01-01

    We describe a decentralized, reconfigurable implementation of a conference management system based on the low-level Internet Protocol (IP) multicasting protocol. IP multicasting allows low-cost, world-wide, two-way transmission of data between large numbers of conferencing participants through the Multicasting Backbone (MBone). Each conference is structured as a software bus -- a messaging system that provides a run-time interconnection model that acts as a separate agent (i.e., the bus) for routing, queuing, and delivering messages between distributed programs. Unlike the client-server interconnection model, the software bus model provides a level of indirection that enhances the flexibility and reconfigurability of a distributed system. Current software bus implementations like POLYLITH, however, rely on a centralized bus process and point-to-point protocols (i.e., TCP/IP) to route, queue, and deliver messages. We implement a software bus called the MULTIBUS that relies on a separate process only for routing and uses a reliable IP multicasting protocol for delivery of messages. The use of multicasting means that interconnections are independent of IP machine addresses. This approach allows reconfiguration of bus participants during system execution without notifying other participants of new IP addresses. The use of IP multicasting also permits an economy of scale in the number of participants. We describe the MULTIBUS protocol elements and show how our implementation performs better than centralized bus implementations.
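    A bus participant's subscription side can be sketched as joining an IP multicast group and receiving whatever any sender publishes to it. This is a generic illustration with a hypothetical group address and port, not the MULTIBUS protocol itself.

    ```python
    # Minimal sketch of a bus participant subscribing by joining an IP multicast
    # group and receiving messages, independent of the senders' addresses.
    import socket
    import struct

    GROUP, PORT = "239.2.2.2", 6000   # hypothetical group and port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Join the multicast group on all interfaces.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        data, addr = sock.recvfrom(4096)
        print("bus message from", addr, ":", data.decode(errors="replace"))
    ```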

  18. Polling-Based High-Bit-Rate Packet Transfer in a Microcellular Network to Allow Fast Terminals

    NASA Astrophysics Data System (ADS)

    Hoa, Phan Thanh; Lambertsen, Gaute; Yamada, Takahiko

    A microcellular network is a good candidate for the future broadband mobile network. It is expected to support high-bit-rate connections for many fast mobile users if handover is processed quickly enough to lessen its impact on QoS requirements. One promising technique for the wireless interface in such a microcellular network is WLAN (Wireless LAN), owing to its very high wireless channel rate. However, its limited mobility support must be improved before its use can be expanded to the microcellular environment. The limited mobility support stems from the large handover latency caused by contention-based handover to the new BS (base station) and from the delay of re-forwarding data from the old BS to the new one. This paper proposes multi-polling and a dynamic LMC (Logical Macro Cell) scheme to reduce the above-mentioned delays. A polling frame for an MT (Mobile Terminal) is sent from every BS belonging to the same LMC, a virtual single macro cell that is a multicast group of several adjacent micro-cells in which an MT is communicating. Instead of contending for the medium of a new BS during handover, the MT responds to the polling sent from that new BS to enable the transition. Because only one BS of the LMC receives the polling ACK (acknowledgement) directly from the MT, this ACK frame has to be multicast to all BSs of the same LMC through the terrestrial network so that each BS can continue the next polling cycle. Moreover, when an MT hands over to a new cell, its current LMC is switched over to the newly corresponding LMC to prevent future contention for a new LMC. In this way, an MT can hand over between micro-cells of an LMC smoothly, because redundant resources are reserved for it at neighboring cells and it does not need to contend with others. Our simulation results using the OMNeT++ simulator illustrate the performance of the multi-polling and dynamic LMC scheme in eliminating handover latency and packet loss and in keeping mobile users' throughput stable under high traffic load, although it causes some overhead on the neighboring cells.

  19. The Verification-based Analysis of Reliable Multicast Protocol

    NASA Technical Reports Server (NTRS)

    Wu, Yunqing

    1996-01-01

    Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP Multicasting. In this paper, we develop formal models for RMP using existing automatic verification systems, and perform verification-based analysis on the formal RMP specifications. We also use the formal models of the RMP specifications to generate a test suite for conformance testing of the RMP implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress between the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding for the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.

  20. The reliable multicast protocol application programming interface

    NASA Technical Reports Server (NTRS)

    Montgomery , Todd; Whetten, Brian

    1995-01-01

    The Application Programming Interface for the Berkeley/WVU implementation of the Reliable Multicast Protocol is described. This transport layer protocol is implemented as a user library that applications and software buses link against.

  1. Multicast backup reprovisioning problem for Hamiltonian cycle-based protection on WDM networks

    NASA Astrophysics Data System (ADS)

    Din, Der-Rong; Huang, Jen-Shen

    2014-03-01

    As networks grow in size and complexity, the chance and the impact of failures increase dramatically. The pre-allocated backup resources cannot provide 100% protection guarantee when continuous failures occur in a network. In this paper, the multicast backup re-provisioning problem (MBRP) for Hamiltonian cycle (HC)-based protection on WDM networks for the link-failure case is studied. We focus on how to recover the protecting capabilities of Hamiltonian cycle against the subsequent link-failures on WDM networks for multicast transmissions, after recovering the multicast trees affected by the previous link-failure. Since this problem is a hard problem, an algorithm, which consists of several heuristics and a genetic algorithm (GA), is proposed to solve it. The simulation results of the proposed method are also given. Experimental results indicate that the proposed algorithm can solve this problem efficiently.

  2. The specification-based validation of reliable multicast protocol: Problem Report. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Wu, Yunqing

    1995-01-01

    Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP multicasting. In this report, we develop formal models for RMP using existing automated verification systems, and perform validation on the formal RMP specifications. The validation analysis helps identify some minor specification and design problems. We also use the formal models of RMP to generate a test suite for conformance testing of the implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress of implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding for the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.

  3. The multidriver: A reliable multicast service using the Xpress Transfer Protocol

    NASA Technical Reports Server (NTRS)

    Dempsey, Bert J.; Fenton, John C.; Weaver, Alfred C.

    1990-01-01

    A reliable multicast facility extends traditional point-to-point virtual circuit reliability to one-to-many communication. Such services can provide more efficient use of network resources, a powerful distributed name binding capability, and reduced latency in multidestination message delivery. These benefits will be especially valuable in real-time environments where reliable multicast can enable new applications and increase the availability and the reliability of data and services. We present a unique multicast service that exploits features in the next-generation, real-time transfer layer protocol, the Xpress Transfer Protocol (XTP). In its reliable mode, the service offers error, flow, and rate-controlled multidestination delivery of arbitrary-sized messages, with provision for the coordination of reliable reverse channels. Performance measurements on a single-segment Proteon ProNET-4 4 Mbps 802.5 token ring with heterogeneous nodes are discussed.

  4. Flexible and re-configurable optical three-input XOR logic gate of phase-modulated signals with multicast functionality for potential application in optical physical-layer network coding.

    PubMed

    Lu, Guo-Wei; Qin, Jun; Wang, Hongxiang; Ji, XuYuefeng; Sharif, Gazi Mohammad; Yamaguchi, Shigeru

    2016-02-08

    Optical logic gates, especially the exclusive-or (XOR) gate, play an important role in accomplishing photonic computing and various network functionalities in future optical networks. On the other hand, optical multicast is another indispensable functionality for efficiently delivering information in optical networks. In this paper, for the first time, we propose and experimentally demonstrate a flexible optical three-input XOR gate scheme for multiple input phase-modulated signals with a 1-to-2 multicast functionality for each XOR operation, using the four-wave mixing (FWM) effect in a single piece of highly-nonlinear fiber (HNLF). Through FWM in the HNLF, all of the possible XOR operations among the input signals can be realized simultaneously by sharing a single piece of HNLF. By selecting the obtained XOR components with a subsequent wavelength-selective component, the number of XOR gates and the participating light in the XOR operations can be flexibly configured. The re-configurability of the proposed XOR gate and the integration of the optical logic gate and multicast functions in a single device offer flexibility in network design and improve network efficiency. We experimentally demonstrate a flexible 3-input XOR gate for four 10-Gbaud binary phase-shift keying signals with a multicast scale of 2. Error-free operation for the obtained XOR results is achieved. Potential application of the integrated XOR and multicast function in network coding is also discussed.
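    A toy numerical check of why phase addition yields XOR for binary phase-shift keying (bit b mapped to phase bπ) is shown below; it is purely arithmetic and does not model the FWM experiment or the optical device.

    ```python
    # Toy illustration (not a model of the optical experiment): for BPSK, bit b
    # maps to phase b*pi, and a mixing product whose phase is the sum of the
    # input phases therefore carries the XOR of the input bits.
    import itertools
    import math

    def phase(bit):
        return bit * math.pi

    for b1, b2, b3 in itertools.product((0, 1), repeat=3):
        total = (phase(b1) + phase(b2) + phase(b3)) % (2 * math.pi)
        decoded = 0 if math.isclose(total, 0.0, abs_tol=1e-9) else 1
        assert decoded == (b1 ^ b2 ^ b3)

    print("phase addition reproduces the 3-input XOR truth table")
    ```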

  5. Demonstration of flexible multicasting and aggregation functionality for TWDM-PON

    NASA Astrophysics Data System (ADS)

    Chen, Yuanxiang; Li, Juhao; Zhu, Paikun; Zhu, Jinglong; Tian, Yu; Wu, Zhongying; Peng, Huangfa; Xu, Yongchi; Chen, Jingbiao; He, Yongqi; Chen, Zhangyuan

    2017-06-01

    The time- and wavelength-division multiplexed passive optical network (TWDM-PON) has been recognized as an attractive solution to provide broadband access for next-generation networks. In this paper, we propose flexible service multicasting and aggregation functionality for TWDM-PON utilizing multiple-pump four-wave mixing (FWM) and a cyclic arrayed waveguide grating (AWG). With the proposed scheme, multiple TWDM-PON links share a single optical line terminal (OLT), which can greatly reduce the network deployment expense and achieve efficient network resource utilization by load balancing among different optical distribution networks (ODNs). The proposed scheme is compatible with existing TDM-PON infrastructure with a fixed-wavelength OLT transmitter, so a smooth service upgrade can be achieved. Utilizing the proposed scheme, we demonstrate a proof-of-concept experiment with 10-Gb/s OOK and 10-Gb/s QPSK orthogonal frequency division multiplexing (OFDM) signal multicasting and aggregation to seven PON links. Compared with the back-to-back (BTB) channel, the newly generated multicast OOK signal and OFDM signal have power penalties of 1.6 dB and 2 dB at a BER of 10⁻³, respectively. For the aggregation of multiple channels, no obvious power penalty is observed. What is more, to verify the flexibility of the proposed scheme, we reconfigure the wavelength selective switch (WSS) and adjust the number of pumps to realize flexible multicasting functionality. One-to-three, one-to-seven, one-to-thirteen and one-to-twenty-one multicasting are achieved without modifying the OLT structure.

  6. Multicast Routing of Hierarchical Data

    NASA Technical Reports Server (NTRS)

    Shacham, Nachum

    1992-01-01

    The issue of multicast of broadband, real-time data in a heterogeneous environment, in which the data recipients differ in their reception abilities, is considered. Traditional multicast schemes, which are designed to deliver all the source data to all recipients, offer limited performance in such an environment, since they must either force the source to overcompress its signal or restrict the destination population to those who can receive the full signal. We present an approach for resolving this issue by combining hierarchical source coding techniques, which allow recipients to trade off reception bandwidth for signal quality, and sophisticated routing algorithms that deliver to each destination the maximum possible signal quality. The field of hierarchical coding is briefly surveyed and new multicast routing algorithms are presented. The algorithms are compared in terms of network utilization efficiency, lengths of paths, and the required mechanisms for forwarding packets on the resulting paths.

  7. Multifunctional switching unit for add/drop, wavelength conversion, format conversion, and WDM multicast based on bidirectional LCoS and SOA-loop architecture.

    PubMed

    Wang, Danshi; Zhang, Min; Qin, Jun; Lu, Guo-Wei; Wang, Hongxiang; Huang, Shanguo

    2014-09-08

    We propose a multifunctional optical switching unit based on the bidirectional liquid crystal on silicon (LCoS) and semiconductor optical amplifier (SOA) architecture. Add/drop, wavelength conversion, format conversion, and WDM multicast are experimentally demonstrated. Due to the bidirectional characteristic, the LCoS device can not only multiplex the input signals but also de-multiplex the converted signals. Dual-channel wavelength conversion and format conversion from 2 × 25Gbps differential quadrature phase-shift-keying (DQPSK) to 2 × 12.5Gbps differential phase-shift-keying (DPSK) based on four-wave mixing (FWM) in the SOA is obtained with only one pump. One-to-six WDM multicast of 25Gbps DQPSK signals with two pumps is also achieved. All of the multicast channels have a power penalty of less than 1.1 dB at the FEC threshold of 3.8 × 10⁻³.

  8. Multisites Coordination in Shared Multicast Trees

    DTIC Science & Technology

    1999-01-01

    conferencing, distributed interactive simulations, and collaborative systems. We describe a novel protocol to coordinate multipoint groupwork in the IP...multicast framework. The protocol supports Internet-wide coordination for large and highly-interactive groupwork, relying on transmission of

  9. Optimization of multicast optical networks with genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lv, Bo; Mao, Xiangqiao; Zhang, Feng; Qin, Xi; Lu, Dan; Chen, Ming; Chen, Yong; Cao, Jihong; Jian, Shuisheng

    2007-11-01

    In this letter, aiming to obtain the best multicast performance of an optical network in which video conference information is carried on specified wavelengths, we extend the solutions of matrix games with network coding theory and devise a new method to solve the complex problem of multicast network switching. In addition, an experimental optical network has been tested with the best switching strategies, employing a novel numerical solution based on an effective genetic algorithm. The results show that the optimal solutions found with the genetic algorithm are in accordance with those found with the traditional fictitious play method.

  10. Fast causal multicast

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth; Schiper, Andre; Stephenson, Pat

    1990-01-01

    A new protocol is presented that efficiently implements a reliable, causally ordered multicast primitive and is easily extended into a totally ordered one. Intended for use in the ISIS toolkit, it offers a way to bypass the most costly aspects of ISIS while benefiting from virtual synchrony. The facility scales with bounded overhead. Measured speedups of more than an order of magnitude were obtained when the protocol was implemented within ISIS. One conclusion is that systems such as ISIS can achieve performance competitive with the best existing multicast facilities--a finding contradicting the widespread concern that fault-tolerance may be unacceptably costly.

  11. A Secure Multicast Framework in Large and High-Mobility Network Groups

    NASA Astrophysics Data System (ADS)

    Lee, Jung-San; Chang, Chin-Chen

    With the widespread use of Internet applications such as teleconferencing, Pay-TV, collaborative tasks, and message services, how to construct and distribute the group session key to all group members securely is becoming more and more important. Instead of adopting point-to-point packet delivery, these emerging applications are based upon the mechanism of multicast communication, which allows group members to communicate with multiple parties efficiently. There are two main issues in the mechanism of multicast communication: key distribution and scalability. The first issue is how to distribute the group session key to all group members securely. The second is how to maintain high performance in large network groups. Group members in conventional multicast systems have to keep numerous secret keys in databases, which makes it very inconvenient for them. Furthermore, when a member joins or leaves the communication group, many involved participants have to change their own secret keys to preserve the forward secrecy and the backward secrecy. We consequently propose a novel framework for providing secure multicast communication in large network groups. Our proposed framework not only preserves the forward secrecy and the backward secrecy but also possesses better performance than existing alternatives. Specifically, simulation results demonstrate that our scheme is suitable for high-mobility environments.

  12. Optical network scaling: roles of spectral and spatial aggregation.

    PubMed

    Arık, Sercan Ö; Ho, Keang-Po; Kahn, Joseph M

    2014-12-01

    As the bit rates of routed data streams exceed the throughput of single wavelength-division multiplexing channels, spectral and spatial traffic aggregation become essential for optical network scaling. These aggregation techniques reduce network routing complexity by increasing spectral efficiency to decrease the number of fibers, and by increasing switching granularity to decrease the number of switching components. Spectral aggregation yields a modest decrease in the number of fibers but a substantial decrease in the number of switching components. Spatial aggregation yields a substantial decrease in both the number of fibers and the number of switching components. To quantify routing complexity reduction, we analyze the number of multi-cast and wavelength-selective switches required in a colorless, directionless and contentionless reconfigurable optical add-drop multiplexer architecture. Traffic aggregation has two potential drawbacks: reduced routing power and increased switching component size.

  13. XML Tactical Chat (XTC): The Way Ahead for Navy Chat

    DTIC Science & Technology

    2007-09-01

    multicast transmissions via sophisticated pruning algorithms, while allowing multicast packets to "tunnel" through IP routers. [Macedonia, Brutzman 1994...conference was Jabber Inc., who added some great insight into the power of Jabber. • Great features including BlackBerry handheld connectivity and

  14. Reliable WDM multicast in optical burst-switched networks

    NASA Astrophysics Data System (ADS)

    Jeong, Myoungki; Qiao, Chunming; Xiong, Yijun

    2000-09-01

    In this paper, we present a reliable WDM (Wavelength-Division Multiplexing) multicast protocol for optical burst-switched (OBS) networks. Since the burst dropping (loss) probability may be potentially high in a heavily loaded OBS backbone network, reliable multicast protocols that have been developed for IP networks at the transport (or application) layer may incur heavy overheads such as a large number of duplicate retransmissions. In addition, it may take a longer time for an end host to detect and then recover from burst dropping (loss) that occurred at the WDM layer. For efficiency reasons, we propose burst loss recovery within the OBS backbone (i.e., at the WDM link layer). The proposed protocol requires two additional functions to be performed by the WDM switch controller: subcasting and maintaining burst states, when the WDM switch has more than one downstream on the WDM multicast tree. We show that these additional functions are simple to implement and the overhead associated with them is manageable.

  15. Analysis on Multicast Routing Protocols for Mobile Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Xiang, Ma

    Mobile Ad Hoc Network technologies face a series of challenges, such as dynamic changes of the topological structure, the existence of unidirectional channels, limited wireless transmission bandwidth, and the capability limitations of mobile terminals; research on mobile Ad Hoc network routing therefore inevitably undertakes a more important task than routing research for other networks. Multicast is a mode of communication transmission oriented to group computing, which sends data to a group of hosts using a single source address. In a typical mobile Ad Hoc network environment, multicast is significant. On the one hand, users of a mobile Ad Hoc network usually need to form collaborative working groups; on the other hand, multicast is an important means of fully exploiting the broadcast nature of wireless communication and effectively using the limited wireless channel resources. This paper summarizes and comparatively analyzes the routing mechanisms of various existing multicast routing protocols according to the characteristics of mobile Ad Hoc networks.

  16. WDM/TDM PON experiments using the AWG free spectral range periodicity to transmit unicast and multicast data

    NASA Astrophysics Data System (ADS)

    Bock, Carlos; Prat, Josep

    2005-04-01

    A hybrid WDM/TDM PON architecture implemented by means of two cascaded Arrayed Waveguide Gratings (AWGs) is presented. Using the Free Spectral Range (FSR) periodicity of AWGs, we transmit unicast and multicast traffic on different wavelengths to each Optical Network Unit (ONU). The OLT is equipped with two laser stacks, a tunable one for unicast transmission and a fixed one for multicast transmission. We propose that the ONU be reflective in order to avoid any light source at the Customer Premises Equipment (CPE). Optical transmission tests demonstrate correct transmission at 2.5 Gbps up to 30 km.

  17. Performance Evaluation of Reliable Multicast Protocol for Checkout and Launch Control Systems

    NASA Technical Reports Server (NTRS)

    Shu, Wei Wennie; Porter, John

    2000-01-01

    The overall objective of this project is to study reliability and performance of Real Time Critical Network (RTCN) for checkout and launch control systems (CLCS). The major tasks include reliability and performance evaluation of Reliable Multicast (RM) package and fault tolerance analysis and design of dual redundant network architecture.

  18. Proxy-assisted multicasting of video streams over mobile wireless networks

    NASA Astrophysics Data System (ADS)

    Nguyen, Maggie; Pezeshkmehr, Layla; Moh, Melody

    2005-03-01

    This work addresses the challenge of providing seamless multimedia services to mobile users by proposing a proxy-assisted multicast architecture for the delivery of video streams. We propose a hybrid system of streaming proxies, interconnected by an application-layer multicast tree, where each proxy acts as a cluster head to stream out content to its stationary and mobile users. The architecture is based on our previously proposed Enhanced-NICE protocol, which uses an application-layer multicast tree to deliver layered video streams to multiple heterogeneous receivers. We focused the study on the placement of streaming proxies to enable efficient delivery of live and on-demand video, supporting both stationary and mobile users. The simulation results are evaluated and compared with two other baseline scenarios: one with a centralized proxy system serving the entire population and one with mini-proxies, each serving its local users. The simulations are implemented using the J-SIM simulator. The results show that even though proxies in the hybrid scenario experienced a slightly longer delay, they had the lowest drop rate of video content. This finding illustrates the significance of task sharing among multiple proxies. The resulting load balancing among proxies provides better video quality to a larger audience.

  19. Multicasting based optical inverse multiplexing in elastic optical network.

    PubMed

    Guo, Bingli; Xu, Yingying; Zhu, Paikun; Zhong, Yucheng; Chen, Yuanxiang; Li, Juhao; Chen, Zhangyuan; He, Yongqi

    2014-06-16

    Optical multicasting based inverse multiplexing (IM) is introduced into the spectrum allocation of elastic optical networks to resolve the spectrum fragmentation problem, whereby superchannels can be split and fitted into several discrete spectrum blocks at the intermediate node. We experimentally demonstrate it with a 1-to-7 optical superchannel multicasting module and selecting/coupling components. Simulation results also show that, compared with several emerging spectrum defragmentation solutions (e.g., spectrum conversion, split spectrum), IM can reduce the blocking probability significantly without adding as much system complexity as split spectrum. On the other hand, service fairness for traffic with different granularities is investigated for the first time for these schemes, and it shows that IM performs better than spectrum conversion and almost as well as split spectrum, especially for smaller-size traffic under light traffic intensity.

  20. Lightweight causal and atomic group multicast

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Schiper, Andre; Stephenson, Pat

    1991-01-01

    The ISIS toolkit is a distributed programming environment based on support for virtually synchronous process groups and group communication. A suite of protocols is presented to support this model. The approach revolves around a multicast primitive, called CBCAST, which implements a fault-tolerant, causally ordered message delivery. This primitive can be used directly or extended into a totally ordered multicast primitive, called ABCAST. It normally delivers messages immediately upon reception, and imposes a space overhead proportional to the size of the groups to which the sender belongs, usually a small number. It is concluded that process groups and group communication can achieve performance and scaling comparable to that of a raw message transport layer. This finding contradicts the widespread concern that this style of distributed computing may be unacceptably costly.
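    The causal delivery condition that such a primitive enforces can be illustrated with vector clocks. The sketch below is a generic textbook-style rule, not the ISIS CBCAST implementation; process counts and messages are hypothetical.

    ```python
    # Generic illustration of causally ordered delivery with vector clocks: a
    # message from sender j stamped with vector clock vc is deliverable at a
    # process with local clock `local` iff it is the next message from j and
    # nothing it causally depends on is still missing.
    def deliverable(vc, sender, local):
        if vc[sender] != local[sender] + 1:
            return False
        return all(vc[k] <= local[k] for k in range(len(vc)) if k != sender)

    def deliver(vc, sender, local):
        local[sender] += 1   # record that sender's next message has been delivered

    local = [0, 0, 0]                    # this process has delivered nothing yet
    m1 = ([1, 0, 0], 0)                  # first message from process 0
    m2 = ([1, 1, 0], 1)                  # message from process 1 that depends on m1

    print(deliverable(*m2, local))       # False: m1 has not been delivered yet
    deliver(*m1, local)
    print(deliverable(*m2, local))       # True: causal predecessors are in place
    ```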

  1. Reliable multicast protocol specifications protocol operations

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd; Whetten, Brian

    1995-01-01

    This appendix contains the complete state tables for Reliable Multicast Protocol (RMP) Normal Operation, Multi-RPC Extensions, Membership Change Extensions, and Reformation Extensions. First the event types are presented. Afterwards, each RMP operation state, normal and extended, is presented individually and its events shown. Events in the RMP specification are one of several things: (1) arriving packets, (2) expired alarms, (3) user events, (4) exceptional conditions.

  2. Tera-node Network Technology (Task 3) Scalable Personal Telecommunications

    DTIC Science & Technology

    2000-03-14

    Simulation results of this work may be found in http://north.east.isi.edu/spt/audio.html. 6. Internet Research Task Force Reliable Multicast...Adaptation, 4. Multimedia Proxy Caching, 5. Experiments with the Rate Adaptation Protocol (RAP) 6. Providing leadership and innovation to the Internet ... Research Task Force (IRTF) Reliable Multicast Research Group (RMRG) 1. End-to-end Architecture for Quality-adaptive Streaming Applications over the

  3. Bulk data transfer distributer: a high performance multicast model in ALMA ACS

    NASA Astrophysics Data System (ADS)

    Cirami, R.; Di Marcantonio, P.; Chiozzi, G.; Jeram, B.

    2006-06-01

    A high performance multicast model for the bulk data transfer mechanism in the ALMA (Atacama Large Millimeter Array) Common Software (ACS) is presented. The ALMA astronomical interferometer will consist of at least 50 12-m antennas operating at millimeter wavelength. The whole software infrastructure for ALMA is based on ACS, which is a set of application frameworks built on top of CORBA. To cope with the very strong requirements for the amount of data that needs to be transported by the software communication channels of the ALMA subsystems (a typical output data rate expected from the Correlator is of the order of 64 MB per second) and with the potential CORBA bottleneck due to parameter marshalling/de-marshalling, usage of the IIOP protocol, etc., a transfer mechanism based on the ACE/TAO CORBA Audio/Video (A/V) Streaming Service has been developed. The ACS Bulk Data Transfer architecture bypasses the CORBA protocol with an out-of-band connection for the data streams (transmitting data directly in TCP or UDP format), using at the same time CORBA for handshaking and leveraging the benefits of ACS middleware. Such a mechanism has proven to be capable of high performance, of the order of 800 Mbit/s on a 1 Gbit Ethernet network. Besides a point-to-point communication model, the ACS Bulk Data Transfer provides a multicast model. Since the TCP protocol does not support multicasting and all the data must be correctly delivered to all ALMA subsystems, a distributer mechanism has been developed. This paper focuses on the ACS Bulk Data Distributer, which mimics multicast behaviour by managing data dispatching to all receivers willing to get data from the same sender.

  4. Flexible and scalable wavelength multicast of coherent optical OFDM with tolerance against pump phase-noise using reconfigurable coherent multi-carrier pumping.

    PubMed

    Lu, Guo-Wei; Bo, Tianwai; Sakamoto, Takahide; Yamamoto, Naokatsu; Chan, Calvin Chun-Kit

    2016-10-03

    Recently the ever-growing demand for dynamic and high-capacity services in optical networks has resulted in new challenges that require improved network agility and flexibility in order for network resources to become more "consumable" and dynamic, or elastic, in response to requests from higher network layers. Flexible and scalable wavelength conversion or multicast is one of the most important technologies needed for developing agility in the physical layer. This paper investigates how, using a reconfigurable coherent multi-carrier as a pump, the multicast scalability and the flexibility in wavelength allocation of the converted signals can be effectively improved. Moreover, the coherence of the multiple carriers prevents the transfer of phase noise from the local pump to the converted signals, which is imperative for phase-noise-sensitive multi-level single- or multi-carrier modulated signals. To verify the feasibility of the proposed scheme, we experimentally demonstrate the wavelength multicast of coherent optical orthogonal frequency division multiplexing (CO-OFDM) signals using a reconfigurable coherent multi-carrier pump, showing flexibility in wavelength allocation, scalability in multicast, and tolerance against pump phase noise. Less than 0.5 dB and 1.8 dB power penalties at a bit-error rate (BER) of 10⁻³ are obtained for the converted CO-OFDM-quadrature phase-shift keying (QPSK) and CO-OFDM-16-ary quadrature amplitude modulation (16QAM) signals, respectively, even when using a distributed feedback laser (DFB) as a pump source. In contrast, with a free-running pumping scheme, the phase noise from DFB pumps severely deteriorates the CO-OFDM signals, resulting in a visible error floor at a BER of 10⁻² in the converted CO-OFDM-16QAM signals.

  5. Overview of AMS (CCSDS Asynchronous Message Service)

    NASA Technical Reports Server (NTRS)

    Burleigh, Scott

    2006-01-01

    This viewgraph presentation gives an overview of the Consultative Committee for Space Data Systems (CCSDS) Asynchronous Message Service (AMS). The topics include: 1) Key Features; 2) A single AMS continuum; 3) The AMS Protocol Suite; 4) A multi-continuum venture; 5) Constraining transmissions; 6) Security; 7) Fault Tolerance; 8) Performance of Reference Implementation; 9) AMS vs Multicast (1); 10) AMS vs Multicast (2); 11) RAMS testing exercise; and 12) Results.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishihara, T

    Currently, the problem at hand is distributing identical copies of OEP and filter software to a large number of farm nodes. One of the common methods used to transfer this software is unicast. The unicast protocol faces the problem of repetitiously sending the same data over the network; since the sending rate is limited, this process becomes a bottleneck. Therefore, one possible solution to this problem lies in creating a reliable multicast protocol. A specific type of multicast protocol is the Bulk Multicast Protocol [4]. This system consists of one sender distributing data to many receivers. The sender delivers data packets at a given rate. In response, each receiver replies to the sender with a status packet which contains information about packet loss in terms of Negative Acknowledgments. The probability that a receiver sends a status packet back to the sender is 1/N, where N is the number of receivers, so the protocol is designed to produce approximately 1 status packet for each data packet sent. In this project, we were able to show that the time taken for the complete transfer of a file to multiple receivers was about 12 times faster with multicast than with unicast. The implementation of this experimental protocol shows remarkable improvement in mass data transfer to a large number of farm machines.
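    The status-packet rule can be sketched numerically: with N receivers each reporting with probability 1/N, about one status packet returns per data packet on average. The values below are hypothetical, and this is only an illustration of that rule, not the protocol implementation.

    ```python
    # Minimal sketch of the status-packet rule: each of N receivers replies with
    # probability 1/N, giving roughly one status packet per data packet.
    import random

    random.seed(0)
    N = 50                                  # number of receivers
    DATA_PACKETS = 1000

    status_packets = 0
    for _ in range(DATA_PACKETS):
        for _receiver in range(N):
            if random.random() < 1.0 / N:   # receiver decides to send its status
                status_packets += 1

    print(status_packets / DATA_PACKETS)    # close to 1 status packet per data packet
    ```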

  7. Robust Group Sparse Beamforming for Multicast Green Cloud-RAN With Imperfect CSI

    NASA Astrophysics Data System (ADS)

    Shi, Yuanming; Zhang, Jun; Letaief, Khaled B.

    2015-09-01

    In this paper, we investigate the network power minimization problem for the multicast cloud radio access network (Cloud-RAN) with imperfect channel state information (CSI). The key observation is that network power minimization can be achieved by adaptively selecting active remote radio heads (RRHs) via controlling the group-sparsity structure of the beamforming vector. However, this yields a non-convex combinatorial optimization problem, for which we propose a three-stage robust group sparse beamforming algorithm. In the first stage, a quadratic variational formulation of the weighted mixed l1/l2-norm is proposed to induce the group-sparsity structure in the aggregated beamforming vector, which indicates those RRHs that can be switched off. A perturbed alternating optimization algorithm is then proposed to solve the resultant non-convex group-sparsity inducing optimization problem by exploiting its convex substructures. In the second stage, we propose a PhaseLift technique based algorithm to solve the feasibility problem with a given active RRH set, which helps determine the active RRHs. Finally, the semidefinite relaxation (SDR) technique is adopted to determine the robust multicast beamformers. Simulation results will demonstrate the convergence of the perturbed alternating optimization algorithm, as well as, the effectiveness of the proposed algorithm to minimize the network power consumption for multicast Cloud-RAN.
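    The group-sparsity-inducing weighted mixed l1/l2 norm used in the first stage can be illustrated with a small numpy computation. The beamforming coefficients and weights below are hypothetical example values, and this sketch is not the full three-stage algorithm.

    ```python
    # Minimal numpy sketch of the weighted mixed l1/l2 norm: beamforming
    # coefficients are grouped per RRH, and the norm sums weighted l2 norms of
    # the groups; a zero group marks an RRH that can be switched off.
    import numpy as np

    # aggregated beamformer: one row of coefficients per RRH (group)
    V = np.array([[0.9 + 0.1j, -0.4 + 0.2j],     # RRH 1
                  [0.0 + 0.0j,  0.0 + 0.0j],     # RRH 2 (inactive -> zero group)
                  [0.3 - 0.5j,  0.1 + 0.7j]])    # RRH 3
    weights = np.array([1.0, 2.5, 1.2])          # e.g. related to per-RRH power cost

    group_norms = np.linalg.norm(V, axis=1)      # l2 norm of each RRH's group
    mixed_norm = float(np.sum(weights * group_norms))

    print(group_norms)
    print(mixed_norm)
    ```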

  8. Hybrid digital-analog video transmission in wireless multicast and multiple-input multiple-output system

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Lin, Xiaocheng; Fan, Nianfei; Zhang, Lin

    2016-01-01

    Wireless video multicast has become one of the key technologies in wireless applications. But the main challenge of conventional wireless video multicast, i.e., the cliff effect, remains unsolved. To overcome the cliff effect, a hybrid digital-analog (HDA) video transmission framework based on SoftCast, which transmits the digital bitstream with the quantization residuals, is proposed. With an effective power allocation algorithm and appropriate parameter settings, the residual gains can be maximized; meanwhile, the digital bitstream can assure transmission of a basic video to the multicast receiver group. In the multiple-input multiple-output (MIMO) system, since nonuniform noise interference on different antennas can be regarded as the cliff effect problem, ParCast, which is a variation of SoftCast, is also applied to video transmission to solve it. The HDA scheme with corresponding power allocation algorithms is also applied to improve video performance. Simulations show that the proposed HDA scheme can overcome the cliff effect completely with the transmission of residuals. What is more, it outperforms the compared WSVC scheme by more than 2 dB when transmitting under the same bandwidth, and it can further improve performance by nearly 8 dB in MIMO when compared with the ParCast scheme.

  9. Research on performance of three-layer MG-OXC system based on MLAG and OCDM

    NASA Astrophysics Data System (ADS)

    Wang, Yubao; Ren, Yanfei; Meng, Ying; Bai, Jian

    2017-10-01

    At present, as the traffic volume conveyed by optical transport networks and the variety of traffic grooming methods increase rapidly, optical switching techniques face a series of issues, such as greater demand for wavelengths and complicated structure management and implementation. This work introduces optical code switching on top of wavelength switching, constructs a three-layer multi-granularity optical cross-connect (MG-OXC) system based on optical code division multiplexing (OCDM), and presents a new traffic grooming algorithm. The proposed architecture can improve the flexibility of traffic grooming, reduce the number of wavelengths used, and save consumed ports; hence it can simplify the routing devices and enhance the performance of the system significantly. By analyzing the network model of the switching structure on a multicast layered auxiliary graph (MLAG) and the establishment of traffic grooming links, and by simulating blocking probability and throughput, this paper shows the excellent performance of the proposed architecture.

  10. Multicast Parametric Synchronous Sampling

    DTIC Science & Technology

    2011-09-01

    enhancement in a parametric mixer device. Fig. 4 shows the principle of generating uniform, high-quality replicas extending over previously unattainable...critical part of the MPASS architecture and is responsible for the direct and continuous acquisition of data across all of the multicast signal copies...(ii) ability to copy THz signals with impunity to tens of replicas; (iii) all-optical delays > 1.9 us; (iv) 10s of THz-fast all-optical sampling of

  11. Fault recovery in the reliable multicast protocol

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.; Whetten, Brian

    1995-01-01

    The Reliable Multicast Protocol (RMP) provides a unique, group-based model for distributed programs that need to handle reconfiguration events at the application layer. This model, called membership views, provides an abstraction in which events such as site failures, network partitions, and normal join-leave events are viewed as group reformations. RMP provides access to this model through an application programming interface (API) that notifies an application when a group is reformed as the result of some event. RMP provides applications with reliable delivery of messages using an underlying IP Multicast (12, 5) medium to other group members in a distributed environment even in the case of reformations. A distributed application can use various Quality of Service (QoS) levels provided by RMP to tolerate group reformations. This paper explores the implementation details of the mechanisms in RMP that provide distributed applications with membership view information and fault recovery capabilities.

  12. Specification and Design of a Fault Recovery Model for the Reliable Multicast Protocol

    NASA Technical Reports Server (NTRS)

    Montgomery, Todd; Callahan, John R.; Whetten, Brian

    1996-01-01

    The Reliable Multicast Protocol (RMP) provides a unique, group-based model for distributed programs that need to handle reconfiguration events at the application layer. This model, called membership views, provides an abstraction in which events such as site failures, network partitions, and normal join-leave events are viewed as group reformations. RMP provides access to this model through an application programming interface (API) that notifies an application when a group is reformed as the result of some event. RMP provides applications with reliable delivery of messages using an underlying IP Multicast medium to other group members in a distributed environment even in the case of reformations. A distributed application can use various Quality of Service (QoS) levels provided by RMP to tolerate group reformations. This paper explores the implementation details of the mechanisms in RMP that provide distributed applications with membership view information and fault recovery capabilities.

  13. On-board B-ISDN fast packet switching architectures. Phase 2: Development. Proof-of-concept architecture definition report

    NASA Technical Reports Server (NTRS)

    Shyy, Dong-Jye; Redman, Wayne

    1993-01-01

    For the next-generation packet switched communications satellite system with onboard processing and spot-beam operation, a reliable onboard fast packet switch is essential to route packets from different uplink beams to different downlink beams. The rapid emergence of point-to-point services such as video distribution, and the large demand for video conferencing, distributed data processing, and network management, make the multicast function essential to a fast packet switch (FPS). The satellite's inherent broadcast features give the satellite network an advantage over the terrestrial network in providing multicast services. This report evaluates alternate multicast FPS architectures for onboard baseband switching applications and selects a candidate for subsequent breadboard development. Architecture evaluation and selection will be based on the study performed in Phase 1, 'Onboard B-ISDN Fast Packet Switching Architectures', and on other switch architectures which have become commercially available as large scale integration (LSI) devices.

  14. Agile multicasting based on cascaded χ(2) nonlinearities in a step-chirped periodically poled lithium niobate.

    PubMed

    Ahlawat, Meenu; Bostani, Ameneh; Tehranchi, Amirhossein; Kashyap, Raman

    2013-08-01

    We experimentally demonstrate the possibility of agile multicasting for wavelength division multiplexing (WDM) networks, from a single channel to two and seven channels over the C band, which is also extendable to the S and L bands. This is based on cascaded χ(2) nonlinear mixing processes, namely, second-harmonic generation (SHG)-sum-frequency generation (SFG) and difference-frequency generation (DFG), in a 20-mm-long step-chirped periodically poled lithium niobate crystal, specially designed and fabricated for a 28-nm-wide SH-SF bandwidth centered at around 1.55 μm. The multiple idlers are simultaneously tuned by detuning the pump wavelengths within the broad SH-SF bandwidth. By selectively tuning the pump wavelengths over less than 10 and 6 nm, respectively, multicasting into two and seven idlers is successfully achieved across ~70 WDM channels within the 50 GHz International Telecommunication Union grid spacing.

  15. Near-common-path interferometer for imaging Fourier-transform spectroscopy in wide-field microscopy

    PubMed Central

    Wadduwage, Dushan N.; Singh, Vijay Raj; Choi, Heejin; Yaqoob, Zahid; Heemskerk, Hans; Matsudaira, Paul; So, Peter T. C.

    2017-01-01

    Imaging Fourier-transform spectroscopy (IFTS) is a powerful method for biological hyperspectral analysis based on various imaging modalities, such as fluorescence or Raman. Since the measurements are taken in the Fourier space of the spectrum, it can also take advantage of compressed sensing strategies. IFTS has been readily implemented in high-throughput, high-content microscope systems based on wide-field imaging modalities. However, there are limitations in existing wide-field IFTS designs. Non-common-path approaches are less phase-stable. Alternatively, designs based on the common-path Sagnac interferometer are stable, but incompatible with high-throughput imaging. They require exhaustive sequential scanning over large interferometric path delays, making compressive strategic data acquisition impossible. In this paper, we present a novel phase-stable, near-common-path interferometer enabling high-throughput hyperspectral imaging based on strategic data acquisition. Our results suggest that this approach can improve throughput over those of many other wide-field spectral techniques by more than an order of magnitude without compromising phase stability. PMID:29392168

  16. AF-TRUST, Air Force Team for Research in Ubiquitous Secure Technology

    DTIC Science & Technology

    2010-07-26

    Charles Sutton, J. D. Tygar, and Kai Xia. Book chapter in Jeffrey J. P. Tsai and Philip S. Yu (eds.) Machine Learning in Cyber Trust: Security, Privacy...enterprise, tactical, embedded systems and command and control levels. From these studies, commissioned by Dr. Sekar Chandersekaran of the Secretary of the...Data centers avoid IP Multicast because of a series of problems with the technology. • Dr. Multicast (the MCMD), a system that maps traditional IPMC

  17. Secure Hierarchical Multicast Routing and Multicast Internet Anonymity

    DTIC Science & Technology

    1998-06-01

    Multimedia, Summer 94, pages 76-79, 94. [15] David Chaum. Blind signatures for untraceable payments. In Proc. Crypto '82, pages 199-203, 1982. [16] David L...use of digital signatures, which consist of a cryptographic hash of the message encrypted with the private key of the signer. Digitally-signed messages... signature on the request and on the certificate it contains. Notice that the location service need not retrieve the initiator's public key as it is contained

  18. Remote software upload techniques in future vehicles and their performance analysis

    NASA Astrophysics Data System (ADS)

    Hossain, Irina

    Updating software in vehicle Electronic Control Units (ECUs) will become a mandatory requirement for a variety of reasons, for example, to update or fix the functionality of an existing system, add new functionality, remove software bugs, and keep up with ITS infrastructure. Software modules of advanced vehicles can be updated using the Remote Software Upload (RSU) technique. RSU employs an infrastructure-based wireless communication technique in which the software supplier sends the software to the targeted vehicle via a roadside Base Station (BS). However, security is critically important in RSU to avoid any disasters due to malfunctions of the vehicle and to protect proprietary algorithms from hackers, competitors, or people with malicious intent. In this thesis, a mechanism for secure software upload in advanced vehicles is presented which employs mutual authentication of the software provider and the vehicle using a pre-shared authentication key before sending the software. The software packets are sent encrypted with a secret key along with a Message Digest (MD). To increase the security level, it is proposed that the vehicle receive more than one copy of the software, each accompanied by its MD. The vehicle installs the new software only when it receives more than one identical copy of the software. To validate the proposition, analytical expressions for the average number of packet transmissions required for a successful software update are derived. Different cases are investigated depending on the vehicle's buffer size and verification method. The analytical and simulation results show that it is sufficient to send two copies of the software to the vehicle to thwart any security attack while uploading the software. The above-mentioned unicast method for RSU is suitable when software needs to be uploaded to a single vehicle. Since multicasting is the most efficient method of group communication, updating software in an ECU of a large number of vehicles could benefit from it. However, as with unicast RSU, the security requirements of multicast communication, i.e., authenticity, confidentiality and integrity of the transmitted software and access control of the group members, are challenging. In this thesis, an infrastructure-based mobile multicasting scheme for RSU in vehicle ECUs is proposed in which an ECU receives the software from a remote software distribution center using the roadside BSs as gateways. The Vehicular Software Distribution Network (VSDN) is divided into small regions, each administered by a Regional Group Manager (RGM). Two multicast Group Key Management (GKM) techniques are proposed based on the degree of trust placed in the BSs, named the Fully-trusted (FT) and Semi-trusted (ST) systems. Analytical models are developed to find the multicast session establishment latency and handover latency for these two protocols. The average latency to perform mutual authentication of the software vendor and a vehicle and to send the multicast session key during multicast session initialization, as well as the handoff latency during a multicast session, is calculated. Analytical and simulation results show that the link establishment latency per vehicle of the proposed schemes is in the range of a few seconds, with the ST system requiring a few milliseconds more than the FT system. The handoff latency is also in the range of a few seconds, and in some cases the ST system requires less handoff time than the FT system.
Thus, it is possible to build an efficient GKM protocol without placing too much trust in the BSs.
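
    The multi-copy check described above can be illustrated with a short sketch: install only if at least two received copies carry matching, valid digests. The choice of SHA-256/HMAC below is illustrative, not a primitive mandated by the thesis, and the function names are hypothetical.

        # Sketch of the multi-copy verification idea: install only if at least two
        # received copies carry matching, valid message digests. SHA-256/HMAC are
        # illustrative choices; the thesis does not mandate specific primitives here.
        import hashlib
        import hmac

        def digest(software: bytes, key: bytes) -> bytes:
            """Keyed message digest (MD) over the software image."""
            return hmac.new(key, software, hashlib.sha256).digest()

        def verify_and_install(copies, key: bytes):
            """copies: list of (software_bytes, received_md) tuples received via the BS."""
            valid = [sw for sw, md in copies if hmac.compare_digest(digest(sw, key), md)]
            # Require at least two identical valid copies before installing.
            for image in valid:
                if valid.count(image) >= 2:
                    return image         # image accepted for installation
            return None                  # reject: tampering or corruption suspected

        if __name__ == "__main__":
            key = b"pre-shared-session-key"
            image = b"ECU firmware v2.1"
            ok = verify_and_install([(image, digest(image, key)),
                                     (image, digest(image, key))], key)
            print("installed" if ok else "rejected")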

  19. Multipoint to multipoint routing and wavelength assignment in multi-domain optical networks

    NASA Astrophysics Data System (ADS)

    Qin, Panke; Wu, Jingru; Li, Xudong; Tang, Yongli

    2018-01-01

    In multi-point to multi-point (MP2MP) routing and wavelength assignment (RWA) problems, researchers usually assume the optical network to be a single domain. In practice, however, optical networks are developing toward multi-domain and larger-scale deployments. In this context, multi-core shared tree (MST)-based MP2MP RWA introduces new problems, including selection of the optimal multicast domain sequence and the choice of domains in which the core nodes reside. In this letter, we focus on MST-based MP2MP RWA problems in multi-domain optical networks; mixed integer linear programming (MILP) formulations to optimally construct MP2MP multicast trees are presented. A heuristic algorithm based on network virtualization and a weighted clustering algorithm (NV-WCA) is proposed. Simulation results show that, under different traffic patterns, the proposed algorithm achieves significant improvements in network resource occupation and multicast tree setup latency compared with conventional algorithms that were proposed for a single-domain network environment.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ennis, G.; Lala, T.K.

    This document presents the results of a study undertaken by First Pacific Networks as part of EPRI Project RP-3567-01 regarding the support of broadcast services within the EPRI Utility Communications Architecture (UCA) protocols and the use of such services by UCA applications. This report has focused on the requirements and architectural implications of broadcast within UCA. A subsequent phase of this project is to develop specific recommendations for extending UCA so as to support broadcast. The conclusions of this report are presented in Section 5. The authors summarize the major conclusions as follows: broadcast and multicast support would be very useful within UCA, not only for utility-specific applications but also simply to support the network engineering of a large-scale communications system; in this regard, UCA is no different from other large network systems which have found broadcast and multicast to be of substantial benefit for a variety of system management purposes; the primary architectural impact of broadcast and multicast falls on the UCA network level (which would need to be enhanced) and the UCA application level (which would be the user of broadcast); there is a useful subset of MMS services which could take advantage of broadcast; the UCA network level would need to be enhanced both in the areas of addressing and routing so as to properly support broadcast. A subsequent analysis will be required to define the specific enhancements to UCA required to support broadcast and multicast.

  1. Rapid high-throughput cloning and stable expression of antibodies in HEK293 cells.

    PubMed

    Spidel, Jared L; Vaessen, Benjamin; Chan, Yin Yin; Grasso, Luigi; Kline, J Bradford

    2016-12-01

    Single-cell based amplification of immunoglobulin variable regions is a rapid and powerful technique for cloning antigen-specific monoclonal antibodies (mAbs) for purposes ranging from general laboratory reagents to therapeutic drugs. From the initial screening process involving small quantities of hundreds or thousands of mAbs through in vitro characterization and subsequent in vivo experiments requiring large quantities of only a few, having a robust system for generating mAbs from cloning through stable cell line generation is essential. A protocol was developed to decrease the time, cost, and effort required by traditional cloning and expression methods by eliminating bottlenecks in these processes. Removing the clonal selection steps from the cloning process using a highly efficient ligation-independent protocol and from the stable cell line process by utilizing bicistronic plasmids to generate stable semi-clonal cell pools facilitated an increased throughput of the entire process from plasmid assembly through transient transfections and selection of stable semi-clonal cell pools. Furthermore, the time required by a single individual to clone, express, and select stable cell pools in a high-throughput format was reduced from 4 to 6 months to only 4 to 6 weeks. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  2. Programming with process groups: Group and multicast semantics

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Cooper, Robert; Gleeson, Barry

    1991-01-01

    Process groups are a natural tool for distributed programming and are increasingly important in distributed computing environments. Discussed here is a new architecture that arose from an effort to simplify Isis process group semantics. The findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. A system based on this architecture is now being implemented in collaboration with the Chorus and Mach projects.

  3. Multicasting for all-optical multifiber networks

    NASA Astrophysics Data System (ADS)

    Köksal, Fatih; Ersoy, Cem

    2007-02-01

    All-optical wavelength-routed WDM WANs can support the high bandwidth and the long session duration requirements of application scenarios such as interactive distance learning or on-line diagnosis of patients simultaneously in different hospitals. However, multifiber links and the limited sparse light splitting and wavelength conversion capabilities of switches result in a difficult optimization problem. We attack this problem using a layered graph model. The problem is defined as a k-edge-disjoint degree-constrained Steiner tree problem for the routing and fiber and wavelength assignment of k multicasts. A mixed integer linear programming formulation for the problem is given, and a solution using CPLEX is provided. However, the complexity of the problem grows quickly with the number of edges in the layered graph, which depends on the number of nodes, fibers, wavelengths, and multicast sessions. Hence, we propose two heuristics [layered all-optical multicast algorithm (LAMA) and conservative fiber and wavelength assignment (C-FWA)] to compare with CPLEX, existing work, and unicasting. Extensive computational experiments show that LAMA's performance is very close to CPLEX, and it is significantly better than existing work and C-FWA for nearly all metrics, since LAMA jointly optimizes the routing and fiber-wavelength assignment phases, whereas the other candidates attack the problem by decomposing it into two phases. Experiments also show that important metrics (e.g., session and group blocking probability, transmitter wavelength, and fiber conversion resources) are adversely affected by the separation of the two phases. Finally, the fiber-wavelength assignment strategy of C-FWA (Ex-Fit) uses wavelength and fiber conversion resources more effectively than First Fit.

  4. Multicast Delayed Authentication For Streaming Synchrophasor Data in the Smart Grid

    PubMed Central

    Câmara, Sérgio; Anand, Dhananjay; Pillitteri, Victoria; Carmo, Luiz

    2017-01-01

    Multicast authentication of synchrophasor data is challenging due to the design requirements of Smart Grid monitoring systems, such as low security overhead, tolerance of lossy networks, time-criticality and high data rates. In this work, we propose inf-TESLA, Infinite Timed Efficient Stream Loss-tolerant Authentication, a multicast delayed authentication protocol for communication links used to stream synchrophasor data for wide-area control of electric power networks. Our approach is based on the authentication protocol TESLA but is augmented to accommodate high-frequency transmissions of unbounded length. The inf-TESLA protocol utilizes the Dual Offset Key Chains mechanism to reduce the authentication delay and the computational cost associated with key chain commitment. We provide a description of the mechanism using two different modes for disclosing keys and demonstrate its security against a man-in-the-middle attack attempt. We compare our approach against the TESLA protocol in a 2-day simulation scenario, showing reductions of 15.82% and 47.29% in computational cost for the sender and receiver, respectively, and a cumulative reduction in the communication overhead. PMID:28736582
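
    TESLA-style protocols of the kind extended above rest on a one-way hash key chain: the sender commits to the chain head, authenticates each interval with a not-yet-disclosed key, and discloses keys later. The sketch below shows only that generic building block; it is not the Dual Offset Key Chains mechanism of inf-TESLA.

        # Generic one-way hash key chain, the building block of TESLA-style delayed
        # authentication. This illustrates commitment and delayed key disclosure only;
        # it is not the Dual Offset Key Chains mechanism of inf-TESLA.
        import hashlib
        import hmac
        import os

        def make_key_chain(length: int, seed: bytes = None):
            """K_n is random; K_i = H(K_{i+1}). Keys are disclosed later in order K_1..K_n."""
            k = seed or os.urandom(32)
            chain = [k]
            for _ in range(length):
                k = hashlib.sha256(k).digest()
                chain.append(k)
            chain.reverse()     # chain[0] is the commitment K_0, chain[-1] the last key
            return chain

        def mac(key: bytes, message: bytes) -> bytes:
            return hmac.new(key, message, hashlib.sha256).digest()

        def verify_disclosed_key(commitment: bytes, disclosed: bytes, steps: int) -> bool:
            """Receiver checks that hashing the disclosed key 'steps' times yields the commitment."""
            k = disclosed
            for _ in range(steps):
                k = hashlib.sha256(k).digest()
            return hmac.compare_digest(k, commitment)

        if __name__ == "__main__":
            chain = make_key_chain(100)
            commitment = chain[0]                  # sent to receivers at session start
            msg, interval = b"synchrophasor frame", 5
            tag = mac(chain[interval], msg)        # MAC with the interval-5 key
            # Later, the sender discloses chain[interval]; receivers authenticate it first.
            assert verify_disclosed_key(commitment, chain[interval], interval)
            assert hmac.compare_digest(tag, mac(chain[interval], msg))
            print("message authenticated after delayed key disclosure")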

  5. Multicast Delayed Authentication For Streaming Synchrophasor Data in the Smart Grid.

    PubMed

    Câmara, Sérgio; Anand, Dhananjay; Pillitteri, Victoria; Carmo, Luiz

    2016-01-01

    Multicast authentication of synchrophasor data is challenging due to the design requirements of Smart Grid monitoring systems, such as low security overhead, tolerance of lossy networks, time-criticality and high data rates. In this work, we propose inf-TESLA, Infinite Timed Efficient Stream Loss-tolerant Authentication, a multicast delayed authentication protocol for communication links used to stream synchrophasor data for wide-area control of electric power networks. Our approach is based on the authentication protocol TESLA but is augmented to accommodate high-frequency transmissions of unbounded length. The inf-TESLA protocol utilizes the Dual Offset Key Chains mechanism to reduce the authentication delay and the computational cost associated with key chain commitment. We provide a description of the mechanism using two different modes for disclosing keys and demonstrate its security against a man-in-the-middle attack attempt. We compare our approach against the TESLA protocol in a 2-day simulation scenario, showing reductions of 15.82% and 47.29% in computational cost for the sender and receiver, respectively, and a cumulative reduction in the communication overhead.

  6. On compensation of four wave mixing effect in dispersion managed hybrid WDM-OTDM multicast overlay system with optical phase conjugation modules

    NASA Astrophysics Data System (ADS)

    Singh, Sukhbir; Singh, Surinder

    2017-11-01

    This paper investigates the effect of FWM and its suppression using optical phase conjugation (OPC) modules in a dispersion-managed hybrid WDM-OTDM multicast overlay system. Interaction between propagating wavelength signals at higher power levels generates new FWM components that can significantly limit system performance. The OPC module consists of a pump signal and 0.6 km of HNLF implemented midway along the optical link to generate FWM components of destructive phase. The investigation revealed that the use of an OPC module in the optical link reduces the FWM power and mitigates the interaction between wavelength signals over varying signal input power, dispersion parameter (β2) and transmission distance. A system performance comparison is also made among the cases without the DM-OPC module, with DM only, and with the DM-OPC module, with respect to FWM tolerance. The BER performance of the hybrid WDM-OTDM multicast system using the OPC module is improved by a factor of 2 relative to the dispersion-managed case, and the coverage distance is increased by a factor of 2 compared with Singh and Singh (2016).

  7. Experimental Evaluation of Unicast and Multicast CoAP Group Communication

    PubMed Central

    Ishaq, Isam; Hoebeke, Jeroen; Moerman, Ingrid; Demeester, Piet

    2016-01-01

    The Internet of Things (IoT) is expanding rapidly to new domains in which embedded devices play a key role and gradually outnumber traditionally-connected devices. These devices are often constrained in their resources and are thus unable to run standard Internet protocols. The Constrained Application Protocol (CoAP) is a new alternative standard protocol that implements the same principles as the Hypertext Transfer Protocol (HTTP), but is tailored towards constrained devices. In many IoT application domains, devices need to be addressed in groups in addition to being addressable individually. Two main approaches are currently being proposed in the IoT community for CoAP-based group communication. The main difference between the two approaches lies in the underlying communication type: multicast versus unicast. In this article, we experimentally evaluate those two approaches using two wireless sensor testbeds and under different test conditions. We highlight the pros and cons of each of them and propose combining these approaches in a hybrid solution to better suit certain use case requirements. Additionally, we provide a solution for multicast-based group membership management using CoAP. PMID:27455262
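
    At the transport level, the contrast between the two approaches is one datagram to a multicast group address versus one unicast datagram per group member. The sketch below shows this at the UDP socket level only; it does not implement CoAP message framing, the payload is a placeholder, and the member addresses are example (documentation) addresses. The group address 224.0.1.187 and port 5683 are the registered "All CoAP Nodes" IPv4 address and default CoAP port.

        # Contrast of the two group-communication styles at the transport level:
        # one multicast datagram to the group address vs. one unicast datagram per member.
        # This sketch does not implement CoAP framing; the payload is a placeholder.
        import socket

        GROUP_ADDR, COAP_PORT = "224.0.1.187", 5683
        MEMBERS = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]   # example addresses only
        REQUEST = b"<placeholder CoAP GET /sensors/temp>"

        def send_multicast(payload: bytes) -> int:
            """One transmission reaches every member that joined the multicast group."""
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay local
            s.sendto(payload, (GROUP_ADDR, COAP_PORT))
            s.close()
            return 1            # datagrams sent, independent of group size

        def send_unicast(payload: bytes) -> int:
            """One transmission per member; cost grows linearly with group size."""
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            for member in MEMBERS:
                s.sendto(payload, (member, COAP_PORT))
            s.close()
            return len(MEMBERS)

        if __name__ == "__main__":
            print("multicast datagrams:", send_multicast(REQUEST))
            print("unicast datagrams:  ", send_unicast(REQUEST))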

  8. An extractive removal step optimized for a high-throughput α-cellulose extraction method for δ13C and δ18O stable isotope ratio analysis in conifer tree rings

    Treesearch

    Wen Lin; Asko Noormets; John S. King; Ge Sun; Steve McNulty; Jean-Christophe Domec; Lucas Cernusak

    2017-01-01

    Stable isotope ratios (δ13C and δ18O) of tree-ring α-cellulose are important tools in paleoclimatology, ecology, plant physiology and genetics. The Multiple Sample Isolation System for Solids (MSISS) was a major advance in the tree-ring α-cellulose extraction methods, offering greater throughput and reduced labor input compared to traditional alternatives. However, the...

  9. Multicast for savings in cache-based video distribution

    NASA Astrophysics Data System (ADS)

    Griwodz, Carsten; Zink, Michael; Liepert, Michael; On, Giwon; Steinmetz, Ralf

    1999-12-01

    Internet video-on-demand (VoD) today streams videos directly from server to clients, because re-distribution infrastructure is not yet established. Intranet solutions exist but are typically managed centrally. Caching may remove this need for central management; however, existing web caching strategies are not applicable because they operate under different conditions. We propose movie distribution by means of caching and study its feasibility from the service provider's point of view. We introduce the combination of our reliable multicast protocol LCRTP for caching hierarchies with our enhancement of the patching technique for bandwidth-friendly True VoD, which does not depend on network resource guarantees.

  10. Hybrid ARQ Scheme with Autonomous Retransmission for Multicasting in Wireless Sensor Networks.

    PubMed

    Jung, Young-Ho; Choi, Jihoon

    2017-02-25

    A new hybrid automatic repeat request (HARQ) scheme for multicast service in wireless sensor networks is proposed in this study. In the proposed algorithm, the HARQ operation is combined with an autonomous retransmission method that ensures a data packet is transmitted irrespective of whether or not the packet is successfully decoded at the receivers. The optimal number of autonomous retransmissions is determined to ensure maximum spectral efficiency, and a practical method that adjusts the number of autonomous retransmissions for realistic conditions is developed. Simulation results show that the proposed method achieves higher spectral efficiency than existing HARQ techniques.
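
    The trade-off behind choosing the number of autonomous retransmissions can be illustrated with a deliberately simplified model (not the paper's analysis): with R receivers, each decoding any single transmission independently with probability p, pick the n that maximizes the probability that all receivers decode, divided by the n channel uses spent.

        # Illustrative (not the paper's) model for choosing the number of autonomous
        # retransmissions n: R receivers, each decodes a single transmission
        # independently with probability p; efficiency ~ P(all decode within n) / n.
        def success_probability(n: int, p: float, receivers: int) -> float:
            per_receiver = 1.0 - (1.0 - p) ** n        # decodes at least one of n copies
            return per_receiver ** receivers            # all receivers succeed

        def best_autonomous_count(p: float, receivers: int, n_max: int = 10) -> int:
            def efficiency(n: int) -> float:
                return success_probability(n, p, receivers) / n
            return max(range(1, n_max + 1), key=efficiency)

        if __name__ == "__main__":
            for p in (0.5, 0.7, 0.9):
                n = best_autonomous_count(p, receivers=20)
                print(f"p={p:.1f}: send each packet {n} time(s) autonomously "
                      f"(success prob {success_probability(n, p, 20):.3f})")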

  11. A Loss Tolerant Rate Controller for Reliable Multicast

    NASA Technical Reports Server (NTRS)

    Montgomery, Todd

    1997-01-01

    This paper describes the design, specification, and performance of a Loss Tolerant Rate Controller (LTRC) for use in controlling reliable multicast senders. The purpose of this rate controller is not to adapt to congestion (or loss) on a per loss report basis (such as per received negative acknowledgment), but instead to use loss report information and perceived state to decide more prudent courses of action for both the short and long term. The goal of this controller is to be responsive to congestion, but not overly reactive to spurious independent loss. Performance of the controller is verified through simulation results.

  12. Hybrid monitoring scheme for end-to-end performance enhancement of multicast-based real-time media

    NASA Astrophysics Data System (ADS)

    Park, Ju-Won; Kim, JongWon

    2004-10-01

    As real-time media applications based on IP multicast networks spread widely, end-to-end QoS (quality of service) provisioning for these applications has become very important. To guarantee the end-to-end QoS of multi-party media applications, it is essential to monitor the time-varying status of both network metrics (i.e., delay, jitter and loss) and system metrics (i.e., CPU and memory utilization). In this paper, targeting the multicast-enabled AG (Access Grid), a next-generation group collaboration tool based on multi-party media services, the applicability of a hybrid monitoring scheme that combines active and passive monitoring is investigated. The active monitoring measures network-layer metrics (i.e., network condition) with probe packets, while the passive monitoring checks both application-layer metrics (i.e., user traffic condition, by analyzing RTCP packets) and system metrics. By comparing these hybrid results, we attempt to pinpoint the causes of performance degradation and explore corresponding reactions to improve the end-to-end performance. The experimental results show that the proposed hybrid monitoring can provide useful information to coordinate the performance improvement of multi-party real-time media applications.

  13. A heuristic for efficient data distribution management in distributed simulation

    NASA Astrophysics Data System (ADS)

    Gupta, Pankaj; Guha, Ratan K.

    2005-05-01

    In this paper, we propose an algorithm for reducing the complexity of region matching and for efficient multicasting in the data distribution management component of the High Level Architecture (HLA) Run Time Infrastructure (RTI). Current data distribution management (DDM) techniques rely on computing the intersection between subscription and update regions. When a subscription region and an update region of different federates overlap, the RTI establishes communication between the publisher and the subscriber and subsequently routes the updates from the publisher to the subscriber. The proposed algorithm computes the update/subscription region matching for dynamic allocation of multicast groups. It provides new multicast routines that exploit the connectivity of the federation by communicating updates regarding interactions and routing information only to those federates that require them. The region-matching problem in DDM reduces to the clique-covering problem under a connection-graph abstraction in which the federates represent the vertices and the update/subscribe relations represent the edges. We develop an abstract model based on the connection graph for data distribution management. Using this abstract model, we propose a heuristic for solving the region-matching problem of DDM. We also provide a complexity analysis of the proposed heuristics.
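
    The region-matching step that underlies this work can be illustrated with axis-aligned rectangles: a publisher's updates must reach every federate whose subscription region intersects its update region. The sketch below only builds that overlap relation; the clique-cover grouping discussed in the paper is NP-hard and is the part a heuristic would then address. The function names and example regions are illustrative.

        # Sketch of the DDM region-matching step: detect overlaps between update and
        # subscription regions (axis-aligned rectangles) and derive publisher->subscriber
        # routing. The clique-cover grouping the paper discusses is NP-hard; this sketch
        # only builds the overlap relation that such a heuristic would start from.
        from typing import Dict, Tuple

        Rect = Tuple[float, float, float, float]   # (x_min, y_min, x_max, y_max)

        def overlaps(a: Rect, b: Rect) -> bool:
            ax0, ay0, ax1, ay1 = a
            bx0, by0, bx1, by1 = b
            return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

        def match_regions(updates: Dict[str, Rect], subscriptions: Dict[str, Rect]):
            """Return {publisher: subscribers whose regions intersect its update region}."""
            routing = {}
            for pub, u in updates.items():
                routing[pub] = {sub for sub, s in subscriptions.items() if overlaps(u, s)}
            return routing

        if __name__ == "__main__":
            updates = {"F1": (0, 0, 4, 4), "F2": (5, 5, 9, 9)}
            subscriptions = {"F3": (3, 3, 6, 6), "F4": (8, 0, 9, 2)}
            print(match_regions(updates, subscriptions))
            # F1 -> {F3}, F2 -> {F3}; F4 subscribes to a region no one updates.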

  14. Efficient Network Coding-Based Loss Recovery for Reliable Multicast in Wireless Networks

    NASA Astrophysics Data System (ADS)

    Chi, Kaikai; Jiang, Xiaohong; Ye, Baoliu; Horiguchi, Susumu

    Recently, network coding has been applied to the loss recovery of reliable multicast in wireless networks [19], where multiple lost packets are XOR-ed together as one packet and forwarded via a single retransmission, resulting in a significant reduction in bandwidth consumption. In this paper, we first prove that maximizing the number of lost packets for XOR-ing, which is the key part of the available network coding-based reliable multicast schemes, is actually a complex NP-complete problem. To address this limitation, we then propose an efficient heuristic algorithm for finding an approximately optimal solution to this optimization problem. Furthermore, we show that the packet coding principle of maximizing the number of lost packets for XOR-ing sometimes cannot fully exploit the potential coding opportunities, and we then propose new heuristic-based schemes with a new coding principle. Simulation results demonstrate that the heuristic-based schemes have very low computational complexity and can achieve almost the same transmission efficiency as the current coding-based high-complexity schemes. Furthermore, the heuristic-based schemes with the new coding principle not only have very low complexity, but also slightly outperform the current high-complexity ones.
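
    The basic coding opportunity can be shown with a small, simplified greedy sketch (my own illustration, not the authors' algorithm): lost packets may be XOR-ed into one retransmission as long as no receiver is missing more than one packet in the batch, so each receiver can recover its packet by XOR-ing with the packets it already holds.

        # Simplified greedy sketch (not the authors' algorithm) of XOR-based loss
        # recovery: combine lost packets so that each receiver misses at most one
        # packet per batch, allowing it to recover that packet from the XOR.
        def greedy_xor_batches(loss_map):
            """loss_map: {receiver: set of lost packet ids}. Returns a list of XOR batches."""
            pending = set().union(*loss_map.values()) if loss_map else set()
            batches = []
            while pending:
                batch = set()
                for pkt in sorted(pending):
                    # A packet may join if no receiver would then miss two packets in the batch.
                    if all(len((lost & batch) | ({pkt} & lost)) <= 1 for lost in loss_map.values()):
                        batch.add(pkt)
                batches.append(batch)
                pending -= batch
            return batches

        if __name__ == "__main__":
            losses = {"r1": {1, 4}, "r2": {2}, "r3": {3, 4}}
            print(greedy_xor_batches(losses))
            # [{1, 2, 3}, {4}] -> two retransmissions instead of four.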

  15. Fingerprint multicast in secure video streaming.

    PubMed

    Zhao, H Vicky; Liu, K J Ray

    2006-01-01

    Digital fingerprinting is an emerging technology to protect multimedia content from illegal redistribution, where each distributed copy is labeled with unique identification information. In video streaming, huge amounts of data have to be transmitted to a large number of users under stringent latency constraints, so the bandwidth-efficient distribution of uniquely fingerprinted copies is crucial. This paper investigates the secure multicast of anticollusion fingerprinted video in streaming applications and analyzes its performance. We first propose a general fingerprint multicast scheme that can be used with most spread spectrum embedding-based multimedia fingerprinting systems. To further improve the bandwidth efficiency, we explore the special structure of the fingerprint design and propose a joint fingerprint design and distribution scheme. From our simulations, the two proposed schemes can reduce the bandwidth requirement by 48% to 87%, depending on the number of users, the characteristics of the video sequences, and the network and computation constraints. We also show that, under the constraint that all colluders have the same probability of detection, the embedded fingerprints in the two schemes have approximately the same collusion resistance. Finally, we propose a fingerprint drift compensation scheme to improve the quality of the reconstructed sequences at the decoder's side without introducing extra communication overhead.

  16. Research on Collaborative Technology in Distributed Virtual Reality System

    NASA Astrophysics Data System (ADS)

    Lei, ZhenJiang; Huang, JiJie; Li, Zhao; Wang, Lei; Cui, JiSheng; Tang, Zhi

    2018-01-01

    Distributed virtual reality technology applied to joint training simulation requires CSCW (Computer Supported Cooperative Work) terminal multicast technology for display and HLA (High Level Architecture) technology to ensure the temporal and spatial consistency of the simulation, in order to achieve collaborative display and collaborative computing. In this paper, CSCW terminal multicast technology is used to modify and extend the implementation framework of HLA. During simulation initialization, the HLA declaration and object management service interfaces are used to establish and manage the CSCW network topology, and the HLA data filtering mechanism is used to establish a corresponding Mesh tree for each federal member. During simulation execution, a new thread and the CSCW real-time multicast interaction technology are added to the RTI, so that the RTI can also use the window message mechanism to notify the application to update the display. Application to immersive simulation training for substation operation under a large power grid shows that the collaborative technology presented here achieves a satisfactory training effect in distributed virtual reality simulation.

  17. IPTV multicast with peer-assisted lossy error control

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Zhu, Xiaoqing; Begen, Ali C.; Girod, Bernd

    2010-07-01

    Emerging IPTV technology uses source-specific IP multicast to deliver television programs to end-users. To provide reliable IPTV services over the error-prone DSL access networks, a combination of multicast forward error correction (FEC) and unicast retransmissions is employed to mitigate the impulse noises in DSL links. In existing systems, the retransmission function is provided by the Retransmission Servers sitting at the edge of the core network. In this work, we propose an alternative distributed solution where the burden of packet loss repair is partially shifted to the peer IP set-top boxes. Through the Peer-Assisted Repair (PAR) protocol, we demonstrate how the packet repairs can be delivered in a timely, reliable and decentralized manner using the combination of server-peer coordination and redundancy of repairs. We also show that this distributed protocol can be seamlessly integrated with an application-layer source-aware error protection mechanism called forward and retransmitted Systematic Lossy Error Protection (SLEP/SLEPr). Simulations show that this joint PAR-SLEP/SLEPr framework not only effectively mitigates the bottleneck experienced by the Retransmission Servers, thus greatly enhancing the scalability of the system, but also efficiently improves the resistance to the impulse noise.

  18. Verification and validation of a reliable multicast protocol

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.

    1995-01-01

    This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.

  19. Algorithm for protecting light-trees in survivable mesh wavelength-division-multiplexing networks

    NASA Astrophysics Data System (ADS)

    Luo, Hongbin; Li, Lemin; Yu, Hongfang

    2006-12-01

    Wavelength-division-multiplexing (WDM) technology is expected to facilitate bandwidth-intensive multicast applications such as high-definition television. A single fiber cut in a WDM mesh network, however, can disrupt the dissemination of information to several destinations on a light-tree based multicast session. Thus it is imperative to protect multicast sessions by reserving redundant resources. We propose a novel and efficient algorithm for protecting light-trees in survivable WDM mesh networks. The algorithm is called segment-based protection with sister node first (SSNF), whose basic idea is to protect a light-tree using a set of backup segments with a higher priority to protect the segments from a branch point to its children (sister nodes). The SSNF algorithm differs from the segment protection scheme proposed in the literature in how the segments are identified and protected. Our objective is to minimize the network resources used for protecting each primary light-tree such that the blocking probability can be minimized. To verify the effectiveness of the SSNF algorithm, we conduct extensive simulation experiments. The simulation results demonstrate that the SSNF algorithm outperforms existing algorithms for the same problem.

  20. A memetic optimization algorithm for multi-constrained multicast routing in ad hoc networks

    PubMed Central

    Hammad, Karim; El Bakly, Ahmed M.

    2018-01-01

    A mobile ad hoc network is a conventional self-configuring network where the routing optimization problem—subject to various Quality-of-Service (QoS) constraints—represents a major challenge. Unlike previously proposed solutions, in this paper, we propose a memetic algorithm (MA) employing an adaptive mutation parameter, to solve the multicast routing problem with higher search ability and computational efficiency. The proposed algorithm utilizes an updated scheme, based on statistical analysis, to estimate the best values for all MA parameters and enhance MA performance. The numerical results show that the proposed MA improved the delay and jitter of the network, while reducing computational complexity as compared to existing algorithms. PMID:29509760

  1. A memetic optimization algorithm for multi-constrained multicast routing in ad hoc networks.

    PubMed

    Ramadan, Rahab M; Gasser, Safa M; El-Mahallawy, Mohamed S; Hammad, Karim; El Bakly, Ahmed M

    2018-01-01

    A mobile ad hoc network is a conventional self-configuring network where the routing optimization problem, subject to various Quality-of-Service (QoS) constraints, represents a major challenge. Unlike previously proposed solutions, in this paper, we propose a memetic algorithm (MA) employing an adaptive mutation parameter, to solve the multicast routing problem with higher search ability and computational efficiency. The proposed algorithm utilizes an updated scheme, based on statistical analysis, to estimate the best values for all MA parameters and enhance MA performance. The numerical results show that the proposed MA improved the delay and jitter of the network, while reducing computational complexity as compared to existing algorithms.
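
    The overall shape of such a memetic algorithm is a genetic search combined with local refinement and an adapting mutation rate. The skeleton below is a toy illustration of that structure only; the fitness function, adaptation rule and parameter values are placeholders, not the statistically tuned values proposed in the paper.

        # Minimal memetic-algorithm skeleton: genetic search + local refinement + an
        # adaptive mutation rate. The fitness, adaptation rule and parameters are toy
        # illustrations, not the statistically tuned values proposed in the paper.
        import random

        def fitness(x):                      # toy surrogate for a multi-constrained path cost
            return -sum((v - 0.5) ** 2 for v in x)

        def local_search(x, step=0.02):      # memetic step: hill-climb around an individual
            best = x
            for i in range(len(x)):
                for d in (-step, step):
                    cand = list(best)
                    cand[i] = min(1.0, max(0.0, cand[i] + d))
                    if fitness(cand) > fitness(best):
                        best = cand
            return best

        def memetic(dim=6, pop_size=20, generations=60):
            pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
            mut = 0.2                                        # adaptive mutation rate
            prev_best = max(map(fitness, pop))
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                parents = pop[: pop_size // 2]
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, dim)
                    child = a[:cut] + b[cut:]                # one-point crossover
                    child = [min(1.0, max(0.0, v + random.gauss(0, mut)))
                             if random.random() < mut else v for v in child]
                    children.append(local_search(child))
                pop = parents + children
                best = max(map(fitness, pop))
                # Adapt mutation: explore more when stagnating, less when improving.
                mut = min(0.5, mut * 1.1) if best <= prev_best else max(0.02, mut * 0.9)
                prev_best = best
            return max(pop, key=fitness)

        if __name__ == "__main__":
            random.seed(1)
            print("best fitness:", round(fitness(memetic()), 6))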

  2. Towards Microeconomic Resource Sharing in End System Multicast Networks Based on Walrasian General Equilibrium

    NASA Astrophysics Data System (ADS)

    Rezvani, Mohammad Hossein; Analoui, Morteza

    2010-11-01

    We have designed a competitive economic mechanism for application-level multicast in which a number of independent services are provided to the end-users by a number of origin servers. Each offered service can be thought of as a commodity, and the origin servers and the users who relay the service to their downstream nodes can thus be thought of as the producers of the economy. Also, the end-users can be viewed as the consumers of the economy. The proposed mechanism regulates the price of each service in such a way that general equilibrium holds, so all allocations are Pareto optimal in the sense that the social welfare of the users is maximized.

  3. High-speed free-space based reconfigurable card-to-card optical interconnects with broadcast capability.

    PubMed

    Wang, Ke; Nirmalathas, Ampalavanapillai; Lim, Christina; Skafidas, Efstratios; Alameh, Kamal

    2013-07-01

    In this paper, we propose and experimentally demonstrate a free-space based high-speed reconfigurable card-to-card optical interconnect architecture with broadcast capability, which is required for control functionalities and efficient parallel computing applications. Experimental results show that 10 Gb/s data can be broadcast to all receiving channels for up to 30 cm with a worst-case receiver sensitivity better than -12.20 dBm. In addition, arbitrary multicasting with the same architecture is also investigated. 10 Gb/s reconfigurable point-to-point link and multicast channels are simultaneously demonstrated with a measured receiver sensitivity power penalty of ~1.3 dB due to crosstalk.

  4. The combination of gas-phase fluorophore technology and automation to enable high-throughput analysis of plant respiration.

    PubMed

    Scafaro, Andrew P; Negrini, A Clarissa A; O'Leary, Brendan; Rashid, F Azzahra Ahmad; Hayes, Lucy; Fan, Yuzhen; Zhang, You; Chochois, Vincent; Badger, Murray R; Millar, A Harvey; Atkin, Owen K

    2017-01-01

    Mitochondrial respiration in the dark (Rdark) is a critical plant physiological process, and hence a reliable, efficient and high-throughput method of measuring variation in rates of Rdark is essential for agronomic and ecological studies. However, the methods currently used to measure Rdark in plant tissues are typically low throughput. We assessed a high-throughput automated fluorophore system for detecting multiple O2 consumption rates. The fluorophore technique was compared with O2 electrodes, infrared gas analysers (IRGA), and membrane inlet mass spectrometry to determine the accuracy and speed of detecting respiratory fluxes. The high-throughput fluorophore system provided stable measurements of Rdark in detached leaf and root tissues over many hours. Its high-throughput potential was evident in that the fluorophore system was 10- to 26-fold faster per sample measurement than the other conventional methods. The versatility of the technique was evident in its enabling: (1) rapid screening of Rdark in 138 genotypes of wheat; and (2) quantification of rarely assessed whole-plant Rdark through dissection and simultaneous measurements of above- and below-ground organs. Variation in absolute Rdark was observed between techniques, likely due to variation in sample conditions (i.e., liquid vs. gas phase, open vs. closed systems), indicating that comparisons between studies using different measuring apparatus may not be feasible. However, the high-throughput protocol we present provided similar values of Rdark to the most commonly used IRGA instrument currently employed by plant scientists. Together with the greater than tenfold increase in sample processing speed, we conclude that the high-throughput protocol enables reliable, stable and reproducible measurements of Rdark on multiple samples simultaneously, irrespective of plant or tissue type.

  5. Data collection framework for energy efficient privacy preservation in wireless sensor networks having many-to-many structures.

    PubMed

    Bahşi, Hayretdin; Levi, Albert

    2010-01-01

    Wireless sensor networks (WSNs) generally have a many-to-one structure in which event information flows from sensors to a unique sink. In recent WSN applications, many-to-many structures have evolved due to the need to convey collected event information to multiple sinks. Privacy-preserving data collection models in the literature do not solve the problems of WSN applications in which the network has multiple untrusted sinks with different levels of privacy requirements. This study proposes a data collection framework based on k-anonymity for preventing record disclosure of collected event information in WSNs. The proposed method takes the anonymity requirements of multiple sinks into consideration by providing a different level of privacy for each destination sink. Attributes that may identify an event owner are generalized or encrypted in order to meet the different anonymity requirements of the sinks in the same anonymized output. If the same output can be formed, it can be multicast to all sinks. The other, trivial solution is to produce a different anonymized output for each sink and send each to its related sink. Multicasting is an energy-efficient data sending alternative for some sensor nodes. Since minimization of energy consumption is an important design criterion for WSNs, multicasting the same event information to multiple sinks reduces the energy consumption of the overall network.
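
    The k-anonymity-by-generalization idea can be illustrated with a toy sketch: coarsen a quasi-identifier (here, a location grid) until every equivalence class contains at least k records, and use the strictest sink's k so that a single anonymized output can be multicast to all sinks. The grid levels and the fallback-to-encryption comment are illustrative choices, not the paper's exact scheme.

        # Toy sketch of k-anonymity by generalization: coarsen a location quasi-identifier
        # until each equivalence class holds at least k records. If one output satisfies
        # the strictest sink's k, that single output can be multicast to all sinks.
        from collections import Counter

        def generalize(records, cell):
            """Snap (x, y) locations to a grid of size 'cell' (larger cell = coarser data)."""
            return [((x // cell) * cell, (y // cell) * cell) for x, y in records]

        def is_k_anonymous(generalized, k):
            return all(count >= k for count in Counter(generalized).values())

        def anonymize_for_k(records, k, cells=(1, 2, 4, 8, 16)):
            """Return the finest generalization level that satisfies k-anonymity."""
            for cell in cells:
                out = generalize(records, cell)
                if is_k_anonymous(out, k):
                    return cell, out
            return None, None   # cannot satisfy k at these levels; fall back to encryption

        if __name__ == "__main__":
            events = [(1, 1), (2, 1), (3, 2), (9, 9), (10, 9), (11, 10), (12, 11), (2, 3)]
            sink_requirements = {"sink_A": 2, "sink_B": 4}
            k_max = max(sink_requirements.values())
            cell, output = anonymize_for_k(events, k_max)
            print(f"grid size {cell} satisfies k={k_max}; multicast output: {output}")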

  6. Why we need a centralized repository for isotopic data

    USDA-ARS?s Scientific Manuscript database

    Stable isotopes encode the origin and integrate the history of matter; thus, their analysis offers tremendous potential to address questions across diverse scientific disciplines. Indeed, the broad applicability of stable isotopes, coupled with advancements in high-throughput analysis, have created ...

  7. A high-throughput exploration of magnetic materials by using structure predicting methods

    NASA Astrophysics Data System (ADS)

    Arapan, S.; Nieves, P.; Cuesta-López, S.

    2018-02-01

    We study the capability of a structure-predicting method based on a genetic/evolutionary algorithm for the high-throughput exploration of magnetic materials. We use the USPEX and VASP codes to predict stable structures and generate low-energy metastable structures for a set of representative magnetic systems comprising intermetallic alloys, oxides, interstitial compounds, and systems containing rare-earth elements, for both ferromagnetic and antiferromagnetic ordering. We have modified the interface between the USPEX and VASP codes to improve the performance of structural optimization as well as to perform calculations in a high-throughput manner. We show that exploring the structural phase space with a structure-predicting technique reveals large sets of low-energy metastable structures, which not only improve currently existing databases, but also may provide understanding and solutions to stabilize and synthesize magnetic materials suitable for permanent magnet applications.

  8. Admission and Preventive Load Control for Delivery of Multicast and Broadcast Services via S-UMTS

    NASA Astrophysics Data System (ADS)

    Angelou, E.; Koutsokeras, N.; Andrikopoulos, I.; Mertzanis, I.; Karaliopoulos, M.; Henrio, P.

    2003-07-01

    An Admission Control strategy is proposed for unidirectional satellite systems delivering multicast and broadcast services to mobile users. In such systems, both the radio interface and the targeted services impose particular requirements on the RRM task. We briefly discuss the RRM requirements that stem from the services point of view and from the features of the SATIN access scheme that differentiate it from the conventional T-UMTS radio interface. The main functional entities of RRM and the alternative modes of operation are outlined and the proposed Admission Control algorithm is described in detail. The results from the simulation study that demonstrate its performance for a number of different scenarios are finally presented and conclusions derived.

  9. Design of a Multicast Optical Packet Switch Based on Fiber Bragg Grating Technology for Future Networks

    NASA Astrophysics Data System (ADS)

    Cheng, Yuh-Jiuh; Yeh, Tzuoh-Chyau; Cheng, Shyr-Yuan

    2011-09-01

    In this paper, a non-blocking multicast optical packet switch based on fiber Bragg grating technology with optical output buffers is proposed. Only the header of an optical packet is converted to electronic signals, to control the fiber Bragg grating array of the input ports, while the packet payloads are transparently routed to their output ports, so the proposed switch reduces both the electronic interfaces and the electronic bit rate. The modulation and format of the packet payloads may be non-standard, and payloads may also include different wavelengths to increase the traffic volume. The advantage is obvious: the proposed switch can transport various types of traffic. An easily implemented architecture that provides multicast services is also presented. An optical output buffer is designed to queue packets when more than one incoming packet is destined for the same output port, or when packets already waiting in the optical output buffer are to be sent to that output port in a given time slot. To preserve service-packet sequencing and fairness of routing order, a priority scheme and a round-robin algorithm are adopted at the optical output buffer. The fiber Bragg grating arrays of both the input ports and the output ports are designed to route incoming packets using optical code division multiple access technology.

  10. Space Flight Middleware: Remote AMS over DTN for Delay-Tolerant Messaging

    NASA Technical Reports Server (NTRS)

    Burleigh, Scott

    2011-01-01

    This paper describes a technique for implementing scalable, reliable, multi-source multipoint data distribution in space flight communications -- Delay-Tolerant Reliable Multicast (DTRM) -- that is fully supported by the "Remote AMS" (RAMS) protocol of the Asynchronous Message Service (AMS) proposed for standardization within the Consultative Committee for Space Data Systems (CCSDS). The DTRM architecture enables applications to easily "publish" messages that will be reliably and efficiently delivered to an arbitrary number of "subscribing" applications residing anywhere in the space network, whether in the same subnet or in a subnet on a remote planet or vehicle separated by many light minutes of interplanetary space. The architecture comprises multiple levels of protocol, each included for a specific purpose and allocated specific responsibilities: "application AMS" traffic performs end-system data introduction and delivery subject to access control; underlying "remote AMS" directs this application traffic to populations of recipients at remote locations in a multicast distribution tree, enabling the architecture to scale up to large networks; further underlying Delay-Tolerant Networking (DTN) Bundle Protocol (BP) advances RAMS protocol data units through the distribution tree using delay-tolerant store-and-forward methods; and further underlying reliable "convergence-layer" protocols ensure successful data transfer over each segment of the end-to-end route. The result is scalable, reliable, delay-tolerant multi-source multicast that is largely self-configuring.

  11. Air-stable ink for scalable, high-throughput layer deposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weil, Benjamin D; Connor, Stephen T; Cui, Yi

    A method for producing and depositing air-stable, easily decomposable, vulcanized ink on any of a wide range of substrates is disclosed. The ink enables high-volume production of optoelectronic and/or electronic devices using scalable production methods, such as roll-to-roll transfer, fast rolling processes, and the like.

  12. Dynamic Network Selection for Multicast Services in Wireless Cooperative Networks

    NASA Astrophysics Data System (ADS)

    Chen, Liang; Jin, Le; He, Feng; Cheng, Hanwen; Wu, Lenan

    In next-generation mobile multimedia communications, different wireless access networks are expected to cooperate. However, choosing an optimal transmission path in this scenario is a challenging task. This paper focuses on the problem of selecting the optimal access network for multicast services in cooperative mobile and broadcasting networks. An algorithm is proposed which considers multiple decision factors and multiple optimization objectives. An analytic hierarchy process (AHP) method is applied to schedule the service queue, and an artificial neural network (ANN) is used to improve the flexibility of the algorithm. Simulation results show that, by applying the AHP method, a group of weight ratios can be obtained that improves the performance across multiple objectives, and that the ANN method is effective in adaptively adjusting the weight ratios when users' new waiting thresholds are generated.
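
    The AHP step used for such network selection derives factor weights from a pairwise comparison matrix; a common approximation of the principal eigenvector is the normalized geometric mean of each row. The sketch below illustrates that computation only; the factors and comparison values are made up for illustration.

        # Sketch of the AHP weighting step used for network selection: derive factor
        # weights from a pairwise comparison matrix via the geometric-mean approximation
        # of the principal eigenvector. The comparison values below are illustrative only.
        import math

        def ahp_weights(pairwise):
            """pairwise[i][j] = how much factor i is preferred over factor j (Saaty 1-9 scale)."""
            n = len(pairwise)
            geo_means = [math.prod(row) ** (1.0 / n) for row in pairwise]
            total = sum(geo_means)
            return [g / total for g in geo_means]

        if __name__ == "__main__":
            factors = ["bandwidth", "delay", "cost"]
            # Example judgements: bandwidth 3x as important as delay, 5x as important as cost.
            comparisons = [
                [1.0, 3.0, 5.0],
                [1 / 3, 1.0, 2.0],
                [1 / 5, 1 / 2, 1.0],
            ]
            for name, w in zip(factors, ahp_weights(comparisons)):
                print(f"{name}: {w:.3f}")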

  13. Polarization-insensitive PAM-4-carrying free-space orbital angular momentum (OAM) communications.

    PubMed

    Liu, Jun; Wang, Jian

    2016-02-22

    We present a simple configuration incorporating single polarization-sensitive phase-only liquid crystal spatial light modulator (SLM) to facilitate polarization-insensitive free-space optical communications employing orbital angular momentum (OAM) modes. We experimentally demonstrate several polarization-insensitive optical communication subsystems by propagating a single OAM mode, multicasting 4 and 10 OAM modes, and multiplexing 8 OAM modes, respectively. Free-space polarization-insensitive optical communication links using OAM modes that carry four-level pulse-amplitude modulation (PAM-4) signal are demonstrated in the experiment. The observed optical signal-to-noise ratio (OSNR) penalties are less than 1 dB in both polarization-insensitive N-fold OAM modes multicasting and multiple OAM modes multiplexing at a bit-error rate (BER) of 2e-3 (enhanced forward-error correction (EFEC) threshold).

  14. Advanced Map For Real-Time Process Control

    NASA Astrophysics Data System (ADS)

    Shiobara, Yasuhisa; Matsudaira, Takayuki; Sashida, Yoshio; Chikuma, Makoto

    1987-10-01

    MAP, a communications protocol for factory automation proposed by General Motors [1], has been accepted by users throughout the world and is rapidly becoming a user standard. In fact, it is now a LAN standard for factory automation. MAP is intended to interconnect different devices, such as computers and programmable devices, made by different manufacturers, enabling them to exchange information. It is based on the OSI intercomputer com-munications protocol standard under development by the ISO. With progress and standardization, MAP is being investigated for application to process control fields other than factory automation [2]. The transmission response time of the network system and centralized management of data exchanged with various devices for distributed control are import-ant in the case of a real-time process control with programmable controllers, computers, and instruments connected to a LAN system. MAP/EPA and MINI MAP aim at reduced overhead in protocol processing and enhanced transmission response. If applied to real-time process control, a protocol based on point-to-point and request-response transactions limits throughput and transmission response. This paper describes an advanced MAP LAN system applied to real-time process control by adding a new data transmission control that performs multicasting communication voluntarily and periodically in the priority order of data to be exchanged.

  15. Combined Wavelet Video Coding and Error Control for Internet Streaming and Multicast

    NASA Astrophysics Data System (ADS)

    Chu, Tianli; Xiong, Zixiang

    2003-12-01

    This paper proposes an integrated approach to Internet video streaming and multicast (e.g., receiver-driven layered multicast (RLM) by McCanne) based on combined wavelet video coding and error control. We design a packetized wavelet video (PWV) coder to facilitate its integration with error control. The PWV coder produces packetized layered bitstreams that are independent among layers while being embedded within each layer. Thus, a lost packet only renders the following packets in the same layer useless. Based on the PWV coder, we search for a multilayered error-control strategy that optimally trades off source and channel coding for each layer under a given transmission rate to mitigate the effects of packet loss. While both the PWV coder and the error-control strategy are new—the former incorporates embedded wavelet video coding and packetization and the latter extends the single-layered approach for RLM by Chou et al.—the main distinction of this paper lies in the seamless integration of the two parts. Theoretical analysis shows a gain of up to 1 dB on a channel with 20% packet loss using our combined approach over separate designs of the source coder and the error-control mechanism. This is also substantiated by our simulations with a gain of up to 0.6 dB. In addition, our simulations show a gain of up to 2.2 dB over previous results reported by Chou et al.

  16. Exact and heuristic algorithms for Space Information Flow.

    PubMed

    Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing; Li, Zongpeng

    2018-01-01

    Space Information Flow (SIF) is a new promising research area that studies network coding in geometric space, such as Euclidean space. The design of algorithms that compute the optimal SIF solutions remains one of the key open problems in SIF. This work proposes the first exact SIF algorithm and a heuristic SIF algorithm that compute min-cost multicast network coding for N (N ≥ 3) given terminal nodes in 2-D Euclidean space. Furthermore, we find that the Butterfly network in Euclidean space is the second example besides the Pentagram network where SIF is strictly better than Euclidean Steiner minimal tree. The exact algorithm design is based on two key techniques: Delaunay triangulation and linear programming. Delaunay triangulation technique helps to find practically good candidate relay nodes, after which a min-cost multicast linear programming model is solved over the terminal nodes and the candidate relay nodes, to compute the optimal multicast network topology, including the optimal relay nodes selected by linear programming from all the candidate relay nodes and the flow rates on the connection links. The heuristic algorithm design is also based on Delaunay triangulation and linear programming techniques. The exact algorithm can achieve the optimal SIF solution with an exponential computational complexity, while the heuristic algorithm can achieve the sub-optimal SIF solution with a polynomial computational complexity. We prove the correctness of the exact SIF algorithm. The simulation results show the effectiveness of the heuristic SIF algorithm.
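
    The candidate-relay step can be illustrated with standard tools, assuming numpy and scipy are available: triangulate the terminal nodes with a Delaunay triangulation and take the triangle centroids as candidate relay locations. This is only a sketch of that step; the min-cost multicast linear program described in the paper is not reproduced here.

        # Sketch of the candidate-relay generation step: Delaunay-triangulate the
        # terminal nodes and use triangle centroids as candidate relay locations.
        # The min-cost multicast linear program described in the paper is not shown.
        import numpy as np
        from scipy.spatial import Delaunay

        def candidate_relays(terminals):
            """terminals: (N, 2) array of terminal coordinates, N >= 3."""
            pts = np.asarray(terminals, dtype=float)
            tri = Delaunay(pts)
            # One candidate relay per Delaunay triangle: its centroid.
            return pts[tri.simplices].mean(axis=1)

        if __name__ == "__main__":
            terminals = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0), (2.0, -3.0)]
            for relay in candidate_relays(terminals):
                print(tuple(np.round(relay, 3)))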

  17. System for high throughput water extraction from soil material for stable isotope analysis of water

    USDA-ARS?s Scientific Manuscript database

    A major limitation in the use of stable isotope of water in ecological studies is the time that is required to extract water from soil and plant samples. Using vacuum distillation the extraction time can be less than one hour per sample. Therefore, assembling a distillation system that can process m...

  18. TCP Throughput Profiles Using Measurements over Dedicated Connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Liu, Qiang; Sen, Satyabrata

    Wide-area data transfers in high-performance computing infrastructures are increasingly being carried over dynamically provisioned dedicated network connections that provide high capacities with no competing traffic. We present extensive TCP throughput measurements and time traces over a suite of physical and emulated 10 Gbps connections with 0-366 ms round-trip times (RTTs). Contrary to the general expectation, they show significant statistical and temporal variations, in addition to the overall dependencies on the congestion control mechanism, buffer size, and the number of parallel streams. We analyze several throughput profiles that have highly desirable concave regions wherein the throughput decreases slowly with RTTs, in stark contrast to the convex profiles predicted by various TCP analytical models. We present a generic throughput model that abstracts the ramp-up and sustainment phases of TCP flows, which provides insights into qualitative trends observed in measurements across TCP variants: (i) slow-start followed by well-sustained throughput leads to concave regions; (ii) large buffers and multiple parallel streams expand the concave regions in addition to improving the throughput; and (iii) stable throughput dynamics, indicated by a smoother Poincare map and smaller Lyapunov exponents, lead to wider concave regions. These measurements and analytical results together enable us to select a TCP variant and its parameters for a given connection to achieve high throughput with statistical guarantees.
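
    The ramp-up/sustainment abstraction can be sketched as follows: a flow spends roughly log2(BDP/initial window) round trips ramping up before sustaining near link capacity, so the average throughput over a fixed transfer time falls off slowly with RTT. The constants below are illustrative assumptions, not the fitted model from the measurements.

        import math

        def avg_throughput_gbps(rtt_s, capacity_gbps=10.0, duration_s=10.0,
                                init_window_pkts=10, pkt_bits=12000.0):
            # Packets in flight needed to fill the pipe (bandwidth-delay product).
            bdp_pkts = capacity_gbps * 1e9 * rtt_s / pkt_bits
            # Slow-start roughly doubles the window each RTT until the pipe is full.
            ramp_rtts = max(0.0, math.log2(max(bdp_pkts / init_window_pkts, 1.0)))
            ramp_time_s = ramp_rtts * rtt_s
            # Crude abstraction: negligible goodput during ramp-up, full rate afterwards.
            sustain_s = max(0.0, duration_s - ramp_time_s)
            return capacity_gbps * sustain_s / duration_s

        for rtt_ms in (10, 50, 100, 200, 366):
            print(rtt_ms, "ms ->", round(avg_throughput_gbps(rtt_ms / 1000.0), 2), "Gbps")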

  19. An Approach to Verification and Validation of a Reliable Multicasting Protocol

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.

    1994-01-01

    This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test was different between the model and implementation, then the differences helped identify inconsistencies between the model and implementation. The dialogue between both teams drove the co-evolution of the model and implementation. Testing served as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP.

  20. An approach to verification and validation of a reliable multicasting protocol

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.

    1995-01-01

    This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test was different between the model and implementation, then the differences helped identify inconsistencies between the model and implementation. The dialogue between both teams drove the co-evolution of the model and implementation. Testing served as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP.

  1. A conifer-friendly high-throughput α-cellulose extraction method for δ13C and δ18O stable isotope ratio analysis

    NASA Astrophysics Data System (ADS)

    Lin, W.; Noormets, A.; domec, J.; King, J. S.; Sun, G.; McNulty, S.

    2012-12-01

    Wood stable isotope ratios (δ13C and δ18O) offer insight into water source and plant water use efficiency (WUE), which in turn provide a glimpse of potential plant responses to changing climate, particularly rainfall patterns. The synthetic pathways of cell wall deposition in wood rings differ in their discrimination ratios between the light and heavy isotopes, and α-cellulose is broadly seen as the best indicator of plant water status due to its local and temporal fixation and to its high abundance within the wood. To use the effects of recent severe droughts on the WUE of loblolly pine (Pinus taeda) throughout Southeastern USA as a harbinger of future changes, an effort has been undertaken to sample the entire range of the species and to sample the isotopic composition in a consistent manner. To be able to accommodate the large number of samples required by this analysis, we have developed a new high-throughput method for α-cellulose extraction, which is the rate-limiting step in such an endeavor. Although an entire family of methods has been developed and performs well, their throughput in a typical research lab setting is limited to 16-75 samples per week with intensive labor input. The resin exclusion step in conifers is particularly time-consuming. We have combined the recent advances of α-cellulose extraction in plant ecology and wood science, including a high-throughput extraction device developed in the Potsdam Dendro Lab and a simple chemical-based resin exclusion method. Transferring the entire extraction process to a multiport-based system allows throughputs of up to several hundred samples in two weeks, while minimizing labor requirements to 2-3 days per batch of samples.

  2. Digital Multicasting of Multiple Audio Streams

    NASA Technical Reports Server (NTRS)

    Macha, Mitchell; Bullock, John

    2007-01-01

    The Mission Control Center Voice Over Internet Protocol (MCC VOIP) system (see figure) comprises hardware and software that effect simultaneous, nearly real-time transmission of as many as 14 different audio streams to authorized listeners via the MCC intranet and/or the Internet. The original version of the MCC VOIP system was conceived to enable flight-support personnel located in offices outside a spacecraft mission control center to monitor audio loops within the mission control center. Different versions of the MCC VOIP system could be used for a variety of public and commercial purposes - for example, to enable members of the general public to monitor one or more NASA audio streams through their home computers, to enable air-traffic supervisors to monitor communication between airline pilots and air-traffic controllers in training, and to monitor conferences among brokers in a stock exchange. At the transmitting end, the audio-distribution process begins with feeding the audio signals to analog-to-digital converters. The resulting digital streams are sent through the MCC intranet, using a user datagram protocol (UDP), to a server that converts them to encrypted data packets. The encrypted data packets are then routed to the personal computers of authorized users by use of multicasting techniques. The total data-processing load on the portion of the system upstream of and including the encryption server is the total load imposed by all of the audio streams being encoded, regardless of the number of the listeners or the number of streams being monitored concurrently by the listeners. The personal computer of a user authorized to listen is equipped with special-purpose MCC audio-player software. When the user launches the program, the user is prompted to provide identification and a password. In one of two access-control provisions, the program is hard-coded to validate the user's identity and password against a list maintained on a domain-controller computer at the MCC. In the other access-control provision, the program verifies that the user is authorized to have access to the audio streams. Once both access-control checks are completed, the audio software presents a graphical display that includes audiostream-selection buttons and volume-control sliders. The user can select all or any subset of the available audio streams and can adjust the volume of each stream independently of that of the other streams. The audio-player program spawns a "read" process for the selected stream(s). The spawned process sends, to the router(s), a "multicast-join" request for the selected streams. The router(s) responds to the request by sending the encrypted multicast packets to the spawned process. The spawned process receives the encrypted multicast packets and sends a decryption packet to audio-driver software. As the volume or muting features are changed by the user, interrupts are sent to the spawned process to change the corresponding attributes sent to the audio-driver software. The total latency of this system - that is, the total time from the origination of the audio signals to generation of sound at a listener's computer - lies between four and six seconds.
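
    The "multicast-join" request mentioned above corresponds to the standard group-membership step taken when a UDP socket joins a multicast group; the receiver sketch below shows that step in Python. The group address and port are placeholders rather than the MCC's actual configuration, and decryption and audio playback are omitted.

        import socket, struct

        MCAST_GRP, MCAST_PORT = "239.0.0.1", 5004   # placeholder stream address/port

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", MCAST_PORT))

        # Joining the group asks the local routers to forward the stream's packets here.
        mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

        while True:
            packet, addr = sock.recvfrom(2048)
            # A real client would decrypt the packet and pass the audio to the driver.
            print(len(packet), "bytes from", addr)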

  3. NADIR: A Flexible Archiving System Current Development

    NASA Astrophysics Data System (ADS)

    Knapic, C.; De Marco, M.; Smareglia, R.; Molinaro, M.

    2014-05-01

    The New Archiving Distributed InfrastructuRe (NADIR) is under development at the Italian center for Astronomical Archives (IA2) to increase the performance of the current archival software tools at the data center. Traditional software usually offers simple and robust solutions for data archiving and distribution but is awkward to adapt and reuse in projects that have different purposes. Data evolution in terms of data model, format, publication policy, version, and metadata content is the main threat to reuse. NADIR, using stable and mature framework features, answers these very challenging issues. Its main characteristics are a configuration database, a multi-threading and multi-language environment (C++, Java, Python), special features to guarantee high scalability, modularity, robustness, and error tracking, and tools to monitor with confidence the status of each project at each archiving site. In this contribution, the development of the core components is presented, commenting also on some performance and innovative features (multicast and publisher-subscriber paradigms). NADIR is planned to be developed as simply as possible with default configurations for every project, first of all for LBT and other IA2 projects.

  4. Scalable Active Optical Access Network Using Variable High-Speed PLZT Optical Switch/Splitter

    NASA Astrophysics Data System (ADS)

    Ashizawa, Kunitaka; Sato, Takehiro; Tokuhashi, Kazumasa; Ishii, Daisuke; Okamoto, Satoru; Yamanaka, Naoaki; Oki, Eiji

    This paper proposes a scalable active optical access network using a high-speed Plumbum Lanthanum Zirconate Titanate (PLZT) optical switch/splitter. The Active Optical Network, called ActiON, using PLZT switching technology has been presented to increase the number of subscribers and the maximum transmission distance, compared to the Passive Optical Network (PON). ActiON supports the multicast slot allocation realized by running the PLZT switch elements in the splitter mode, which forces the switch to behave as an optical splitter. However, the previous ActiON creates a tradeoff between the network scalability and the power loss experienced by the optical signal to each user. It does not use the optical power efficiently because the optical power is simply split equally (0.5:0.5) without considering the transmission distance from OLT to each ONU. The proposed network adopts PLZT switch elements in the variable splitter mode, which controls the split ratio of the optical power considering the transmission distance from OLT to each ONU, in addition to PLZT switch elements in the two existing modes, the switching mode and the splitter mode. The proposed network introduces flexible multicast slot allocation according to the transmission distance from OLT to each user and the number of required users using the three modes, while keeping the advantages of ActiON, which are to support scalable and secure access services. Numerical results show that the proposed network dramatically reduces the required number of slots, supports high bandwidth efficiency services, and extends the coverage of the access network, compared to the previous ActiON, and the required computation time for selecting multicast users is less than 30 ms, which is acceptable for on-demand broadcast services.

  5. MDP: Reliable File Transfer for Space Missions

    NASA Technical Reports Server (NTRS)

    Rash, James; Criscuolo, Ed; Hogie, Keith; Parise, Ron; Hennessy, Joseph F. (Technical Monitor)

    2002-01-01

    This paper presents work being done at NASA/GSFC by the Operating Missions as Nodes on the Internet (OMNI) project to demonstrate the application of the Multicast Dissemination Protocol (MDP) to space missions to reliably transfer files. This work builds on previous work by the OMNI project to apply Internet communication technologies to space communication. The goal of this effort is to provide an inexpensive, reliable, standard, and interoperable mechanism for transferring files in the space communication environment. Limited bandwidth, noise, delay, intermittent connectivity, link asymmetry, and one-way links are all possible issues for space missions. Although these are link-layer issues, they can have a profound effect on the performance of transport and application level protocols. MDP, a UDP-based reliable file transfer protocol, was designed for multicast environments which have to address these same issues, and it has done so successfully. Developed by the Naval Research Lab in the mid-1990s, MDP is now in daily use by both the US Post Office and the DoD. This paper describes the use of MDP to provide automated end-to-end data flow for space missions. It examines the results of a parametric study of MDP in a simulated space link environment and discusses the results in terms of their implications for space missions. Lessons learned are addressed, which suggest minor enhancements to the MDP user interface to add specific features for space mission requirements, such as dynamic control of data rate, and a checkpoint/resume capability. These are features that are provided for in the protocol, but are not implemented in the sample MDP application that was provided. A brief look is also taken at the status of standardization. A version of MDP known as NORM (NACK-Oriented Reliable Multicast) is in the process of becoming an IETF standard.
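
    MDP (like the later NORM) achieves reliability with receiver-driven negative acknowledgements: each receiver tracks the sequence numbers it has seen and requests only the missing ones. The sketch below shows that gap-detection step in isolation; it is an illustration of the idea, not the MDP wire protocol.

        def missing_blocks(received_seqs, highest_seen):
            # Sequence numbers to request in a NACK, given what has arrived so far
            # and the highest sequence number observed from the sender.
            return sorted(set(range(highest_seen + 1)) - set(received_seqs))

        # Example: blocks 3 and 6 were lost out of 0..8.
        received = [0, 1, 2, 4, 5, 7, 8]
        print(missing_blocks(received, max(received)))   # -> [3, 6]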

  6. 47 CFR 73.1201 - Station identification.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... offerings. Television and Class A television broadcast stations may make these announcements visually or... multicast audio programming streams, in a manner that appropriately alerts its audience to the fact that it is listening to a digital audio broadcast. No other insertion between the station's call letters and...

  7. Back pressure based multicast scheduling for fair bandwidth allocation.

    PubMed

    Sarkar, Saswati; Tassiulas, Leandros

    2005-09-01

    We study the fair allocation of bandwidth in multicast networks with multirate capabilities. In multirate transmission, each source encodes its signal in layers. The lowest layer contains the most important information and all receivers of a session should receive it. If a receiver's data path has additional bandwidth, it receives higher layers which leads to a better quality of reception. The bandwidth allocation objective is to distribute the layers fairly. We present a computationally simple, decentralized scheduling policy that attains the maxmin fair rates without using any knowledge of traffic statistics and layer bandwidths. This policy learns the congestion level from the queue lengths at the nodes, and adapts the packet transmissions accordingly. When the network is congested, packets are dropped from the higher layers; therefore, the more important lower layers suffer negligible packet loss. We present analytical and simulation results that guarantee the maxmin fairness of the resulting rate allocation, and upper bound the packet loss rates for different layers.
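
    A much simplified illustration of the back-pressure idea follows: at each transmission opportunity the node serves the layer with the largest positive backlog differential toward the next hop, so congestion downstream naturally throttles the higher layers first. The queue contents below are illustrative assumptions, not the policy analyzed in the paper.

        def backpressure_pick(local_queues, downstream_queues):
            # local_queues / downstream_queues: packets queued per layer at this node
            # and at the next hop. Serve the layer with the largest positive backlog
            # differential; return None when no differential is positive.
            diffs = {layer: local_queues[layer] - downstream_queues.get(layer, 0)
                     for layer in local_queues}
            layer, diff = max(diffs.items(), key=lambda kv: kv[1])
            return layer if diff > 0 else None

        local = {"base": 12, "enh1": 7, "enh2": 3}
        downstream = {"base": 2, "enh1": 6, "enh2": 5}
        print(backpressure_pick(local, downstream))   # -> base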

  8. Secure Multicast Tree Structure Generation Method for Directed Diffusion Using A* Algorithms

    NASA Astrophysics Data System (ADS)

    Kim, Jin Myoung; Lee, Hae Young; Cho, Tae Ho

    The application of wireless sensor networks to areas such as combat field surveillance, terrorist tracking, and highway traffic monitoring requires secure communication among the sensor nodes within the networks. Logical key hierarchy (LKH) is a tree-based key management model which provides secure group communication. When a sensor node is added or evicted from the communication group, LKH updates the group key in order to ensure the security of the communications. In order to efficiently update the group key in directed diffusion, we propose a method for secure multicast tree structure generation, an extension to LKH that reduces the number of re-keying messages by considering the addition and eviction ratios of the history data. For the generation of the proposed key tree structure, the A* algorithm is applied, in which the branching factor at each level can take on different values. The experimental results demonstrate the efficiency of the proposed key tree structure against the existing key tree structures of fixed branching factors.
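
    For a balanced logical key hierarchy over N members with branching factor d, evicting one member requires replacing the keys on its path to the root and distributing each new key to roughly d children, so the re-keying cost is about d times the tree height. The sketch below computes that standard estimate so branching factors can be compared; it is not the history-weighted A* construction proposed in the paper.

        def tree_height(n_members, d):
            # Smallest h with d**h >= n_members (height of a balanced key tree).
            h = 0
            while d ** h < n_members:
                h += 1
            return h

        def lkh_leave_messages(n_members, d):
            # Each key on the evicted member's root path is replaced and sent to the
            # (up to d) children of its node -- roughly d messages per level.
            return d * tree_height(n_members, d)

        for d in (2, 3, 4, 8):
            print("branching factor", d, "->", lkh_leave_messages(1024, d), "re-keying messages")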

  9. Context-based user grouping for multi-casting in heterogeneous radio networks

    NASA Astrophysics Data System (ADS)

    Mannweiler, C.; Klein, A.; Schneider, J.; Schotten, H. D.

    2011-08-01

    Along with the rise of sophisticated smartphones and smart spaces, the availability of both static and dynamic context information has steadily been increasing in recent years. Due to the popularity of social networks, these data are complemented by profile information about individual users. Making use of this information by classifying users in wireless networks enables targeted content and advertisement delivery as well as optimizing network resources, in particular bandwidth utilization, by facilitating group-based multi-casting. In this paper, we present the design and implementation of a web service for advanced user classification based on user, network, and environmental context information. The service employs simple and advanced clustering algorithms for forming classes of users. Available service functionalities include group formation, context-aware adaptation, and deletion as well as the exposure of group characteristics. Moreover, the results of a performance evaluation, where the service has been integrated in a simulator modeling user behavior in heterogeneous wireless systems, are presented.

  10. From Genes to Protein Mechanics on a Chip

    PubMed Central

    Milles, Lukas F.; Verdorfer, Tobias; Pippig, Diana A.; Nash, Michael A.; Gaub, Hermann E.

    2014-01-01

    Single-molecule force spectroscopy enables mechanical testing of individual proteins; however, low experimental throughput limits the ability to screen constructs in parallel. We describe a microfluidic platform for on-chip protein expression and measurement of single-molecule mechanical properties. We constructed microarrays of proteins covalently attached to a chip surface, and found that a single cohesin-modified cantilever that bound to the terminal dockerin-tag of each protein remained stable over thousands of pulling cycles. The ability to synthesize and mechanically probe protein libraries presents new opportunities for high-throughput mechanical phenotyping. PMID:25194847

  11. The growing impact of lyophilized cell-free protein expression systems

    PubMed Central

    Hunt, J. Porter; Yang, Seung Ook; Wilding, Kristen M.; Bundy, Bradley C.

    2017-01-01

    ABSTRACT Recently reported shelf-stable, on-demand protein synthesis platforms are enabling new possibilities in biotherapeutics, biosensing, biocatalysis, and high throughput protein expression. Lyophilized cell-free protein expression systems not only overcome cold-storage limitations, but also enable stockpiling for on-demand synthesis and completely sterilize the protein synthesis platform. Recently reported high-yield synthesis of cytotoxic protein Onconase from lyophilized E. coli extract preparations demonstrates the utility of lyophilized cell-free protein expression and its potential for creating on-demand biotherapeutics, vaccines, biosensors, biocatalysts, and high throughput protein synthesis. PMID:27791452

  12. The Development of Interactive Distance Learning in Taiwan: Challenges and Prospects.

    ERIC Educational Resources Information Center

    Chu, Clarence T.

    1999-01-01

    Describes three types of interactive distance-education systems under development in Taiwan: real-time multicast systems; virtual-classroom systems; and curriculum-on-demand systems. Discusses the use of telecommunications and computer technology in higher education, problems and challenges, and future prospects. (Author/LRW)

  13. Australian DefenceScience. Volume 16, Number 1, Autumn

    DTIC Science & Technology

    2008-01-01

    are carried via VOIP technology, and multicast IP traffic for audio-visual communications is also supported. The SSATIN system overall is seen to...

  14. QoS Adaptation in Multimedia Multicast Conference Applications for E-Learning Services

    ERIC Educational Resources Information Center

    Deusdado, Sérgio; Carvalho, Paulo

    2006-01-01

    The evolution of the World Wide Web service has incorporated new distributed multimedia conference applications, powering a new generation of e-learning development and allowing improved interactivity and prohuman relations. Groupware applications are increasingly representative in the Internet home applications market, however, the Quality of…

  15. Internet technologies and requirements for telemedicine

    NASA Technical Reports Server (NTRS)

    Lamaster, H.; Meylor, J.; Meylor, F.

    1997-01-01

    Internet technologies are briefly introduced and those applicable for telemedicine are reviewed. Multicast internet technologies are described. The National Aeronautics and Space Administration (NASA) 'Telemedicine Space-bridge to Russia' project is described and used to derive requirements for internet telemedicine. Telemedicine privacy and Quality of Service (QoS) requirements are described.

  16. 47 CFR 76.66 - Satellite broadcast signal carriage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... free over-the-air signal, including multicast and high definition digital signals. (c) Election cycle... first retransmission consent-mandatory carriage election cycle shall be for a four-year period... carriage election cycle, and all cycles thereafter, shall be for a period of three years (e.g. the second...

  17. 47 CFR 73.1201 - Station identification.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...; Provided, That the name of the licensee, the station's frequency, the station's channel number, as stated... number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station with major channel number 26 may use 26.1 to identify...

  18. 47 CFR 73.1201 - Station identification.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...; Provided, That the name of the licensee, the station's frequency, the station's channel number, as stated... number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station with major channel number 26 may use 26.1 to identify...

  19. 47 CFR 73.1201 - Station identification.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...; Provided, That the name of the licensee, the station's frequency, the station's channel number, as stated... number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station with major channel number 26 may use 26.1 to identify...

  20. 47 CFR 73.1201 - Station identification.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...; Provided, That the name of the licensee, the station's frequency, the station's channel number, as stated... number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station with major channel number 26 may use 26.1 to identify...

  1. Highly Stable and Active Catalyst for Sabatier Reactions

    NASA Technical Reports Server (NTRS)

    Hu, Jianli; Brooks, Kriston P.

    2012-01-01

    Highly active Ru/TiO2 catalysts for the Sabatier reaction have been developed. The catalysts have been shown to be stable under repeated shutdown/startup conditions. When the Ru/TiO2 catalyst is coated on the engineered substrate Fe-CrAlY felt, the activity is more than double that of an identically prepared engineered catalyst made from a commercial Degussa catalyst. Also, bimetallic Ru-Rh/TiO2 catalysts show high activity at high throughput.

  2. Efficient production of a gene mutant cell line through integrating TALENs and high-throughput cell cloning.

    PubMed

    Sun, Changhong; Fan, Yu; Li, Juan; Wang, Gancheng; Zhang, Hanshuo; Xi, Jianzhong Jeff

    2015-02-01

    Transcription activator-like effectors (TALEs) are becoming powerful DNA-targeting tools in a variety of mammalian cells and model organisms. However, generating a stable cell line with specific gene mutations in a simple and rapid manner remains a challenging task. Here, we report a new method to efficiently produce monoclonal cells using integrated TALE nuclease technology and a series of high-throughput cell cloning approaches. Following this method, we obtained three mTOR mutant 293T cell lines within 2 months, which included one homozygous mutant line. © 2014 Society for Laboratory Automation and Screening.

  3. The Development of CyberLearning in Dual-Mode: Higher Education Institutions in Taiwan.

    ERIC Educational Resources Information Center

    Chen, Yau Jane

    2002-01-01

    Open and distance education in Taiwan has evolved into cyberlearning. Over half (56 percent) of the conventional universities and colleges have been upgraded to dual-mode institutions offering real-time multicast instructional systems using videoconferencing, cable television, virtual classrooms, and curriculum-on-demand systems. The Ministry of…

  4. Digital Video and the Internet: A Powerful Combination.

    ERIC Educational Resources Information Center

    Barron, Ann E.; Orwig, Gary W.

    1995-01-01

    Provides an overview of digital video and outlines hardware and software necessary for interactive training on the World Wide Web and for videoconferences via the Internet. Lists sites providing additional information on digital video, on CU-SeeMe software, and on MBONE (Multicast BackBONE), a technology that permits real-time transmission of…

  5. 78 FR 53757 - Information Collections Being Submitted for Review and Approval to the Office of Management and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-30

    ...'s channel number, as stated on the station's license, and/or the station's network affiliation may... Stations, choosing to include the station's channel number in the station identification must use the station's major channel number and may distinguish multicast program streams. For example, a DTV station...

  6. Multimedia C for Remote Language Teaching over SuperJANET.

    ERIC Educational Resources Information Center

    Matthews, E.; And Others

    1996-01-01

    Describes work carried out as part of a remote language teaching research investigation, which is looking into the use of multicast, multimedia conferencing over SuperJANET. The fundamental idea is to investigate the feasibility of sharing language teaching resources among universities within the United Kingdom by using the broadband SuperJANET…

  7. 37 CFR 386.2 - Royalty fee for secondary transmission by satellite carriers.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES ADJUSTMENT OF ROYALTY FEES FOR... a given month. (2) In the case of a station engaged in digital multicasting, the rates set forth in paragraph (b) of this section shall apply to each digital stream that a satellite carrier or distributor...

  8. 37 CFR 386.2 - Royalty fee for secondary transmission by satellite carriers.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES ADJUSTMENT OF ROYALTY FEES FOR... a given month. (2) In the case of a station engaged in digital multicasting, the rates set forth in paragraph (b) of this section shall apply to each digital stream that a satellite carrier or distributor...

  9. 75 FR 53198 - Rate Adjustment for the Satellite Carrier Compulsory License

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-31

    ... LIBRARY OF CONGRESS Copyright Royalty Board 37 CFR Part 386 [Docket No. 2010-4 CRB Satellite Rate] Rate Adjustment for the Satellite Carrier Compulsory License AGENCY: Copyright Royalty Board, Library... last day of a given month. (2) In the case of a station engaged in digital multicasting, the rates set...

  10. 37 CFR 386.2 - Royalty fee for secondary transmission by satellite carriers.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES ADJUSTMENT OF ROYALTY FEES FOR... a given month. (2) In the case of a station engaged in digital multicasting, the rates set forth in paragraph (b) of this section shall apply to each digital stream that a satellite carrier or distributor...

  11. 37 CFR 386.2 - Royalty fee for secondary transmission by satellite carriers.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES ADJUSTMENT OF ROYALTY FEES FOR... a given month. (2) In the case of a station engaged in digital multicasting, the rates set forth in paragraph (b) of this section shall apply to each digital stream that a satellite carrier or distributor...

  12. Multipoint Multimedia Conferencing System with Group Awareness Support and Remote Management

    ERIC Educational Resources Information Center

    Osawa, Noritaka; Asai, Kikuo

    2008-01-01

    A multipoint, multimedia conferencing system called FocusShare is described that uses IPv6/IPv4 multicasting for real-time collaboration, enabling video, audio, and group awareness information to be shared. Multiple telepointers provide group awareness information and make it easy to share attention and intention. In addition to pointing with the…

  13. Using Interactive Broadband Multicasting in a Museum Lifelong Learning Program.

    ERIC Educational Resources Information Center

    Steinbach, Leonard

    The Cleveland Museum of Art has embarked on an innovative approach for delivering high quality video-on-demand and live interactive cultural programming, along with Web-based complementary material, to seniors in assisted living residence facilities, community-based centers, and disabled persons in their homes. The project is made possible in part…

  14. Cooperation and information replication in wireless networks.

    PubMed

    Poularakis, Konstantinos; Tassiulas, Leandros

    2016-03-06

    A significant portion of today's network traffic is due to recurring downloads of a few popular contents. It has been observed that replicating the latter in caches installed at network edges, close to users, can drastically reduce network bandwidth usage and content access delay. Such caching architectures have gained increasing interest in recent years as a way of dealing with the explosive traffic growth, fuelled further by the downward slope in storage space price. In this work, we provide an overview of caching with a particular emphasis on emerging network architectures that enable caching at the radio access network. In this context, novel challenges arise due to the broadcast nature of the wireless medium, which allows simultaneously serving multiple users tuned into a multicast stream, and the mobility of the users, who may be frequently handed off from one cell tower to another. Existing results indicate that caching at the wireless edge has great potential for removing bottlenecks on the wired backbone networks. Taking into consideration the schedule of multicast service and mobility profiles is crucial to extracting the maximum benefit in network performance. © 2016 The Author(s).

  15. Protocol Architecture Model Report

    NASA Technical Reports Server (NTRS)

    Dhas, Chris

    2000-01-01

    NASA's Glenn Research Center (GRC) defines and develops advanced technology for high priority national needs in communications technologies for application to aeronautics and space. GRC tasked Computer Networks and Software Inc. (CNS) to examine protocols and architectures for an In-Space Internet Node. CNS has developed a methodology for network reference models to support NASA's four mission areas: Earth Science, Space Science, Human Exploration and Development of Space (HEDS), and Aerospace Technology. This report applies the methodology to three space Internet-based communications scenarios for future missions. CNS has conceptualized, designed, and developed space Internet-based communications protocols and architectures for each of the independent scenarios. The scenarios are: Scenario 1: Unicast communications between a Low-Earth-Orbit (LEO) spacecraft in-space Internet node and a ground terminal Internet node via a Tracking and Data Relay Satellite (TDRS) transfer; Scenario 2: Unicast communications between a Low-Earth-Orbit (LEO) International Space Station and a ground terminal Internet node via a TDRS transfer; Scenario 3: Multicast Communications (or "Multicasting"), 1 Spacecraft to N Ground Receivers, N Ground Transmitters to 1 Ground Receiver via a Spacecraft.

  16. Design alternatives for process group membership and multicast

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Cooper, Robert; Gleeson, Barry

    1991-01-01

    Process groups are a natural tool for distributed programming, and are increasingly important in distributed computing environments. However, there is little agreement on the most appropriate semantics for process group membership and group communication. These issues are of special importance in the Isis system, a toolkit for distributed programming. Isis supports several styles of process group, and a collection of group communication protocols spanning a range of atomicity and ordering properties. This flexibility makes Isis adaptable to a variety of applications, but is also a source of complexity that limits performance. This paper reports on a new architecture that arose from an effort to simplify Isis process group semantics. Our findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. As an illustration, we apply the architecture to the problem of converting processes into fault-tolerant process groups in a manner that is 'transparent' to other processes in the system.

  17. 77 FR 68773 - FIFRA Scientific Advisory Panel; Notice of Public Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-16

    ... for physical chemical properties that cannot be easily tested in in vitro systems or stable enough for.... Quantitative structural-activity relationship (QSAR) models and estrogen receptor (ER) expert systems development. High-throughput data generation and analysis (expertise focused on how this methodology can be...

  18. A Very Stable High Throughput Taylor Cone-jet in Electrohydrodynamics

    PubMed Central

    Morad, M. R.; Rajabi, A.; Razavi, M.; Sereshkeh, S. R. Pejman

    2016-01-01

    A stable capillary liquid jet formed by an electric field is an important physical phenomenon for formation of controllable small droplets, power generation and chemical reactions, printing and patterning, and chemical-biological investigations. In electrohydrodynamics, the well-known Taylor cone-jet has a stability margin within a certain range of the liquid flow rate (Q) and the applied voltage (V). Here, we introduce a simple mechanism to greatly extend the Taylor cone-jet stability margin and produce a very high throughput. For an ethanol cone-jet emitting from a simple nozzle, the stability margin is obtained within 1 kV for low flow rates, decaying with flow rate up to 2 ml/h. By installing a hemispherical cap above the nozzle, we demonstrate that the stability margin could increase to 5 kV for low flow rates, decaying to zero for a maximum flow rate of 65 ml/h. The governing borders of stability margins are discussed and obtained for three other liquids: methanol, 1-propanol and 1-butanol. For a gravity-directed nozzle, the produced cone-jet is more stable against perturbations and the axis of the spray remains in the same direction through the whole stability margin, unlike the cone-jet of conventional simple nozzles. PMID:27917956

  19. Information-based management mode based on value network analysis for livestock enterprises

    NASA Astrophysics Data System (ADS)

    Liu, Haoqi; Lee, Changhoon; Han, Mingming; Su, Zhongbin; Padigala, Varshinee Anu; Shen, Weizheng

    2018-01-01

    With the development of computer and IT technologies, enterprise management has gradually become information-based. Moreover, due to poor technical competence and non-uniform management, most breeding enterprises show a lack of organisation in data collection and management. In addition, low levels of efficiency result in increasing production costs. This paper adopts 'struts2' to construct an information-based management system for standardised and normalised management of the production process in beef cattle breeding enterprises. We also present a radio-frequency identification system that addresses multiple-tag anti-collision with a dynamic grouping ALOHA algorithm. The algorithm builds on the existing ALOHA algorithm with improved dynamic packet grouping and is characterised by a high throughput rate: it can reach a throughput 42% higher than that of the general ALOHA algorithm, and the system throughput remains relatively stable as the number of tags changes.
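
    The quoted throughput gain can be put in context with a simple framed slotted ALOHA simulation: each tag answers in a randomly chosen slot of the frame, and a slot succeeds only when exactly one tag chose it. The tag count and frame sizes below are illustrative; the dynamic grouping variant described above adapts the grouping per round, which is not modelled here.

        import random
        from collections import Counter

        def fsa_throughput(n_tags, frame_slots, rounds=2000):
            # Fraction of slots carrying exactly one tag reply, averaged over rounds.
            total = 0.0
            for _ in range(rounds):
                slots = Counter(random.randrange(frame_slots) for _ in range(n_tags))
                total += sum(1 for c in slots.values() if c == 1) / frame_slots
            return total / rounds

        # Throughput peaks when the frame size is close to the tag count (about 1/e).
        for frame in (32, 64, 128, 256):
            print(frame, "slots ->", round(fsa_throughput(n_tags=64, frame_slots=frame), 3))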

  20. CLON: Overlay Networks and Gossip Protocols for Cloud Environments

    NASA Astrophysics Data System (ADS)

    Matos, Miguel; Sousa, António; Pereira, José; Oliveira, Rui; Deliot, Eric; Murray, Paul

    Although epidemic or gossip-based multicast is a robust and scalable approach to reliable data dissemination, its inherent redundancy results in high resource consumption on both links and nodes. This problem is aggravated in settings that have costlier or resource constrained links as happens in Cloud Computing infrastructures composed by several interconnected data centers across the globe.

  1. A Security Architecture for Fault-Tolerant Systems

    DTIC Science & Technology

    1993-06-03

    aspect of our effort to achieve better performance is integrating the system into microkernel-based operating systems. [vRBC+92] R. van Renesse, K. Birman, R. Cooper, B. Glade, and P. Stephenson. Reliable multicast between microkernels. In Proceedings of the USENIX Microkernels and Other Kernel Architectures Workshop, April 1992.

  2. End-to-End Assessment of a Large Aperture Segmented Ultraviolet Optical Infrared (UVOIR) Telescope Architecture

    NASA Technical Reports Server (NTRS)

    Feinberg, Lee; Bolcar, Matt; Liu, Alice; Guyon, Olivier; Stark,Chris; Arenberg, Jon

    2016-01-01

    Key challenges of a future large aperture, segmented Ultraviolet Optical Infrared (UVOIR) Telescope capable of performing a spectroscopic survey of hundreds of Exoplanets will be sufficient stability to achieve 10^-10 contrast measurements and sufficient throughput and sensitivity for high yield Exo-Earth spectroscopic detection. Our team has collectively assessed an optimized end-to-end architecture including a high throughput coronagraph capable of working with a segmented telescope, a cost-effective and heritage based stable segmented telescope, a control architecture that minimizes the amount of new technologies, and an Exo-Earth yield assessment to evaluate potential performance.

  3. Emergence of a few distinct structures from a single formal structure type during high-throughput screening for stable compounds: The case of RbCuS and RbCuSe

    NASA Astrophysics Data System (ADS)

    Trimarchi, Giancarlo; Zhang, Xiuwen; DeVries Vermeer, Michael J.; Cantwell, Jacqueline; Poeppelmeier, Kenneth R.; Zunger, Alex

    2015-10-01

    Theoretical sorting of stable and synthesizable "missing compounds" from those that are unstable is a crucial step in the discovery of previously unknown functional materials. This active research area often involves high-throughput (HT) examination of the total energy of a given compound in a list of candidate formal structure types (FSTs), searching for those with the lowest energy within that list. While it is well appreciated that local relaxation methods based on a fixed list of structure types can lead to inaccurate geometries, this approach is widely used in HT studies because it produces answers faster than global optimization methods (that vary lattice vectors and atomic positions without local restrictions). We find, however, a different failure mode of the HT protocol: specific crystallographic classes of formal structure types each correspond to a series of chemically distinct "daughter structure types" (DSTs) that have the same space group but possess totally different local bonding configurations, including coordination types. Failure to include such DSTs in the fixed list of examined candidate structures used in contemporary high-throughput approaches can lead to qualitative misidentification of the stable bonding pattern, not just quantitative inaccuracies. In this work, we (i) clarify the understanding of the general DST-FST relationship, thus improving current discovery HT approaches, (ii) illustrate this failure mode for RbCuS and RbCuSe (the latter being a previously unreported compound that is predicted here) by developing a synthesis method and accelerated crystal-structure determination, and (iii) apply the genetic-algorithm-based global space-group optimization (GSGO) approach, which is not vulnerable to the failure mode of HT searches of fixed lists, demonstrating a correct identification of the stable DST. The broad impact of items (i)-(iii) lies in the demonstrated predictive ability of a more comprehensive search strategy than what is currently used: HT calculations as the preliminary broad screening, followed by unbiased GSGO of the final candidates.

  4. Membrane inlet laser spectroscopy to measure H and O stable isotope compositions of soil and sediment pore water with high sample throughput

    DOE PAGES

    Oerter, Erik J.; Perelet, Alexei; Pardyjak, Eric; ...

    2016-10-20

    The difficulty of measuring H and O stable isotope compositions (δ2H and δ18O values) of soil and sediment pore water quickly and accurately remains an impediment to scaling up the application of these isotopes in soil and vadose hydrology. Here we describe a method and its calibration for measuring soil and sediment pore water δ2H and δ18O values using a water vapor-permeable probe coupled to an isotope ratio infrared spectroscopy analyzer.

  5. 75 FR 52267 - Waiver of Statement of Account Filing Deadline for the 2010/1 Period

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-25

    ... available in a print format, a PDF format, and a software ``fill-in'' format created by Gralin Associates... retransmission of multicast streams. The paper and PDF versions of the form have been available to cable... recognize that the paper and PDF versions of the SOA have been available since July, many large and small...

  6. High Performance Computing Multicast

    DTIC Science & Technology

    2012-02-01

    responsiveness, first-tier applications often implement replicated in-memory key-value stores, using them to store state or to cache data from services...alternative that replicates data, combines agreement on update ordering with amnesia freedom, and supports both good scalability and fast response.

  7. 77 FR 65596 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-29

    ... to offer remote multi-cast ITCH Wave Ports for clients co-located at other third party data centers... delivery of third party market data to market center clients via a wireless network using millimeter wave... Multi- cast ITCH Wave Ports for clients co-located at other third-party data centers, through which...

  8. Design and Implementation of Replicated Object Layer

    NASA Technical Reports Server (NTRS)

    Koka, Sudhir

    1996-01-01

    One of the widely used techniques for constructing fault-tolerant applications is the replication of resources, so that if one copy fails, sufficient copies may still remain operational to allow the application to continue to function. This thesis involves the design and implementation of an object-oriented framework for replicating data on multiple sites and across different platforms. Our approach, called the Replicated Object Layer (ROL), provides a mechanism for consistent replication of data over dynamic networks. ROL uses the Reliable Multicast Protocol (RMP) as a communication protocol that provides reliable delivery, serialization and fault tolerance. Besides providing type registration, this layer facilitates distributed atomic transactions on replicated data. A novel algorithm called the RMP Commit Protocol, which commits transactions efficiently in a reliable multicast environment, is presented. ROL provides recovery procedures to ensure that site and communication failures do not corrupt persistent data, and makes the system fault-tolerant to network partitions. ROL will facilitate building distributed fault-tolerant applications by handling the burdensome details of replica consistency operations and making them completely transparent to the application. Replicated databases are a major class of applications that could be built on top of ROL.

  9. Detecting and Preventing Sybil Attacks in Wireless Sensor Networks Using Message Authentication and Passing Method.

    PubMed

    Dhamodharan, Udaya Suriya Raj Kumar; Vayanaperumal, Rajamani

    2015-01-01

    Security protection is highly indispensable for wireless sensor networks. Highly critical attacks of various kinds against wireless sensor networks have been documented by many researchers. The Sybil attack is a massively destructive attack against the sensor network, in which numerous forged identities are used alongside genuine ones to gain illegal entry into the network. Discerning Sybil, sinkhole, and wormhole attacks during multicasting is a demanding task in a wireless sensor network. Basically, a Sybil attack involves a node that presents forged identities to other nodes. Communication with an illegal node results in data loss and endangers the network. The existing Random Password Comparison method only verifies node identities by analyzing their neighbors. A survey of the Sybil attack was carried out with the objective of resolving this problem, and it proposes a combined CAM-PVM (compare and match-position verification method) with MAP (message authentication and passing) for detecting, eliminating, and eventually preventing the entry of Sybil nodes in the network. We propose a scheme for assuring the security of wireless sensor networks that deals with attacks of these kinds in unicasting and multicasting.

  10. Detecting and Preventing Sybil Attacks in Wireless Sensor Networks Using Message Authentication and Passing Method

    PubMed Central

    Dhamodharan, Udaya Suriya Raj Kumar; Vayanaperumal, Rajamani

    2015-01-01

    Security protection is highly indispensable for wireless sensor networks. Highly critical attacks of various kinds against wireless sensor networks have been documented by many researchers. The Sybil attack is a massively destructive attack against the sensor network, in which numerous forged identities are used alongside genuine ones to gain illegal entry into the network. Discerning Sybil, sinkhole, and wormhole attacks during multicasting is a demanding task in a wireless sensor network. Basically, a Sybil attack involves a node that presents forged identities to other nodes. Communication with an illegal node results in data loss and endangers the network. The existing Random Password Comparison method only verifies node identities by analyzing their neighbors. A survey of the Sybil attack was carried out with the objective of resolving this problem, and it proposes a combined CAM-PVM (compare and match-position verification method) with MAP (message authentication and passing) for detecting, eliminating, and eventually preventing the entry of Sybil nodes in the network. We propose a scheme for assuring the security of wireless sensor networks that deals with attacks of these kinds in unicasting and multicasting. PMID:26236773

  11. Toward fidelity between specification and implementation

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.; Morrison, Jeff; Wu, Yunqing

    1994-01-01

    This paper describes the methods used to specify and implement a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally by two complementary teams using a combination of formal and informal techniques in an attempt to ensure the correctness of the protocol implementation. The first team, called the Design team, initially specified protocol requirements using a variant of SCR requirements tables and implemented a prototype solution. The second team, called the V&V team, developed a state model based on the requirements tables and derived test cases from these tables to exercise the implementation. In a series of iterative steps, the Design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation through testing. Test cases derived from state transition paths in the formal model formed the dialogue between teams during development and served as the vehicles for keeping the model and implementation in fidelity with each other. This paper describes our experiences in developing our process model, details of our approach, and some example problems found during the development of RMP.

  12. Ammonia-oxidizing bacteria dominate ammonia oxidation in a full-scale wastewater treatment plant revealed by DNA-based stable isotope probing.

    PubMed

    Pan, Kai-Ling; Gao, Jing-Feng; Li, Hong-Yu; Fan, Xiao-Yan; Li, Ding-Chang; Jiang, Hao

    2018-05-01

    A full-scale wastewater treatment plant (WWTP) with three separate treatment processes was selected to investigate the effects of seasonality and treatment process on the community structures of ammonia-oxidizing archaea (AOA) and bacteria (AOB). DNA-based stable isotope probing (DNA-SIP) was then applied to identify the active ammonia oxidizers. The results of high-throughput sequencing indicated that treatment processes altered AOB communities rather than AOA communities. AOA slightly outnumbered AOB in most of the samples, and their abundance was significantly correlated with temperature. DNA-SIP results showed that the majority of AOB amoA genes were labeled by the 13C-substrate, while only a small amount of AOA amoA genes were labeled. As revealed by high-throughput sequencing of heavy DNA, Nitrosomonadaceae-like AOB, Nitrosomonas sp. NP1, Nitrosomonas oligotropha and Nitrosomonas marina were the active AOB, and Nitrososphaera viennensis dominated the active AOA. The results indicated that AOB, not AOA, dominated active ammonia oxidation in the test WWTP. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    PubMed

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as a header has been the most important challenge of RNC. Moreover, because the Gauss-Jordan elimination method is employed, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method that ensures there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations, so peers incur very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms the RNC scheme that uses the Gauss-Jordan elimination method, providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay.
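
    For intuition on the coefficient-header and decoding-cost issues discussed above, the sketch below performs baseline random linear network coding over GF(2): coded packets carry a random coefficient vector plus the XOR of the selected source blocks, and decoding is Gaussian elimination on the received rows. MATIN's single-entry headers and dependency-free coefficient matrix are not reproduced; block counts and sizes are assumed for illustration.

        import numpy as np

        rng = np.random.default_rng(1)
        n_blocks, blk_len = 4, 8                       # 4 source blocks of 8 bytes (assumed sizes)
        source = rng.integers(0, 256, size=(n_blocks, blk_len), dtype=np.uint8)

        def encode_packet():
            # Random GF(2) coefficient vector; payload is the XOR of the selected blocks.
            coeffs = rng.integers(0, 2, size=n_blocks, dtype=np.uint8)
            payload = np.zeros(blk_len, dtype=np.uint8)
            for j in np.flatnonzero(coeffs):
                payload ^= source[j]
            return coeffs, payload

        def decode(packets):
            # Gaussian elimination over GF(2) on the augmented rows [coeffs | payload].
            A = np.array([np.concatenate([c, p]) for c, p in packets], dtype=np.uint8)
            row = 0
            for col in range(n_blocks):
                pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
                if pivot is None:
                    return None                        # rank-deficient: keep collecting packets
                A[[row, pivot]] = A[[pivot, row]]
                for r in range(len(A)):
                    if r != row and A[r, col]:
                        A[r] ^= A[row]
                row += 1
            return A[:n_blocks, n_blocks:]             # decoded source blocks, in order

        packets = [encode_packet() for _ in range(8)]  # a few extra packets for full rank
        print(np.array_equal(decode(packets), source)) # True once the received rank reaches n_blocks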

  14. Stable cellular models of nuclear receptor PXR for high-throughput evaluation of small molecules.

    PubMed

    Negi, Seema; Singh, Shashi Kala; Kumar, Sanjay; Kumar, Subodh; Tyagi, Rakesh K

    2018-06-19

    Pregnane & Xenobiotic Receptor (PXR) is one of the 48 members of the ligand-modulated transcription factors belonging to the nuclear receptor superfamily. Though PXR is now well-established as a 'xenosensor' regulating the central detoxification and drug-metabolizing machinery, it has also emerged as a key player in several metabolic disorders. This makes PXR attractive to both researchers and the pharmaceutical industry, since the clinical success of small drug molecules can be pre-evaluated on the PXR platform. At the early stages of drug discovery, cell-based assays are used for high-throughput screening of small molecules. The future success or failure of a drug can be predicted by this approach, saving expensive resources and time. In view of this, we have developed a human liver cell line-based, dual-level screening and validation protocol on the PXR platform for assessing small molecules. We have generated two different stably transfected cell lines: (i) a stable promoter-reporter cell line (HepXREM) expressing PXR and a commonly used CYP3A4 promoter-reporter, i.e. XREM-luciferase; and (ii) two stable cell lines integrated with proximal PXR-promoter-reporters (Hepx-1096/+43 and Hepx-497/+43). Employing the HepXREM, Hepx-1096/+43 and Hepx-497/+43 stable cell lines, more than 25 anti-cancer herbal drug ingredients were screened to examine their modulatory effects on (a) PXR transcriptional activity and (b) PXR-promoter activity. In conclusion, the present report provides a convenient and economical dual-level screening system to facilitate the identification of superior therapeutic small molecules. Copyright © 2018. Published by Elsevier Ltd.

  15. Automated, high-throughput platform for protein solubility screening using a split-GFP system

    PubMed Central

    Listwan, Pawel; Terwilliger, Thomas C.

    2010-01-01

    Overproduction of soluble and stable proteins for functional and structural studies is a major bottleneck for structural genomics programs and traditional biochemistry laboratories. Many high-payoff proteins that are important in various biological processes are “difficult to handle” as protein reagents in their native form. We have recently made several advances in enabling biochemical technologies for improving protein stability (http://www.lanl.gov/projects/gfp/), allowing stratagems for efficient protein domain trapping, solubility-improving mutations, and finding protein folding partners. In particular, split-GFP protein tags are a very powerful tool for detection of stable protein domains. Soluble, stable proteins tagged with the 15 amino acid GFP fragment (amino acids 216–228) can be detected in vivo and in vitro using the engineered GFP 1–10 “detector” fragment (amino acids 1–215). If the small tag is accessible, the detector fragment spontaneously binds, resulting in fluorescence. Here, we describe our current and on-going efforts to move this process from the bench (manual sample manipulation) to an automated, high-throughput, liquid-handling platform. We discuss optimization and validation of bacterial culture growth, lysis protocols, protein extraction, and assays of soluble and insoluble protein in a multiple 96-well plate format. The optimized liquid-handling protocol can be used for rapid determination of the optimal, compact domains from single ORFs, collections of ORFs, or cDNA libraries. PMID:19039681

  16. A Novel Group Coordination Protocol for Collaborative Multimedia Systems

    DTIC Science & Technology

    1998-01-01

    technology have advanced considerably, efficient group coordination support for applications characterized by synchronous and wide-area groupwork is...As a component within a general coordination architecture for many-to-many groupwork, floor control coexists with protocols for reliable ordered...multicast and media synchronization at a sub-application level. Orchestration of multiparty groupwork with fine-grained and fair floor control is an

  17. 78 FR 19051 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-28

    ... that is in test mode in excess of one. (c)-(f) No change. (g) Other Port Fees Remote Multi-cast ITCH... environment to test upcoming NASDAQ releases and product enhancements, as well as test software prior to... public in accordance with the provisions of 5 U.S.C. 552, will be available for Web site viewing and...

  18. Multiuser Transmit Beamforming for Maximum Sum Capacity in Tactical Wireless Multicast Networks

    DTIC Science & Technology

    2006-08-01

    commonly used extended Kalman filter. See [2, 5, 6] for recent tutorial overviews. In particle filtering, continuous distributions are approximated by...signals (using and developing associated particle filtering tools). Our work on these topics has been reported in seven (IEEE, SIAM) journal papers and...multidimensional scaling, tracking, intercept, particle filters.

  19. Opinion: Why we need a centralized repository for isotopic data

    USGS Publications Warehouse

    Pauli, Jonathan N.; Newsome, Seth D.; Cook, Joseph A.; Harrod, Chris; Steffan, Shawn A.; Baker, Christopher J. O.; Ben-David, Merav; Bloom, David; Bowen, Gabriel J.; Cerling, Thure E.; Cicero, Carla; Cook, Craig; Dohm, Michelle; Dharampal, Prarthana S.; Graves, Gary; Gropp, Robert; Hobson, Keith A.; Jordan, Chris; MacFadden, Bruce; Pilaar Birch, Suzanne; Poelen, Jorrit; Ratnasingham, Sujeevan; Russell, Laura; Stricker, Craig A.; Uhen, Mark D.; Yarnes, Christopher T.; Hayden, Brian

    2017-01-01

    Stable isotopes encode and integrate the origin of matter; thus, their analysis offers tremendous potential to address questions across diverse scientific disciplines (1, 2). Indeed, the broad applicability of stable isotopes, coupled with advancements in high-throughput analysis, has created a scientific field that is growing exponentially and generating data at a rate paralleling the explosive rise of DNA sequencing and genomics (3). Centralized data repositories, such as GenBank, have become increasingly important as a means of archiving information, and “Big Data” analytics of these resources are revolutionizing science and everyday life.

  20. A high-throughput assay of NK cell activity in whole blood and its clinical application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Saet-byul; Cha, Junhoe; Kim, Im-kyung

    2014-03-14

    Graphical abstract: - Highlights: • We demonstrate a simple assay of NK cell activity from whole blood. • The measurement of secreted IFN-γ from NK cells enables high-throughput screening. • The NKA assay was validated by clinical results of colorectal cancer patients. - Abstract: Natural killer (NK) cells are lymphocytes of the innate immune system and have the ability to kill tumor cells and virus-infected cells without prior sensitization. Malignant tumors and viruses have, however, developed strategies to suppress NK cells to escape from their responses. Thus, the evaluation of NK cell activity (NKA) could be invaluable for estimating the status and outcome of cancers, viral infections, and immune-mediated diseases. Established methods that measure NKA, such as the 51Cr release assay and the CD107a degranulation assay, may be used to determine NK cell function, but they are complicated and time-consuming because they require isolation of peripheral blood mononuclear cells (PBMC) or NK cells. In some cases these assays require hazardous materials such as radioactive isotopes. To overcome these difficulties, we developed a simple assay that uses whole blood instead of PBMC or isolated NK cells. This novel assay is suitable for high-throughput screening and the monitoring of diseases, because it employs serum of ex vivo stimulated whole blood to detect interferon (IFN)-γ secreted from NK cells as an indicator of NKA. After the stimulation of NK cells, the determination of the IFN-γ concentration in serum samples by enzyme-linked immunosorbent assay (ELISA) provided a swift, uncomplicated, and high-throughput assay of NKA ex vivo. The NKA results showed that microsatellite stable (MSS) colorectal cancer patients had significantly lower NKA (263.6 ± 54.5 pg/mL) than healthy subjects (867.5 ± 50.2 pg/mL; p < 0.0001). Therefore, the NKA could be utilized as a supportive diagnostic marker for microsatellite stable (MSS) colorectal cancer.

  1. Generation and characterization of West Nile pseudo-infectious reporter virus for antiviral screening.

    PubMed

    Zhang, Hong-Lei; Ye, Han-Qing; Deng, Cheng-Lin; Liu, Si-Qing; Shi, Pei-Yong; Qin, Cheng-Feng; Yuan, Zhi-Ming; Zhang, Bo

    2017-05-01

    West Nile virus (WNV), a mosquito-borne flavivirus, is an important neurotropic human pathogen. As a biosafety level-3 (BSL-3) agent, WNV is strictly restricted to BSL-3 laboratories for experimentation, which greatly hinders the development of vaccines and antiviral drugs. Here, we developed a novel pseudo-infectious WNV reporter virus expressing the Gaussia luciferase (Gluc). A stable 293T NS1 cell line expressing NS1 was selected for trans-supplying the NS1 protein to support the replication of the WNV-ΔNS1 virus and the WNV-ΔNS1-Gluc reporter virus, both carrying a large-fragment deletion of NS1. The WNV-ΔNS1 virus and WNV-ΔNS1-Gluc reporter virus were confined to completing their replication cycle in this 293T NS1 cell line, displaying nearly identical growth kinetics to wild-type (WT) WNV, although the viral titers were lower than those of WT WNV. The reporter gene was stably maintained in the virus genome for at least three rounds of passage in the 293T NS1 cell line. Using a known flavivirus inhibitor, NITD008, we demonstrated that the pseudo-infectious WNV-ΔNS1-Gluc virus could be used for antiviral screening. Furthermore, a high-throughput screening (HTS) assay in a 96-well format was optimized and validated using several known WNV inhibitors, indicating that the optimized HTS assay was suitable for high-throughput screening of WNV inhibitors. Our work provides a stable and safe tool for handling WNV outside of a BSL-3 facility and facilitates high-throughput screening for anti-WNV drugs. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Enhanced Performance & Functionality of Tunable Delay Lines

    DTIC Science & Technology

    2012-08-01

    Figure 6. Experimental setup. Transmitter is capable of generating 80-Gb/s RZ-DQPSK, 40-Gb/s RZ-DPSK and 40-Gb/s RZ-OOK modulation formats. Phase...Power penalty with respect to B2B of each channel for 2-, 4-, 8-fold multicasting. (c) Pulsewidth as a function of DGD along with eye diagrams of 2...63 Figure 99. Concept. (a) A distributed optical network ; (b) NOLMs for

  3. Scalable Technology for a New Generation of Collaborative Applications

    DTIC Science & Technology

    2007-04-01

    of the International Symposium on Distributed Computing (DISC), Cracow, Poland, September 2005. Classic Paxos vs. Fast Paxos: Caveat Emptor, Flavio...grou or able and fast multicast primitive to layer under high-level latency across dimensions as varied as group size [10, 17], abstractions such as...servers, networked via fast, dedicated interconnects. The system to subscribe to a fraction of the equities on the software stack running on a single

  4. Saguaro: A Distributed Operating System Based on Pools of Servers.

    DTIC Science & Technology

    1988-03-25

    asynchronous message passing, multicast, and semaphores are supported. We have found this flexibility to be very useful for distributed programming. The...variety of communication primitives provided by SR has facilitated the research of Stella Atkins, who was a visiting professor at Arizona during Spring...data bits in a raw communication channel to help keep the source and destination synchronized , Psync explicitly embeds timing information drawn from the

  5. Extensible Interest Management for Scalable Persistent Distributed Virtual Environments

    DTIC Science & Technology

    1999-12-01

    Calvin, Cebula et al. 1995; Morse, Bic et al. 2000) uses a two grid, with each grid cell having two multicast addresses. An entity expresses interest...[figure residue: Entity distribution for experimental runs]...Multiple Users and Shared Applications with VRML. VRML 97, Monterey, CA. pp. 33-40. Calvin, J. O., D. P. Cebula, et al. (1995). Data Subscription in

  6. Improved Lower Bounds on the Price of Stability of Undirected Network Design Games

    NASA Astrophysics Data System (ADS)

    Bilò, Vittorio; Caragiannis, Ioannis; Fanelli, Angelo; Monaco, Gianpiero

    Bounding the price of stability of undirected network design games with fair cost allocation is a challenging open problem in the Algorithmic Game Theory research agenda. Even though the generalization of such games to directed networks is well understood in terms of the price of stability (it is exactly H_n, the n-th harmonic number, for games with n players), far less is known for network design games in undirected networks. The upper bound carries over to this case as well, while the best known lower bound is 42/23 ≈ 1.826. For more restricted but interesting variants of such games, such as broadcast and multicast games, sublogarithmic upper bounds are known, while the best known lower bound is 12/7 ≈ 1.714. In the current paper, we improve the lower bounds as follows. We break the psychological barrier of 2 by showing that the price of stability of undirected network design games is at least 348/155 ≈ 2.245. Our proof uses a recursive construction of a network design game with a simple gadget as the main building block. For broadcast and multicast games, we present new lower bounds of 20/11 ≈ 1.818 and 1.862, respectively.
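
    For readers unfamiliar with the quantities in this record, the standard textbook definitions behind the harmonic-number bound are summarized below; these are generic definitions, not results taken from the paper itself.

        % Fair (Shapley) cost sharing, the price of stability (PoS), and the harmonic number.
        \[
          \text{share}_i(e) \;=\; \frac{c_e}{k_e}
          \quad\text{for each of the } k_e \text{ players using edge } e,
          \qquad
          \mathrm{PoS} \;=\; \frac{\min_{S \,\in\, \text{Nash equilibria}} \mathrm{cost}(S)}{\mathrm{cost}(\mathrm{OPT})},
          \qquad
          H_n \;=\; \sum_{i=1}^{n} \frac{1}{i} \;\le\; 1 + \ln n .
        \]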

  7. A framework using cluster-based hybrid network architecture for collaborative virtual surgery.

    PubMed

    Qin, Jing; Choi, Kup-Sze; Poon, Wai-Sang; Heng, Pheng-Ann

    2009-12-01

    Research on collaborative virtual environments (CVEs) opens the opportunity for simulating the cooperative work in surgical operations. It is however a challenging task to implement a high performance collaborative surgical simulation system because of the difficulty in maintaining state consistency with minimum network latencies, especially when sophisticated deformable models and haptics are involved. In this paper, an integrated framework using cluster-based hybrid network architecture is proposed to support collaborative virtual surgery. Multicast transmission is employed to transmit updated information among participants in order to reduce network latencies, while system consistency is maintained by an administrative server. Reliable multicast is implemented using distributed message acknowledgment based on cluster cooperation and sliding window technique. The robustness of the framework is guaranteed by the failure detection chain which enables smooth transition when participants join and leave the collaboration, including normal and involuntary leaving. Communication overhead is further reduced by implementing a number of management approaches such as computational policies and collaborative mechanisms. The feasibility of the proposed framework is demonstrated by successfully extending an existing standalone orthopedic surgery trainer into a collaborative simulation system. A series of experiments have been conducted to evaluate the system performance. The results demonstrate that the proposed framework is capable of supporting collaborative surgical simulation.
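
    As a rough illustration of the sliding-window, acknowledgment-driven reliability described above, the following sketch shows sender-side bookkeeping only: packets stay buffered until every tracked receiver (or cluster head) has acknowledged them, and a NACK triggers a re-multicast. The class name, window size and message handling are hypothetical simplifications, not the paper's cluster-based protocol.

        # Hypothetical sender-side sliding-window bookkeeping for a reliable multicast.
        class SlidingWindowSender:
            def __init__(self, members, window_size=32):
                self.members = set(members)      # receivers (or cluster heads) to track
                self.window = window_size
                self.next_seq = 0
                self.unacked = {}                # seq -> [payload, members still missing it]

            def can_send(self):
                return len(self.unacked) < self.window

            def send(self, payload, multicast_fn):
                if not self.can_send():
                    raise RuntimeError("window full; wait for acknowledgments")
                seq = self.next_seq
                self.next_seq += 1
                self.unacked[seq] = [payload, set(self.members)]
                multicast_fn(seq, payload)       # e.g. a UDP multicast send
                return seq

            def on_ack(self, member, seq):
                entry = self.unacked.get(seq)
                if entry is not None:
                    entry[1].discard(member)
                    if not entry[1]:
                        del self.unacked[seq]    # fully acknowledged: the window slides forward

            def on_nack(self, member, seq, multicast_fn):
                entry = self.unacked.get(seq)
                if entry is not None:
                    multicast_fn(seq, entry[0])  # simple recovery: re-multicast the packet

        # Example with an in-memory "network" instead of real sockets.
        delivered = []
        sender = SlidingWindowSender(members=["A", "B"], window_size=4)
        s = sender.send(b"update-1", lambda seq, p: delivered.append((seq, p)))
        sender.on_ack("A", s)
        sender.on_ack("B", s)                    # window slides once both members acknowledge
        assert s not in sender.unacked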

  8. High-throughput and selective solid-phase extraction of urinary catecholamines by crown ether-modified resin composite fiber.

    PubMed

    Chen, LiQin; Wang, Hui; Xu, Zhen; Zhang, QiuYue; Liu, Jia; Shen, Jun; Zhang, WanQi

    2018-08-03

    In the present study, we developed a simple and high-throughput solid-phase extraction (SPE) procedure for selective extraction of catecholamines (CAs) in urine samples. The SPE adsorbents were electrospun composite fibers functionalized with 4-carboxybenzo-18-crown-6 ether-modified XAD resin and polystyrene, which were packed into 96-well columns and used for high-throughput selective extraction of CAs in healthy human urine samples. The extraction efficiency of packed-fiber SPE (PFSPE) was examined by high-performance liquid chromatography coupled with a fluorescence detector. The parameters affecting the extraction efficiency and impurity removal efficiency were optimized; good linearity ranging from 0.5 to 400 ng/mL was obtained with a low limit of detection (LOD, 0.2-0.5 ng/mL) and good repeatability (2.7%-3.7%, n = 6). The extraction recoveries of the three CAs ranged from 70.5% to 119.5%. Furthermore, the stable and reliable results obtained with the fluorescence detector were superior to those obtained with an electrochemical detector. Collectively, PFSPE coupled with 96-well columns is a simple, rapid, selective, high-throughput and cost-efficient method that could be applied in clinical chemistry. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Design of high-throughput and low-power true random number generator utilizing perpendicularly magnetized voltage-controlled magnetic tunnel junction

    NASA Astrophysics Data System (ADS)

    Lee, Hochul; Ebrahimi, Farbod; Amiri, Pedram Khalili; Wang, Kang L.

    2017-05-01

    A true random number generator based on perpendicularly magnetized voltage-controlled magnetic tunnel junction devices (MRNG) is presented. Unlike MTJs used in memory applications, where a stable bit is needed to store information, in this work the MTJ is intentionally designed with small perpendicular magnetic anisotropy (PMA). This allows one to take advantage of the thermally activated fluctuations of its free layer as a stochastic noise source. Furthermore, we take advantage of the voltage dependence of anisotropy to temporarily drive the MTJ into an unstable state when a voltage is applied. Since the MTJ has two energetically stable states, the final state is randomly chosen by thermal fluctuation. The voltage-controlled magnetic anisotropy (VCMA) effect is used to generate the metastable state of the MTJ by lowering its energy barrier. The proposed MRNG achieves a high throughput (32 Gbps) by implementing a 64 × 64 MTJ array in CMOS circuits and executing operations in a parallel manner. Furthermore, the circuit consumes very little energy to generate a random bit (31.5 fJ/bit) due to the high energy efficiency of voltage-controlled MTJ switching.
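
    A quick back-of-the-envelope check shows how the quoted throughput relates to the array parallelism. Only the 64 × 64 array size, the 32 Gbps throughput and the 31.5 fJ/bit figure come from the abstract; the per-cell cycle rate derived below is our inference, not a reported specification.

        # Back-of-the-envelope check of the parallel-array throughput quoted above.
        rows, cols = 64, 64
        bits_per_cycle = rows * cols                      # one random bit per MTJ per array cycle
        target_throughput = 32e9                          # 32 Gbps, from the abstract
        cycle_rate = target_throughput / bits_per_cycle   # required array cycle rate (inferred)
        energy_per_bit = 31.5e-15                         # 31.5 fJ/bit, from the abstract
        power = target_throughput * energy_per_bit        # average power at full rate
        print(f"cycle rate ~ {cycle_rate / 1e6:.2f} MHz") # ~7.81 MHz
        print(f"power ~ {power * 1e3:.2f} mW")            # ~1.01 mW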

  10. MATIN: A Random Network Coding Based Framework for High Quality Peer-to-Peer Live Video Streaming

    PubMed Central

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as the packet header has been the most important challenge of RNC. Moreover, because the Gauss-Jordan elimination method is used to decode the encoded blocks and to check linear dependency among the coefficients vectors, considerable computational complexity can be imposed on peers. To address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method that guarantees there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations, so peers incur very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNeT++ show that it substantially outperforms RNC based on the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay. PMID:23940530

  11. An Evaluation of the Network Efficiency Required in Order to Support Multicast and Synchronous Distributed Learning Network Traffic

    DTIC Science & Technology

    2003-09-01

    This restriction limits the deployment to small and medium sized enterprises. The Internet cannot universally use DVMRP for this reason. In addition...University, 1996. Submitted in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE IN COMPUTER SCIENCE from

  12. Traffic Generator (TrafficGen) Version 1.4.2: Users Guide

    DTIC Science & Technology

    2016-06-01

    events, the user has to enter them manually. We will research and implement a way to better define and organize the multicast addresses so they can be...the network with Transmission Control Protocol and User Datagram Protocol Internet Protocol traffic. Each node generating network traffic in an...TrafficGen Graphical User Interface (GUI); Anatomy of the User Interface; Scenario Configuration and MGEN Files; Working with

  13. GUMP: Adapting Client/Server Messaging Protocols into Peer-to-Peer Serverless Environments

    DTIC Science & Technology

    2010-06-11

    and other related metadata, such as message receiver ID (for supporting multiple connections) and so forth. The Proxy consumes the message and uses...the underlying discovery subsystem and multicast to process the message and translate the request into behaviour suitable for the underlying...communication, i.e. a chat. Jingle (XEP-0166) [26] is a related specification that defines an extension to the XMPP protocol for initiating and

  14. Tactical Mobile Communications (Communications tactiques mobiles)

    DTIC Science & Technology

    1999-11-01

    13]. randomly at the network nodes. Each multicast group Our studies do, in fact, support this conjecture. consists of the source node plus at least...Capability investigate the MMR concept in some more detail. The study was contracted to a group which Multi-role denotes the capability to support a...through the HW- and SW-resources of the frontends can be incorporated in a task-dedicated GPU. Functions can be grouped into four categories: MMR

  15. Multimedia Data Capture with Multicast Dissemination for Online Distance Learning

    DTIC Science & Technology

    2001-12-01

    Juan Gril and Dr. Don Brutzman to wrap the multiple videos in a user-friendly environment. The web pages also contain the original PowerPoint...this CD, Juan Gril, a volunteer for the Siggraph 2001 Online Committee, created web pages that match the style and functionality desired by the...leader. The Committee for 2001 consisted of Don Brutzman, Stephen Matsuba, Mike Collins, Allen Dutton, Juan Gril, Mike Hunsberger, Jerry Isdale

  16. The Impact on Quality of Service When Using Security-Enabling Filters to Provide for the Security of Run-Time Virtual Environments

    DTIC Science & Technology

    2002-09-01

    Secure Multicast...Message Digests and Message Authentication Codes (MACs)...that is, the needs of the VE will determine what the design will look like (e.g., reliable vs. unreliable data communications). In general, there...Molva00] and [Abdalla00]. Message Digests and Message Authentication Codes (MACs): Message digests and MACs are used for data integrity verification

  17. Robust Airborne Networking Extensions (RANGE)

    DTIC Science & Technology

    2008-02-01

    IMUNES [13] project, which provides an entire network stack virtualization and topology control inside a single FreeBSD machine. The emulated topology...Multicast versus broadcast in a manet." in ADHOC-NOW, 2004, pp. 14–27. [9] J. Mukherjee, R. Atwood, "Rendezvous point relocation in protocol independent...computer with an Ethernet connection, or a Linux virtual machine on some other (e.g., Windows) operating system, should work. 2.1 Patching the source code

  18. Rapid Catalyst Screening by a Continuous-Flow Microreactor Interfaced with Ultra High Pressure Liquid Chromatography

    PubMed Central

    Fang, Hui; Xiao, Qing; Wu, Fanghui; Floreancig, Paul E.; Weber, Stephen G.

    2010-01-01

    A high-throughput screening system for homogeneous catalyst discovery has been developed by integrating a continuous-flow capillary-based microreactor with ultra-high pressure liquid chromatography (UHPLC) for fast online analysis. Reactions are conducted in distinct and stable zones in a flow stream that allows for time and temperature regulation. UHPLC detection at high temperature allows high throughput online determination of substrate, product, and byproduct concentrations. We evaluated the efficacies of a series of soluble acid catalysts for an intramolecular Friedel-Crafts addition into an acyliminium ion intermediate within one day and with minimal material investment. The effects of catalyst loading, reaction time, and reaction temperature were also screened. This system exhibited high reproducibility for high-throughput catalyst screening and allowed several acid catalysts for the reaction to be identified. Major side products from the reactions were determined through off-line mass spectrometric detection. Er(OTf)3, the catalyst that showed optimal efficiency in the screening, was shown to be effective at promoting the cyclization reaction on a preparative scale. PMID:20666502

  19. Mass Transfer Testing of a 12.5-cm Rotor Centrifugal Contactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. H. Meikrantz; T. G. Garn; J. D. Law

    2008-09-01

    TRUEX mass transfer tests were performed using a single-stage, commercially available 12.5 cm centrifugal contactor and stable cerium (Ce) and europium (Eu). Test conditions included throughputs ranging from 2.5 to 15 Lpm and rotor speeds of 1750 and 2250 rpm. Ce and Eu extraction forward distribution coefficients ranged from 13 to 19. The first- and second-stage strip back-distributions were 0.5 to 1.4 and 0.002 to 0.004, respectively, throughout the dynamic test conditions studied. Visual carryover of aqueous entrainment in all organic phase samples was estimated at < 0.1%, and organic carryover into all aqueous phase samples was about ten times less. Mass transfer efficiencies of ≥ 98% for both Ce and Eu in the extraction section were obtained over the entire range of test conditions. The first strip stage mass transfer efficiencies ranged from 75 to 93%, trending higher with increasing throughput. Second stage mass transfer was greater than 99% in all cases. Increasing the rotor speed from 1750 to 2250 rpm had no significant effect on efficiency for all throughputs tested.

  20. A multilayer microdevice for cell-based high-throughput drug screening

    NASA Astrophysics Data System (ADS)

    Liu, Chong; Wang, Lei; Xu, Zheng; Li, Jingmin; Ding, Xiping; Wang, Qi; Chunyu, Li

    2012-06-01

    A multilayer polydimethylsiloxane microdevice for cell-based high-throughput drug screening is described in this paper. The microdevice was designed using a modularization approach and integrates a drug/medium concentration gradient generator (CGG), pneumatic microvalves and a cell culture microchamber array. The CGG is able to generate five steps of linear concentrations with the same outlet flow rate. The medium/drug flows through the CGG and then vertically into the pear-shaped cell culture microchambers. This vertical perfusion mode reduces the impact of flow-induced shear stress on the physiology of cells in the microchambers. Pear-shaped microchambers with two arrays of micropillars at each outlet were adopted in this microdevice, which is beneficial for cell distribution. Apoptosis experiments in which the chemotherapeutic cisplatin (DDP) was applied to the cisplatin-resistant cell line A549/DDP were performed successfully on this platform. The results showed that this novel microdevice not only provides well-defined and stable conditions for cell culture, but is also useful for cell-based high-throughput drug screening with lower reagent and time consumption.

  1. Genome editing in the mushroom-forming basidiomycete Coprinopsis cinerea, optimized by a high-throughput transformation system.

    PubMed

    Sugano, Shigeo S; Suzuki, Hiroko; Shimokita, Eisuke; Chiba, Hirofumi; Noji, Sumihare; Osakabe, Yuriko; Osakabe, Keishi

    2017-04-28

    Mushroom-forming basidiomycetes produce a wide range of metabolites and have great value not only as food but also as an important global natural resource. Here, we demonstrate CRISPR/Cas9-based genome editing in the model species Coprinopsis cinerea. Using a high-throughput reporter assay with cryopreserved protoplasts, we identified a novel promoter, CcDED1pro, with seven times stronger activity in this assay than the conventional promoter GPD2. To develop highly efficient genome editing using CRISPR/Cas9 in C. cinerea, we used CcDED1pro to express Cas9 and a U6-snRNA promoter from C. cinerea to express the gRNA. Finally, CRISPR/Cas9-mediated GFP mutagenesis was performed in a stable GFP expression line. Individual genome-edited lines were isolated, and loss of GFP function was detected in hyphae and fruiting body primordia. This novel method of high-throughput CRISPR/Cas9-based genome editing using cryopreserved protoplasts should be a powerful tool in the study of edible mushrooms.

  2. iPSC-derived neurons as a higher-throughput readout for autism: Promises and pitfalls

    PubMed Central

    Prilutsky, Daria; Palmer, Nathan P.; Smedemark-Margulies, Niklas; Schlaeger, Thorsten M.; Margulies, David M.; Kohane, Isaac S.

    2014-01-01

    The elucidation of disease etiologies and establishment of robust, scalable, high-throughput screening assays for autism spectrum disorders (ASDs) have been impeded by both inaccessibility of disease-relevant neuronal tissue and the genetic heterogeneity of the disorder. Neuronal cells derived from induced pluripotent stem cells (iPSCs) from autism patients may circumvent these obstacles and serve as relevant cell models. To date, derived cells are characterized and screened by assessing their neuronal phenotypes. These characterizations are often etiology-specific or lack reproducibility and stability. In this manuscript, we present an overview of efforts to study iPSC-derived neurons as a model for autism, and we explore the plausibility of gene expression profiling as a reproducible and stable disease marker. PMID:24374161

  3. Interference-Assisted Techniques for Transmission and Multiple Access in Optical Communications

    NASA Astrophysics Data System (ADS)

    Guan, Xun

    Optical communications can take wired or wireless form. Fiber-optic communication (FOC) connects transmitters and receivers with optical fiber; benefiting from its high bandwidth, low cost per volume and stability, it holds a significant market share in long-haul networks, access networks and data centers. Meanwhile, optical wireless communication (OWC) is also emerging as a crucial player in the communication market; within OWC, free-space optical communication (FSO) and visible light communication (VLC) are being studied and commercially deployed extensively. Interference is a common phenomenon in multi-user communication systems. In both FOC and OWC, interference has long been treated as a detrimental effect, yet it can also benefit system applications, and the effort of harnessing interference has spurred numerous innovations; notable examples are physical-layer network coding (PNC) and non-orthogonal multiple access (NOMA). The first part of this thesis is on the topic of PNC. PNC was first proposed in wireless communication to improve the throughput of a two-way relay network (TWRN). As a variation of network coding (NC), PNC treats the common channel interference (CCI) as a natural network coding operation. In this thesis, PNC is introduced into optical communication, and three schemes are proposed for different scenarios. First, PNC is applied to a coherent optical orthogonal frequency division multiplexing (CO-OFDM) system to improve the throughput of a multicast network, with a very low optical signal-to-noise ratio (OSNR) penalty. Second, we investigate the application of PNC in an OFDM passive optical network (OFDM-PON) supporting heterogeneous services; only minor receiver power penalties are observed when realizing PNC-based virtual private networks (VPNs), in both the wired and wireless service parts of the OFDM-PON. Third, we extend relay-based visible light communication (VLC) by adopting PNC together with a newly proposed phase-aligning method; PNC improves the throughput at the bottlenecking relay node of the VLC system, and the phase-aligning method improves the BER performance. The second part of this thesis discusses another interference-assisted technology, non-orthogonal multiple access (NOMA). NOMA multiplexes signals from multiple users in an additional dimension, the power domain, while multiplexing non-orthogonally in other dimensions such as time, frequency and code. Three schemes are proposed in this part. The first and second schemes both realize NOMA in VLC, with different multiuser detection (MUD) techniques and a proposed phase pre-distortion method; although both decrease the system BER compared with conventional NOMA, the scheme using joint detection (JD) outperforms the one using successive interference cancellation (SIC). The third scheme is a combination of NOMA and a multicarrier precoding (MP) technology based on an orthogonal circulant transform matrix (OCT); this combination avoids complicated adaptive bit loading or electronic equalization, making NOMA more attractive in a practical system.
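
    To make the SIC baseline mentioned above concrete, here is a toy two-user power-domain NOMA example with successive interference cancellation in a noise-free BPSK baseband model. The power split and symbol mapping are illustrative assumptions; the thesis's joint-detection and phase pre-distortion schemes are not reproduced here.

        # Toy two-user power-domain NOMA with successive interference cancellation (SIC).
        import numpy as np

        rng = np.random.default_rng(0)
        bits_far = rng.integers(0, 2, 8)           # "far"/weak user, gets more power
        bits_near = rng.integers(0, 2, 8)          # "near"/strong user, gets less power
        p_far, p_near = 0.8, 0.2                   # assumed power allocation, p_far + p_near = 1

        x_far = 2 * bits_far - 1                   # BPSK mapping {0,1} -> {-1,+1}
        x_near = 2 * bits_near - 1
        tx = np.sqrt(p_far) * x_far + np.sqrt(p_near) * x_near   # superposed (NOMA) signal

        # Far user: decodes its own symbols directly, treating the near user as noise.
        far_hat = (tx > 0).astype(int)

        # Near user (SIC): first decode the far user's symbols, re-modulate and subtract,
        # then decode its own symbols from the residual.
        far_est = np.sign(tx)
        residual = tx - np.sqrt(p_far) * far_est
        near_hat = (residual > 0).astype(int)

        assert (far_hat == bits_far).all() and (near_hat == bits_near).all()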

  4. Collaboration Services: Enabling Chat in Disadvantaged Grids

    DTIC Science & Technology

    2014-06-01

    grids in the tactical domain" [2]. The main focus of this group is to identify what we call tactical SOA foundation services. By this we mean which...Here, only IPv4 is supported, as differences relating to IPv4 and IPv6 addressing meant that this functionality was not easily extended to use IPv6 ...multicast groups. Our IPv4 implementation is fully compliant with the specification, whereas the IPv6 implementation uses our own interpretation of

  5. Design and Implementation of the MARG Human Body Motion Tracking System

    DTIC Science & Technology

    2004-10-01

    OPTOTRAK from Northern Digital Inc. is a typical example of a marker-based system [10]. Another is the...technique called tunneling is used to overcome this problem. Tunneling is a software solution that runs on the end point routers/computers and allows...multicast packets to traverse the network by putting them into unicast packets. MUTUP overcomes the tunneling problem using shared memory in the

  6. A Secure Group Communication Architecture for a Swarm of Autonomous Unmanned Aerial Vehicles

    DTIC Science & Technology

    2008-03-01

    members to use the same decryption key. This shared decryption key is called the Session Encryption Key (SEK) or Traffic Encryption Key (TEK...Since everyone shares the SEK, members need to hold additional Key Encryption Keys (KEK) that are used to securely distribute the SEK to each valid...managing this process. To preserve the secrecy of the multicast data, the SEK needs to be updated upon certain events such as a member joining and

  7. Efficient File Sharing by Multicast - P2P Protocol Using Network Coding and Rank Based Peer Selection

    NASA Technical Reports Server (NTRS)

    Stoenescu, Tudor M.; Woo, Simon S.

    2009-01-01

    In this work, we consider information dissemination and sharing in a distributed peer-to-peer (P2P highly dynamic communication network. In particular, we explore a network coding technique for transmission and a rank based peer selection method for network formation. The combined approach has been shown to improve information sharing and delivery to all users when considering the challenges imposed by the space network environments.

  8. Network connectivity enhancement by exploiting all optical multicast in semiconductor ring laser

    NASA Astrophysics Data System (ADS)

    Siraj, M.; Memon, M. I.; Shoaib, M.; Alshebeili, S.

    2015-03-01

    Smartphone and tablet applications will allow troops to execute, control and analyze sophisticated operations, with commanders providing crucial documents directly to troops wherever and whenever needed. Wireless mesh networks (WMNs) are a cutting-edge networking technology capable of supporting the Joint Tactical Radio System (JTRS). WMNs can provide the bandwidth needed for applications such as hand-held radios and communication for airborne and ground vehicles, and routing management tasks can be handled efficiently through WMNs via a central command and control center. As the spectrum space is congested, cognitive radios are a welcome technology that provides much-needed bandwidth: they can self-configure, adapt to user requirements, provide dynamic spectrum access to minimize interference, and deliver optimal power output. Indoor environments, however, often suffer from poor signal quality and reduced coverage. In this paper, a solution utilizing CR WMNs over an optical network is presented by creating nanocells (PCs) inside the indoor environment. The phenomenon of four-wave mixing (FWM) is exploited to generate all-optical multicast using a semiconductor ring laser (SRL), so that the same signal is transmitted at different wavelengths and every PC is assigned a unique wavelength. Using CR technology in conjunction with PCs will not only solve the network coverage issue but also provide good bandwidth to the secondary users.

  9. Unidata LDM-7: a Hybrid Multicast/unicast System for Highly Efficient and Reliable Real-Time Data Distribution

    NASA Astrophysics Data System (ADS)

    Emmerson, S. R.; Veeraraghavan, M.; Chen, S.; Ji, X.

    2015-12-01

    Results of a pilot deployment of a major new version of the Unidata Local Data Manager (LDM-7) are presented. The Unidata LDM was developed by the University Corporation for Atmospheric Research (UCAR) and comprises a suite of software for the distribution and local processing of data in near real-time. It is widely used in the geoscience community to distribute observational data and model output, most notably as the foundation of the Unidata Internet Data Distribution (IDD) system run by UCAR, but also in private networks operated by NOAA, NASA, USGS, etc. The current version, LDM-6, uses at least one unicast TCP connection per receiving host. With over 900 connections, the bit-rate of total outgoing IDD traffic from UCAR averages approximately 3.0 Gbps, with peak data rates exceeding 6.6 Gbps. Expected increases in data volume suggest that a more efficient distribution mechanism will be required in the near future. LDM-7 greatly reduces the outgoing bandwidth requirement by incorporating a recently developed "semi-reliable" IP multicast protocol while retaining the unicast TCP mechanism for reliability. During the summer of 2015, UCAR and the University of Virginia conducted a pilot deployment of the Unidata LDM-7 among U.S. university participants with access to the Internet2 network. Results of this pilot program, along with comparisons to the existing Unidata LDM-6 system, are presented.

  10. Digital One-Disc-One-Compound Method for High-Throughput Discovery of Prostate Cancer - Targeting Ligands

    DTIC Science & Technology

    2015-10-01

    shown in Fig. 1a, the prepolymer mixture was sandwiched between photo mask and glass slide. Microdiscs were fabricated on the glass substrate through...polymerization of the prepolymer mixture and the acrylated silane under UV exposure. To achieve the more stable microdiscs for peptide synthesis, the...composition of prepolymer mixture was changed to PEG (Polyethylene Glycol)-diacrylate, crosslinker, photo initiator, 2-aminoethylmethacrylate, water

  11. Multiplex enrichment quantitative PCR (ME-qPCR): a high-throughput, highly sensitive detection method for GMO identification.

    PubMed

    Fu, Wei; Zhu, Pengyu; Wei, Shuang; Zhixin, Du; Wang, Chenguang; Wu, Xiyang; Li, Feiwu; Zhu, Shuifang

    2017-04-01

    Among all of the high-throughput detection methods, PCR-based methodologies are regarded as the most cost-efficient and feasible methodologies compared with the next-generation sequencing or ChIP-based methods. However, the PCR-based methods can only achieve multiplex detection up to 15-plex due to limitations imposed by the multiplex primer interactions. The detection throughput cannot meet the demands of high-throughput detection, such as SNP or gene expression analysis. Therefore, in our study, we have developed a new high-throughput PCR-based detection method, multiplex enrichment quantitative PCR (ME-qPCR), which is a combination of qPCR and nested PCR. The GMO content detection results in our study showed that ME-qPCR could achieve high-throughput detection up to 26-plex. Compared to the original qPCR, the Ct values of ME-qPCR were lower for the same group, which showed that ME-qPCR sensitivity is higher than the original qPCR. The absolute limit of detection for ME-qPCR could achieve levels as low as a single copy of the plant genome. Moreover, the specificity results showed that no cross-amplification occurred for irrelevant GMO events. After evaluation of all of the parameters, a practical evaluation was performed with different foods. The more stable amplification results, compared to qPCR, showed that ME-qPCR was suitable for GMO detection in foods. In conclusion, ME-qPCR achieved sensitive, high-throughput GMO detection in complex substrates, such as crops or food samples. In the future, ME-qPCR-based GMO content identification may positively impact SNP analysis or multiplex gene expression of food or agricultural samples. Graphical abstract For the first-step amplification, four primers (A, B, C, and D) have been added into the reaction volume. In this manner, four kinds of amplicons have been generated. All of these four amplicons could be regarded as the target of second-step PCR. For the second-step amplification, three parallels have been taken for the final evaluation. After the second evaluation, the final amplification curves and melting curves have been achieved.

  12. Development of a Web-Based Distributed Interactive Simulation (DIS) Environment Using JavaScript

    DTIC Science & Technology

    2014-09-01

    scripting that let users change or interact with web content depending on user input, which is in contrast with server-side scripts such as PHP, Java and...transfer, DIS usually broadcasts or multicasts its PDUs based on UDP socket. 3. JavaScript JavaScript is the scripting language of the web, and all...IDE) for developing desktop, mobile and web applications with JAVA , C++, HTML5, JavaScript and more. b. Framework The DIS implementation of

  13. The Use of End-to-End Multicast Measurements for Characterizing Internal Network Behavior

    DTIC Science & Technology

    2002-08-01

    dropping on the basis Random Early Detection ( RED ) [17] is another mechanism by which packet loss may become decorrelated. It remains to be seen whether...this mechanism will be widely deployed in communications networks. On the other hand, the use of RED to merely mark packets will not break correlations...Tail and Random Early Detection ( RED ) buffer discard methods, [17]. We compared the inferred loss and delay with actual probe loss and delay. We found

  14. High-throughput Cloning and Expression of Integral Membrane Proteins in Escherichia coli

    PubMed Central

    Bruni, Renato

    2014-01-01

    Recently, several structural genomics centers have been established and a remarkable number of three-dimensional structures of soluble proteins have been solved. For membrane proteins, the number of structures solved has been significantly trailing those for their soluble counterparts, not least because over-expression and purification of membrane proteins is a much more arduous process. By using high-throughput technologies, a large number of membrane protein targets can be screened simultaneously and a greater number of expression and purification conditions can be employed, leading to a higher probability of successfully determining the structure of membrane proteins. This unit describes the cloning, expression and screening of membrane proteins using high-throughput methodologies developed in our laboratory. Basic Protocol 1 deals with the cloning of inserts into expression vectors by ligation-independent cloning. Basic Protocol 2 describes the expression and purification of the target proteins on a miniscale. Lastly, for the targets that express at the miniscale, Basic Protocols 3 and 4 outline the methods employed for the expression and purification of targets at the midi-scale, as well as a procedure for detergent screening and identification of detergent(s) in which the target protein is stable. PMID:24510647

  15. High-Throughput Synthesis and Structure of Zeolite ZSM-43 with Two-Directional 8-Ring Channels.

    PubMed

    Willhammar, Tom; Su, Jie; Yun, Yifeng; Zou, Xiaodong; Afeworki, Mobae; Weston, Simon C; Vroman, Hilda B; Lonergan, William W; Strohmaier, Karl G

    2017-08-07

    The aluminosilicate zeolite ZSM-43 (where ZSM = Zeolite Socony Mobil) was first synthesized more than 3 decades ago, but its chemical structure remained unsolved because of its poor crystallinity and small crystal size. Here we present optimization of the ZSM-43 synthesis using a high-throughput approach and subsequent structure determination by the combination of electron crystallographic methods and powder X-ray diffraction. The synthesis required the use of a combination of both inorganic (Cs+ and K+) and organic (choline) structure-directing agents. High-throughput synthesis enabled a screening of the synthesis conditions, which made it possible to optimize the synthesis, despite its complexity, in order to obtain a material with significantly improved crystallinity. When both rotation electron diffraction and high-resolution transmission electron microscopy imaging techniques are applied, the structure of ZSM-43 could be determined. The structure of ZSM-43 is a new zeolite framework type and possesses a unique two-dimensional channel system limited by 8-ring channels. ZSM-43 is stable upon calcination, and sorption measurements show that the material is suitable for adsorption of carbon dioxide as well as methane.

  16. The AFLOW Standard for High-throughput Materials Science Calculations

    DTIC Science & Technology

    2015-01-01

    inversion in the iterative subspace (RMM–DIIS) [10]. Of the two, DBS is known to be the slower and more stable option. Additionally, the subspace...RMM–DIIS steps as needed to fulfill the dEelec condition. Later determinations of system forces are performed by a similar sequence, but only a single

  17. A new dimethyl labeling-based SID-MRM-MS method and its application to three proteases involved in insulin maturation.

    PubMed

    Cheng, Dongwan; Zheng, Li; Hou, Junjie; Wang, Jifeng; Xue, Peng; Yang, Fuquan; Xu, Tao

    2015-01-01

    The absolute quantification of target proteins in proteomics involves stable isotope dilution coupled with multiple reaction monitoring mass spectrometry (SID-MRM-MS). The successful preparation of stable isotope-labeled internal standard peptides is an important prerequisite for SID-MRM absolute quantification methods. Dimethyl labeling has been widely used in relative quantitative proteomics; it is fast, simple, reliable, cost-effective, and applicable to any protein sample, making it an ideal candidate method for the preparation of stable isotope-labeled internal standards. MRM mass spectrometry offers high sensitivity, specificity, and throughput, and can quantify multiple proteins simultaneously, including low-abundance proteins in precious samples such as pancreatic islets. In this study, a new method for the absolute quantification of three proteases involved in insulin maturation, namely PC1/3, PC2 and CPE, was developed by coupling a stable isotope dimethyl labeling strategy for internal standard peptide preparation with SID-MRM-MS quantitative technology. This method offers a new and effective approach for a deeper understanding of the functional status of pancreatic β cells and of pathogenesis in diabetes.

  18. Quantitative High-throughput Luciferase Screening in Identifying CAR Modulators

    PubMed Central

    Lynch, Caitlin; Zhao, Jinghua; Wang, Hongbing; Xia, Menghang

    2017-01-01

    Summary The constitutive androstane receptor (CAR, NR1I3) is responsible for the transcription of multiple drug metabolizing enzymes and transporters. There are two possible methods of activation for CAR, direct ligand binding and a ligand-independent method, which makes this a unique nuclear receptor. Both of these mechanisms require translocation of CAR from the cytoplasm into the nucleus. Interestingly, CAR is constitutively active in immortalized cell lines due to the basal nuclear location of this receptor. This creates an important challenge in most in vitro assay models because immortalized cells cannot be used without inhibiting the basal activity. In this book chapter, we go into detail of how to perform quantitative high-throughput screens to identify hCAR1 modulators through the employment of a double stable cell line. Using this line, we are able to identify activators, as well as deactivators, of the challenging nuclear receptor, CAR. PMID:27518621

  19. Quantitative High-Throughput Luciferase Screening in Identifying CAR Modulators.

    PubMed

    Lynch, Caitlin; Zhao, Jinghua; Wang, Hongbing; Xia, Menghang

    2016-01-01

    The constitutive androstane receptor (CAR, NR1I3) is responsible for the transcription of multiple drug metabolizing enzymes and transporters. There are two possible methods of activation for CAR, direct ligand binding and a ligand-independent method, which makes this a unique nuclear receptor. Both of these mechanisms require translocation of CAR from the cytoplasm into the nucleus. Interestingly, CAR is constitutively active in immortalized cell lines due to the basal nuclear location of this receptor. This creates an important challenge in most in vitro assay models because immortalized cells cannot be used without inhibiting the high basal activity. In this book chapter, we go into detail of how to perform quantitative high-throughput screens to identify hCAR1 modulators through the employment of a double stable cell line. Using this line, we are able to identify activators, as well as deactivators, of the challenging nuclear receptor, CAR.

  20. Latest performance of ArF immersion scanner NSR-S630D for high-volume manufacturing for 7nm node

    NASA Astrophysics Data System (ADS)

    Funatsu, Takayuki; Uehara, Yusaku; Hikida, Yujiro; Hayakawa, Akira; Ishiyama, Satoshi; Hirayama, Toru; Kono, Hirotaka; Shirata, Yosuke; Shibazaki, Yuichi

    2015-03-01

    In order to achieve stable operation in cutting-edge semiconductor manufacturing, Nikon has developed the NSR-S630D with extremely accurate overlay while maintaining throughput in various conditions resembling a real production environment. In addition, the NSR-S630D has been equipped with enhanced capabilities to maintain long-term overlay stability and an improved user interface, both enabled by our newly developed application software platform. In this paper, we describe the most recent S630D performance in various conditions similar to a real production environment. In a production environment, superior overlay accuracy under high-dose conditions and high throughput are often required; therefore, we have performed several experiments with high-dose conditions to demonstrate the NSR's thermal aberration capabilities in order to achieve world-class overlay performance. Furthermore, we introduce our new software that enables long-term overlay performance.

  1. End-to-End Assessment of a Large Aperture Segmented Ultraviolet Optical Infrared (UVOIR) Telescope Architecture

    NASA Technical Reports Server (NTRS)

    Feinberg, Lee; Rioux, Norman; Bolcar, Matthew; Liu, Alice; Guyon, Oliver; Stark, Chris; Arenberg, Jon

    2016-01-01

    Key challenges of a future large aperture, segmented Ultraviolet Optical Infrared (UVOIR) Telescope capable of performing a spectroscopic survey of hundreds of Exoplanets will be sufficient stability to achieve 10^-10 contrast measurements and sufficient throughput and sensitivity for high-yield Exo-Earth spectroscopic detection. Our team has collectively assessed an optimized end-to-end architecture including a high-throughput coronagraph capable of working with a segmented telescope, a cost-effective and heritage-based stable segmented telescope, a control architecture that minimizes the amount of new technologies, and an Exo-Earth yield assessment to evaluate potential performance. These efforts are combined through integrated modeling, coronagraph evaluations, and Exo-Earth yield calculations to assess the potential performance of the selected architecture. In addition, we discuss the scalability of this architecture to larger apertures and the technological tall poles to enabling it.

  2. Finding the Stable Structures of WxN1-x with an ab-initio High-Throughput Approach

    DTIC Science & Technology

    2014-03-13

    cubic boron nitride,[4] carbonitrides,[5] and transition metal borides.[6, 7] Over the past several years there has been considerable theoretical...include ionic and covalent structures which seem chemically similar to W-N. These include borides, carbides, oxides, and other nitrides. In this paper we...metallic alloys,[23–27] we extended it to include over fifty new structures. These include nitrides, oxides, borides, and carbides. The important

  3. Analysis of Active Methylotrophic Communities: When DNA-SIP Meets High-Throughput Technologies.

    PubMed

    Taubert, Martin; Grob, Carolina; Howat, Alexandra M; Burns, Oliver J; Chen, Yin; Neufeld, Josh D; Murrell, J Colin

    2016-01-01

    Methylotrophs are microorganisms ubiquitous in the environment that can metabolize one-carbon (C1) compounds as carbon and/or energy sources. The activity of these prokaryotes impacts biogeochemical cycles within their respective habitats and can determine whether these habitats act as sources or sinks of C1 compounds. Due to the high importance of C1 compounds, not only in biogeochemical cycles, but also for climatic processes, it is vital to understand the contributions of these microorganisms to carbon cycling in different environments. One of the most challenging questions when investigating methylotrophs, but also in environmental microbiology in general, is which species contribute to the environmental processes of interest, or "who does what, where and when?" Metabolic labeling with C1 compounds substituted with (13)C, a technique called stable isotope probing, is a key method to trace carbon fluxes within methylotrophic communities. The incorporation of (13)C into the biomass of active methylotrophs leads to an increase in the molecular mass of their biomolecules. For DNA-based stable isotope probing (DNA-SIP), labeled and unlabeled DNA is separated by isopycnic ultracentrifugation. The ability to specifically analyze DNA of active methylotrophs from a complex background community by high-throughput sequencing techniques, i.e. targeted metagenomics, is the hallmark strength of DNA-SIP for elucidating ecosystem functioning, and a protocol is detailed in this chapter.

  4. Materials Screening for the Discovery of New Half-Heuslers: Machine Learning versus ab Initio Methods.

    PubMed

    Legrain, Fleur; Carrete, Jesús; van Roekeghem, Ambroise; Madsen, Georg K H; Mingo, Natalio

    2018-01-18

    Machine learning (ML) is increasingly becoming a helpful tool in the search for novel functional compounds. Here we use classification via random forests to predict the stability of half-Heusler (HH) compounds, using only experimentally reported compounds as a training set. Cross-validation yields an excellent agreement between the fraction of compounds classified as stable and the actual fraction of truly stable compounds in the ICSD. The ML model is then employed to screen 71 178 different 1:1:1 compositions, yielding 481 likely stable candidates. The predicted stability of HH compounds from three previous high-throughput ab initio studies is critically analyzed from the perspective of the alternative ML approach. The incomplete consistency among the three separate ab initio studies and between them and the ML predictions suggests that additional factors beyond those considered by ab initio phase stability calculations might be determinant to the stability of the compounds. Such factors can include configurational entropies and quasiharmonic contributions.
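
    The workflow sketched in this abstract is a standard supervised-classification pattern. As a rough illustration, the snippet below trains a random-forest classifier and scores candidates by predicted stability; the descriptor matrix, labels, and probability cutoff are placeholder assumptions, not the features or data used by the authors.

        # Minimal sketch of stability screening with a random forest (scikit-learn).
        # All data here are random placeholders standing in for elemental descriptors.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X_known = rng.normal(size=(500, 8))         # descriptors of reported compounds
        y_known = rng.integers(0, 2, size=500)      # 1 = experimentally reported stable
        X_candidates = rng.normal(size=(71178, 8))  # 1:1:1 compositions to screen

        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        print("CV accuracy:", cross_val_score(clf, X_known, y_known, cv=5).mean())

        clf.fit(X_known, y_known)
        p_stable = clf.predict_proba(X_candidates)[:, 1]
        print(f"{(p_stable > 0.5).sum()} candidates predicted stable")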

  5. Finding idle machines in a workstation-based distributed system

    NASA Technical Reports Server (NTRS)

    Theimer, Marvin M.; Lantz, Keith A.

    1989-01-01

    The authors describe the design and performance of scheduling facilities for finding idle hosts in a workstation-based distributed system. They focus on the tradeoffs between centralized and decentralized architectures with respect to scalability, fault tolerance, and simplicity of design, as well as several implementation issues of interest when multicast communication is used. They conclude that the principal tradeoff between the two approaches is that a centralized architecture can be scaled to a significantly greater degree and can more easily monitor global system statistics, whereas a decentralized architecture is simpler to implement.
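
    To illustrate the multicast-based, decentralized style of host discovery discussed here, the sketch below broadcasts a load query to a UDP multicast group and collects replies; the group address, port, and message format are arbitrary placeholders rather than details of the original system.

        # Hypothetical "who is idle?" query over UDP multicast; all constants are placeholders.
        import socket
        import struct

        MCAST_GRP, MCAST_PORT = "224.0.0.250", 50000

        def query_idle_hosts(timeout=2.0):
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", 1))
            sock.settimeout(timeout)
            sock.sendto(b"IDLE_QUERY", (MCAST_GRP, MCAST_PORT))
            idle_hosts = []
            try:
                while True:
                    data, addr = sock.recvfrom(1024)   # idle hosts reply with their load
                    if data.startswith(b"IDLE"):
                        idle_hosts.append(addr[0])
            except socket.timeout:
                pass
            finally:
                sock.close()
            return idle_hosts

        if __name__ == "__main__":
            print("Idle hosts:", query_idle_hosts())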

  6. A Real-Time Executive for Multiple-Computer Clusters.

    DTIC Science & Technology

    1984-12-01

    in a real-time environment is tantamount to speed and efficiency. By effectively co-locating real-time sensors and related processing modules, real...of which there are two kinds: multicast group address - virtually any number of node groups can be assigned a group address so they are all able...interface loopback, internal loopback, clear loopback, go offline, go online, onboard diagnostic, cdr

  7. High-throughput analysis of the protein sequence-stability landscape using a quantitative "yeast surface two-hybrid" system and fragment reconstitution

    PubMed Central

    Dutta, Sanjib; Koide, Akiko; Koide, Shohei

    2008-01-01

    Stability evaluation of many mutants can lead to a better understanding of the sequence determinants of a structural motif and of factors governing protein stability and protein evolution. The traditional biophysical analysis of protein stability is low throughput, limiting our ability to widely explore the sequence space in a quantitative manner. In this study, we have developed a high-throughput library screening method for quantifying stability changes, which is based on protein fragment reconstitution and yeast surface display. Our method exploits the thermodynamic linkage between protein stability and fragment reconstitution and the ability of the yeast surface display technique to quantitatively evaluate protein-protein interactions. The method was applied to a fibronectin type III (FN3) domain. Characterization of fragment reconstitution was facilitated by the co-expression of two FN3 fragments, thus establishing a "yeast surface two-hybrid" method. Importantly, our method does not rely on competition between clones and thus eliminates a common limitation of high-throughput selection methods in which the most stable variants are predominantly recovered. Thus, it allows for the isolation of sequences that exhibit a desired level of stability. We identified over one hundred unique sequences for a β-bulge motif, which was significantly more informative than natural sequences of the FN3 family in revealing the sequence determinants for the β-bulge. Our method provides a powerful means to rapidly assess stability of many variants, to systematically assess contribution of different factors to protein stability and to enhance protein stability. PMID:18674545

  8. Finding the Stable Structures of N1-xWx with an Ab Initio High-Throughput Approach

    DTIC Science & Technology

    2015-05-26

    W. These include borides, carbides, oxides, and other nitrides. We also invented many structures to mimic the random pattern of vacancies on both the...structures. These include nitrides, oxides, borides, and carbides, as well as supercells of standard structures with atoms removed to mimic the random patter...1930). [15] R. Kiessling and Y. H. Liu, Thermal stability of the chromium, iron, and tungsten borides in streaming ammonia and the existence of a new

  9. Nanocrystalline Cobalt-Phosphorous Electroplating as an Alternative to Hard Chromium Electroplating

    DTIC Science & Technology

    2012-08-01

    Validate pulsed electrodeposition of Nanocrystalline Cobalt-Phosphorous (nCoP) alloy coatings as a Hard Chrome electroplating alternative for DoD...limits Cr+6; Cathode Efficiency Cr Plating; *Co PEL is 20 µg/m3; ≈5X faster than Chrome plating; Increased throughput; One nCo-P tank can...replace several hard chrome tanks; Bath is Stable; nCoP Plating Approaches 100% Efficiency; Process Comparison; CoP Technical Approach

  10. High affinity γPNA sandwich hybridization assay for rapid detection of short nucleic acid targets with single mismatch discrimination.

    PubMed

    Goldman, Johnathan M; Zhang, Li Ang; Manna, Arunava; Armitage, Bruce A; Ly, Danith H; Schneider, James W

    2013-07-08

    Hybridization analysis of short DNA and RNA targets presents many challenges for detection. The commonly employed sandwich hybridization approach cannot be implemented for these short targets due to insufficient probe-target binding strengths for unmodified DNA probes. Here, we present a method capable of rapid and stable sandwich hybridization detection for 22 nucleotide DNA and RNA targets. Stable hybridization is achieved using an n-alkylated, polyethylene glycol γ-carbon modified peptide nucleic acid (γPNA) amphiphile. The γPNA's exceptionally high affinity enables stable hybridization of a second DNA-based probe to the remaining bases of the short target. Upon hybridization of both probes, an electrophoretic mobility shift is measured via interaction of the n-alkane modification on the γPNA with capillary electrophoresis running buffer containing nonionic surfactant micelles. We find that sandwich hybridization of both probes is stable under multiple binding configurations and demonstrate single base mismatch discrimination. The binding strength of both probes is also stabilized via coaxial stacking on adjacent hybridization to targets. We conclude with a discussion on the implementation of the proposed sandwich hybridization assay as a high-throughput microRNA detection method.

  11. Rapid directed evolution of stabilized proteins with cellular high-throughput encapsulation solubilization and screening (CHESS).

    PubMed

    Yong, K J; Scott, D J

    2015-03-01

    Directed evolution is a powerful method for engineering proteins towards user-defined goals and has been used to generate novel proteins for industrial processes, biological research and drug discovery. Typical directed evolution techniques include cellular display, phage display, ribosome display and water-in-oil compartmentalization, all of which physically link individual members of diverse gene libraries to their translated proteins. This allows the screening or selection for a desired protein function and subsequent isolation of the encoding gene from diverse populations. For biotechnological and industrial applications there is a need to engineer proteins that are functional under conditions that are not compatible with these techniques, such as high temperatures and harsh detergents. Cellular High-throughput Encapsulation Solubilization and Screening (CHESS) is a directed evolution method originally developed to engineer detergent-stable G protein-coupled receptors (GPCRs) for structural biology. With CHESS, library-transformed bacterial cells are encapsulated in detergent-resistant polymers to form capsules, which serve to contain mutant genes and their encoded proteins upon detergent-mediated solubilization of cell membranes. Populations of capsules can be screened like single cells to enable rapid isolation of genes encoding detergent-stable protein mutants. To demonstrate the general applicability of CHESS to other proteins, we have characterized the stability and permeability of CHESS microcapsules and employed CHESS to generate thermostable, sodium dodecyl sulfate (SDS) resistant green fluorescent protein (GFP) mutants, the first soluble proteins to be engineered using CHESS. © 2014 Wiley Periodicals, Inc.

  12. Intestinal Enteroids Model Guanylate Cyclase C-Dependent Secretion Induced by Heat-Stable Enterotoxins.

    PubMed

    Pattison, Amanda M; Blomain, Erik S; Merlino, Dante J; Wang, Fang; Crissey, Mary Ann S; Kraft, Crystal L; Rappaport, Jeff A; Snook, Adam E; Lynch, John P; Waldman, Scott A

    2016-10-01

    Enterotoxigenic Escherichia coli (ETEC) causes ∼20% of the acute infectious diarrhea (AID) episodes worldwide, often by producing heat-stable enterotoxins (STs), which are peptides structurally homologous to paracrine hormones of the intestinal guanylate cyclase C (GUCY2C) receptor. While molecular mechanisms mediating ST-induced intestinal secretion have been defined, advancements in therapeutics have been hampered for decades by the paucity of disease models that integrate molecular and functional endpoints amenable to high-throughput screening. Here, we reveal that mouse and human intestinal enteroids in three-dimensional ex vivo cultures express the components of the GUCY2C secretory signaling axis. ST and its structural analog, linaclotide, an FDA-approved oral secretagog, induced fluid accumulation quantified simultaneously in scores of enteroid lumens, recapitulating ETEC-induced intestinal secretion. Enteroid secretion depended on canonical molecular signaling events responsible for ETEC-induced diarrhea, including cyclic GMP (cGMP) produced by GUCY2C, activation of cGMP-dependent protein kinase (PKG), and opening of the cystic fibrosis transmembrane conductance regulator (CFTR). Importantly, pharmacological inhibition of CFTR abrogated enteroid fluid secretion, providing proof of concept for the utility of this model to screen antidiarrheal agents. Intestinal enteroids offer a unique model, integrating the GUCY2C signaling axis and luminal fluid secretion, to explore the pathophysiology of, and develop platforms for, high-throughput drug screening to identify novel compounds to prevent and treat ETEC diarrheal disease. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  13. High-throughput density functional calculations to optimize properties and interfacial chemistry of piezoelectric materials

    NASA Astrophysics Data System (ADS)

    Barr, Jordan A.; Lin, Fang-Yin; Ashton, Michael; Hennig, Richard G.; Sinnott, Susan B.

    2018-02-01

    High-throughput density functional theory calculations are conducted to search through 1572 ABO3 compounds to find a potential replacement material for lead zirconate titanate (PZT) that exhibits the same excellent piezoelectric properties as PZT and lacks both its use of the toxic element lead (Pb) and the formation of secondary alloy phases with platinum (Pt) electrodes. The first screening criterion employed a search through the Materials Project database to find A-B combinations that do not form ternary compounds with Pt. The second screening criterion aimed to eliminate potential candidates through first-principles calculations of their electronic structure, in which compounds with a band gap of 0.25 eV or higher were retained. Third, thermodynamic stability calculations were used to compare the candidates in a Pt environment to compounds already calculated to be stable within the Materials Project. Formation energies below or equal to 100 meV/atom were considered to be thermodynamically stable. The fourth screening criterion employed lattice misfit to identify those candidate perovskites that have low misfit with the Pt electrode and high misfit of potential secondary phases that can be formed when Pt alloys with the different A and B components. To aid in the final analysis, dynamic stability calculations were used to determine those perovskites that have dynamic instabilities that favor the ferroelectric distortion. Analysis of the data finds three perovskites warranting further investigation: CsNbO3, RbNbO3, and CsTaO3.
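
    The four screening criteria amount to successive filters over a candidate list. A toy version of that filtering logic is shown below, using the thresholds quoted in the abstract; the candidate records and the misfit cutoff are invented for illustration, and the real study relies on Materials Project queries and DFT calculations rather than hard-coded values.

        # Toy multi-criteria screen; candidate values and the misfit cutoff are made up.
        candidates = [
            {"formula": "CsNbO3", "band_gap_eV": 2.1, "formation_meV_atom": 35, "misfit_pct": 1.2},
            {"formula": "RbNbO3", "band_gap_eV": 1.9, "formation_meV_atom": 60, "misfit_pct": 1.5},
            {"formula": "XYO3",   "band_gap_eV": 0.1, "formation_meV_atom": 250, "misfit_pct": 5.0},
        ]

        def passes_screen(c, gap_min=0.25, form_max=100.0, misfit_max=2.0):
            """Keep candidates with a finite gap, thermodynamic stability and low Pt misfit."""
            return (c["band_gap_eV"] >= gap_min
                    and c["formation_meV_atom"] <= form_max
                    and c["misfit_pct"] <= misfit_max)

        print([c["formula"] for c in candidates if passes_screen(c)])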

  14. Structure-guided design of fluorescent S-adenosylmethionine analogs for a high-throughput screen to target SAM-I riboswitch RNAs.

    PubMed

    Hickey, Scott F; Hammond, Ming C

    2014-03-20

    Many classes of S-adenosylmethionine (SAM)-binding RNAs and proteins are of interest as potential drug targets in diverse therapeutic areas, from infectious diseases to cancer. In the former case, the SAM-I riboswitch is an attractive target because this structured RNA element is found only in bacterial mRNAs and regulates multiple genes in several human pathogens. Here, we describe the synthesis of stable and fluorescent analogs of SAM in which the fluorophore is introduced through a functionalizable linker to the ribose. A Cy5-labeled SAM analog was shown to bind several SAM-I riboswitches via in-line probing and fluorescence polarization assays, including one from Staphylococcus aureus that controls the expression of SAM synthetase in this organism. A fluorescent ligand displacement assay was developed and validated for high-throughput screening of compounds to target the SAM-I riboswitch class. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. High-throughput search of ternary chalcogenides for p-type transparent electrodes

    PubMed Central

    Shi, Jingming; Cerqueira, Tiago F. T.; Cui, Wenwen; Nogueira, Fernando; Botti, Silvana; Marques, Miguel A. L.

    2017-01-01

    Delafossite crystals are fascinating ternary oxides that have demonstrated transparent conductivity and ambipolar doping. Here we use a high-throughput approach based on density functional theory to find delafossite and related layered phases of composition ABX2, where A and B are elements of the periodic table, and X is a chalcogen (O, S, Se, and Te). From the 15 624 compounds studied in the trigonal delafossite prototype structure, 285 are within 50 meV/atom from the convex hull of stability. These compounds are further investigated using global structural prediction methods to obtain their lowest-energy crystal structure. We find 79 systems not present in the Materials Project database that are thermodynamically stable and crystallize in the delafossite or in closely related structures. These novel phases are then characterized by calculating their band gaps and hole effective masses. This characterization unveils a large diversity of properties, ranging from normal metals and magnetic metals to candidate compounds for p-type transparent electrodes. PMID:28266587

  16. G-Protein Coupled Receptors: Surface Display and Biosensor Technology

    NASA Astrophysics Data System (ADS)

    McMurchie, Edward; Leifert, Wayne

    Signal transduction by G-protein coupled receptors (GPCRs) underpins a multitude of physiological processes. Ligand recognition by the receptor leads to the activation of a generic molecular switch involving heterotrimeric G-proteins and guanine nucleotides. With growing interest and commercial investment in GPCRs in areas such as drug targets, orphan receptors, high-throughput screening of drugs and biosensors, greater attention will focus on assay development to allow for miniaturization, ultrahigh-throughput and, eventually, microarray/biochip assay formats that will require nanotechnology-based approaches. Stable, robust, cell-free signaling assemblies comprising receptor and appropriate molecular switching components will form the basis of future GPCR/G-protein platforms, which should be able to be adapted to such applications as microarrays and biosensors. This chapter focuses on cell-free GPCR assay nanotechnologies and describes some molecular biological approaches for the construction of more sophisticated, surface-immobilized, homogeneous, functional GPCR sensors. The latter points should greatly extend the range of applications to which technologies based on GPCRs could be applied.

  17. Nanophotonic Trapping for Precise Manipulation of Biomolecular Arrays

    PubMed Central

    Soltani, Mohammad; Lin, Jun; Forties, Robert A.; Inman, James T.; Saraf, Summer N.; Fulbright, Robert M.; Lipson, Michal; Wang, Michelle D.

    2014-01-01

    Optical trapping is a powerful manipulation and measurement technique widely employed in the biological and materials sciences [1-8]. Miniaturizing optical trap instruments onto optofluidic platforms holds promise for high-throughput lab-on-chip applications [9-16]. However, a persistent challenge with existing optofluidic devices has been controlled and precise manipulation of trapped particles. Here we report a new class of on-chip optical trapping devices. Using photonic interference functionalities, an array of stable, three-dimensional on-chip optical traps is formed at the antinodes of a standing-wave evanescent field on a nanophotonic waveguide. By employing the thermo-optic effect via integrated electric microheaters, the traps can be repositioned at high speed (~ 30 kHz) with nanometer precision. We demonstrate sorting and manipulation of individual DNA molecules. In conjunction with laminar flows and fluorescence, we also show precise control of the chemical environment of a sample with simultaneous monitoring. Such a controllable trapping device has the potential for high-throughput precision measurements on chip. PMID:24776649

  18. Nanophotonic trapping for precise manipulation of biomolecular arrays.

    PubMed

    Soltani, Mohammad; Lin, Jun; Forties, Robert A; Inman, James T; Saraf, Summer N; Fulbright, Robert M; Lipson, Michal; Wang, Michelle D

    2014-06-01

    Optical trapping is a powerful manipulation and measurement technique widely used in the biological and materials sciences. Miniaturizing optical trap instruments onto optofluidic platforms holds promise for high-throughput lab-on-a-chip applications. However, a persistent challenge with existing optofluidic devices has been achieving controlled and precise manipulation of trapped particles. Here, we report a new class of on-chip optical trapping devices. Using photonic interference functionalities, an array of stable, three-dimensional on-chip optical traps is formed at the antinodes of a standing-wave evanescent field on a nanophotonic waveguide. By employing the thermo-optic effect via integrated electric microheaters, the traps can be repositioned at high speed (∼30 kHz) with nanometre precision. We demonstrate sorting and manipulation of individual DNA molecules. In conjunction with laminar flows and fluorescence, we also show precise control of the chemical environment of a sample with simultaneous monitoring. Such a controllable trapping device has the potential to achieve high-throughput precision measurements on chip.

  19. Differential Mobility Spectrometry-Mass Spectrometry (DMS-MS) in Radiation Biodosimetry: Rapid and High-Throughput Quantitation of Multiple Radiation Biomarkers in Nonhuman Primate Urine.

    PubMed

    Chen, Zhidan; Coy, Stephen L; Pannkuk, Evan L; Laiakis, Evagelia C; Fornace, Albert J; Vouros, Paul

    2018-05-07

    High-throughput methods to assess radiation exposure are a priority due to concerns that include nuclear power accidents, the spread of nuclear weapon capability, and the risk of terrorist attacks. Metabolomics, the assessment of small molecules in an easily accessible sample, is the most recent method to be applied for the identification of biomarkers of the biological radiation response with a useful dose-response profile. Profiling for biomarker identification is frequently done using an LC-MS platform which has limited throughput due to the time-consuming nature of chromatography. We present here a chromatography-free simplified method for quantitative analysis of seven metabolites in urine with radiation dose-response using urine samples provided from the Pannkuk et al. (2015) study of long-term (7-day) radiation response in nonhuman primates (NHP). The stable isotope dilution (SID) analytical method consists of sample preparation by strong cation exchange-solid phase extraction (SCX-SPE) to remove interferences and concentrate the metabolites of interest, followed by differential mobility spectrometry (DMS) ion filtration to select the ion of interest and reduce chemical background, followed by mass spectrometry (overall SID-SPE-DMS-MS). Since no chromatography is used, calibration curves were prepared rapidly, in under 2 h (including SPE) for six simultaneously analyzed radiation biomarkers. The seventh, creatinine, was measured separately after 2500× dilution. Creatinine plays a dual role, measuring kidney glomerular filtration rate (GFR), and indicating kidney damage at high doses. The current quantitative method using SID-SPE-DMS-MS provides throughput which is 7.5 to 30 times higher than that of LC-MS and provides a path to pre-clinical radiation dose estimation.
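
    Stable isotope dilution quantitation reduces to a linear calibration of the analyte-to-internal-standard response ratio against concentration. The short sketch below shows that arithmetic with invented numbers; it is not the study's calibration data.

        # Sketch of a stable-isotope-dilution calibration; all numbers are illustrative.
        import numpy as np

        conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])         # standard concentrations (uM)
        ratio = np.array([0.26, 0.51, 1.02, 2.49, 5.05])    # analyte / labeled-IS peak areas

        slope, intercept = np.polyfit(conc, ratio, 1)        # linear calibration curve

        def quantify(sample_ratio):
            """Convert a measured analyte/IS ratio back to concentration."""
            return (sample_ratio - intercept) / slope

        print(f"slope={slope:.3f}, intercept={intercept:.3f}, unknown={quantify(1.7):.2f} uM")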

  20. Differential Mobility Spectrometry-Mass Spectrometry (DMS-MS) in Radiation Biodosimetry: Rapid and High-Throughput Quantitation of Multiple Radiation Biomarkers in Nonhuman Primate Urine

    NASA Astrophysics Data System (ADS)

    Chen, Zhidan; Coy, Stephen L.; Pannkuk, Evan L.; Laiakis, Evagelia C.; Fornace, Albert J.; Vouros, Paul

    2018-05-01

    High-throughput methods to assess radiation exposure are a priority due to concerns that include nuclear power accidents, the spread of nuclear weapon capability, and the risk of terrorist attacks. Metabolomics, the assessment of small molecules in an easily accessible sample, is the most recent method to be applied for the identification of biomarkers of the biological radiation response with a useful dose-response profile. Profiling for biomarker identification is frequently done using an LC-MS platform which has limited throughput due to the time-consuming nature of chromatography. We present here a chromatography-free simplified method for quantitative analysis of seven metabolites in urine with radiation dose-response using urine samples provided from the Pannkuk et al. (2015) study of long-term (7-day) radiation response in nonhuman primates (NHP). The stable isotope dilution (SID) analytical method consists of sample preparation by strong cation exchange-solid phase extraction (SCX-SPE) to remove interferences and concentrate the metabolites of interest, followed by differential mobility spectrometry (DMS) ion filtration to select the ion of interest and reduce chemical background, followed by mass spectrometry (overall SID-SPE-DMS-MS). Since no chromatography is used, calibration curves were prepared rapidly, in under 2 h (including SPE) for six simultaneously analyzed radiation biomarkers. The seventh, creatinine, was measured separately after 2500× dilution. Creatinine plays a dual role, measuring kidney glomerular filtration rate (GFR), and indicating kidney damage at high doses. The current quantitative method using SID-SPE-DMS-MS provides throughput which is 7.5 to 30 times higher than that of LC-MS and provides a path to pre-clinical radiation dose estimation.

  1. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T. S.

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
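
    The chromatic effect described here can be illustrated by computing synthetic magnitudes of sources with different colors through a nominal and a perturbed system throughput and differencing them. The sketch below does this with toy spectra and bandpasses; it is not the DES calibration pipeline, and all curves are invented.

        # Toy synthetic-photometry example of a color-dependent (chromatic) systematic error.
        import numpy as np

        lam = np.linspace(400, 700, 601)                     # wavelength grid (nm)

        def synth_mag(flux, throughput):
            """Photon-counting synthetic magnitude, up to an additive constant."""
            return -2.5 * np.log10(np.sum(flux * throughput * lam) / np.sum(throughput * lam))

        blue_star = 1.0 + 0.002 * (700 - lam)                # toy blue spectrum
        red_star = 1.0 + 0.002 * (lam - 400)                 # toy red spectrum
        nominal = np.exp(-0.5 * ((lam - 550) / 60) ** 2)     # nominal system throughput
        shifted = np.exp(-0.5 * ((lam - 556) / 60) ** 2)     # throughput drifted redward

        for name, sed in [("blue", blue_star), ("red", red_star)]:
            dm = synth_mag(sed, shifted) - synth_mag(sed, nominal)
            print(f"{name} star: offset = {1000 * dm:+.1f} mmag")   # offsets differ with color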

  2. Automation Hooks Architecture for Flexible Test Orchestration - Concept Development and Validation

    NASA Technical Reports Server (NTRS)

    Lansdowne, C. A.; Maclean, John R.; Winton, Chris; McCartney, Pat

    2011-01-01

    The Automation Hooks Architecture Trade Study for Flexible Test Orchestration sought a standardized data-driven alternative to conventional automated test programming interfaces. The study recommended composing the interface using multicast DNS (mDNS/SD) service discovery, Representational State Transfer (RESTful) Web Services, and Automatic Test Markup Language (ATML). We describe additional efforts to rapidly mature the Automation Hooks Architecture candidate interface definition by validating it in a broad spectrum of applications. These activities have allowed us to further refine our concepts and provide observations directed toward objectives of economy, scalability, versatility, performance, severability, maintainability, scriptability and others.
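
    As a flavour of the mDNS/DNS-SD discovery step, the sketch below browses for services with the third-party python-zeroconf package (API as in recent releases, which is an assumption here); the service type is a generic placeholder, not the instrumentation service name used in the study.

        # Sketch of mDNS/DNS-SD service discovery using the python-zeroconf package.
        # "_http._tcp.local." is a generic placeholder service type.
        import time
        from zeroconf import Zeroconf, ServiceBrowser, ServiceListener

        class StationListener(ServiceListener):
            def add_service(self, zc, type_, name):
                info = zc.get_service_info(type_, name)
                if info:
                    print(f"found {name} at {info.parsed_addresses()}:{info.port}")

            def remove_service(self, zc, type_, name):
                print(f"{name} went away")

            def update_service(self, zc, type_, name):
                pass

        zc = Zeroconf()
        ServiceBrowser(zc, "_http._tcp.local.", StationListener())
        try:
            time.sleep(5)          # let service announcements arrive
        finally:
            zc.close()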

  3. Integrating security in a group oriented distributed system

    NASA Technical Reports Server (NTRS)

    Reiter, Michael; Birman, Kenneth; Gong, LI

    1992-01-01

    A distributed security architecture is proposed for incorporation into group oriented distributed systems, and in particular, into the Isis distributed programming toolkit. The primary goal of the architecture is to make common group oriented abstractions robust in hostile settings, in order to facilitate the construction of high performance distributed applications that can tolerate both component failures and malicious attacks. These abstractions include process groups and causal group multicast. Moreover, a delegation and access control scheme is proposed for use in group oriented systems. The focus is the security architecture; particular cryptosystems and key exchange protocols are not emphasized.

  4. An approach to verification and validation of a reliable multicasting protocol: Extended Abstract

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Montgomery, Todd L.

    1995-01-01

    This paper describes the process of implementing a complex communications protocol that provides reliable delivery of data in multicast-capable, packet-switching telecommunication networks. The protocol, called the Reliable Multicasting Protocol (RMP), was developed incrementally using a combination of formal and informal techniques in an attempt to ensure the correctness of its implementation. Our development process involved three concurrent activities: (1) the initial construction and incremental enhancement of a formal state model of the protocol machine; (2) the initial coding and incremental enhancement of the implementation; and (3) model-based testing of iterative implementations of the protocol. These activities were carried out by two separate teams: a design team and a V&V team. The design team built the first version of RMP with limited functionality to handle only nominal requirements of data delivery. This initial version did not handle off-nominal cases such as network partitions or site failures. Meanwhile, the V&V team concurrently developed a formal model of the requirements using a variant of SCR-based state tables. Based on these requirements tables, the V&V team developed test cases to exercise the implementation. In a series of iterative steps, the design team added new functionality to the implementation while the V&V team kept the state model in fidelity with the implementation. This was done by generating test cases based on suspected errant or off-nominal behaviors predicted by the current model. If the execution of a test in the model and implementation agreed, then the test either found a potential problem or verified a required behavior. However, if the execution of a test was different in the model and implementation, then the differences helped identify inconsistencies between the model and implementation. In either case, the dialogue between both teams drove the co-evolution of the model and implementation. We have found that this interactive, iterative approach to development allows software designers to focus on delivery of nominal functionality while the V&V team can focus on analysis of off nominal cases. Testing serves as the vehicle for keeping the model and implementation in fidelity with each other. This paper describes (1) our experiences in developing our process model; and (2) three example problems found during the development of RMP. Although RMP has provided our research effort with a rich set of test cases, it also has practical applications within NASA. For example, RMP is being considered for use in the NASA EOSDIS project due to its significant performance benefits in applications that need to replicate large amounts of data to many network sites.
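
    The model-based testing loop described above boils down to driving the same event sequence through a state-table model and the implementation and flagging any divergence. The miniature sketch below shows that comparison pattern with an invented two-state protocol and a deliberately injected bug; it is not RMP or its SCR tables.

        # Miniature model-vs-implementation comparison in the spirit of the process above.
        # The two-state "protocol" and its events are invented for illustration.
        MODEL = {   # state table: (state, event) -> next state
            ("IDLE", "join"): "MEMBER",
            ("MEMBER", "data"): "MEMBER",
            ("MEMBER", "leave"): "IDLE",
        }

        class Implementation:
            """Stand-in for the real protocol code; deliberately buggy on 'leave'."""
            def __init__(self):
                self.state = "IDLE"
            def step(self, event):
                if event == "join":
                    self.state = "MEMBER"
                elif event == "leave":
                    self.state = "MEMBER"          # bug: should return to IDLE
                return self.state

        def run_test(events):
            model_state, impl = "IDLE", Implementation()
            for ev in events:
                model_state = MODEL.get((model_state, ev), model_state)
                impl_state = impl.step(ev)
                if model_state != impl_state:
                    return f"divergence on '{ev}': model={model_state}, impl={impl_state}"
            return "agreement"

        print(run_test(["join", "data", "leave"]))   # exposes the injected bug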

  5. Optimization of post-run corrections for water stable isotope measurements by laser spectroscopy

    NASA Astrophysics Data System (ADS)

    van Geldern, Robert; Barth, Johannes A. C.

    2013-04-01

    Light stable isotope analyses of hydrogen and oxygen of water are used in numerous aquatic studies from various scientific fields. The advantage of using stable isotope ratios is that water molecules serve as ubiquitous and already present natural tracers. Traditionally, the samples were analyzed in the laboratory by isotope ratio mass spectrometry (IRMS). Within recent years these analyses have been revolutionized by the development of new isotope ratio laser spectroscopy (IRIS) systems that are said to be cheaper, more robust and mobile compared to IRMS. Although easier to operate, laser systems also need thorough calibration with international reference materials and raw data need correction for analytical effects. A major issue in systems that use liquid injection via a vaporizer module is the memory effect, i.e. the carry-over from the previously analyzed sample in a sequence. This study presents an optimized and simple post-run correction procedure for liquid water injection developed for a Picarro water analyzer. The Excel(TM) template will rely exclusively on standard features implemented in MS Office without the need to run macros, additional code written in Visual Basic for Applications (VBA) or to use database-related software such as MS Access or SQL Server. These protocols will maximize precision, accuracy and sample throughput via an efficient memory correction. The number of injections per unknown sample can be reduced to 4 or less. This procedure meets the demands of faster throughput with reduced costs per analysis. Procedures were verified by an international proficiency test and traditional IRMS techniques. The template is available free for scientific use from the corresponding author or the journal's web site (van Geldern and Barth, 2012). Reference: van Geldern, R. and Barth, J.A.C. (2012) Limnol. Oceanogr. Methods 10:1024-1036 [doi:10.4319/lom.2012.10.1024].
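
    The carry-over ("memory") correction being automated here can be written as a simple recurrence: if a fraction m of each measurement reflects the previously injected water, the true value can be recovered injection by injection. The sketch below illustrates that arithmetic with an assumed, constant memory coefficient; the actual template derives its coefficients from the standards in each run.

        # Simplified single-coefficient memory correction for sequential injections.
        # The coefficient m is assumed; real runs derive it from bracketing standards.
        def correct_memory(measured, previous, m=0.06):
            """Invert measured = (1 - m) * true + m * previous to recover the true value."""
            return (measured - m * previous) / (1.0 - m)

        previous = -10.0                            # accepted value of the preceding sample
        injections = [-4.36, -4.01, -3.99, -4.00]   # raw delta values of the new sample

        corrected = []
        for x in injections:
            value = correct_memory(x, previous)
            corrected.append(round(value, 2))
            previous = value

        print(corrected)    # carry-over from the -10 per-mil sample is largely removed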

  6. Performance of TCP variants over LTE network

    NASA Astrophysics Data System (ADS)

    Nor, Shahrudin Awang; Maulana, Ade Novia

    2016-08-01

    One implementation of a wireless network is based on the mobile broadband technology Long Term Evolution (LTE). LTE offers a variety of advantages, especially in terms of access speed, capacity, architectural simplicity and ease of implementation, as well as the breadth of choice of user equipment (UE) that can establish access. The majority of Internet connections in the world use TCP (Transmission Control Protocol) because of TCP's reliability in transmitting packets across the network. TCP's reliability lies in its ability to control congestion. TCP was originally designed for wired media, but LTE connects through a wireless medium that is less stable than wired media. A wide variety of TCP variants has been developed to produce better performance than their predecessors. In this study, we simulate the performance of TCP NewReno and TCP Vegas using network simulator version 2 (ns2). TCP performance is analyzed in terms of throughput, packet loss and end-to-end delay. Comparing TCP NewReno and TCP Vegas, the simulation results show that the throughput of TCP NewReno is slightly higher than that of TCP Vegas, while TCP Vegas gives significantly better end-to-end delay and packet loss. The analysis of throughput, packet loss and end-to-end delay is made to evaluate the simulation.
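
    The three metrics compared in the study can be computed directly from a packet trace. The sketch below does so for a toy list of (send time, receive time, size) records; it is not an ns-2 trace parser, and the numbers are invented.

        # Toy computation of throughput, packet loss and mean end-to-end delay from
        # (send_time_s, recv_time_s_or_None, size_bytes) records; None marks a lost packet.
        packets = [
            (0.00, 0.045, 1500), (0.01, 0.056, 1500), (0.02, None, 1500),
            (0.03, 0.071, 1500), (0.04, 0.082, 1500),
        ]

        delivered = [(s, r, b) for s, r, b in packets if r is not None]
        duration = max(r for _, r, _ in delivered) - min(s for s, _, _ in packets)

        throughput_kbps = sum(b for _, _, b in delivered) * 8 / duration / 1e3
        loss_pct = 100 * (len(packets) - len(delivered)) / len(packets)
        mean_delay_ms = 1e3 * sum(r - s for s, r, _ in delivered) / len(delivered)

        print(f"throughput = {throughput_kbps:.1f} kbit/s")
        print(f"packet loss = {loss_pct:.0f} %")
        print(f"mean end-to-end delay = {mean_delay_ms:.1f} ms")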

  7. An automated, high-throughput plant phenotyping system using machine learning-based plant segmentation and image analysis.

    PubMed

    Lee, Unseok; Chang, Sungyul; Putra, Gian Anantrio; Kim, Hyoungseok; Kim, Dong Hwan

    2018-01-01

    A high-throughput plant phenotyping system automatically observes and grows many plant samples. Many plant sample images are acquired by the system to determine the characteristics of the plants (populations). Stable image acquisition and processing are very important to accurately determine these characteristics. However, hardware for acquiring plant images rapidly and stably, while minimizing plant stress, is lacking. Moreover, most software cannot adequately handle large-scale plant imaging. To address these problems, we developed a new, automated, high-throughput plant phenotyping system using simple and robust hardware, and an automated plant-imaging-analysis pipeline consisting of machine-learning-based plant segmentation. Our hardware acquires images reliably and quickly and minimizes plant stress. Furthermore, the images are processed automatically. In particular, large-scale plant-image datasets can be segmented precisely using a classifier developed using a superpixel-based machine-learning algorithm (Random Forest), and variations in plant parameters (such as area) over time can be assessed using the segmented images. We performed comparative evaluations to identify an appropriate learning algorithm for our proposed system, and tested three robust learning algorithms. We developed not only an automatic analysis pipeline but also a convenient means of plant-growth analysis that provides a learning data interface and visualization of plant growth trends. Thus, our system allows end-users such as plant biologists to easily analyze plant growth via large-scale plant image data.
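
    A minimal version of the superpixel-plus-Random-Forest segmentation step can be sketched with scikit-image and scikit-learn, as below; the image, per-superpixel features, and training labels are synthetic placeholders rather than the authors' data or descriptor set.

        # Sketch of superpixel-based plant/background segmentation; data are synthetic placeholders.
        import numpy as np
        from skimage.segmentation import slic
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(1)
        image = rng.random((120, 160, 3))            # placeholder RGB image
        image[30:90, 50:110, 1] += 0.5               # "plant" region: boost the green channel
        image = np.clip(image, 0, 1)

        segments = slic(image, n_segments=200, compactness=10, start_label=0)

        # One feature vector per superpixel: its mean RGB colour.
        n_seg = segments.max() + 1
        features = np.array([image[segments == s].mean(axis=0) for s in range(n_seg)])

        # Placeholder ground truth: a superpixel is "plant" if green clearly dominates.
        labels = (features[:, 1] > features[:, [0, 2]].max(axis=1) + 0.1).astype(int)

        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, labels)
        plant_mask = np.isin(segments, np.where(clf.predict(features) == 1)[0])
        print(f"plant area fraction: {plant_mask.mean():.2f}")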

  8. Reduced dimensionality (3,2)D NMR experiments and their automated analysis: implications to high-throughput structural studies on proteins.

    PubMed

    Reddy, Jithender G; Kumar, Dinesh; Hosur, Ramakrishna V

    2015-02-01

    Protein NMR spectroscopy has expanded dramatically over the last decade into a powerful tool for the study of their structure, dynamics, and interactions. The primary requirement for all such investigations is sequence-specific resonance assignment. The demand now is to obtain this information as rapidly as possible and in all types of protein systems, stable/unstable, soluble/insoluble, small/big, structured/unstructured, and so on. In this context, we introduce here two reduced dimensionality experiments – (3,2)D-hNCOcanH and (3,2)D-hNcoCAnH – which enhance the previously described 2D NMR-based assignment methods quite significantly. Both the experiments can be recorded in just about 2-3 h each and hence would be of immense value for high-throughput structural proteomics and drug discovery research. The applicability of the method has been demonstrated using alpha-helical bovine apo calbindin-D9k P43M mutant (75 aa) protein. Automated assignment of this data using AUTOBA has been presented, which enhances the utility of these experiments. The backbone resonance assignments so derived are utilized to estimate secondary structures and the backbone fold using Web-based algorithms. Taken together, we believe that the method and the protocol proposed here can be used for routine high-throughput structural studies of proteins. Copyright © 2014 John Wiley & Sons, Ltd.

  9. Supplementary Material for Finding the Stable Structures of N1-xWX with an Ab-initio High-Throughput Approach

    DTIC Science & Technology

    2015-05-08

    around errors ENMAX=560 # 1.4*ENMAX (400) of pseudopotentials LREAL=.FALSE. # reciprocal space projection technique EDIFF=1E-6 # high accuracy...required ALGO=Fast # ALGO = Fast SYMPREC=1e-7 # Precise Symmetry ISPIN=1 # SPIN=OFF ISMEAR=-1 # Fermi broadening SIGMA=0.0272 # About 0.002 Ry The vdW-DF29...

  10. Optimized Sleeping Beauty transposons rapidly generate stable transgenic cell lines.

    PubMed

    Kowarz, Eric; Löscher, Denise; Marschalek, Rolf

    2015-04-01

    Stable gene expression in mammalian cells is a prerequisite for many in vitro and in vivo experiments. However, either the integration of plasmids into mammalian genomes or the use of retro-/lentiviral systems have intrinsic limitations. The use of transposable elements, e.g. the Sleeping Beauty system (SB), circumvents most of these drawbacks (integration sites, size limitations) and allows the quick generation of stable cell lines. The integration process of SB is catalyzed by a transposase and the handling of this gene transfer system is easy, fast and safe. Here, we report our improvements made to the existing SB vector system and present two new vector types for robust constitutive or inducible expression of any gene of interest. Both types are available in 16 variants with different selection marker (puromycin, hygromycin, blasticidin, neomycin) and fluorescent protein expression (GFP, RFP, BFP) to fit most experimental requirements. With this system it is possible to generate cell lines from stable transfected cells quickly and reliably in a medium-throughput setting (three to five days). Cell lines robustly express any gene-of-interest, either constitutively or tightly regulated by doxycycline. This allows many laboratory experiments to speed up generation of data in a rapid and robust manner. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. A high-throughput immobilized bead screen for stable proteins and multi-protein complexes

    PubMed Central

    Lockard, Meghan A.; Listwan, Pawel; Pedelacq, Jean-Denis; Cabantous, Stéphanie; Nguyen, Hau B.; Terwilliger, Thomas C.; Waldo, Geoffrey S.

    2011-01-01

    We describe an in vitro colony screen to identify Escherichia coli expressing soluble proteins and stable, assembled multiprotein complexes. Proteins with an N-terminal 6His tag and C-terminal green fluorescent protein (GFP) S11 tag are fluorescently labeled in cells by complementation with a coexpressed GFP 1–10 fragment. After partial colony lysis, the fluorescent soluble proteins or complexes diffuse through a supporting filtration membrane and are captured on Talon® resin metal affinity beads immobilized in agarose. Images of the fluorescent colonies convey total expression and the level of fluorescence bound to the beads indicates how much protein is soluble. Both pieces of information can be used together when selecting clones. After the assay, colonies can be picked and propagated, eliminating the need to make replica plates. We used the method to screen a DNA fragment library of the human protein p85 and preferentially obtained clones expressing the full-length ‘breakpoint cluster region-homology' and NSH2 domains. The assay also distinguished clones expressing stable multi-protein complexes from those that are unstable due to missing subunits. Clones expressing stable, intact heterotrimeric E.coli YheNML complexes were readily identified in libraries dominated by complexes of YheML missing the N subunit. PMID:21642284

  12. Mass spectrometric-based stable isotopic 2-aminobenzoic acid glycan mapping for rapid glycan screening of biotherapeutics.

    PubMed

    Prien, Justin M; Prater, Bradley D; Qin, Qiang; Cockrill, Steven L

    2010-02-15

    Fast, sensitive, robust methods for "high-level" glycan screening are necessary during various stages of a biotherapeutic product's lifecycle, including clone selection, process changes, and quality control for lot release testing. Traditional glycan screening involves chromatographic or electrophoretic separation-based methods, and, although reproducible, these methods can be time-consuming. Even ultrahigh-performance chromatographic and microfluidic integrated LC/MS systems, which work on the tens of minute time scale, become lengthy when hundreds of samples are to be analyzed. Comparatively, a direct infusion mass spectrometry (MS)-based glycan screening method acquires data on a millisecond time scale, exhibits exquisite sensitivity and reproducibility, and is amenable to automated peak annotation. In addition, characterization of glycan species via sequential mass spectrometry can be performed simultaneously. Here, we demonstrate a quantitative high-throughput MS-based mapping approach using stable isotope 2-aminobenzoic acid (2-AA) for rapid "high-level" glycan screening.

  13. Improving the apo-state detergent stability of NTS1 with CHESS for pharmacological and structural studies.

    PubMed

    Scott, Daniel J; Kummer, Lutz; Egloff, Pascal; Bathgate, Ross A D; Plückthun, Andreas

    2014-11-01

    The largest single class of drug targets is the G protein-coupled receptor (GPCR) family. Modern high-throughput methods for drug discovery require working with pure protein, but this has been a challenge for GPCRs, and thus the success of screening campaigns targeting soluble, catalytic protein domains has not yet been realized for GPCRs. Therefore, most GPCR drug screening has been cell-based, whereas the strategy of choice for drug discovery against soluble proteins is HTS using purified proteins coupled to structure-based drug design. While recent developments are increasing the chances of obtaining GPCR crystal structures, the feasibility of screening directly against purified GPCRs in the unbound state (apo-state) remains low. GPCRs exhibit low stability in detergent micelles, especially in the apo-state, over the time periods required for performing large screens. Recent methods for generating detergent-stable GPCRs, however, offer the potential for researchers to manipulate GPCRs almost like soluble enzymes, opening up new avenues for drug discovery. Here we apply cellular high-throughput encapsulation, solubilization and screening (CHESS) to the neurotensin receptor 1 (NTS1) to generate a variant that is stable in the apo-state when solubilized in detergents. This high stability facilitated the crystal structure determination of this receptor and also allowed us to probe the pharmacology of detergent-solubilized, apo-state NTS1 using robotic ligand binding assays. NTS1 is a target for the development of novel antipsychotics, and thus CHESS-stabilized receptors represent exciting tools for drug discovery. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Guest Editor's introduction: Special issue on distributed virtual environments

    NASA Astrophysics Data System (ADS)

    Lea, Rodger

    1998-09-01

    Distributed virtual environments (DVEs) combine technology from 3D graphics, virtual reality and distributed systems to provide an interactive 3D scene that supports multiple participants. Each participant has a representation in the scene, often known as an avatar, and is free to navigate through the scene and interact with both the scene and other viewers of the scene. Changes to the scene, for example, position changes of one avatar as the associated viewer navigates through the scene, or changes to objects in the scene via manipulation, are propagated in real time to all viewers. This ensures that all viewers of a shared scene `see' the same representation of it, allowing sensible reasoning about the scene. Early work on such environments was restricted to their use in simulation, in particular in military simulation. However, over recent years a number of interesting and potentially far-reaching attempts have been made to exploit the technology for a range of other uses, including: Social spaces. Such spaces can be seen as logical extensions of the familiar text chat space. In 3D social spaces avatars, representing participants, can meet in shared 3D scenes and in addition to text chat can use visual cues and even in some cases spatial audio. Collaborative working. A number of recent projects have attempted to explore the use of DVEs to facilitate computer-supported collaborative working (CSCW), where the 3D space provides a context and work space for collaboration. Gaming. The shared 3D space is already familiar, albeit in a constrained manner, to the gaming community. DVEs are a logical superset of existing 3D games and can provide a rich framework for advanced gaming applications. e-commerce. The ability to navigate through a virtual shopping mall and to look at, and even interact with, 3D representations of articles has appealed to the e-commerce community as it searches for the best method of presenting merchandise to electronic consumers. The technology needed to support these systems crosses a number of disciplines in computer science. These include, but are certainly not limited to, real-time graphics for the accurate and realistic representation of scenes, group communications for the efficient update of shared consistent scene data, user interface modelling to exploit the use of the 3D representation and multimedia systems technology for the delivery of streamed graphics and audio-visual data into the shared scene. It is this intersection of technologies and the overriding need to provide visual realism that places such high demands on the underlying distributed systems infrastructure and makes DVEs such fertile ground for distributed systems research. Two examples serve to show how DVE developers have exploited the unique aspects of their domain. Communications. The usual tension between latency and throughput is particularly noticeable within DVEs. To ensure the timely update of multiple viewers of a particular scene requires that such updates be propagated quickly. However, the sheer volume of changes to any one scene calls for techniques that minimize the number of distinct updates that are sent to the network. Several techniques have been used to address this tension; these include the use of multicast communications, and in particular multicast in wide-area networks to reduce actual message traffic. Multicast has been combined with general group communications to partition updates to related objects or users of a scene. 
A less traditional approach has been the use of dead reckoning whereby a client application that visualizes the scene calculates position updates by extrapolating movement based on previous information. This allows the system to reduce the number of communications needed to update objects that move in a stable manner within the scene. Scaling. DVEs, especially those used for social spaces, are required to support large numbers of simultaneous users in potentially large shared scenes. The desire for scalability has driven different architectural designs, for example, the use of fully distributed architectures which scale well but often suffer performance costs versus centralized and hierarchical architectures in which the inverse is true. However, DVEs have also exploited the spatial nature of their domain to address scalability and have pioneered techniques that exploit the semantics of the shared space to reduce data updates and so allow greater scalability. Several of the systems reported in this special issue apply a notion of area of interest to partition the scene and so reduce the participants in any data updates. The specification of area of interest differs between systems. One approach has been to exploit a geographical notion, i.e. a regular portion of a scene, or a semantic unit, such as a room or building. Another approach has been to define the area of interest as a spatial area associated with an avatar in the scene. The five papers in this special issue have been chosen to highlight the distributed systems aspects of the DVE domain. The first paper, on the DIVE system, described by Emmanuel Frécon and Mårten Stenius explores the use of multicast and group communication in a fully peer-to-peer architecture. The developers of DIVE have focused on its use as the basis for collaborative work environments and have explored the issues associated with maintaining and updating large complicated scenes. The second paper, by Hiroaki Harada et al, describes the AGORA system, a DVE concentrating on social spaces and employing a novel communication technique that incorporates position update and vector information to support dead reckoning. The paper by Simon Powers et al explores the application of DVEs to the gaming domain. They propose a novel architecture that separates out higher-level game semantics - the conceptual model - from the lower-level scene attributes - the dynamic model, both running on servers, from the actual visual representation - the visual model - running on the client. They claim a number of benefits from this approach, including better predictability and consistency. Wolfgang Broll discusses the SmallView system which is an attempt to provide a toolkit for DVEs. One of the key features of SmallView is a sophisticated application level protocol, DWTP, that provides support for a variety of communication models. The final paper, by Chris Greenhalgh, discusses the MASSIVE system which has been used to explore the notion of awareness in the 3D space via the concept of `auras'. These auras define an area of interest for users and support a mapping between what a user is aware of, and what data update rate the communications infrastructure can support. We hope that this selection of papers will serve to provide a clear introduction to the distributed system issues faced by the DVE community and the approaches they have taken in solving them. 
    Finally, we wish to thank Hubert Le Van Gong for his tireless efforts in pulling together all these papers and both the referees and the authors of the papers for the time and effort in ensuring that their contributions teased out the interesting distributed systems issues for this special issue.
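
    Dead reckoning, as used by several of the systems surveyed, replaces per-frame position messages with occasional (time, position, velocity) updates that every client extrapolates locally, re-broadcasting only when the extrapolation error exceeds a threshold. The sketch below illustrates that idea; the threshold, motion model, and update format are invented for illustration, not taken from any particular system.

        # Minimal dead-reckoning sketch: re-broadcast only when extrapolation drifts too far.
        # Threshold and trajectory are illustrative.
        def extrapolate(update, t):
            """Predict position at time t from the last (t0, position, velocity) update."""
            t0, pos, vel = update
            return pos + vel * (t - t0)

        threshold = 0.3                     # tolerated divergence (metres)
        last_update = (0.0, 0.0, 1.0)       # initial state broadcast to peers
        updates_sent = 1

        for step in range(1, 11):
            t = step * 0.1
            true_pos = 1.0 * t + 0.4 * t * t                  # actual, slightly accelerating motion
            if abs(true_pos - extrapolate(last_update, t)) > threshold:
                last_update = (t, true_pos, 1.0 + 0.8 * t)    # re-broadcast with updated velocity
                updates_sent += 1

        print(f"sent {updates_sent} updates instead of 10 per-frame updates")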

  15. High-Throughput Sequencing of Plasma MicroRNA in Chronic Fatigue Syndrome/Myalgic Encephalomyelitis

    PubMed Central

    Brenu, Ekua W.; Ashton, Kevin J.; Batovska, Jana; Staines, Donald R.; Marshall-Gradisnik, Sonya M.

    2014-01-01

    Background: MicroRNAs (miRNAs) are known to regulate many biological processes and their dysregulation has been associated with a variety of diseases including Chronic Fatigue Syndrome/Myalgic Encephalomyelitis (CFS/ME). The recent discovery of stable and reproducible miRNA in plasma has raised the possibility that circulating miRNAs may serve as novel diagnostic markers. The objective of this study was to determine the role of plasma miRNA in CFS/ME. Results: Using Illumina high-throughput sequencing we identified 19 miRNAs that were differentially expressed in the plasma of CFS/ME patients in comparison to non-fatigued controls. Following RT-qPCR analysis, we were able to confirm the significant up-regulation of three miRNAs (hsa-miR-127-3p, hsa-miR-142-5p and hsa-miR-143-3p) in the CFS/ME patients. Conclusion: Our study is the first to identify circulating miRNAs from CFS/ME patients and also to confirm three differentially expressed circulating miRNAs in CFS/ME patients, providing a basis for further study to find useful CFS/ME biomarkers. PMID:25238588

  16. Towards ambient temperature-stable vaccines: the identification of thermally stabilizing liquid formulations for measles virus using an innovative high-throughput infectivity assay.

    PubMed

    Schlehuber, Lisa D; McFadyen, Iain J; Shu, Yu; Carignan, James; Duprex, W Paul; Forsyth, William R; Ho, Jason H; Kitsos, Christine M; Lee, George Y; Levinson, Douglas A; Lucier, Sarah C; Moore, Christopher B; Nguyen, Niem T; Ramos, Josephine; Weinstock, B André; Zhang, Junhong; Monagle, Julie A; Gardner, Colin R; Alvarez, Juan C

    2011-07-12

    As a result of thermal instability, some live attenuated viral (LAV) vaccines lose substantial potency from the time of manufacture to the point of administration. Developing regions lacking extensive, reliable refrigeration ("cold-chain") infrastructure are particularly vulnerable to vaccine failure, which in turn increases the burden of disease. Development of a robust, infectivity-based high throughput screening process for identifying thermostable vaccine formulations offers significant promise for vaccine development across a wide variety of LAV products. Here we describe a system that incorporates thermal stability screening into formulation design using heat labile measles virus as a prototype. The screening of >11,000 unique formulations resulted in the identification of liquid formulations with marked improvement over those used in commercial monovalent measles vaccines, with <1.0 log loss of activity after incubation for 8h at 40°C. The approach was shown to be transferable to a second unrelated virus, and therefore offers significant promise towards the optimization of formulation for LAV vaccine products. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Targeting α-synuclein oligomers by protein-fragment complementation for drug discovery in synucleinopathies.

    PubMed

    Moussaud, Simon; Malany, Siobhan; Mehta, Alka; Vasile, Stefan; Smith, Layton H; McLean, Pamela J

    2015-05-01

    Reducing the burden of α-synuclein oligomeric species represents a promising approach for disease-modifying therapies against synucleinopathies such as Parkinson's disease and dementia with Lewy bodies. However, the lack of efficient drug discovery strategies that specifically target α-synuclein oligomers has been a limitation to drug discovery programs. Here we describe an innovative strategy that harnesses the power of bimolecular protein-fragment complementation to monitor synuclein-synuclein interactions. We have developed two robust models to monitor α-synuclein oligomerization by generating novel stable cell lines expressing α-synuclein fusion proteins for either fluorescent or bioluminescent protein-fragment complementation under the tetracycline-controlled transcriptional activation system. A pilot screen was performed resulting in the identification of two potential hits, a p38 MAPK inhibitor and a casein kinase 2 inhibitor, thereby demonstrating the suitability of our protein-fragment complementation assay for the measurement of α-synuclein oligomerization in living cells at high throughput. The application of the strategy described herein to monitor α-synuclein oligomer formation in living cells with high throughput will facilitate drug discovery efforts for disease-modifying therapies against synucleinopathies and other proteinopathies.

  18. Development and Implementation of a High Throughput Screen for the Human Sperm-Specific Isoform of Glyceraldehyde 3-Phosphate Dehydrogenase (GAPDHS)

    PubMed Central

    Sexton, Jonathan Z; Danshina, Polina V; Lamson, David R; Hughes, Mark; House, Alan J; Yeh, Li-An; O’Brien, Deborah A; Williams, Kevin P

    2011-01-01

    Glycolytic isozymes that are restricted to the male germline are potential targets for the development of reversible, non-hormonal male contraceptives. GAPDHS, the sperm-specific isoform of glyceraldehyde-3-phosphate dehydrogenase, is an essential enzyme for glycolysis, making it an attractive target for rational drug design. Toward this goal, we have optimized and validated a high-throughput spectrophotometric assay for GAPDHS in 384-well format. The assay was stable over time and tolerant to DMSO. Whole plate validation experiments yielded Z’ values >0.8, indicating a robust assay for HTS. Two compounds were identified and confirmed from a test screen of the Prestwick collection. This assay was used to screen a diverse chemical library and identified fourteen small molecules that modulated the activity of recombinant purified GAPDHS with confirmed IC50 values ranging from 1.8 to 42 µM. These compounds may provide useful scaffolds as molecular tools to probe the role of GAPDHS in sperm motility and, in the longer term, to develop potent and selective GAPDHS inhibitors leading to novel contraceptive agents. PMID:21760877
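
    The Z' values quoted in this record follow the standard plate-statistics definition Z' = 1 - 3(sd_pos + sd_neg)/|mean_pos - mean_neg|. A minimal sketch of that calculation on hypothetical control-well data (the well values and array names are illustrative, not from the study):

```python
import numpy as np

def z_prime(positive_wells, negative_wells):
    """Z'-factor: 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    mu_p, sd_p = np.mean(positive_wells), np.std(positive_wells, ddof=1)
    mu_n, sd_n = np.mean(negative_wells), np.std(negative_wells, ddof=1)
    return 1.0 - 3.0 * (sd_p + sd_n) / abs(mu_p - mu_n)

# Hypothetical 384-well validation plate: half full-activity wells, half no-enzyme wells
rng = np.random.default_rng(0)
pos = rng.normal(1.00, 0.03, 192)   # illustrative GAPDHS reaction rates
neg = rng.normal(0.05, 0.02, 192)   # illustrative background rates
print(f"Z' = {z_prime(pos, neg):.2f}")   # values > 0.5 are generally considered HTS-ready
```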

  19. Improvement of an automated protein crystal exchange system PAM for high-throughput data collection

    PubMed Central

    Hiraki, Masahiko; Yamada, Yusuke; Chavas, Leonard M. G.; Wakatsuki, Soichi; Matsugaki, Naohiro

    2013-01-01

    Photon Factory Automated Mounting system (PAM) protein crystal exchange systems are available at the following Photon Factory macromolecular beamlines: BL-1A, BL-5A, BL-17A, AR-NW12A and AR-NE3A. The beamline AR-NE3A has been constructed for high-throughput macromolecular crystallography and is dedicated to structure-based drug design. The PAM liquid-nitrogen Dewar can store a maximum of three SSRL cassettes. Therefore, users have to interrupt their experiments and replace the cassettes when using four or more of them during their beam time. An investigation showed that four or more cassettes were used only at AR-NE3A. For continuous automated data collection, the size of the liquid-nitrogen Dewar for the AR-NE3A PAM was increased, doubling the capacity. Calibration experiments were performed repeatedly to check the calibration with the new Dewar and the cassette stand. Compared with the current system, the parameters of the new system are shown to be stable. PMID:24121334

  20. Crystallization of bovine insulin on a flow-free droplet-based platform

    NASA Astrophysics Data System (ADS)

    Chen, Fengjuan; Du, Guanru; Yin, Di; Yin, Ruixue; Zhang, Hongbo; Zhang, Wenjun; Yang, Shih-Mo

    2017-03-01

    Crystallization is an important process in the pharmaceutical manufacturing industry. In this work, we report a study on creating zinc-free crystals of bovine insulin on a flow-free droplet-based platform we previously developed. The benefit of this platform is its promise to create a single crystal type in a simpler and more stable environment and at high throughput. The experimental results show that the bovine insulin forms a rhombic dodecahedral shape and that the coefficient of variation (CV) in the size of the crystals is less than 5%. These results are very promising for insulin production.

  1. Interplay of Interfacial Layers and Blend Composition To Reduce Thermal Degradation of Polymer Solar Cells at High Temperature.

    PubMed

    Ben Dkhil, Sadok; Pfannmöller, Martin; Schröder, Rasmus R; Alkarsifi, Riva; Gaceur, Meriem; Köntges, Wolfgang; Heidari, Hamed; Bals, Sara; Margeat, Olivier; Ackermann, Jörg; Videlot-Ackermann, Christine

    2018-01-31

    The thermal stability of printed polymer solar cells at elevated temperatures needs to be improved to achieve high-throughput fabrication including annealing steps as well as long-term stability. During device processing, thermal annealing impacts both the organic photoactive layer and the two interfacial layers, which makes detailed studies of the degradation mechanism delicate. A recently identified thermally stable poly[[4,8-bis[(2-ethylhexyl)oxy]benzo[1,2-b:4,5-b']dithiophene-2,6-diyl][3-fluoro-2-[(2-ethylhexyl)carbonyl]thieno[3,4-b]thiophenediyl

  2. Micro- and nanofabrication methods for ion channel reconstitution in bilayer lipid membranes

    NASA Astrophysics Data System (ADS)

    Tadaki, Daisuke; Yamaura, Daichi; Arata, Kohei; Ohori, Takeshi; Ma, Teng; Yamamoto, Hideaki; Niwano, Michio; Hirano-Iwata, Ayumi

    2018-03-01

    The self-assembled bilayer lipid membrane (BLM) forms the basic structure of the cell membrane and serves as a major barrier against ion movement. Ion channel proteins function as gated pores that permit ion permeation across the BLM. The reconstitution of ion channel proteins in artificially formed BLMs represents a well-defined system for investigating channel functions and screening drug effects on ion channels. In this review, we will discuss our recent microfabrication approaches to the formation of stable BLMs containing ion channel proteins as a potential platform for next-generation drug screening systems. BLMs formed in a microaperture having a tapered edge exhibited highly stable properties, such as a lifetime of ∼65 h and tolerance to solution changes even after the incorporation of the human ether-a-go-go-related gene (hERG) channel. We also explore a new method of efficiently incorporating human ion channels into BLMs by centrifugation. Our approaches to the formation of stable BLMs and efficient channel incorporation markedly improve the experimental efficiency of BLM reconstitution systems, leading to the realization of a BLM-based high-throughput platform for functional assays of various ion channels.

  3. Tunable molecular orientation and elevated thermal stability of vapor-deposited organic semiconductors

    PubMed Central

    Walters, Diane M.; Lyubimov, Ivan; de Pablo, Juan J.; Ediger, M. D.

    2015-01-01

    Physical vapor deposition is commonly used to prepare organic glasses that serve as the active layers in light-emitting diodes, photovoltaics, and other devices. Recent work has shown that orienting the molecules in such organic semiconductors can significantly enhance device performance. We apply a high-throughput characterization scheme to investigate the effect of the substrate temperature (Tsubstrate) on glasses of three organic molecules used as semiconductors. The optical and material properties are evaluated with spectroscopic ellipsometry. We find that molecular orientation in these glasses is continuously tunable and controlled by Tsubstrate/Tg, where Tg is the glass transition temperature. All three molecules can produce highly anisotropic glasses; the dependence of molecular orientation upon substrate temperature is remarkably similar and nearly independent of molecular length. All three compounds form “stable glasses” with high density and thermal stability, and have properties similar to stable glasses prepared from model glass formers. Simulations reproduce the experimental trends and explain molecular orientation in the deposited glasses in terms of the surface properties of the equilibrium liquid. By showing that organic semiconductors form stable glasses, these results provide an avenue for systematic performance optimization of active layers in organic electronics. PMID:25831545

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.

    Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey’s stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. The residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.

  6. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    DOE PAGES

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; ...

    2016-06-01

    Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
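
    The three records above compute systematic chromatic errors by comparing synthetic magnitudes of a source through the observed system throughput and through the fiducial "natural-system" throughput. A highly simplified sketch of that comparison for one source SED, assuming photon-weighted AB magnitudes and purely illustrative throughput curves (this is not the DES calibration code; in practice the offset is also referenced against the SED of the calibration stars):

```python
import numpy as np

def _trapz(y, x):
    # simple trapezoidal rule, kept explicit to avoid NumPy version differences
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def synthetic_ab_mag(wavelength, f_nu, throughput):
    """Photon-weighted synthetic AB magnitude of an SED f_nu (erg s^-1 cm^-2 Hz^-1)
    seen through a dimensionless system throughput S(lambda)."""
    num = _trapz(f_nu * throughput / wavelength, wavelength)
    den = _trapz(throughput / wavelength, wavelength)
    return -2.5 * np.log10(num / den) - 48.60

# Illustrative r-band-like window (nm), a red power-law SED, and two throughputs:
lam = np.linspace(500.0, 700.0, 400)
sed_red = 3.631e-20 * (lam / 600.0) ** 2                 # made-up red source
s_natural = np.exp(-0.5 * ((lam - 620.0) / 60.0) ** 2)   # fiducial "natural" throughput
s_observed = s_natural * (1.0 - 2e-4 * (lam - 620.0))    # mild wavelength-dependent tilt

dm = synthetic_ab_mag(lam, sed_red, s_observed) - synthetic_ab_mag(lam, sed_red, s_natural)
print(f"chromatic offset for this SED: {1000 * dm:.1f} mmag")
```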

  7. Novel Approach for High-Throughput Metabolic Screening of Whole Plants by Stable Isotopes

    PubMed Central

    Beckers, Veronique; Kiep, Katina; Becker, Horst; Bläsing, Oliver Ernst; Fuchs, Regine

    2016-01-01

    Here, we demonstrate whole-plant metabolic profiling by stable isotope labeling and combustion isotope-ratio mass spectrometry for precise quantification of assimilation, translocation, and molecular reallocation of 13CO2 and 15NH4NO3. The technology was applied to rice (Oryza sativa) plants at different growth stages. For adult plants, 13CO2 labeling revealed enhanced carbon assimilation of the flag leaf from flowering to late grain-filling stage, linked to efficient translocation into the panicle. Simultaneous 13CO2 and 15NH4NO3 labeling with hydroponically grown seedlings was used to quantify the relative distribution of carbon and nitrogen. Two hours after labeling, assimilated carbon was mainly retained in the shoot (69%), whereas 7% entered the root and 24% was respired. Nitrogen, taken up via the root, was largely translocated into the shoot (85%). Salt-stressed seedlings showed decreased uptake and translocation of nitrogen (69%), whereas carbon metabolism was unaffected. Coupled to a gas chromatograph, labeling analysis provided enrichment of proteinogenic amino acids. This revealed significant protein synthesis in the panicle of adult plants, whereas protein biosynthesis in adult leaves was 8-fold lower than that in seedling shoots. Generally, amino acid enrichment was similar among biosynthetic families and allowed us to infer labeling dynamics of their precursors. On this basis, early and strong 13C enrichment of Embden-Meyerhof-Parnas pathway and pentose phosphate pathway intermediates indicated high activity of these routes. Applied to mode-of-action analysis of herbicides, the approach showed severe disturbance in the synthesis of branched-chain amino acids upon treatment with imazapyr. The established technology represents a breakthrough for quantitative high-throughput plant metabolic phenotyping. PMID:26966172

  8. Novel isotopic N, N-Dimethyl Leucine (iDiLeu) Reagents Enable Absolute Quantification of Peptides and Proteins Using a Standard Curve Approach

    NASA Astrophysics Data System (ADS)

    Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun

    2015-01-01

    Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N, N-dimethyl leucine (iDiLeu). These labels contain an amine-reactive triazine ester group, are cost-effective because of their synthetic simplicity, and increase throughput compared with previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods are validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard curve creation and analyte quantification in one run (<8% error).
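
    The four-point standard curve described above amounts to a linear fit of reporter peak area against the known amounts carried by four of the iDiLeu channels, which is then inverted for the sample channel. A minimal sketch with made-up numbers (the spike levels, peak areas and fmol scale are illustrative assumptions, not data from the study):

```python
import numpy as np

# Hypothetical peak areas for one peptide from a single LC-MS run.
# Four iDiLeu channels carry known standard amounts; the fifth carries the sample.
standard_amount_fmol = np.array([10.0, 50.0, 250.0, 1000.0])   # assumed spike levels
standard_peak_area   = np.array([1.1e5, 5.3e5, 2.6e6, 1.05e7]) # illustrative areas
sample_peak_area = 3.9e6

# Fit a straight line (area = slope*amount + intercept) and invert it for the sample.
slope, intercept = np.polyfit(standard_amount_fmol, standard_peak_area, 1)
sample_amount = (sample_peak_area - intercept) / slope
print(f"estimated sample amount: {sample_amount:.0f} fmol")
```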

  9. Domain selection combined with improved cloning strategy for high throughput expression of higher eukaryotic proteins

    PubMed Central

    Chen, Yunjia; Qiu, Shihong; Luan, Chi-Hao; Luo, Ming

    2007-01-01

    Background Expression of higher eukaryotic genes as soluble, stable recombinant proteins is still a bottleneck step in biochemical and structural studies of novel proteins today. Correct identification of stable domains/fragments within the open reading frame (ORF), combined with proper cloning strategies, can greatly enhance the success rate when higher eukaryotic proteins are expressed as these domains/fragments. Furthermore, an HTP cloning pipeline incorporated with bioinformatics domain/fragment selection methods will be beneficial to studies of structural and functional genomics/proteomics. Results With bioinformatics tools, we developed a domain/domain boundary prediction (DDBP) method, which was trained by available experimental data. Combined with an improved cloning strategy, DDBP had been applied to 57 proteins from C. elegans. Expression and purification results showed a 10-fold increase in the number of purified proteins obtained. Based on the DDBP method, the improved GATEWAY cloning strategy and a robotic platform, we constructed a high throughput (HTP) cloning pipeline, including PCR primer design, PCR, BP reaction, transformation, plating, colony picking and entry clone extraction, which has been successfully applied to 90 C. elegans genes, 88 Brucella genes, and 188 human genes. More than 97% of the targeted genes were obtained as entry clones. This pipeline has a modular design and can adopt different operations for a variety of cloning/expression strategies. Conclusion The DDBP method and improved cloning strategy were satisfactory. The cloning pipeline, combined with our recombinant protein HTP expression pipeline and the crystal screening robots, constitutes a complete platform for structural genomics/proteomics. This platform will increase the success rate of purification and crystallization dramatically and promote the further advancement of structural genomics/proteomics. PMID:17663785

  10. Urinary Amino Acid Analysis: A Comparison of iTRAQ®-LC-MS/MS, GC-MS, and Amino Acid Analyzer

    PubMed Central

    Kaspar, Hannelore; Dettmer, Katja; Chan, Queenie; Daniels, Scott; Nimkar, Subodh; Daviglus, Martha L.; Stamler, Jeremiah; Elliott, Paul; Oefner, Peter J.

    2009-01-01

    Urinary amino acid analysis is typically done by cation-exchange chromatography followed by post-column derivatization with ninhydrin and UV detection. This method lacks throughput and specificity. Two recently introduced stable isotope ratio mass spectrometric methods promise to overcome those shortcomings. Using two blinded sets of urine replicates and a certified amino acid standard, we compared the precision and accuracy of gas chromatography/mass spectrometry (GC-MS) and liquid chromatography-tandem mass spectrometry (LC-MS/MS) of propyl chloroformate and iTRAQ® derivatized amino acids, respectively, to conventional amino acid analysis. The GC-MS method builds on the direct derivatization of amino acids in diluted urine with propyl chloroformate, GC separation and mass spectrometric quantification of derivatives using stable isotope labeled standards. The LC-MS/MS method requires prior urinary protein precipitation followed by labeling of urinary and standard amino acids with iTRAQ® tags containing different cleavable reporter ions distinguishable by MS/MS fragmentation. Means and standard deviations of percent technical error (%TE) computed for 20 amino acids determined by amino acid analyzer, GC-MS, and iTRAQ®-LC-MS/MS analyses of 33 duplicate and triplicate urine specimens were 7.27±5.22, 21.18±10.94, and 18.34±14.67, respectively. Corresponding values for 13 amino acids determined in a second batch of 144 urine specimens measured in duplicate or triplicate were 8.39±5.35, 6.23±3.84, and 35.37±29.42. Both GC-MS and iTRAQ®-LC-MS/MS are suited for high-throughput amino acid analysis, with the former offering at present higher reproducibility and completely automated sample pretreatment, while the latter covers more amino acids and related amines. PMID:19481989

  11. Urinary amino acid analysis: a comparison of iTRAQ-LC-MS/MS, GC-MS, and amino acid analyzer.

    PubMed

    Kaspar, Hannelore; Dettmer, Katja; Chan, Queenie; Daniels, Scott; Nimkar, Subodh; Daviglus, Martha L; Stamler, Jeremiah; Elliott, Paul; Oefner, Peter J

    2009-07-01

    Urinary amino acid analysis is typically done by cation-exchange chromatography followed by post-column derivatization with ninhydrin and UV detection. This method lacks throughput and specificity. Two recently introduced stable isotope ratio mass spectrometric methods promise to overcome those shortcomings. Using two blinded sets of urine replicates and a certified amino acid standard, we compared the precision and accuracy of gas chromatography/mass spectrometry (GC-MS) and liquid chromatography-tandem mass spectrometry (LC-MS/MS) of propyl chloroformate and iTRAQ derivatized amino acids, respectively, to conventional amino acid analysis. The GC-MS method builds on the direct derivatization of amino acids in diluted urine with propyl chloroformate, GC separation and mass spectrometric quantification of derivatives using stable isotope labeled standards. The LC-MS/MS method requires prior urinary protein precipitation followed by labeling of urinary and standard amino acids with iTRAQ tags containing different cleavable reporter ions distinguishable by MS/MS fragmentation. Means and standard deviations of percent technical error (%TE) computed for 20 amino acids determined by amino acid analyzer, GC-MS, and iTRAQ-LC-MS/MS analyses of 33 duplicate and triplicate urine specimens were 7.27+/-5.22, 21.18+/-10.94, and 18.34+/-14.67, respectively. Corresponding values for 13 amino acids determined in a second batch of 144 urine specimens measured in duplicate or triplicate were 8.39+/-5.35, 6.23+/-3.84, and 35.37+/-29.42. Both GC-MS and iTRAQ-LC-MS/MS are suited for high-throughput amino acid analysis, with the former offering at present higher reproducibility and completely automated sample pretreatment, while the latter covers more amino acids and related amines.
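
    The percent technical error (%TE) statistics reported in the two records above summarize agreement between replicate measurements of the same specimen. One common convention (assumed here; the authors' exact formula may differ) is Dahlberg's error, sqrt(sum(d_i^2) / 2n) over duplicate differences d_i, expressed as a percentage of the overall mean:

```python
import numpy as np

def percent_technical_error(rep1, rep2):
    """Percent technical error for paired duplicate measurements of one analyte
    (Dahlberg-style definition; other %TE conventions exist)."""
    rep1, rep2 = np.asarray(rep1, float), np.asarray(rep2, float)
    d = rep1 - rep2
    te = np.sqrt(np.sum(d ** 2) / (2 * len(d)))        # absolute technical error
    return 100.0 * te / np.mean(np.concatenate([rep1, rep2]))

# Hypothetical duplicate glycine concentrations (umol/L) from 6 urine specimens
a = [112.0, 98.0, 143.0, 87.0, 120.0, 101.0]
b = [108.0, 102.0, 139.0, 90.0, 126.0, 97.0]
print(f"%TE = {percent_technical_error(a, b):.1f}")
```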

  12. Novel Approach for High-Throughput Metabolic Screening of Whole Plants by Stable Isotopes.

    PubMed

    Dersch, Lisa Maria; Beckers, Veronique; Rasch, Detlev; Melzer, Guido; Bolten, Christoph; Kiep, Katina; Becker, Horst; Bläsing, Oliver Ernst; Fuchs, Regine; Ehrhardt, Thomas; Wittmann, Christoph

    2016-05-01

    Here, we demonstrate whole-plant metabolic profiling by stable isotope labeling and combustion isotope-ratio mass spectrometry for precise quantification of assimilation, translocation, and molecular reallocation of (13)CO2 and (15)NH4NO3. The technology was applied to rice (Oryza sativa) plants at different growth stages. For adult plants, (13)CO2 labeling revealed enhanced carbon assimilation of the flag leaf from flowering to late grain-filling stage, linked to efficient translocation into the panicle. Simultaneous (13)CO2 and (15)NH4NO3 labeling with hydroponically grown seedlings was used to quantify the relative distribution of carbon and nitrogen. Two hours after labeling, assimilated carbon was mainly retained in the shoot (69%), whereas 7% entered the root and 24% was respired. Nitrogen, taken up via the root, was largely translocated into the shoot (85%). Salt-stressed seedlings showed decreased uptake and translocation of nitrogen (69%), whereas carbon metabolism was unaffected. Coupled to a gas chromatograph, labeling analysis provided enrichment of proteinogenic amino acids. This revealed significant protein synthesis in the panicle of adult plants, whereas protein biosynthesis in adult leaves was 8-fold lower than that in seedling shoots. Generally, amino acid enrichment was similar among biosynthetic families and allowed us to infer labeling dynamics of their precursors. On this basis, early and strong (13)C enrichment of Embden-Meyerhof-Parnas pathway and pentose phosphate pathway intermediates indicated high activity of these routes. Applied to mode-of-action analysis of herbicides, the approach showed severe disturbance in the synthesis of branched-chain amino acids upon treatment with imazapyr. The established technology represents a breakthrough for quantitative high-throughput plant metabolic phenotyping. © 2016 American Society of Plant Biologists. All Rights Reserved.

  13. Scalable 96-well Plate Based iPSC Culture and Production Using a Robotic Liquid Handling System.

    PubMed

    Conway, Michael K; Gerger, Michael J; Balay, Erin E; O'Connell, Rachel; Hanson, Seth; Daily, Neil J; Wakatsuki, Tetsuro

    2015-05-14

    Continued advancement in pluripotent stem cell culture is closing the gap between bench and bedside for using these cells in regenerative medicine, drug discovery and safety testing. In order to produce stem cell derived biopharmaceutics and cells for tissue engineering and transplantation, a cost-effective cell-manufacturing technology is essential. Maintenance of pluripotency and stable performance of cells in downstream applications (e.g., cell differentiation) over time is paramount to large scale cell production. Yet this can be difficult to achieve, especially if cells are cultured manually, where the operator can introduce significant variability and scale-up can be prohibitively expensive. To enable high-throughput, large-scale stem cell production and remove operator influence, novel stem cell culture protocols using a bench-top multi-channel liquid handling robot were developed that require minimal technician involvement or experience. With these protocols human induced pluripotent stem cells (iPSCs) were cultured in feeder-free conditions directly from a frozen stock and maintained in 96-well plates. Depending on cell line and desired scale-up rate, the operator can easily determine when to passage based on a series of images showing the optimal colony densities for splitting. Then the necessary reagents are prepared to perform a colony split to new plates without a centrifugation step. After 20 passages (~3 months), two iPSC lines maintained stable karyotypes, expressed stem cell markers, and differentiated into cardiomyocytes with high efficiency. The system can perform subsequent high-throughput screening of new differentiation protocols or genetic manipulation designed for 96-well plates. This technology will reduce the labor and technical burden to produce large numbers of identical stem cells for a myriad of applications.

  14. Multimedia And Internetworking Architecture Infrastructure On Interactive E-Learning System

    NASA Astrophysics Data System (ADS)

    Indah, K. A. T.; Sukarata, G.

    2018-01-01

    Interactive e-learning is a distance learning method that involves information technology, electronic systems or computers as a means of learning, used for a teaching and learning process that is carried out without direct face-to-face contact between teacher and student. A strong dependence on emerging technologies greatly influences the way in which the architecture is designed to produce a powerful interactive e-learning network. In this paper, we analyze an architecture model in which learning can be done interactively, involving many participants (N-way synchronized distance learning) using video conferencing technology. A broadband Internet network is also used, together with multicast techniques, so that bandwidth usage can be made efficient.

  15. ISIS and META projects

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth; Cooper, Robert; Marzullo, Keith

    1990-01-01

    The ISIS project has developed a new methodology, virtual synchrony, for writing robust distributed software. High performance multicast, large scale applications, and wide area networks are the focus of interest. Several interesting applications that exploit the strengths of ISIS, including an NFS-compatible replicated file system, are being developed. The META project addresses distributed control in a soft real-time environment incorporating feedback. This domain encompasses examples as diverse as monitoring inventory and consumption on a factory floor, and performing load-balancing on a distributed computing system. One of the first uses of META is for distributed application management: the tasks of configuring a distributed program, dynamically adapting to failures, and monitoring its performance. Recent progress and current plans are reported.

  16. Parallel Electrochemical Treatment System and Application for Identifying Acid-Stable Oxygen Evolution Electrocatalysts

    DOE PAGES

    Jones, Ryan J. R.; Shinde, Aniketa; Guevarra, Dan; ...

    2015-01-05

    Many energy technologies require electrochemical stability or preactivation of functional materials. Due to the long experiment duration required for either electrochemical preactivation or evaluation of operational stability, parallel screening is required to enable high-throughput experimentation. We found that imposing operational electrochemical conditions on a library of materials in parallel creates several opportunities for experimental artifacts. We discuss the electrochemical engineering principles and operational parameters that mitigate artifacts in the parallel electrochemical treatment system. We also demonstrate the effects of resistive losses within the planar working electrode through a combination of finite element modeling and illustrative experiments. Operation of the parallel-plate, membrane-separated electrochemical treatment system is demonstrated by exposing a composition library of mixed metal oxides to oxygen evolution conditions in 1 M sulfuric acid for 2 h. This application is particularly important because the electrolysis and photoelectrolysis of water are promising future energy technologies inhibited by the lack of highly active, acid-stable catalysts containing only earth-abundant elements.

  17. Rapid, quantitative analysis of ppm/ppb nicotine using surface-enhanced Raman scattering from polymer-encapsulated Ag nanoparticles (gel-colls).

    PubMed

    Bell, Steven E J; Sirimuthu, Narayana M S

    2004-11-01

    Rapid, quantitative SERS analysis of nicotine at ppm/ppb levels has been carried out using stable and inexpensive polymer-encapsulated Ag nanoparticles (gel-colls). The strongest nicotine band (1030 cm⁻¹) was measured against a d5-pyridine internal standard (974 cm⁻¹) which was introduced during preparation of the stock gel-colls. Calibration plots of I(nic)/I(pyr) against the concentration of nicotine were non-linear, but plotting I(nic)/I(pyr) against [nicotine]^x (x = 0.6-0.75, depending on the exact experimental conditions) gave linear calibrations over the range 0.1-10 ppm with R² typically ca. 0.998. The RMS prediction error was found to be 0.10 ppm when the gel-colls were used for quantitative determination of unknown nicotine samples at the 1-5 ppm level. The main advantages of the method are that the gel-colls constitute a highly stable and reproducible SERS medium that allows high-throughput (50 samples h⁻¹) measurements.
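
    The calibration described above linearizes the intensity ratio by raising the concentration to a power x before fitting. A small sketch that searches x over the reported 0.6-0.75 neighbourhood and inverts the fit for an unknown (the concentrations and ratios below are illustrative, not the published data):

```python
import numpy as np

# Hypothetical calibration data: nicotine concentration (ppm) and the
# SERS intensity ratio I(nicotine, 1030 cm^-1) / I(d5-pyridine, 974 cm^-1)
conc = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])
ratio = np.array([0.08, 0.21, 0.33, 0.50, 0.88, 1.38])  # illustrative values

best = None
for x in np.arange(0.50, 0.91, 0.01):            # scan around the reported exponent range
    slope, intercept = np.polyfit(conc ** x, ratio, 1)
    pred = slope * conc ** x + intercept
    r2 = 1.0 - np.sum((ratio - pred) ** 2) / np.sum((ratio - ratio.mean()) ** 2)
    if best is None or r2 > best[3]:
        best = (x, slope, intercept, r2)

x, slope, intercept, r2 = best
print(f"best exponent x = {x:.2f}, R^2 = {r2:.4f}")

# Predict an unknown sample from its measured ratio by inverting the linear fit:
unknown_ratio = 0.60
print(f"estimated nicotine: {((unknown_ratio - intercept) / slope) ** (1.0 / x):.2f} ppm")
```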

  18. Efficient 41Ca measurements for biomedical applications

    NASA Astrophysics Data System (ADS)

    Vockenhuber, C.; Schulze-König, T.; Synal, H.-A.; Aeberli, I.; Zimmermann, M. B.

    2015-10-01

    We present the performance of 41Ca measurements using low-energy Accelerator Mass Spectrometry (AMS) at the 500 kV facility TANDY at ETH Zurich. We optimized the measurement procedure for biomedical applications, where reliability and high sample throughput are required. The main challenge for AMS measurements of 41Ca is the interfering stable isobar 41K. We use a simplified sample preparation procedure to produce calcium fluoride (CaF2) and extract calcium tri-fluoride (CaF3-) ions to suppress the stable isobar 41K. Although 41K is not completely suppressed, we reach 41Ca/40Ca background levels in the 10⁻¹² range, which is adequate for biomedical studies. With helium as a stripper gas we can use charge state 2+ at high transmission (∼50%). The new measurement procedure, with approximately 10× improved efficiency and higher accuracy due to 41K correction, allowed us to measure more than 600 samples for a large biomedical study within only a few weeks of measurement time.

  19. Transparent Nanopore Cavity Arrays Enable Highly Parallelized Optical Studies of Single Membrane Proteins on Chip.

    PubMed

    Diederichs, Tim; Nguyen, Quoc Hung; Urban, Michael; Tampé, Robert; Tornow, Marc

    2018-06-13

    Membrane proteins involved in transport processes are key targets for pharmaceutical research and industry. Despite continuous improvements and new developments in the field of electrical readouts for the analysis of transport kinetics, a well-suited methodology for high-throughput characterization of single transporters with nonionic substrates and slow turnover rates is still lacking. Here, we report on a novel architecture of silicon chips with embedded nanopore microcavities, based on a silicon-on-insulator technology for high-throughput optical readouts. Arrays containing more than 14 000 inverted-pyramidal cavities of 50 femtoliter volumes and 80 nm circular pore openings were constructed via high-resolution electron-beam lithography in combination with reactive ion etching and anisotropic wet etching. These cavities feature both an optically transparent bottom and top cap. Atomic force microscopy analysis reveals an overall extremely smooth chip surface, particularly in the vicinity of the nanopores, which exhibits well-defined edges. Our unprecedented transparent chip design provides parallel and independent fluorescent readout of both cavities and buffer reservoir for unbiased single-transporter recordings. Spreading of large unilamellar vesicles with efficiencies up to 96% created nanopore-supported lipid bilayers, which are stable for more than 1 day. A high lipid mobility in the supported membrane was determined by fluorescence recovery after photobleaching. Flux kinetics of α-hemolysin were characterized at single-pore resolution with a rate constant of 0.96 ± 0.06 × 10⁻³ s⁻¹. Here, we deliver an ideal chip platform for pharmaceutical research, which features high parallelism and throughput, synergistically combined with single-transporter resolution.
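
    The single-pore rate constant quoted above is the kind of parameter obtained by fitting each cavity's fluorescence trace to a first-order filling model F(t) = F_inf(1 - exp(-kt)). The model form and all numbers below are assumptions for illustration, not the authors' analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order_fill(t, f_inf, k):
    """Cavity fluorescence for single-pore influx, assuming first-order kinetics."""
    return f_inf * (1.0 - np.exp(-k * t))

# Hypothetical per-cavity fluorescence trace (a.u.) sampled every 60 s
t = np.arange(0, 3600, 60, dtype=float)
signal = first_order_fill(t, 1000.0, 1.0e-3) + np.random.default_rng(1).normal(0, 15, t.size)

(f_inf, k), cov = curve_fit(first_order_fill, t, signal, p0=(signal.max(), 1e-3))
print(f"fitted rate constant k = {k:.2e} s^-1 (+/- {np.sqrt(cov[1, 1]):.1e})")
```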

  20. Design and synthesis of the BODIPY-BSA complex for biological applications.

    PubMed

    Vedamalai, Mani; Gupta, Iti

    2018-02-01

    A quinoxaline-functionalized styryl-BODIPY derivative (S1) was synthesized by microwave-assisted Knoevenagel condensation. It exhibited fluorescence enhancement upon micro-encapsulation into the hydrophobic cavity of bovine serum albumin (BSA). The S1-BSA complex was characterized systematically using ultraviolet (UV)-visible absorption, fluorescence emission, kinetics, circular dichroism and time-resolved lifetime measurements. The binding of BSA towards S1 was strong and was found to be stable over a period of days. The studies showed that the S1-BSA complex could be used as a new biomaterial for fluorescence-based high-throughput assays for kinase enzymes. Copyright © 2017 John Wiley & Sons, Ltd.

  1. Parallelization of Catalytic Packed-Bed Microchannels with Pressure-Drop Microstructures for Gas-Liquid Multiphase Reactions

    NASA Astrophysics Data System (ADS)

    Murakami, Sunao; Ohtaki, Kenichiro; Matsumoto, Sohei; Inoue, Tomoya

    2012-06-01

    High-throughput and stable treatments are required to achieve the practical production of chemicals with microreactors. However, flow maldistribution among parallelized microchannels has been a critical problem in achieving the productive use of multichannel microreactors under multiphase flow conditions. In this study, we designed and fabricated a new glass four-channel catalytic packed-bed microreactor for the scale-up of gas-liquid multiphase chemical reactions. We embedded microstructures generating high pressure losses at the upstream side of each packed bed, and experimentally confirmed the efficacy of the microstructures in decreasing the maldistribution of the gas-liquid flow to the parallel microchannels.

  2. High-throughput formation and control of monodisperse liquid crystals droplets driven by an alternating current electric field in a microfluidic device

    NASA Astrophysics Data System (ADS)

    Belloul, M.; Bartolo, J.-F.; Ziraoui, B.; Coldren, F.; Taly, V.; El Abed, A. I.

    2013-07-01

    We investigate the effect of an applied ac high voltage on a confined stable nematic liquid crystal (LC) in a microfluidic device and show that this actuation leads to the formation of highly monodisperse microdroplets with an unexpectedly constant mean size over a large interval of the forcing frequency F and with a droplet production frequency f ≃ 2F. We also show that, despite the nonlinear nature of the droplet formation mechanism, droplet size and size distribution are governed simply by the LC flow rate Qd and the forcing frequency F.

  3. Inverse hexagonal and cubic micellar lyotropic liquid crystalline phase behaviour of novel double chain sugar-based amphiphiles.

    PubMed

    Feast, George C; Lepitre, Thomas; Tran, Nhiem; Conn, Charlotte E; Hutt, Oliver E; Mulet, Xavier; Drummond, Calum J; Savage, G Paul

    2017-03-01

    The lyotropic phase behaviour of a library of sugar-based amphiphiles was investigated using high-throughput small-angle X-ray scattering (SAXS). Double unsaturated-chain monosaccharide amphiphiles formed inverse hexagonal and cubic micellar (Fd3m) lyotropic phases under excess water conditions. A galactose-oleyl amphiphile from the library was subsequently formulated into hexosome nanoparticles, which have potential uses as drug delivery vehicles. The nanoparticles were shown to be stable at elevated temperatures and non-cytotoxic up to at least 200 μg mL⁻¹. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.

  4. Detection and Reconstruction of Circular RNAs from Transcriptomic Data.

    PubMed

    Zheng, Yi; Zhao, Fangqing

    2018-01-01

    Recent studies have shown that circular RNAs (circRNAs) are a novel class of abundant, stable, and ubiquitous noncoding RNA molecules in eukaryotic organisms. Comprehensive detection and reconstruction of circRNAs from high-throughput transcriptome data is an initial step in studying their biogenesis and function. Several tools have been developed to deal with this issue, but they require many steps and are difficult to use. To solve this problem, we provide a protocol for researchers to detect and reconstruct circRNAs by employing CIRI2, CIRI-AS, and CIRI-full. This protocol can not only simplify the usage of the above tools but also integrate their results.

  5. Highly Expandable Human iPS Cell-Derived Neural Progenitor Cells (NPC) and Neurons for Central Nervous System Disease Modeling and High-Throughput Screening.

    PubMed

    Cheng, Chialin; Fass, Daniel M; Folz-Donahue, Kat; MacDonald, Marcy E; Haggarty, Stephen J

    2017-01-11

    Reprogramming of human somatic cells into induced pluripotent stem (iPS) cells has greatly expanded the set of research tools available to investigate the molecular and cellular mechanisms underlying central nervous system (CNS) disorders. Realizing the promise of iPS cell technology for the identification of novel therapeutic targets and for high-throughput drug screening requires implementation of methods for the large-scale production of defined CNS cell types. Here we describe a protocol for generating stable, highly expandable, iPS cell-derived CNS neural progenitor cells (NPC) using multi-dimensional fluorescence activated cell sorting (FACS) to purify NPC defined by cell surface markers. In addition, we describe a rapid, efficient, and reproducible method for generating excitatory cortical-like neurons from these NPC through inducible expression of the pro-neural transcription factor Neurogenin 2 (iNgn2-NPC). Finally, we describe methodology for the use of iNgn2-NPC for probing human neuroplasticity and mechanisms underlying CNS disorders using high-content, single-cell-level automated microscopy assays. © 2017 by John Wiley & Sons, Inc. Copyright © 2017 John Wiley & Sons, Inc.

  6. Microbial dynamics in mixed culture biofilms of bacteria surviving sanitation of conveyor belts in salmon-processing plants.

    PubMed

    Langsrud, S; Moen, B; Møretrø, T; Løype, M; Heir, E

    2016-02-01

    The microbiota surviving sanitation of salmon-processing conveyor belts was identified and its growth dynamics further investigated in a model mimicking processing surfaces in such plants. A diverse microbiota dominated by Gram-negative bacteria was isolated after regular sanitation in three salmon processing plants. A cocktail of 14 bacterial isolates representing all genera isolated from conveyor belts (Listeria, Pseudomonas, Stenotrophomonas, Brochothrix, Serratia, Acinetobacter, Rhodococcus and Chryseobacterium) formed stable biofilms of about 10⁹ CFU cm⁻² on steel coupons (12°C, salmon broth) after 2 days. High-throughput sequencing showed that Listeria monocytogenes represented 0.1-0.01% of the biofilm population and that Pseudomonas spp. dominated. Interestingly, both Brochothrix sp. and a Pseudomonas sp. dominated in the surrounding suspension. The microbiota surviving sanitation is dominated by Pseudomonas spp. The background microbiota in biofilms inhibit, but do not eliminate, L. monocytogenes. The results highlight that sanitation procedures have to be improved in the salmon-processing industry, as high numbers of a diverse microbiota survived practical sanitation. High-throughput sequencing enables strain-level studies of population dynamics in biofilms. © 2015 The Society for Applied Microbiology.

  7. DDS as middleware of the Southern African Large Telescope control system

    NASA Astrophysics Data System (ADS)

    Maartens, Deneys S.; Brink, Janus D.

    2016-07-01

    The Southern African Large Telescope (SALT) software control system1 is realised as a distributed control system, implemented predominantly in National Instruments' LabVIEW. The telescope control subsystems communicate using cyclic, state-based messages. Currently, transmitting a message is accomplished by performing an HTTP PUT request to a WebDAV directory on a centralised Apache web server, while receiving is based on polling the web server for new messages. While the method works, it presents a number of drawbacks; a scalable distributed communication solution with minimal overhead is a better fit for control systems. This paper describes our exploration of the Data Distribution Service (DDS). DDS is a formal standard specification, defined by the Object Management Group (OMG), that presents a data-centric publish-subscribe model for distributed application communication and integration. It provides an infrastructure for platform-independent many-to-many communication. A number of vendors provide implementations of the DDS standard; RTI, in particular, provides a DDS toolkit for LabVIEW. This toolkit has been evaluated against the needs of SALT, and a few deficiencies have been identified. We have developed our own implementation that interfaces LabVIEW to DDS in order to address our specific needs. Our LabVIEW DDS interface implementation is built against the RTI DDS Core component, provided by RTI under their Open Community Source licence. Our needs dictate that the interface implementation be platform independent. Since we have access to the RTI DDS Core source code, we are able to build the RTI DDS libraries for any of the platforms on which we require support. The communications functionality is based on UDP multicasting. Multicasting is an efficient communications mechanism with low overhead, which avoids duplicated point-to-point transmission of data on a network where there are multiple recipients of the data. In the paper we present a performance evaluation of DDS against the current HTTP-based implementation as well as the historical DataSocket implementation. We conclude with a summary and describe future work.
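
    The communications layer described above ultimately rests on UDP multicasting, where one datagram sent to a group address reaches every subscriber that has joined the group. A bare-bones sketch of that mechanism using Python's standard socket API (group address, port and payload are arbitrary examples; this is not the RTI/DDS or LabVIEW interface):

```python
import socket
import struct

GROUP, PORT = "239.1.2.3", 5007   # arbitrary example multicast group and port

def send(message: bytes):
    """Publish one datagram to every listener joined to the group."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay on the local network
    s.sendto(message, (GROUP, PORT))
    s.close()

def receive():
    """Join the group and block until one datagram arrives."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, addr = s.recvfrom(1024)
    s.close()
    return data, addr

# e.g. run receive() in one process and send(b"tcs.status=TRACKING") in another
```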

  8. Generation of stable PDX derived cell lines using conditional reprogramming.

    PubMed

    Borodovsky, Alexandra; McQuiston, Travis J; Stetson, Daniel; Ahmed, Ambar; Whitston, David; Zhang, Jingwen; Grondine, Michael; Lawson, Deborah; Challberg, Sharon S; Zinda, Michael; Pollok, Brian A; Dougherty, Brian A; D'Cruz, Celina M

    2017-12-06

    Efforts to develop effective cancer therapeutics have been hindered by a lack of clinically predictive preclinical models which recapitulate this complex disease. Patient derived xenograft (PDX) models have emerged as valuable tools for translational research but have several practical limitations including lack of sustained growth in vitro. In this study, we utilized Conditional Reprogramming (CR) cell technology, a novel cell culture system facilitating the generation of stable cultures from patient biopsies, to establish PDX-derived cell lines which maintain the characteristics of the parental PDX tumor. Human lung and ovarian PDX tumors were successfully propagated using CR technology to create stable explant cell lines (CR-PDX). These CR-PDX cell lines maintained parental driver mutations and allele frequency without clonal drift. Purified CR-PDX cell lines were amenable to high throughput chemosensitivity screening and in vitro genetic knockdown studies. Additionally, re-implanted CR-PDX cells proliferated to form tumors that retained the growth kinetics, histology, and drug responses of the parental PDX tumor. CR technology can be used to generate and expand stable cell lines from PDX tumors without compromising fundamental biological properties of the model. It offers the ability to expand PDX cells in vitro for subsequent 2D screening assays as well as for use in vivo to reduce variability, animal usage and study costs. The methods and data detailed here provide a platform to generate physiologically relevant and predictive preclinical models to enhance drug discovery efforts.

  9. Tunable molecular orientation and elevated thermal stability of vapor-deposited organic semiconductors

    DOE PAGES

    Dalal, Shakeel S.; Walters, Diane M.; Lyubimov, Ivan; ...

    2015-03-23

    Physical vapor deposition is commonly used to prepare organic glasses that serve as the active layers in light-emitting diodes, photovoltaics, and other devices. Recent work has shown that orienting the molecules in such organic semiconductors can significantly enhance device performance. In this paper, we apply a high-throughput characterization scheme to investigate the effect of the substrate temperature (Tsubstrate) on glasses of three organic molecules used as semiconductors. The optical and material properties are evaluated with spectroscopic ellipsometry. We find that molecular orientation in these glasses is continuously tunable and controlled by Tsubstrate/Tg, where Tg is the glass transition temperature. All three molecules can produce highly anisotropic glasses; the dependence of molecular orientation upon substrate temperature is remarkably similar and nearly independent of molecular length. All three compounds form “stable glasses” with high density and thermal stability, and have properties similar to stable glasses prepared from model glass formers. Simulations reproduce the experimental trends and explain molecular orientation in the deposited glasses in terms of the surface properties of the equilibrium liquid. Finally, by showing that organic semiconductors form stable glasses, these results provide an avenue for systematic performance optimization of active layers in organic electronics.

  10. Representation and Integration of Scientific Information

    NASA Technical Reports Server (NTRS)

    1998-01-01

    The objective of this Joint Research Interchange with NASA-Ames was to investigate how the Tsimmis technology could be used to represent and integrate scientific information. The main goal of the Tsimmis project is to allow a decision maker to find information of interest from such sources, fuse it, and process it (e.g., summarize it, visualize it, discover trends). Another important goal is the easy incorporation of new sources, as well as the ability to deal with sources whose structure or services evolve. During the Interchange we had research meetings approximately every month or two. The funds provided by NASA supported work that led to the following two papers: Fusion Queries over Internet Databases; Efficient Query Subscription Processing in a Multicast Environment.

  11. RIACS

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph

    1997-01-01

    Topics considered include: high-performance computing; cognitive and perceptual prostheses (computational aids designed to leverage human abilities); autonomous systems. Also included: development of a 3D unstructured grid code based on a finite volume formulation and applied to the Navier-Stokes equations; Cartesian grid methods for complex geometry; multigrid methods for solving elliptic problems on unstructured grids; algebraic non-overlapping domain decomposition methods for compressible fluid flow problems on unstructured meshes; numerical methods for the compressible Navier-Stokes equations with application to aerodynamic flows; research in aerodynamic shape optimization; S-HARP: a parallel dynamic spectral partitioner; numerical schemes for the Hamilton-Jacobi and level set equations on triangulated domains; application of high-order shock capturing schemes to direct simulation of turbulence; multicast technology; network testbeds; supercomputer consolidation project.

  12. Security Enhancement Using Cache Based Reauthentication in WiMAX Based E-Learning System

    PubMed Central

    Rajagopal, Chithra; Bhuvaneshwaran, Kalaavathi

    2015-01-01

    WiMAX networks are the most suitable for E-Learning through their Broadcast and Multicast Services in rural areas. Authentication of users is carried out by the AAA server in WiMAX. In E-Learning systems the users must be forced to perform reauthentication to overcome the session hijacking problem. The reauthentication of users introduces frequent delays in data access, which is crucial in delay-sensitive applications such as E-Learning. In order to perform fast reauthentication, a caching mechanism known as the Key Caching Based Authentication scheme is introduced in this paper. Even though the cache mechanism requires extra storage to keep the user credentials, this type of mechanism reduces the reauthentication delay by 50%. PMID:26351658

  13. Security Enhancement Using Cache Based Reauthentication in WiMAX Based E-Learning System.

    PubMed

    Rajagopal, Chithra; Bhuvaneshwaran, Kalaavathi

    2015-01-01

    WiMAX networks are the most suitable for E-Learning through their Broadcast and Multicast Services in rural areas. Authentication of users is carried out by the AAA server in WiMAX. In E-Learning systems the users must be forced to perform reauthentication to overcome the session hijacking problem. The reauthentication of users introduces frequent delays in data access, which is crucial in delay-sensitive applications such as E-Learning. In order to perform fast reauthentication, a caching mechanism known as the Key Caching Based Authentication scheme is introduced in this paper. Even though the cache mechanism requires extra storage to keep the user credentials, this type of mechanism reduces the reauthentication delay by 50%.
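
    The key-caching idea in the two records above can be pictured as a small credential store consulted before the AAA round trip. The sketch below is a toy illustration only; the names, TTL and key handling are assumptions and do not reflect the actual WiMAX key hierarchy:

```python
import time

class KeyCache:
    """Toy cache of AAA-issued keys, illustrating cache-based fast reauthentication."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._store = {}                      # user_id -> (key, expiry_time)

    def put(self, user_id, key):
        self._store[user_id] = (key, time.time() + self.ttl)

    def reauthenticate(self, user_id, full_auth_with_aaa):
        """Return a session key, skipping the AAA round trip when the cache is fresh."""
        entry = self._store.get(user_id)
        if entry and entry[1] > time.time():
            return entry[0]                   # fast path: cached credentials
        key = full_auth_with_aaa(user_id)     # slow path: full AAA authentication
        self.put(user_id, key)
        return key

# Usage (hypothetical AAA exchange function supplied by the caller):
# cache = KeyCache()
# session_key = cache.reauthenticate("student42", lambda uid: aaa_server_exchange(uid))
```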

  14. Application of the fractional Fourier transform to the design of LCOS based optical interconnects and fiber switches.

    PubMed

    Robertson, Brian; Zhang, Zichen; Yang, Haining; Redmond, Maura M; Collings, Neil; Liu, Jinsong; Lin, Ruisheng; Jeziorska-Chapman, Anna M; Moore, John R; Crossland, William A; Chu, D P

    2012-04-20

    It is shown that reflective liquid crystal on silicon (LCOS) spatial light modulator (SLM) based interconnects or fiber switches that use defocus to reduce crosstalk can be evaluated and optimized using a fractional Fourier transform if certain optical symmetry conditions are met. Theoretically, the maximum allowable linear hologram phase error is increased by a factor of six compared with a Fourier switch before the -40 dB crosstalk target for telecom applications is exceeded. A Gerchberg-Saxton algorithm incorporating a fractional Fourier transform modified for use with a reflective LCOS SLM is used to optimize multi-casting holograms in a prototype telecom switch. Experiments are in close agreement with predicted performance.
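
    The hologram optimization mentioned above follows the usual Gerchberg-Saxton pattern: propagate, impose the target amplitude, propagate back, impose the source amplitude, and keep the phase. The sketch below uses an ordinary FFT as a stand-in for the fractional Fourier transform of the paper, with a made-up 1-to-2 multicast target:

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iterations=50):
    """Minimal Gerchberg-Saxton loop. The paper propagates with a fractional
    Fourier transform; an ordinary FFT is used here purely for illustration."""
    field = source_amp * np.exp(1j * 2 * np.pi * np.random.rand(*source_amp.shape))
    for _ in range(iterations):
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))      # impose target amplitude
        field = np.fft.ifft2(far)
        field = source_amp * np.exp(1j * np.angle(field))  # impose source amplitude
    return np.angle(field)                                 # hologram phase for the SLM

# Hypothetical 1-to-2 multicast: uniform illumination, two target output spots
n = 128
source = np.ones((n, n))
target = np.zeros((n, n))
target[n // 4, n // 4] = 1.0
target[n // 4, 3 * n // 4] = 1.0
phase = gerchberg_saxton(source, target)
```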

  15. Modelling and temporal performances evaluation of networked control systems using (max, +) algebra

    NASA Astrophysics Data System (ADS)

    Ammour, R.; Amari, S.

    2015-01-01

    In this paper, we address the problem of temporal performance evaluation of producer/consumer networked control systems. The aim is to develop a formal method for evaluating the response time of this type of control system. Our approach consists of modelling, using Petri net classes, the behaviour of the whole architecture, including the switches that support the multicast communications used by this protocol. The (max, +) algebra formalism is then exploited to obtain analytical formulas for the response time and its maximal and minimal bounds. The main novelty is that our approach takes into account all delays experienced at the different stages of networked automation systems. Finally, we show how to apply the obtained results through an example of a networked control system.
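
    In (max, +) algebra, addition is replaced by max and multiplication by ordinary addition, so the dater evolution of a timed event graph can be written x(k) = A ⊗ x(k-1), and response-time bounds follow from the resulting firing dates. A minimal sketch of the ⊗ product and the iteration (the delay matrix is illustrative, not taken from the paper):

```python
import numpy as np

NEG_INF = -np.inf   # the (max,+) "zero" element

def maxplus_matmul(A, B):
    """(max,+) matrix product: (A ⊗ B)[i, j] = max_k (A[i, k] + B[k, j])."""
    n, p = A.shape[0], B.shape[1]
    C = np.full((n, p), NEG_INF)
    for i in range(n):
        for j in range(p):
            C[i, j] = np.max(A[i, :] + B[:, j])
    return C

# Illustrative 3-event timed event graph: A[i, j] = delay from event j to event i
# (values are made up; in the paper they would come from network and switch delays).
A = np.array([[2.0,     NEG_INF, NEG_INF],
              [3.0,     1.0,     NEG_INF],
              [NEG_INF, 4.0,     2.0]])
x = np.array([[0.0], [0.0], [0.0]])     # dates of the first firings

for k in range(1, 6):                   # x(k) = A ⊗ x(k-1)
    x = maxplus_matmul(A, x)
    print(f"cycle {k}: firing dates = {x.ravel()}")
```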

  16. Solar Power Satellite (SPS) fiber optic link assessment

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A feasibility demonstration of a 980 MHz fiber optic link for the Solar Power Satellite (SPS) phase reference distribution system was accomplished. A dual fiber-optic link suitable for a phase distribution frequency of 980 MHz was built and tested. The major link components include single mode injection laser diodes, avalanche photodiodes, and multimode high bandwidth fibers. Signal throughput was demonstrated to be stable and of high quality in all cases. For a typical SPS link length of 200 meters, the transmitted phase at 980 MHz varies approximately 2.5 degrees for every deg C of fiber temperature change. This rate is acceptable because of the link length compensation feature of the phase control design.

  17. Bienzymatic Biosensor for Rapid Detection of Aspartame by Flow Injection Analysis

    PubMed Central

    Radulescu, Maria-Cristina; Bucur, Bogdan; Bucur, Madalina-Petruta; Radu, Gabriel Lucian

    2014-01-01

    A rapid, simple and stable biosensor for aspartame detection was developed. Alcohol oxidase (AOX), carboxyl esterase (CaE) and bovine serum albumin (BSA) were immobilised with glutaraldehyde (GA) onto screen-printed electrodes modified with cobalt-phthalocyanine (CoPC). The biosensor response was fast. The sample throughput using a flow injection analysis (FIA) system was 40 h⁻¹ with an RSD of 2.7%. The detection limits for both batch and FIA measurements were 0.1 μM for methanol and 0.2 μM for aspartame, respectively. The enzymatic biosensor was successfully applied for aspartame determination in different sample matrices/commercial products (liquid and solid samples) without any pre-treatment step prior to measurement. PMID:24412899

  18. Bienzymatic biosensor for rapid detection of aspartame by flow injection analysis.

    PubMed

    Radulescu, Maria-Cristina; Bucur, Bogdan; Bucur, Madalina-Petruta; Radu, Gabriel Lucian

    2014-01-09

    A rapid, simple and stable biosensor for aspartame detection was developed. Alcohol oxidase (AOX), carboxyl esterase (CaE) and bovine serum albumin (BSA) were immobilised with glutaraldehyde (GA) onto screen-printed electrodes modified with cobalt-phthalocyanine (CoPC). The biosensor response was fast. The sample throughput using a flow injection analysis (FIA) system was 40 h⁻¹ with an RSD of 2.7%. The detection limits for both batch and FIA measurements were 0.1 µM for methanol and 0.2 µM for aspartame, respectively. The enzymatic biosensor was successfully applied for aspartame determination in different sample matrices/commercial products (liquid and solid samples) without any pre-treatment step prior to measurement.

  19. External optical imaging of freely moving mice with green fluorescent protein-expressing metastatic tumors

    NASA Astrophysics Data System (ADS)

    Yang, Meng; Baranov, Eugene; Shimada, Hiroshi; Moossa, A. R.; Hoffman, Robert M.

    2000-04-01

    We report here a new approach to genetically engineering tumors to become fluorescent such that they can be imaged externally in freely moving animals. We describe external high-resolution real-time fluorescent optical imaging of metastatic tumors in live mice. Stable high-level green fluorescent protein (GFP)-expressing human and rodent cell lines enable tumors and the metastases formed from them to be imaged externally in freely moving mice. Real-time tumor and metastatic growth were quantitated from whole-body real-time imaging in GFP-expressing melanoma and colon carcinoma models. This GFP optical imaging system is highly appropriate for high-throughput in vivo drug screening.

  20. Laser Hot Wire Process: A Novel Process for Near-Net Shape Fabrication for High-Throughput Applications

    NASA Astrophysics Data System (ADS)

    Kottman, Michael; Zhang, Shenjia; McGuffin-Cawley, James; Denney, Paul; Narayanan, Badri K.

    2015-03-01

    The laser hot wire process has gained considerable interest for additive manufacturing applications, leveraging its high deposition rate, low dilution, thermal stability, and general metallurgical control including the ability to introduce and preserve desired meta-stable phases. Recent advancements in closed-loop process control and laser technology have increased productivity, process stability, and control of deposit metallurgy. The laser hot wire process has shown success in several applications: repairing and rejuvenating casting dies, depositing a variety of alloys including abrasion wear-resistant overlays with solid and tubular wires, and producing low-dilution (<5%) nickel alloy overlays for corrosion applications. The feasibility of fabricating titanium buildups is being assessed for aerospace applications.

  1. High-throughput kinase assays with protein substrates using fluorescent polymer superquenching.

    PubMed

    Rininsland, Frauke; Stankewicz, Casey; Weatherford, Wendy; McBranch, Duncan

    2005-05-31

    High-throughput screening is used by the pharmaceutical industry for identifying lead compounds that interact with targets of pharmacological interest. Because of the key role that aberrant regulation of protein phosphorylation plays in diseases such as cancer, diabetes and hypertension, kinases have become one of the main drug targets. With the exception of antibody-based assays, methods to screen for specific kinase activity are generally restricted to the use of small synthetic peptides as substrates. However, the use of natural protein substrates has the advantage that potential inhibitors can be detected that affect enzyme activity by binding to a site other than the catalytic site. We have previously reported a non-radioactive and non-antibody-based fluorescence quench assay for detection of phosphorylation or dephosphorylation using synthetic peptide substrates. The aim of this work is to develop an assay for detection of phosphorylation of chemically unmodified proteins based on this polymer superquenching platform. Using a modified QTL Lightspeed assay, phosphorylation of native protein was quantified by the interaction of the phosphorylated proteins with metal-ion coordinating groups co-located with fluorescent polymer deposited onto microspheres. The binding of phospho-protein inhibits a dye-labeled "tracer" peptide from associating with the phosphate-binding sites present on the fluorescent microspheres. The resulting inhibition of quench generates a "turn on" assay, in which the signal correlates with the phosphorylation of the substrate. The assay was tested on three different proteins: Myelin Basic Protein (MBP), Histone H1 and Phosphorylated heat- and acid-stable protein (PHAS-1). Phosphorylation of the proteins was detected by Protein Kinase Cα (PKCα) and by the Interleukin-1 Receptor-associated Kinase 4 (IRAK4). Enzyme inhibition yielded IC50 values that were comparable to those obtained using peptide substrates. Statistical parameters that are used in the high-throughput community to determine assay robustness (Z'-value) demonstrate the suitability of this format for high-throughput screening applications for detection of inhibitors of enzyme activity. The QTL Lightspeed protein detection system provides a simple mix-and-measure "turn on" assay for the detection of kinase activity using natural protein substrates. The platform is robust and allows for identification of inhibitors of kinase activity.
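
    The Z'-value mentioned is the standard high-throughput screening robustness statistic; a minimal sketch of its usual computation (the control readings are hypothetical, and this is not the QTL Lightspeed software) is:

```python
import numpy as np

def z_prime(positive_controls, negative_controls):
    """Standard Z'-factor: 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.

    Values above ~0.5 are conventionally taken to indicate an assay
    suitable for high-throughput screening.
    """
    pos = np.asarray(positive_controls, dtype=float)
    neg = np.asarray(negative_controls, dtype=float)
    return 1.0 - 3.0 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

# Hypothetical fluorescence readings from positive and negative control wells
print(z_prime([980, 1010, 995, 1005], [110, 120, 105, 118]))
```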

  2. Lab-on-a-chip platform for high throughput drug discovery with DNA-encoded chemical libraries

    NASA Astrophysics Data System (ADS)

    Grünzner, S.; Reddavide, F. V.; Steinfelder, C.; Cui, M.; Busek, M.; Klotzbach, U.; Zhang, Y.; Sonntag, F.

    2017-02-01

    The fast development of DNA-encoded chemical libraries (DECL) in the past 10 years has received great attention from the pharmaceutical industry. It applies a selection approach to small-molecule drug discovery. Because of the limited choices of DNA-compatible chemical reactions, most DNA-encoded chemical libraries have a narrow structural diversity and low synthetic yield. There is also a poor correlation between the ranking of compounds resulting from analysis of the sequencing data and the affinity measured through biochemical assays. By combining DECL with a dynamic chemical library, the resulting DNA-encoded dynamic library (EDCCL) exploits the thermodynamic equilibrium of reversible reactions as well as the advantages of DNA-encoded compounds for manipulation/detection, thus leading to an enhanced signal-to-noise ratio of the selection process and higher library quality. However, the library dynamics are caused by the weak interactions between the DNA strands, which also result in relatively low affinity of the bidentate interaction, as compared to a stable DNA duplex. To take advantage of both stably assembled dual-pharmacophore libraries and EDCCLs, we extended the concept of EDCCLs to heat-induced EDCCLs (hi-EDCCLs), in which the heat-induced recombination process of stable DNA duplexes and affinity capture are carried out separately. To replace the extremely laborious and repetitive manual process, a fully automated device will facilitate the use of DECL in drug discovery. Herein we describe a novel lab-on-a-chip platform for high throughput drug discovery with hi-EDCCL. A microfluidic system with integrated actuation was designed which is able to provide continuous sample circulation while reducing the volume to a minimum. It consists of a cooled and a heated chamber for constant circulation. The system is capable of generating stable temperatures above 75 °C in the heated chamber, to melt the double strands of the DNA, and below 15 °C in the cooled chamber, to reanneal the reshuffled library. In the binding chamber (the cooled chamber), specific retaining structures are integrated. These hold back beads functionalized with the target protein, while the chamber is continuously flushed with library molecules. Afterwards, the whole system can be flushed with buffer to wash out non-specifically bound molecules. Finally, the protein-loaded beads with attached molecules can be eluted for further investigation.

  3. Stable isotope dilution assay (SIDA) and HS-SPME-GCMS quantification of key aroma volatiles for fruit and sap of Australian mango cultivars.

    PubMed

    San, Anh T; Joyce, Daryl C; Hofman, Peter J; Macnish, Andrew J; Webb, Richard I; Matovic, Nicolas J; Williams, Craig M; De Voss, James J; Wong, Siew H; Smyth, Heather E

    2017-04-15

    Reported herein is a high throughput method to quantify in a single analysis the key volatiles that contribute to the aroma of commercially significant mango cultivars grown in Australia. The method constitutes stable isotope dilution analysis (SIDA) in conjunction with headspace (HS) solid-phase microextraction (SPME) coupled with gas-chromatography mass spectrometry (GCMS). Deuterium labelled analogues of the target analytes were either purchased commercially or synthesised for use as internal standards. Seven volatiles, hexanal, 3-carene, α-terpinene, p-cymene, limonene, α-terpinolene and ethyl octanoate, were targeted. The resulting calibration functions had determination coefficients (R²) ranging from 0.93775 to 0.99741. High recovery efficiencies for spiked mango samples were also achieved. The method was applied to identify the key aroma volatile compounds produced by 'Kensington Pride' and 'B74' mango fruit and by 'Honey Gold' mango sap. This method represents a marked improvement over current methods for detecting and measuring concentrations of mango fruit and sap volatiles. Copyright © 2016 Elsevier Ltd. All rights reserved.
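
    A minimal sketch of the quantification step common to stable isotope dilution assays, back-calculating analyte concentration from the analyte/labelled-internal-standard peak-area ratio via a linear calibration, is shown below; all peak areas and concentrations are hypothetical, not values from the study:

```python
import numpy as np

def fit_calibration(area_ratios, known_concs):
    """Fit a linear calibration: area ratio = slope * concentration + intercept."""
    slope, intercept = np.polyfit(known_concs, area_ratios, 1)
    return slope, intercept

def quantify(sample_area, istd_area, slope, intercept):
    """Back-calculate analyte concentration from the analyte/IS peak-area ratio."""
    ratio = sample_area / istd_area
    return (ratio - intercept) / slope

# Hypothetical calibration standards for one volatile (e.g. hexanal)
slope, intercept = fit_calibration(area_ratios=[0.11, 0.52, 1.05, 2.01],
                                   known_concs=[0.1, 0.5, 1.0, 2.0])
print(quantify(sample_area=8.4e5, istd_area=1.1e6, slope=slope, intercept=intercept))
```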

  4. A Computational Framework for High-Throughput Isotopic Natural Abundance Correction of Omics-Level Ultra-High Resolution FT-MS Datasets

    PubMed Central

    Carreer, William J.; Flight, Robert M.; Moseley, Hunter N. B.

    2013-01-01

    New metabolomics applications of ultra-high resolution and accuracy mass spectrometry can provide thousands of detectable isotopologues, with the number of potentially detectable isotopologues increasing exponentially with the number of stable isotopes used in newer isotope tracing methods like stable isotope-resolved metabolomics (SIRM) experiments. This huge increase in usable data requires software capable of correcting the large number of isotopologue peaks resulting from SIRM experiments in a timely manner. We describe the design of a new algorithm and software system capable of handling these high volumes of data, while including quality control methods for maintaining data quality. We validate this new algorithm against a previous single isotope correction algorithm in a two-step cross-validation. Next, we demonstrate the algorithm and correct for the effects of natural abundance for both 13C and 15N isotopes on a set of raw isotopologue intensities of UDP-N-acetyl-D-glucosamine derived from a 13C/15N-tracing experiment. Finally, we demonstrate the algorithm on a full omics-level dataset. PMID:24404440
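
    A simplified sketch of the underlying idea, single-isotope (13C) natural abundance correction under a binomial model, is given below; it ignores the multi-isotope and resolution effects the published algorithm handles, and all intensities are hypothetical:

```python
import numpy as np
from math import comb

def correction_matrix(n_carbons, p13c=0.0107):
    """C[i, j] = probability that a species with j labeled carbons is observed
    at isotopologue index i, due to natural 13C in the n - j unlabeled positions."""
    n = n_carbons
    C = np.zeros((n + 1, n + 1))
    for j in range(n + 1):
        for k in range(n - j + 1):
            C[j + k, j] = comb(n - j, k) * p13c**k * (1 - p13c)**(n - j - k)
    return C

def correct_natural_abundance(measured, n_carbons):
    """Solve measured = C @ true for the labeling-only isotopologue intensities."""
    C = correction_matrix(n_carbons)
    corrected, *_ = np.linalg.lstsq(C, np.asarray(measured, float), rcond=None)
    return np.clip(corrected, 0, None)

# Hypothetical raw isotopologue intensities (M+0 ... M+3) of a 3-carbon metabolite
print(correct_natural_abundance([1000.0, 150.0, 80.0, 20.0], n_carbons=3))
```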

  5. A Summary of Actinide Enrichment Technologies and Capability Gaps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, Bradley D.; Robinson, Sharon M.

    2017-01-01

    The evaluation performed in this study indicates that a new program is needed to efficiently provide a national actinide radioisotope enrichment capability to produce milligram-to-gram quantities of unique materials for user communities. This program should leverage past actinide enrichment, the recent advances in stable isotope enrichment, and assessments of the future requirements to cost effectively develop this capability while establishing an experience base for a new generation of researchers in this vital area. Preliminary evaluations indicate that an electromagnetic isotope separation (EMIS) device would have the capability to meet the future needs of the user community for enriched actinides. The EMIS technology could be potentially coupled with other enrichment technologies, such as irradiation, as pre-enrichment and/or post-enrichment systems to increase the throughput, reduce losses of material, and/or reduce operational costs of the base EMIS system. Past actinide enrichment experience and advances in the EMIS technology applied in stable isotope separations should be leveraged with this new evaluation information to assist in the establishment of a domestic actinide radioisotope enrichment capability.

  6. High Throughput UPLC®-MSMS Method for the Analysis of Phosphatidylethanol (PEth) 16:0/18:1, a Specific Biomarker for Alcohol Consumption, in Whole Blood.

    PubMed

    Andreassen, Trine Naalsund; Havnen, Hilde; Spigset, Olav; Falch, Berit Margrethe Hasle; Skråstad, Ragnhild Bergene

    2018-01-01

    Phosphatidylethanol (PEth) is an alcohol biomarker formed in the presence of ethanol in the body. Both due to its specificity and because it has a detection window of up to several weeks after alcohol intake, its application potential is broader than for other ethanol biomarkers. The aim of this study was to develop and validate a robust method for PEth in whole blood with fast and efficient sample extraction and a short analytical runtime, suitable for high throughput routine purposes. A validated ultra-performance liquid chromatography tandem mass spectrometry (UPLC®-MSMS) method for quantification of PEth 16:0/18:1 in the range 0.05-4.00 μM (R² ≥ 0.999) is presented. PEth 16:0/18:1 and the internal standard (IS) PEth-d5 (0.55 μM) were extracted from whole blood (150 μL) by simple protein precipitation with 2-propanol (450 μL). Chromatography was achieved using a BEH-phenyl (2.1 × 30 mm, 1.7 μm) column and a gradient elution combining ammonium formate (5 mM, pH 10.1) and acetonitrile at a flow rate of 0.5 mL/min. Runtime was 2.3 min. The mass spectrometer was operated in negative mode with multiple reaction monitoring (MRM). The m/z 701.7 > 255.2 and 701.7 > 281.3 transitions were monitored for PEth 16:0/18:1 and the m/z 706.7 > 255.3 for PEth-d5. Limit of quantification was 0.03 μM (coefficient of variation, CV = 6.7%, accuracy = 99.3%). Within-assay and between-assay imprecision were 0.4-3.3% (CV ≤ 7.1%). Recoveries were 95-102% (CV ≤ 4.9%). Matrix effects after IS correction ranged from 107% to 112%. PEth 16:0/18:1 in patient samples was stable for several days at 30°C. Repeated freezing (-80°C) and thawing did not affect the concentration. After thawing and analysis, patient samples were stable at 4-8°C for at least 4 weeks. Results from a proficiency test program, showing |Z| values ≤1.2, confirm the validity of the method. Analysis of the first 3,169 samples sent to our laboratory for routine use has demonstrated its properties as a robust method suitable for high throughput purposes.

  7. Routing optimization in networks based on traffic gravitational field model

    NASA Astrophysics Data System (ADS)

    Liu, Longgeng; Luo, Guangchun

    2017-04-01

    For research on the gravitational field routing mechanism on complex networks, we further analyze the gravitational effect of paths. In this study, we introduce the concept of path confidence degree to evaluate the unblocked reliability of paths; it takes the traffic state of all nodes on a path into account as a whole. On this basis, we propose an improved gravitational field routing protocol that considers all the nodes' gravities on the path together with the path confidence degree. To evaluate the transmission performance of the routing strategy, an order parameter is introduced to measure the network throughput by the critical value of the phase transition from a free-flow phase to a jammed phase, and the betweenness centrality is used to evaluate the transmission performance and traffic congestion of the network. Simulation results show that, compared with the shortest-path routing strategy and the previous gravitational field routing strategy, the proposed algorithm improves the network throughput considerably and effectively balances the traffic load within the network, and all nodes in the network are utilized with high efficiency. As long as γ ≥ α, the transmission performance reaches its maximum and remains unchanged for different α and γ, which ensures that the proposed routing protocol is highly efficient and stable.
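
    The abstract does not give closed-form expressions, so the sketch below is only a hypothetical illustration of gravity-style path scoring: node gravity is assumed to scale with free buffer capacity over squared hop distance, and the path confidence degree is assumed to be the product of per-node free-capacity fractions; neither form is taken from the paper:

```python
def node_gravity(free_capacity, hops_to_node):
    # Assumed form: attraction grows with free buffer space, decays with distance.
    return free_capacity / (hops_to_node ** 2)

def path_confidence(queues, capacities):
    # Assumed form: product of each node's free-capacity fraction along the path.
    conf = 1.0
    for q, c in zip(queues, capacities):
        conf *= max(0.0, 1.0 - q / c)
    return conf

def path_score(queues, capacities):
    """Score a candidate path by summed node gravities weighted by confidence."""
    gravity = sum(node_gravity(c - q, d + 1)
                  for d, (q, c) in enumerate(zip(queues, capacities)))
    return gravity * path_confidence(queues, capacities)

# Two candidate paths with per-node (queue length, capacity); higher score wins.
print(path_score(queues=[2, 5, 1], capacities=[10, 10, 10]))
print(path_score(queues=[8, 9, 7], capacities=[10, 10, 10]))
```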

  8. Delivery of Formulated Industrial Enzymes with Acoustic Technology.

    PubMed

    Hwang, Jennifer Dorcas; Ortiz-Maldonado, Mariliz; Paramonov, Sergey

    2016-02-01

    Industrial enzymes are instrumental in many applications, including carbohydrate processing, fabric and household care, biofuels, food, and animal nutrition, among others. Enzymes have to be active and stable not only in harsh application conditions, but also during shipment and storage. In protein stability studies, formulated concentrated enzyme solutions are frequently diluted gravimetrically prior to enzyme activity measurements, making it challenging to move toward more high-throughput techniques using conventional robotic equipment. Current assay methods pose difficulties when measuring highly concentrated proteins. For example, plastic pipette tips can introduce error because proteins adsorb to the tip surface, despite the presence of detergents, decreasing precision and overall efficiency of protein activity assays. Acoustic liquid handling technology, frequently used for various dilute small-molecule assays, may overcome such problems. Originally shown to effectively deliver dilute solutions of small molecules, this technology is used here as an effective alternative to the aforementioned challenge with viscous concentrated protein solutions. Because the acoustic liquid handler transfers nanoliter quantities of liquids without using pipette tips and without sample loss, it rapidly and uniformly prepares assay plates for enzyme activity measurements within minutes. This increased efficiency transforms the nature of enzyme stability studies toward high precision and throughput. © 2015 Society for Laboratory Automation and Screening.

  9. Optimization and quality control of genome-wide Hi-C library preparation.

    PubMed

    Zhang, Xiang-Yuan; He, Chao; Ye, Bing-Yu; Xie, De-Jian; Shi, Ming-Lei; Zhang, Yan; Shen, Wen-Long; Li, Ping; Zhao, Zhi-Hu

    2017-09-20

    High-throughput chromosome conformation capture (Hi-C) is one of the key assays for genome-wide chromatin interaction studies. It is a time-consuming process that involves many steps and many different kinds of reagents, consumables, and equipment. At present, the reproducibility is unsatisfactory. By optimizing the key steps of the Hi-C experiment, such as crosslinking, pretreatment of digestion, inactivation of the restriction enzyme, and in situ ligation, we established a robust Hi-C procedure and prepared two biological replicates of Hi-C libraries from GM12878 cells. After preliminary quality control by Sanger sequencing, the two replicates were high-throughput sequenced. Bioinformatics analysis of the raw sequencing data revealed that the mappability and pair-mate rate of the raw data were around 90% and 72%, respectively. Additionally, after removal of self-circular ligations and dangling-end products, a valid-pair rate of more than 96% was reached. Genome-wide interactome profiling shows clear topologically associated domains (TADs), which is consistent with previous reports. Further correlation analysis showed that the two biological replicates strongly correlate with each other in terms of both bin coverage and all bin pairs. All these results indicate that the optimized Hi-C procedure is robust and stable, which will be very helpful for wide application of the Hi-C assay.

  10. Monolithic Hydrogen Peroxide Catalyst Bed Development

    NASA Technical Reports Server (NTRS)

    Ponzo, J. B.

    2003-01-01

    With recent increased industry and government interest in rocket-grade hydrogen peroxide as a viable propellant, significant effort has been expended to improve on earlier developments. This effort has been predominantly centered on improving heterogeneous catalysts, typically catalyst beds, and homogeneous catalysts, which are typically solutions of catalytic substances. Heterogeneous catalyst beds have traditionally consisted of compressed wire screens plated with a catalytic substance, usually silver, and were used in many RCS applications (X-1, Mercury, and Centaur, for example). Aerojet has devised a heterogeneous catalyst design that is monolithic (single piece), extremely compact, and has pressure drops equal to or less than traditional screen beds. The design consists of a bonded stack of very thin, photoetched, silver-coated metal plates. This design leads to a high surface area per unit volume and precise flow area, resulting in high, stable, and repeatable performance. Very high throughputs have been demonstrated with 90% hydrogen peroxide (0.60 lbm/s/sq in at 1775-175 psia) with no flooding of the catalyst bed. Bed life of over 900 seconds has also been demonstrated at throughputs of 0.60 lbm/s/sq in across varying chamber pressures. The monolithic design also exhibits good starting performance, short break-in periods, and will easily scale to various sizes.

  11. Shaped Apertures in Photoresist Films Enhance the Lifetime and Mechanical Stability of Suspended Lipid Bilayers

    PubMed Central

    Kalsi, Sumit; Powl, Andrew M.; Wallace, B.A.; Morgan, Hywel; de Planque, Maurits R.R.

    2014-01-01

    Planar lipid bilayers suspended in apertures provide a controlled environment for ion channel studies. However, short lifetimes and poor mechanical stability of suspended bilayers limit the experimental throughput of bilayer electrophysiology experiments. Although bilayers are more stable in smaller apertures, ion channel incorporation through vesicle fusion with the suspended bilayer becomes increasingly difficult. In an alternative bilayer stabilization approach, we have developed shaped apertures in SU8 photoresist that have tapered sidewalls and a minimum diameter between 60 and 100 μm. Bilayers formed at the thin tip of these shaped apertures, either with the painting or the folding method, display drastically increased lifetimes, typically >20 h, and mechanical stability, being able to withstand extensive perturbation of the buffer solution. Single-channel electrical recordings of the peptide alamethicin and of the proteoliposome-delivered potassium channel KcsA demonstrate channel conductance with low noise, made possible by the small capacitance of the 50 μm thick SU8 septum, which is only thinned around the aperture, and unimpeded proteoliposome fusion, enabled by the large aperture diameter. We anticipate that these shaped apertures with micrometer edge thickness can substantially enhance the throughput of channel characterization by bilayer lipid membrane electrophysiology, especially in combination with automated parallel bilayer platforms. PMID:24739164

  12. Quantitative High-Throughput Identification of Drugs as Modulators of Human Constitutive Androstane Receptor

    PubMed Central

    Lynch, Caitlin; Zhao, Jinghua; Huang, Ruili; Xiao, Jingwei; Li, Linhao; Heyward, Scott; Xia, Menghang; Wang, Hongbing

    2015-01-01

    The constitutive androstane receptor (CAR, NR1I3) plays a key role in governing the transcription of numerous hepatic genes that involve xenobiotic metabolism/clearance, energy homeostasis, and cell proliferation. Thus, identification of novel human CAR (hCAR) modulators may not only enhance early prediction of drug-drug interactions but also offer potentially novel therapeutics for diseases such as metabolic disorders and cancer. In this study, we have generated a double stable cell line expressing both hCAR and a CYP2B6-driven luciferase reporter for quantitative high-throughput screening (qHTS) of hCAR modulators. Approximately 2800 compounds from the NIH Chemical Genomics Center Pharmaceutical Collection were screened employing both the activation and deactivation modes of the qHTS. Activators (115) and deactivators (152) of hCAR were identified from the primary qHTS, among which 10 agonists and 10 antagonists were further validated in the physiologically relevant human primary hepatocytes for compound-mediated hCAR nuclear translocation and target gene expression. Collectively, our results reveal that hCAR modulators can be efficiently identified through this newly established qHTS assay. Profiling drug collections for hCAR activity would facilitate the prediction of metabolism-based drug-drug interactions, and may lead to the identification of potential novel therapeutics. PMID:25993555

  13. SIPSim: A Modeling Toolkit to Predict Accuracy and Aid Design of DNA-SIP Experiments.

    PubMed

    Youngblut, Nicholas D; Barnett, Samuel E; Buckley, Daniel H

    2018-01-01

    DNA Stable isotope probing (DNA-SIP) is a powerful method that links identity to function within microbial communities. The combination of DNA-SIP with multiplexed high throughput DNA sequencing enables simultaneous mapping of in situ assimilation dynamics for thousands of microbial taxonomic units. Hence, high throughput sequencing enabled SIP has enormous potential to reveal patterns of carbon and nitrogen exchange within microbial food webs. There are several different methods for analyzing DNA-SIP data and despite the power of SIP experiments, it remains difficult to comprehensively evaluate method accuracy across a wide range of experimental parameters. We have developed a toolset (SIPSim) that simulates DNA-SIP data, and we use this toolset to systematically evaluate different methods for analyzing DNA-SIP data. Specifically, we employ SIPSim to evaluate the effects that key experimental parameters (e.g., level of isotopic enrichment, number of labeled taxa, relative abundance of labeled taxa, community richness, community evenness, and beta-diversity) have on the specificity, sensitivity, and balanced accuracy (defined as the product of specificity and sensitivity) of DNA-SIP analyses. Furthermore, SIPSim can predict analytical accuracy and power as a function of experimental design and community characteristics, and thus should be of great use in the design and interpretation of DNA-SIP experiments.
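
    Using the abstract's definition of balanced accuracy as the product of sensitivity and specificity, the evaluation metric can be sketched for per-taxon incorporator calls; the calls below are hypothetical, and this is not SIPSim code:

```python
def evaluate_calls(true_labeled, predicted_labeled):
    """Sensitivity, specificity and balanced accuracy (their product, following
    the definition given in the abstract) for boolean per-taxon incorporator calls."""
    tp = sum(t and p for t, p in zip(true_labeled, predicted_labeled))
    fn = sum(t and not p for t, p in zip(true_labeled, predicted_labeled))
    tn = sum(not t and not p for t, p in zip(true_labeled, predicted_labeled))
    fp = sum(not t and p for t, p in zip(true_labeled, predicted_labeled))
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity, sensitivity * specificity

# Hypothetical calls for six taxa (True = isotope incorporator)
print(evaluate_calls([True, True, False, False, True, False],
                     [True, False, False, False, True, True]))
```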

  14. An improved stable isotope N-terminal labeling approach with light/heavy TMPP to automate proteogenomics data validation: dN-TOP.

    PubMed

    Bertaccini, Diego; Vaca, Sebastian; Carapito, Christine; Arsène-Ploetze, Florence; Van Dorsselaer, Alain; Schaeffer-Reiss, Christine

    2013-06-07

    In silico gene prediction has proven to be prone to errors, especially regarding the precise localization of start codons, and these errors propagate into subsequent biological studies. Therefore, high-throughput characterization of protein N-termini is becoming an emerging challenge in the proteomics and especially the proteogenomics fields. The trimethoxyphenyl phosphonium (TMPP) labeling approach (N-TOP) is an efficient N-terminomic approach that allows the characterization of both N-terminal and internal peptides in a single experiment. Due to its permanent positive charge, TMPP labeling strongly affects MS/MS fragmentation, resulting in unadapted scoring of TMPP-derivatized peptide spectra by classical search engines. This behavior has led to difficulties in validating TMPP-derivatized peptide identifications with usual score filtering and thus to low/underestimated numbers of identified N-termini. We present herein a new strategy (dN-TOP) that overcomes this limitation, allowing confident and automated N-terminal peptide validation thanks to combined labeling with light and heavy TMPP reagents. We show how this double labeling increases the number of validated N-terminal peptides. This strategy represents a considerable improvement to the well-established N-TOP method, with enhanced and accelerated data processing making it now fully compatible with high-throughput proteogenomics studies.

  15. SIPSim: A Modeling Toolkit to Predict Accuracy and Aid Design of DNA-SIP Experiments

    PubMed Central

    Youngblut, Nicholas D.; Barnett, Samuel E.; Buckley, Daniel H.

    2018-01-01

    DNA Stable isotope probing (DNA-SIP) is a powerful method that links identity to function within microbial communities. The combination of DNA-SIP with multiplexed high throughput DNA sequencing enables simultaneous mapping of in situ assimilation dynamics for thousands of microbial taxonomic units. Hence, high throughput sequencing enabled SIP has enormous potential to reveal patterns of carbon and nitrogen exchange within microbial food webs. There are several different methods for analyzing DNA-SIP data and despite the power of SIP experiments, it remains difficult to comprehensively evaluate method accuracy across a wide range of experimental parameters. We have developed a toolset (SIPSim) that simulates DNA-SIP data, and we use this toolset to systematically evaluate different methods for analyzing DNA-SIP data. Specifically, we employ SIPSim to evaluate the effects that key experimental parameters (e.g., level of isotopic enrichment, number of labeled taxa, relative abundance of labeled taxa, community richness, community evenness, and beta-diversity) have on the specificity, sensitivity, and balanced accuracy (defined as the product of specificity and sensitivity) of DNA-SIP analyses. Furthermore, SIPSim can predict analytical accuracy and power as a function of experimental design and community characteristics, and thus should be of great use in the design and interpretation of DNA-SIP experiments. PMID:29643843

  16. High-throughput 96-well solvent mediated sonic blending synthesis and on-plate solid/solution stability characterization of pharmaceutical cocrystals.

    PubMed

    Luu, Van; Jona, Janan; Stanton, Mary K; Peterson, Matthew L; Morrison, Henry G; Nagapudi, Karthik; Tan, Helming

    2013-01-30

    A 96-well high-throughput cocrystal screening workflow has been developed consisting of solvent-mediated sonic blending synthesis and on-plate solid/solution stability characterization by XRPD. A strategy of cocrystallization screening in selected blend solvents including water mixtures is proposed to not only manipulate solubility of the cocrystal components but also differentiate physical stability of the cocrystal products. Caffeine-oxalic acid and theophylline-oxalic acid cocrystals were prepared and evaluated in relation to saturation levels of the cocrystal components and stability of the cocrystal products in anhydrous and hydrous solvents. AMG 517 was screened with a number of coformers, and solid/solution stability of the resulting cocrystals on the 96-well plate was investigated. A stability trend was observed and confirmed that cocrystals comprised of lower aqueous solubility coformers tended to be more stable in water. Furthermore, cocrystals which could be isolated under hydrous solvent blending condition exhibited superior physical stability to those which could only be obtained under anhydrous condition. This integrated HTS workflow provides an efficient route in an API-sparing approach to screen and identify cocrystal candidates with proper solubility and solid/solution stability properties. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Microbial biogeography of a university campus.

    PubMed

    Ross, Ashley A; Neufeld, Josh D

    2015-12-01

    Microorganisms are distributed on surfaces within homes, workplaces, and schools, with the potential to impact human health and disease. University campuses represent a unique opportunity to explore the distribution of microorganisms within built environments because of high human population densities, throughput, and variable building usage. For example, the main campus of the University of Waterloo spans four square kilometres, hosts over 40,000 individuals daily, and is comprised of a variety of buildings, including lecture halls, gyms, restaurants, residences, and a daycare. Representative left and right entrance door handles from each of the 65 buildings at the University of Waterloo were swabbed at three time points during an academic term in order to determine if microbial community assemblages coincided with building usage and whether these communities are stable temporally. Across all door handles, the dominant phyla were Proteobacteria, Firmicutes, Actinobacteria, and Bacteroidetes, which comprised 89.0 % of all reads. A total of 713 genera were observed, 16 of which constituted a minimum of 1 % of the 2,458,094 classified and rarefied reads. Archaea were found in low abundance (~0.03 %) but were present on 42.8 % of the door handles on 96 % of buildings across all time points, indicating that they are ubiquitous at very low levels on door handle surfaces. Although inter-handle variability was high, several individual building entrances harbored distinct microbial communities that were consistent over time. The presence of visible environmental debris on a subset of handles was associated with distinct microbial communities (beta diversity), increased richness (alpha diversity), and higher biomass (adenosine 5'-triphosphate; ATP). This study demonstrates highly variable microbial communities associated with frequently contacted door handles on a university campus. Nonetheless, the data also revealed several building-specific and temporally stable bacterial and archaeal community patterns, with a potential impact of accumulated debris, a possible result of low human throughput, on detected microbial communities.

  18. An in-advance stable isotope labeling strategy for relative analysis of multiple acidic plant hormones in sub-milligram Arabidopsis thaliana seedling and a single seed.

    PubMed

    Sun, Xiaohong; Ouyang, Yue; Chu, Jinfang; Yan, Jing; Yu, Yan; Li, Xiaoqiang; Yang, Jun; Yan, Cunyu

    2014-04-18

    A sensitive and reliable in-advance stable isotope labeling strategy was developed for the simultaneous relative quantification of 8 acidic plant hormones in sub-milligram amounts of plant material. Bromocholine bromide (BETA) and its deuterated counterpart D9-BETA were used to derivatize control and sample extracts individually in advance; the extracts were then combined and subjected to solid-phase extraction (SPE) purification followed by UPLC-MS/MS analysis. Relative quantification of the target compounds was obtained by calculating the peak area ratios of the BETA/D9-BETA labeled plant hormones. The in-advance stable isotope labeling strategy realized internal standard-based relative quantification of multiple kinds of plant hormones independent of the availability of an internal standard for every analyte, with sensitivity enhanced by 1-3 orders of magnitude. Meanwhile, the in-advance labeling contributes to higher sample throughput and greater reliability. The method was successfully applied to determine 8 plant hormones in 0.8 mg DW (dry weight) of seedlings and 4 plant hormones from a single seed of Arabidopsis thaliana. The results show the potential of the method for relative quantification of multiple plant hormones in tiny plant tissues or organs, which will advance knowledge of the crosstalk mechanisms of plant hormones. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. A Protocol for Scalable Loop-Free Multicast Routing

    DTIC Science & Technology

    1997-01-01


  20. Optical signal processing techniques and applications of optical phase modulation in high-speed communication systems

    NASA Astrophysics Data System (ADS)

    Deng, Ning

    In recent years, optical phase modulation has attracted much research attention in the field of fiber optic communications. Compared with the traditional optical intensity-modulated signal, one of the main merits of the optical phase-modulated signal is its better transmission performance. For optical phase modulation, despite the comprehensive study of its transmission performance, only a little research has been carried out on its functions, applications and signal processing for future optical networks. These issues are systematically investigated in this thesis. The research findings suggest that optical phase modulation and its signal processing can greatly facilitate flexible network functions and the high bandwidth that can be enjoyed by end users. In the thesis, the most important physical-layer technologies, signal processing and multiplexing, are investigated with optical phase-modulated signals. Novel and advantageous signal processing and multiplexing approaches are proposed and studied. Experimental investigations are also reported and discussed in the thesis. Optical time-division multiplexing and demultiplexing. With the ever-increasing demand for communication bandwidth, optical time division multiplexing (OTDM) is an effective approach to upgrade the capacity of each wavelength channel in current optical systems. OTDM multiplexing can be realized simply; demultiplexing, however, requires relatively complicated signal processing and stringent timing control, which hinders its practicality. To tackle this problem, a new OTDM scheme with hybrid DPSK and OOK signals is proposed in this thesis. Experimental investigation shows that this scheme can greatly enhance the tolerance to demultiplexing timing misalignment and improve demultiplexing performance, thus making OTDM more practical and cost effective. All-optical signal processing. In current and future optical communication systems and networks, the data rate per wavelength has been approaching the speed limitation of electronics. Thus, all-optical signal processing techniques are highly desirable to support the necessary optical switching functionalities in future ultrahigh-speed optical packet-switching networks. To cope with the wide use of optical phase-modulated signals, an all-optical logic gate for DPSK or PSK input signals is developed in the thesis for the first time. Based on four-wave mixing in a semiconductor optical amplifier, the logic gate is simple, compact, and capable of supporting ultrafast operation. In addition to the general logic processing, a simple label recognition scheme, as a specific signal processing function, is proposed for phase-modulated label signals. The proposed scheme can recognize any incoming label pattern according to the local pattern, and is potentially capable of handling variable-length label patterns. Optical access network with multicast overlay and centralized light sources. In the arena of optical access networks, wavelength division multiplexing passive optical network (WDM-PON) is a promising technology to deliver high-speed data traffic. However, most proposed WDM-PONs support only conventional point-to-point services and cannot meet the increasing demand for broadcast and multicast services. In this thesis, a simple network upgrade based on the traditional PON architecture is proposed to support both point-to-point and multicast services. In addition, the two service signals are modulated on the same lightwave carrier.
The upstream signal is also remodulated on the same carrier at the optical network unit, which can significantly relax the requirement on wavelength management at the network unit.

  1. Predicting the stability of ternary intermetallics with density functional theory and machine learning

    NASA Astrophysics Data System (ADS)

    Schmidt, Jonathan; Chen, Liming; Botti, Silvana; Marques, Miguel A. L.

    2018-06-01

    We use a combination of machine learning techniques and high-throughput density-functional theory calculations to explore ternary compounds with the AB2C2 composition. We chose the two most common intermetallic prototypes for this composition, namely, the tI10-CeAl2Ga2 and the tP10-FeMo2B2 structures. Our results suggest that there may be ~10 times more stable compounds in these phases than previously known. These are mostly metallic and non-magnetic. While the use of machine learning reduces the overall calculation cost by around 75%, some limitations of its predictive power still exist, in particular, for compounds involving the second row of the periodic table or magnetic elements.
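
    A minimal sketch of the general workflow, training a regressor on previously computed stabilities and forwarding only promising candidates to DFT, is given below; the features, model choice, threshold and data are placeholders, not those of the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: simple elemental-property features (e.g.
# electronegativities and atomic radii of A, B, C) against the distance to the
# convex hull in eV/atom obtained from earlier DFT calculations.
rng = np.random.default_rng(0)
X_known = rng.random((500, 6))
y_known = rng.random(500) * 0.5

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_known, y_known)

# Candidate AB2C2 compositions: keep only those predicted close to the hull,
# which are then passed on to full DFT relaxations.
X_candidates = rng.random((10000, 6))
predicted_hull_distance = model.predict(X_candidates)
to_dft = np.where(predicted_hull_distance < 0.05)[0]
print(f"{len(to_dft)} of {len(X_candidates)} candidates forwarded to DFT")
```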

  2. Flip the tip: an automated, high quality, cost-effective patch clamp screen.

    PubMed

    Lepple-Wienhues, Albrecht; Ferlinz, Klaus; Seeger, Achim; Schäfer, Arvid

    2003-01-01

    The race for creating an automated patch clamp has begun. Here, we present a novel technology to produce true gigaseals and whole cell preparations at a high rate. Suspended cells are flushed toward the tip of glass micropipettes. Seal, whole-cell break-in, and pipette/liquid handling are fully automated. Extremely stable seals and access resistance guarantee high recording quality. Data obtained from different cell types sealed inside pipettes show long-term stability, voltage clamp and seal quality, as well as block by compounds in the pM range. A flexible array of independent electrode positions minimizes consumables consumption at maximal throughput. Pulled micropipettes guarantee a proven gigaseal substrate with ultra clean and smooth surface at low cost.

  3. Printed Carbon Nanotube Electronics and Sensor Systems.

    PubMed

    Chen, Kevin; Gao, Wei; Emaminejad, Sam; Kiriya, Daisuke; Ota, Hiroki; Nyein, Hnin Yin Yin; Takei, Kuniharu; Javey, Ali

    2016-06-01

    Printing technologies offer large-area, high-throughput production capabilities for electronics and sensors on mechanically flexible substrates that can conformally cover different surfaces. These capabilities enable a wide range of new applications such as low-cost disposable electronics for health monitoring and wearables, extremely large format electronic displays, interactive wallpapers, and sensing arrays. Solution-processed carbon nanotubes have been shown to be a promising candidate for such printing processes, offering stable devices with high performance. Here, recent progress made in printed carbon nanotube electronics is discussed in terms of materials, processing, devices, and applications. Research challenges and opportunities moving forward from processing and system-level integration points of view are also discussed for enabling practical applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Conjugation of (E)-5-[2-(Methoxycarbonyl)ethenyl]cytidine to hydrophilic microspheres: development of a mobile microscale UV light actinometer.

    PubMed

    Fang, Shiyue; Guan, Yousheng; Blatchley, Ernest R; Shen, Chengyue; Bergstrom, Donald E

    2008-03-01

    (E)-5-[2-(Methoxycarbonyl)ethenyl]cytidine was biotinylated through a diisopropylsilylacetal linkage and attached to the surface of hydrophilic streptavidin-coated microspheres through the high-affinity noncovalent interaction between biotin and streptavidin. The functionalized microspheres form a stable suspension in water. Upon UV irradiation, the nonfluorescent (E)-5-[2-(methoxycarbonyl)ethenyl]cytidine on the microspheres undergoes photocyclization to produce highly fluorescent 3-beta-D-ribofuranosyl-2,7-dioxopyrido[2,3-d]pyrimidine. The fluorescence intensity of the microspheres can be correlated to the particle-specific UV doses applied at different suspension concentrations. The microspheres allow one to measure the UV dose (fluence) distribution in high-throughput water disinfection systems.

  5. Comparison of neuronal spike exchange methods on a Blue Gene/P supercomputer.

    PubMed

    Hines, Michael; Kumar, Sameer; Schürmann, Felix

    2011-01-01

    For neural network simulations on parallel machines, interprocessor spike communication can be a significant portion of the total simulation time. The performance of several spike exchange methods using a Blue Gene/P (BG/P) supercomputer has been tested with 8-128 K cores using randomly connected networks of up to 32 M cells with 1 k connections per cell and 4 M cells with 10 k connections per cell, i.e., on the order of 4·10¹⁰ connections (K is 1024, M is 1024², and k is 1000). The spike exchange methods used are the standard Message Passing Interface (MPI) collective, MPI_Allgather, and several variants of the non-blocking Multisend method, either implemented via non-blocking MPI_Isend or exploiting the possibility of very low overhead direct memory access (DMA) communication available on the BG/P. In all cases, the worst performing method was that using MPI_Isend, due to the high overhead of initiating a spike communication. The two best performing methods had similar performance, with very low overhead for the initiation of spike communication: the persistent Multisend method using the Record-Replay feature of the Deep Computing Messaging Framework (DCMF_Multicast), and a two-phase multisend in which a DCMF_Multicast is used to first send to a subset of phase-one destination cores, which then pass it on to their subset of phase-two destination cores. Departure from ideal scaling for the Multisend methods is almost completely due to load imbalance caused by the large variation in the number of cells that fire on each processor in the interval between synchronizations. Spike exchange time itself is negligible since transmission overlaps with computation and is handled by a DMA controller. We conclude that ideal performance scaling will ultimately be limited by the imbalance in incoming processor spikes between synchronization intervals. Thus, counterintuitively, maximization of load balance requires that the distribution of cells on processors should not reflect the neural net architecture but be random, so that sets of cells which burst fire together are on different processors, with their targets on as large a set of processors as possible.
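
    A minimal mpi4py sketch of the MPI_Allgather baseline described above is shown below; the Multisend/DCMF variants rely on Blue Gene-specific messaging and are not reproduced, and the spike contents are placeholders:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Spikes generated locally during the last integration interval:
# (global_cell_id, spike_time) pairs; the values here are placeholders.
local_spikes = [(rank * 100 + 7, 0.25), (rank * 100 + 13, 0.40)]

# Every rank receives every other rank's spikes (the MPI_Allgather baseline).
all_spikes = comm.allgather(local_spikes)

# Flatten the per-rank lists; a real simulation would now keep only spikes
# whose source cells project onto targets owned by this rank.
incoming = [spike for per_rank in all_spikes for spike in per_rank]
print(f"rank {rank} received {len(incoming)} spikes")
```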

  6. Simultaneous polarization-insensitive phase-space trans-multiplexing and wavelength multicasting via cross-phase modulation in a photonic crystal fiber at 10 GBd

    NASA Astrophysics Data System (ADS)

    Cannon, Brice M.

    This thesis investigates the all-optical combination of amplitude and phase modulated signals into one unified multi-level phase modulated signal, utilizing the Kerr nonlinearity of cross-phase modulation (XPM). Predominantly, the first experimental demonstration of simultaneous polarization-insensitive phase-transmultiplexing and multicasting (PI-PTMM) will be discussed. The PI-PTMM operation combines the data of a single 10-Gbaud carrier-suppressed return-to-zero (CSRZ) on-off keyed (OOK) pump signal and 4x10-Gbaud return-to-zero (RZ) binary phase-shift keyed (BPSK) probe signals to generate 4x10-GBd RZ-quadrature phase-shift keyed (QPSK) signals utilizing a highly nonlinear, birefringent photonic crystal fiber (PCF). Since XPM is a highly polarization dependent nonlinearity, a polarization sensitivity reduction technique was used to alleviate the fluctuations due to the remotely generated signals' unpredictable states of polarization (SOP). The measured amplified spontaneous emission (ASE) limited receiver sensitivity optical signal-to-noise ratio (OSNR) penalty of the PI-PTMM signal relative to the field-programmable gate array (FPGA) pre-coded RZ-DQPSK baseline at a forward-error correction (FEC) limit of 10⁻³ BER was ≈ 0.3 dB. In addition, the OSNR of the remotely generated CSRZ-OOK signal could be degraded to ≈ 29 dB/0.1 nm, before the bit error rate (BER) performance of the PI-PTMM operation began to exponentially degrade. A 138-km dispersion-managed recirculating loop system with a 100-GHz, 13-channel mixed-format dense-wavelength-division multiplexed (DWDM) transmitter was constructed to investigate the effect of metro/long-haul transmission impairments. The PI-PTMM DQPSK and the FPGA pre-coded RZ-DQPSK baseline signals were transmitted 1,900 km and 2,400 km in the nonlinearity-limited transmission regime before reaching the 10⁻³ BER FEC limit. The relative reduction in transmission distance for the PI-PTMM signal was due to the additional transmitter impairments in the PCF that interact negatively with the transmission fiber.

  7. Formation of stable small cell number three-dimensional ovarian cancer spheroids using hanging drop arrays for preclinical drug sensitivity assays.

    PubMed

    Raghavan, Shreya; Ward, Maria R; Rowley, Katelyn R; Wold, Rachel M; Takayama, Shuichi; Buckanovich, Ronald J; Mehta, Geeta

    2015-07-01

    Ovarian cancer grows and metastasizes from multicellular spheroidal aggregates within the ascites fluid. Multicellular tumor spheroids are therefore physiologically significant 3D in vitro models for ovarian cancer research. Conventional hanging drop cultures require high starting cell numbers, and are tedious for long-term maintenance. In this study, we generate stable, uniform multicellular spheroids using very small numbers of ovarian cancer cells in a novel 384 well hanging drop array platform. We used this novel tumor spheroid platform and two ovarian cancer cell lines (A2780 and OVCAR3) to demonstrate the stable incorporation of as few as 10 cells into a single spheroid. Spheroids had uniform geometry, with projected areas (42.60×10³ μm²-475.22×10³ μm² for A2780 spheroids and 37.24×10³ μm²-281.01×10³ μm² for OVCAR3 spheroids) that varied as a function of the initial cell seeding density. Phalloidin and nuclear stains indicated cells formed tightly packed spheroids with demarcated boundaries and cell-cell interaction within spheroids. Cells within spheroids demonstrated over 85% viability. 3D tumor spheroids demonstrated greater resistance (70-80% viability) to cisplatin chemotherapy compared to 2D cultures (30-50% viability). Ovarian cancer spheroids can be generated from limited cell numbers in high throughput 384 well plates with high viability. Spheroids demonstrate therapeutic resistance relative to cells in traditional 2D culture. Stable incorporation of low cell numbers is advantageous when translating this research to rare patient-derived cells. This system can be used to understand ovarian cancer spheroid biology, as well as carry out preclinical drug sensitivity assays. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Formation of stable small cell number three-dimensional ovarian cancer spheroids using hanging drop arrays for preclinical drug sensitivity assays

    PubMed Central

    Raghavan, Shreya; Ward, Maria R.; Rowley, Katelyn R.; Wold, Rachel M.; Takayama, Shuichi; Buckanovich, Ronald J.; Mehta, Geeta

    2015-01-01

    Background Ovarian cancer grows and metastasizes from multicellular spheroidal aggregates within the ascites fluid. Multicellular tumor spheroids are therefore physiologically significant 3D in vitro models for ovarian cancer research. Conventional hanging drop cultures require high starting cell numbers, and are tedious for long-term maintenance. In this study, we generate stable, uniform multicellular spheroids using very small numbers of ovarian cancer cells in a novel 384 well hanging drop array platform. Methods We used this novel tumor spheroid platform and two ovarian cancer cell lines (A2780 and OVCAR3) to demonstrate the stable incorporation of as few as 10 cells into a single spheroid. Results Spheroids had uniform geometry, with projected areas (42.60 × 10³ μm²–475.22 × 10³ μm² for A2780 spheroids and 37.24 × 10³ μm²–281.01 × 10³ μm² for OVCAR3 spheroids) that varied as a function of the initial cell seeding density. Phalloidin and nuclear stains indicated cells formed tightly packed spheroids with demarcated boundaries and cell–cell interaction within spheroids. Cells within spheroids demonstrated over 85% viability. 3D tumor spheroids demonstrated greater resistance (70–80% viability) to cisplatin chemotherapy compared to 2D cultures (30–50% viability). Conclusions Ovarian cancer spheroids can be generated from limited cell numbers in high throughput 384 well plates with high viability. Spheroids demonstrate therapeutic resistance relative to cells in traditional 2D culture. Stable incorporation of low cell numbers is advantageous when translating this research to rare patient-derived cells. This system can be used to understand ovarian cancer spheroid biology, as well as carry out preclinical drug sensitivity assays. PMID:25913133

  9. Evaluation of soil water stable isotope analysis by H2O(liquid)-H2O(vapor) equilibration method

    NASA Astrophysics Data System (ADS)

    Gralher, Benjamin; Stumpp, Christine

    2014-05-01

    Environmental tracers like stable isotopes of water (δ18O, δ2H) have proven to be valuable tools to study water flow and transport processes in soils. Recently, a new technique for soil water isotope analysis has been developed that employs a vapor phase in isothermal equilibrium with the liquid phase of interest. This has increased the potential application of water stable isotopes in unsaturated zone studies as it supersedes the laborious extraction of soil water. However, uncertainties of analysis and influencing factors need to be considered. Therefore, the objective of this study was to evaluate different methodologies of analysing stable isotopes in soil water in order to reduce measurement uncertainty. The methodologies included different preparation procedures of soil cores for equilibration of vapor and soil water as well as raw data correction. Two different inflatable sample containers (freezer bags, bags containing a metal layer) and equilibration atmospheres (N2, dry air) were tested. The results showed that uncertainties for δ18O were higher than for δ2H, which cannot be attributed to any specific detail of the processing routine. In particular, soil samples with high contents of organic matter showed an apparent isotope enrichment which is indicative of fractionation due to evaporation. However, comparison of water samples obtained from suction cups with the local meteoric water line indicated negligible fractionation processes in the investigated soils. Therefore, a method was developed to correct the raw data, reducing the uncertainties of the analysis. We conclude that the evaluated method is advantageous over traditional methods regarding simplicity, resource requirements and sample throughput, but careful consideration needs to be made regarding sample handling and data processing. Thus, stable isotopes of water are still a good tool to determine water flow and transport processes in the unsaturated zone.
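
    For reference, the δ values discussed follow the usual per-mil convention relative to the VSMOW standard (a textbook definition, not specific to this study):

```latex
\delta^{18}\mathrm{O} = \left(\frac{{}^{18}R_{\mathrm{sample}}}{{}^{18}R_{\mathrm{VSMOW}}} - 1\right) \times 1000\ \text{\textperthousand},
\qquad
\delta^{2}\mathrm{H} = \left(\frac{{}^{2}R_{\mathrm{sample}}}{{}^{2}R_{\mathrm{VSMOW}}} - 1\right) \times 1000\ \text{\textperthousand}
```

    where ¹⁸R = [¹⁸O]/[¹⁶O] and ²R = [²H]/[¹H] are the isotope abundance ratios of the sample and of the VSMOW reference water.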

  10. NASA Exhibits

    NASA Technical Reports Server (NTRS)

    Deardorff, Glenn; Djomehri, M. Jahed; Freeman, Ken; Gambrel, Dave; Green, Bryan; Henze, Chris; Hinke, Thomas; Hood, Robert; Kiris, Cetin; Moran, Patrick

    2001-01-01

    A series of NASA presentations for the Supercomputing 2001 conference are summarized. The topics include: (1) Mars Surveyor Landing Sites "Collaboratory"; (2) Parallel and Distributed CFD for Unsteady Flows with Moving Overset Grids; (3) IP Multicast for Seamless Support of Remote Science; (4) Consolidated Supercomputing Management Office; (5) Growler: A Component-Based Framework for Distributed/Collaborative Scientific Visualization and Computational Steering; (6) Data Mining on the Information Power Grid (IPG); (7) Debugging on the IPG; (8) Debakey Heart Assist Device: (9) Unsteady Turbopump for Reusable Launch Vehicle; (10) Exploratory Computing Environments Component Framework; (11) OVERSET Computational Fluid Dynamics Tools; (12) Control and Observation in Distributed Environments; (13) Multi-Level Parallelism Scaling on NASA's Origin 1024 CPU System; (14) Computing, Information, & Communications Technology; (15) NAS Grid Benchmarks; (16) IPG: A Large-Scale Distributed Computing and Data Management System; and (17) ILab: Parameter Study Creation and Submission on the IPG.

  11. In situ patterned micro 3D liver constructs for parallel toxicology testing in a fluidic device

    PubMed Central

    Skardal, Aleksander; Devarasetty, Mahesh; Soker, Shay; Hall, Adam R

    2017-01-01

    3D tissue models are increasingly being implemented for drug and toxicology testing. However, the creation of tissue-engineered constructs for this purpose often relies on complex biofabrication techniques that are time consuming, expensive, and difficult to scale up. Here, we describe a strategy for realizing multiple tissue constructs in a parallel microfluidic platform using an approach that is simple and can be easily scaled for high-throughput formats. Liver cells mixed with a UV-crosslinkable hydrogel solution are introduced into parallel channels of a sealed microfluidic device and photopatterned to produce stable tissue constructs in situ. The remaining uncrosslinked material is washed away, leaving the structures in place. By using a hydrogel that specifically mimics the properties of the natural extracellular matrix, we closely emulate native tissue, resulting in constructs that remain stable and functional in the device during a 7-day culture time course under recirculating media flow. As proof of principle for toxicology analysis, we expose the constructs to ethyl alcohol (0–500 mM) and show that the cell viability and the secretion of urea and albumin decrease with increasing alcohol exposure, while markers for cell damage increase. PMID:26355538

  12. In situ patterned micro 3D liver constructs for parallel toxicology testing in a fluidic device.

    PubMed

    Skardal, Aleksander; Devarasetty, Mahesh; Soker, Shay; Hall, Adam R

    2015-09-10

    3D tissue models are increasingly being implemented for drug and toxicology testing. However, the creation of tissue-engineered constructs for this purpose often relies on complex biofabrication techniques that are time consuming, expensive, and difficult to scale up. Here, we describe a strategy for realizing multiple tissue constructs in a parallel microfluidic platform using an approach that is simple and can be easily scaled for high-throughput formats. Liver cells mixed with a UV-crosslinkable hydrogel solution are introduced into parallel channels of a sealed microfluidic device and photopatterned to produce stable tissue constructs in situ. The remaining uncrosslinked material is washed away, leaving the structures in place. By using a hydrogel that specifically mimics the properties of the natural extracellular matrix, we closely emulate native tissue, resulting in constructs that remain stable and functional in the device during a 7-day culture time course under recirculating media flow. As proof of principle for toxicology analysis, we expose the constructs to ethyl alcohol (0-500 mM) and show that the cell viability and the secretion of urea and albumin decrease with increasing alcohol exposure, while markers for cell damage increase.

  13. Rapid and high throughput fabrication of high temperature stable structures through PDMS transfer printing

    NASA Astrophysics Data System (ADS)

    Hohenberger, Erik; Freitag, Nathan; Korampally, Venumadhav

    2017-07-01

    We report on a facile and low cost fabrication approach for structures—gratings and enclosed nanochannels, through simple solution processed chemistries in conjunction with nanotransfer printing techniques. The ink formulation primarily consisting of an organosilicate polymeric network with a small percentage of added 3-aminopropyl triethoxysilane crosslinker allows one to obtain robust structures that are not only stable towards high temperature processing steps as high as 550 °C but also exhibit exceptional stability against a host of organic solvent washes. No discernable structure distortion was observed compared to the as-printed structures (room temperature processed) when printed structures were subjected to temperatures as high as 550 °C. We further demonstrate the applicability of this technique towards the fabrication of more complex nanostructures such as enclosed channels through a double transfer method, leveraging the exceptional room temperature cross-linking ability of the printed structures and their subsequent resistance to dissolution in organic solvent washes. The exceptional temperature and physico-chemical stability of the nanotransfer printed structures makes this a useful fabrication tool that may be applied as is, or integrated with conventional lithographic techniques for the large area fabrication of functional nanostructures and devices.

  14. Drinking Water Microbiome as a Screening Tool for ...

    EPA Pesticide Factsheets

    Many water utilities in the US using chloramine as disinfectant treatment in their distribution systems have experienced nitrification episodes, which detrimentally impact the water quality. A chloraminated drinking water distribution system (DWDS) simulator was operated through four successive operational schemes, including two stable events (SS) and an episode of nitrification (SF), followed by a ‘chlorine burn’ (SR) by switching disinfectant from chloramine to free chlorine. The current research investigated the viability of biological signatures as potential indicators of operational failure and predictors of nitrification in DWDS. For this purpose, we examined the bulk water (BW) bacterial microbiome of a chloraminated DWDS simulator operated through successive operational schemes, including an episode of nitrification. BW data was chosen because sampling of BW in a DWDS by water utility operators is relatively simpler and easier than collecting biofilm samples from underground pipes. The methodology applied a supervised classification machine learning approach (naïve Bayes algorithm) for developing predictive models for nitrification. Classification models were trained with biological datasets (Operational Taxonomic Unit [OTU] and genus-level taxonomic groups) generated using next generation high-throughput technology, and divided into two groups (i.e. binary) of positives and negatives (Failure and Stable, respectively). We also invest
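
    For orientation only, below is a minimal sketch of the kind of supervised workflow described above: a naive Bayes classifier trained on bulk-water community profiles to separate "Failure" from "Stable" operational states. The synthetic OTU table, the 70/30 train/test split, and all variable names are illustrative assumptions, not the study's actual data or pipeline.

```python
# Hedged sketch: naive Bayes classification of "Stable" vs "Failure"
# operational states from OTU relative-abundance profiles.
# The synthetic data and split are assumptions for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_samples, n_otus = 60, 200
otu_table = rng.poisson(5, size=(n_samples, n_otus)).astype(float)
otu_table /= otu_table.sum(axis=1, keepdims=True)   # counts -> relative abundances
labels = rng.integers(0, 2, size=n_samples)          # 1 = Failure (nitrification), 0 = Stable

X_train, X_test, y_train, y_test = train_test_split(
    otu_table, labels, test_size=0.3, random_state=0, stratify=labels)

model = GaussianNB().fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test),
                            target_names=["Stable", "Failure"]))
```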

  15. Note: Real-time monitoring via second-harmonic interferometry of a flow gas cell for laser wakefield acceleration.

    PubMed

    Brandi, F; Giammanco, F; Conti, F; Sylla, F; Lambert, G; Gizzi, L A

    2016-08-01

    The use of a gas cell as a target for laser wakefield acceleration (LWFA) offers the possibility to obtain a stable and manageable laser-plasma interaction process, a mandatory condition for practical applications of this emerging technique, especially in multi-stage accelerators. In order to obtain full control of the gas particle number density in the interaction region, thus allowing for long-term stable and manageable LWFA, real-time monitoring is necessary. In fact, the ideal gas law cannot be used to estimate the particle density inside the flow cell based on the preset backing pressure and the room temperature, because the gas flow depends on several factors like tubing, regulators, and valves in the gas supply system, as well as vacuum chamber volume and vacuum pump speed/throughput. Here, second-harmonic interferometry is applied to measure the particle number density inside a flow gas cell designed for LWFA. The results demonstrate that real-time monitoring is achieved and that, using low backing pressure gas (<1 bar) and different cell orifice diameters (<2 mm), it is possible to finely tune the number density up to the 10¹⁹ cm⁻³ range well suited for LWFA.
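
    For a rough sense of scale (our illustration, not a figure from the paper), the naive ideal-gas estimate the authors caution against is n = P/(k_B·T); at a backing pressure of 1 bar and room temperature (293 K) this gives n ≈ 10⁵ Pa / (1.38 × 10⁻²³ J K⁻¹ × 293 K) ≈ 2.5 × 10²⁵ m⁻³, i.e. about 2.5 × 10¹⁹ cm⁻³. This lies in the quoted range, but the actual in-cell density depends on the flow factors listed above, which is why direct interferometric monitoring is needed.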

  16. Note: Real-time monitoring via second-harmonic interferometry of a flow gas cell for laser wakefield acceleration

    NASA Astrophysics Data System (ADS)

    Brandi, F.; Giammanco, F.; Conti, F.; Sylla, F.; Lambert, G.; Gizzi, L. A.

    2016-08-01

    The use of a gas cell as a target for laser wakefield acceleration (LWFA) offers the possibility to obtain a stable and manageable laser-plasma interaction process, a mandatory condition for practical applications of this emerging technique, especially in multi-stage accelerators. In order to obtain full control of the gas particle number density in the interaction region, thus allowing for long-term stable and manageable LWFA, real-time monitoring is necessary. In fact, the ideal gas law cannot be used to estimate the particle density inside the flow cell based on the preset backing pressure and the room temperature, because the gas flow depends on several factors like tubing, regulators, and valves in the gas supply system, as well as vacuum chamber volume and vacuum pump speed/throughput. Here, second-harmonic interferometry is applied to measure the particle number density inside a flow gas cell designed for LWFA. The results demonstrate that real-time monitoring is achieved and that, using low backing pressure gas (<1 bar) and different cell orifice diameters (<2 mm), it is possible to finely tune the number density up to the 10¹⁹ cm⁻³ range well suited for LWFA.

  17. Discovery and Characterization of a Pourbaix-Stable, 1.8 eV Direct Gap Bismuth Manganate Photoanode

    DOE PAGES

    Newhouse, Paul F.; Reyes-Lillo, Sebastian E.; Li, Guo; ...

    2017-11-13

    Solar-driven oxygen evolution is a critical technology for renewably synthesizing hydrogen- and carbon-containing fuels in solar fuel generators. New photoanode materials are needed to meet efficiency and stability requirements, motivating materials explorations for semiconductors with (i) band-gap energy in the visible spectrum and (ii) stable operation in aqueous electrolyte at the electrochemical potential needed to evolve oxygen from water. Motivated by the oxygen evolution competency of many Mn-based oxides, the existence of several Bi-containing ternary oxide photoanode materials, and the variety of known oxide materials combining these elements with Sm, we explore the Bi-Mn-Sm oxide system for new photoanodes. Through the use of a ferri/ferrocyanide redox couple in high-throughput screening, BiMn₂O₅ and its alloy with Sm are identified as photoanode materials with a near-ideal optical band gap of 1.8 eV. Using density functional theory-based calculations of the mullite Bi³⁺Mn³⁺Mn⁴⁺O₅ phase, we identify electronic analogues to the well-known BiVO₄ photoanode and demonstrate excellent Pourbaix stability above the oxygen evolution Nernstian potential from pH 4.5 to 15. Lastly, our suite of experimental and computational characterization indicates that BiMn₂O₅ is a complex oxide with the necessary optical and chemical properties to be an efficient, stable solar fuel photoanode.

  18. Discovery and Characterization of a Pourbaix-Stable, 1.8 eV Direct Gap Bismuth Manganate Photoanode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newhouse, Paul F.; Reyes-Lillo, Sebastian E.; Li, Guo

    Solar-driven oxygen evolution is a critical technology for renewably synthesizing hydrogen- and carbon-containing fuels in solar fuel generators. New photoanode materials are needed to meet efficiency and stability requirements, motivating materials explorations for semiconductors with (i) band-gap energy in the visible spectrum and (ii) stable operation in aqueous electrolyte at the electrochemical potential needed to evolve oxygen from water. Motivated by the oxygen evolution competency of many Mn-based oxides, the existence of several Bi-containing ternary oxide photoanode materials, and the variety of known oxide materials combining these elements with Sm, we explore the Bi-Mn-Sm oxide system for new photoanodes. Through the use of a ferri/ferrocyanide redox couple in high-throughput screening, BiMn₂O₅ and its alloy with Sm are identified as photoanode materials with a near-ideal optical band gap of 1.8 eV. Using density functional theory-based calculations of the mullite Bi³⁺Mn³⁺Mn⁴⁺O₅ phase, we identify electronic analogues to the well-known BiVO₄ photoanode and demonstrate excellent Pourbaix stability above the oxygen evolution Nernstian potential from pH 4.5 to 15. Lastly, our suite of experimental and computational characterization indicates that BiMn₂O₅ is a complex oxide with the necessary optical and chemical properties to be an efficient, stable solar fuel photoanode.

  19. Stable Associations Masked by Temporal Variability in the Marine Copepod Microbiome.

    PubMed

    Moisander, Pia H; Sexton, Andrew D; Daley, Meaghan C

    2015-01-01

    Copepod-bacteria interactions include permanent and transient epi- and endobiotic associations that may play roles in copepod health, transfer of elements in the food web, and biogeochemical cycling. Microbiomes of three temperate copepod species (Acartia longiremis, Centropages hamatus, and Calanus finmarchicus) from the Gulf of Maine were investigated during the early summer season using high throughput amplicon sequencing. The most prominent stable component of the microbiome included several taxa within Gammaproteobacteria, with Pseudoalteromonas spp. especially abundant across copepod species. These Gammaproteobacteria appear to be promoted by the copepod association, likely benefitting from nutrient enriched microenvironments on copepods, and forming a more important part of the copepod-associated community than Vibrio spp. during the cold-water season in this temperate system. Taxon-specific associations included an elevated relative abundance of Piscirickettsiaceae and Colwelliaceae on Calanus, and Marinomonas sp. in Centropages. The communities in full and voided gut copepods had distinct characteristics, thus the presence of a food-associated microbiome was evident, including higher abundance of Rhodobacteraceae and chloroplast sequences in the transient communities. The observed variability was partially explained by collection date that may be linked to factors such as variable time since molting, gender differences, and changes in food availability and type over the study period. While some taxon-specific and stable associations were identified, temporal changes in environmental conditions, including food type, appear to be key in controlling the composition of bacterial communities associated with copepods in this temperate coastal system during the early summer.

  20. High-purity circular RNA isolation method (RPAD) reveals vast collection of intronic circRNAs.

    PubMed

    Panda, Amaresh C; De, Supriyo; Grammatikakis, Ioannis; Munk, Rachel; Yang, Xiaoling; Piao, Yulan; Dudekula, Dawood B; Abdelmohsen, Kotb; Gorospe, Myriam

    2017-07-07

    High-throughput RNA sequencing methods coupled with specialized bioinformatic analyses have recently uncovered tens of thousands of unique circular (circ)RNAs, but their complete sequences, genes of origin and functions are largely unknown. Given that circRNAs lack free ends and are thus relatively stable, their association with microRNAs (miRNAs) and RNA-binding proteins (RBPs) can influence gene expression programs. While exoribonuclease treatment is widely used to degrade linear RNAs and enrich circRNAs in RNA samples, it does not efficiently eliminate all linear RNAs. Here, we describe a novel method for the isolation of highly pure circRNA populations involving RNase R treatment followed by Polyadenylation and poly(A)+ RNA Depletion (RPAD), which removes linear RNA to near completion. High-throughput sequencing of RNA prepared using RPAD from human cervical carcinoma HeLa cells and mouse C2C12 myoblasts led to two surprising discoveries: (i) many exonic circRNA (EcircRNA) isoforms share an identical backsplice sequence but have different body sizes and sequences, and (ii) thousands of novel intronic circular RNAs (IcircRNAs) are expressed in cells. In sum, isolating high-purity circRNAs using the RPAD method can enable quantitative and qualitative analyses of circRNA types and sequence composition, paving the way for the elucidation of circRNA functions. Published by Oxford University Press on behalf of Nucleic Acids Research 2017.

  1. Optimization of Time-Resolved Fluorescence Assay for Detection of Eu-DOTA-labeled Ligand-Receptor Interactions

    PubMed Central

    De Silva, Channa R.; Vagner, Josef; Lynch, Ronald; Gillies, Robert J.; Hruby, Victor J.

    2010-01-01

    Lanthanide-based luminescent ligand binding assays are superior to traditional radiolabel assays due to improved sensitivity and affordability in high throughput screening while eliminating the use of radioactivity. Despite significant progress using lanthanide(III)-coordinated chelators such as DTPA derivatives, dissociation-enhanced lanthanide fluoroimmunoassays (DELFIA) have not yet been successfully used with more stable chelators, e.g. DOTA derivatives, due to the incomplete release of lanthanide(III) ions from the complex. Here, a modified and an optimized DELFIA procedure incorporating an acid treatment protocol is introduced for use with Eu(III)-DOTA labeled peptides. Complete release of Eu(III) ions from DOTA labeled ligands was observed using hydrochloric acid (2.0 M) prior to the luminescent enhancement step. NDP-α-MSH labeled with Eu(III)-DOTA was synthesized and the binding affinity to cells overexpressing the human melanocortin-4 receptors (hMC4R) was evaluated using the modified protocol. Binding data indicate that the Eu(III)-DOTA linked peptide bound to these cells with an affinity similar to its DTPA analogue. The modified DELFIA procedure was further used to monitor the binding of an Eu(III)-DOTA labeled heterobivalent peptide to the cells expressing both hMC4R and CCK-2 (Cholecystokinin) receptors. The modified assay provides superior results and is appropriate for high-throughput screening of ligand libraries. PMID:19852924

  2. ComplexQuant: high-throughput computational pipeline for the global quantitative analysis of endogenous soluble protein complexes using high resolution protein HPLC and precision label-free LC/MS/MS.

    PubMed

    Wan, Cuihong; Liu, Jian; Fong, Vincent; Lugowski, Andrew; Stoilova, Snejana; Bethune-Waddell, Dylan; Borgeson, Blake; Havugimana, Pierre C; Marcotte, Edward M; Emili, Andrew

    2013-04-09

    The experimental isolation and characterization of stable multi-protein complexes are essential to understanding the molecular systems biology of a cell. To this end, we have developed a high-throughput proteomic platform for the systematic identification of native protein complexes based on extensive fractionation of soluble protein extracts by multi-bed ion exchange high performance liquid chromatography (IEX-HPLC) combined with exhaustive label-free LC/MS/MS shotgun profiling. To support these studies, we have built a companion data analysis software pipeline, termed ComplexQuant. Proteins present in the hundreds of fractions typically collected per experiment are first identified by exhaustively interrogating MS/MS spectra using multiple database search engines within an integrative probabilistic framework, while accounting for possible post-translation modifications. Protein abundance is then measured across the fractions based on normalized total spectral counts and precursor ion intensities using a dedicated tool, PepQuant. This analysis allows co-complex membership to be inferred based on the similarity of extracted protein co-elution profiles. Each computational step has been optimized for processing large-scale biochemical fractionation datasets, and the reliability of the integrated pipeline has been benchmarked extensively. This article is part of a Special Issue entitled: From protein structures to clinical applications. Copyright © 2012 Elsevier B.V. All rights reserved.
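
    As an illustration of the co-elution idea described above, the sketch below compares normalized elution profiles of proteins across HPLC fractions and flags highly correlated pairs as candidate co-complex members. The toy profile matrix, the Pearson similarity measure, and the 0.9 cutoff are assumptions for illustration, not ComplexQuant's actual data or parameters.

```python
# Hedged sketch: inferring candidate co-complex membership from the similarity
# of protein co-elution profiles across chromatographic fractions.
import numpy as np
from itertools import combinations

# rows = proteins, columns = fractions; values = normalized spectral counts (toy data)
profiles = {
    "ProtA": np.array([0, 2, 8, 15, 6, 1, 0, 0], dtype=float),
    "ProtB": np.array([0, 1, 7, 14, 7, 2, 0, 0], dtype=float),
    "ProtC": np.array([5, 9, 3, 0, 0, 0, 4, 8], dtype=float),
}

def pearson(x, y):
    # Pearson correlation between two elution profiles
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

for a, b in combinations(profiles, 2):
    r = pearson(profiles[a], profiles[b])
    flag = "candidate co-complex" if r > 0.9 else ""
    print(f"{a}-{b}: r = {r:.2f} {flag}")
```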

  3. Fluorophore Absorption Size Exclusion Chromatography (FA-SEC): An Alternative Method for High-Throughput Detergent Screening of Membrane Proteins.

    PubMed

    Lin, Sung-Yao; Sun, Xing-Han; Hsiao, Yu-Hsuan; Chang, Shao-En; Li, Guan-Syun; Hu, Nien-Jen

    2016-01-01

    Membrane proteins play key roles in many fundamental functions in cells including ATP synthesis, ion and molecule transport, cell signalling and enzymatic reactions, accounting for ~30% of the genes in whole genomes. However, the hydrophobic nature of membrane proteins frequently hampers the progress of structure determination. Detergent screening is the critical step in obtaining stable detergent-solubilized membrane proteins and well-diffracting protein crystals. Fluorescence Detection Size Exclusion Chromatography (FSEC) has been developed to monitor the extraction efficiency and monodispersity of membrane proteins in detergent micelles. By tracing the FSEC profiles of GFP-fused membrane proteins, this method significantly enhances the throughput of detergent screening. However, current methods to acquire FSEC profiles require either an in-line fluorescence detector with the SEC equipment or an off-line spectrofluorometer microplate reader. Here, we introduce an alternative method detecting the absorption of GFP (FA-SEC) at 485 nm, thus making this methodology possible on conventional SEC equipment through the in-line absorbance spectrometer. The results demonstrate that the absorption correlates strongly with the fluorescence of GFP. The comparably weaker absorption signal can be improved by using a longer path-length flow cell. The FA-SEC profiles were congruent with the ones plotted by FSEC, suggesting FA-SEC could be a comparable and economical setup for detergent screening of membrane proteins.
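
    The benefit of a longer path-length flow cell follows from the Beer-Lambert law (a standard relation, not taken from the paper): absorbance A = ε·l·c, where ε is the molar absorptivity of the GFP chromophore at 485 nm, l the optical path length, and c the concentration. Doubling the path length therefore doubles the absorbance for the same sample, partially compensating for the weaker absorption signal relative to fluorescence detection.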

  4. A high-throughput mass spectrometry assay to simultaneously measure intact insulin and C-peptide.

    PubMed

    Taylor, Steven W; Clarke, Nigel J; Chen, Zhaohui; McPhaul, Michael J

    2016-04-01

    Measurements of fasting levels of insulin and C-peptide are useful in documenting insulin resistance and may help predict development of diabetes mellitus. However, the specific insulin and C-peptide levels associated with specific degrees of insulin resistance have not been defined, owing to marked variability among immunoassays and lack of standardization. Herein, we describe a multiplexed liquid chromatography-tandem mass spectrometry (LC-MS/MS) assay for intact insulin and C-peptide. Insulin and C-peptide were enriched from patient sera using monoclonal antibodies immobilized on magnetic beads and processed on a robotic liquid handler. Eluted peptides were analyzed by LC-MS/MS. Bovine insulin and a stable isotopically-labeled (13C/15N) C-peptide were utilized as internal standards. The assay had an analytical measurement range of 3 to 320 μIU/ml (18 to 1920 pmol/l) for insulin and 0.11 to 27.2 ng/ml (36 to 9006 pmol/l) for C-peptide. Intra- and inter-day assay variation was less than 11% for both peptides. Of the 5 insulin analogs commonly prescribed to treat diabetes, only the recombinant drug insulin lispro caused significant interference for the determination of endogenous insulin. There were no observed interferences for C-peptide. We developed and validated a high-throughput, quantitative, multiplexed LC-MS/MS assay for intact insulin and C-peptide. Copyright © 2016 Elsevier B.V. All rights reserved.
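
    As a quick consistency check on the two reporting units in the quoted measurement ranges, the sketch below applies the standard conversions (about 6.0 pmol/l per μIU/ml for insulin, and a C-peptide molar mass of roughly 3020 g/mol); these conversion factors are our assumptions, not values taken from the paper.

```python
# Hedged sketch: unit conversions implied by the quoted measurement ranges.
INSULIN_PMOL_PER_UIU_ML = 6.0       # standard insulin conversion (assumption)
C_PEPTIDE_MW_G_PER_MOL = 3020.0     # approximate C-peptide molar mass (assumption)

def insulin_uiu_ml_to_pmol_l(x_uiu_ml):
    return x_uiu_ml * INSULIN_PMOL_PER_UIU_ML

def c_peptide_ng_ml_to_pmol_l(x_ng_ml):
    # ng/ml equals ug/l; divide by molar mass (g/mol) and rescale to pmol/l
    return x_ng_ml * 1.0e6 / C_PEPTIDE_MW_G_PER_MOL

print(insulin_uiu_ml_to_pmol_l(3), insulin_uiu_ml_to_pmol_l(320))         # ~18, ~1920 pmol/l
print(c_peptide_ng_ml_to_pmol_l(0.11), c_peptide_ng_ml_to_pmol_l(27.2))   # ~36, ~9007 pmol/l
```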

  5. High-throughput estimation of incident light, light interception and radiation-use efficiency of thousands of plants in a phenotyping platform.

    PubMed

    Cabrera-Bosquet, Llorenç; Fournier, Christian; Brichet, Nicolas; Welcker, Claude; Suard, Benoît; Tardieu, François

    2016-10-01

    Light interception and radiation-use efficiency (RUE) are essential components of plant performance. Their genetic dissections require novel high-throughput phenotyping methods. We have developed a suite of methods to evaluate the spatial distribution of incident light, as experienced by hundreds of plants in a glasshouse, by simulating sunbeam trajectories through glasshouse structures every day of the year; the amount of light intercepted by maize (Zea mays) plants via a functional-structural model using three-dimensional (3D) reconstructions of each plant placed in a virtual scene reproducing the canopy in the glasshouse; and RUE, as the ratio of plant biomass to intercepted light. The spatial variation of direct and diffuse incident light in the glasshouse (up to 24%) was correctly predicted at the single-plant scale. Light interception largely varied between maize lines that differed in leaf angles (nearly stable between experiments) and area (highly variable between experiments). Estimated RUEs varied between maize lines, but were similar in two experiments with contrasting incident light. They closely correlated with measured gas exchanges. The methods proposed here identified reproducible traits that might be used in further field studies, thereby opening up the way for large-scale genetic analyses of the components of plant performance. © 2016 INRA New Phytologist © 2016 New Phytologist Trust.
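
    In this framing (symbols ours, not the paper's), radiation-use efficiency is simply RUE = ΔB / Σ I_int, where ΔB is the biomass accumulated by a plant over the experiment and Σ I_int is the light intercepted by that plant over the same period, summed from the daily functional-structural model estimates; the interception term is exactly what the 3D plant reconstructions combined with the glasshouse light model provide.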

  6. Factor analysis and predictive validity of microcomputer-based tests

    NASA Technical Reports Server (NTRS)

    Kennedy, R. S.; Baltzley, D. R.; Turnage, J. J.; Jones, M. B.

    1989-01-01

    Eleven tests were selected from two microcomputer-based performance test batteries because these tests previously exhibited rapid stability (less than 10 min. of practice) and high retest reliability efficiencies (r greater than 0.707 for each 3 min. of testing). The battery was administered three times to each of 108 college students (48 men and 60 women) and a factor analysis was performed. Two of the three identified factors appear to be related to information processing ("encoding" and "throughput/decoding"), and the third was named an "output/speed" factor. The spatial, memory, and verbal tests loaded on the "encoding" factor and included Grammatical Reasoning, Pattern Comparison, Continuous Recall, and Matrix Rotation. The "throughput/decoding" tests included perceptual/numerical tests like Math Processing, Code Substitution, and Pattern Comparison. The "output/speed" factor was identified by Tapping and Reaction Time tests. The Wonderlic Personnel Test was group administered before the first and after the last administration of the performance tests. The multiple Rs in the total sample between combined Wonderlic as a criterion and less than 5 min. of microcomputer testing on Grammatical Reasoning and Math Processing as predictors ranged between 0.41 and 0.52 on the three test administrations. Based on these results, the authors recommend a core battery which, if time permits, would consist of two tests from each factor. Such a battery is now known to permit stable, reliable, and efficient assessment.

  7. High-purity circular RNA isolation method (RPAD) reveals vast collection of intronic circRNAs

    PubMed Central

    De, Supriyo; Grammatikakis, Ioannis; Munk, Rachel; Yang, Xiaoling; Piao, Yulan; Dudekula, Dawood B.; Gorospe, Myriam

    2017-01-01

    High-throughput RNA sequencing methods coupled with specialized bioinformatic analyses have recently uncovered tens of thousands of unique circular (circ)RNAs, but their complete sequences, genes of origin and functions are largely unknown. Given that circRNAs lack free ends and are thus relatively stable, their association with microRNAs (miRNAs) and RNA-binding proteins (RBPs) can influence gene expression programs. While exoribonuclease treatment is widely used to degrade linear RNAs and enrich circRNAs in RNA samples, it does not efficiently eliminate all linear RNAs. Here, we describe a novel method for the isolation of highly pure circRNA populations involving RNase R treatment followed by Polyadenylation and poly(A)+ RNA Depletion (RPAD), which removes linear RNA to near completion. High-throughput sequencing of RNA prepared using RPAD from human cervical carcinoma HeLa cells and mouse C2C12 myoblasts led to two surprising discoveries: (i) many exonic circRNA (EcircRNA) isoforms share an identical backsplice sequence but have different body sizes and sequences, and (ii) thousands of novel intronic circular RNAs (IcircRNAs) are expressed in cells. In sum, isolating high-purity circRNAs using the RPAD method can enable quantitative and qualitative analyses of circRNA types and sequence composition, paving the way for the elucidation of circRNA functions. PMID:28444238

  8. Development and Characterization of a High Throughput Screen to investigate the delayed Effects of Radiations Commonly Encountered in Space

    NASA Astrophysics Data System (ADS)

    Morgan, W. F.

    Astronauts based on the space station or on long-term space missions will be exposed to high-Z radiations in the cosmic environment. In order to evaluate the potentially deleterious effects of exposure to radiations commonly encountered in space, we have developed and characterized a high-throughput assay to detect mutation/deletion events and/or hyperrecombination in the progeny of exposed cells. This assay is based on a plasmid vector containing a green fluorescence protein reporter construct. We have shown that, after stable transfection of the vector into human or hamster cells, this construct can identify mutations (specifically base changes and deletions) as well as recombination events (e.g., gene conversion or homologous recombination) occurring as a result of exposure to ionizing radiation. Our focus has been on those events occurring in the progeny of an irradiated cell that are potentially associated with radiation-induced genomic instability, rather than the more conventional assays that evaluate the direct, immediate effects of radiation exposure. Considerable time has been spent automating the analysis of surviving colonies as a function of time after irradiation in order to determine when delayed instability is induced and the consequences of this delayed instability. The assay is now automated, permitting the evaluation of potentially rare events associated with the low-dose, low-dose-rate radiations commonly encountered in space.

  9. High-Throughput Screening of Vascular Endothelium-Destructive or Protective Microenvironments: Cooperative Actions of Extracellular Matrix Composition, Stiffness, and Structure.

    PubMed

    Ding, Yonghui; Floren, Michael; Tan, Wei

    2017-06-01

    Pathological modification of the subendothelial extracellular matrix (ECM) has closely been associated with endothelial activation and subsequent cardiovascular disease progression. To understand regulatory mechanisms of these matrix modifications, the majority of previous efforts have focused on the modulation of either chemical composition or matrix stiffness on 2D smooth surfaces without simultaneously probing their cooperative effects on endothelium function on in vivo like 3D fibrous matrices. To this end, a high-throughput, combinatorial microarray platform on 2D and 3D hydrogel settings to resemble the compositions, stiffness, and structure of healthy and diseased subendothelial ECM has been established, and further their respective and combined effects on endothelial attachment, proliferation, inflammation, and junctional integrity have been investigated. For the first time, the results demonstrate that 3D fibrous structure resembling native ECM is a critical endothelium-protective microenvironmental factor by maintaining the stable, quiescent endothelium with strong resistance to proinflammatory stimuli. It is also revealed that matrix stiffening, in concert with chemical compositions resembling diseased ECM, particularly collagen III, could aggravate activation of nuclear factor kappa B, disruption of endothelium integrity, and susceptibility to proinflammatory stimuli. This study elucidates cooperative effects of various microenvironmental factors on endothelial activation and sheds light on new in vitro model for cardiovascular diseases. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Quantifying Kinase-Specific Phosphorylation Stoichiometry Using Stable Isotope Labeling In a Reverse In-Gel Kinase Assay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xiang; Cox, Jonathan T.; Huang, Weiliang

    2016-12-06

    Reversible protein phosphorylation regulates essentially all cellular activities. Aberrant protein phosphorylation is an etiological factor in a wide array of diseases, including cancer [1], diabetes [2], and Alzheimer’s [3]. Given the broad impact of protein phosphorylation on cellular biology and organismal health, understanding how protein phosphorylation is regulated and the consequences of gain and loss of phosphoryl moieties from proteins is of primary importance. Advances in instrumentation, particularly in mass spectrometry, coupled with high throughput approaches have recently yielded large datasets cataloging tens of thousands of protein phosphorylation sites in multiple organisms [4-6]. While these studies are seminal in terms of data collection, our understanding of protein phosphorylation regulation remains largely one-dimensional.

  11. Immediate drop on demand technology (I-DOT) coupled with mass spectrometry via an open port sampling interface.

    PubMed

    Van Berkel, Gary J; Kertesz, Vilmos; Boeltz, Harry

    2017-11-01

    The aim of this work was to demonstrate and evaluate the analytical performance of coupling the immediate drop on demand technology to a mass spectrometer via the recently introduced open port sampling interface and ESI. Methodology & results: A maximum sample analysis throughput of 5 s per sample was demonstrated. Signal reproducibility was 10% or better as demonstrated by the quantitative analysis of propranolol and its stable isotope-labeled internal standard propranolol-d7. The ability of the system to multiply charge and analyze macromolecules was demonstrated using the protein cytochrome c. This immediate drop on demand technology/open port sampling interface/ESI-MS combination allowed for the quantitative analysis of relatively small mass analytes and was used for the identification of macromolecules like proteins.

  12. Ultralow Thermal Conductivity in Full Heusler Semiconductors.

    PubMed

    He, Jiangang; Amsler, Maximilian; Xia, Yi; Naghavi, S Shahab; Hegde, Vinay I; Hao, Shiqiang; Goedecker, Stefan; Ozoliņš, Vidvuds; Wolverton, Chris

    2016-07-22

    Semiconducting half and, to a lesser extent, full Heusler compounds are promising thermoelectric materials due to their compelling electronic properties with large power factors. However, intrinsically high thermal conductivity resulting in a limited thermoelectric efficiency has so far impeded their widespread use in practical applications. Here, we report the computational discovery of a class of hitherto unknown stable semiconducting full Heusler compounds with ten valence electrons (X₂YZ, X = Ca, Sr, and Ba; Y = Au and Hg; Z = Sn, Pb, As, Sb, and Bi) through high-throughput ab initio screening. These new compounds exhibit ultralow lattice thermal conductivity κ_L close to the theoretical minimum due to strong anharmonic rattling of the heavy noble metals, while preserving high power factors, thus resulting in excellent phonon-glass electron-crystal materials.

  13. Software Voting in Asynchronous NMR (N-Modular Redundancy) Computer Structures.

    DTIC Science & Technology

    1983-05-06

    added reliability is exchanged for increased system cost and decreased throughput. Some applications require extremely reliable systems, so the only...not the other way around. Although no systems provide abstract voting yet, as more applications are written for NMR systems, the programmers are going...throughput goes down, the overhead goes up. Mathematically: Overhead = Nonredundant Throughput - Actual Throughput (1). In this section, the actual throughput
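
    As a worked illustration of Equation (1), with made-up numbers: if the non-redundant version of an application sustains 100 tasks/s and the NMR version with software voting sustains 70 tasks/s, the overhead is 100 - 70 = 30 tasks/s.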

  14. On the Achievable Throughput Over TVWS Sensor Networks

    PubMed Central

    Caleffi, Marcello; Cacciapuoti, Angela Sara

    2016-01-01

    In this letter, we study the throughput achievable by an unlicensed sensor network operating over TV white space spectrum in the presence of coexistence interference. Throughout the letter, we first analytically derive the achievable throughput as a function of the channel ordering. Then, we show that the problem of deriving the maximum expected throughput through exhaustive search is computationally infeasible. Finally, we derive a computationally efficient algorithm with polynomial-time complexity to compute the channel set maximizing the expected throughput and, stemming from this, we derive a closed-form expression of the maximum expected throughput. Numerical simulations validate the theoretical analysis. PMID:27043565

  15. Identifying the Active Microbiome Associated with Roots and Rhizosphere Soil of Oilseed Rape

    PubMed Central

    Mahmood, Shahid; Ekblad, Alf; Alström, Sadhna; Högberg, Nils; Finlay, Roger

    2017-01-01

    ABSTRACT RNA stable isotope probing and high-throughput sequencing were used to characterize the active microbiomes of bacteria and fungi colonizing the roots and rhizosphere soil of oilseed rape to identify taxa assimilating plant-derived carbon following 13CO2 labeling. Root- and rhizosphere soil-associated communities of both bacteria and fungi differed from each other, and there were highly significant differences between their DNA- and RNA-based community profiles. Verrucomicrobia, Proteobacteria, Planctomycetes, Acidobacteria, Gemmatimonadetes, Actinobacteria, and Chloroflexi were the most active bacterial phyla in the rhizosphere soil. Bacteroidetes were more active in roots. The most abundant bacterial genera were well represented in both the 13C- and 12C-RNA fractions, while the fungal taxa were more differentiated. Streptomyces, Rhizobium, and Flavobacterium were dominant in roots, whereas Rhodoplanes and Sphingomonas (Kaistobacter) were dominant in rhizosphere soil. “Candidatus Nitrososphaera” was enriched in 13C in rhizosphere soil. Olpidium and Dendryphion were abundant in the 12C-RNA fraction of roots; Clonostachys was abundant in both roots and rhizosphere soil and heavily 13C enriched. Cryptococcus was dominant in rhizosphere soil and less abundant, but was 13C enriched in roots. The patterns of colonization and C acquisition revealed in this study assist in identifying microbial taxa that may be superior competitors for plant-derived carbon in the rhizosphere of Brassica napus. IMPORTANCE This microbiome study characterizes the active bacteria and fungi colonizing the roots and rhizosphere soil of Brassica napus using high-throughput sequencing and RNA-stable isotope probing. It identifies taxa assimilating plant-derived carbon following 13CO2 labeling and compares these with other less active groups not incorporating a plant assimilate. Brassica napus is an economically and globally important oilseed crop, cultivated for edible oil, biofuel production, and phytoextraction of heavy metals; however, it is susceptible to several diseases. The identification of the fungal and bacterial species successfully competing for plant-derived carbon, enabling them to colonize the roots and rhizosphere soil of this plant, should enable the identification of microorganisms that can be evaluated in more detailed functional studies and ultimately be used to improve plant health and productivity in sustainable agriculture. PMID:28887416

  16. Identifying the Active Microbiome Associated with Roots and Rhizosphere Soil of Oilseed Rape.

    PubMed

    Gkarmiri, Konstantia; Mahmood, Shahid; Ekblad, Alf; Alström, Sadhna; Högberg, Nils; Finlay, Roger

    2017-11-15

    RNA stable isotope probing and high-throughput sequencing were used to characterize the active microbiomes of bacteria and fungi colonizing the roots and rhizosphere soil of oilseed rape to identify taxa assimilating plant-derived carbon following 13CO2 labeling. Root- and rhizosphere soil-associated communities of both bacteria and fungi differed from each other, and there were highly significant differences between their DNA- and RNA-based community profiles. Verrucomicrobia, Proteobacteria, Planctomycetes, Acidobacteria, Gemmatimonadetes, Actinobacteria, and Chloroflexi were the most active bacterial phyla in the rhizosphere soil. Bacteroidetes were more active in roots. The most abundant bacterial genera were well represented in both the 13C- and 12C-RNA fractions, while the fungal taxa were more differentiated. Streptomyces, Rhizobium, and Flavobacterium were dominant in roots, whereas Rhodoplanes and Sphingomonas (Kaistobacter) were dominant in rhizosphere soil. "Candidatus Nitrososphaera" was enriched in 13C in rhizosphere soil. Olpidium and Dendryphion were abundant in the 12C-RNA fraction of roots; Clonostachys was abundant in both roots and rhizosphere soil and heavily 13C enriched. Cryptococcus was dominant in rhizosphere soil and less abundant, but was 13C enriched in roots. The patterns of colonization and C acquisition revealed in this study assist in identifying microbial taxa that may be superior competitors for plant-derived carbon in the rhizosphere of Brassica napus. IMPORTANCE This microbiome study characterizes the active bacteria and fungi colonizing the roots and rhizosphere soil of Brassica napus using high-throughput sequencing and RNA-stable isotope probing. It identifies taxa assimilating plant-derived carbon following 13CO2 labeling and compares these with other less active groups not incorporating a plant assimilate. Brassica napus is an economically and globally important oilseed crop, cultivated for edible oil, biofuel production, and phytoextraction of heavy metals; however, it is susceptible to several diseases. The identification of the fungal and bacterial species successfully competing for plant-derived carbon, enabling them to colonize the roots and rhizosphere soil of this plant, should enable the identification of microorganisms that can be evaluated in more detailed functional studies and ultimately be used to improve plant health and productivity in sustainable agriculture. Copyright © 2017 American Society for Microbiology.

  17. Virtually-synchronous communication based on a weak failure suspector

    NASA Technical Reports Server (NTRS)

    Schiper, Andre; Ricciardi, Aleta

    1993-01-01

    Failure detectors (or, more accurately, Failure Suspectors (FS)) appear to be a fundamental service upon which to build fault-tolerant, distributed applications. This paper shows that an FS with very weak semantics (i.e., that delivers failure and recovery information in no specific order) suffices to implement virtually-synchronous communication (VSC) in an asynchronous system subject to process crash failures and network partitions. The VSC paradigm is particularly useful in asynchronous systems and greatly simplifies building fault-tolerant applications that mask failures by replicating processes. We suggest a three-component architecture to implement virtually-synchronous communication: (1) at the lowest level, the FS component; (2a) on top of it, a component that defines new views; and (2b) a component that reliably multicasts messages within a view. The issues covered in this paper also lead to a better understanding of the various membership service semantics proposed in recent literature.

  18. Load balancing for massively-parallel soft-real-time systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hailperin, M.

    1988-09-01

    Global load balancing, if practical, would allow the effective use of massively-parallel ensemble architectures for large soft-real-time problems. The challenge is to replace quick global communications, which is impractical in a massively-parallel system, with statistical techniques. In this vein, the author proposes a novel approach to decentralized load balancing based on statistical time-series analysis. Each site estimates the system-wide average load using information about past loads of individual sites and attempts to equal that average. This estimation process is practical because the soft-real-time systems of interest naturally exhibit loads that are periodic, in a statistical sense akin to seasonality in econometrics. It is shown how this load-characterization technique can be the foundation for a load-balancing system in an architecture employing cut-through routing and an efficient multicast protocol.
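
    To make the statistical idea concrete, the sketch below shows one way a site could exploit periodic (seasonal) load behaviour to predict the current system-wide average from stale reports and decide whether to offload work. The period length, the averaging scheme, and the offload threshold are purely illustrative assumptions; they are not the mechanism proposed in the report.

```python
# Hedged sketch: decentralized load estimation exploiting periodic workloads.
# Each site records delayed load reports, predicts the current system-wide
# average from past cycles at the same phase, and offloads when above it.
from collections import defaultdict

PERIOD = 24  # assumed length of the load cycle, e.g. time steps per "day"

class SiteLoadEstimator:
    def __init__(self):
        self.history = defaultdict(list)   # phase -> observed mean loads in past cycles

    def record(self, t, site_loads):
        """Record (possibly stale) load reports received around time t."""
        self.history[t % PERIOD].append(sum(site_loads) / len(site_loads))

    def predicted_average(self, t):
        """Predict the system-wide average load from past cycles at this phase."""
        past = self.history[t % PERIOD]
        return sum(past) / len(past) if past else 0.0

    def should_offload(self, t, own_load, slack=1.1):
        return own_load > slack * self.predicted_average(t)

est = SiteLoadEstimator()
est.record(0, [0.4, 0.6, 0.5])
est.record(24, [0.5, 0.7, 0.6])
print(est.predicted_average(48), est.should_offload(48, own_load=0.9))
```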

  19. Remote Observing and Automatic FTP on Kitt Peak

    NASA Astrophysics Data System (ADS)

    Seaman, Rob; Bohannan, Bruce

    As part of KPNO's Internet-based observing services, we experimented with the publicly available audio, video and whiteboard MBONE clients (vat, nv, wb and others) in both point-to-point and multicast modes. While bandwidth is always a constraint on the Internet, it is less of a constraint to operations than many might think. These experiments were part of two new Internet-based observing services offered to KPNO observers beginning with the Fall 1995 semester: a remote observing station and an automatic FTP data queue. The remote observing station seeks to duplicate the KPNO IRAF/ICE observing environment on a workstation at the observer's home institution. The automatic FTP queue is intended to support those observing programs that require quick transport of data back to the home institution, for instance, for near-real-time reductions to aid in observing tactics. We also discuss the early operational results of these services.

  20. Efficient Assignment of Multiple E-MBMS Sessions towards LTE

    NASA Astrophysics Data System (ADS)

    Alexiou, Antonios; Bouras, Christos; Kokkinos, Vasileios

    One of the major prerequisites for Long Term Evolution (LTE) networks is the mass provision of multimedia services to mobile users. To this end, Evolved - Multimedia Broadcast/Multicast Service (E-MBMS) is envisaged to play an instrumental role during the LTE standardization process and ensure LTE’s proliferation in the mobile market. E-MBMS targets the economical delivery, in terms of power and spectral efficiency, of multimedia data from a single source entity to multiple destinations. This paper proposes a novel mechanism for efficient radio bearer selection during E-MBMS transmissions in LTE networks. The proposed mechanism is based on the concept of combining transport channels in any cell of the network. Most significantly, the mechanism manages to efficiently deliver multiple E-MBMS sessions. The performance of the proposed mechanism is evaluated and compared with several radio bearer selection mechanisms in order to highlight the enhancements that it provides.

  1. Enabling Optical Network Test Bed for 5G Tests

    NASA Astrophysics Data System (ADS)

    Giuntini, Marco; Grazioso, Paolo; Matera, Francesco; Valenti, Alessandro; Attanasio, Vincenzo; Di Bartolo, Silvia; Nastri, Emanuele

    2017-03-01

    In this work, we show some experimental approaches concerning optical network design dedicated to 5G infrastructures. In particular, we show some implementations of network slicing based on Carrier Ethernet forwarding, which will be very suitable in the context of 5G heterogeneous networks, especially looking at services for vertical enterprises. We also show how to adopt a central unit (orchestrator) to automatically manage such logical paths according to quality-of-service requirements, which can be monitored at the user location. We also illustrate how novel all-optical processes, such as the ones based on all-optical wavelength conversion, can be used for multicasting, enabling development of TV broadcasting based on 4G-5G terminals. These managing and forwarding techniques, operating on optical links, are tested in a wireless environment on Wi-Fi cells and emulating LTE and WiMAX systems by means of the NS-3 code.

  2. Class network routing

    DOEpatents

    Bhanot, Gyan [Princeton, NJ; Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2009-09-08

    Class network routing is implemented in a network such as a computer network comprising a plurality of parallel compute processors at nodes thereof. Class network routing allows a compute processor to broadcast a message to a range (one or more) of other compute processors in the computer network, such as processors in a column or a row. Normally this type of operation requires a separate message to be sent to each processor. With class network routing pursuant to the invention, a single message is sufficient, which generally reduces the total number of messages in the network as well as the latency to do a broadcast. Class network routing is also applied to dense matrix inversion algorithms on distributed memory parallel supercomputers with hardware class function (multicast) capability. This is achieved by exploiting the fact that the communication patterns of dense matrix inversion can be served by hardware class functions, which results in faster execution times.
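
    To illustrate the saving described above, the sketch below contrasts per-destination unicast with a class-routed broadcast along one row of a 2D mesh: with class routing a single injected message is deposited at every node it passes. The mesh size and hop counting are illustrative assumptions, not the patented implementation.

```python
# Hedged sketch: message and link-traversal counts for broadcasting to one row.
def unicast_row_broadcast(row_len):
    """Source at one end sends a separate message to each other node in the row."""
    msgs = row_len - 1
    hops = sum(range(1, row_len))   # each message travels its own distance
    return msgs, hops

def class_routed_row_broadcast(row_len):
    """A single class-routed message sweeps the row; each node stores a copy and forwards it."""
    return 1, row_len - 1

for n in (4, 8, 32):
    print(f"row of {n}: unicast {unicast_row_broadcast(n)}, class-routed {class_routed_row_broadcast(n)}")
```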

  3. Strategic and Operational Plan for Integrating Transcriptomics ...

    EPA Pesticide Factsheets

    Plans for incorporating high throughput transcriptomics into the current high throughput screening activities at NCCT; the details are in the attached slide presentation, given at the OECD meeting on June 23, 2016.

  4. High-Throughput Experimental Approach Capabilities | Materials Science |

    Science.gov Websites

    Web page describing NREL's high-throughput experimental approach capabilities, including combinatorial sputtering systems for chalcogenide (S, Se, Te) and oxysulfide deposition and for nitride and oxynitride deposition (Combi-5), among several other non-combinatorial capabilities.

  5. TMEM88, CCL14 and CLEC3B as prognostic biomarkers for prognosis and palindromia of human hepatocellular carcinoma.

    PubMed

    Zhang, Xin; Wan, Jin-Xiang; Ke, Zun-Ping; Wang, Feng; Chai, Hai-Xia; Liu, Jia-Qiang

    2017-07-01

    Hepatocellular carcinoma is one of the most lethal and prevalent cancers, with increasing incidence worldwide. Elucidating genetic driver genes for prognosis and palindromia of hepatocellular carcinoma helps guide clinical decisions for patients. In this study, the high-throughput RNA sequencing data on platform IlluminaHiSeq of hepatocellular carcinoma were downloaded from The Cancer Genome Atlas, comprising 330 primary hepatocellular carcinoma patient samples. Stable key genes with differential expression were identified, and Kaplan-Meier survival analysis was performed on them using the Cox proportional hazards test in R. Driver genes influencing the prognosis of this disease were determined using clustering analysis. Functional analysis of driver genes was performed by literature search and Gene Set Enrichment Analysis. Finally, the selected driver genes were verified using the external dataset GSE40873. A total of 5781 stable key genes were identified, including 156 genes definitely related to prognoses of hepatocellular carcinoma. Based on the significant key genes, samples were grouped into five clusters which were further integrated into high- and low-risk classes based on clinical features. TMEM88, CCL14, and CLEC3B were selected as driver genes which clustered high-/low-risk patients successfully (generally, p = 0.0005124445). Finally, survival analysis of the high-/low-risk samples from the external database showed a significant difference, with p value 0.0198. In conclusion, the TMEM88, CCL14, and CLEC3B genes were stable and useful in predicting the survival and palindromia time of hepatocellular carcinoma. These genes could function as potential prognostic genes contributing to improved patient outcomes and survival.
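
    For readers unfamiliar with this type of analysis, below is a minimal sketch of the survival workflow described above. The study itself used R; this is an equivalent Python illustration with the lifelines package, and the toy data, column names, and two-group risk labels are assumptions, not values from the study.

```python
# Hedged sketch: Kaplan-Meier curves, log-rank comparison, and a Cox model
# comparing high-risk vs low-risk sample groups (toy data only).
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "time":  [12, 8, 45, 7, 60, 40, 18, 50, 9, 36],   # months to event or censoring
    "event": [1, 1, 1, 1, 0, 1, 0, 1, 1, 0],           # 1 = death/recurrence observed
    "risk":  [1, 0, 0, 1, 0, 1, 1, 0, 1, 0],           # 1 = high-risk cluster
})

# Kaplan-Meier estimate per risk group, plus a log-rank comparison
km = KaplanMeierFitter()
for grp, sub in df.groupby("risk"):
    km.fit(sub["time"], sub["event"], label=f"risk={grp}")
    print(grp, km.median_survival_time_)

high, low = df[df.risk == 1], df[df.risk == 0]
print("log-rank p =", logrank_test(high["time"], low["time"],
                                   high["event"], low["event"]).p_value)

# Cox proportional hazards model with the risk label as covariate
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()
```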

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Shao-Gang; Liao, Ji-Hai; Zhao, Yu-Jun

    The diversified structures of boron (B) clusters, induced by their unique electronic properties, have attracted much interest from experimentalists and theorists. B30–40 clusters were recently reported to be planar fragments of a triangular lattice with proper concentrations of vacancies. Here, we have performed high-throughput screening for possible B clusters through first-principles calculations, including various shapes and distributions of vacancies. As a result, we have determined the structures of Bn clusters with n = 30–51 and found a stable planar cluster of B49 with a double-hexagon vacancy. Considering the 8-electron rule and the electron delocalization, a concise model for the distribution of the 2c–2e and 3c–2e bonds has been proposed to explain the stability of planar B clusters, as well as the reported B cages.

  7. LC-MS/MS strategies for therapeutic antibodies and investigation into the quantitative impact of antidrug-antibodies.

    PubMed

    Ewles, Matthew; Mannu, Ranbir; Fox, Chris; Stanta, Johannes; Evans, Graeme; Goodwin, Lee; Duffy, James; Bell, Len; Estdale, Sian; Firth, David

    2016-12-01

    We aimed to establish novel, high-throughput LC-MS/MS strategies for quantification of monoclonal antibodies in human serum and examine the potential impact of antidrug antibodies. We present two strategies using a thermally stable immobilized trypsin. The first strategy uses whole serum digestion and the second introduces Protein G enrichment to improve the selectivity. The impact of anti-trastuzumab antibodies on the methods was tested. Whole serum digestion has been validated for trastuzumab (LLOQ 0.25 µg/ml). Protein G enrichment has been validated for trastuzumab (LLOQ 0.1 µg/ml), bevacizumab (LLOQ 0.1 µg/ml) and adalimumab (LLOQ 0.25 µg/ml). We have shown the potential for anti-drug antibodies to impact on the quantification and we have subsequently established a strategy to overcome this impact where total quantification is desired.

  8. Towards Personalized Medicine Mediated by in Vitro Virus-Based Interactome Approaches

    PubMed Central

    Ohashi, Hiroyuki; Miyamoto-Sato, Etsuko

    2014-01-01

    We have developed a simple in vitro virus (IVV) selection system based on cell-free co-translation, using a highly stable and efficient mRNA display method. The IVV system is applicable to the high-throughput and comprehensive analysis of proteins and protein–ligand interactions. Huge amounts of genomic sequence data have been generated over the last decade. The accumulated genetic alterations and the interactome networks identified within cells represent a universal feature of a disease, and knowledge of these aspects can help to determine the optimal therapy for the disease. The concept of the “integrome” has been developed as a means of integrating large amounts of data. We have developed an interactome analysis method aimed at providing individually-targeted health care. We also consider future prospects for this system. PMID:24756093

  9. Optically enhanced acoustophoresis

    NASA Astrophysics Data System (ADS)

    McDougall, Craig; O'Mahoney, Paul; McGuinn, Alan; Willoughby, Nicholas A.; Qiu, Yongqiang; Demore, Christine E. M.; MacDonald, Michael P.

    2017-08-01

    Regenerative medicine has the capability to revolutionise many aspects of medical care, but for it to make the step from small-scale autologous treatments to larger-scale allogeneic approaches, robust and scalable label-free cell sorting technologies are needed as part of a cell therapy bioprocessing pipeline. In this proceedings paper we describe several strategies for addressing the requirements for high throughput without labeling: dimensional scaling, rare-species targeting, and sorting from a stable state. These three approaches are demonstrated through a combination of optical and ultrasonic forces. By combining mostly conservative and non-conservative forces from two different modalities it is possible to reduce the influence of flow velocity on sorting efficiency, hence increasing robustness and scalability. One such approach can be termed "optically enhanced acoustophoresis", which combines the ability of acoustics to handle large volumes of analyte with the high specificity of optical sorting.

  10. Genomic and epigenomic heterogeneity in molecular subtypes of gastric cancer.

    PubMed

    Lim, Byungho; Kim, Jong-Hwan; Kim, Mirang; Kim, Seon-Young

    2016-01-21

    Gastric cancer is a complex disease that is affected by multiple genetic and environmental factors. For the precise diagnosis and effective treatment of gastric cancer, the heterogeneity of the disease must be simplified; one way to achieve this is by dividing the disease into subgroups. Toward this effort, recent advances in high-throughput sequencing technology have revealed four molecular subtypes of gastric cancer, which are classified as Epstein-Barr virus-positive, microsatellite instability, genomically stable, and chromosomal instability subtypes. We anticipate that this molecular subtyping will help to extend our knowledge for basic research purposes and will be valuable for clinical use. Here, we review the genomic and epigenomic heterogeneity of the four molecular subtypes of gastric cancer. We also describe a mutational meta-analysis and a reanalysis of DNA methylation that were performed using previously reported gastric cancer datasets.

  11. GFP-complementation assay to detect functional CPP and protein delivery into living cells

    PubMed Central

    Milech, Nadia; Longville, Brooke AC; Cunningham, Paula T; Scobie, Marie N; Bogdawa, Heique M; Winslow, Scott; Anastasas, Mark; Connor, Theresa; Ong, Ferrer; Stone, Shane R; Kerfoot, Maria; Heinrich, Tatjana; Kroeger, Karen M; Tan, Yew-Foon; Hoffmann, Katrin; Thomas, Wayne R; Watt, Paul M; Hopkins, Richard M

    2015-01-01

    Efficient cargo uptake is essential for cell-penetrating peptide (CPP) therapeutics, which deliver widely diverse cargoes by exploiting natural cell processes to penetrate the cell’s membranes. Yet most current CPP activity assays are hampered by limitations in assessing uptake, including confounding effects of conjugated fluorophores or ligands, indirect read-outs requiring secondary processing, and difficulty in discriminating internalization from endosomally trapped cargo. Split-complementation Endosomal Escape (SEE) provides the first direct assay visualizing true cytoplasmic-delivery of proteins at biologically relevant concentrations. The SEE assay has minimal background, is amenable to high-throughput processes, and adaptable to different transient and stable cell lines. This split-GFP-based platform can be useful to study transduction mechanisms, cellular imaging, and characterizing novel CPPs as pharmaceutical delivery agents in the treatment of disease. PMID:26671759

  12. Characterization of the fecal microbiota using high-throughput sequencing reveals a stable microbial community during storage.

    PubMed

    Carroll, Ian M; Ringel-Kulka, Tamar; Siddle, Jennica P; Klaenhammer, Todd R; Ringel, Yehuda

    2012-01-01

    The handling and treatment of biological samples is critical when characterizing the composition of the intestinal microbiota between different ecological niches or diseases. Specifically, exposure of fecal samples to room temperature or long-term storage under deep-freezing conditions may alter the composition of the microbiota. Thus, we stored fecal samples at room temperature and monitored the stability of the microbiota over twenty-four hours. We also investigated the stability of the microbiota in fecal samples during a six-month storage period at -80°C. As the stability of the fecal microbiota may be affected by intestinal disease, we analyzed two healthy controls and two patients with irritable bowel syndrome (IBS). We used high-throughput pyrosequencing of the 16S rRNA gene to characterize the microbiota in fecal samples stored at room temperature or -80°C at six and seven time points, respectively. The composition of the microbial communities in IBS patients and healthy controls was determined and compared using the Quantitative Insights Into Microbial Ecology (QIIME) pipeline. The composition of the microbiota in fecal samples stored for different lengths of time at room temperature or -80°C clustered strongly based on the host each sample originated from. Our data demonstrate that fecal samples exposed to room or deep-freezing temperatures for up to twenty-four hours and six months, respectively, exhibit a microbial composition and diversity that share more identity with the host of origin than with any other sample.
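
    A minimal sketch of the kind of comparison described above, assuming illustrative relative-abundance profiles rather than the study's pyrosequencing data: Bray-Curtis dissimilarities between samples from the same host should be much smaller than between hosts if storage has little effect. This is not the QIIME pipeline itself, just a toy distance calculation.

    ```python
    # Toy check that stored samples cluster by host of origin.
    # Sample names and abundance values are illustrative placeholders.
    import numpy as np
    from scipy.spatial.distance import braycurtis

    profiles = {
        "hostA_t0":  np.array([0.50, 0.30, 0.15, 0.05]),
        "hostA_24h": np.array([0.48, 0.32, 0.14, 0.06]),
        "hostB_t0":  np.array([0.10, 0.20, 0.60, 0.10]),
        "hostB_6mo": np.array([0.12, 0.18, 0.58, 0.12]),
    }

    names = list(profiles)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            d = braycurtis(profiles[a], profiles[b])
            print(f"{a} vs {b}: Bray-Curtis = {d:.3f}")
    # Within-host distances (e.g. hostA_t0 vs hostA_24h) come out much smaller
    # than between-host distances when storage has little effect on composition.
    ```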

  13. DEEM, a versatile platform of FRD measurement for highly multiplexed fibre systems in astronomy

    NASA Astrophysics Data System (ADS)

    Yan, Yunxiang; Yan, Qi; Wang, Gang; Sun, Weimin; Luo, A.-Li; Ma, Zhenyu; Zhang, Qiong; Li, Jian; Wang, Shuqing

    2018-06-01

    We present DEEM, the direct energy encircling method, for characterizing the performance of fibres in most astronomical spectroscopic applications. It is a versatile platform for measuring focal ratio degradation (FRD), throughput, and the point spread function. The principle of DEEM and the relation between encircled energy and spot size were derived and simulated based on the power distribution model (PDM). We analysed the errors of DEEM and identified the major error source to aid understanding and optimization. DEEM was validated by comparing its results with a conventional method, showing good robustness and high accuracy in both stable and complex experimental environments. Applications to the integral field unit (IFU) show that the FRD of the 50 μm core fibre falls short of the requirement that the output focal ratio be slower than 4.5. The throughput homogeneity is acceptable, exceeding 85 per cent. The first-generation prototype IFU helped identify the imperfections and informed the design of the next generation, based on a staggered structure with 35 μm core fibres of N.A. = 0.12, which can improve the FRD performance. The FRD dependence on wavelength and core size shows that higher output focal ratios occur at shorter wavelengths for large-core fibres, in agreement with the prediction of the PDM, although the dependence in the observed data is weaker than predicted.
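
    As a rough illustration of how an encircled-energy radius maps to an output focal ratio in a far-field measurement, the sketch below uses simple geometry rather than the authors' power distribution model; the distance, radius and 95 per cent energy fraction are assumptions for illustration only.

    ```python
    # Simplified geometric sketch: output focal ratio from an encircled-energy radius.
    import math

    def output_focal_ratio(r_ee: float, distance: float) -> float:
        """f/# = 1 / (2 * tan(theta)), theta = half-angle subtended by the
        radius enclosing the chosen energy fraction at the given distance."""
        theta = math.atan(r_ee / distance)
        return 1.0 / (2.0 * math.tan(theta))

    L = 100.0    # mm, fibre output face to detector (assumed)
    r95 = 11.5   # mm, radius enclosing 95% of the output energy (assumed)

    f_out = output_focal_ratio(r95, L)
    print(f"Output focal ratio ~ f/{f_out:.2f}")
    # FRD is then judged by comparing f_out with the input focal ratio; the
    # abstract's IFU requirement is an output focal ratio slower than f/4.5.
    ```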

  14. LipidHome: a database of theoretical lipids optimized for high throughput mass spectrometry lipidomics.

    PubMed

    Foster, Joseph M; Moreno, Pablo; Fabregat, Antonio; Hermjakob, Henning; Steinbeck, Christoph; Apweiler, Rolf; Wakelam, Michael J O; Vizcaíno, Juan Antonio

    2013-01-01

    Protein sequence databases are the pillar upon which modern proteomics is supported, representing a stable reference space of predicted and validated proteins. One example of such resources is UniProt, enriched with both expertly curated and automatic annotations. Although largely taken for granted in proteomics, similarly mature resources are not yet available in some other "omics" fields, lipidomics being one of them. While having a seasoned community of wet-lab scientists, lipidomics lies significantly behind proteomics in the adoption of data standards and other core bioinformatics concepts. This work aims to reduce the gap by developing an equivalent resource to UniProt called 'LipidHome', providing theoretically generated lipid molecules and useful metadata. Using the 'FASTLipid' Java library, a database was populated with theoretical lipids generated from a set of community-agreed chemical bounds. In parallel, a web application was developed to present the information and provide computational access via a web service. Designed specifically to accommodate high-throughput mass spectrometry-based approaches, lipids are organised into a hierarchy that reflects the variety in the structural resolution of lipid identifications. Additionally, cross-references to other lipid-related resources and papers that cite specific lipids were used to annotate lipid records. The web application encompasses a browser for viewing lipid records and a 'tools' section where an MS1 search engine is currently implemented. LipidHome can be accessed at http://www.ebi.ac.uk/apweiler-srv/lipidhome.
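
    The 'tools' MS1 search can be pictured as a tolerance lookup against theoretical masses. The sketch below is a hedged illustration with made-up entries and a 10 ppm tolerance; it is not the LipidHome schema or web service API.

    ```python
    # Toy MS1-style lookup against a table of theoretical lipid masses.
    THEORETICAL = {
        "PC(34:1)": 759.5778,   # monoisotopic masses; illustrative values
        "PE(36:2)": 743.5465,
        "TG(52:2)": 858.7676,
    }

    def ms1_search(observed_mass: float, tol_ppm: float = 10.0):
        """Return candidate lipids whose theoretical mass lies within tol_ppm."""
        hits = []
        for name, mass in THEORETICAL.items():
            ppm = abs(observed_mass - mass) / mass * 1e6
            if ppm <= tol_ppm:
                hits.append((name, mass, ppm))
        return hits

    print(ms1_search(759.5781))   # -> [('PC(34:1)', 759.5778, ~0.4 ppm)]
    ```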

  15. Proxy-based accelerated discovery of Fischer–Tropsch catalysts

    PubMed Central

    Boldrin, Paul; Gallagher, James R.; Combes, Gary B.; Enache, Dan I.; James, David; Ellis, Peter R.; Kelly, Gordon; Claridge, John B.

    2015-01-01

    Development of heterogeneous catalysts for complex reactions such as Fischer–Tropsch synthesis of fuels is hampered by difficult reaction conditions, slow characterisation techniques such as chemisorption and temperature-programmed reduction and the need for long term stability. High-throughput (HT) methods may help, but their use has until now focused on bespoke micro-reactors for direct measurements of activity and selectivity. These are specific to individual reactions and do not provide more fundamental information on the materials. Here we report using simpler HT characterisation techniques (XRD and TGA) along with ageing under Fischer–Tropsch reaction conditions to provide information analogous to metal surface area, degree of reduction and thousands of hours of stability testing time for hundreds of samples per month. The use of this method allowed the identification of a series of highly stable, high surface area catalysts promoted by Mg and Ru. In an advance over traditional multichannel HT reactors, the chemical and structural information we obtain on the materials allows us to identify the structural effects of the promoters and their effects on the modes of deactivation observed. PMID:29560180

  16. Method development in high-performance liquid chromatography for high-throughput profiling and metabonomic studies of biofluid samples.

    PubMed

    Pham-Tuan, Hai; Kaskavelis, Lefteris; Daykin, Clare A; Janssen, Hans-Gerd

    2003-06-15

    "Metabonomics" has in the past decade demonstrated enormous potential in furthering the understanding of, for example, disease processes, toxicological mechanisms, and biomarker discovery. The same principles can also provide a systematic and comprehensive approach to the study of food ingredient impact on consumer health. However, "metabonomic" methodology requires the development of rapid, advanced analytical tools to comprehensively profile biofluid metabolites within consumers. Until now, NMR spectroscopy has been used for this purpose almost exclusively. Chromatographic techniques and in particular HPLC, have not been exploited accordingly. The main drawbacks of chromatography are the long analysis time, instabilities in the sample fingerprint and the rigorous sample preparation required. This contribution addresses these problems in the quest to develop generic methods for high-throughput profiling using HPLC. After a careful optimization process, stable fingerprints of biofluid samples can be obtained using standard HPLC equipment. A method using a short monolithic column and a rapid gradient with a high flow-rate has been developed that allowed rapid and detailed profiling of larger numbers of urine samples. The method can be easily translated into a slow, shallow-gradient high-resolution method for identification of interesting peaks by LC-MS/NMR. A similar approach has been applied for cell culture media samples. Due to the much higher protein content of such samples non-porous polymer-based small particle columns yielded the best results. The study clearly shows that HPLC can be used in metabonomic fingerprinting studies.

  17. High-Throughput Quantitative Proteomic Analysis of Dengue Virus Type 2 Infected A549 Cells

    PubMed Central

    Chiu, Han-Chen; Hannemann, Holger; Heesom, Kate J.; Matthews, David A.; Davidson, Andrew D.

    2014-01-01

    Disease caused by dengue virus is a global health concern with up to 390 million individuals infected annually worldwide. There are no vaccines or antiviral compounds available to either prevent or treat dengue disease which may be fatal. To increase our understanding of the interaction of dengue virus with the host cell, we analyzed changes in the proteome of human A549 cells in response to dengue virus type 2 infection using stable isotope labelling in cell culture (SILAC) in combination with high-throughput mass spectrometry (MS). Mock and infected A549 cells were fractionated into nuclear and cytoplasmic extracts before analysis to identify proteins that redistribute between cellular compartments during infection and reduce the complexity of the analysis. We identified and quantified 3098 and 2115 proteins in the cytoplasmic and nuclear fractions respectively. Proteins that showed a significant alteration in amount during infection were examined using gene enrichment, pathway and network analysis tools. The analyses revealed that dengue virus infection modulated the amounts of proteins involved in the interferon and unfolded protein responses, lipid metabolism and the cell cycle. The SILAC-MS results were validated for a select number of proteins over a time course of infection by Western blotting and immunofluorescence microscopy. Our study demonstrates for the first time the power of SILAC-MS for identifying and quantifying novel changes in cellular protein amounts in response to dengue virus infection. PMID:24671231
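
    A minimal sketch of SILAC-style relative quantification, assuming illustrative heavy/light intensities and a 2-fold change cutoff; the protein names and values are placeholders, not results from the study.

    ```python
    # Toy SILAC quantification: log2 heavy/light ratios and a simple cutoff.
    import math

    intensities = {           # protein -> (light = mock, heavy = infected); assumed values
        "IFIT1": (1.0e6, 7.9e6),
        "ACTB":  (5.0e7, 5.2e7),
        "HSPA5": (2.0e6, 5.6e6),
    }

    for protein, (light, heavy) in intensities.items():
        log2_ratio = math.log2(heavy / light)
        changed = abs(log2_ratio) >= 1.0      # assumed 2-fold change cutoff
        print(f"{protein}: log2(H/L) = {log2_ratio:+.2f}  altered={changed}")
    ```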

  18. Evaluation of Direct Vapour Equilibration for Stable Isotope Analysis of Plant Water.

    NASA Astrophysics Data System (ADS)

    Millar, C. B.; McDonnell, J.; Pratt, D.

    2017-12-01

    The stable isotopes of water (2H and 18O), extracted from plants, have been utilized in a variety of ecohydrological, biogeochemical and climatological studies. The array of methods used to extract water from plants is as varied as the studies themselves. Here we perform a comprehensive inter-method comparison of six plant water extraction techniques: direct vapour equilibration, microwave extraction, two unique versions of cryogenic extraction, centrifugation, and high-pressure mechanical squeezing. We applied these methods to four isotopically unique plant portions (heads, stems, leaves and root crown) of spring wheat (Triticum aestivum L.). The spring wheat was grown under controlled conditions with irrigation inputs of a known isotopic composition. Our results show that the methods of extraction return significantly different plant water isotopic signals. Centrifugation, microwave extraction, direct vapour equilibration, and squeezing returned more enriched results. Both cryogenic systems and squeezing returned more depleted results, depending upon the plant portion extracted. While cryogenic extraction is currently the most widely used method in the literature, our results suggest that the direct vapour equilibration method outperforms it in terms of accuracy, sample throughput and replicability. More research is now needed with other plant species (especially woody plants) to see how far the findings from this study can be extended.

  19. Use of mariner transposases for one-step delivery and integration of DNA in prokaryotes and eukaryotes by transfection

    PubMed Central

    Michlewski, Gracjan; Finnegan, David J.; Elfick, Alistair; Rosser, Susan J.

    2017-01-01

    Abstract Delivery of DNA to cells and its subsequent integration into the host genome is a fundamental task in molecular biology, biotechnology and gene therapy. Here we describe an IP-free one-step method that enables stable genome integration into either prokaryotic or eukaryotic cells. A synthetic mariner transposon is generated by flanking a DNA sequence with short inverted repeats. When purified recombinant Mos1 or Mboumar-9 transposase is co-transfected with transposon-containing plasmid DNA, it penetrates prokaryotic or eukaryotic cells and integrates the target DNA into the genome. In vivo integrations by purified transposase can be achieved by electroporation, chemical transfection or Lipofection of the transposase:DNA mixture, in contrast to other published transposon-based protocols which require electroporation or microinjection. As in other transposome systems, no helper plasmids are required since transposases are not expressed inside the host cells, thus leading to generation of stable cell lines. Since it does not require electroporation or microinjection, this tool has the potential to be applied for automated high-throughput creation of libraries of random integrants for purposes including gene knock-out libraries, screening for optimal integration positions or safe genome locations in different organisms, selection of the highest production of valuable compounds for biotechnology, and sequencing. PMID:28204586

  20. Global analysis of protein folding using massively parallel design, synthesis and testing

    PubMed Central

    Rocklin, Gabriel J.; Chidyausiku, Tamuka M.; Goreshnik, Inna; Ford, Alex; Houliston, Scott; Lemak, Alexander; Carter, Lauren; Ravichandran, Rashmi; Mulligan, Vikram K.; Chevalier, Aaron; Arrowsmith, Cheryl H.; Baker, David

    2017-01-01

    Proteins fold into unique native structures stabilized by thousands of weak interactions that collectively overcome the entropic cost of folding. Though these forces are “encoded” in the thousands of known protein structures, “decoding” them is challenging due to the complexity of natural proteins that have evolved for function, not stability. Here we combine computational protein design, next-generation gene synthesis, and a high-throughput protease susceptibility assay to measure folding and stability for over 15,000 de novo designed miniproteins, 1,000 natural proteins, 10,000 point-mutants, and 30,000 negative control sequences, identifying over 2,500 new stable designed proteins in four basic folds. This scale—three orders of magnitude greater than that of previous studies of design or folding—enabled us to systematically examine how sequence determines folding and stability in uncharted protein space. Iteration between design and experiment increased the design success rate from 6% to 47%, produced stable proteins unlike those found in nature for topologies where design was initially unsuccessful, and revealed subtle contributions to stability as designs became increasingly optimized. Our approach achieves the long-standing goal of a tight feedback cycle between computation and experiment, and promises to transform computational protein design into a data-driven science. PMID:28706065

  1. Note: Real-time monitoring via second-harmonic interferometry of a flow gas cell for laser wakefield acceleration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandi, F., E-mail: fernando.brandi@ino.it; Istituto Italiano di Tecnologia; Giammanco, F.

    2016-08-15

    The use of a gas cell as a target for laser wakefield acceleration (LWFA) offers the possibility of obtaining a stable and manageable laser-plasma interaction process, a mandatory condition for practical applications of this emerging technique, especially in multi-stage accelerators. In order to obtain full control of the gas particle number density in the interaction region, thus allowing for long-term stable and manageable LWFA, real-time monitoring is necessary. In fact, the ideal gas law cannot be used to estimate the particle density inside the flow cell based on the preset backing pressure and the room temperature, because the gas flow depends on several factors like tubing, regulators, and valves in the gas supply system, as well as vacuum chamber volume and vacuum pump speed/throughput. Here, second-harmonic interferometry is applied to measure the particle number density inside a flow gas cell designed for LWFA. The results demonstrate that real-time monitoring is achieved and that, using low backing pressure gas (<1 bar) and different cell orifice diameters (<2 mm), it is possible to finely tune the number density up to the 10^19 cm^-3 range, well suited for LWFA.
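
    To see why the naive estimate is insufficient, the sketch below computes the ideal-gas number density from an assumed backing pressure and room temperature; it gives the right order of magnitude (~10^19 cm^-3) but, as the abstract notes, the actual in-cell density depends on the flow path and must be measured in real time.

    ```python
    # Naive ideal-gas estimate of the gas-cell number density (upper-bound guess only).
    K_B = 1.380649e-23      # Boltzmann constant, J/K

    def ideal_gas_density_cm3(pressure_pa: float, temperature_k: float) -> float:
        """Number density n = P / (k_B * T), returned in cm^-3."""
        n_m3 = pressure_pa / (K_B * temperature_k)
        return n_m3 * 1e-6

    backing_pressure = 1.0e5   # Pa (~1 bar backing pressure, assumed)
    room_temp = 293.0          # K

    print(f"Ideal-gas estimate: {ideal_gas_density_cm3(backing_pressure, room_temp):.2e} cm^-3")
    # ~2.5e19 cm^-3 -- the right order of magnitude for LWFA, but the real in-cell
    # density must be measured (here via second-harmonic interferometry).
    ```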

  2. Identifying protist consumers of photosynthetic picoeukaryotes in the surface ocean using stable isotope probing.

    PubMed

    Orsi, William D; Wilken, Susanne; Del Campo, Javier; Heger, Thierry; James, Erick; Richards, Thomas A; Keeling, Patrick J; Worden, Alexandra Z; Santoro, Alyson E

    2018-02-01

    Photosynthetic picoeukaryotes contribute a significant fraction of primary production in the upper ocean. Micromonas pusilla is an ecologically relevant photosynthetic picoeukaryote, abundantly and widely distributed in marine waters. Grazing by protists may control the abundance of picoeukaryotes such as M. pusilla, but the diversity of the responsible grazers is poorly understood. To identify protists consuming photosynthetic picoeukaryotes in a productive North Pacific Ocean region, we amended seawater with living 15N, 13C-labelled M. pusilla cells in a 24-h replicated bottle experiment. DNA stable isotope probing, combined with high-throughput sequencing of V4 hypervariable regions from 18S rRNA gene amplicons (Tag-SIP), identified 19 operational taxonomic units (OTUs) of microbial eukaryotes that consumed M. pusilla. These OTUs were distantly related to cultured taxa within the dinoflagellates, ciliates, stramenopiles (MAST-1C and MAST-3 clades) and Telonema flagellates, thus far known only from their environmental 18S rRNA gene sequences. Our discovery of eukaryotic prey consumption by MAST cells confirms that their trophic role in marine microbial food webs includes grazing upon picoeukaryotes. Our study provides new experimental evidence directly linking the genetic identity of diverse uncultivated microbial eukaryotes to the consumption of picoeukaryotic phytoplankton in the upper ocean. © 2017 Society for Applied Microbiology and John Wiley & Sons Ltd.

  3. An UPLC-ESI-MS/MS Assay Using 6-Aminoquinolyl-N-Hydroxysuccinimidyl Carbamate Derivatization for Targeted Amino Acid Analysis: Application to Screening of Arabidopsis thaliana Mutants.

    PubMed

    Salazar, Carolina; Armenta, Jenny M; Shulaev, Vladimir

    2012-07-06

    In spite of the large arsenal of methodologies developed for amino acid assessment in complex matrices, their implementation in metabolomics studies involving wide-ranging mutant screening is hampered by their lack of high-throughput, sensitivity, reproducibility, and/or wide dynamic range. In response to the challenge of developing amino acid analysis methods that satisfy the criteria required for metabolomic studies, improved reverse-phase high-performance liquid chromatography-mass spectrometry (RPHPLC-MS) methods have been recently reported for large-scale screening of metabolic phenotypes. However, these methods focus on the direct analysis of underivatized amino acids and, therefore, problems associated with insufficient retention and resolution are observed due to the hydrophilic nature of amino acids. It is well known that derivatization methods render amino acids more amenable for reverse phase chromatographic analysis by introducing highly-hydrophobic tags in their carboxylic acid or amino functional group. Therefore, an analytical platform that combines the 6-aminoquinolyl-N-hydroxysuccinimidyl carbamate (AQC) pre-column derivatization method with ultra performance liquid chromatography-electrospray ionization-tandem mass spectrometry (UPLC-ESI-MS/MS) is presented in this article. For numerous reasons typical amino acid derivatization methods would be inadequate for large scale metabolic projects. However, AQC derivatization is a simple, rapid and reproducible way of obtaining stable amino acid adducts amenable for UPLC-ESI-MS/MS and the applicability of the method for high-throughput metabolomic analysis in Arabidopsis thaliana is demonstrated in this study. Overall, the major advantages offered by this amino acid analysis method include high-throughput, enhanced sensitivity and selectivity; characteristics that showcase its utility for the rapid screening of the preselected plant metabolites without compromising the quality of the metabolic data. The presented method enabled thirty-eight metabolites (proteinogenic amino acids and related compounds) to be analyzed within 10 min with detection limits down to 1.02 × 10^-11 M (i.e., attomole level on column), which represents an improved sensitivity of 1 to 5 orders of magnitude compared to existing methods. Our UPLC-ESI-MS/MS method is one of the seven analytical platforms used by the Arabidopsis Metabolomics Consortium. The amino acid dataset obtained by analysis of Arabidopsis T-DNA mutant stocks with our platform is captured and open to the public in the web portal PlantMetabolomics.org. The analytical platform herein described could find important applications in other studies where the rapid, high-throughput and sensitive assessment of low abundance amino acids in complex biosamples is necessary.
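
    A quick back-of-the-envelope check of the reported detection limit, assuming a hypothetical 1 µL injection volume (not stated here), converts the molar LOD to an on-column amount in the attomole range.

    ```python
    # Convert the molar detection limit to an on-column amount.
    lod_molar = 1.02e-11        # mol/L, from the abstract
    injection_volume_l = 1e-6   # L (1 uL, assumed for illustration)

    moles_on_column = lod_molar * injection_volume_l
    attomoles = moles_on_column * 1e18
    print(f"{moles_on_column:.2e} mol on column  (~{attomoles:.1f} amol)")
    # ~1e-17 mol, i.e. roughly ten attomoles -- consistent with the abstract's
    # attomole-level-on-column description.
    ```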

  4. An UPLC-ESI-MS/MS Assay Using 6-Aminoquinolyl-N-Hydroxysuccinimidyl Carbamate Derivatization for Targeted Amino Acid Analysis: Application to Screening of Arabidopsis thaliana Mutants

    PubMed Central

    Salazar, Carolina; Armenta, Jenny M.; Shulaev, Vladimir

    2012-01-01

    In spite of the large arsenal of methodologies developed for amino acid assessment in complex matrices, their implementation in metabolomics studies involving wide-ranging mutant screening is hampered by their lack of high-throughput, sensitivity, reproducibility, and/or wide dynamic range. In response to the challenge of developing amino acid analysis methods that satisfy the criteria required for metabolomic studies, improved reverse-phase high-performance liquid chromatography-mass spectrometry (RPHPLC-MS) methods have been recently reported for large-scale screening of metabolic phenotypes. However, these methods focus on the direct analysis of underivatized amino acids and, therefore, problems associated with insufficient retention and resolution are observed due to the hydrophilic nature of amino acids. It is well known that derivatization methods render amino acids more amenable for reverse phase chromatographic analysis by introducing highly-hydrophobic tags in their carboxylic acid or amino functional group. Therefore, an analytical platform that combines the 6-aminoquinolyl-N-hydroxysuccinimidyl carbamate (AQC) pre-column derivatization method with ultra performance liquid chromatography-electrospray ionization-tandem mass spectrometry (UPLC-ESI-MS/MS) is presented in this article. For numerous reasons typical amino acid derivatization methods would be inadequate for large scale metabolic projects. However, AQC derivatization is a simple, rapid and reproducible way of obtaining stable amino acid adducts amenable for UPLC-ESI-MS/MS and the applicability of the method for high-throughput metabolomic analysis in Arabidopsis thaliana is demonstrated in this study. Overall, the major advantages offered by this amino acid analysis method include high-throughput, enhanced sensitivity and selectivity; characteristics that showcase its utility for the rapid screening of the preselected plant metabolites without compromising the quality of the metabolic data. The presented method enabled thirty-eight metabolites (proteinogenic amino acids and related compounds) to be analyzed within 10 min with detection limits down to 1.02 × 10−11 M (i.e., atomole level on column), which represents an improved sensitivity of 1 to 5 orders of magnitude compared to existing methods. Our UPLC-ESI-MS/MS method is one of the seven analytical platforms used by the Arabidopsis Metabolomics Consortium. The amino acid dataset obtained by analysis of Arabidopsis T-DNA mutant stocks with our platform is captured and open to the public in the web portal PlantMetabolomics.org. The analytical platform herein described could find important applications in other studies where the rapid, high-throughput and sensitive assessment of low abundance amino acids in complex biosamples is necessary. PMID:24957640

  5. Influence of Hydrogen Bonding on the Kinetic Stability of Vapor Deposited Glasses of Triazine Derivatives

    DOE Data Explorer

    Laventure, Audrey [Departement de chimie, Universite de Montreal, C.P. 6128, Succ. Centre-Ville, Montreal, Quebec H3C 3J7, Canada] (ORCID:0000000208670231); Gujral, Ankit [Department of Chemistry, University of Wisconsin-Madison, Madison, Wisconsin 53706, United States] (ORCID:0000000250652694); Lebel, Olivier [Department of Chemistry and Chemical Engineering, Royal Military College of Canada, Kingston, Ontario K7K 7B4] (ORCID:0000000217376843); Ediger, Mark [Department of Chemistry, University of Wisconsin-Madison, Madison, Wisconsin 53706, United States] (ORCID:0000000347158473); Pellerin, Christian [Departement de chimie, Universite de Montreal, C.P. 6128, Succ. Centre-Ville, Montreal, Quebec H3C 3J7, Canada] (ORCID:0000000161441318)

    2017-02-01

    It has recently been established that physical vapor deposition (PVD) can produce organic glasses with enhanced kinetic stability, high density, and anisotropic packing, with the substrate temperature during deposition (Tsubstrate) as the key control parameter. The influence of hydrogen bonding on the formation of PVD glasses has not been fully explored. Herein, we use a high-throughput preparation method to vapor-deposit three triazine derivatives over a wide range of Tsubstrate, from 0.69 to 1.08 Tg, where Tg is the glass transition temperature. These model systems are structural analogues containing a functional group with different H-bonding capability at the 2-position of a triazine ring: (1) 2-methylamino-4,6-bis(3,5-dimethyl-phenylamino)-1,3,5-triazine (NHMe) (H-bond donor), (2) 2-methoxy-4,6-bis(3,5-dimethyl-phenylamino)-1,3,5-triazine (OMe) (H-bond acceptor), and (3) 2-ethyl-4,6-bis(3,5-dimethyl-phenylamino)-1,3,5-triazine (Et) (none). Using spectroscopic ellipsometry, we find that the Et and OMe compounds form PVD glasses with relatively high kinetic stability, with the transformation time (scaled by the α-relaxation time) on the order of 10^3, comparable to other highly stable glasses formed by PVD. In contrast, PVD glasses of NHMe are only slightly more stable than the corresponding liquid-cooled glass. Using IR spectroscopy, we find that both the supercooled liquid and the PVD glasses of the NHMe derivative show a higher average number of bonded NH per molecule than that in the other two compounds. These results suggest that H-bonds hinder the formation of stable glasses, perhaps by limiting the surface mobility. Interestingly, despite this difference in kinetic stability, all three compounds show properties typically observed in highly stable glasses prepared by PVD, including a higher density and anisotropic molecular packing (as characterized by IR and wide-angle X-ray scattering).

  6. Improvement in electron-beam lithography throughput by exploiting relaxed patterning fidelity requirements with directed self-assembly

    NASA Astrophysics Data System (ADS)

    Yu, Hao Yun; Liu, Chun-Hung; Shen, Yu Tian; Lee, Hsuan-Ping; Tsai, Kuen Yu

    2014-03-01

    Line edge roughness (LER), which influences the electrical performance of circuit components, is a key challenge for electron-beam lithography (EBL) due to the continuous scaling of technology feature sizes. Controlling LER within an acceptable tolerance that satisfies International Technology Roadmap for Semiconductors requirements while achieving high throughput has become a challenging issue. Although lower dosage and more-sensitive resist can be used to improve throughput, they would result in serious LER-related problems because of increasing relative fluctuation in the incident positions of electrons. Directed self-assembly (DSA) is a promising technique to relax LER-related pattern fidelity (PF) requirements because of its self-healing ability, which may benefit throughput. To quantify the potential throughput improvement in EBL from introducing DSA for subsequent healing, rigorous numerical methods are proposed to simultaneously maximize throughput by adjusting the writing parameters of EBL systems subject to relaxed LER-related PF requirements. A fast, continuous model for parameter sweeping and a hybrid model for more accurate patterning prediction are employed for the patterning simulation. The tradeoff between throughput and DSA self-healing ability is investigated. Preliminary results indicate that significant throughput improvements are achievable under certain process conditions.
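
    A back-of-the-envelope sketch of the tradeoff being exploited, under the simple shot-noise assumption that LER scales as 1/sqrt(dose) while write time scales with dose; the reference dose, LER values and DSA healing factor are illustrative assumptions, not values from the paper.

    ```python
    # Dose/LER/throughput tradeoff under a shot-noise scaling assumption.
    def dose_for_ler(ler_target_nm: float, ler_ref_nm: float = 4.0,
                     dose_ref: float = 60.0) -> float:
        """Dose (uC/cm^2) needed to reach ler_target_nm, assuming LER ~ 1/sqrt(dose)."""
        return dose_ref * (ler_ref_nm / ler_target_nm) ** 2

    strict = dose_for_ler(3.0)                 # LER tolerance without DSA healing
    relaxed = dose_for_ler(3.0 * 1.5)          # DSA heals roughness by ~1.5x (assumed)
    print(f"dose without DSA: {strict:.0f}, with DSA: {relaxed:.0f} uC/cm^2")
    print(f"throughput gain ~ {strict / relaxed:.1f}x (write time proportional to dose)")
    ```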

  7. High Throughput PBTK: Open-Source Data and Tools for ...

    EPA Pesticide Factsheets

    Presentation on High Throughput PBTK at the PBK Modelling in Risk Assessment meeting in Ispra, Italy

  8. Circadian and Wake-Dependent Influences on Subjective Sleepiness, Cognitive Throughput, and Reaction Time Performance in Older and Young Adults

    PubMed Central

    Silva, Edward J.; Wang, Wei; Ronda, Joseph M.; Wyatt, James K.; Duffy, Jeanne F.

    2010-01-01

    Study Objectives: To assess circadian and homeostatic influences on subjective sleepiness and cognitive performance in older adults when sleep and waking are scheduled at different times of day; to assess changes in subjective sleepiness and cognitive performance across several weeks of an inpatient study; and to compare these findings with results from younger adults. Design: Three 24-h baseline days consisting of 16 h of wakefulness and an 8-h sleep opportunity followed by 3-beat cycles of a 20-h forced desynchrony (FD) condition; 18 20-h “days,” each consisting of 13.33 h of scheduled wakefulness and 6.67 h of scheduled sleep opportunity. Setting: Intensive Physiological Monitoring Unit of the Brigham and Women's Hospital General Clinical Research Center. Participants: 10 healthy older adults (age 64.00 ± 5.98 y, 5 females) and 10 healthy younger adults (age 24.50 ± 3.54 y, 5 females). Interventions: Wake episodes during FD scheduled to begin 4 h earlier each day allowing for data collection at a full range of circadian phases. Measurements and Results: Subjective sleepiness, cognitive throughput, and psychomotor vigilance assessed every 2 h throughout the study. Core body temperature (CBT) data collected throughout to assess circadian phase. Older subjects were less sleepy and performed significantly better on reaction time (RT) measures than younger subjects. Decrements among younger subjects increased in magnitude further into the experiment, while the performance of older subjects remained stable. Conclusions: Our findings demonstrate that the waking performance and alertness of healthy older subjects are less impacted by the cumulative effects of repeated exposure to adverse circadian phase than that of young adults. This suggests that there are age-related changes in the circadian promotion of alertness, in the wake-dependent decline of alertness, and/or in how these 2 regulatory systems interact in healthy aging. Citation: Silva EJ; Wang W; Ronda JM; Wyatt JK; Duffy JF. Circadian and wake-dependent influences on subjective sleepiness, cognitive throughput, and reaction time performance in older and young adults. SLEEP 2010;33(4):481-490. PMID:20394317
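
    The sketch below illustrates, under an assumed ~24.2-h intrinsic circadian period, why 18 scheduled 20-h days distribute wake episodes across all circadian phases; the period and phase convention are assumptions for illustration, not values measured in this study.

    ```python
    # Why a 20-h forced-desynchrony schedule samples the full circadian cycle.
    TAU = 24.2          # h, assumed intrinsic circadian period
    DAY_LEN = 20.0      # h, scheduled "day" length in the FD protocol

    for day in range(18):                          # 18 scheduled 20-h days
        wake_onset = day * DAY_LEN                 # h since start of FD
        phase = (wake_onset % TAU) / TAU * 360.0   # circadian phase in degrees
        print(f"FD day {day + 1:2d}: wake onset at {wake_onset:5.1f} h, phase {phase:6.1f} deg")

    # Wake onsets step through the full 0-360 deg range, so tests administered
    # every 2 h of wakefulness end up distributed across all circadian phases.
    ```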

  9. Particular geoscientific perspectives on stable isotope analysis in the arboreal system

    NASA Astrophysics Data System (ADS)

    Helle, Gerhard; Balting, Daniel; Pauly, Maren; Slotta, Franziska

    2017-04-01

    In geosciences stable isotopes of carbon, oxygen and hydrogen from the tree ring archive have been used for several decades to trace the course of past environmental and climatological fluctuations. In contrast to ice cores, the tree ring archive is of biological nature (like many other terrestrial archives), but provides the opportunity to establish site networks with very high resolution in space and time. Many of the basic physical mechanisms of isotope shifts are known, but biologically mediated processes may lead to isotope effects that are poorly understood. This implies that the many processes within the arboreal system leading to archived isotope ratios in wood material are governed by a multitude of environmental variables that are not only tied to the isotopic composition of atmospheric source values (precipitation, CO2), but also to seasonally changing metabolic flux rates and pool sizes of photosynthates within the trees. Consequently, the extraction of climate and environmental information is particularly challenging and reconstructions are still of rather qualitative nature. Over the last 10 years or so, monitoring studies have been implemented to investigate stable isotope, climate and environmental signal transfer within the arboreal system to develop transfer or response functions that can translate the relevant isotope values extracted from tree rings into climate or other environmental variables. To what extent have these efforts lead to a better understanding that helps improving the meaningfulness of tree ring isotope signals? For example, do monitoring studies help deciphering the causes for age-related trends in tree ring stable isotope sequences that are published in a growing number of papers. Are existing monitoring studies going into detail enough or is it already too much effort for the outcome? Based on what we know already particularly in mesic habitats, tree ring stable isotopes are much better climate proxies than other tree ring parameters. However, millennial or multi-millennial high quality reconstructions from tree ring isotopes are still rare. This is because of i) methodological constraints related to mass spectrometric analyses and ii) the nature of tree-ring chronologies that are put together by many trees of various individual ages. In view of this: What is the state-of-the-art in high throughput tree ring stable isotope analyses? Is it necessary to advance existing methodologies further to conserve the annual time resolution provided by the tree-ring archive? Other terrestrial archives, like lake sediments and speleothems rarely provide annually resolved stable isotope data. Furthermore, certain tree species from tropical or sub-tropical regions cannot be dated properly by dendrochronology and hence demand specific stable isotope measuring strategies, etc.. Although the points raised here do specifically apply for the tree ring archive, some of them are important for all proxy archives of organic origin.

  10. Prediction-based association control scheme in dense femtocell networks.

    PubMed

    Sung, Nak Woon; Pham, Ngoc-Thai; Huynh, Thong; Hwang, Won-Joo; You, Ilsun; Choo, Kim-Kwang Raymond

    2017-01-01

    The deployment of a large number of femtocell base stations allows us to extend coverage and efficiently utilize resources in a low-cost manner. However, the small cell size of femtocell networks can result in frequent handovers for the mobile user, and consequently throughput degradation. Thus, in this paper, we propose predictive association control schemes to improve the system's effective throughput. Our design focuses on reducing handover frequency without impacting throughput. The proposed schemes determine the handover decisions that contribute most to network throughput and are suitable for distributed implementation. The simulation results show significant gains over existing methods in terms of both handover frequency and network throughput.
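
    One way to picture an association rule of this kind is a score that trades predicted throughput against a handover penalty, so marginal gains do not trigger a handover. The sketch below is a hedged illustration with made-up rates and costs, not the authors' prediction scheme.

    ```python
    # Toy association rule: switch cells only when the predicted gain outweighs a handover cost.
    def choose_cell(current_cell: str, predicted_rate: dict, handover_cost: float) -> str:
        best_cell, best_score = current_cell, predicted_rate[current_cell]
        for cell, rate in predicted_rate.items():
            score = rate - (handover_cost if cell != current_cell else 0.0)
            if score > best_score:
                best_cell, best_score = cell, score
        return best_cell

    rates = {"femto1": 12.0, "femto2": 13.5, "femto3": 9.0}   # Mb/s, predicted (assumed)
    print(choose_cell("femto1", rates, handover_cost=2.0))    # stays on femto1
    print(choose_cell("femto1", rates, handover_cost=0.5))    # switches to femto2
    ```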

  11. Extensive cargo identification reveals distinct biological roles of the 12 importin pathways.

    PubMed

    Kimura, Makoto; Morinaka, Yuriko; Imai, Kenichiro; Kose, Shingo; Horton, Paul; Imamoto, Naoko

    2017-01-24

    Vast numbers of proteins are transported into and out of the nuclei by approximately 20 species of importin-β family nucleocytoplasmic transport receptors. However, the significance of the multiple parallel transport pathways that the receptors constitute is poorly understood because only limited numbers of cargo proteins have been reported. Here, we identified cargo proteins specific to the 12 species of human import receptors with a high-throughput method that employs stable isotope labeling with amino acids in cell culture, an in vitro reconstituted transport system, and quantitative mass spectrometry. The identified cargoes illuminated the manner of cargo allocation to the receptors. The redundancies of the receptors vary widely depending on the cargo protein. Cargoes of the same receptor are functionally related to one another, and the predominant protein groups in the cargo cohorts differ among the receptors. Thus, the receptors are linked to distinct biological processes by the nature of their cargoes.

  12. Designer diatom episomes delivered by bacterial conjugation

    DOE PAGES

    Karas, Bogumil J.; Diner, Rachel E.; Lefebvre, Stephane C.; ...

    2015-04-21

    Eukaryotic microalgae hold great promise for the bioproduction of fuels and higher value chemicals. However, compared with model genetic organisms such as Escherichia coli and Saccharomyces cerevisiae, characterization of the complex biology and biochemistry of algae and strain improvement has been hampered by the inefficient genetic tools. To date, many algal species are transformable only via particle bombardment, and the introduced DNA is integrated randomly into the nuclear genome. Here we describe the first nuclear episomal vector for diatoms and a plasmid delivery method via conjugation from Escherichia coli to the diatoms Phaeodactylum tricornutum and Thalassiosira pseudonana. We identify a yeast-derived sequence that enables stable episome replication in these diatoms even in the absence of antibiotic selection and show that episomes are maintained as closed circles at copy number equivalent to native chromosomes. This highly efficient genetic system facilitates high-throughput functional characterization of algal genes and accelerates molecular phytoplankton research.

  13. Recent trends in SELEX technique and its application to food safety monitoring

    PubMed Central

    Mei, Zhanlong; Yao, Li; Wang, Xin; Zheng, Lei; Liu, Jian; Liu, Guodong; Peng, Chifang; Chen, Wei

    2014-01-01

    The method referred to as “systematic evolution of ligands by exponential enrichment” (SELEX) was introduced in 1990 and has since become an important tool for the identification and screening of aptamers. Such nucleic acids can recognize and bind to their corresponding targets (analytes) with high selectivity and affinity, and aptamers have therefore become attractive alternatives to traditional antibodies, not least because they are much more stable. Meanwhile, they have found numerous applications in different fields, including food quality and safety monitoring. This review first gives an introduction to the selection process and the evolution of SELEX, then covers applications of aptamers in the surveillance of food safety (with subsections on absorptiometric, electrochemical, fluorescent and other methods), and then gives conclusions and perspectives. The SELEX method excels through its in vitro nature, high throughput and ease of operation. This review contains 86 references. PMID:25419005

  14. Recent 5-year Findings and Technological Advances in the Proteomic Study of HIV-associated Disorders.

    PubMed

    Zhang, Lijun; Jia, Xiaofang; Jin, Jun-O; Lu, Hongzhou; Tan, Zhimi

    2017-04-01

    Human immunodeficiency virus-1 (HIV-1) mainly relies on host factors to complete its life cycle. Hence, it is very important to identify HIV-regulated host proteins. Proteomics is an excellent technique for this purpose because of its high throughput and sensitivity. In this review, we summarized current technological advances in proteomics, including general isobaric tags for relative and absolute quantitation (iTRAQ) and stable isotope labeling by amino acids in cell culture (SILAC), as well as subcellular proteomics and investigation of posttranslational modifications. Furthermore, we reviewed the applications of proteomics in the discovery of HIV-related diseases and HIV infection mechanisms. Proteins identified by proteomic studies might offer new avenues for the diagnosis and treatment of HIV infection and the related diseases. Copyright © 2017 The Authors. Production and hosting by Elsevier B.V. All rights reserved.

  15. Statistical characterization of multiple-reaction monitoring mass spectrometry (MRM-MS) assays for quantitative proteomics

    PubMed Central

    2012-01-01

    Multiple reaction monitoring mass spectrometry (MRM-MS) with stable isotope dilution (SID) is increasingly becoming a widely accepted assay for the quantification of proteins and peptides. These assays have shown great promise in relatively high throughput verification of candidate biomarkers. While the use of MRM-MS assays is well established in the small molecule realm, their introduction and use in proteomics is relatively recent. As such, statistical and computational methods for the analysis of MRM-MS data from proteins and peptides are still being developed. Based on our extensive experience with analyzing a wide range of SID-MRM-MS data, we set forth a methodology for analysis that encompasses significant aspects ranging from data quality assessment, assay characterization including calibration curves, limits of detection (LOD) and quantification (LOQ), and measurement of intra- and interlaboratory precision. We draw upon publicly available seminal datasets to illustrate our methods and algorithms. PMID:23176545
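
    Two of the characterization steps described (calibration curve fitting and LOD/LOQ estimation) can be sketched as below, using a common blank-based convention (LOD ≈ 3.3·σ/slope, LOQ ≈ 10·σ/slope); the concentrations, responses and blank replicates are illustrative, not data from the paper.

    ```python
    # Toy calibration curve and blank-based LOD/LOQ estimates for an SID-MRM-MS assay.
    import numpy as np

    conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])        # spiked amounts (assumed units)
    resp = np.array([0.9, 2.1, 4.0, 10.2, 19.8])       # peak-area ratios (assumed)
    blanks = np.array([0.05, 0.08, 0.06, 0.07, 0.04])  # blank responses (assumed)

    slope, intercept = np.polyfit(conc, resp, 1)        # linear calibration fit
    sigma = blanks.std(ddof=1)                          # blank standard deviation

    lod = 3.3 * sigma / slope
    loq = 10.0 * sigma / slope
    print(f"slope={slope:.3f}, intercept={intercept:.3f}")
    print(f"LOD ~ {lod:.3f}, LOQ ~ {loq:.3f} (same units as conc)")
    ```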

  16. Statistical characterization of multiple-reaction monitoring mass spectrometry (MRM-MS) assays for quantitative proteomics.

    PubMed

    Mani, D R; Abbatiello, Susan E; Carr, Steven A

    2012-01-01

    Multiple reaction monitoring mass spectrometry (MRM-MS) with stable isotope dilution (SID) is increasingly becoming a widely accepted assay for the quantification of proteins and peptides. These assays have shown great promise in relatively high throughput verification of candidate biomarkers. While the use of MRM-MS assays is well established in the small molecule realm, their introduction and use in proteomics is relatively recent. As such, statistical and computational methods for the analysis of MRM-MS data from proteins and peptides are still being developed. Based on our extensive experience with analyzing a wide range of SID-MRM-MS data, we set forth a methodology for analysis that encompasses significant aspects ranging from data quality assessment, assay characterization including calibration curves, limits of detection (LOD) and quantification (LOQ), and measurement of intra- and interlaboratory precision. We draw upon publicly available seminal datasets to illustrate our methods and algorithms.

  17. Novel tunable dynamic tweezers using dark-bright soliton collision control in an optical add/drop filter.

    PubMed

    Teeka, Chat; Jalil, Muhammad Arif; Yupapin, Preecha P; Ali, Jalil

    2010-12-01

    We propose a novel system of dynamic optical tweezers generated by a dark soliton in a fiber optic loop. A dark soliton, acting as an optical tweezer, is amplified and tuned within the microring resonator system. The required tunable tweezers with different widths and powers can be controlled. Dark-bright soliton conversion is analyzed using a dark soliton pulse propagating within a microring resonator system. The dynamic behavior of soliton conversion in an add/drop filter is also analyzed. The control dark soliton is input into the system via the add port of the add/drop filter, and the dynamic behavior of the dark-bright soliton conversion is observed. The required stable signal is obtained via the drop and throughput ports of the add/drop filter with suitable parameters. In application, light/atom trapping and transportation can be realized using the proposed system.

  18. Highly specific detection of genetic modification events using an enzyme-linked probe hybridization chip.

    PubMed

    Zhang, M Z; Zhang, X F; Chen, X M; Chen, X; Wu, S; Xu, L L

    2015-08-10

    The enzyme-linked probe hybridization chip utilizes a method based on ligase-hybridizing probe chip technology, with the principle of using thio-primers for protection against enzyme digestion, and using lambda DNA exonuclease to cut multiple PCR products obtained from the sample being tested into single-strand chains for hybridization. The 5'-end amino-labeled probe was fixed onto the aldehyde chip, and hybridized with the single-stranded PCR product, followed by addition of a fluorescent-modified probe that was then enzymatically linked with the adjacent, substrate-bound probe in order to achieve highly specific, parallel, and high-throughput detection. Specificity and sensitivity testing demonstrated that enzyme-linked probe hybridization technology could be applied to the specific detection of eight genetic modification events at the same time, with a sensitivity reaching 0.1% and the achievement of accurate, efficient, and stable results.

  19. Designer diatom episomes delivered by bacterial conjugation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karas, Bogumil J.; Diner, Rachel E.; Lefebvre, Stephane C.

    Eukaryotic microalgae hold great promise for the bioproduction of fuels and higher value chemicals. However, compared with model genetic organisms such as Escherichia coli and Saccharomyces cerevisiae, characterization of the complex biology and biochemistry of algae and strain improvement has been hampered by the inefficient genetic tools. To date, many algal species are transformable only via particle bombardment, and the introduced DNA is integrated randomly into the nuclear genome. Here we describe the first nuclear episomal vector for diatoms and a plasmid delivery method via conjugation from Escherichia coli to the diatoms Phaeodactylum tricornutum and Thalassiosira pseudonana. We identify a yeast-derived sequence that enables stable episome replication in these diatoms even in the absence of antibiotic selection and show that episomes are maintained as closed circles at copy number equivalent to native chromosomes. This highly efficient genetic system facilitates high-throughput functional characterization of algal genes and accelerates molecular phytoplankton research.

  20. Sequential bottom-up assembly of mechanically stabilized synthetic cells by microfluidics

    NASA Astrophysics Data System (ADS)

    Weiss, Marian; Frohnmayer, Johannes Patrick; Benk, Lucia Theresa; Haller, Barbara; Janiesch, Jan-Willi; Heitkamp, Thomas; Börsch, Michael; Lira, Rafael B.; Dimova, Rumiana; Lipowsky, Reinhard; Bodenschatz, Eberhard; Baret, Jean-Christophe; Vidakovic-Koch, Tanja; Sundmacher, Kai; Platzman, Ilia; Spatz, Joachim P.

    2018-01-01

    Compartments for the spatially and temporally controlled assembly of biological processes are essential to cellular life. Synthetic mimics of cellular compartments based on lipid-based protocells lack the mechanical and chemical stability to allow their manipulation into a complex and fully functional synthetic cell. Here, we present a high-throughput microfluidic method to generate stable liposomes of defined size, termed 'droplet-stabilized giant unilamellar vesicles' (dsGUVs). The enhanced stability of dsGUVs enables the sequential loading of these compartments with biomolecules, namely purified transmembrane and cytoskeleton proteins, by microfluidic pico-injection technology. This constitutes an experimental demonstration of a successful bottom-up assembly of a compartment with contents that would not self-assemble to full functionality when simply mixed together. Following assembly, the stabilizing oil phase and droplet shells are removed to release functional self-supporting protocells to an aqueous phase, enabling them to interact with physiologically relevant matrices.

  1. Ab initio structure prediction of silicon and germanium sulfides for lithium-ion battery materials

    NASA Astrophysics Data System (ADS)

    Hsueh, Connie; Mayo, Martin; Morris, Andrew J.

    Conventional experiment-based approaches to materials discovery, which can rely heavily on trial and error, are time-intensive and costly. We discuss approaches to coupling experimental and computational techniques in order to systematize, automate, and accelerate the process of materials discovery, which is of particular relevance to developing new battery materials. We use the ab initio random structure searching (AIRSS) method to conduct a systematic investigation of Si-S and Ge-S binary compounds in order to search for novel materials for lithium-ion battery (LIB) anodes. AIRSS is a high-throughput, density functional theory-based approach to structure prediction which has been successful at predicting the structures of LIB materials containing sulfur, silicon, and germanium. We propose a lithiation mechanism for Li-GeS2 anodes as well as report new, theoretically stable, layered and porous structures in the Si-S and Ge-S systems that pique experimental interest.

  2. Directed evolution of an ultrastable carbonic anhydrase for highly efficient carbon capture from flue gas

    DOE PAGES

    Alvizo, Oscar; Nguyen, Luan J.; Savile, Christopher K.; ...

    2014-11-03

    Carbonic anhydrase (CA) is one of nature’s fastest enzymes and can dramatically improve the economics of carbon capture under demanding environments such as coal-fired power plants. The use of CA to accelerate carbon capture is limited by the enzyme’s sensitivity to the harsh process conditions. Using directed evolution, the properties of a β-class CA from Desulfovibrio vulgaris were dramatically enhanced. Iterative rounds of library design, library generation, and high-throughput screening identified highly stable CA variants that tolerate temperatures of up to 107 °C in the presence of 4.2 M alkaline amine solvent at pH >10.0. This increase in thermostability and alkali tolerance translates to a 4,000,000-fold improvement over the natural enzyme. In conclusion, at pilot scale, the evolved catalyst enhanced the rate of CO2 absorption 25-fold compared with the noncatalyzed reaction.

  3. Estimation of Dynamic Systems for Gene Regulatory Networks from Dependent Time-Course Data.

    PubMed

    Kim, Yoonji; Kim, Jaejik

    2018-06-15

    A dynamic system consisting of ordinary differential equations (ODEs) is a well-known tool for describing the dynamic nature of gene regulatory networks (GRNs), and the dynamic features of GRNs are usually captured through time-course gene expression data. Owing to high-throughput technologies, time-course gene expression data have complex structures such as heteroscedasticity, correlations between genes, and time dependence. Since gene expression experiments typically yield highly noisy data with small sample sizes, these complex structures should be taken into account in ODE models for more accurate prediction of the dynamics. Hence, this study proposes an ODE model that accounts for such data structures, together with a fast and stable estimation method for the ODE parameters based on the generalized profiling approach with data smoothing techniques. The proposed method also provides statistical inference for the ODE estimator, and it is applied to a zebrafish retina cell network.
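
    A simplified two-stage illustration of profiling-type estimation, assuming a toy one-gene model dx/dt = k - d·x: smooth the noisy time course with a spline, then pick parameters that make the model right-hand side match the spline derivative. This is a sketch of the general idea, not the authors' generalized profiling implementation.

    ```python
    # Gradient-matching sketch: spline smoothing followed by ODE parameter fitting.
    import numpy as np
    from scipy.interpolate import UnivariateSpline
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 25)
    k_true, d_true = 2.0, 0.5
    x_true = (k_true / d_true) * (1 - np.exp(-d_true * t))    # analytic solution, x(0)=0
    x_obs = x_true + rng.normal(scale=0.15, size=t.size)      # noisy "expression" data

    spline = UnivariateSpline(t, x_obs, s=0.5)                # data smoothing step
    x_hat, dxdt_hat = spline(t), spline.derivative()(t)

    def residual(theta):
        k, d = theta
        return dxdt_hat - (k - d * x_hat)                     # mismatch with the ODE right-hand side

    fit = least_squares(residual, x0=[1.0, 1.0])
    print("estimated (k, d):", fit.x, " true:", (k_true, d_true))
    ```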

  4. Rapid transporter regulation prevents substrate flow traffic jams in boron transport

    PubMed Central

    Sotta, Naoyuki; Duncan, Susan; Tanaka, Mayuki; Sato, Takafumi

    2017-01-01

    Nutrient uptake by roots often involves substrate-dependent regulated nutrient transporters. For robust uptake, the system requires a regulatory circuit within cells and a collective, coordinated behaviour across the tissue. A paradigm for such systems is boron uptake, known for its directional transport and homeostasis, as boron is essential for plant growth but toxic at high concentrations. In Arabidopsis thaliana, boron uptake occurs via diffusion facilitators (NIPs) and exporters (BORs), each presenting distinct polarity. Intriguingly, although boron soil concentrations are homogenous and stable, both transporters manifest strikingly swift boron-dependent regulation. Through mathematical modelling, we demonstrate that slower regulation of these transporters leads to physiologically detrimental oscillatory behaviour. Cells become periodically exposed to potentially cytotoxic boron levels, and nutrient throughput to the xylem becomes hampered. We conclude that, while maintaining homeostasis, swift transporter regulation within a polarised tissue context is critical to prevent intrinsic traffic-jam like behaviour of nutrient flow. PMID:28870285

  5. Rapid transporter regulation prevents substrate flow traffic jams in boron transport.

    PubMed

    Sotta, Naoyuki; Duncan, Susan; Tanaka, Mayuki; Sato, Takafumi; Marée, Athanasius Fm; Fujiwara, Toru; Grieneisen, Verônica A

    2017-09-05

    Nutrient uptake by roots often involves substrate-dependent regulated nutrient transporters. For robust uptake, the system requires a regulatory circuit within cells and a collective, coordinated behaviour across the tissue. A paradigm for such systems is boron uptake, known for its directional transport and homeostasis, as boron is essential for plant growth but toxic at high concentrations. In Arabidopsis thaliana , boron uptake occurs via diffusion facilitators (NIPs) and exporters (BORs), each presenting distinct polarity. Intriguingly, although boron soil concentrations are homogenous and stable, both transporters manifest strikingly swift boron-dependent regulation. Through mathematical modelling, we demonstrate that slower regulation of these transporters leads to physiologically detrimental oscillatory behaviour. Cells become periodically exposed to potentially cytotoxic boron levels, and nutrient throughput to the xylem becomes hampered. We conclude that, while maintaining homeostasis, swift transporter regulation within a polarised tissue context is critical to prevent intrinsic traffic-jam like behaviour of nutrient flow.
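
    The intuition can be reproduced with a toy two-variable model (not the authors' tissue-level model): a boron-repressed uptake facilitator with first-order export. When the transporter adjusts slowly after a step-up in external boron, cytosolic boron transiently overshoots its steady state; all parameter values and the Hill-type repression are assumptions for illustration.

    ```python
    # Toy model: slow transporter regulation lets cytosolic boron overshoot after a step-up.
    from scipy.integrate import solve_ivp

    K_IN, K_OUT, K, B_EXT = 5.0, 1.0, 1.0, 1.0     # uptake, export, repression scale, new external boron

    def f(b):
        return 1.0 / (1.0 + (b / K) ** 4)          # boron-dependent down-regulation of the transporter

    def rhs(t, y, tau):
        b, u = y
        dbdt = K_IN * u * B_EXT - K_OUT * b        # influx via transporter, first-order export
        dudt = (f(b) - u) / tau                    # transporter relaxes to its regulated level
        return [dbdt, dudt]

    y0 = [0.755, 0.755]   # pre-step steady state for low external boron (assumed), then B_EXT steps to 1.0
    for tau in (0.05, 20.0):                       # fast vs slow regulation
        sol = solve_ivp(rhs, (0.0, 60.0), y0, args=(tau,), max_step=0.05)
        print(f"tau = {tau:5.2f}: peak cytosolic boron = {sol.y[0].max():.2f}")
    # The slow-regulation run shows a much larger transient peak above the shared
    # steady state (~1.3 in these units), illustrating the exposure risk described above.
    ```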

  6. Efficient transformation and artificial miRNA gene silencing in Lemna minor

    PubMed Central

    Cantó-Pastor, Alex; Mollá-Morales, Almudena; Ernst, Evan; Dahl, William; Zhai, Jixian; Yan, Yiheng; Meyers, Blake; Shanklin, John; Martienssen, Robert

    2015-01-01

    Lack of genetic tools in the Lemnaceae (duckweed) has impeded full implementation of this organism as a model for biological research, despite its rapid doubling time, simple architecture and unusual metabolic characteristics. Here we present technologies to facilitate high-throughput genetic studies in duckweed. We developed a fast and efficient method for producing Lemna minor stable transgenic fronds via Agrobacterium-mediated transformation and regeneration from tissue culture. Additionally, we engineered an artificial microRNA (amiRNA) gene silencing system. We identified a Lemna gibba endogenous miR166 precursor and used it as a backbone to produce amiRNAs. As a proof of concept, we induced the silencing of CH42, a magnesium chelatase subunit, using our amiRNA platform. Expression of CH42 in transgenic Lemna minor fronds was significantly reduced, which resulted in reduced chlorophyll pigmentation. The techniques presented here will enable tackling future challenges in the biology and biotechnology of Lemnaceae. PMID:24989135

  7. Rapid and Facile Microwave-Assisted Surface Chemistry for Functionalized Microarray Slides

    PubMed Central

    Lee, Jeong Heon; Hyun, Hoon; Cross, Conor J.; Henary, Maged; Nasr, Khaled A.; Oketokoun, Rafiou; Choi, Hak Soo; Frangioni, John V.

    2011-01-01

    We describe a rapid and facile method for surface functionalization and ligand patterning of glass slides based on microwave-assisted synthesis and a microarraying robot. Our optimized reaction enables surface modification 42 times faster than conventional techniques and includes a carboxylated self-assembled monolayer, polyethylene glycol (PEG) linkers of varying length, and stable amide bonds to small molecule, peptide, or protein ligands to be screened for binding to living cells. We also describe customized slide racks that permit functionalization of 100 slides at a time to produce a cost-efficient, highly reproducible batch process. Ligand spots can be positioned on the glass slides precisely using a microarraying robot, and the spot size can be adjusted for any desired application. Using this system, we demonstrate live cell binding to a variety of ligands and optimize PEG linker length. Taken together, the technology we describe should enable high-throughput screening of disease-specific ligands that bind to living cells. PMID:23467787

  8. Surrogate measures: A proposed alternative in human factors assessment of operational measures of performance

    NASA Technical Reports Server (NTRS)

    Kennedy, Robert S.; Lane, Norman E.; Kuntz, Lois A.

    1987-01-01

    Surrogate measures are proposed as an alternative to direct assessment of operational performance for purposes of screening agents who may have to work under unusual stresses or in exotic environments. Such measures are particularly proposed when the surrogate can be empirically validated against the operational criterion. The focus is on cognitive (or throughput) performance in humans as opposed to sensory (input) or motor (output) measures, but the methods should be applicable to the development of batteries that tap input/output functions. A menu of performance tasks is under development for implementation on a battery-operated portable microcomputer, with 21 tests currently available. The tasks are reliable and become stable in a minimal amount of time; appear sensitive to some agents; comprise constructs related to actual job tasks; and are easily administered in most environments. Implications for human factors engineering studies in environmental stress are discussed.

  9. Directed evolution of an ultrastable carbonic anhydrase for highly efficient carbon capture from flue gas

    PubMed Central

    Alvizo, Oscar; Nguyen, Luan J.; Savile, Christopher K.; Bresson, Jamie A.; Lakhapatri, Satish L.; Solis, Earl O. P.; Fox, Richard J.; Broering, James M.; Benoit, Michael R.; Zimmerman, Sabrina A.; Novick, Scott J.; Liang, Jack; Lalonde, James J.

    2014-01-01

    Carbonic anhydrase (CA) is one of nature’s fastest enzymes and can dramatically improve the economics of carbon capture under demanding environments such as coal-fired power plants. The use of CA to accelerate carbon capture is limited by the enzyme’s sensitivity to the harsh process conditions. Using directed evolution, the properties of a β-class CA from Desulfovibrio vulgaris were dramatically enhanced. Iterative rounds of library design, library generation, and high-throughput screening identified highly stable CA variants that tolerate temperatures of up to 107 °C in the presence of 4.2 M alkaline amine solvent at pH >10.0. This increase in thermostability and alkali tolerance translates to a 4,000,000-fold improvement over the natural enzyme. At pilot scale, the evolved catalyst enhanced the rate of CO2 absorption 25-fold compared with the noncatalyzed reaction. PMID:25368146

  10. Deep Space Network and Lunar Network Communication Coverage of the Moon

    NASA Technical Reports Server (NTRS)

    Lee, Charles H.; Cheung, Kar-Ming

    2006-01-01

    In this article, we describe the communication coverage analysis for the lunar network and the Earth ground stations. The first part of this article focuses on the direct communication coverage of the Moon from the Earth's ground stations. In particular, we assess the coverage performance of the Moon based on the existing Deep Space Network (DSN) antennas and the complementary coverage of other potential stations at Hartebeesthoek, South Africa, and at Santiago, Chile. We also address the coverage sensitivity under different DSN antenna scenarios and their capability to provide single and redundant coverage of the Moon. The second part of this article focuses on a constrained optimization framework used to seek a stable constellation of six relay satellites in two planes that can not only provide continuous communication coverage to any user on the lunar surface but also deliver data throughput in a highly efficient manner.

  11. Application of ToxCast High-Throughput Screening and ...

    EPA Pesticide Factsheets

    Slide presentation at the SETAC annual meeting on High-Throughput Screening and Modeling Approaches to Identify Steroidogenesis Disruptors.

  12. Netest: A Tool to Measure the Maximum Burst Size, Available Bandwidth and Achievable Throughput

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Guojun; Tierney, Brian

    2003-01-31

    Distinguishing available bandwidth and achievable throughput is essential for improving network applications' performance. Achievable throughput is the throughput considering a number of factors such as network protocol, host speed, network path, and TCP buffer space, whereas available bandwidth only considers the network path. Without understanding this difference, trying to improve network applications' performance is like "blind men feeling the elephant" [4]. In this paper, we define and distinguish bandwidth and throughput, and discuss which part of each is achievable and which is available. We also introduce and discuss a new concept, the Maximum Burst Size, which is crucial to network performance and bandwidth sharing. A tool, netest, is introduced to help users determine the available bandwidth and to provide information for achieving better throughput while sharing the available bandwidth fairly, thus reducing misuse of the network.
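
    As a back-of-the-envelope illustration of the distinction drawn above (not part of the netest tool itself), the sketch below bounds achievable throughput by both the available bandwidth of the path and the transport's window-over-RTT limit; the function name and the example numbers are assumptions.

      # Illustrative bound on achievable throughput: it cannot exceed either the
      # available bandwidth of the path or the sender's window/RTT limit.
      # Names and numbers are assumptions for the example, not netest internals.
      def achievable_throughput_bps(available_bw_bps, window_bytes, rtt_s):
          window_limit_bps = 8.0 * window_bytes / rtt_s
          return min(available_bw_bps, window_limit_bps)

      # Example: 1 Gbit/s available, 64 KiB window, 50 ms RTT -> window-limited.
      print(achievable_throughput_bps(1e9, 64 * 1024, 0.050))   # ~1.05e7 bit/s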

  13. Determination of the Isotopic Enrichment of 13C- and 2H-Labeled Tracers of Glucose Using High-Resolution Mass Spectrometry: Application to Dual- and Triple-Tracer Studies.

    PubMed

    Trötzmüller, Martin; Triebl, Alexander; Ajsic, Amra; Hartler, Jürgen; Köfeler, Harald; Regittnig, Werner

    2017-11-21

    Multiple-tracer approaches for investigating glucose metabolism in humans usually involve the administration of stable and radioactive glucose tracers and the subsequent determination of tracer enrichments in sampled blood. When using conventional, low-resolution mass spectrometry (LRMS), the number of spectral interferences rises rapidly with the number of stable tracers employed. Thus, in LRMS, both computational effort and statistical uncertainties associated with the correction for spectral interferences limit the number of stable tracers that can be simultaneously employed (usually two). Here we show that these limitations can be overcome by applying high-resolution mass spectrometry (HRMS). The HRMS method presented is based on the use of an Orbitrap mass spectrometer operated at a mass resolution of 100,000 to allow electrospray-generated ions of the deprotonated glucose molecules to be monitored at their exact masses. The tracer enrichment determination in blood plasma is demonstrated for several triple combinations of 13C- and 2H-labeled glucose tracers (e.g., [1-2H1]-, [6,6-2H2]-, [1,6-13C2]glucose). For each combination it is shown that ions arising from 2H-labeled tracers are completely differentiated from those arising from 13C-labeled tracers, thereby allowing the enrichment of a tracer to be simply calculated from the observed ion intensities using a standard curve with curve parameters unaffected by the presence of other tracers. For each tracer, the HRMS method exhibits low limits of detection and good repeatability in the tested 0.1-15.0% enrichment range. Additionally, due to short sample preparation and analysis times, the method is well-suited for high-throughput determination of multiple glucose tracer enrichments in plasma samples.

  14. High Throughput Screening For Hazard and Risk of Environmental Contaminants

    EPA Science Inventory

    High throughput toxicity testing provides detailed mechanistic information on the concentration response of environmental contaminants in numerous potential toxicity pathways. High throughput screening (HTS) has several key advantages: (1) expense orders of magnitude less than an...

  15. Prediction-based association control scheme in dense femtocell networks

    PubMed Central

    Pham, Ngoc-Thai; Huynh, Thong; Hwang, Won-Joo; You, Ilsun; Choo, Kim-Kwang Raymond

    2017-01-01

    The deployment of a large number of femtocell base stations allows us to extend coverage and efficiently utilize resources in a low-cost manner. However, the small cell size of femtocell networks can result in frequent handovers for the mobile user and, consequently, throughput degradation. Thus, in this paper, we propose predictive association control schemes to improve the system's effective throughput. Our design focuses on reducing handover frequency without impacting throughput. The proposed schemes determine the handover decisions that contribute most to the network throughput and are suitable for distributed implementation. The simulation results show significant gains over existing methods in terms of handover frequency and network throughput. PMID:28328992

  16. Spatial tuning of acoustofluidic pressure nodes by altering net sonic velocity enables high-throughput, efficient cell sorting

    DOE PAGES

    Jung, Seung-Yong; Notton, Timothy; Fong, Erika; ...

    2015-01-07

    Particle sorting using acoustofluidics has enormous potential, but widespread adoption has been limited by complex device designs and low throughput. Here, we report high-throughput separation of particles and T lymphocytes (600 μL min⁻¹) by altering the net sonic velocity to reposition acoustic pressure nodes in a simple two-channel device. Finally, the approach is generalizable to other microfluidic platforms for rapid, high-throughput analysis.

  17. High Performance Computing Modernization Program Kerberos Throughput Test Report

    DTIC Science & Technology

    2017-10-26

    functionality as Kerberos plugins. The pre-release production kit was used in these tests to compare against the current release kit. YubiKey support... Throughput testing was done to determine the benefits of the pre-... both the current release kit and the pre-release production kit for a total of 378 individual tests in order to note any improvements. Based on work...

  18. Adaptive Traffic Route Control in QoS Provisioning for Cognitive Radio Technology with Heterogeneous Wireless Systems

    NASA Astrophysics Data System (ADS)

    Yamamoto, Toshiaki; Ueda, Tetsuro; Obana, Sadao

    As one of the dynamic spectrum access technologies, "cognitive radio technology," which aims to improve spectrum efficiency, has been studied. In cognitive radio networks, each node recognizes radio conditions and, according to them, optimizes its wireless communication routes. Cognitive radio systems integrate heterogeneous wireless systems not only by switching over them but also by aggregating and utilizing them simultaneously. The adaptive control of switchover use and concurrent use of various wireless systems will offer stable and flexible wireless communication. In this paper, we propose an adaptive traffic route control scheme that provides high quality of service (QoS) for cognitive radio technology, and we examine the performance of the proposed scheme through field trials and computer simulations. The results of the field trials show that adaptive route control according to the radio conditions improves the user IP throughput by more than 20% and reduces the one-way delay to less than 1/6 with the concurrent use of IEEE802.16 and IEEE802.11 wireless media. Moreover, the simulation results assuming hundreds of mobile terminals reveal that the number of users receiving the required QoS for voice over IP (VoIP) service and the total network throughput of FTP users both more than double at the same time with the proposed algorithm. The proposed adaptive traffic route control scheme can enhance the performance of cognitive radio technologies by providing appropriate communication routes for various applications to satisfy their required QoS.

  19. High-speed ultrafast laser machining with tertiary beam positioning (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Yang, Chuan; Zhang, Haibin

    2017-03-01

    For an industrial laser application, high process throughput and low average cost of ownership are critical to commercial success. Benefiting from high peak power, nonlinear absorption and small achievable spot size, ultrafast lasers offer advantages of minimal heat-affected zone, great taper and sidewall quality, and small via capability that exceeds the limits of their predecessors in via drilling for electronic packaging. In the past decade, ultrafast lasers have both grown in power and reduced in cost. For example, recently, disk and fiber technology have both shown stable operation in the 50 W to 200 W range, mostly at high repetition rates (beyond 500 kHz) that help avoid detrimental nonlinear effects. However, to effectively and efficiently scale the throughput with the fast-growing power capability of the ultrafast lasers while keeping the beneficial laser-material interactions is very challenging, mainly because of the bottleneck imposed by the inertia-related acceleration limit and servo gain bandwidth when only stages and galvanometers are being used. On the other hand, inertia-free scanning solutions such as acousto-optic and electro-optic deflectors have a small scan field and are therefore not suitable for large-panel processing. Our recent system developments combine stages, galvanometers, and AODs into a coordinated tertiary architecture for high-bandwidth and, at the same time, large-field beam positioning. Synchronized three-level movements allow extremely fast local speed and continuous motion over the whole stage travel range. We present the via drilling results from such an ultrafast system with up to 3 MHz pulse-to-pulse random access, enabling high-quality, low-cost ultrafast machining with emerging high-average-power laser sources.

  20. Bringing the light to high throughput screening: use of optogenetic tools for the development of recombinant cellular assays

    NASA Astrophysics Data System (ADS)

    Agus, Viviana; Di Silvio, Alberto; Rolland, Jean Francois; Mondini, Anna; Tremolada, Sara; Montag, Katharina; Scarabottolo, Lia; Redaelli, Loredana; Lohmer, Stefan

    2015-03-01

    The use of light-activated proteins represents a powerful tool to control biological processes with high spatial and temporal precision. These so-called "optogenetic" technologies have been successfully validated in many recombinant systems, and have been widely applied to the study of cellular mechanisms in intact tissues or behaving animals; to do that, complex, high-intensity, often home-made instrumentation was developed to achieve the optimal power and precision of light stimulation. In our study we sought to determine whether this optical modulation can also be obtained in a miniaturized format, such as a 384-well plate, using the instrumentation normally dedicated to fluorescence analysis in High Throughput Screening (HTS) activities, such as, for example, the FLIPR (Fluorometric Imaging Plate Reader) instrument. We successfully generated optogenetic assays for the study of different ion channel targets: the CaV1.3 calcium channel was modulated by the light-activated Channelrhodopsin-2, the HCN2 cyclic nucleotide-gated (CNG) channel was modulated by the light-activated bPAC adenylyl cyclase, and finally the genetically encoded voltage indicator ArcLight was efficiently used to measure potassium, sodium or chloride channel activity. Our results showed that stable, robust and miniaturized cellular assays can be developed using different optogenetic tools and efficiently modulated by the FLIPR instrument LEDs in a 384-well format. The spatial and temporal resolution delivered by this technology could greatly benefit the early stages of drug discovery, leading to the identification of more physiological and effective drug molecules.

  1. An automated maze task for assessing hippocampus-sensitive memory in mice☆

    PubMed Central

    Pioli, Elsa Y.; Gaskill, Brianna N.; Gilmour, Gary; Tricklebank, Mark D.; Dix, Sophie L.; Bannerman, David; Garner, Joseph P.

    2014-01-01

    Memory deficits associated with hippocampal dysfunction are a key feature of a number of neurodegenerative and psychiatric disorders. The discrete-trial rewarded alternation T-maze task is highly sensitive to hippocampal dysfunction. Normal mice have spontaneously high levels of alternation, whereas hippocampal-lesioned mice are dramatically impaired. However, this is a hand-run task and handling has been shown to impact crucially on behavioural responses, as well as being labour-intensive and therefore unsuitable for high-throughput studies. To overcome this, a fully automated maze was designed. The maze was attached to the mouse's home cage and the subject earned all of its food by running through the maze. In this study the hippocampal dependence of rewarded alternation in the automated maze was assessed. Bilateral hippocampal-lesioned mice were assessed in the standard, hand-run, discrete-trial rewarded alternation paradigm and in the automated paradigm, according to a cross-over design. A similarly robust lesion effect on alternation performance was found in both mazes, confirming the sensitivity of the automated maze to hippocampal lesions. Moreover, the performance of the animals in the automated maze was not affected by their handling history whereas performance in the hand-run maze was affected by prior testing history. By having more stable performance and by decreasing human contact the automated maze may offer opportunities to reduce extraneous experimental variation and therefore increase the reproducibility within and/or between laboratories. Furthermore, automation potentially allows for greater experimental throughput and hence suitability for use in assessment of cognitive function in drug discovery. PMID:24333574

  2. Digital micromirror devices in Raman trace detection of explosives

    NASA Astrophysics Data System (ADS)

    Glimtoft, Martin; Svanqvist, Mattias; Ågren, Matilda; Nordberg, Markus; Östmark, Henric

    2016-05-01

    Imaging Raman spectroscopy based on tunable filters is an established technique for detecting single explosives particles at stand-off distances. However, large light losses are inherent in the design due to sequential imaging at different wavelengths, leading to effective transmission often well below 1%. The use of digital micromirror devices (DMD) and compressive sensing (CS) in imaging Raman explosives trace detection can improve light throughput and add significant flexibility compared to existing systems. DMDs are based on mature microelectronics technology, are compact and scalable, and can be customized for specific tasks, including new functions not available with current technologies. This paper focuses on investigating how a DMD can be used when applying CS-based imaging Raman spectroscopy to stand-off explosives trace detection, and on evaluating the performance in terms of light throughput, image reconstruction ability and potential detection limits. This type of setup also offers the possibility of combining imaging Raman with non-spatially resolved fluorescence suppression techniques, such as Kerr gating. The system used consists of a second-harmonic Nd:YAG laser for sample excitation, collection optics, a DMD, a CMOS camera and a spectrometer with an ICCD camera for signal gating and detection. Initial results for compressive-sensing imaging Raman show a stable reconstruction procedure even at low signal levels and in the presence of an interfering background signal. It is also shown to give increased effective light transmission without sacrificing molecular specificity or area coverage compared to filter-based imaging Raman. At the same time, it adds flexibility so that the setup can be customized for new functionality.

  3. A modified method for determining the focal ratio degradation and length properties of optical fibres in astronomy

    NASA Astrophysics Data System (ADS)

    Yan, Yunxiang; Wang, Gang; Sun, Weimin; Luo, A.-Li; Ma, Zhenyu; Li, Jian; Wang, Shuqing

    2017-04-01

    Focal ratio degradation (FRD) is a major contributor to throughput reduction and light loss in a fibre spectroscopic telescope system. We combine the guided-mode theory in geometric optics and a well-known model, the power distribution model (PDM), to predict and explain the FRD dependence properties. We present a robust method by modifying the energy distribution method with f-intercept to control the input condition. This method provides a way to determine the proper position of the fibre end on the focal plane to improve energy utilization and FRD performance, which raises the relative throughput to up to 95 per cent with a variation of the output focal ratio of less than 2 per cent. This method can also help to optimize the arrangement of the focal-plane plate to enhance the coupling efficiency in a telescope. To investigate length properties, we modified the PDM by introducing a new parameter, the focal distance f, into the original model to make it applicable to a multi-position measurement system. The results show that the modified model is robust and feasible for measuring the key parameter d0 to simulate the transmission characteristics. The output focal ratio in the experiment does not follow the predicted trend but shows an interesting phenomenon: it first increases to a peak, then decreases, and finally remains stable as the fibre length increases beyond 15 m. This provides a reference for choosing the appropriate length of fibre to improve the FRD performance in the design of the fibre system of a telescope.

  4. Phylogenetically Distinct Phylotypes Modulate Nitrification in a Paddy Soil

    PubMed Central

    Zhao, Jun; Wang, Baozhan

    2015-01-01

    Paddy fields represent a unique ecosystem in which regular flooding occurs, allowing for rice cultivation. However, the taxonomic identity of the microbial functional guilds that catalyze soil nitrification remains poorly understood. In this study, we provide molecular evidence for distinctly different phylotypes of nitrifying communities in a neutral paddy soil using high-throughput pyrosequencing and DNA-based stable isotope probing (SIP). Following urea addition, the levels of soil nitrate increased significantly, accompanied by an increase in the abundance of the bacterial and archaeal amoA gene in microcosms subjected to SIP (SIP microcosms) during a 56-day incubation period. High-throughput fingerprints of the total 16S rRNA genes in SIP microcosms indicated that nitrification activity positively correlated with the abundance of Nitrosospira-like ammonia-oxidizing bacteria (AOB), soil group 1.1b-like ammonia-oxidizing archaea (AOA), and Nitrospira-like nitrite-oxidizing bacteria (NOB). Pyrosequencing of 13C-labeled DNA further revealed that 13CO2 was assimilated by these functional groups to a much greater extent than by marine group 1.1a-associated AOA and Nitrobacter-like NOB. Phylogenetic analysis demonstrated that active AOB communities were closely affiliated with Nitrosospira sp. strain L115 and the Nitrosospira multiformis lineage and that the 13C-labeled AOA were related to phylogenetically distinct groups, including the moderately thermophilic “Candidatus Nitrososphaera gargensis,” uncultured fosmid 29i4, and acidophilic “Candidatus Nitrosotalea devanaterra” lineages. These results suggest that a wide variety of microorganisms were involved in soil nitrification, implying physiological diversification of soil nitrifying communities that are constantly exposed to environmental fluctuations in paddy fields. PMID:25724959

  5. A High Throughput Screening Assay System for the Identification of Small Molecule Inhibitors of gsp

    PubMed Central

    Bhattacharyya, Nisan; Hu, Xin; Chen, Catherine Z.; Mathews Griner, Lesley A.; Zheng, Wei; Inglese, James; Austin, Christopher P.; Marugan, Juan J.; Southall, Noel; Neumann, Susanne; Northup, John K.; Ferrer, Marc; Collins, Michael T.

    2014-01-01

    Missense mutations in the α-subunit of the G-protein, Gsα, cause fibrous dysplasia of bone/McCune-Albright syndrome. The biochemical outcome of these mutations is constitutively active Gsα and increased levels of cAMP. The aim of this study was to develop an assay system that would allow the identification of small molecule inhibitors specific for the mutant Gsα protein, the so-called gsp oncogene. Commercially available Chinese hamster ovary cells were stably transfected with either wild-type (WT) or mutant Gsα proteins (R201C and R201H). Stable cell lines with equivalent transfected Gsα protein expression that had relatively lower (WT) or higher (R201C and R201H) cAMP levels were generated. These cell lines were used to develop a fluorescence resonance energy transfer (FRET)-based cAMP assay in a 1536-well microplate format for high throughput screening of small molecule libraries. A small molecule library of 343,768 compounds was screened to identify modulators of gsp activity. A total of 1,356 compounds with inhibitory activity were initially identified and reconfirmed when tested in concentration dose responses. Six hundred eighty-six molecules were selected for further analysis after removing cytotoxic compounds and those that were active in forskolin-induced WT cells. These molecules were grouped by potency, efficacy, and structural similarities to yield 22 clusters with more than 5 structurally similar members each and 144 singleton molecules. Seven chemotypes of the major clusters were identified for further testing and analyses. PMID:24667240

  6. Three-dimensional HepaRG model as an attractive tool for toxicity testing.

    PubMed

    Leite, Sofia B; Wilk-Zasadna, Iwona; Zaldivar, Jose M; Airola, Elodie; Reis-Fernandes, Marcos A; Mennecozzi, Milena; Guguen-Guillouzo, Christiane; Chesne, Christopher; Guillou, Claude; Alves, Paula M; Coecke, Sandra

    2012-11-01

    The culture of HepaRG cells as three-dimensional (3D) structures in a spinner bioreactor may represent added value as a hepatic system for toxicological purposes. The use of a cost-effective, commercially available bioreactor, which is compatible with high-throughput cell analysis, constitutes an attractive approach for routine use in the drug testing industry. In order to assess specific aspects of the biotransformation capacity of the bioreactor-based HepaRG system, the induction of CYP450 enzymes (i.e., CYP1A2, 2B6, 2C9, and 3A4) and the activity of the phase II enzyme, uridine diphosphate glucuronosyltransferase (UGT), were tested. The long-term functionality of the system was demonstrated by 7-week stable profiles of albumin secretion, CYP3A4 induction, and UGT activities. Immunofluorescence-based staining showed formation of tissue-like arrangements including bile canaliculi-like structures and polar distribution of transporters. The use of in silico models to analyze the in vitro data related to hepatotoxic activity of acetaminophen (APAP) demonstrated the advantage of the integration of kinetic and dynamic aspects for a better understanding of the in vitro cell behavior. The bioactivation of APAP and its related cytotoxicity was assessed in a system compatible with high-throughput screening. The approach also proved to be a good strategy to reduce the time necessary to obtain fully differentiated cell cultures. In conclusion, HepaRG cells cultured in 3D spinner bioreactors are an attractive tool for toxicological studies, showing liver-like performance and demonstrating practical applicability for toxicodynamic approaches.

  7. High Throughput Transcriptomics: From screening to pathways

    EPA Science Inventory

    The EPA ToxCast effort has screened thousands of chemicals across hundreds of high-throughput in vitro screening assays. The project is now leveraging high-throughput transcriptomic (HTTr) technologies to substantially expand its coverage of biological pathways. The first HTTr sc...

  8. Advancements in Aptamer Discovery Technologies.

    PubMed

    Gotrik, Michael R; Feagin, Trevor A; Csordas, Andrew T; Nakamoto, Margaret A; Soh, H Tom

    2016-09-20

    Affinity reagents that specifically bind to their target molecules are invaluable tools in nearly every field of modern biomedicine. Nucleic acid-based aptamers offer many advantages in this domain, because they are chemically synthesized, stable, and economical. Despite these compelling features, aptamers are currently not widely used in comparison to antibodies. This is primarily because conventional aptamer-discovery techniques such as SELEX are time-consuming and labor-intensive and often fail to produce aptamers with comparable binding performance to antibodies. This Account describes a body of work from our laboratory in developing advanced methods for consistently producing high-performance aptamers with higher efficiency, fewer resources, and, most importantly, a greater probability of success. We describe our efforts in systematically transforming each major step of the aptamer discovery process: selection, analysis, and characterization. To improve selection, we have developed microfluidic devices (M-SELEX) that enable discovery of high-affinity aptamers after a minimal number of selection rounds by precisely controlling the target concentration and washing stringency. In terms of improving aptamer pool analysis, our group was the first to use high-throughput sequencing (HTS) for the discovery of new aptamers. We showed that tracking the enrichment trajectory of individual aptamer sequences enables the identification of high-performing aptamers without requiring full convergence of the selected aptamer pool. HTS is now widely used for aptamer discovery, and open-source software has become available to facilitate analysis. To improve binding characterization, we used HTS data to design custom aptamer arrays to measure the affinity and specificity of up to ∼10^4 DNA aptamers in parallel as a means to rapidly discover high-quality aptamers. Most recently, our efforts have culminated in the invention of the "particle display" (PD) screening system, which transforms solution-phase aptamers into "aptamer particles" that can be individually screened at high throughput via fluorescence-activated cell sorting. Using PD, we have shown the feasibility of rapidly generating aptamers with exceptional affinities, even for proteins that have previously proven intractable to aptamer discovery. We are confident that these advanced aptamer-discovery methods will accelerate the discovery of aptamer reagents with excellent affinities and specificities, perhaps even exceeding those of the best monoclonal antibodies. Since aptamers are reproducible, renewable, stable, and can be distributed as sequence information, we anticipate that these affinity reagents will become even more valuable tools for both research and clinical applications.

  9. The high throughput biomedicine unit at the institute for molecular medicine Finland: high throughput screening meets precision medicine.

    PubMed

    Pietiainen, Vilja; Saarela, Jani; von Schantz, Carina; Turunen, Laura; Ostling, Paivi; Wennerberg, Krister

    2014-05-01

    The High Throughput Biomedicine (HTB) unit at the Institute for Molecular Medicine Finland FIMM was established in 2010 to serve as a national and international academic screening unit providing access to state-of-the-art instrumentation for chemical and RNAi-based high throughput screening. The initial focus of the unit was multiwell plate-based chemical screening and high-content microarray-based siRNA screening. However, over the first four years of operation, the unit has moved to a more flexible service platform where both chemical and siRNA screening are performed at different scales, primarily in multiwell plate-based assays with a wide range of readout possibilities, with a focus on ultraminiaturization to allow for affordable screening for academic users. In addition to high throughput screening, the equipment of the unit is also used to support miniaturized, multiplexed and high throughput applications for other types of research, such as genomics, sequencing and biobanking operations. Importantly, with the translational research goals at FIMM, an increasing part of the operations at the HTB unit is being focused on high-throughput systems biology platforms for functional profiling of patient cells in personalized and precision medicine projects.

  10. Uplink Downlink Rate Balancing and Throughput Scaling in FDD Massive MIMO Systems

    NASA Astrophysics Data System (ADS)

    Bergel, Itsik; Perets, Yona; Shamai, Shlomo

    2016-05-01

    In this work we extend the concept of uplink-downlink rate balancing to frequency division duplex (FDD) massive MIMO systems. We consider a base station with a large number of antennas serving many single-antenna users. We first show that any unused capacity in the uplink can be traded off for higher throughput in the downlink in a system that uses either dirty paper (DP) coding or linear zero-forcing (ZF) precoding. We then also study the scaling of the system throughput with the number of antennas in the cases of linear beamforming (BF) precoding, ZF precoding, and DP coding. We show that the downlink throughput is proportional to the logarithm of the number of antennas. While this logarithmic scaling is lower than the linear scaling of the rate in the uplink, it can still bring significant throughput gains. For example, we demonstrate through analysis and simulation that increasing the number of antennas from 4 to 128 will increase the throughput by more than a factor of 5. We also show that a logarithmic scaling of the downlink throughput as a function of the number of receive antennas can be achieved even when the number of transmit antennas increases only logarithmically with the number of receive antennas.
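
    As a hedged, textbook-style illustration of the logarithmic scaling mentioned above (not the paper's exact analysis), the per-user downlink rate under ZF precoding with M base-station antennas, K single-antenna users, and transmit SNR ρ is often approximated as below, so for fixed K the rate grows roughly logarithmically in M; the symbols M, K, ρ, and R_k are notation introduced here, not taken from the record.

      % Illustrative ZF downlink rate model (assumptions: equal power split, Rayleigh fading).
      R_k \approx \log_2\!\left(1 + \frac{(M-K+1)\,\rho}{K}\right),
      \qquad
      R_{\mathrm{sum}} \approx K \,\log_2\!\left(1 + \frac{(M-K+1)\,\rho}{K}\right).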

  11. 20180311 - High Throughput Transcriptomics: From screening to pathways (SOT 2018)

    EPA Science Inventory

    The EPA ToxCast effort has screened thousands of chemicals across hundreds of high-throughput in vitro screening assays. The project is now leveraging high-throughput transcriptomic (HTTr) technologies to substantially expand its coverage of biological pathways. The first HTTr sc...

  12. Evaluation of Sequencing Approaches for High-Throughput Transcriptomics - (BOSC)

    EPA Science Inventory

    Whole-genome in vitro transcriptomics has shown the capability to identify mechanisms of action and estimates of potency for chemical-mediated effects in a toxicological framework, but with limited throughput and high cost. The generation of high-throughput global gene expression...

  13. Studies of Several New Modifications of Aggressive Packet Combining to Achieve Higher Throughput, Based on Correction Capability of Disjoint Error Vectors

    NASA Astrophysics Data System (ADS)

    Chakraborty, Swarnendu Kumar; Goswami, Rajat Subhra; Bhunia, Chandan Tilak; Bhunia, Abhinandan

    2016-06-01

    The aggressive packet combining (APC) scheme is well established in the literature, and several modifications have been studied earlier for improving throughput. In this paper, three new modifications of APC are proposed. The performance of the proposed modified APC schemes is studied by simulation and reported here. A hybrid scheme is also proposed to obtain higher throughput, and the disjoint factor of the conventional APC scheme is compared with those of the proposed schemes.
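
    As a minimal sketch of the bit-wise majority-voting idea that underlies aggressive packet combining (the baseline technique only, not the modifications proposed in this record), the snippet below combines three erroneous copies of a packet bit by bit; the packet contents and error positions are assumptions for the example.

      # Minimal sketch of bit-wise majority voting over three received copies of
      # the same packet, the baseline idea behind aggressive packet combining.
      # Packet contents and error positions below are assumptions for the example.
      def majority_combine(copies):
          """copies: equal-length bit strings; use an odd number of copies."""
          n = len(copies)
          return ''.join(
              '1' if sum(c[i] == '1' for c in copies) > n // 2 else '0'
              for i in range(len(copies[0])))

      # The transmitted packet is 01101100; each copy has one bit error in a
      # different position, so the bit-wise majority recovers the original.
      received = ['01111100', '01101000', '00101100']
      print(majority_combine(received))   # -> 01101100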

  14. Design and implementation of priority and time-window based traffic scheduling and routing-spectrum allocation mechanism in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Wang, Honghuan; Xing, Fangyuan; Yin, Hongxi; Zhao, Nan; Lian, Bizhan

    2016-02-01

    With the explosive growth of network services, reasonable traffic scheduling and efficient configuration of network resources are important for increasing network efficiency. In this paper, an adaptive traffic scheduling policy based on priority and time windows is proposed, and the performance of this algorithm is evaluated in terms of the scheduling ratio. Routing and spectrum allocation are achieved by using the Floyd shortest-path algorithm and by establishing a node spectrum resource allocation model based on a greedy algorithm, which we propose. A fairness index is introduced to improve the capability of spectrum configuration. The results show that the designed traffic scheduling strategy can be applied to networks with multicast and broadcast functionalities and gives them real-time and efficient responses. The node spectrum configuration scheme improves frequency resource utilization and enhances the efficiency of the network.
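
    As a rough sketch of the kind of greedy spectrum-allocation step described above (an illustrative first-fit allocator over a precomputed route, not the authors' node spectrum resource allocation model), the snippet below finds the lowest block of contiguous slots that is free on every link of a path; the slot count, link names, and data structures are assumptions.

      # Illustrative greedy (first-fit) spectrum allocation for an elastic optical
      # network: choose the lowest contiguous block of slots that is free on every
      # link along a precomputed shortest path. Data structures are assumptions.
      NUM_SLOTS = 320  # total spectrum slots per link (assumed)

      def first_fit_allocate(path_links, demand_slots, link_usage):
          """path_links: list of link ids; link_usage: dict link -> set of busy slots."""
          for start in range(NUM_SLOTS - demand_slots + 1):
              block = range(start, start + demand_slots)
              if all(s not in link_usage[l] for l in path_links for s in block):
                  for l in path_links:          # commit the allocation
                      link_usage[l].update(block)
                  return start                  # first slot index of the chosen block
          return None                           # blocked: no contiguous block is free

      # Example: a 3-link path needing 4 contiguous slots.
      usage = {"A-B": {0, 1}, "B-C": set(), "C-D": {2}}
      print(first_fit_allocate(["A-B", "B-C", "C-D"], 4, usage))  # -> 3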

  15. The Xpress Transfer Protocol (XTP): A tutorial (expanded version)

    NASA Technical Reports Server (NTRS)

    Sanders, Robert M.; Weaver, Alfred C.

    1990-01-01

    The Xpress Transfer Protocol (XTP) is a reliable, real-time, lightweight transfer layer protocol. Current transport layer protocols such as DoD's Transmission Control Protocol (TCP) and ISO's Transport Protocol (TP) were not designed for the next generation of high speed, interconnected reliable networks such as fiber distributed data interface (FDDI) and the gigabit/second wide area networks. Unlike all previous transport layer protocols, XTP is being designed to be implemented in hardware as a VLSI chip set. By streamlining the protocol, combining the transport and network layers and utilizing the increased speed and parallelization possible with a VLSI implementation, XTP will be able to provide the end-to-end data transmission rates demanded in high speed networks without compromising reliability and functionality. This paper describes the operation of the XTP protocol and, in particular, its error, flow and rate control; inter-networking addressing mechanisms; and multicast support features, as defined in the XTP Protocol Definition Revision 3.4.

  16. OAM-labeled free-space optical flow routing.

    PubMed

    Gao, Shecheng; Lei, Ting; Li, Yangjin; Yuan, Yangsheng; Xie, Zhenwei; Li, Zhaohui; Yuan, Xiaocong

    2016-09-19

    Space-division multiplexing allows unprecedented scaling of bandwidth density for optical communication. Routing spatial channels among transmission ports is critical for a future scalable optical network; however, there is still no characteristic parameter to label the overlapped optical carriers. Here we propose a free-space optical flow routing (OFR) scheme that uses optical orbital angular momentum (OAM) states to label optical flows and simultaneously steer each flow according to its OAM state. With an OAM multiplexer and a reconfigurable OAM demultiplexer, massive numbers of individual optical flows can be routed to the demanded optical ports. In the routing process, the OAM beams act as data carriers while their topological charges act as each carrier's label. Using this scheme, we experimentally demonstrate switching, multicasting and filtering network functions by simultaneously steering 10 input optical flows on demand to 10 output ports. The demonstration of data-carrying OFR with nonreturn-to-zero signals shows that this process enables synchronous processing of massive spatial channels and a flexible optical network.

  17. A group communication approach for mobile computing mobile channel: An ISIS tool for mobile services

    NASA Astrophysics Data System (ADS)

    Cho, Kenjiro; Birman, Kenneth P.

    1994-05-01

    This paper examines group communication as an infrastructure to support mobility of users, and presents a simple scheme to support user mobility by means of switching a control point between replicated servers. We describe the design and implementation of a set of tools, called Mobile Channel, for use with the ISIS system. Mobile Channel is based on a combination of the two replication schemes: the primary-backup approach and the state machine approach. Mobile Channel implements a reliable one-to-many FIFO channel, in which a mobile client sees a single reliable server; servers, acting as a state machine, see multicast messages from clients. Migrations of mobile clients are handled as an intentional primary switch, and hand-offs or server failures are completely masked to mobile clients. To achieve high performance, servers are replicated at a sliding-window level. Our scheme provides a simple abstraction of migration, eliminates complicated hand-off protocols, provides fault-tolerance and is implemented within the existing group communication mechanism.

  18. Reliable communication in the presence of failures

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Joseph, Thomas A.

    1987-01-01

    The design and correctness of a communication facility for a distributed computer system are reported on. The facility provides support for fault-tolerant process groups in the form of a family of reliable multicast protocols that can be used in both local- and wide-area networks. These protocols attain high levels of concurrency, while respecting application-specific delivery ordering constraints, and have varying cost and performance that depend on the degree of ordering desired. In particular, a protocol that enforces causal delivery orderings is introduced and shown to be a valuable alternative to conventional asynchronous communication protocols. The facility also ensures that the processes belonging to a fault-tolerant process group will observe consistent orderings of events affecting the group as a whole, including process failures, recoveries, migration, and dynamic changes to group properties like member rankings. A review of several uses of the protocols in the ISIS system, which supports fault-tolerant resilient objects and bulletin boards, illustrates the significant simplification of higher-level algorithms made possible by our approach.
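
    As a hedged illustration of what a causal delivery ordering means in practice (a generic vector-clock delivery rule, not necessarily the specific ISIS protocol described here), the sketch below buffers a multicast message at a receiver until every message that causally precedes it has been delivered; the process count, clock layout, and message contents are assumptions for the example.

      # Illustrative causal-delivery rule using vector clocks: a message from process
      # j with vector timestamp v_msg is deliverable at a receiver once
      #   v_msg[j] == local[j] + 1        (it is the next message expected from j)
      #   v_msg[k] <= local[k] for k != j (everything it depends on was delivered)
      # This is a generic textbook rule, not the specific ISIS protocol.
      def deliverable(v_msg, sender, local):
          return v_msg[sender] == local[sender] + 1 and all(
              v_msg[k] <= local[k] for k in range(len(local)) if k != sender)

      def try_deliver(buffered, local, delivered):
          """Repeatedly deliver any buffered message whose dependencies are met."""
          progress = True
          while progress:
              progress = False
              for msg in list(buffered):
                  sender, v_msg, payload = msg
                  if deliverable(v_msg, sender, local):
                      delivered.append(payload)
                      local[sender] += 1          # advance the local vector clock
                      buffered.remove(msg)
                      progress = True

      # Example with 3 processes: m2 from P1 depends on m1 from P0 and arrives first.
      local, delivered = [0, 0, 0], []
      buffered = [(1, [1, 1, 0], "m2 (P1, after seeing m1)"),
                  (0, [1, 0, 0], "m1 (P0)")]
      try_deliver(buffered, local, delivered)
      print(delivered)   # -> ['m1 (P0)', 'm2 (P1, after seeing m1)']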

  19. Performance Evaluation of Peer-to-Peer Progressive Download in Broadband Access Networks

    NASA Astrophysics Data System (ADS)

    Shibuya, Megumi; Ogishi, Tomohiko; Yamamoto, Shu

    P2P (Peer-to-Peer) file sharing architectures have scalable and cost-effective features. Hence, applying P2P architectures to media streaming is attractive and expected to be an alternative to current video streaming based on IP multicast or content delivery systems, because the current systems require expensive network infrastructures and large-scale centralized cache storage systems. In this paper, we investigate P2P progressive download for enabling Internet video streaming services. We demonstrated the capability of P2P progressive download both in a laboratory test network and in the Internet. Through the experiments, we clarified the contribution of FTTH links to P2P progressive download in heterogeneous access networks consisting of FTTH and ADSL links. We analyzed the cause of the download performance degradation observed in the experiment and discussed effective methods for providing video streaming services using P2P progressive download in current heterogeneous networks.

  20. TeCo3D: a 3D telecooperation application based on VRML and Java

    NASA Astrophysics Data System (ADS)

    Mauve, Martin

    1998-12-01

    In this paper we present a method for sharing collaboration-unaware VRML content, e.g. 3D models which were not specifically developed for use in a distributed environment. This functionality is an essential requirement for the inclusion of arbitrary VRML content, as generated by standard CAD or animation software, into teleconferencing sessions. We have developed a 3D TeleCooperation (TeCo3D) prototype to demonstrate the feasibility of our approach. The basic services provided by the prototype are the distribution of cooperation-unaware VRML content, the sharing of user interactions, and the joint viewing of the content. In order to achieve maximum portability, the prototype was developed completely in Java. This paper presents general aspects of sharing VRML content as well as the concepts, the architecture and the services of the TeCo3D prototype. Our approach relies on existing VRML browsers as the VRML presentation and execution engines while reliable multicast is used as the means of communication to provide for scalability.

  1. A Performance Evaluation of NACK-Oriented Protocols as the Foundation of Reliable Delay- Tolerant Networking Convergence Layers

    NASA Technical Reports Server (NTRS)

    Iannicca, Dennis; Hylton, Alan; Ishac, Joseph

    2012-01-01

    Delay-Tolerant Networking (DTN) is an active area of research in the space communications community. DTN uses a standard layered approach with the Bundle Protocol operating on top of transport layer protocols known as convergence layers that actually transmit the data between nodes. Several different common transport layer protocols have been implemented as convergence layers in DTN implementations including User Datagram Protocol (UDP), Transmission Control Protocol (TCP), and Licklider Transmission Protocol (LTP). The purpose of this paper is to evaluate several stand-alone implementations of negative-acknowledgment based transport layer protocols to determine how they perform in a variety of different link conditions. The transport protocols chosen for this evaluation include Consultative Committee for Space Data Systems (CCSDS) File Delivery Protocol (CFDP), Licklider Transmission Protocol (LTP), NACK-Oriented Reliable Multicast (NORM), and Saratoga. The test parameters that the protocols were subjected to are characteristic of common communications links ranging from terrestrial to cis-lunar and apply different levels of delay, line rate, and error.

  2. High Speed, Low Cost Fabrication of Gas Diffusion Electrodes for Membrane Electrode Assemblies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeCastro, Emory S.; Tsou, Yu-Min; Liu, Zhenyu

    Fabrication of membrane electrode assemblies (MEAs) depends on creating inks or pastes of catalyst and binder and applying this suspension to either the membrane (catalyst-coated membrane) or the gas diffusion media (gas diffusion electrode), and then respectively laminating either gas diffusion media or gas diffusion electrodes (GDEs) to the membrane. One barrier to cost-effective fabrication for either of these approaches is the development of stable and consistent suspensions. This program investigated the fundamental forces that destabilize the suspensions and developed innovative approaches to create new, highly stable formulations. These more concentrated formulations needed fewer application passes, could be coated over longer and wider substrates, and resulted in significantly lower coating defects. In March of 2012 BASF Fuel Cell released a new high-temperature product based on these advances, whereby our customers received higher-performing, more uniform MEAs resulting in higher stack build yields. Furthermore, these new materials resulted in an "instant" increase in capacity due to higher product yields and material throughput. Although not part of the original scope of this program, these new formulations have also led us to materials that demonstrate equivalent performance with 30% less precious metal in the anode. This program has achieved two key milestones in DOE's Manufacturing R&D program: demonstration of processes for direct coating of electrodes and continuous in-line measurement for component fabrication.

  3. Selection and Application of Sulfide Oxidizing Microorganisms Able to Withstand Thiols in Gas Biodesulfurization Systems.

    PubMed

    Roman, Pawel; Klok, Johannes B M; Sousa, João A B; Broman, Elias; Dopson, Mark; Van Zessen, Erik; Bijmans, Martijn F M; Sorokin, Dimitry Y; Janssen, Albert J H

    2016-12-06

    After the first commercial applications of a new biological process for the removal of hydrogen sulfide (H2S) from low-pressure biogas, the need arose to broaden the operating window to also enable the removal of organosulfur compounds from high-pressure sour gases. In this study we have selected microorganisms from a full-scale biodesulfurization system that are capable of withstanding the presence of thiols. This full-scale unit has been in stable operation for more than 10 years. We investigated the microbial community by using high-throughput sequencing of 16S rRNA gene amplicons, which showed that methanethiol gave a competitive advantage to bacteria belonging to the genera Thioalkalibacter (Halothiobacillaceae family) and Alkalilimnicola (Ectothiorhodospiraceae family). The sulfide-oxidizing potential of the acclimatized population was investigated under elevated thiol loading rates (4.5-9.1 mM d⁻¹), consisting of a mix of methanethiol, ethanethiol, and propanethiol. With this biomass, it was possible to achieve stable bioreactor operation in which 80% of the supplied H2S (61 mM d⁻¹) was biologically oxidized to elemental sulfur. The remainder was chemically produced thiosulfate. Moreover, we found that a conventionally applied method for controlling the oxygen supply to the bioreactor, that is, by maintaining a redox potential set-point value, appeared to be ineffective in the presence of thiols.

  4. Genome-assisted Breeding For Drought Resistance

    PubMed Central

    Khan, Awais; Sovero, Valpuri; Gemenet, Dorcus

    2016-01-01

    Drought stress caused by unpredictable precipitation poses a major threat to food production worldwide, and its impact is only expected to increase with the further onset of climate change. Understanding the effect of drought stress on crops and plants' response is critical for developing improved varieties with stable high yield to fill a growing food gap from an increasing population depending on decreasing land and water resources. When a plant encounters drought stress, it may use multiple response types, depending on environmental conditions, drought stress intensity and duration, and the physiological stage of the plant. Drought stress responses can be divided into four broad types: drought escape, drought avoidance, drought tolerance, and drought recovery, each characterized by interacting mechanisms, which may together be referred to as drought resistance mechanisms. The complex nature of drought resistance requires a multi-pronged approach to breed new varieties with stable and enhanced yield under drought stress conditions. High throughput genomics and phenomics allow marker-assisted selection (MAS) and genomic selection (GS), which offer rapid and targeted improvement of populations and identification of parents for rapid genetic gains and improved drought-resistant varieties. Using these approaches together with appropriate genetic diversity, databases, analytical tools, and well-characterized drought stress scenarios, weather and soil data, new varieties with improved drought resistance corresponding to grower preferences can be introduced into target regions rapidly. PMID:27499682

  5. The detailed 3D multi-loop aggregate/rosette chromatin architecture and functional dynamic organization of the human and mouse genomes.

    PubMed

    Knoch, Tobias A; Wachsmuth, Malte; Kepper, Nick; Lesnussa, Michael; Abuseiris, Anis; Ali Imam, A M; Kolovos, Petros; Zuin, Jessica; Kockx, Christel E M; Brouwer, Rutger W W; van de Werken, Harmen J G; van IJcken, Wilfred F J; Wendt, Kerstin S; Grosveld, Frank G

    2016-01-01

    The dynamic three-dimensional chromatin architecture of genomes and its co-evolutionary connection to its function, the storage, expression, and replication of genetic information, is still one of the central issues in biology. Here, we describe the much-debated 3D architecture of the human and mouse genomes from the nucleosomal to the megabase pair level by a novel approach combining selective high-throughput high-resolution chromosomal interaction capture (T2C), polymer simulations, and scaling analysis of the 3D architecture and the DNA sequence. The genome is compacted into a chromatin quasi-fibre with ~5 ± 1 nucleosomes/11 nm, folded into stable ~30-100 kbp loops forming stable loop aggregates/rosettes connected by similar-sized linkers. Minor but significant variations in the architecture are seen between cell types and functional states. The architecture and the DNA sequence show very similar fine-structured multi-scaling behaviour, confirming their co-evolution and the above. This architecture, its dynamics, and its accessibility balance stability and flexibility, ensuring genome integrity and variation and enabling gene expression/regulation by self-organization of (in)active units already in proximity. Our results agree with the heuristics of the field and allow "architectural sequencing" at a genome mechanics level to understand the inseparable systems genomic properties.

  6. High-throughput measurements of biochemical responses using the plate::vision multimode 96 minilens array reader.

    PubMed

    Huang, Kuo-Sen; Mark, David; Gandenberger, Frank Ulrich

    2006-01-01

    The plate::vision is a high-throughput multimode reader capable of reading absorbance, fluorescence, fluorescence polarization, time-resolved fluorescence, and luminescence. Its performance has been shown to be quite comparable with that of other readers. When the reader is integrated into the plate::explorer, an ultrahigh-throughput screening system with event-driven software and parallel plate-handling devices, it becomes possible to run complicated assays with kinetic readouts in high-density microtiter plate formats for high-throughput screening. For the past 5 years, we have used the plate::vision and the plate::explorer to run screens and have generated more than 30 million data points. Their throughput, performance, and robustness have greatly sped up our drug discovery process.

  7. 40 CFR 65.166 - Periodic reports.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., including a halogen reduction device for a low-throughput transfer rack, is used to control emissions from storage vessels or low-throughput transfer racks, the periodic report shall identify and state the cause...-throughput transfer racks, periodic reports shall include the following information: (1) Periodic reports...

  8. High Throughput Determination of Critical Human Dosing Parameters (SOT)

    EPA Science Inventory

    High throughput toxicokinetics (HTTK) is a rapid approach that uses in vitro data to estimate TK for hundreds of environmental chemicals. Reverse dosimetry (i.e., reverse toxicokinetics or RTK) based on HTTK data converts high throughput in vitro toxicity screening (HTS) data int...

  9. High Throughput Determinations of Critical Dosing Parameters (IVIVE workshop)

    EPA Science Inventory

    High throughput toxicokinetics (HTTK) is an approach that allows for rapid estimations of TK for hundreds of environmental chemicals. HTTK-based reverse dosimetry (i.e, reverse toxicokinetics or RTK) is used in order to convert high throughput in vitro toxicity screening (HTS) da...

  10. Optimization of high-throughput nanomaterial developmental toxicity testing in zebrafish embryos

    EPA Science Inventory

    Nanomaterial (NM) developmental toxicities are largely unknown. With an extensive variety of NMs available, high-throughput screening methods may be of value for initial characterization of potential hazard. We optimized a zebrafish embryo test as an in vivo high-throughput assay...

  11. Adaptive Packet Combining Scheme in Three State Channel Model

    NASA Astrophysics Data System (ADS)

    Saring, Yang; Bulo, Yaka; Bhunia, Chandan Tilak

    2018-01-01

    Two popular packet-combining-based error correction schemes are the Packet Combining (PC) scheme and the Aggressive Packet Combining (APC) scheme. Each has its own merits and demerits: PC offers better throughput than APC, but suffers from a higher packet error rate. The wireless channel state changes all the time, and because of this random, time-varying nature of the channel, applying the SR ARQ, PC, or APC scheme individually cannot give the desired levels of throughput. Better throughput can be achieved if the appropriate transmission scheme is used based on the condition of the channel. Based on this approach, an adaptive packet combining scheme has been proposed: it adapts to the channel condition and carries out transmission using the PC, APC, or SR ARQ scheme as appropriate. Experimentally, it was observed that the error correction capability and throughput of the proposed scheme were significantly better than those of the SR ARQ, PC, and APC schemes.
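
    A minimal sketch of this kind of channel-adaptive scheme selection, assuming a three-state channel estimate; the thresholds and the mapping of channel states to schemes below are illustrative assumptions, not the authors' exact algorithm:

        # Illustrative three-state selection: a good channel needs no combining
        # (plain SR ARQ), a moderate channel uses Packet Combining (PC), and a
        # bad channel uses Aggressive Packet Combining (APC), trading raw
        # throughput for error-correction capability.
        def choose_scheme(packet_error_rate: float) -> str:
            if packet_error_rate < 0.01:      # "good" state (illustrative cut-off)
                return "SR-ARQ"
            elif packet_error_rate < 0.10:    # "moderate" state
                return "PC"
            else:                             # "bad" state
                return "APC"

        # Re-estimate the channel before each transmission round.
        for per in (0.002, 0.05, 0.30):
            print(per, "->", choose_scheme(per))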

  12. High-throughput cryopreservation of spermatozoa of blue catfish (Ictalurus furcatus): Establishment of an approach for commercial-scale processing.

    PubMed

    Hu, E; Yang, Huiping; Tiersch, Terrence R

    2011-02-01

    Hybrid catfish created by crossing of female channel catfish (Ictalurus punctatus) and male blue catfish (Ictalurus furcatus) are being used increasingly in foodfish aquaculture because of their fast growth and efficient food conversion. However, the availability of blue catfish males is limited, and their peak spawning is at a different time than that of the channel catfish. As such, cryopreservation of sperm of blue catfish could improve production of hybrid catfish, and has been studied in the laboratory and tested for feasibility in a commercial dairy bull cryopreservation facility. However, an approach for commercially relevant production of cryopreserved blue catfish sperm is still needed. The goal of this study was to develop practical approaches for commercial-scale sperm cryopreservation of blue catfish by use of an automated high-throughput system (MAPI, CryoBioSystem Co.). The objectives were to: (1) refine cooling rate and cryoprotectant concentration, and evaluate their interactions; (2) evaluate the effect of sperm concentration on cryopreservation; (3) refine cryoprotectant concentration based on the highest effective sperm concentration; (4) compare the effect of thawing samples at 20 or 40°C; (5) evaluate the fertility of thawed sperm at a research scale by fertilizing with channel catfish eggs; (6) test the post-thaw motility and fertility of sperm from individual males in a commercial setting; and (7) test for correlation of cryopreservation results with biological indices used for male evaluation. The optimal cooling rate was 5°C/min (Micro Digitcool, IMV) for high-throughput cryopreservation using CBS high-biosecurity 0.5-ml straws with 10% methanol, and a concentration of 1 × 10^9 sperm/ml. There was no difference in post-thaw motility when samples were thawed at 20°C for 40 s or 40°C for 20 s. After fertilization, the percentage of neurulation (Stage V embryos) was 80 ± 21%, and the percentage of embryonic mobility (Stage VI embryos) was 51 ± 22%. There was a significant difference among the neurulation values produced by thawed blue catfish sperm, fresh blue catfish sperm (P=0.010) and channel catfish sperm (P=0.023), but not for Stage VI embryos (P≥0.585). Cryopreserved sperm from ten males did not show significant variation in post-thaw motility or fertility at the neurulation stage. This study demonstrates that the protocol established for high-throughput cryopreservation of blue catfish sperm can provide commercially relevant quantities and quality of sperm with stable fertility for hybrid catfish production and provides a model for establishment of commercial-scale approaches for other aquatic species. Copyright © 2010 Elsevier Inc. All rights reserved.

  13. High-throughput screening (HTS) and modeling of the retinoid ...

    EPA Pesticide Factsheets

    Presentation at the Retinoids Review 2nd workshop in Brussels, Belgium on the application of high throughput screening and model to the retinoid system.

  14. Evaluating High Throughput Toxicokinetics and Toxicodynamics for IVIVE (WC10)

    EPA Science Inventory

    High-throughput screening (HTS) generates in vitro data for characterizing potential chemical hazard. TK models are needed to allow in vitro to in vivo extrapolation (IVIVE) to real world situations. The U.S. EPA has created a public tool (R package “httk” for high throughput tox...

  15. High Throughput Experimental Materials Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zakutayev, Andriy; Perkins, John; Schwarting, Marcus

    The mission of the High Throughput Experimental Materials Database (HTEM DB) is to enable discovery of new materials with useful properties by releasing large amounts of high-quality experimental data to the public. The HTEM DB contains information about materials obtained from high-throughput experiments at the National Renewable Energy Laboratory (NREL).

  16. High-throughput RAD-SNP genotyping for characterization of sugar beet genotypes

    USDA-ARS?s Scientific Manuscript database

    High-throughput SNP genotyping provides a rapid way of developing resourceful set of markers for delineating the genetic architecture and for effective species discrimination. In the presented research, we demonstrate a set of 192 SNPs for effective genotyping in sugar beet using high-throughput mar...

  17. Alginate Immobilization of Metabolic Enzymes (AIME) for High-Throughput Screening Assays (SOT)

    EPA Science Inventory

    Alginate Immobilization of Metabolic Enzymes (AIME) for High-Throughput Screening Assays. DE DeGroot, RS Thomas, and SO Simmons, National Center for Computational Toxicology, US EPA, Research Triangle Park, NC, USA. The EPA's ToxCast program utilizes a wide variety of high-throughput s...

  18. A quantitative literature-curated gold standard for kinase-substrate pairs

    PubMed Central

    2011-01-01

    We describe the Yeast Kinase Interaction Database (KID, http://www.moseslab.csb.utoronto.ca/KID/), which contains high- and low-throughput data relevant to phosphorylation events. KID includes 6,225 low-throughput and 21,990 high-throughput interactions, from greater than 35,000 experiments. By quantitatively integrating these data, we identified 517 high-confidence kinase-substrate pairs that we consider a gold standard. We show that this gold standard can be used to assess published high-throughput datasets, suggesting that it will enable similar rigorous assessments in the future. PMID:21492431

  19. High-Throughput Industrial Coatings Research at The Dow Chemical Company.

    PubMed

    Kuo, Tzu-Chi; Malvadkar, Niranjan A; Drumright, Ray; Cesaretti, Richard; Bishop, Matthew T

    2016-09-12

    At The Dow Chemical Company, high-throughput research is an active area for developing new industrial coatings products. Using the principles of automation (i.e., using robotic instruments), parallel processing (i.e., preparing, processing, and evaluating samples in parallel), and miniaturization (i.e., reducing sample size), high-throughput tools for synthesizing, formulating, and applying coating compositions have been developed at Dow. In addition, high-throughput workflows for measuring various coating properties, such as cure speed, hardness development, scratch resistance, impact toughness, resin compatibility, pot-life, and surface defects, among others, have also been developed in-house. These workflows correlate well with the traditional coatings tests, but they do not necessarily mimic those tests. The use of such high-throughput workflows in combination with smart experimental designs allows accelerated discovery and commercialization.

  20. Outlook for Development of High-throughput Cryopreservation for Small-bodied Biomedical Model Fishes★

    PubMed Central

    Tiersch, Terrence R.; Yang, Huiping; Hu, E.

    2011-01-01

    With the development of genomic research technologies, comparative genome studies among vertebrate species are becoming commonplace for human biomedical research. Fish offer unlimited versatility for biomedical research. Extensive studies are done using these fish models, yielding tens of thousands of specific strains and lines, and the number is increasing every day. Thus, high-throughput sperm cryopreservation is urgently needed to preserve these genetic resources. Although high-throughput processing has been widely applied to sperm cryopreservation in livestock for decades, application in biomedical model fishes is still in the concept-development stage because of the limited sample volumes and the biological characteristics of fish sperm. High-throughput processing in livestock was developed based on advances made in the laboratory and was scaled up for increased processing speed, capability for mass production, and uniformity and quality assurance. Cryopreserved germplasm combined with high-throughput processing constitutes an independent industry encompassing animal breeding, preservation of genetic diversity, and medical research. Currently, there is no specifically engineered system available for high-throughput processing of cryopreserved germplasm for aquatic species. This review discusses the concepts and needs for high-throughput technology for model fishes, proposes approaches for technical development, and gives an overview of future directions of this approach. PMID:21440666

  1. Enhancing high throughput toxicology - development of putative adverse outcome pathways linking US EPA ToxCast screening targets to relevant apical hazards.

    EPA Science Inventory

    High throughput toxicology programs, such as ToxCast and Tox21, have provided biological effects data for thousands of chemicals at multiple concentrations. Compared to traditional, whole-organism approaches, high throughput assays are rapid and cost-effective, yet they generall...

  2. 40 CFR 65.145 - Nonflare control devices used to control emissions from storage vessels or low-throughput...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... control emissions from storage vessels or low-throughput transfer racks. 65.145 Section 65.145 Protection... racks. (a) Nonflare control device equipment and operating requirements. The owner or operator shall...-throughput transfer rack, so that the monitored parameters defined as required in paragraph (c) of this...

  3. Evaluation of High-Throughput Chemical Exposure Models via Analysis of Matched Environmental and Biological Media Measurements

    EPA Science Inventory

    The U.S. EPA, under its ExpoCast program, is developing high-throughput near-field modeling methods to estimate human chemical exposure and to provide real-world context to high-throughput screening (HTS) hazard data. These novel modeling methods include reverse methods to infer ...

  4. The development of a general purpose ARM-based processing unit for the ATLAS TileCal sROD

    NASA Astrophysics Data System (ADS)

    Cox, M. A.; Reed, R.; Mellado, B.

    2015-01-01

    After Phase-II upgrades in 2022, the data output from the LHC ATLAS Tile Calorimeter will increase significantly. ARM processors are common in mobile devices due to their low cost, low energy consumption and high performance. It is proposed that a cost-effective, high data throughput Processing Unit (PU) can be developed by using several consumer ARM processors in a cluster configuration to allow aggregated processing performance and data throughput while maintaining minimal software design difficulty for the end-user. This PU could be used for a variety of high-level functions on the high-throughput raw data such as spectral analysis and histograms to detect possible issues in the detector at a low level. High-throughput I/O interfaces are not typical in consumer ARM System on Chips but high data throughput capabilities are feasible via the novel use of PCI-Express as the I/O interface to the ARM processors. An overview of the PU is given and the results for performance and throughput testing of four different ARM Cortex System on Chips are presented.

  5. Quantitative description on structure-property relationships of Li-ion battery materials for high-throughput computations

    NASA Astrophysics Data System (ADS)

    Wang, Youwei; Zhang, Wenqing; Chen, Lidong; Shi, Siqi; Liu, Jianjun

    2017-12-01

    Li-ion batteries are a key technology for addressing the global challenges of clean renewable energy and environmental pollution. Their contemporary applications, for portable electronic devices, electric vehicles, and large-scale power grids, stimulate the development of high-performance battery materials with high energy density, high power, good safety, and long lifetime. High-throughput calculations provide a practical strategy to discover new battery materials and to optimize the performance of currently known materials. Most cathode materials screened by previous high-throughput calculations cannot meet the requirements of practical applications because only the capacity, voltage, and volume change of the bulk were considered. It is important to include more structure-property relationships, such as point defects, surfaces and interfaces, doping and metal mixtures, and nanosize effects, in high-throughput calculations. In this review, we established quantitative descriptions of structure-property relationships in Li-ion battery materials in terms of intrinsic bulk parameters, which can be applied in future high-throughput calculations to screen Li-ion battery materials. Based on these parameterized structure-property relationships, a possible high-throughput computational screening flow path is proposed to obtain high-performance battery materials.

  6. Quantitative description on structure-property relationships of Li-ion battery materials for high-throughput computations.

    PubMed

    Wang, Youwei; Zhang, Wenqing; Chen, Lidong; Shi, Siqi; Liu, Jianjun

    2017-01-01

    Li-ion batteries are a key technology for addressing the global challenges of clean renewable energy and environmental pollution. Their contemporary applications, for portable electronic devices, electric vehicles, and large-scale power grids, stimulate the development of high-performance battery materials with high energy density, high power, good safety, and long lifetime. High-throughput calculations provide a practical strategy to discover new battery materials and to optimize the performance of currently known materials. Most cathode materials screened by previous high-throughput calculations cannot meet the requirements of practical applications because only the capacity, voltage, and volume change of the bulk were considered. It is important to include more structure-property relationships, such as point defects, surfaces and interfaces, doping and metal mixtures, and nanosize effects, in high-throughput calculations. In this review, we established quantitative descriptions of structure-property relationships in Li-ion battery materials in terms of intrinsic bulk parameters, which can be applied in future high-throughput calculations to screen Li-ion battery materials. Based on these parameterized structure-property relationships, a possible high-throughput computational screening flow path is proposed to obtain high-performance battery materials.
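
    A minimal sketch of what such a computational screening flow path might look like in code, assuming each candidate material comes with pre-computed descriptors; the field names and cut-off values below are illustrative placeholders, not the criteria proposed in the review:

        # Illustrative multi-criteria filter: keep candidates whose bulk
        # properties (capacity, voltage, volume change) and an extra
        # structure-property descriptor (defect formation energy) all pass.
        CRITERIA = {
            "capacity_mAh_g":      lambda v: v >= 150,
            "voltage_V":           lambda v: 2.5 <= v <= 4.5,
            "volume_change_pct":   lambda v: abs(v) <= 10,
            "defect_formation_eV": lambda v: v >= 1.0,
        }

        def screen(candidates):
            """Return the names of candidates that satisfy every criterion."""
            return [c["name"] for c in candidates
                    if all(test(c[key]) for key, test in CRITERIA.items())]

        candidates = [
            {"name": "A", "capacity_mAh_g": 170, "voltage_V": 3.9,
             "volume_change_pct": 4, "defect_formation_eV": 1.4},
            {"name": "B", "capacity_mAh_g": 120, "voltage_V": 3.4,
             "volume_change_pct": 2, "defect_formation_eV": 2.0},
        ]
        print(screen(candidates))   # only "A" passes these illustrative cut-offs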

  7. [Current applications of high-throughput DNA sequencing technology in antibody drug research].

    PubMed

    Yu, Xin; Liu, Qi-Gang; Wang, Ming-Rong

    2012-03-01

    Since the publication in 2005 of a high-throughput DNA sequencing technology based on PCR reactions carried out in oil emulsions, high-throughput DNA sequencing platforms have evolved into a robust technology for sequencing genomes and diverse DNA libraries. Antibody libraries with vast numbers of members currently serve as a foundation for discovering novel antibody drugs, and high-throughput DNA sequencing technology makes it possible to rapidly identify functional antibody variants with desired properties. Herein we present a review of current applications of high-throughput DNA sequencing technology in the analysis of antibody library diversity, sequencing of CDR3 regions, identification of potent antibodies based on sequence frequency, discovery of functional genes, and combination with various display technologies, so as to provide an alternative approach to the discovery and development of antibody drugs.

  8. Optical Layout Analysis of Polarization Interference Imaging Spectrometer by Jones Calculus in View of both Optical Throughput and Interference Fringe Visibility

    NASA Astrophysics Data System (ADS)

    Zhang, Xuanni; Zhang, Chunmin

    2013-01-01

    A polarization interference imaging spectrometer based on a Savart polariscope was presented. Its optical throughput was analyzed by Jones calculus. The throughput expression was given and clearly showed that the optical throughput depends mainly on the intensity of the incident light, the transmissivity, the refractive index, and the layout of the optical system. The simulation and analysis gave the optimum layout in view of both optical throughput and interference fringe visibility, and verified that the layout of our former design was optimal. The simulation showed that a small deviation from the optimum layout has little influence on interference fringe visibility at the optimum, but a severe influence for other layouts, so a small deviation is admissible at the optimum, which can mitigate the manufacturing difficulty. These results pave the way for further research and engineering design.
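
    A minimal Jones-calculus sketch of the throughput of a polarizer / birefringent element / analyzer chain; the 45-degree orientations and unit input intensity are illustrative assumptions and do not reproduce the paper's full Savart-polariscope layout analysis:

        import numpy as np

        def polarizer(theta):
            """Jones matrix of an ideal linear polarizer at angle theta (rad)."""
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c * c, c * s], [c * s, s * s]], dtype=complex)

        def retarder(delta):
            """Jones matrix of a birefringent element (fast axis at 0 rad)
            introducing a phase retardance delta between the two components."""
            return np.array([[1, 0], [0, np.exp(1j * delta)]], dtype=complex)

        def throughput(delta):
            """Output intensity of polarizer(45 deg) -> retarder -> analyzer(45 deg)
            for unit-intensity light already polarized at 45 degrees."""
            e_in = np.array([1, 1], dtype=complex) / np.sqrt(2)
            e_out = polarizer(np.pi / 4) @ retarder(delta) @ e_in
            return float(np.vdot(e_out, e_out).real)

        # The classic (1 + cos delta) / 2 interference fringe emerges:
        for d in (0.0, np.pi / 2, np.pi):
            print(f"delta = {d:.2f} rad -> I/I0 = {throughput(d):.3f}")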

  9. Combined Effect of Random Transmit Power Control and Inter-Path Interference Cancellation on DS-CDMA Packet Mobile Communications

    NASA Astrophysics Data System (ADS)

    Kudoh, Eisuke; Ito, Haruki; Wang, Zhisen; Adachi, Fumiyuki

    In mobile communication systems, high-speed packet data services are in demand. In high-speed data transmission, throughput degrades severely due to inter-path interference (IPI). Recently, we proposed random transmit power control (TPC) to increase the uplink throughput of DS-CDMA packet mobile communications. In this paper, we apply IPI cancellation in addition to the random TPC. We derive a numerical expression for the received signal-to-interference plus noise power ratio (SINR) and introduce an IPI cancellation factor. We also derive a numerical expression for the system throughput when IPI is cancelled ideally, for comparison with the system throughput evaluated numerically by Monte Carlo computation. We then evaluate, by the Monte Carlo numerical computation method, the combined effect of random TPC and IPI cancellation on the uplink throughput of DS-CDMA packet mobile communications.
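
    An illustrative way to see how a cancellation factor enters such a SINR expression (a generic form for exposition, not the paper's exact derivation): with desired signal power S, residual inter-path interference power I_IPI scaled by a cancellation factor, and noise power N,

        \mathrm{SINR} \;=\; \frac{S}{\varphi \, I_{\mathrm{IPI}} + N}, \qquad 0 \le \varphi \le 1,

    where \varphi = 1 corresponds to no cancellation and \varphi = 0 to ideal cancellation, the case for which the closed-form system throughput is compared against the Monte Carlo results.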

  10. Energy efficient strategy for throughput improvement in wireless sensor networks.

    PubMed

    Jabbar, Sohail; Minhas, Abid Ali; Imran, Muhammad; Khalid, Shehzad; Saleem, Kashif

    2015-01-23

    Network lifetime and throughput are among the prime concerns when designing routing protocols for wireless sensor networks (WSNs). However, most of the existing schemes are geared either towards prolonging network lifetime or towards improving throughput. This paper presents an energy efficient routing scheme for throughput improvement in WSNs. The proposed scheme exploits a multilayer cluster design for energy efficient forwarding node selection, cluster head rotation, and both inter- and intra-cluster routing. To improve throughput, we rotate the role of cluster head among various nodes based on two threshold levels, which reduces the number of dropped packets. We conducted simulations in the NS2 simulator to validate the performance of the proposed scheme. Simulation results demonstrate the performance efficiency of the proposed scheme in terms of various metrics compared to similar approaches published in the literature.

  11. Energy Efficient Strategy for Throughput Improvement in Wireless Sensor Networks

    PubMed Central

    Jabbar, Sohail; Minhas, Abid Ali; Imran, Muhammad; Khalid, Shehzad; Saleem, Kashif

    2015-01-01

    Network lifetime and throughput are among the prime concerns when designing routing protocols for wireless sensor networks (WSNs). However, most of the existing schemes are geared either towards prolonging network lifetime or towards improving throughput. This paper presents an energy efficient routing scheme for throughput improvement in WSNs. The proposed scheme exploits a multilayer cluster design for energy efficient forwarding node selection, cluster head rotation, and both inter- and intra-cluster routing. To improve throughput, we rotate the role of cluster head among various nodes based on two threshold levels, which reduces the number of dropped packets. We conducted simulations in the NS2 simulator to validate the performance of the proposed scheme. Simulation results demonstrate the performance efficiency of the proposed scheme in terms of various metrics compared to similar approaches published in the literature. PMID:25625902
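
    A minimal sketch of a two-threshold cluster-head rotation rule of the kind described above; the threshold values and the residual-energy criterion are illustrative assumptions, not the protocol's exact rule:

        # Illustrative two-threshold cluster-head (CH) rotation: a node keeps the
        # CH role while its residual energy stays above T_HIGH, hands over when it
        # drops below T_LOW, and between the two thresholds it keeps the role only
        # if no clearly better-endowed cluster member is available. Fewer hand-overs
        # mean fewer re-clustering pauses and fewer dropped packets.
        T_HIGH = 0.6   # fraction of initial energy (illustrative)
        T_LOW  = 0.3

        def next_cluster_head(current_ch, members):
            """Return the node that should act as CH for the next round."""
            richest = max(members, key=lambda n: n["energy"])
            if current_ch["energy"] >= T_HIGH:
                return current_ch                      # keep the role
            if current_ch["energy"] < T_LOW:
                return richest                         # forced hand-over
            # between thresholds: hand over only if a clearly better node exists
            return richest if richest["energy"] >= T_HIGH else current_ch

        ch = {"id": 1, "energy": 0.45}
        members = [{"id": 2, "energy": 0.8}, {"id": 3, "energy": 0.5}]
        print(next_cluster_head(ch, members)["id"])    # -> 2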

  12. Determination of Resistant Starch Assimilating Bacteria in Fecal Samples of Mice by In vitro RNA-Based Stable Isotope Probing

    PubMed Central

    Herrmann, Elena; Young, Wayne; Rosendale, Douglas; Conrad, Ralf; Riedel, Christian U.; Egert, Markus

    2017-01-01

    The impact of the intestinal microbiota on human health is becoming increasingly appreciated in recent years. In consequence, and fueled by major technological advances, the composition of the intestinal microbiota in health and disease has been intensively studied by high throughput sequencing approaches. Observations linking dysbiosis of the intestinal microbiota with a number of serious medical conditions including chronic inflammatory disorders and allergic diseases suggest that restoration of the composition and activity of the intestinal microbiota may be a treatment option at least for some of these diseases. One possibility to shape the intestinal microbiota is the administration of prebiotic carbohydrates such as resistant starch (RS). In the present study, we aim at establishing RNA-based stable isotope probing (RNA-SIP) to identify bacterial populations that are involved in the assimilation of RS using anaerobic in vitro fermentation of murine fecal material with stable [U13C] isotope-labeled potato starch. Total RNA from these incubations was extracted, processed by gradient ultracentrifugation and fractionated by density. 16S rRNA gene sequences were amplified from reverse transcribed RNA of high and low density fractions suspected to contain labeled and unlabeled RNA, respectively. Phylogenetic analysis of the obtained sequences revealed a distinct subset of the intestinal microbiota involved in starch metabolism. The results suggest Bacteroidetes, in particular genera affiliated with Prevotellaceae, as well as members of the Ruminococcacea family to be primary assimilators of resistant starch due to a significantly higher relative abundance in higher density fractions in RNA samples isolated after 2 h of incubation. Using high performance liquid chromatography coupled to isotope ratio mass spectrometry (HPLC-IRMS) analysis, some stable isotope label was recovered from acetate, propionate and butyrate. Here, we demonstrate the suitability of RNA-SIP to link specific groups of microorganisms with fermentation of a specific substrate. The application of RNA-SIP in future in vivo studies will help to better understand the mechanisms behind functionality of a prebiotic carbohydrate and its impact on an intestinal ecosystem with potential implications for human health. PMID:28790981

  13. Determination of Resistant Starch Assimilating Bacteria in Fecal Samples of Mice by In vitro RNA-Based Stable Isotope Probing.

    PubMed

    Herrmann, Elena; Young, Wayne; Rosendale, Douglas; Conrad, Ralf; Riedel, Christian U; Egert, Markus

    2017-01-01

    The impact of the intestinal microbiota on human health is becoming increasingly appreciated in recent years. In consequence, and fueled by major technological advances, the composition of the intestinal microbiota in health and disease has been intensively studied by high throughput sequencing approaches. Observations linking dysbiosis of the intestinal microbiota with a number of serious medical conditions including chronic inflammatory disorders and allergic diseases suggest that restoration of the composition and activity of the intestinal microbiota may be a treatment option at least for some of these diseases. One possibility to shape the intestinal microbiota is the administration of prebiotic carbohydrates such as resistant starch (RS). In the present study, we aim at establishing RNA-based stable isotope probing (RNA-SIP) to identify bacterial populations that are involved in the assimilation of RS using anaerobic in vitro fermentation of murine fecal material with stable [U-13C] isotope-labeled potato starch. Total RNA from these incubations was extracted, processed by gradient ultracentrifugation and fractionated by density. 16S rRNA gene sequences were amplified from reverse transcribed RNA of high and low density fractions suspected to contain labeled and unlabeled RNA, respectively. Phylogenetic analysis of the obtained sequences revealed a distinct subset of the intestinal microbiota involved in starch metabolism. The results suggest Bacteroidetes, in particular genera affiliated with Prevotellaceae, as well as members of the Ruminococcaceae family, to be primary assimilators of resistant starch due to a significantly higher relative abundance in higher density fractions in RNA samples isolated after 2 h of incubation. Using high performance liquid chromatography coupled to isotope ratio mass spectrometry (HPLC-IRMS) analysis, some stable isotope label was recovered from acetate, propionate and butyrate. Here, we demonstrate the suitability of RNA-SIP to link specific groups of microorganisms with fermentation of a specific substrate. The application of RNA-SIP in future in vivo studies will help to better understand the mechanisms behind functionality of a prebiotic carbohydrate and its impact on an intestinal ecosystem with potential implications for human health.

  14. Lessons from high-throughput protein crystallization screening: 10 years of practical experience

    PubMed Central

    JR, Luft; EH, Snell; GT, DeTitta

    2011-01-01

    Introduction: X-ray crystallography provides the majority of our structural biological knowledge at a molecular level and in terms of pharmaceutical design is a valuable tool to accelerate discovery. It is the premier technique in the field, but its usefulness is significantly limited by the need to grow well-diffracting crystals. It is for this reason that high-throughput crystallization has become a key technology that has matured over the past 10 years through the field of structural genomics. Areas covered: The authors describe their experiences in high-throughput crystallization screening in the context of structural genomics and the general biomedical community. They focus on the lessons learnt from the operation of a high-throughput crystallization screening laboratory, which to date has screened over 12,500 biological macromolecules. They also describe the approaches taken to maximize the success while minimizing the effort. Through this, the authors hope that the reader will gain an insight into the efficient design of a laboratory and protocols to accomplish high-throughput crystallization on a single-, multiuser-laboratory or industrial scale. Expert Opinion: High-throughput crystallization screening is readily available but, despite the power of the crystallographic technique, getting crystals is still not a solved problem. High-throughput approaches can help when used skillfully; however, they still require human input in the detailed analysis and interpretation of results to be more successful. PMID:22646073

  15. High-throughput screening based on label-free detection of small molecule microarrays

    NASA Astrophysics Data System (ADS)

    Zhu, Chenggang; Fei, Yiyan; Zhu, Xiangdong

    2017-02-01

    Based on small-molecule microarrays (SMMs) and an oblique-incidence reflectivity difference (OI-RD) scanner, we have developed a novel high-throughput preliminary drug screening platform based on label-free monitoring of direct interactions between target proteins and immobilized small molecules. The screening platform is especially attractive for screening compounds against targets of unknown function and/or structure that are not compatible with functional assay development. In this screening platform, the OI-RD scanner serves as a label-free detection instrument that is able to monitor about 15,000 biomolecular interactions in a single experiment without the need to label any biomolecule. In addition, SMMs serve as a novel format for high-throughput screening through the immobilization of tens of thousands of different compounds on a single phenyl-isocyanate-functionalized glass slide. Using this high-throughput screening platform, we sequentially screened five target proteins (purified target proteins or cell lysates containing the target protein) in high-throughput, label-free mode. We found hits for each target protein, and the inhibitory effects of some hits were confirmed by subsequent functional assays. Compared to traditional high-throughput screening assays, this platform has many advantages, including minimal sample consumption, minimal distortion of interactions through label-free detection, and multi-target screening analysis, and it has great potential to become a complementary screening platform in the field of drug discovery.

  16. 40 CFR Table 3 to Subpart Eeee of... - Operating Limits-High Throughput Transfer Racks

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 12 2010-07-01 2010-07-01 true Operating Limits-High Throughput Transfer Racks 3 Table 3 to Subpart EEEE of Part 63 Protection of Environment ENVIRONMENTAL PROTECTION... Throughput Transfer Racks As stated in § 63.2346(e), you must comply with the operating limits for existing...

  17. High-Throughput Quantitation of Proline Betaine in Foods and Suitability as a Valid Biomarker for Citrus Consumption.

    PubMed

    Lang, Roman; Lang, Tatjana; Bader, Matthias; Beusch, Anja; Schlagbauer, Verena; Hofmann, Thomas

    2017-03-01

    Proline betaine has been proposed as a candidate dietary biomarker for citrus intake. To validate its suitability as a dietary biomarker and to gain insight into the range of this per-methylated amino acid in foods and beverages, a quick and accurate stable isotope dilution assay was developed for quantitative high-throughput HILIC-MS/MS screening of proline betaine in foods and urine after solvent-mediated matrix precipitation. Quantitative analysis of a variety of foods confirmed substantial amounts of proline betaine in citrus juices (140-1100 mg/L) and revealed high abundance in tubers of the vegetable Stachys affinis, also known as Chinese artichoke (∼700 mg/kg). Seafood including clams, shrimp, and lobster contained limited amounts (1-95 mg/kg), whereas only traces were detected in fish, cuttlefish, fresh meat, dairy products, and fresh vegetables (<3 mg/kg), and in coffee, tea, beer, and wine (<7 mg/L). The human excretion profiles of proline betaine in urine were comparable when common portions of orange juice or fried Stachys tubers were consumed. Neither mussels nor beer provided enough proline betaine to detect significant differences between morning urine samples collected before and after consumption. As Stachys is a rather rare vegetable and not part of people's daily diet, the data reported here will help to monitor subjects' compliance in future nutritional human studies on citrus products or the exclusion of citrus products in the wash-out phase of an intervention study. Moreover, proline betaine measurement can contribute to the establishment of a toolbox of valid dietary biomarkers reflecting wider aspects of diet to assess metabolic profiles as measures of dietary exposure and indicators of dietary patterns, dietary changes, or effectiveness of dietary interventions.
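
    The quantitation in such a stable isotope dilution assay rests on the ratio of analyte to labeled internal standard. In its generic single-point form (assuming equal MS response factors for analyte and labeled standard; the paper's actual calibration details are not reproduced here):

        c_{\mathrm{analyte}} \;=\; \frac{A_{\mathrm{analyte}}}{A_{\mathrm{IS}}} \cdot c_{\mathrm{IS}},

    where the A terms are the MRM peak areas of proline betaine and of the isotope-labeled internal standard (IS), and c_IS is the known spiked concentration of the standard.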

  18. Markov Chain Model-Based Optimal Cluster Heads Selection for Wireless Sensor Networks

    PubMed Central

    Ahmed, Gulnaz; Zou, Jianhua; Zhao, Xi; Sadiq Fareed, Mian Muhammad

    2017-01-01

    A longer network lifetime for Wireless Sensor Networks (WSNs) is a goal that is directly related to energy consumption. This energy consumption issue becomes more challenging when the energy load is not properly distributed in the sensing area. A hierarchical clustering architecture is the best choice for these kinds of issues. In this paper, we introduce a novel clustering protocol called Markov chain model-based optimal cluster heads (MOCHs) selection for WSNs. In our proposed model, we introduce a simple strategy for selecting the optimal number of cluster heads to overcome the problem of uneven energy distribution in the network. The attractiveness of our model is that the BS controls the number of cluster heads while the cluster heads control the cluster members in each cluster, in such a restricted manner that a uniform and even load is ensured in each cluster. We perform an extensive range of simulations using five quality measures, namely: the lifetime of the network, the stable and unstable regions in the lifetime of the network, the throughput of the network, the number of cluster heads in the network, and the transmission time of the network, to analyze the proposed model. We compare MOCHs against Sleep-awake Energy Efficient Distributed (SEED) clustering, Artificial Bee Colony (ABC), Zone Based Routing (ZBR), and Centralized Energy Efficient Clustering (CEEC) using the above-discussed quality metrics and found that the lifetime of the proposed model is almost 1095, 2630, 3599, and 2045 rounds (time steps) greater than that of SEED, ABC, ZBR, and CEEC, respectively. The obtained results demonstrate that MOCHs is better than SEED, ABC, ZBR, and CEEC in terms of energy efficiency and network throughput. PMID:28241492

  19. Absolute quantification of prion protein (90-231) using stable isotope-labeled chymotryptic peptide standards in a LC-MRM AQUA workflow.

    PubMed

    Sturm, Robert; Sheynkman, Gloria; Booth, Clarissa; Smith, Lloyd M; Pedersen, Joel A; Li, Lingjun

    2012-09-01

    Substantial evidence indicates that the disease-associated conformer of the prion protein (PrP(TSE)) constitutes the etiologic agent in prion diseases. These diseases affect multiple mammalian species. PrP(TSE) has the ability to convert the conformation of the normal prion protein (PrP(C)) into a β-sheet rich form resistant to proteinase K digestion. Common immunological techniques lack the sensitivity to detect PrP(TSE) at subfemtomole levels, whereas animal bioassays, cell culture, and in vitro conversion assays offer higher sensitivity but lack the high-throughput the immunological assays offer. Mass spectrometry is an attractive alternative to the above assays as it offers high-throughput, direct measurement of a protein's signature peptide, often with subfemtomole sensitivities. Although a liquid chromatography-multiple reaction monitoring (LC-MRM) method has been reported for PrP(TSE), the chemical composition and lack of amino acid sequence conservation of the signature peptide may compromise its accuracy and make it difficult to apply to multiple species. Here, we demonstrate that an alternative protease (chymotrypsin) can produce signature peptides suitable for a LC-MRM absolute quantification (AQUA) experiment. The new method offers several advantages, including: (1) a chymotryptic signature peptide lacking chemically active residues (Cys, Met) that can confound assay accuracy; (2) low attomole limits of detection and quantitation (LOD and LOQ); and (3) a signature peptide retaining the same amino acid sequence across most mammals naturally susceptible to prion infection as well as important laboratory models. To the authors' knowledge, this is the first report on the use of a non-tryptic peptide in a LC-MRM AQUA workflow.

  20. Absolute quantification of prion protein (90-231) using stable isotope-labeled chymotryptic peptide standards in a LC-MRM AQUA workflow

    PubMed Central

    Sturm, Robert; Kreitinger, Gloria; Booth, Clarissa; Smith, Lloyd; Pedersen, Joel; Li, Lingjun

    2012-01-01

    Substantial evidence indicates that the disease-associated conformer of the prion protein (PrPTSE) constitutes the etiological agent in prion diseases. These diseases affect multiple mammalian species. PrPTSE has the ability to convert the conformation of the normal prion protein (PrPC) into a β-sheet rich form resistant to proteinase K digestion. Common immunological techniques lack the sensitivity to detect PrPTSE at sub-femtomole levels while animal bioassays, cell culture, and in vitro conversion assays offer ultrasensitivity but lack the high-throughput the immunological assays offer. Mass spectrometry is an attractive alternative to the above assays as it offers high-throughput, direct measurement of a protein’s signature peptide, often with sub-femtomole sensitivities. Although a liquid chromatography-multiple reaction monitoring (LC-MRM) method has been reported for PrPTSE, the chemical composition and lack of amino acid sequence conservation of the signature peptide may compromise its accuracy and make it difficult to apply to multiple species. Here, we demonstrate that an alternative protease (chymotrypsin) can produce signature peptides suitable for a LC-MRM absolute quantification (AQUA) experiment. The new method offers several advantages, including: (1) a chymotryptic signature peptide lacking chemically active residues (Cys, Met) that can confound assay accuracy; (2) low attomole limits of detection and quantitation (LOD and LOQ); and (3) a signature peptide retaining the same amino acid sequence across most mammals naturally susceptible to prion infection as well as important laboratory models. To the authors’ knowledge, this is the first report of the use of a non-tryptic peptide in a LC-MRM AQUA workflow. PMID:22714949
