Sample records for modeling node bandwidth

  1. Modeling node bandwidth limits and their effects on vector combining algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Littlefield, R.J.

    Each node in a message-passing multicomputer typically has several communication links. However, the maximum aggregate communication speed of a node is often less than the sum of its individual link speeds. Such computers are called node bandwidth limited (NBL). The NBL constraint is important when choosing algorithms because it can change the relative performance of different algorithms that accomplish the same task. This paper introduces a model of communication performance for NBL computers and uses the model to analyze the overall performance of three algorithms for vector combining (global sum) on the Intel Touchstone DELTA computer. Each of the three algorithms is found to be at least 33% faster than the other two for some combinations of machine size and vector length. The NBL constraint is shown to significantly affect the conditions under which each algorithm is fastest.
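
    The NBL idea above reduces to a one-line cost model: a node's effective aggregate bandwidth is the sum of its link speeds, capped by the node limit. A minimal sketch (the link counts and speeds below are hypothetical, not taken from the paper):

```python
def effective_node_bandwidth(num_links, link_speed, node_limit):
    """Aggregate bandwidth a node can sustain: the sum of its
    individual link speeds, capped by the node bandwidth limit
    (the NBL constraint described in the abstract)."""
    return min(num_links * link_speed, node_limit)

# Hypothetical numbers: 4 links at 25 MB/s each, but the node can only
# move 60 MB/s in aggregate, so the node (not the links) is the bottleneck.
assert effective_node_bandwidth(4, 25.0, 60.0) == 60.0
# With only 2 links active, the links themselves are the bottleneck.
assert effective_node_bandwidth(2, 25.0, 60.0) == 50.0
```

    An algorithm that spreads traffic over many links gains nothing once the node limit binds, which is why the NBL constraint can reorder the relative performance of combining algorithms.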

  2. An Efficient Interactive Model for On-Demand Sensing-As-A-Services of Sensor-Cloud

    PubMed Central

    Dinh, Thanh; Kim, Younghan

    2016-01-01

    This paper proposes an efficient interactive model that enables the sensor-cloud to provide on-demand sensing services for multiple applications with different requirements at the same time. The interactive model is designed for both the cloud and sensor nodes to optimize the resource consumption of physical sensors, as well as the bandwidth consumption of sensing traffic. In the model, the sensor-cloud plays a key role in aggregating application requests to minimize the workload imposed on constrained physical nodes while guaranteeing that the requirements of all applications are satisfied. Physical sensor nodes perform their sensing under the guidance of the sensor-cloud and, based on these interactions, adapt their scheduling to minimize their energy consumption. Comprehensive experimental results show that our proposed system achieves a significant improvement over current approaches in terms of the energy consumption of physical sensors, the bandwidth consumption from the sink node to the sensor-cloud, packet delivery latency, reliability, and scalability. Based on the obtained results, we discuss the economic benefits and how the proposed system enables a win-win model in the sensor-cloud. PMID:27367689

  3. An Efficient Interactive Model for On-Demand Sensing-As-A-Services of Sensor-Cloud.

    PubMed

    Dinh, Thanh; Kim, Younghan

    2016-06-28

    This paper proposes an efficient interactive model that enables the sensor-cloud to provide on-demand sensing services for multiple applications with different requirements at the same time. The interactive model is designed for both the cloud and sensor nodes to optimize the resource consumption of physical sensors, as well as the bandwidth consumption of sensing traffic. In the model, the sensor-cloud plays a key role in aggregating application requests to minimize the workload imposed on constrained physical nodes while guaranteeing that the requirements of all applications are satisfied. Physical sensor nodes perform their sensing under the guidance of the sensor-cloud and, based on these interactions, adapt their scheduling to minimize their energy consumption. Comprehensive experimental results show that our proposed system achieves a significant improvement over current approaches in terms of the energy consumption of physical sensors, the bandwidth consumption from the sink node to the sensor-cloud, packet delivery latency, reliability, and scalability. Based on the obtained results, we discuss the economic benefits and how the proposed system enables a win-win model in the sensor-cloud.

  4. Performance Optimization of Priority Assisted CSMA/CA Mechanism of 802.15.6 under Saturation Regime

    PubMed Central

    Shakir, Mustafa; Rehman, Obaid Ur; Rahim, Mudassir; Alrajeh, Nabil; Khan, Zahoor Ali; Khan, Mahmood Ashraf; Niaz, Iftikhar Azim; Javaid, Nadeem

    2016-01-01

    Due to recent developments in the field of Wireless Sensor Networks (WSNs), Wireless Body Area Networks (WBANs) have become a major area of interest for developers and researchers. The human body exhibits postural mobility, due to which distance variation occurs and the status of connections amongst sensors changes from time to time. One of the major requirements of WBAN is to prolong the network lifetime without compromising other performance measures, i.e., delay, throughput and bandwidth efficiency. Node prioritization is one of the possible solutions to obtain optimum performance in WBAN. The IEEE 802.15.6 CSMA/CA standard splits nodes with different user priorities based on Contention Window (CW) size; a smaller CW size is assigned to higher priority nodes. This standard helps to reduce delay; however, it is not energy efficient. In this paper, we propose a hybrid node prioritization scheme based on IEEE 802.15.6 CSMA/CA to reduce energy consumption and maximize network lifetime. In this scheme, optimum performance is achieved by node prioritization based on CW size as well as power within each user priority. Our proposed scheme reduces the average backoff time for channel access due to CW-based prioritization. Additionally, power-based prioritization within a user priority helps to minimize the required number of retransmissions. Furthermore, we compare our scheme with the IEEE 802.15.6 CSMA/CA standard (CW-assisted node prioritization) and power-assisted node prioritization under postural mobility in WBAN. Mathematical expressions are derived to obtain an accurate analytical model for throughput, delay, bandwidth efficiency, energy consumption and lifetime for each node prioritization scheme. To validate the analytical model, we performed simulations in the OMNeT++/MiXiM framework. Analytical and simulation results show that our proposed hybrid node prioritization scheme outperforms the other node prioritization schemes in terms of average network delay, average throughput, average bandwidth efficiency and network lifetime. PMID:27598167

  5. Determining a bisection bandwidth for a multi-node data communications network

    DOEpatents

    Faraj, Ahmad A.

    2010-01-26

    Methods, systems, and products are disclosed for determining a bisection bandwidth for a multi-node data communications network that include: partitioning nodes in the network into a first sub-network and a second sub-network in dependence upon a topology of the network; sending, by each node in the first sub-network to a destination node in the second sub-network, a first message having a predetermined message size; receiving, by each node in the first sub-network from a source node in the second sub-network, a second message; measuring, by each node in the first sub-network, the elapsed communications time between the sending of the first message and the receiving of the second message; selecting the longest elapsed communications time; and calculating the bisection bandwidth for the network in dependence upon the number of the nodes in the first sub-network, the predetermined message size of the first test message, and the longest elapsed communications time.
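
    The measurement procedure in this abstract lends itself to a short sketch. The formula below is one plausible reading of the patent's description, not the patent's exact formula; in particular, the factor of two (counting a message each way across the cut) is an assumption, as are all names and numbers:

```python
def bisection_bandwidth(nodes_in_first_subnet, message_size, longest_elapsed):
    """Estimate bisection bandwidth from the slowest node pair: every
    node in the first sub-network sends one message of message_size
    bytes across the cut and receives one back; the longest elapsed
    time bounds the aggregate transfer rate across the bisection."""
    bytes_crossing_cut = 2 * nodes_in_first_subnet * message_size  # one each way
    return bytes_crossing_cut / longest_elapsed

# 512 nodes each exchanging a 1 MiB message, slowest pair done in 0.25 s:
bw = bisection_bandwidth(512, 1 << 20, 0.25)  # bytes per second
```

    Using the longest elapsed time (rather than the mean) makes the estimate conservative: the slowest pair sets the effective rate of the whole bisection.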

  6. Path connectivity based spectral defragmentation in flexible bandwidth networks.

    PubMed

    Wang, Ying; Zhang, Jie; Zhao, Yongli; Zhang, Jiawei; Zhao, Jie; Wang, Xinbo; Gu, Wanyi

    2013-01-28

    Optical networks with flexible bandwidth provisioning have become a very promising networking architecture. They enable efficient resource utilization and support heterogeneous bandwidth demands. In this paper, two novel spectrum defragmentation approaches, i.e., the Maximum Path Connectivity (MPC) algorithm and the Path Connectivity Triggering (PCT) algorithm, are proposed based on the notion of Path Connectivity, which is defined to represent the maximum variation of node switching ability along a path in flexible bandwidth networks. A cost-performance-ratio based profitability model is given to capture the pros and cons of spectrum defragmentation. We compare these two proposed algorithms with a non-defragmentation algorithm in terms of blocking probability. Then we analyze the differences in defragmentation profitability between the MPC and PCT algorithms.

  7. DoS detection in IEEE 802.11 with the presence of hidden nodes

    PubMed Central

    Soryal, Joseph; Liu, Xijie; Saadawi, Tarek

    2013-01-01

    The paper presents a novel technique to detect Denial of Service (DoS) attacks applied by misbehaving nodes in wireless networks with hidden nodes, employing the widely used IEEE 802.11 Distributed Coordination Function (DCF) protocols described in the IEEE standard [1]. Attacker nodes alter the IEEE 802.11 DCF firmware to illicitly capture the channel, elevating their probability of successful packet transmission and using up the bandwidth share of the innocent nodes that follow the protocol standards. We obtained the theoretical network throughput by solving the two-dimensional Markov Chain model described by Bianchi [2], and Liu and Saadawi [3], to determine the channel capacity. We validated the theoretical computations against results obtained with the OPNET simulator [4] to define the baseline for the average attainable throughput in the channel under standard conditions where all nodes follow the standards. The main goal of the DoS attacker is to prevent the innocent nodes from accessing the channel by capturing the channel's bandwidth, while striving to appear as an innocent node that follows the standards. The protocol resides in every node to enable each node to police other nodes in its immediate wireless coverage area, so all innocent nodes are able to detect and identify the DoS attacker in their wireless coverage area. We applied the protocol to two Physical Layer technologies, Direct Sequence Spread Spectrum (DSSS) and Frequency Hopping Spread Spectrum (FHSS), and the results are presented to validate the algorithm. PMID:25685510

  8. DoS detection in IEEE 802.11 with the presence of hidden nodes.

    PubMed

    Soryal, Joseph; Liu, Xijie; Saadawi, Tarek

    2014-07-01

    The paper presents a novel technique to detect Denial of Service (DoS) attacks applied by misbehaving nodes in wireless networks with hidden nodes, employing the widely used IEEE 802.11 Distributed Coordination Function (DCF) protocols described in the IEEE standard [1]. Attacker nodes alter the IEEE 802.11 DCF firmware to illicitly capture the channel, elevating their probability of successful packet transmission and using up the bandwidth share of the innocent nodes that follow the protocol standards. We obtained the theoretical network throughput by solving the two-dimensional Markov Chain model described by Bianchi [2], and Liu and Saadawi [3], to determine the channel capacity. We validated the theoretical computations against results obtained with the OPNET simulator [4] to define the baseline for the average attainable throughput in the channel under standard conditions where all nodes follow the standards. The main goal of the DoS attacker is to prevent the innocent nodes from accessing the channel by capturing the channel's bandwidth, while striving to appear as an innocent node that follows the standards. The protocol resides in every node to enable each node to police other nodes in its immediate wireless coverage area, so all innocent nodes are able to detect and identify the DoS attacker in their wireless coverage area. We applied the protocol to two Physical Layer technologies, Direct Sequence Spread Spectrum (DSSS) and Frequency Hopping Spread Spectrum (FHSS), and the results are presented to validate the algorithm.

  9. Coarse-Grain Bandwidth Estimation Scheme for Large-Scale Network

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Jennings, Esther H.; Sergui, John S.

    2013-01-01

    A large-scale network that supports a large number of users can have an aggregate data rate of hundreds of Mbps at any time. High-fidelity simulation of a large-scale network might be too complicated and memory-intensive for typical commercial-off-the-shelf (COTS) tools. Unlike a large commercial wide-area network (WAN) that shares diverse network resources among diverse users and has a complex topology requiring routing mechanisms and flow control, the ground communication links of a space network operate under the assumption of a guaranteed dedicated bandwidth allocation between specific sparse endpoints in a star-like topology. This work solved the network design problem of estimating the bandwidths of ground network architecture options that offer different service classes to meet the latency requirements of different user data types. A top-down analysis and simulation approach was created to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. These techniques were used to estimate the WAN bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network. A new analytical approach, called the "leveling scheme," was developed to model the store-and-forward mechanism of the network data flow. The term "leveling" refers to the spreading of data across a longer time horizon without violating the corresponding latency requirement of the data type. Two versions of the leveling scheme were developed: 1. A straightforward version that simply spreads the data of each data type across the time horizon, does not take into account the interactions among data types within a pass or between data types across overlapping passes at a network node, and is inherently sub-optimal. 2. A two-state Markov leveling scheme that takes into account the second-order behavior of the store-and-forward mechanism and the interactions among data types within a pass. The novelty of this approach lies in the modeling of the store-and-forward mechanism of each network node. The term store-and-forward refers to the data traffic regulation technique in which data is sent to an intermediate network node, where it is temporarily stored and sent at a later time to the destination node or to another intermediate node. Store-and-forward can be applied both to space-based networks that have intermittent connectivity and to ground-based networks with deterministic connectivity. For ground-based networks, the store-and-forward mechanism is used to regulate the network data flow and link resource utilization such that the user data types can be delivered to their destination nodes without violating their respective latency requirements.
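
    The straightforward leveling version can be sketched directly: each data type is spread at a constant rate over its own latency window, and the link must carry the sum of concurrent rates. This is a simplified reading with illustrative names and units, not the SCaN sizing tool itself:

```python
def required_bandwidth(transfers, horizon, dt=1.0):
    """Peak link rate under straightforward leveling: each transfer
    (arrival_time, volume, latency) is spread at rate volume/latency
    over [arrival, arrival + latency]; the link carries the sum of
    all concurrent rates, and the peak sizes the link."""
    steps = int(horizon / dt)
    load = [0.0] * steps
    for arrival, volume, latency in transfers:
        rate = volume / latency
        for i in range(int(arrival / dt), min(steps, int((arrival + latency) / dt))):
            load[i] += rate
    return max(load)

# Two 100 MB passes with a 200 s latency requirement, overlapping by 100 s:
peak = required_bandwidth([(0.0, 100.0, 200.0), (100.0, 100.0, 200.0)], 400.0)
# During the overlap both types drip at 0.5 MB/s, so the peak is 1.0 MB/s.
```

    The sub-optimality the abstract notes is visible here: spreading each type independently over its full window ignores the chance to shift one type's traffic away from another's peak.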

  10. Bandwidth turbulence control based on flow community structure in the Internet

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoyu; Gu, Rentao; Ji, Yuefeng

    2016-10-01

    Bursty flows vary rapidly in short periods of time and cause fierce bandwidth turbulence in the Internet. In this letter, we model the flow bandwidth turbulence process by constructing a flow interaction network (FIN network), with nodes representing flows and edges denoting bandwidth interactions among them. To restrain the bandwidth turbulence in FIN networks, an immune control strategy based on flow community structure is proposed. Flows in community boundary positions are immunized to cut off inter-community turbulence spreading. By applying this control strategy in the first- and second-level flow communities separately, 97.2% of flows can effectively avoid bandwidth variations by immunizing 21% of flows, and the average bandwidth variation degree reaches near zero. To achieve a similar result, about 70%-90% of flows must be immunized under a targeted control strategy based on flow degrees or under a random control strategy. Moreover, simulation results show that the control effect of the proposed strategy improves significantly when the number of immune flows in each control step is relatively small.

  11. ICE-Based Custom Full-Mesh Network for the CHIME High Bandwidth Radio Astronomy Correlator

    NASA Astrophysics Data System (ADS)

    Bandura, K.; Cliche, J. F.; Dobbs, M. A.; Gilbert, A. J.; Ittah, D.; Mena Parra, J.; Smecher, G.

    2016-03-01

    New-generation radio interferometers encode signals from thousands of antenna feeds across a large bandwidth. Channelizing and correlating this data requires networking capabilities that can handle unprecedented data rates at reasonable cost. The Canadian Hydrogen Intensity Mapping Experiment (CHIME) correlator processes 8 bits from N = 2,048 digitizer inputs across 400 MHz of bandwidth. Measured in N² × bandwidth, it is the largest radio correlator currently being commissioned. Its digital back-end must exchange and reorganize the 6.6 terabit/s produced by its 128 digitizing and channelizing nodes and feed it to the 256 graphics processing unit (GPU) node spatial correlator in such a way that each node obtains data from all digitizer inputs but across a small fraction of the bandwidth (i.e., a 'corner-turn'). In order to maximize performance and reliability of the corner-turn system while minimizing cost, a custom networking solution has been implemented. The system makes use of Field Programmable Gate Array (FPGA) transceivers to implement direct, passive copper, full-mesh, high-speed serial connections between sixteen circuit boards in a crate, to exchange data between crates, and to offload the data to a cluster of 256 GPU nodes using standard 10 Gbit/s Ethernet links. The GPU nodes complete the corner-turn by combining data from all crates and then computing visibilities. Eye diagrams and frame error counters confirm error-free operation of the corner-turn network in both the currently operating CHIME Pathfinder telescope (a prototype for the full CHIME telescope) and a representative fraction of the full CHIME hardware, providing an end-to-end system validation. An analysis of an equivalent corner-turn system built with Ethernet switches instead of custom passive data links is provided.
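
    The corner-turn described above is, at heart, a distributed transpose: data arrives grouped by channelizing node and must leave grouped by frequency slice. A toy in-memory sketch (the list sizes are illustrative, not CHIME's real 128 x 256 layout):

```python
def corner_turn(node_buffers):
    """Reorganize per-channelizer buffers (all frequency slices for a
    subset of inputs) into per-GPU-node buffers (one frequency slice
    from every input): a transpose of the node x frequency layout."""
    num_slices = len(node_buffers[0])
    return [[buf[f] for buf in node_buffers] for f in range(num_slices)]

# 3 channelizer nodes, 2 frequency slices each:
out = corner_turn([["n0f0", "n0f1"], ["n1f0", "n1f1"], ["n2f0", "n2f1"]])
# GPU node 0 now holds slice 0 from every input: ["n0f0", "n1f0", "n2f0"]
```

    In the real system this reshuffle happens over the full-mesh backplane links rather than in memory, but the data movement pattern is the same.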

  12. Enabling Secure High-Performance Wireless Ad Hoc Networking

    DTIC Science & Technology

    2003-05-29

    destinations, consuming energy and available bandwidth. An attacker may similarly create a routing black hole, in which all packets are dropped: by sending...of the vertex cut, for example by forwarding only routing packets and not data packets, such that the nodes waste energy forwarding packets to the...with limited resources, including network bandwidth and the CPU processing capacity, memory, and battery power ( energy ) of each individual node in the

  13. Bisectional fault detection system

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN

    2012-02-14

    An apparatus, program product and method logically divide a group of nodes into sections and cause node pairs, each comprising a node from each section, to communicate. Results from the communications may be analyzed to determine performance characteristics, such as bandwidth and proper connectivity.

  14. Bisectional fault detection system

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN

    2009-08-04

    An apparatus and program product logically divide a group of nodes into sections and cause node pairs, each comprising a node from each section, to communicate. Results from the communications may be analyzed to determine performance characteristics, such as bandwidth and proper connectivity.

  15. Bisectional fault detection system

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Pinnow, Kurt Walter [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian Edward [Rochester, MN

    2008-11-11

    An apparatus, program product and method logically divide a group of nodes into sections and cause node pairs, each comprising a node from each section, to communicate. Results from the communications may be analyzed to determine performance characteristics, such as bandwidth and proper connectivity.

  16. C3 System Performance Simulation and User Manual. Getting Started: Guidelines for Users

    NASA Technical Reports Server (NTRS)

    2006-01-01

    This document is a User's Manual describing the C3 Simulation capabilities. The subject work was designed to simulate the communications involved in the flight of a Remotely Operated Aircraft (ROA) using the Opnet software. Opnet provides a comprehensive development environment supporting the modeling of communication networks and distributed systems. It has tools for model design, simulation, data collection, and data analysis. Opnet models are hierarchical -- consisting of a project which contains node models which in turn contain process models. Nodes can be fixed, mobile, or satellite. Links between nodes can be physical or wireless. Communications are packet based. The model is very generic in its current form. Attributes such as frequency and bandwidth can easily be modified to better reflect a specific platform. The model is not fully developed at this stage -- there are still more enhancements to be added. Current issues are documented throughout this guide.

  17. Exploring the use of I/O nodes for computation in a MIMD multiprocessor

    NASA Technical Reports Server (NTRS)

    Kotz, David; Cai, Ting

    1995-01-01

    As parallel systems move into the production scientific-computing world, the emphasis will be on cost-effective solutions that provide high throughput for a mix of applications. Cost-effective solutions demand that a system make effective use of all of its resources. Many MIMD multiprocessors today, however, distinguish between 'compute' and 'I/O' nodes, the latter having attached disks and being dedicated to running the file-system server. This static division of responsibilities simplifies system management but does not necessarily lead to the best performance in workloads that need a different balance of computation and I/O. Of course, computational processes sharing a node with a file-system service may receive less CPU time, network bandwidth, and memory bandwidth than they would on a computation-only node. In this paper we begin to examine this issue experimentally. We found that high-performance I/O does not necessarily require substantial CPU time, leaving plenty of time for application computation. There were some complex file-system requests, however, which left little CPU time available to the application. (The impact on network and memory bandwidth still needs to be determined.) For applications (or users) that cannot tolerate an occasional interruption, we recommend that they continue to use only compute nodes. For tolerant applications needing more cycles than those provided by the compute nodes, we recommend that they take full advantage of both compute and I/O nodes for computation, and that operating systems should make this possible.

  18. A Novel Deployment Scheme Based on Three-Dimensional Coverage Model for Wireless Sensor Networks

    PubMed Central

    Xiao, Fu; Yang, Yang; Wang, Ruchuan; Sun, Lijuan

    2014-01-01

    Coverage pattern and deployment strategy are directly related to the optimal allocation of the limited resources of wireless sensor networks, such as node energy, communication bandwidth, and computing power, and these largely determine network quality. A three-dimensional coverage pattern and deployment scheme are proposed in this paper. First, by analyzing regular polyhedron models in a three-dimensional scene, a coverage pattern based on cuboids is proposed; the relationship between coverage and the sensing radius of nodes is then deduced, and the minimum number of sensor nodes needed to maintain full coverage of the network area is calculated. Finally, sensor nodes are deployed according to the coverage pattern after the monitored area is subdivided into a finite 3D grid. Experimental results show that, compared with the traditional random method, the number of sensor nodes is effectively reduced while full coverage of the monitored area is ensured using our coverage pattern and deterministic deployment scheme. PMID:25045747
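
    The cuboid pattern has a simple closed form when the cuboid is a cube inscribed in the sensing sphere (side 2r/√3). The sketch below follows that standard geometric construction, which may differ in detail from the paper's pattern; the region dimensions are hypothetical:

```python
import math

def min_sensor_count(x, y, z, radius):
    """Sensors needed for full coverage when the region is tiled with
    cubes inscribed in each node's sensing sphere: a cube of side
    2r/sqrt(3) fits inside a sphere of radius r, so every point of a
    cube is within sensing range of a node at its center."""
    side = 2.0 * radius / math.sqrt(3.0)
    return math.ceil(x / side) * math.ceil(y / side) * math.ceil(z / side)

# A 100 x 100 x 30 m region with a 10 m sensing radius:
n = min_sensor_count(100.0, 100.0, 30.0, 10.0)  # 9 * 9 * 3 = 243 sensors
```

    A deterministic grid placement like this is what the abstract compares against random deployment, which needs many more nodes to guarantee the same coverage.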

  19. Low latency, high bandwidth data communications between compute nodes in a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2010-11-02

    Methods, parallel computers, and computer program products are disclosed for low latency, high bandwidth data communications between compute nodes in a parallel computer. Embodiments include receiving, by an origin direct memory access (`DMA`) engine of an origin compute node, data for transfer to a target compute node; sending, by the origin DMA engine of the origin compute node to a target DMA engine on the target compute node, a request to send (`RTS`) message; transferring, by the origin DMA engine, a predetermined portion of the data to the target compute node using a memory FIFO operation; determining, by the origin DMA engine, whether an acknowledgement of the RTS message has been received from the target DMA engine; if an acknowledgement of the RTS message has not been received, transferring, by the origin DMA engine, another predetermined portion of the data to the target compute node using a memory FIFO operation; and if the acknowledgement of the RTS message has been received by the origin DMA engine, transferring, by the origin DMA engine, any remaining portion of the data to the target compute node using a direct put operation.
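
    The claimed control flow reads naturally as a loop: keep streaming eager FIFO chunks until the RTS acknowledgement arrives, then finish with one direct put. A host-side sketch with hypothetical callbacks standing in for the DMA engine operations (these are not the patent's actual interfaces):

```python
def transfer(data, chunk, ack_received, send_fifo, direct_put):
    """Stream fixed-size memory-FIFO chunks until the RTS
    acknowledgement arrives, then finish the remainder with a single
    direct put, mirroring the claim's decision structure."""
    offset = 0
    while offset < len(data) and not ack_received():
        send_fifo(data[offset:offset + chunk])  # memory FIFO operation
        offset += chunk
    if offset < len(data):
        direct_put(data[offset:])               # direct put of the rest
    return offset                               # bytes sent eagerly

# Toy harness: the acknowledgement "arrives" after two FIFO sends.
log, fifo_count = [], [0]
def ack(): return fifo_count[0] >= 2
def fifo(part): fifo_count[0] += 1; log.append(("fifo", len(part)))
def put(part): log.append(("put", len(part)))
eager = transfer(b"x" * 10, 3, ack, fifo, put)
# Two 3-byte FIFO chunks go out before the ACK; a 4-byte direct put finishes.
```

    The point of the hybrid is latency hiding: useful data moves while the rendezvous handshake is still in flight, and the high-bandwidth direct put takes over as soon as the target is ready.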

  20. Blocking performance approximation in flexi-grid networks

    NASA Astrophysics Data System (ADS)

    Ge, Fei; Tan, Liansheng

    2016-12-01

    The blocking probability of path requests is an important issue in flexible bandwidth optical communications. In this paper, we propose a method for approximating the blocking probability of path requests in flexi-grid networks. It models the bundled allocation of neighboring carriers with a group of birth-death processes and provides a theoretical analysis of the blocking probability under variable-bandwidth traffic. The numerical results show the effect of traffic parameters on the blocking probability of path requests. In simulations, we use the first-fit algorithm in network nodes to allocate neighboring carriers to path requests, and verify the approximation results.
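
    A birth-death blocking model of the kind this abstract mentions can be solved in closed form from its detailed-balance weights. The sketch below is a generic finite birth-death solver, not the paper's multi-carrier model; as a sanity check it reproduces the classical Erlang-B case:

```python
def blocking_probability(birth, death, capacity):
    """Blocking probability of a finite birth-death chain with states
    0..capacity: pi_k is proportional to the product of
    birth[i-1]/death[i] for i = 1..k (detailed balance), and a request
    is blocked when the chain sits in the full state."""
    weights = [1.0]
    for k in range(1, capacity + 1):
        weights.append(weights[-1] * birth[k - 1] / death[k])
    return weights[capacity] / sum(weights)

# Erlang-B sanity check: arrival rate 2, per-carrier service rate 1,
# 3 carriers (death[0] is unused padding).  Expected blocking: 4/19.
b = blocking_probability([2.0, 2.0, 2.0], [0.0, 1.0, 2.0, 3.0], 3)
```

    Flexi-grid traffic with variable carrier bundles would need state-dependent birth rates per bundle size, which is where the paper's group of birth-death processes comes in.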

  1. Research on Robustness of Tree-based P2P Streaming

    NASA Astrophysics Data System (ADS)

    Chu, Chen; Yan, Jinyao; Ding, Kuangzheng; Wang, Xi

    Research on P2P streaming media is a hot topic in the area of Internet technology. It has emerged as a promising technique, bringing a number of unique advantages such as scalability, resilience, and effectiveness in coping with dynamics and heterogeneity. However, there are also many problems in P2P streaming media systems that use a traditional tree-based topology, such as the bandwidth limits between parent and child nodes, and the strong effect that nodes joining or leaving has on the robustness of the tree-based topology. This paper introduces a method of measuring the robustness of a tree-based topology: using network measurement, we observe and record the bandwidth between all the nodes, analyze the correlation between sibling flows, and measure the robustness of the tree-based topology. The results show that, in a tree-based topology, different links with similar routing paths share a bandwidth bottleneck, which reduces the robustness of the tree-based topology.
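
    The shared-bottleneck observation above is typically tested by correlating bandwidth samples of sibling flows: flows whose throughput rises and falls together likely share a bottleneck link. A minimal sketch with made-up sample series (the paper's exact statistic is not stated in this abstract):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length
    bandwidth sample series; values near 1 suggest the two flows
    share a bottleneck."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two sibling flows whose measured bandwidths move in lockstep:
r = pearson([10.0, 6.0, 8.0, 4.0], [5.0, 3.0, 4.0, 2.0])  # close to 1.0
```

    A tree whose sibling flows are highly correlated is fragile: one congested physical link can starve several logical branches at once.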

  2. Army Communicator. Volume 31, Number 1, Winter 2006

    DTIC Science & Technology

    2006-01-01

    material does not represent official policy, thinking, or endorsement by an agency of the U.S. Army. This publication contains no advertising. U.S...exercise, to simulate the bandwidth capacity of a Joint Node Network command post node or an ATM Mobile Subscriber Equipment node. These links were

  3. Data Acquisition Based on Stable Matching of Bipartite Graph in Cooperative Vehicle–Infrastructure Systems †

    PubMed Central

    Tang, Xiaolan; Hong, Donghui; Chen, Wenlong

    2017-01-01

    Existing studies on data acquisition in vehicular networks often take the mobile vehicular nodes as data carriers. However, their autonomous movements, limited resources and security risks impact the quality of services. In this article, we propose a data acquisition model using stable matching of a bipartite graph in cooperative vehicle-infrastructure systems, namely DAS. Contents are distributed to roadside units, while vehicular nodes provide supplementary storage. The original distribution problem is formulated as a stable matching problem on a bipartite graph, where the data items and the storage cells compose the two sides of vertices. Considering the factors relevant to the access ratio and delay, preference rankings for contents and roadside units are calculated, respectively. With a multi-replica preprocessing algorithm to handle the potential one-to-many mapping, the matching problem is addressed in polynomial time. In addition, vehicular nodes carry and forward assistant contents to deliver packets that fail because of bandwidth competition. Furthermore, an incentive strategy is put forward to boost vehicle cooperation and to achieve a fair bandwidth allocation at roadside units. Experiments show that DAS achieves a high access ratio and a small storage cost with an acceptable delay. PMID:28594359
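
    The stable-matching formulation can be illustrated with a one-to-one Gale-Shapley sketch. The paper's model is one-to-many (handled by its multi-replica preprocessing), and the contents, cells, and preference lists below are invented for illustration:

```python
def stable_match(content_prefs, cell_prefs):
    """Gale-Shapley: contents 'propose' to storage cells in preference
    order; each cell keeps the best proposer seen so far.  Returns a
    stable content -> cell assignment."""
    rank = {cell: {c: i for i, c in enumerate(prefs)}
            for cell, prefs in cell_prefs.items()}
    free = list(content_prefs)           # contents not yet placed
    next_choice = {c: 0 for c in content_prefs}
    engaged = {}                         # cell -> content
    while free:
        content = free.pop()
        cell = content_prefs[content][next_choice[content]]
        next_choice[content] += 1
        if cell not in engaged:
            engaged[cell] = content
        elif rank[cell][content] < rank[cell][engaged[cell]]:
            free.append(engaged[cell])   # bump the weaker proposer
            engaged[cell] = content
        else:
            free.append(content)         # rejected; try the next cell
    return {content: cell for cell, content in engaged.items()}

match = stable_match({"A": ["X", "Y"], "B": ["X", "Y"]},
                     {"X": ["B", "A"], "Y": ["A", "B"]})
# Both contents prefer cell X, but X prefers B, so A settles for Y.
```

    Stability here means no content and cell would both rather be matched with each other than with their assigned partners, which is what makes the resulting placement robust to local renegotiation.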

  4. Going End to End to Deliver High-Speed Data

    NASA Technical Reports Server (NTRS)

    2005-01-01

    By the end of the 1990s, the optical fiber "backbone" of the telecommunication and data-communication networks had evolved from megabits-per-second transmission rates to gigabits-per-second transmission rates. Despite this boom in bandwidth, however, users at the end nodes were still not being reached on a consistent basis. (An end node is any device that does not behave like a router or a managed hub or switch. Examples of end node objects are computers, printers, serial interface processor phones, and unmanaged hubs and switches.) The primary reason that bandwidth fails to reach the end nodes is the complex local network topology that exists between the optical backbone and the end nodes. This complex network topology consists of several layers of routing and switch equipment which introduce potential congestion points and network latency. By breaking down the complex network topology, a true optical connection can be achieved. Access Optical Networks, Inc., is making this connection a reality with guidance from NASA's nondestructive evaluation experts.

  5. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    NASA Astrophysics Data System (ADS)

    Dykstra, D.; Bockelman, B.; Blomer, J.; Herner, K.; Levshina, T.; Slyz, M.

    2015-12-01

    A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data is auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job), and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Relatively speaking, conditions data also tends to be relatively small per job where both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality of the auxiliary data access, the use case appears to be identical to that of the CernVM- Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called "alien cache" to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites. 
Files are published in a central place and are soon available on demand throughout the grid and cached locally on the site with a convenient POSIX interface. This paper discusses the details of the architecture and reports performance measurements.
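
The alien-cache arrangement described above can be sketched as a CVMFS client configuration; a minimal sketch, assuming a hypothetical repository name and a shared cache directory mounted from the site's high-bandwidth data servers (e.g. Lustre, HDFS-FUSE, or the dCache NFSv4.1 interface):

```shell
# /etc/cvmfs/default.local -- minimal sketch of an alien-cache client setup.
# The repository name and cache path below are illustrative assumptions.
CVMFS_REPOSITORIES=aux.example.org
CVMFS_HTTP_PROXY=DIRECT

# Point the client at the shared alien cache instead of local disk.
# Quota management is disabled because the shared store is managed externally.
CVMFS_ALIEN_CACHE=/mnt/shared/cvmfs-cache
CVMFS_SHARED_CACHE=no
CVMFS_QUOTA_LIMIT=-1
```

With this configuration, every worker node at the site reads from (and populates) the same cache directory, so a file fetched once is served to all nodes at data-server bandwidth.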

  6. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, D.; Bockelman, B.; Blomer, J.

A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job), and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality of the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called 'alien cache' to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites.
Files are published in a central place and are soon available on demand throughout the grid and cached locally on the site with a convenient POSIX interface. This paper discusses the details of the architecture and reports performance measurements.

  7. The Scalable Checkpoint/Restart Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, A.

The Scalable Checkpoint/Restart (SCR) library provides an interface that codes may use to write out and read in application-level checkpoints in a scalable fashion. In the current implementation, checkpoint files are cached in local storage (hard disk or RAM disk) on the compute nodes. This technique provides scalable aggregate bandwidth and uses storage resources that are fully dedicated to the job. This approach addresses the two common drawbacks of checkpointing a large-scale application to a shared parallel file system, namely, limited bandwidth and file system contention. In fact, on current platforms, SCR scales linearly with the number of compute nodes. It has been benchmarked as high as 720 GB/s on 1094 nodes of Atlas, which is nearly two orders of magnitude faster than the parallel file system.
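
The scaling claim can be illustrated with a back-of-the-envelope model: node-local caching aggregates per-node write bandwidth linearly, while a shared parallel file system is a fixed-size pipe. A minimal sketch (the per-node and file-system bandwidth figures are illustrative assumptions, not measurements from the abstract):

```python
def checkpoint_time(data_per_node_gb, nodes, bw_per_node_gbs=None, shared_bw_gbs=None):
    """Seconds to write one checkpoint, either node-local or to a shared FS."""
    total = data_per_node_gb * nodes
    if bw_per_node_gbs is not None:
        # Node-local: aggregate bandwidth grows linearly with node count.
        return total / (bw_per_node_gbs * nodes)
    # Shared parallel file system: fixed aggregate bandwidth.
    return total / shared_bw_gbs

# Illustrative numbers: 2 GB/node checkpoint on 1094 nodes,
# ~0.66 GB/s local write per node vs a ~10 GB/s parallel file system.
local = checkpoint_time(2, 1094, bw_per_node_gbs=0.66)
shared = checkpoint_time(2, 1094, shared_bw_gbs=10)
```

Under these assumed figures the node-local path is roughly 70x faster, consistent in spirit with the "nearly two orders of magnitude" reported above.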

  8. High-performance parallel processors based on star-coupled wavelength division multiplexing optical interconnects

    DOEpatents

    Deri, Robert J.; DeGroot, Anthony J.; Haigh, Ronald E.

    2002-01-01

As the performance of individual elements within parallel processing systems increases, increased communication capability between distributed processor and memory elements is required. There is great interest in using fiber optics to improve interconnect communication beyond that attainable using electronic technology. Several groups have considered WDM, star-coupled optical interconnects. The invention uses a fiber optic transceiver to provide low latency, high bandwidth channels for such interconnects using a robust multimode fiber technology. Instruction-level simulation is used to quantify the bandwidth, latency, and concurrency required for such interconnects to scale to 256 nodes, each operating at 1 GFLOPS performance. Performance has been shown to scale to approximately 100 GFLOPS for scientific application kernels using a small number of wavelengths (8 to 32), only one wavelength received per node, and achievable optoelectronic bandwidth and latency.

  9. Design and development of broadband piezoelectric vibration energy harvester based on compliant orthoplanar spring

    NASA Astrophysics Data System (ADS)

    Dhote, Sharvari

With advances in technology, the power requirements of sensor nodes have dropped drastically, and piezoelectric vibration energy harvesters can generate sufficient power for low-power sensor nodes. The main requirement of an energy harvester is to provide a broad bandwidth, which a conventional linear harvester does not satisfy. Therefore, the research focus has shifted to exploiting nonlinearity to widen the bandwidth of the harvester. Although nonlinear techniques are promising for broadening the bandwidth, the reverse sweep shows a reduced response compared to the forward sweep. To overcome this issue, this thesis presents the design and development of a broadband piezoelectric vibration energy harvester based on a nonlinear multi-frequency compliant orthoplanar spring. The thesis is divided into three parts. The first part presents the design and experimental study of a tri-leg compliant orthoplanar spring for broadband energy harvesting. The harvester performance is enhanced through the use of lightweight masses, which bring the nonlinear vibration modes closer together. The performance of the harvester is analyzed through a mathematical model based on the Duffing oscillator, and the experimental and numerical results are in good agreement. A parametric study shows that optimum performance is achieved by further reducing the gap between the vibration modes using masses of different weights. In the second part of the research, multiple-leg (bi-, quad- and penta-leg) compliant orthoplanar springs are designed to understand their role in expanding the bandwidth and reducing the gap between vibration modes. The designed harvesters are compared by calculating figures of merit. The quad-leg design provides the best performance in terms of power density and bandwidth among all the designs, and its reverse-sweep response is comparable to the forward sweep in terms of bandwidth.
In the final part, a magnetic force is applied to the tri-leg harvester, which enhanced the voltage output and bandwidth. In addition, vibration modes have been brought even closer by reducing the gap between the modes. Overall, the proposed harvester performance is significantly improved using multiple legs attached with piezoelectric plates and masses, bringing the modes closer in the forward and reverse sweeps, making it advantageous to harvest energy from wideband environmental vibrations.

  10. Stanford Hardware Development Program

    NASA Technical Reports Server (NTRS)

    Peterson, A.; Linscott, I.; Burr, J.

    1986-01-01

Architectures for high-performance digital signal processing, particularly for high-resolution, wideband spectrum analysis, were developed. These developments are intended to provide instrumentation for NASA's Search for Extraterrestrial Intelligence (SETI) program. The real-time signal processing work is both formal and experimental. The efficient organization and optimal scheduling of signal processing algorithms were investigated. The work is complemented by efforts in processor architecture design and implementation. A high-resolution, multichannel spectrometer that incorporates special-purpose microcoded signal processors is being tested. A general-purpose signal processor for the data from the multichannel spectrometer was designed to function as the processing element in a highly concurrent machine. The processor performance required for the spectrometer is in the range of 1000 to 10,000 million instructions per second (MIPS). Multiple-node processor configurations, where each node performs at 100 MIPS, are sought. The nodes are microprogrammable and are interconnected through a network with high bandwidth for neighboring nodes and medium bandwidth for nodes at larger distances. The implementation of both the current multichannel spectrometer and the signal processor as Very Large Scale Integration CMOS chip sets was commenced.

  11. Designing Two-Layer Optical Networks with Statistical Multiplexing

    NASA Astrophysics Data System (ADS)

    Addis, B.; Capone, A.; Carello, G.; Malucelli, F.; Fumagalli, M.; Pedrinelli, E.

The possibility of adding multi-protocol label switching (MPLS) support to transport networks is considered an important opportunity by telecom carriers that want to add packet services and applications to their networks. However, the question that arises is whether it is suitable to have MPLS nodes just at the edge of the network to collect packet traffic from users, or also to introduce MPLS facilities on a subset of the core nodes in order to exploit packet switching flexibility and multiplexing, thus inducing better bandwidth allocation. In this article, we address this complex decisional problem with the support of a mathematical programming approach. We consider two-layer networks where MPLS is overlaid on top of transport networks, either synchronous digital hierarchy (SDH) or wavelength division multiplexing (WDM), depending on the required link speed. The decisions take into account the trade-off between the cost of adding MPLS support in the core nodes and the savings in the link bandwidth allocation due to the statistical multiplexing and the traffic grooming effects induced by MPLS nodes. The traffic matrix specifies for each point-to-point request a pair of values: a mean traffic value and an additional one. Using this traffic model, the effect of statistical multiplexing on a link allows the allocation of a capacity equal to the sum of all the mean values of the traffic demands routed on the link plus only the highest additional one. The proposed approach is suitable for solving real instances in reasonable time.
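
The statistical-multiplexing rule quoted above (sum of all mean rates plus only the single largest additional rate) can be sketched directly; the demand values are hypothetical:

```python
def link_capacity(demands):
    """Capacity to allocate on a link under the mean-plus-one-peak rule:
    the sum of all mean rates, plus only the single largest additional rate.
    `demands` is a list of (mean, additional) pairs routed on the link."""
    if not demands:
        return 0
    return sum(m for m, _ in demands) + max(a for _, a in demands)

# Three demands share a link: a deterministic (peak-rate) allocation would
# need (10+5) + (20+8) + (5+2) = 50 units; the statistical rule needs 43.
cap = link_capacity([(10, 5), (20, 8), (5, 2)])
```

The saving grows with the number of demands groomed onto the link, which is exactly the effect the core MPLS nodes are introduced to exploit.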

  12. Modeling a Million-Node Slim Fly Network Using Parallel Discrete-Event Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfe, Noah; Carothers, Christopher; Mubarak, Misbah

As supercomputers close in on exascale performance, the increased number of processors and processing power translates to an increased demand on the underlying network interconnect. The Slim Fly network topology, a new low-diameter and low-latency interconnection network, is gaining interest as one possible solution for next-generation supercomputing interconnect systems. In this paper, we present a high-fidelity Slim Fly flit-level model leveraging the Rensselaer Optimistic Simulation System (ROSS) and Co-Design of Exascale Storage (CODES) frameworks. We validate our Slim Fly model against the Kathareios et al. Slim Fly model results provided at moderately sized network scales. We further scale the model size up to an unprecedented 1 million compute nodes; and through visualization of network simulation metrics such as link bandwidth, packet latency, and port occupancy, we gain insight into the network behavior at the million-node scale. We also show linear strong scaling of the Slim Fly model on an Intel cluster, achieving a peak event rate of 36 million events per second using 128 MPI tasks to process 7 billion events. Detailed analysis of the underlying discrete-event simulation performance shows that a million-node Slim Fly model simulation can execute in 198 seconds on the Intel cluster.
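
The reported event rate and runtime are mutually consistent, as a quick check shows (the small gap between the ideal and measured runtime reflects the difference between peak and average event rate):

```python
events = 7_000_000_000      # total events processed, from the abstract
peak_rate = 36_000_000      # peak events/second, from the abstract

ideal_runtime = events / peak_rate   # ~194 s if the peak rate were sustained
avg_rate = events / 198              # average rate implied by the measured 198 s
```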

  13. Characterizing output bottlenecks in a supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Bing; Chase, Jeffrey; Dillow, David A

    2012-01-01

Supercomputer I/O loads are often dominated by writes. HPC (High Performance Computing) file systems are designed to absorb these bursty outputs at high bandwidth through massive parallelism. However, the delivered write bandwidth often falls well below the peak. This paper characterizes the data absorption behavior of a center-wide shared Lustre parallel file system on the Jaguar supercomputer. We use a statistical methodology to address the challenges of accurately measuring a shared machine under production load and to obtain the distribution of bandwidth across samples of compute nodes, storage targets, and time intervals. We observe and quantify limitations from competing traffic, contention on storage servers and I/O routers, concurrency limitations in the client compute node operating systems, and the impact of variance (stragglers) on coupled output such as striping. We then examine the implications of our results for application performance and the design of I/O middleware systems on shared supercomputers.

  14. Livermore Big Artificial Neural Network Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Essen, Brian Van; Jacobs, Sam; Kim, Hyojin

    2016-07-01

LBANN is a toolkit designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key High Performance Computing features to accelerate neural network training: low-latency, high-bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high-bandwidth parallel file systems. It is built on top of the open-source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library, which is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.

  15. LWT Based Sensor Node Signal Processing in Vehicle Surveillance Distributed Sensor Network

    NASA Astrophysics Data System (ADS)

    Cha, Daehyun; Hwang, Chansik

Previous vehicle surveillance research on distributed sensor networks focused on overcoming power limitations and communication bandwidth constraints in the sensor node. In spite of these constraints, a vehicle surveillance sensor node must perform signal compression, feature extraction, target localization, noise cancellation and collaborative signal processing with low computation and communication energy dissipation. In this paper, we introduce an algorithm for light-weight wireless sensor node signal processing based on lifting-scheme wavelet analysis feature extraction in a distributed sensor network.

  16. A batch-fabricated electret-biased wideband MEMS vibration energy harvester with frequency-up conversion behavior powering a UHF wireless sensor node

    NASA Astrophysics Data System (ADS)

    Lu, Y.; O'Riordan, E.; Cottone, F.; Boisseau, S.; Galayko, D.; Blokhina, E.; Marty, F.; Basset, P.

    2016-12-01

This paper reports a batch-fabricated, low-frequency and wideband MEMS electrostatic vibration energy harvester (e-VEH), which implements corona-charged vertical electrets and nonlinear elastic stoppers. A numerical model is used to perform a parametric study, in which we observe a wideband bi-modality resulting from the nonlinearity. The nonlinear stoppers improve the bandwidth and induce a frequency-up feature at low frequencies. When the e-VEH works with a bias of 45 V, the power reaches a maximum value of 6.6 μW at 428 Hz and 2.0 g rms, and is above 1 μW at 50 Hz. When the frequency drops below 60 Hz, a ‘frequency-up’ conversion behavior is observed with peaks of power at 34 Hz and 52 Hz. The -3 dB bandwidth is more than 60% of the central frequency, both including and excluding the hysteresis introduced by the nonlinear stoppers. We also perform experiments with wideband Gaussian noise. The device is eventually tested with an RF data transmission setup, where a communication node with an internal temperature sensor is powered. Every 2 min, a data transmission at 868 MHz is performed by the sensor node supplied by the e-VEH, and received at a distance of up to 15 m.

  17. Coalition Game-Based Secure and Effective Clustering Communication in Vehicular Cyber-Physical System (VCPS).

    PubMed

    Huo, Yan; Dong, Wei; Qian, Jin; Jing, Tao

    2017-02-27

In this paper, we address the low efficiency of cluster-based communication for the crossroad scenario in the Vehicular Cyber-Physical System (VCPS), which is due to the overload of the cluster head resulting from a large number of transmission bandwidth requirements. After formulating the issue as a coalition formation game, a coalition-based clustering strategy is proposed, which could converge into a Nash-stable partition to accomplish the clustering formation process. In the proposed strategy, the coalition utility is formulated by the relative velocity, relative position and the bandwidth availability ratio of vehicles among the cluster. Employing the coalition utility, the vehicles are denoted as the nodes that make the decision whether to switch to a new coalition or stay in the current coalition. Based on this, we can make full use of the bandwidth provided by cluster head under the requirement of clustering stability. Nevertheless, there exist selfish nodes during the clustering formation, so as to intend to benefit from networks. This behavior may degrade the communication quality and even destroy the cluster. Thus, we also present a reputation-based incentive and penalty mechanism to stop the selfish nodes from entering clusters. Numerical simulation results show that our strategy, CG-SECC, takes on a better performance for the tradeoff between the stability and efficiency of clustering communication. Besides, a case study demonstrates that the proposed incentive and penalty mechanism can play an important role in discovering and removing malicious nodes.
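
The switch-or-stay decision that drives the Nash-stable partition can be sketched as a hedonic switch rule: a node moves to a new coalition only if its utility strictly improves, and a partition in which no node wants to move is Nash-stable. The toy utility below (prefer larger coalitions up to a bandwidth cap) is a stand-in for the paper's CG-SECC utility, which combines relative velocity, relative position and bandwidth availability:

```python
def switch_if_better(node, current, candidate, utility):
    """Hedonic switch rule: move `node` from `current` to `candidate`
    iff its utility strictly improves; otherwise stay."""
    if utility(node, candidate | {node}) > utility(node, current):
        current.discard(node)
        candidate.add(node)
        return True
    return False

# Toy utility: prefer larger coalitions, but a cluster head can only serve
# up to 4 members' bandwidth requests (hypothetical cap).
def utility(node, coalition):
    return len(coalition) if len(coalition) <= 4 else -1

a, b = {1}, {3, 4}
moved = switch_if_better(1, a, b, utility)  # node 1 joins the larger coalition
```

Iterating this rule over all nodes until no one switches yields a Nash-stable clustering.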

  18. Coalition Game-Based Secure and Effective Clustering Communication in Vehicular Cyber-Physical System (VCPS)

    PubMed Central

    Huo, Yan; Dong, Wei; Qian, Jin; Jing, Tao

    2017-01-01

    In this paper, we address the low efficiency of cluster-based communication for the crossroad scenario in the Vehicular Cyber-Physical System (VCPS), which is due to the overload of the cluster head resulting from a large number of transmission bandwidth requirements. After formulating the issue as a coalition formation game, a coalition-based clustering strategy is proposed, which could converge into a Nash-stable partition to accomplish the clustering formation process. In the proposed strategy, the coalition utility is formulated by the relative velocity, relative position and the bandwidth availability ratio of vehicles among the cluster. Employing the coalition utility, the vehicles are denoted as the nodes that make the decision whether to switch to a new coalition or stay in the current coalition. Based on this, we can make full use of the bandwidth provided by cluster head under the requirement of clustering stability. Nevertheless, there exist selfish nodes during the clustering formation, so as to intend to benefit from networks. This behavior may degrade the communication quality and even destroy the cluster. Thus, we also present a reputation-based incentive and penalty mechanism to stop the selfish nodes from entering clusters. Numerical simulation results show that our strategy, CG-SECC, takes on a better performance for the tradeoff between the stability and efficiency of clustering communication. Besides, a case study demonstrates that the proposed incentive and penalty mechanism can play an important role in discovering and removing malicious nodes. PMID:28264469

  19. Distributed cluster management techniques for unattended ground sensor networks

    NASA Astrophysics Data System (ADS)

    Essawy, Magdi A.; Stelzig, Chad A.; Bevington, James E.; Minor, Sharon

    2005-05-01

Smart Sensor Networks are becoming important target detection and tracking tools. The challenging problems in such networks include sensor fusion, data management and communication schemes. This work discusses techniques used to distribute sensor management and multi-target tracking responsibilities across an ad hoc, self-healing cluster of sensor nodes. Although miniaturized computing resources possess the ability to host complex tracking and data fusion algorithms, there still exist inherent bandwidth constraints on the RF channel. Therefore, special attention is placed on the reduction of node-to-node communications within the cluster by minimizing unsolicited messaging, and distributing the sensor fusion and tracking tasks onto local portions of the network. Several challenging problems are addressed in this work, including track initialization and conflict resolution, track ownership handling, and communication control optimization. Emphasis is also placed on increasing the overall robustness of the sensor cluster through independent decision capabilities on all sensor nodes. Track initiation is performed using collaborative sensing within a neighborhood of sensor nodes, allowing each node to independently determine if initial track ownership should be assumed. This autonomous track initiation prevents the formation of duplicate tracks while eliminating the need for a central "management" node to assign tracking responsibilities. Track update is performed as an ownership node requests sensor reports from neighboring nodes based on track error covariance and the neighboring nodes' geo-positional locations. Track ownership is periodically recomputed using propagated track states to determine which sensing node provides the desired coverage characteristics.
High-fidelity multi-target simulation results are presented, indicating that distributing sensor management and tracking capabilities not only reduces communication bandwidth consumption, but also simplifies multi-target tracking within the cluster.

  20. Building a Terabyte Memory Bandwidth Compute Node with Four Consumer Electronics GPUs

    NASA Astrophysics Data System (ADS)

    Omlin, Samuel; Räss, Ludovic; Podladchikov, Yuri

    2014-05-01

GPUs released for consumer electronics are generally built with the same chip architectures as the GPUs released for professional use. With regard to scientific computing, there are no obvious important differences in functionality or performance between the two types of releases, yet the price can differ by up to one order of magnitude. For example, the consumer electronics release of the most recent NVIDIA Kepler architecture (GK110), named GeForce GTX TITAN, performed as well in the conducted memory bandwidth tests as the professional release, named Tesla K20, while costing about one third as much. We explain how to design and assemble a well-adjusted computer with four high-end consumer electronics GPUs (GeForce GTX TITAN) combining more than 1 terabyte/s of memory bandwidth. We compare the system's performance and precision with that of hardware released for professional use. The system can be used as a powerful workstation for scientific computing or as a compute node in a home-built GPU cluster.

  1. An Incentive Based Approach to Detect Selfish Nodes in Mobile P2P Network

    DTIC Science & Technology

    2011-01-01

    also listens to the packet if it is in promiscuous mode. So node 1 is sure that node 8 2 has forwarded the packet if it is able to hear the packet...3) where R represents the maximum distance a transmission can be sent, λ = Vw/f ≈ Vw/B assuming bandwidth...a customized routing protocol and explore new methods to find credibility. 44 REFERENCES [1] Refaei M.T, Vivek Srivastava

  2. Secure Fusion Estimation for Bandwidth Constrained Cyber-Physical Systems Under Replay Attacks.

    PubMed

    Chen, Bo; Ho, Daniel W C; Hu, Guoqiang; Yu, Li

    2018-06-01

State estimation plays an essential role in the monitoring and supervision of cyber-physical systems (CPSs), and its importance has made the security and estimation performance a major concern. In this case, multisensor information fusion estimation (MIFE) provides an attractive alternative to study secure estimation problems because MIFE can potentially improve estimation accuracy and enhance reliability and robustness against attacks. From the perspective of the defender, the secure distributed Kalman fusion estimation problem is investigated in this paper for a class of CPSs under replay attacks, where each local estimate obtained by the sink node is transmitted to a remote fusion center through bandwidth-constrained communication channels. A new mathematical model with a compensation strategy is proposed to characterize the replay attacks and bandwidth constraints, and then a recursive distributed Kalman fusion estimator (DKFE) is designed in the linear minimum variance sense. According to different communication frameworks, two classes of data compression and compensation algorithms are developed such that the DKFEs can achieve the desired performance. Several attack-dependent and bandwidth-dependent conditions are derived such that the DKFEs are secure under replay attacks. An illustrative example is given to demonstrate the effectiveness of the proposed methods.
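
The fusion step at the center can be illustrated with the simplest scalar case: inverse-variance weighting of the local estimates. This is a textbook fusion rule standing in for the paper's DKFE (which handles correlated estimates, compression and attack compensation), and the numbers are hypothetical:

```python
def fuse(estimates):
    """Fuse independent scalar local estimates (value, variance) by
    inverse-variance weighting; the fused variance is never worse than
    the best local one."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * x for w, (x, _) in zip(weights, estimates)) / total
    return value, 1.0 / total

# Two sink-node estimates of the same state, with different accuracies.
fused_value, fused_var = fuse([(10.0, 4.0), (12.0, 1.0)])
```

The more accurate local estimate dominates the fused value, which is why fusing many sensors improves robustness against a single corrupted channel.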

  3. A novel communication mechanism based on node potential multi-path routing

    NASA Astrophysics Data System (ADS)

    Bu, Youjun; Zhang, Chuanhao; Jiang, YiMing; Zhang, Zhen

    2016-10-01

As networks scale rapidly and new network applications emerge frequently, bandwidth supply for today's Internet cannot catch up with the rapidly increasing requirements. Unfortunately, irrational use of network resources makes things worse. Current networks deploy single-next-hop optimized paths for data transmission, but such a "best effort" model leads to imbalanced use of network resources and often causes local congestion. On the other hand, multi-path routing can efficiently use the aggregate bandwidth of multiple paths and improve the robustness, security, load balancing and quality of service of the network. As a result, multi-path routing has attracted much attention in the routing and switching research fields, and many important ideas and solutions have been proposed. This paper focuses on implementing the parallel transmission of data over multiple next hops, balancing network traffic and reducing congestion. It aims at exploring the key technologies of multi-path communication networks, which could provide a feasible academic basis for subsequent applications of multi-path communication networking. It proposes a novel multi-path algorithm based on node potential in the network, which can make full use of network link resources and effectively balance link utilization.
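
A minimal sketch of potential-based multi-path forwarding, assuming the potential is simply hop distance to the destination (the abstract does not specify the paper's actual potential function): each node may split traffic across all neighbors with strictly lower potential, which yields loop-free parallel next hops.

```python
from collections import deque

def potentials(adj, dst):
    """Hop-count potential of every node toward `dst` (BFS from `dst`)."""
    dist = {dst: 0}
    q = deque([dst])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def next_hops(adj, node, dst):
    """All neighbors usable in parallel: those with strictly lower potential."""
    p = potentials(adj, dst)
    return sorted(v for v in adj[node] if p[v] < p[node])

# Diamond topology: A reaches D through both B and C in parallel.
adj = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
hops = next_hops(adj, "A", "D")
```

Because every forwarding step strictly decreases the potential, traffic split over any subset of these next hops cannot loop.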

  4. On service differentiation in mobile Ad Hoc networks.

    PubMed

    Zhang, Shun-liang; Ye, Cheng-qing

    2004-09-01

A network model is proposed to support service differentiation for mobile Ad Hoc networks by combining a fully distributed admission control approach and the DIFS-based differentiation mechanism of IEEE 802.11. It can provide different kinds of QoS (Quality of Service) for various applications. Admission controllers determine a committed bandwidth based on the reserved bandwidth of flows and the resource utilization of the network. Packets are marked by markers according to the committed rate when entering the network. Based on the mark in the packet header, intermediate nodes handle received packets in different manners to provide applications with the QoS corresponding to the pre-negotiated profile. Extensive simulation experiments showed that the proposed mechanism can provide QoS guarantees to assured service traffic and increase the channel utilization of the network.
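
The marker described above can be sketched as a committed-rate meter: packets within the committed bandwidth are marked in-profile, the rest out-of-profile, so intermediate nodes can degrade out-of-profile packets first. A token-bucket sketch with hypothetical parameters (this is the generic metering technique, not the paper's exact marker):

```python
class Marker:
    """Mark packets IN/OUT of profile against a committed rate (token bucket)."""

    def __init__(self, committed_bps, burst_bytes):
        self.rate = committed_bps / 8.0   # refill rate in bytes per second
        self.tokens = float(burst_bytes)  # bucket starts full
        self.burst = float(burst_bytes)
        self.last = 0.0

    def mark(self, size_bytes, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return "IN"
        return "OUT"

m = Marker(committed_bps=8000, burst_bytes=1500)  # 1 kB/s committed rate
marks = [m.mark(1000, t) for t in (0.0, 0.1, 2.0)]
```

The second packet arrives before enough tokens accumulate and is marked out-of-profile; after a quiet interval the bucket refills and marking returns to in-profile.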

  5. Survey and analysis of satellite-based telemedicine projects involving Japan and developing nations: investigation of transmission rates, channel numbers, and node numbers.

    PubMed

    Nakajima, I; Natori, M; Takizawa, M; Kaihara, S

    2001-01-01

We surveyed interactive telemedicine projects conducted via telecommunications satellite (AMINE-PARTNERS, Post-PARTNERS, and the Shinshu University Project using Inmarsat satellites) offered by Japan as assistance to developing countries. The survey helped clarify channel occupation time and data transfer rates. Using our survey results, we proposed an optimized satellite model with VSATs, simulating the number of required channels and the bandwidth magnitude. For future implementation of VSATs for medical use in developing nations, the design of telecommunication channels should take TCP/IP-based operations into consideration. We calculated that one hub station with 30-76 VSATs in a developing nation can be operated on a bandwidth of 6 Mbps with a 128 kbps videoconferencing system for teleconsultation and teleconferencing, and with a link to the Internet.
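
The quoted sizing can be checked with simple channel arithmetic; statistical sharing (not every VSAT conferencing at once) is what lets 30-76 terminals share one hub:

```python
hub_bps = 6_000_000    # 6 Mbps hub bandwidth, from the survey
channel_bps = 128_000  # one 128 kbps videoconferencing channel

simultaneous = hub_bps // channel_bps  # concurrent video channels the hub carries
```

With 30 VSATs every terminal could hold a session at once; with 76 VSATs roughly 60% could be active simultaneously, which is a plausible concurrency assumption for scheduled teleconsultations.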

  6. Recent Performance Results of VPIC on Trinity

    NASA Astrophysics Data System (ADS)

    Nystrom, W. D.; Bergen, B.; Bird, R. F.; Bowers, K. J.; Daughton, W. S.; Guo, F.; Le, A.; Li, H.; Nam, H.; Pang, X.; Stark, D. J.; Rust, W. N., III; Yin, L.; Albright, B. J.

    2017-10-01

Trinity is a new DOE compute resource now in production at Los Alamos National Laboratory. Trinity has several new and unique features, including two compute partitions (one with dual-socket Intel Haswell Xeon compute nodes and one with Intel Knights Landing (KNL) Xeon Phi compute nodes), use of on-package high bandwidth memory (HBM) on the KNL nodes, the ability to configure KNL nodes with respect to HBM mode and on-die network topology in a variety of operational modes at run time, and use of solid state storage via burst buffer technology to reduce the time required to perform I/O. An effort is in progress to optimize VPIC on Trinity by taking advantage of these new architectural features. Results will be presented on the performance of VPIC on the Haswell and KNL partitions for single-node runs and runs at scale. Results include the use of burst buffers at scale to optimize I/O, a comparison of strategies for using MPI and threads, the performance benefits of using HBM, and the effectiveness of using intrinsics for vectorization. Work performed under the auspices of the U.S. Dept. of Energy by Los Alamos National Security, LLC, Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  7. Highly efficient frequency conversion with bandwidth compression of quantum light

    PubMed Central

    Allgaier, Markus; Ansari, Vahid; Sansoni, Linda; Eigner, Christof; Quiring, Viktor; Ricken, Raimund; Harder, Georg; Brecht, Benjamin; Silberhorn, Christine

    2017-01-01

    Hybrid quantum networks rely on efficient interfacing of dissimilar quantum nodes, as elements based on parametric downconversion sources, quantum dots, colour centres or atoms are fundamentally different in their frequencies and bandwidths. Although pulse manipulation has been demonstrated in very different systems, to date no interface exists that provides both an efficient bandwidth compression and a substantial frequency translation at the same time. Here we demonstrate an engineered sum-frequency-conversion process in lithium niobate that achieves both goals. We convert pure photons at telecom wavelengths to the visible range while compressing the bandwidth by a factor of 7.47 under preservation of non-classical photon-number statistics. We achieve internal conversion efficiencies of 61.5%, significantly outperforming spectral filtering for bandwidth compression. Our system thus makes the connection between previously incompatible quantum systems as a step towards usable quantum networks. PMID:28134242

  8. Enhanced compressed sensing for visual target tracking in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Qiang, Guo

    2017-11-01

Moving object tracking in wireless sensor networks (WSNs) has been widely applied in various fields. Designing low-power WSNs within the sensor's limited resources, such as energy and bandwidth constraints, is a high priority. However, most existing works focus on only a single optimization criterion. An efficient compressive sensing technique based on a customized memory gradient pursuit algorithm with early termination in WSNs is presented, which strikes compelling trade-offs among energy dissipation for wireless transmission, bandwidth usage, and minimum storage. The proposed approach then adopts an unscented particle filter to predict the location of the target. Experimental results, together with a theoretical analysis, demonstrate that the proposed model and framework are substantially more effective in terms of energy and speed under the resource limitations of a visual sensor node.

  9. Method and apparatus for routing data in an inter-nodal communications lattice of a massively parallel computer system by employing bandwidth shells at areas of overutilization

    DOEpatents

    Archer, Charles Jens; Musselman, Roy Glenn; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen; Wallenfelt, Brian Paul

    2010-04-27

    A massively parallel computer system contains an inter-nodal communications network of node-to-node links. An automated routing strategy routes packets through one or more intermediate nodes of the network to reach a final destination. The default routing strategy is altered responsive to detection of overutilization of a particular path of one or more links, and at least some traffic is re-routed by distributing the traffic among multiple paths (which may include the default path). An alternative path may require a greater number of link traversals to reach the destination node.

  10. Low latency, high bandwidth data communications between compute nodes in a parallel computer

    DOEpatents

    Blocksome, Michael A

    2014-04-01

Methods, systems, and products are disclosed for data transfers between nodes in a parallel computer that include: receiving, by an origin DMA on an origin node, a buffer identifier for a buffer containing data for transfer to a target node; sending, by the origin DMA to the target node, an RTS message; transferring, by the origin DMA, a data portion to the target node using a memory FIFO operation that specifies one end of the buffer from which to begin transferring the data; receiving, by the origin DMA, an acknowledgement of the RTS message from the target node; and transferring, by the origin DMA in response to receiving the acknowledgement, any remaining data portion to the target node using a direct put operation that specifies the other end of the buffer from which to begin transferring the data, including initiating the direct put operation without invoking an origin processing core.
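The two-phase transfer described in this patent family can be sketched as a small simulation. The Python below is illustrative only (the function name, chunk sizes, and ack timing are invented, and real DMA hardware is not modeled): eager memory-FIFO chunks flow from one end of the buffer until the RTS acknowledgement arrives, after which a single direct put drains the remainder.

```python
# Toy model of the two-ended buffer transfer described above. Phase 1 streams
# memory-FIFO chunks from the front of the buffer; once the RTS acknowledgement
# arrives (modeled here as a chunk count), phase 2 performs one "direct put" of
# the remainder, with no further origin-core involvement.

def transfer(buffer, fifo_chunk, ack_after_chunks):
    """Return the reassembled target-side buffer."""
    n = len(buffer)
    target = [None] * n
    front = 0
    # Phase 1: memory FIFO operation, starting from the front of the buffer.
    for _ in range(ack_after_chunks):
        chunk = buffer[front:front + fifo_chunk]
        target[front:front + len(chunk)] = chunk
        front += len(chunk)
        if front >= n:
            break                      # ack arrived after all data was sent
    # Phase 2: direct put of any remaining portion (rest of the buffer).
    target[front:] = buffer[front:]
    return target

data = list(range(50))
assert transfer(data, fifo_chunk=4, ack_after_chunks=2) == data
```

Both phases complete regardless of when the acknowledgement arrives, which mirrors why the scheme achieves low latency: useful data moves while the handshake is still in flight.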

  11. Low latency, high bandwidth data communications between compute nodes in a parallel computer

    DOEpatents

    Blocksome, Michael A

    2014-04-22

Methods, systems, and products are disclosed for data transfers between nodes in a parallel computer that include: receiving, by an origin DMA on an origin node, a buffer identifier for a buffer containing data for transfer to a target node; sending, by the origin DMA to the target node, an RTS message; transferring, by the origin DMA, a data portion to the target node using a memory FIFO operation that specifies one end of the buffer from which to begin transferring the data; receiving, by the origin DMA, an acknowledgement of the RTS message from the target node; and transferring, by the origin DMA in response to receiving the acknowledgement, any remaining data portion to the target node using a direct put operation that specifies the other end of the buffer from which to begin transferring the data, including initiating the direct put operation without invoking an origin processing core.

  12. Low latency, high bandwidth data communications between compute nodes in a parallel computer

    DOEpatents

    Blocksome, Michael A

    2013-07-02

Methods, systems, and products are disclosed for data transfers between nodes in a parallel computer that include: receiving, by an origin DMA on an origin node, a buffer identifier for a buffer containing data for transfer to a target node; sending, by the origin DMA to the target node, an RTS message; transferring, by the origin DMA, a data portion to the target node using a memory FIFO operation that specifies one end of the buffer from which to begin transferring the data; receiving, by the origin DMA, an acknowledgement of the RTS message from the target node; and transferring, by the origin DMA in response to receiving the acknowledgement, any remaining data portion to the target node using a direct put operation that specifies the other end of the buffer from which to begin transferring the data, including initiating the direct put operation without invoking an origin processing core.

  13. Implementation of Distributed Services for a Deep Sea Moored Instrument Network

    NASA Astrophysics Data System (ADS)

    Oreilly, T. C.; Headley, K. L.; Risi, M.; Davis, D.; Edgington, D. R.; Salamy, K. A.; Chaffey, M.

    2004-12-01

    The Monterey Ocean Observing System (MOOS) is a moored observatory network consisting of interconnected instrument nodes on the sea surface, midwater, and deep sea floor. We describe Software Infrastructure and Applications for MOOS ("SIAM"), which implement the management, control, and data acquisition infrastructure for the moored observatory. Links in the MOOS network include fiber-optic and 10-BaseT copper connections between the at-sea nodes. A Globalstar satellite transceiver or 900 MHz Freewave terrestrial line-of-sight RF modem provides the link to shore. All of these links support Internet protocols, providing TCP/IP connectivity throughout a system that extends from shore to sensor nodes at the air-sea interface, through the oceanic water column to a benthic network of sensor nodes extending across the deep sea floor. Exploiting this TCP/IP infrastructure as well as capabilities provided by MBARI's MOOS mooring controller, we use powerful Internet software technologies to implement a distributed management, control and data acquisition system for the moored observatory. The system design meets the demanding functional requirements specified for MOOS. Nodes and their instruments are represented by Java RMI "services" having well defined software interfaces. Clients anywhere on the network can interact with any node or instrument through its corresponding service. A client may be on the same node as the service, may be on another node, or may reside on shore. Clients may be human, e.g. when a scientist on shore accesses a deployed instrument in real-time through a user interface. Clients may also be software components that interact autonomously with instruments and nodes, e.g. for purposes such as system resource management or autonomous detection and response to scientifically interesting events. 
All electrical power to the moored network is provided by solar and wind energy, and the RF shore-to-mooring links are intermittent and relatively low-bandwidth connections. Thus power and wireless bandwidth are limited resources that constrain our choice of service technologies and wireless access strategy. We describe and evaluate system performance in light of actual deployment of observatory elements in Monterey Bay, and discuss how the system can be developed further. We also consider management and control strategies for the cable-to-shore observatory known as MARS ("Monterey Accelerated Research System"). The MARS cable will provide high power and continuous high-bandwidth connectivity between seafloor instrument nodes and shore, thus removing key limitations of the moored observatory. Moreover, MARS functional requirements may differ significantly from MOOS requirements. In light of these differences, we discuss how elements of our MOOS moored observatory architecture might be adapted to MARS.

  14. Circuit-Switched Memory Access in Photonic Interconnection Networks for High-Performance Embedded Computing

    DTIC Science & Technology

    2010-07-22

dependent, providing a natural bandwidth match between compute cores and the memory subsystem. • High Bandwidth Density. Waveguides crossing the chip...simulate this memory access architecture on a 256-core chip with a concentrated 64-node network using detailed traces of high-performance embedded...memory modules, we place memory access points (MAPs) around the periphery of the chip connected to the network. These MAPs, shown in Figure 4, contain

  15. Wireless visual sensor network resource allocation using cross-layer optimization

    NASA Astrophysics Data System (ADS)

    Bentley, Elizabeth S.; Matyjas, John D.; Medley, Michael J.; Kondi, Lisimachos P.

    2009-01-01

    In this paper, we propose an approach to manage network resources for a Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network where nodes monitor scenes with varying levels of motion. It uses cross-layer optimization across the physical layer, the link layer and the application layer. Our technique simultaneously assigns a source coding rate, a channel coding rate, and a power level to all nodes in the network based on one of two criteria that maximize the quality of video of the entire network as a whole, subject to a constraint on the total chip rate. One criterion results in the minimal average end-to-end distortion amongst all nodes, while the other criterion minimizes the maximum distortion of the network. Our approach allows one to determine the capacity of the visual sensor network based on the number of nodes and the quality of video that must be transmitted. For bandwidth-limited applications, one can also determine the minimum bandwidth needed to accommodate a number of nodes with a specific target chip rate. Video captured by a sensor node camera is encoded and decoded using the H.264 video codec by a centralized control unit at the network layer. To reduce the computational complexity of the solution, Universal Rate-Distortion Characteristics (URDCs) are obtained experimentally to relate bit error probabilities to the distortion of corrupted video. Bit error rates are found first by using Viterbi's upper bounds on the bit error probability and second, by simulating nodes transmitting data spread by Total Square Correlation (TSC) codes over a Rayleigh-faded DS-CDMA channel and receiving that data using Auxiliary Vector (AV) filtering.

  16. The P-Mesh: A Commodity-based Scalable Network Architecture for Clusters

    NASA Technical Reports Server (NTRS)

    Nitzberg, Bill; Kuszmaul, Chris; Stockdale, Ian; Becker, Jeff; Jiang, John; Wong, Parkson; Tweten, David (Technical Monitor)

    1998-01-01

We designed a new network architecture, the P-Mesh, which combines the scalability and fault resilience of a torus with the performance of a switch. We compare the scalability, performance, and cost of the hub, switch, torus, tree, and P-Mesh architectures. The latter three are capable of scaling to thousands of nodes; however, the torus has severe performance limitations with that many processors. The tree and P-Mesh have similar latency, bandwidth, and bisection bandwidth, but the P-Mesh outperforms the switch architecture (a lower bound for tree performance) on 16-node NAS Parallel Benchmark tests by up to 23%, and costs 40% less. Further, the P-Mesh has better fault resilience characteristics. The P-Mesh architecture trades increased management overhead for lower cost, and is a good bridging technology while tree uplinks remain expensive.

  17. Fully-elastic multi-granular network with space/frequency/time switching using multi-core fibres and programmable optical nodes.

    PubMed

    Amaya, N; Irfan, M; Zervas, G; Nejabati, R; Simeonidou, D; Sakaguchi, J; Klaus, W; Puttnam, B J; Miyazawa, T; Awaji, Y; Wada, N; Henning, I

    2013-04-08

    We present the first elastic, space division multiplexing, and multi-granular network based on two 7-core MCF links and four programmable optical nodes able to switch traffic utilising the space, frequency and time dimensions with over 6000-fold bandwidth granularity. Results show good end-to-end performance on all channels with power penalties between 0.75 dB and 3.7 dB.

  18. Downhole drilling network using burst modulation techniques

    DOEpatents

Hall, David R.; Fox, Joe [Spanish Fork, UT]

    2007-04-03

    A downhole drilling system is disclosed in one aspect of the present invention as including a drill string and a transmission line integrated into the drill string. Multiple network nodes are installed at selected intervals along the drill string and are adapted to communicate with one another through the transmission line. In order to efficiently allocate the available bandwidth, the network nodes are configured to use any of numerous burst modulation techniques to transmit data.

  19. Back pressure based multicast scheduling for fair bandwidth allocation.

    PubMed

    Sarkar, Saswati; Tassiulas, Leandros

    2005-09-01

    We study the fair allocation of bandwidth in multicast networks with multirate capabilities. In multirate transmission, each source encodes its signal in layers. The lowest layer contains the most important information and all receivers of a session should receive it. If a receiver's data path has additional bandwidth, it receives higher layers which leads to a better quality of reception. The bandwidth allocation objective is to distribute the layers fairly. We present a computationally simple, decentralized scheduling policy that attains the maxmin fair rates without using any knowledge of traffic statistics and layer bandwidths. This policy learns the congestion level from the queue lengths at the nodes, and adapts the packet transmissions accordingly. When the network is congested, packets are dropped from the higher layers; therefore, the more important lower layers suffer negligible packet loss. We present analytical and simulation results that guarantee the maxmin fairness of the resulting rate allocation, and upper bound the packet loss rates for different layers.
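The drop-from-the-top behaviour under congestion can be illustrated with a toy queue. This is not the authors' scheduling policy (which adapts transmissions to queue lengths across nodes); it only shows why the more important lower layers see negligible loss. The `(layer, payload)` tuple representation is an assumption for the sketch.

```python
# Illustrative sketch: when a node's queue overflows, drop the queued packet
# belonging to the highest (least important) layer, so the base layer survives.

def enqueue(queue, packet, capacity):
    """Add packet (layer, payload); on overflow drop the highest-layer packet."""
    queue.append(packet)
    if len(queue) > capacity:
        worst = max(range(len(queue)), key=lambda i: queue[i][0])
        queue.pop(worst)
    return queue

q = []
for layer in [3, 1, 2, 1, 3, 1]:
    enqueue(q, (layer, "data"), capacity=4)
# All three base-layer (layer 1) packets survive; higher layers absorbed the loss.
assert [p[0] for p in q].count(1) == 3
```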

  20. Results of using frequency banded SAFT for examining three types of defects

    NASA Astrophysics Data System (ADS)

    Clayton, Dwight; Barker, Alan; Santos-Villalobos, Hector

    2017-02-01

A multitude of concrete-based structures are typically part of a light water reactor (LWR) plant to provide the foundation, support, shielding, and containment functions. Concrete has been used in the construction of nuclear power plants (NPPs) because of three primary properties: its low cost, structural strength, and ability to shield radiation. Examples of concrete structures important to the safety of LWR plants include the containment building, spent fuel pool, and cooling towers. This use has made concrete's long-term performance crucial for the safe operation of commercial NPPs. Extending reactor life to 60 years and beyond will likely increase the susceptibility and severity of known forms of degradation. Additionally, new mechanisms of materials degradation are also possible. Specially designed and fabricated test specimens can provide realistic flaws that are similar to actual flaws in terms of how they interact with a particular Nondestructive Evaluation (NDE) technique. Artificial test blocks allow the isolation of certain testing problems as well as the variation of certain parameters. Because conditions in the laboratory are controlled, the number of unknown variables can be decreased, making it possible to focus on specific aspects, investigate them in detail, and gain further information on the capabilities and limitations of each method. To minimize artifacts caused by boundary effects, the dimensions of the specimens should not be too small. In this paper, we apply the frequency banded Synthetic Aperture Focusing Technique (SAFT) to a 2.134 m × 2.134 m × 1.016 m concrete test specimen with twenty deliberately embedded defects. These twenty embedded defects simulate voids (honeycombs), delamination, and embedded organic construction debris. Using the time-frequency technique of wavelet packet decomposition and reconstruction, the spectral content of the signal can be divided into two resulting child nodes.
The resulting two nodes can then also be divided into two child nodes, with each child node containing half of the bandwidth (spectral content) of its parent node. This process can be repeated until the bandwidth of the child nodes is sufficiently small. Once the desired bandwidth has been obtained, the band-limited signal can be analyzed using SAFT, enabling visualization of the reflectivity of a frequency band and that band's interaction with the contents of the concrete structure. This paper examines the benefits of using frequency banded SAFT.
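The recursive band-halving tree can be sketched with a few lines of numpy. This uses plain FFT masks rather than the paper's wavelet packet filters, so it only illustrates the tree structure of the decomposition, not the actual filter bank.

```python
# Minimal sketch of the band tree: recursively split the signal's spectrum
# into low and high halves until the desired depth, returning one band-limited
# signal per leaf. FFT masking stands in for wavelet packet decomposition.
import numpy as np

def band_tree(x, depth):
    """Return the 2**depth band-limited signals of the recursive split."""
    X = np.fft.rfft(x)

    def band(lo, hi):
        Y = np.zeros_like(X)
        Y[lo:hi] = X[lo:hi]          # keep one frequency band, zero the rest
        return np.fft.irfft(Y, n=len(x))

    def leaves(lo, hi, d):
        if d == 0:
            return [band(lo, hi)]
        mid = (lo + hi) // 2         # each child gets half its parent's bandwidth
        return leaves(lo, mid, d - 1) + leaves(mid, hi, d - 1)

    return leaves(0, len(X), depth)

t = np.linspace(0.0, 1.0, 1024, endpoint=False)
sig = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 400 * t)
bands = band_tree(sig, depth=2)
assert len(bands) == 4
assert np.allclose(sum(bands), sig)  # the leaf bands sum back to the signal
```

Each leaf could then be fed to SAFT separately, giving one reconstruction per frequency band.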

  1. Underwater Electromagnetic Sensor Networks, Part II: Localization and Network Simulations

    PubMed Central

    Zazo, Javier; Valcarcel Macua, Sergio; Zazo, Santiago; Pérez, Marina; Pérez-Álvarez, Iván; Jiménez, Eugenio; Cardona, Laura; Brito, Joaquín Hernández; Quevedo, Eduardo

    2016-01-01

    In the first part of the paper, we modeled and characterized the underwater radio channel in shallow waters. In the second part, we analyze the application requirements for an underwater wireless sensor network (U-WSN) operating in the same environment and perform detailed simulations. We consider two localization applications, namely self-localization and navigation aid, and propose algorithms that work well under the specific constraints associated with U-WSN, namely low connectivity, low data rates and high packet loss probability. We propose an algorithm where the sensor nodes collaboratively estimate their unknown positions in the network using a low number of anchor nodes and distance measurements from the underwater channel. Once the network has been self-located, we consider a node estimating its position for underwater navigation communicating with neighboring nodes. We also propose a communication system and simulate the whole electromagnetic U-WSN in the Castalia simulator to evaluate the network performance, including propagation impairments (e.g., noise, interference), radio parameters (e.g., modulation scheme, bandwidth, transmit power), hardware limitations (e.g., clock drift, transmission buffer) and complete MAC and routing protocols. We also explain the changes that have to be done to Castalia in order to perform the simulations. In addition, we propose a parametric model of the communication channel that matches well with the results from the first part of this paper. Finally, we provide simulation results for some illustrative scenarios. PMID:27999309
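The per-node localization step can be illustrated with generic multilateration: linearizing the range equations ||p − a_i||² = d_i² against a reference anchor yields a small least-squares system. This is a textbook sketch under that assumption, not the paper's collaborative algorithm.

```python
# Hedged sketch of anchor-based localization: subtracting the reference
# anchor's range equation from each other anchor's gives the linear system
# 2(a_i - a_0)·p = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2, solved per node.
import numpy as np

def locate(anchors, dists):
    """Estimate a 2-D position from anchor coordinates and measured ranges."""
    a0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - a0)     # one row per non-reference anchor
    b = (d0**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_p = np.array([3.0, 4.0])
dists = np.linalg.norm(anchors - true_p, axis=1)
assert np.allclose(locate(anchors, dists), true_p)
```

With noisy underwater range measurements the least-squares form degrades gracefully, which is why a low number of anchors can still seed the collaborative refinement.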

  2. Energy-efficient routing, modulation and spectrum allocation in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Tan, Yanxia; Gu, Rentao; Ji, Yuefeng

    2017-07-01

With tremendous growth in bandwidth demand, the energy consumption problem in elastic optical networks (EONs) has become a hot topic of wide concern. The sliceable bandwidth-variable transponder in EONs, which can transmit/receive multiple optical flows, was recently proposed to improve a transponder's flexibility and save energy. In this paper, energy-efficient routing, modulation and spectrum allocation (EE-RMSA) in EONs with sliceable bandwidth-variable transponders is studied. To decrease the energy consumption, we develop a Mixed Integer Linear Programming (MILP) model with a corresponding EE-RMSA algorithm for EONs. The MILP model jointly considers the modulation format and optical grooming in the process of routing and spectrum allocation, with the objective of minimizing the energy consumption. With the help of genetic operators, the EE-RMSA algorithm iteratively optimizes the feasible routing path, modulation format and spectrum resource solutions by exploring the whole search space. To save energy, an optical-layer grooming strategy is designed to transmit the lightpath requests. Finally, simulation results verify that the proposed scheme is able to reduce the energy consumption of the network while maintaining blocking probability (BP) performance compared with the existing First-Fit-KSP, Iterative Flipping and EAMGSP algorithms, especially in large network topologies. Our results also demonstrate that the proposed EE-RMSA algorithm achieves almost the same performance as the MILP model on an 8-node network.

  3. Energy Efficient and QoS sensitive Routing Protocol for Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Saeed Tanoli, Tariq; Khalid Khan, Muhammad

    2013-12-01

Efficient routing is an important part of wireless ad hoc networks. Since ad hoc networks have limited resources, there are many constraints, such as bandwidth, battery capacity, and processing cycles. Reliability is also necessary, since there is no allowance for invalid or incomplete information (and expired data is useless). There are various protocols that perform routing by considering one parameter but ignoring others. In this paper we present a protocol that finds routes on the basis of the bandwidth, energy and mobility of the nodes participating in the communication.
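One simple way to fold bandwidth, residual energy and mobility into a single route score is sketched below. The weights and the scoring form are assumptions for illustration, not taken from the paper.

```python
# Illustrative composite route metric: bandwidth is limited by the bottleneck
# link, lifetime by the weakest node's energy, and a highly mobile node
# anywhere on the path penalizes the route. All metrics normalized to [0, 1].

def route_score(path, w_bw=0.4, w_energy=0.4, w_mob=0.2):
    """Score a candidate route; each node is a dict of normalized metrics."""
    bw = min(n["bandwidth"] for n in path)
    energy = min(n["energy"] for n in path)
    mobility = max(n["mobility"] for n in path)
    return w_bw * bw + w_energy * energy - w_mob * mobility

stable = [{"bandwidth": 0.9, "energy": 0.8, "mobility": 0.1}] * 3
shaky = [{"bandwidth": 0.9, "energy": 0.8, "mobility": 0.9}] * 2
best = max([stable, shaky], key=route_score)
assert best is stable                 # prefer the less mobile route
```

Using min/max rather than averages reflects that a single weak or fast-moving node can break an otherwise good route.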

  4. A slotted access control protocol for metropolitan WDM ring networks

    NASA Astrophysics Data System (ADS)

    Baziana, P. A.; Pountourakis, I. E.

    2009-03-01

In this study we focus on the serious scalability problems that many access protocols for WDM ring networks introduce due to the use of a dedicated wavelength per access node for either transmission or reception. We propose an efficient slotted MAC protocol suitable for WDM ring metropolitan area networks. The proposed network architecture employs a separate wavelength for control information exchange prior to the data packet transmission. Each access node is equipped with a pair of tunable transceivers for data communication and a pair of fixed tuned transceivers for control information exchange. Also, each access node includes a set of fixed delay lines for synchronization purposes, to hold the data packets while the control information is processed. An efficient access algorithm is applied to avoid both data wavelength and receiver collisions. In our protocol, each access node is capable of transmitting and receiving over any of the data wavelengths, addressing the scalability issues. Two different slot reuse schemes are assumed: the source and the destination stripping schemes. For both schemes, performance measures evaluation is provided via an analytic model. The analytical results are validated by a discrete event simulation model that uses Poisson traffic sources. Simulation results show that the proposed protocol achieves efficient bandwidth utilization, especially under high load. Also, comparative simulation results prove that our protocol achieves significant performance improvement as compared with other WDMA protocols which restrict transmission over a dedicated data wavelength. Finally, performance measures evaluation is explored for diverse numbers of buffer size, access nodes and data wavelengths.

  5. Optimum ArFi laser bandwidth for 10nm node logic imaging performance

    NASA Astrophysics Data System (ADS)

    Alagna, Paolo; Zurita, Omar; Timoshkov, Vadim; Wong, Patrick; Rechtsteiner, Gregory; Baselmans, Jan; Mailfert, Julien

    2015-03-01

Lithography process window (PW) and CD uniformity (CDU) requirements are being challenged with scaling across all device types. Aggressive PW and yield specifications put tight requirements on scanner performance, especially on focus budgets, resulting in complicated systems for focus control. In this study, an imec N10 Logic-type test vehicle was used to investigate the E95 bandwidth impact on six different Metal 1 Logic features. The imaging metrics that track the impact of light source E95 bandwidth on the performance of hot spots are: process window (PW), line width roughness (LWR), and local critical dimension uniformity (LCDU). In the first section of this study, the impact of increasing E95 bandwidth was investigated to observe the lithographic process control response of the specified logic features. In the second section, a preliminary assessment of the impact of lower E95 bandwidth was performed. The impact of lower E95 bandwidth on local intensity variability was monitored through the CDU of line end features and the LWR power spectral density (PSD) of line/space patterns. The investigation found that the imec N10 test vehicle features (with OPC optimized for a standard E95 bandwidth of 300 fm) exposed at 200 fm showed pattern-specific responses, suggesting areas of potential interest for further investigation.

  6. A network of spiking neurons for computing sparse representations in an energy efficient way

    PubMed Central

    Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B.

    2013-01-01

Computing sparse redundant representations is an important problem both in applied mathematics and neuroscience. In many applications, this problem must be solved in an energy efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, such operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We compare the numerical performance of HDA with existing algorithms and show that in the asymptotic regime the representation error of HDA decays with time, t, as 1/t. We show that HDA is stable against time-varying noise, specifically, the representation error decays as 1/√t for Gaussian white noise. PMID:22920853

  7. A network of spiking neurons for computing sparse representations in an energy-efficient way.

    PubMed

    Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B

    2012-11-01

    Computing sparse redundant representations is an important problem in both applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating by low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, the operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with existing algorithms. In the asymptotic regime, the representation error of HDA decays with time, t, as 1/t. HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for gaussian white noise.
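The integrate-and-fire picture described in these two abstracts can be caricatured in a few lines. The sketch below is only in the spirit of HDA: the dictionary, thresholds, reset rule, and inhibition scheme are all illustrative assumptions, not the authors' algorithm.

```python
# Toy integrate-and-fire network: analog internal variables integrate a
# feedforward drive, and quantized spikes communicated between units apply
# lateral inhibition through the dictionary's Gram matrix, so spike counts
# come to encode a sparse code for the input. All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
D = rng.uniform(0.1, 1.0, size=(8, 4))
D /= np.linalg.norm(D, axis=0)       # dictionary with unit-norm atoms
x = 2.0 * D[:, 1]                    # input built from atom 1 only
b = D.T @ x                          # feedforward drive to each unit
G = D.T @ D
np.fill_diagonal(G, 0.0)             # lateral inhibition weights (no self term)

dt, threshold, steps = 0.01, 0.05, 10000
u = np.zeros(4)                      # analog internal variables
counts = np.zeros(4)                 # quantized external variables: spike counts
for _ in range(steps):
    u += dt * b                      # gradient-like integration of the drive
    for j in np.flatnonzero(u >= threshold):
        counts[j] += 1
        u[j] -= threshold            # reset by subtraction
        u -= threshold * G[:, j]     # each spike inhibits correlated units

assert counts.argmax() == 1          # the spike code singles out atom 1
```

At steady state the spike rates balance the drive against the inhibition, which is the sense in which the quantized spikes communicate a coordinate-descent-like update over low-bandwidth channels.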

  8. Degree-constrained multicast routing for multimedia communications

    NASA Astrophysics Data System (ADS)

    Wang, Yanlin; Sun, Yugeng; Li, Guidan

    2005-02-01

Multicast services have been increasingly used by many multimedia applications. As one of the key techniques to support multimedia applications, rational and effective multicast routing algorithms are very important to network performance. When switch nodes in a network have different multicast capabilities, the multicast routing problem is modeled as the degree-constrained Steiner problem. We present two heuristic algorithms, named BMSTA and BSPTA, for the degree-constrained case in multimedia communications. Both algorithms generate degree-constrained multicast trees with bandwidth and end-to-end delay bounds. Simulations over random networks were carried out to compare the performance of the two proposed algorithms. Experimental results show that the proposed algorithms have advantages in traffic load balancing, which can avoid link blocking and enhance network performance efficiently. BMSTA is better than BSPTA at finding unsaturated links and/or unsaturated nodes to generate multicast trees. The performance of BMSTA is affected by the variation of degree constraints.

  9. AURP: An AUV-Aided Underwater Routing Protocol for Underwater Acoustic Sensor Networks

    PubMed Central

    Yoon, Seokhoon; Azad, Abul K.; Oh, Hoon; Kim, Sunghwan

    2012-01-01

Deploying a multi-hop underwater acoustic sensor network (UASN) in a large area brings about new challenges in reliable data transmissions and survivability of the network due to the limited underwater communication range/bandwidth and the limited energy of underwater sensor nodes. In order to address those challenges and achieve the objectives of maximizing the data delivery ratio and minimizing the energy consumption of underwater sensor nodes, this paper proposes a new underwater routing scheme, namely AURP (AUV-aided underwater routing protocol), which uses not only heterogeneous acoustic communication channels but also the controlled mobility of multiple autonomous underwater vehicles (AUVs). In AURP, the total data transmissions are minimized by using AUVs as relay nodes, which collect sensed data from gateway nodes and then forward it to the sink. Moreover, the controlled mobility of AUVs makes it possible to apply a short-range high data rate underwater channel for transmissions of a large amount of data. To the best of our knowledge, this work is the first attempt to employ multiple AUVs as relay nodes in a multi-hop UASN to improve the network performance in terms of data delivery ratio and energy consumption. Simulations, which are incorporated with a realistic underwater acoustic communication channel model, are carried out to evaluate the performance of the proposed scheme, and the results indicate that a high delivery ratio and low energy consumption can be achieved. PMID:22438740

  10. AURP: an AUV-aided underwater routing protocol for underwater acoustic sensor networks.

    PubMed

    Yoon, Seokhoon; Azad, Abul K; Oh, Hoon; Kim, Sunghwan

    2012-01-01

Deploying a multi-hop underwater acoustic sensor network (UASN) in a large area brings about new challenges in reliable data transmissions and survivability of the network due to the limited underwater communication range/bandwidth and the limited energy of underwater sensor nodes. In order to address those challenges and achieve the objectives of maximizing the data delivery ratio and minimizing the energy consumption of underwater sensor nodes, this paper proposes a new underwater routing scheme, namely AURP (AUV-aided underwater routing protocol), which uses not only heterogeneous acoustic communication channels but also the controlled mobility of multiple autonomous underwater vehicles (AUVs). In AURP, the total data transmissions are minimized by using AUVs as relay nodes, which collect sensed data from gateway nodes and then forward it to the sink. Moreover, the controlled mobility of AUVs makes it possible to apply a short-range high data rate underwater channel for transmissions of a large amount of data. To the best of our knowledge, this work is the first attempt to employ multiple AUVs as relay nodes in a multi-hop UASN to improve the network performance in terms of data delivery ratio and energy consumption. Simulations, which are incorporated with a realistic underwater acoustic communication channel model, are carried out to evaluate the performance of the proposed scheme, and the results indicate that a high delivery ratio and low energy consumption can be achieved.

  11. Performance of VPIC on Trinity

    NASA Astrophysics Data System (ADS)

    Nystrom, W. D.; Bergen, B.; Bird, R. F.; Bowers, K. J.; Daughton, W. S.; Guo, F.; Li, H.; Nam, H. A.; Pang, X.; Rust, W. N., III; Wohlbier, J.; Yin, L.; Albright, B. J.

    2016-10-01

    Trinity is a new major DOE computing resource which is going through final acceptance testing at Los Alamos National Laboratory. Trinity has several new and unique architectural features, including two compute partitions, one with dual-socket Intel Haswell Xeon compute nodes and one with Intel Knights Landing (KNL) Xeon Phi compute nodes. Additional unique features include the use of on-package high bandwidth memory (HBM) for the KNL nodes, the ability to configure the KNL nodes with respect to HBM mode and on-die network topology in a variety of operational modes at run time, and the use of solid state storage via burst buffer technology to reduce the time required to perform I/O. An effort is in progress to port and optimize VPIC for Trinity and evaluate its performance. Because VPIC was recently released as open source, it is being used as part of acceptance testing for Trinity and is participating in the Trinity Open Science Program, which has resulted in excellent collaboration activities with both Cray and Intel. Results of this work will be presented on the performance of VPIC on both the Haswell and KNL partitions, for both single-node runs and runs at scale. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC, Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  12. Parallel scalability of Hartree-Fock calculations

    NASA Astrophysics Data System (ADS)

    Chow, Edmond; Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.

    2015-03-01

    Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree-Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
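    The purification approach referred to above can be illustrated with McWeeny's iteration, a classic density matrix purification scheme (a hedged sketch; the paper may use a different variant). Each step is a pair of dense matrix-matrix products, which is exactly the kernel that becomes network-bandwidth bound when distributed across many nodes:

```python
import numpy as np

def mcweeny_purify(D, iters=30):
    """Iterate D <- 3D^2 - 2D^3, driving eigenvalues of a
    Hermitian matrix toward 0 or 1 (an idempotent density matrix)."""
    for _ in range(iters):
        D2 = D @ D
        D = 3.0 * D2 - 2.0 * D2 @ D
    return D

# toy example: eigenvalues 0.9 and 0.8 flow to 1, 0.1 flows to 0
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal basis
D0 = Q @ np.diag([0.9, 0.8, 0.1]) @ Q.T
D = mcweeny_purify(D0)
print(np.trace(D))  # ~2.0: two occupied orbitals
```

In a distributed setting each `@` becomes a parallel matrix multiply, so the iteration's cost is dominated by inter-node communication rather than by a cubic-scaling eigendecomposition.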

  13. Towards green high capacity optical networks

    NASA Astrophysics Data System (ADS)

    Glesk, I.; Mohd Warip, M. N.; Idris, S. K.; Osadola, T. B.; Andonovic, I.

    2011-09-01

    The demand for fast, secure, energy-efficient high capacity networks is growing. It is fuelled by transmission bandwidth needs which will support, among other things, the rapid penetration of multimedia applications empowering smart consumer electronics and E-businesses. All the above trigger unparalleled needs for networking solutions which must offer not only high-speed low-cost "on demand" mobile connectivity but should also be ecologically friendly and have a low carbon footprint. The first answer to address the bandwidth needs was the deployment of fibre optic technologies into transport networks. It quickly became obvious, however, that the inferior bandwidth of electronics (compared to optical fibre) would continue to cap the maximum implementable serial data rates. A new solution was found by introducing parallelism into data transport in the form of Wavelength Division Multiplexing (WDM), which has dramatically improved the aggregate throughput of optical networks. However, with these advancements a new bottleneck has emerged at fibre endpoints, where data routers must process the incoming and outgoing traffic. Here, even with massive and power-hungry electronic parallelism, today's routers (still relying upon bandwidth-limiting electronics) do not offer the processing speeds networks demand. In this paper we discuss some novel unconventional approaches to address network scalability, leading to energy savings via advanced optical signal processing. We also investigate energy savings based on advanced network management through node hibernation proposed for Optical IP networks. Hibernation reduces the network's overall power consumption by forming virtual network reconfigurations through selective node groupings and by link segmentation and partitioning.

  14. Magnetic sensor nodes for enhanced situational awareness in urban settings

    NASA Astrophysics Data System (ADS)

    Trammell, Hoke; Shelby, Richard; Mathis, Kevin; Dalichaouch, Yacine; Kumar, Sankaran

    2005-05-01

    Military forces conducting urban operations are in need of non-line-of-sight sensor technologies for enhanced situational awareness. Disposable sensors ought to be able to detect and track targets through walls and within rooms in a building and relay that information in real time to the soldier. We have recently developed magnetic sensor nodes aimed towards low cost, small size, low power consumption, and wireless communication. The current design uses a three-axis thin-film magnetoresistive sensor for low-bandwidth B-field monitoring of magnetic targets such as vehicles and weapons carried by personnel. These sensor nodes are battery operated and use an IEEE 802.15.4 communication link for control and data transmission. Power consumption during signal acquisition and communication is approximately 300 mW per channel. We will present and discuss node array performance, future node development and sensor fusion concepts.

  15. Lifting Scheme DWT Implementation in a Wireless Vision Sensor Network

    NASA Astrophysics Data System (ADS)

    Ong, Jia Jan; Ang, L.-M.; Seng, K. P.

    This paper presents the practical implementation of a Wireless Visual Sensor Network (WVSN) with DWT processing on the visual nodes. A WVSN consists of visual nodes that capture video and transmit it to the base-station without processing. Limited network bandwidth restrains the implementation of real-time video streaming from remote visual nodes over wireless communication. Three layers of DWT filters are implemented to process the captured image from the camera. Once all the wavelet coefficients are produced, it is possible to transmit only the low-frequency band coefficients and obtain an approximate image at the base-station. This reduces the amount of power required for transmission. When necessary, transmitting all the wavelet coefficients will produce the full detail of the image, which is similar to the image captured at the visual nodes. The visual node combines a CMOS camera, a Xilinx Spartan-3L FPGA and a wireless ZigBee® network that uses the Ember EM250 chip.
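    The lifting scheme the node implements in hardware can be sketched in software. The following uses the Haar wavelet as a stand-in (the FPGA design may use a different filter): one predict/update level splits the signal into a low band `s` and a detail band `d`, so transmitting only `s` halves the data while keeping perfect reconstruction possible when `d` is sent later:

```python
def haar_lift(x):
    """One lifting level: split into even/odd, predict, update."""
    even, odd = x[0::2], x[1::2]
    d = [o - e for e, o in zip(even, odd)]       # predict step -> detail band
    s = [e + di / 2 for e, di in zip(even, d)]   # update step  -> low band
    return s, d

def haar_unlift(s, d):
    """Invert the lifting steps for perfect reconstruction."""
    even = [si - di / 2 for si, di in zip(s, d)]
    odd = [e + di for e, di in zip(even, d)]
    return [v for pair in zip(even, odd) for v in pair]

x = [10, 12, 8, 6, 7, 7, 1, 3]
s, d = haar_lift(x)
# transmit only `s` for an approximate image; send `d` too for full detail
assert haar_unlift(s, d) == x
```

Applying the same level to `s` twice more gives the three filter layers described in the abstract, with the final low band carrying the coarse approximation.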

  16. An Effective Collaborative Mobile Weighted Clustering Schemes for Energy Balancing in Wireless Sensor Networks.

    PubMed

    Tang, Chengpei; Shokla, Sanesy Kumcr; Modhawar, George; Wang, Qiang

    2016-02-19

    Collaborative strategies for mobile sensor nodes ensure the efficiency and the robustness of data processing, while limiting the required communication bandwidth. In order to solve the problem of pipeline inspection and oil leakage monitoring, a collaborative weighted mobile sensing scheme is proposed. By adopting a weighted mobile sensing scheme, the adaptive collaborative clustering protocol can realize an even distribution of energy load among the mobile sensor nodes in each round, and make the best use of battery energy. A detailed theoretical analysis and experimental results revealed that the proposed protocol is an energy efficient collaborative strategy such that the sensor nodes can communicate with a fusion center and produce high power gain.

  17. Fault-tolerant bandwidth reservation strategies for data transfers in high-performance networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuo, Liudong; Zhu, Michelle M.; Wu, Chase Q.

    2016-11-22

    Many next-generation e-science applications need fast and reliable transfer of large volumes of data with guaranteed performance, which is typically enabled by the bandwidth reservation service in high-performance networks. One prominent issue in such network environments with large footprints is that node and link failures are inevitable, hence potentially degrading the quality of data transfer. We consider two generic types of bandwidth reservation requests (BRRs) concerning data transfer reliability: (i) to achieve the highest data transfer reliability under a given data transfer deadline, and (ii) to achieve the earliest data transfer completion time while satisfying a given data transfer reliability requirement. We propose two periodic bandwidth reservation algorithms with rigorous optimality proofs to optimize the scheduling of individual BRRs within BRR batches. The efficacy of the proposed algorithms is illustrated through extensive simulations in comparison with scheduling algorithms widely adopted in production networks in terms of various performance metrics.
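    The objective in BRR type (ii) — earliest completion given the residual bandwidth left by existing reservations — can be illustrated with a toy calculation (this is not the paper's algorithm, only the quantity it optimizes; the slot model and units are invented for illustration):

```python
def earliest_completion(avail, slot_len, data):
    """Earliest finish time for `data` (GB) given residual bandwidth
    `avail[i]` (GB/s) in each time slot of length `slot_len` seconds."""
    sent = 0.0
    for i, bw in enumerate(avail):
        cap = bw * slot_len                      # volume this slot can carry
        if bw > 0 and sent + cap >= data:
            # only a fraction of this slot is needed to finish
            return i * slot_len + (data - sent) / bw
        sent += cap
    return None  # cannot finish within the scheduling horizon

# 25 GB over slots with 1, 2, 4 GB/s residual bandwidth, 10 s slots
print(earliest_completion([1, 2, 4], 10, 25))  # -> 17.5 s
```

A real scheduler would additionally weigh each candidate path's failure probability, which is where the reliability objective of BRR type (i) enters.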

  18. Bandwidth management for mobile mode of mobile monitoring system for Indonesian Volcano

    NASA Astrophysics Data System (ADS)

    Evita, Maria; Djamal, Mitra; Zimanowski, Bernd; Schilling, Klaus

    2017-01-01

    Volcano monitoring requires a system with high-fidelity operation and real-time acquisition. MONICA (Mobile Monitoring System for Indonesian Volcano), a system based on Wireless Sensor Network, mobile robot and satellite technology, has been proposed to fulfill this requirement for volcano monitoring in Indonesia. This system consists of a fixed mode for normal conditions and a mobile mode for emergency situations. The first and second modes have been simulated for slow-motion earthquake cases at Merapi Volcano, Indonesia. In this research, we have investigated the application of our bandwidth management for high-fidelity operation and real-time acquisition in mobile mode for a strong-motion earthquake from this volcano. The simulation result showed that our system could still manage the bandwidth even when two fixed nodes died after being struck by lightning. This result (64% to 83% throughput on average) was still better than the bandwidth utilized by the existing equipment (0% throughput because of the broken seismometer).

  19. Energy-efficient virtual optical network mapping approaches over converged flexible bandwidth optical networks and data centers.

    PubMed

    Chen, Bowen; Zhao, Yongli; Zhang, Jie

    2015-09-21

    In this paper, we develop a virtual link priority mapping (LPM) approach and a virtual node priority mapping (NPM) approach to improve the energy efficiency and to reduce the spectrum usage over converged flexible bandwidth optical networks and data centers. For comparison, the lower bound of the virtual optical network mapping is used as the benchmark solution. Simulation results show that the LPM approach achieves better performance in terms of power consumption, energy efficiency, spectrum usage, and the number of regenerators compared to the NPM approach.

  20. High speed all-optical networks

    NASA Technical Reports Server (NTRS)

    Chlamtac, Imrich

    1993-01-01

    An inherent problem of conventional point-to-point WAN architectures is that they cannot translate optical transmission bandwidth into comparable user available throughput due to the limiting electronic processing speed of the switching nodes. This report presents the first solution to WDM based WAN networks that overcomes this limitation. The proposed Lightnet architecture takes into account the idiosyncrasies of WDM switching/transmission leading to an efficient and pragmatic solution. The Lightnet architecture trades the ample WDM bandwidth for a reduction in the number of processing stages and a simplification of each switching stage, leading to drastically increased effective network throughputs.

  1. Experimental demonstration of spectrum-sliced elastic optical path network (SLICE).

    PubMed

    Kozicki, Bartłomiej; Takara, Hidehiko; Tsukishima, Yukio; Yoshimatsu, Toshihide; Yonenaga, Kazushige; Jinno, Masahiko

    2010-10-11

    We describe experimental demonstration of spectrum-sliced elastic optical path network (SLICE) architecture. We employ optical orthogonal frequency-division multiplexing (OFDM) modulation format and bandwidth-variable optical cross-connects (OXC) to generate, transmit and receive optical paths with bandwidths of up to 1 Tb/s. We experimentally demonstrate elastic optical path setup and spectrally-efficient transmission of multiple channels with bit rates ranging from 40 to 140 Gb/s between six nodes of a mesh network. We show dynamic bandwidth scalability for optical paths with bit rates of 40 to 440 Gb/s. Moreover, we demonstrate multihop transmission of a 1 Tb/s optical path over 400 km of standard single-mode fiber (SMF). Finally, we investigate the filtering properties and the required guard band width for spectrally-efficient allocation of optical paths in SLICE.
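    Allocation of elastic optical paths in a flexible grid, including the guard bands whose width the abstract investigates, is commonly illustrated with first-fit assignment of contiguous frequency slots (a generic sketch, not the SLICE procedure itself; slot granularity and guard width are placeholders):

```python
def first_fit(link, demand, guard=1):
    """Reserve `demand` contiguous free frequency slots with `guard`
    free slots on each side. `link` is a list of booleans per slot
    (True = occupied). Returns the starting slot index, or None."""
    need = demand + 2 * guard
    run = 0
    for i, occupied in enumerate(link):
        run = 0 if occupied else run + 1
        if run == need:
            start = i - need + 1 + guard
            for j in range(start, start + demand):
                link[j] = True
            return start
    return None

link = [True] + [False] * 9          # slot 0 already carries a path
print(first_fit(link, demand=3))     # -> 2 (slots 2-4, guards at 1 and 5)
```

Wider paths (e.g. the 1 Tb/s super-channel in the experiment) simply request more contiguous slots, which is the bandwidth-scalability property SLICE demonstrates.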

  2. Flexible Fusion Structure-Based Performance Optimization Learning for Multisensor Target Tracking

    PubMed Central

    Ge, Quanbo; Wei, Zhongliang; Cheng, Tianfa; Chen, Shaodong; Wang, Xiangfeng

    2017-01-01

    Compared with the fixed fusion structure, the flexible fusion structure with mixed fusion methods has better adjustment performance for complex air task network systems, and it can effectively help the system to achieve the goal under the given constraints. Because of the time-varying situation of the task network system induced by moving nodes and non-cooperative targets, and limitations such as communication bandwidth and measurement distance, it is necessary to dynamically adjust the system fusion structure, including sensors and fusion methods, in a given adjustment period. Aiming at this, this paper studies the design of a flexible fusion algorithm by using an optimization learning technology. The purpose is to dynamically determine the number of sensors and the associated sensors that take part in the centralized and distributed fusion processes, respectively, herein termed sensor subset selection. Firstly, two system performance indexes are introduced. In particular, the survivability index is presented and defined. Secondly, based on the two indexes and considering other conditions such as communication bandwidth and measurement distance, optimization models for both single-target tracking and multi-target tracking are established. Correspondingly, solution steps are given for the two optimization models in detail. Simulation examples are demonstrated to validate the proposed algorithms. PMID:28481243

  3. An Effective Collaborative Mobile Weighted Clustering Schemes for Energy Balancing in Wireless Sensor Networks

    PubMed Central

    Tang, Chengpei; Shokla, Sanesy Kumcr; Modhawar, George; Wang, Qiang

    2016-01-01

    Collaborative strategies for mobile sensor nodes ensure the efficiency and the robustness of data processing, while limiting the required communication bandwidth. In order to solve the problem of pipeline inspection and oil leakage monitoring, a collaborative weighted mobile sensing scheme is proposed. By adopting a weighted mobile sensing scheme, the adaptive collaborative clustering protocol can realize an even distribution of energy load among the mobile sensor nodes in each round, and make the best use of battery energy. A detailed theoretical analysis and experimental results revealed that the proposed protocol is an energy efficient collaborative strategy such that the sensor nodes can communicate with a fusion center and produce high power gain. PMID:26907285

  4. An Optimized Hidden Node Detection Paradigm for Improving the Coverage and Network Efficiency in Wireless Multimedia Sensor Networks.

    PubMed

    Alanazi, Adwan; Elleithy, Khaled

    2016-09-07

    Successful transmission of online multimedia streams in wireless multimedia sensor networks (WMSNs) is a big challenge due to their limited bandwidth and power resources. The existing WSN protocols are not completely appropriate for multimedia communication. The effectiveness of WMSNs varies, and it depends on the correct location of its sensor nodes in the field. Thus, maximizing the multimedia coverage is the most important issue in the delivery of multimedia contents. The nodes in WMSNs are either static or mobile. Thus, the node connections change continuously due to the mobility in wireless multimedia communication, which causes additional energy consumption and synchronization loss between neighboring nodes. In this paper, we introduce an Optimized Hidden Node Detection (OHND) paradigm. The OHND consists of three phases: hidden node detection, message exchange, and location detection. These three phases aim to maximize the multimedia node coverage, and improve energy efficiency, hidden node detection capacity, and packet delivery ratio. OHND helps multimedia sensor nodes to compute the directional coverage. Furthermore, an OHND is used to maintain a continuous node-continuous neighbor discovery process in order to handle the mobility of the nodes. We implement our proposed algorithms by using a network simulator (NS2). The simulation results demonstrate that nodes are capable of maintaining direct coverage and detecting hidden nodes in order to maximize coverage and multimedia node mobility. To evaluate the performance of our proposed algorithms, we compared our results with other known approaches.

  5. An Optimized Hidden Node Detection Paradigm for Improving the Coverage and Network Efficiency in Wireless Multimedia Sensor Networks

    PubMed Central

    Alanazi, Adwan; Elleithy, Khaled

    2016-01-01

    Successful transmission of online multimedia streams in wireless multimedia sensor networks (WMSNs) is a big challenge due to their limited bandwidth and power resources. The existing WSN protocols are not completely appropriate for multimedia communication. The effectiveness of WMSNs varies, and it depends on the correct location of its sensor nodes in the field. Thus, maximizing the multimedia coverage is the most important issue in the delivery of multimedia contents. The nodes in WMSNs are either static or mobile. Thus, the node connections change continuously due to the mobility in wireless multimedia communication, which causes additional energy consumption and synchronization loss between neighboring nodes. In this paper, we introduce an Optimized Hidden Node Detection (OHND) paradigm. The OHND consists of three phases: hidden node detection, message exchange, and location detection. These three phases aim to maximize the multimedia node coverage, and improve energy efficiency, hidden node detection capacity, and packet delivery ratio. OHND helps multimedia sensor nodes to compute the directional coverage. Furthermore, an OHND is used to maintain a continuous node-continuous neighbor discovery process in order to handle the mobility of the nodes. We implement our proposed algorithms by using a network simulator (NS2). The simulation results demonstrate that nodes are capable of maintaining direct coverage and detecting hidden nodes in order to maximize coverage and multimedia node mobility. To evaluate the performance of our proposed algorithms, we compared our results with other known approaches. PMID:27618048

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karakaya, Mahmut; Qi, Hairong

    This paper addresses the communication and energy efficiency in collaborative visual sensor networks (VSNs) for people localization, a challenging computer vision problem in its own right. We focus on the design of a light-weight and energy-efficient solution where people are localized by distributed camera nodes integrating the so-called certainty map generated at each node, which records the target non-existence information within the camera's field of view. We first present a dynamic itinerary for certainty map integration where not only does each sensor node transmit a very limited amount of data, but also only a limited number of camera nodes is involved. Then, we perform a comprehensive analytical study to evaluate communication and energy efficiency between different integration schemes, i.e., centralized and distributed integration. Based on results obtained from the analytical study and real experiments, the distributed method shows effectiveness in detection accuracy as well as energy and bandwidth efficiency.

  7. A Six-Node Curved Triangular Element and a Four-Node Quadrilateral Element for Analysis of Laminated Composite Aerospace Structures

    NASA Technical Reports Server (NTRS)

    Martin, C. Wayne; Breiner, David M.; Gupta, Kajal K. (Technical Monitor)

    2004-01-01

    Mathematical development and some computed results are presented for Mindlin plate and shell elements, suitable for analysis of laminated composite and sandwich structures. These elements use the conventional 3 (plate) or 5 (shell) nodal degrees of freedom, have no communicable mechanisms, have no spurious shear energy (no shear locking), have no spurious membrane energy (no membrane locking) and do not require arbitrary reduction of out-of-plane shear moduli or under-integration. Artificial out-of-plane rotational stiffnesses are added at the element level to avoid convergence problems or singularity due to flat spots in shells. This report discusses a 6-node curved triangular element and a 4-node quadrilateral element. Findings show that in regular rectangular meshes, the Martin-Breiner 6-node triangular curved shell (MB6) is approximately equivalent to the conventional 8-node quadrilateral with integration. The 4-node quadrilateral (MB4) has very good accuracy for a 4-node element, and may be preferred in vibration analysis because of narrower bandwidth. The mathematical developments used in these elements, those discussed in the seven appendices, have been applied to elements with 3, 4, 6, and 10 nodes and can be applied to other nodal configurations.

  8. Node Depth Adjustment Based Target Tracking in UWSNs Using Improved Harmony Search.

    PubMed

    Liu, Meiqin; Zhang, Duo; Zhang, Senlin; Zhang, Qunfei

    2017-12-04

    Underwater wireless sensor networks (UWSNs) can provide a promising solution to underwater target tracking. Due to the limited computation and bandwidth resources, only a small part of the nodes are selected to track the target at each interval. How to improve tracking accuracy with a small number of nodes is a key problem. In recent years, a node depth adjustment system has been developed and applied to issues of network deployment and routing protocol. As far as we know, all existing tracking schemes keep underwater nodes static or moving with water flow, and node depth adjustment has not been utilized for underwater target tracking yet. This paper studies a node depth adjustment method for target tracking in UWSNs. Firstly, since a Fisher Information Matrix (FIM) can quantify the estimation accuracy, its relation to node depth is derived as a metric. Secondly, we formulate the node depth adjustment as an optimization problem to determine the moving depth of each activated node under the constraint of moving range; the value of the FIM is used as the objective function, which is minimized over the moving distances of the nodes. Thirdly, to efficiently solve the optimization problem, an improved Harmony Search (HS) algorithm is proposed, in which the generating probability is modified to improve searching speed and accuracy. Finally, simulation results are presented to verify the performance of our scheme.
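    The optimization step can be sketched with a basic harmony search (the standard algorithm, not the paper's improved variant; the quadratic objective below is a stand-in for the FIM-derived metric, and all parameter values are illustrative):

```python
import random

def harmony_search(obj, bounds, hms=10, iters=500, hmcr=0.9, par=0.3, fw=0.05):
    """Minimize obj over the box `bounds`. hmcr: harmony-memory
    considering rate; par: pitch-adjusting rate; fw: fret width."""
    mem = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    mem.sort(key=obj)
    for _ in range(iters):
        cand = []
        for j, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:
                v = random.choice(mem)[j]          # pick from memory
                if random.random() < par:          # pitch adjustment
                    v += random.uniform(-fw, fw) * (hi - lo)
            else:                                  # random improvisation
                v = random.uniform(lo, hi)
            cand.append(min(max(v, lo), hi))
        if obj(cand) < obj(mem[-1]):               # replace worst harmony
            mem[-1] = cand
            mem.sort(key=obj)
    return mem[0]

random.seed(1)
# stand-in objective over two nodes' depth moves in [-1, 1]
best = harmony_search(lambda z: sum(v * v for v in z), [(-1, 1)] * 2)
```

The paper's modification of the generating probability would alter how `hmcr`/`par` evolve over iterations; the bounded box plays the role of the moving-range constraint.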

  9. Node Depth Adjustment Based Target Tracking in UWSNs Using Improved Harmony Search

    PubMed Central

    Zhang, Senlin; Zhang, Qunfei

    2017-01-01

    Underwater wireless sensor networks (UWSNs) can provide a promising solution to underwater target tracking. Due to the limited computation and bandwidth resources, only a small part of the nodes are selected to track the target at each interval. How to improve tracking accuracy with a small number of nodes is a key problem. In recent years, a node depth adjustment system has been developed and applied to issues of network deployment and routing protocol. As far as we know, all existing tracking schemes keep underwater nodes static or moving with water flow, and node depth adjustment has not been utilized for underwater target tracking yet. This paper studies a node depth adjustment method for target tracking in UWSNs. Firstly, since a Fisher Information Matrix (FIM) can quantify the estimation accuracy, its relation to node depth is derived as a metric. Secondly, we formulate the node depth adjustment as an optimization problem to determine the moving depth of each activated node under the constraint of moving range; the value of the FIM is used as the objective function, which is minimized over the moving distances of the nodes. Thirdly, to efficiently solve the optimization problem, an improved Harmony Search (HS) algorithm is proposed, in which the generating probability is modified to improve searching speed and accuracy. Finally, simulation results are presented to verify the performance of our scheme. PMID:29207541

  10. Intercluster Connection in Cognitive Wireless Mesh Networks Based on Intelligent Network Coding

    NASA Astrophysics Data System (ADS)

    Chen, Xianfu; Zhao, Zhifeng; Jiang, Tao; Grace, David; Zhang, Honggang

    2009-12-01

    Cognitive wireless mesh networks have great flexibility to improve spectrum resource utilization, within which secondary users (SUs) can opportunistically access the authorized frequency bands while complying with the interference constraint as well as the QoS (Quality-of-Service) requirement of primary users (PUs). In this paper, we consider intercluster connection between neighboring clusters under the framework of cognitive wireless mesh networks. For collocated clusters, a data flow, which includes the exchange of control channel messages, usually needs four time slots in traditional relaying schemes, since all involved nodes operate in half-duplex mode, resulting in significant bandwidth efficiency loss. The situation is even worse at the gateway node connecting two collocated clusters. A novel scheme based on network coding is proposed in this paper, which needs only two time slots to exchange the same amount of information mentioned above. Our simulation shows that the network coding-based intercluster connection has the advantage of higher bandwidth efficiency compared with the traditional strategy. Furthermore, how to choose an optimal relaying transmission power level at the gateway node in an environment of coexisting primary and secondary users is discussed. We present intelligent approaches based on reinforcement learning to solve the problem. Theoretical analysis and simulation results both show that the intelligent approaches can achieve optimal throughput for the intercluster relaying in the long run.
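    The slot saving from network coding can be sketched with the classic XOR relay example (a minimal illustration of the principle, not the paper's full protocol): both cluster heads deliver their packets to the gateway, and instead of forwarding each packet separately, the gateway broadcasts their XOR once; each side recovers the other's packet using its own as the key.

```python
def xor_bytes(a, b):
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

pA = b"cluster-A-update"   # delivered to the gateway by cluster A
pB = b"cluster-B-update"   # delivered to the gateway by cluster B
coded = xor_bytes(pA, pB)  # one broadcast replaces two separate forwards

assert xor_bytes(coded, pA) == pB   # cluster A recovers B's packet
assert xor_bytes(coded, pB) == pA   # cluster B recovers A's packet
```

Replacing the gateway's two half-duplex forwarding slots with a single coded broadcast is what cuts the four-slot exchange down to two.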

  11. Free Factories: Unified Infrastructure for Data Intensive Web Services

    PubMed Central

    Zaranek, Alexander Wait; Clegg, Tom; Vandewege, Ward; Church, George M.

    2010-01-01

    We introduce the Free Factory, a platform for deploying data-intensive web services using small clusters of commodity hardware and free software. Independently administered virtual machines called Freegols give application developers the flexibility of a general purpose web server, along with access to distributed batch processing, cache and storage services. Each cluster exploits idle RAM and disk space for cache, and reserves disks in each node for high bandwidth storage. The batch processing service uses a variation of the MapReduce model. Virtualization allows every CPU in the cluster to participate in batch jobs. Each 48-node cluster can achieve 4-8 gigabytes per second of disk I/O. Our intent is to use multiple clusters to process hundreds of simultaneous requests on multi-hundred terabyte data sets. Currently, our applications achieve 1 gigabyte per second of I/O with 123 disks by scheduling batch jobs on two clusters, one of which is located in a remote data center. PMID:20514356
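    The "variation of the MapReduce model" used by the batch service can be illustrated with the canonical word-count pattern (a generic sketch; the Freegol API itself is not described in the abstract, so the function names here are invented):

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # emit (key, 1) pairs; in the cluster, each virtual machine maps a shard
    return [(word, 1) for word in doc.split()]

def reduce_phase(pairs):
    # group by key and sum; in the cluster, this runs per key partition
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["free factory", "factory cluster", "free software"]
result = reduce_phase(chain.from_iterable(map_phase(d) for d in docs))
print(result)  # {'free': 2, 'factory': 2, 'cluster': 1, 'software': 1}
```

Because every CPU in the cluster can host a mapper or reducer inside a virtual machine, the same pattern scales out to the multi-hundred-terabyte workloads described above.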

  12. GASNet-EX Performance Improvements Due to Specialization for the Cray Aries Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hargrove, Paul H.; Bonachea, Dan

    This document is a deliverable for milestone STPM17-6 of the Exascale Computing Project, delivered by WBS 2.3.1.14. It reports on the improvements in performance observed on Cray XC-series systems due to enhancements made to the GASNet-EX software. These enhancements, known as “specializations”, primarily consist of replacing network-independent implementations of several recently added features with implementations tailored to the Cray Aries network. Performance gains from specialization include (1) Negotiated-Payload Active Messages improve bandwidth of a ping-pong test by up to 14%, (2) Immediate Operations reduce running time of a synthetic benchmark by up to 93%, (3) non-bulk RMA Put bandwidth is increased by up to 32%, (4) Remote Atomic performance is 70% faster than the reference on a point-to-point test and allows a hot-spot test to scale robustly, and (5) non-contiguous RMA interfaces see up to 8.6x speedups for an intra-node benchmark and 26% for inter-node. These improvements are available in the GASNet-EX 2018.3.0 release.

  13. High speed all optical networks

    NASA Technical Reports Server (NTRS)

    Chlamtac, Imrich; Ganz, Aura

    1990-01-01

    An inherent problem of conventional point-to-point wide area network (WAN) architectures is that they cannot translate optical transmission bandwidth into comparable user available throughput due to the limiting electronic processing speed of the switching nodes. The first solution to wavelength division multiplexing (WDM) based WAN networks that overcomes this limitation is presented. The proposed Lightnet architecture takes into account the idiosyncrasies of WDM switching/transmission leading to an efficient and pragmatic solution. The Lightnet architecture trades the ample WDM bandwidth for a reduction in the number of processing stages and a simplification of each switching stage, leading to drastically increased effective network throughputs. The principle of the Lightnet architecture is the construction and use of virtual topology networks, embedded in the original network in the wavelength domain. For this construction Lightnets utilize the new concept of lightpaths which constitute the links of the virtual topology. Lightpaths are all-optical, multihop, paths in the network that allow data to be switched through intermediate nodes using high throughput passive optical switches. The use of the virtual topologies and the associated switching design introduce a number of new ideas, which are discussed in detail.

  14. High-Performance, Multi-Node File Copies and Checksums for Clustered File Systems

    NASA Technical Reports Server (NTRS)

    Kolano, Paul Z.; Ciotti, Robert B.

    2012-01-01

    Modern parallel file systems achieve high performance using a variety of techniques, such as striping files across multiple disks to increase aggregate I/O bandwidth and spreading disks across multiple servers to increase aggregate interconnect bandwidth. To achieve peak performance from such systems, it is typically necessary to utilize multiple concurrent readers/writers from multiple systems to overcome various single-system limitations, such as number of processors and network bandwidth. The standard cp and md5sum tools of GNU coreutils found on every modern Unix/Linux system, however, utilize a single execution thread on a single CPU core of a single system, and hence cannot take full advantage of the increased performance of clustered file systems. Mcp and msum are drop-in replacements for the standard cp and md5sum programs that utilize multiple types of parallelism and other optimizations to achieve maximum copy and checksum performance on clustered file systems. Multi-threading is used to ensure that nodes are kept as busy as possible. Read/write parallelism allows individual operations of a single copy to be overlapped using asynchronous I/O. Multi-node cooperation allows different nodes to take part in the same copy/checksum. Split-file processing allows multiple threads to operate concurrently on the same file. Finally, hash trees allow inherently serial checksums to be performed in parallel. The total speed-ups from all improvements are significant: mcp improves cp performance by over 27x, msum improves md5sum performance by almost 19x, and the combination of mcp and msum improves verified copies via cp and md5sum by almost 22x. These improvements come in the form of drop-in replacements for cp and md5sum, so they are easily used and are available for download as open source software at http://mutil.sourceforge.net.
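    The hash-tree idea — hashing fixed-size chunks independently so that an inherently serial checksum parallelizes — can be sketched as follows (illustrative only; the chunking and combination rules of msum's actual format are not specified in the text):

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def tree_checksum(data, chunk_size=4096):
    """Hash fixed-size chunks independently (each chunk could be
    handled by a different thread or node), then combine the leaf
    digests into a single root digest."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor() as pool:
        leaves = pool.map(lambda c: hashlib.md5(c).digest(), chunks)
    return hashlib.md5(b"".join(leaves)).hexdigest()

data = bytes(range(256)) * 64            # 16 KiB sample payload
root = tree_checksum(data)
```

Because each leaf depends only on its own chunk, the work distributes across cores and nodes, and any single changed chunk still changes the root digest.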

  15. Optical slotted circuit switched network: a bandwidth efficient alternative to wavelength-routed network

    NASA Astrophysics Data System (ADS)

    Li, Yan; Collier, Martin

    2007-11-01

    Wavelength-routed networks have received enormous attention due to the fact that they are relatively simple to implement and implicitly offer Quality of Service (QoS) guarantees. However, they suffer from a bandwidth inefficiency problem and require complex Routing and Wavelength Assignment (RWA). Most attempts to address the above issues exploit the joint use of WDM and TDM technologies. The resultant TDM-based wavelength-routed networks partition the wavelength bandwidth into fixed-length time slots organized as a fixed-length frame. Multiple connections can thus time-share a wavelength, and the grooming of their traffic leads to better bandwidth utilization. The capability of switching in both wavelength and time domains in such networks also mitigates the RWA problem. However, TDM-based wavelength-routed networks work in synchronous mode, and strict synchronization among all network nodes is required. Global synchronization for all-optical networks which operate at extremely high speed is technically challenging, and deploying an optical synchronizer for each wavelength involves considerable cost. An Optical Slotted Circuit Switching (OSCS) architecture is proposed in this paper. In an OSCS network, slotted circuits are created to better utilize the wavelength bandwidth than in classic wavelength-routed networks. The operation of the protocol is such as to avoid the need for global synchronization required by TDM-based wavelength-routed networks.

  16. Game-theoretic approach for improving cooperation in wireless multihop networks.

    PubMed

    Ng, See-Kee; Seah, Winston K G

    2010-06-01

    Traditional networks are built on the assumption that network entities cooperate based on a mandatory network communication semantic to achieve desirable qualities such as efficiency and scalability. Over the years, this assumption has been eroded by the emergence of users that alter network behavior in a way to benefit themselves at the expense of others. At one extreme, a malicious user/node may eavesdrop on sensitive data or deliberately inject packets into the network to disrupt network operations. The solution to this generally lies in encryption and authentication. In contrast, a rational node acts only to achieve the outcome it desires most. In such a case, cooperation is still achievable if the outcome is in the best interest of the node. The node misbehavior problem is more pronounced in multihop wireless networks like mobile ad hoc and sensor networks, which are typically made up of wireless battery-powered devices that must cooperate to forward packets for one another. However, cooperation may be hard to maintain as it consumes scarce resources such as bandwidth, computational power, and battery power. This paper applies game theory to achieve collusive networking behavior in such network environments. In this paper, pricing, promiscuous listening, and mass punishments are avoided altogether. Our model builds on recent work in the field of Economics on the theory of imperfect private monitoring for the dynamic Bertrand oligopoly, and adapts it to the wireless multihop network. The model derives conditions for collusive packet forwarding, truthful routing broadcasts, and packet acknowledgments under a lossy wireless multihop environment, thus capturing many important characteristics of the network layer and link layer in one integrated analysis that has not been achieved previously. We also provide a proof of the viability of the model under a theoretical wireless environment. Finally, we show how the model can be applied to design a generic protocol, which we call the Selfishness Resilient Resource Reservation protocol, and validate the effectiveness of this protocol in ensuring cooperation using simulations.

  17. Heterogeneous Gossip

    NASA Astrophysics Data System (ADS)

    Frey, Davide; Guerraoui, Rachid; Kermarrec, Anne-Marie; Koldehofe, Boris; Mogensen, Martin; Monod, Maxime; Quéma, Vivien

    Gossip-based information dissemination protocols are considered easy to deploy, scalable and resilient to network dynamics. Load-balancing is inherent in these protocols as the dissemination work is evenly spread among all nodes. Yet, large-scale distributed systems are usually heterogeneous with respect to network capabilities such as bandwidth. In practice, a blind load-balancing strategy might significantly hamper the performance of the gossip dissemination.

  18. Using VirtualGL/TurboVNC Software on the Peregrine System |

    Science.gov Websites

    Using VirtualGL/TurboVNC software on NREL's Peregrine high-performance computing system allows users to access and share large-memory visualization nodes with high-end graphics processing units. This may be better than just using X11 forwarding when connecting from a remote site with low bandwidth.

  19. Announcing Supercomputer Summit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wells, Jack; Bland, Buddy; Nichols, Jeff

    Summit is the next leap in leadership-class computing systems for open science. With Summit we will be able to address, with greater complexity and higher fidelity, questions concerning who we are, our place on earth, and in our universe. Summit will deliver more than five times the computational performance of Titan’s 18,688 nodes, using only approximately 3,400 nodes when it arrives in 2017. Like Titan, Summit will have a hybrid architecture, and each node will contain multiple IBM POWER9 CPUs and NVIDIA Volta GPUs all connected together with NVIDIA’s high-speed NVLink. Each node will have over half a terabyte of coherent memory (high bandwidth memory + DDR4) addressable by all CPUs and GPUs plus 800GB of non-volatile RAM that can be used as a burst buffer or as extended memory. To provide a high rate of I/O throughput, the nodes will be connected in a non-blocking fat-tree using a dual-rail Mellanox EDR InfiniBand interconnect. Upon completion, Summit will allow researchers in all fields of science unprecedented access to solving some of the world’s most pressing challenges.

  20. Effective bandwidth guaranteed routing schemes for MPLS traffic engineering

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Jain, Nidhi

    2001-07-01

    In this work, we present online algorithms for dynamically routing bandwidth-guaranteed label switched paths (LSPs), where LSP set-up requests (specified as a pair of ingress and egress routers together with a bandwidth requirement) arrive one by one and there is no a priori knowledge of future LSP set-up requests. In addition, we consider rerouting of LSPs, which has not been well studied in previous work on LSP routing. The need for LSP rerouting arises in a number of ways: occurrence of faults (link and/or node failures), re-optimization of existing LSPs' routes to accommodate traffic fluctuation, requests with higher priorities, and so on. We formulate the bandwidth-guaranteed LSP routing problem with rerouting capability as a multi-commodity flow problem. The solution to this problem is used as the benchmark for comparing other computationally less costly algorithms studied in this paper. Furthermore, to more efficiently utilize the network resources, we propose online routing algorithms which route bandwidth demands over multiple paths at the ingress router to satisfy the customer requests while providing better service survivability. Traffic splitting and distribution over the multiple paths are carefully handled using table-based hashing schemes while the order of packets within a flow is preserved. Preliminary simulations are conducted to show the performance of different design choices and the effectiveness of the rerouting and multi-path routing algorithms in terms of LSP set-up request rejection probability and bandwidth blocking probability.
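The table-based hashing that keeps a flow's packets on one path while spreading load across paths can be sketched as follows; the 5-tuple fields and path labels here are hypothetical, not taken from the paper:

```python
import hashlib

def select_path(flow: tuple, paths: list) -> str:
    """Hash a flow's 5-tuple to pick one of the candidate LSPs.
    All packets of the same flow hash to the same index, so
    intra-flow packet order is preserved, while different flows
    spread across the available paths."""
    key = "|".join(map(str, flow)).encode()
    idx = int.from_bytes(hashlib.md5(key).digest()[:4], "big") % len(paths)
    return paths[idx]
```

For example, every packet of the (hypothetical) flow `("10.0.0.1", "10.0.0.2", 5000, 80, "tcp")` maps to the same LSP, so no reordering occurs within that flow.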

  1. Analysis and Implementation of Particle-to-Particle (P2P) Graphics Processor Unit (GPU) Kernel for Black-Box Adaptive Fast Multipole Method

    DTIC Science & Technology

    2015-06-01

    5110P and 16 dx360M4 nodes each with one NVIDIA Kepler K20M/K40M GPU. Each node contained dual Intel Xeon E5-2670 (Sandy Bridge) central processing...kernel and as such does not employ multiple processors. This work makes use of a single processing core and a single NVIDIA Kepler K40 GK110...bandwidth (2 × 16 slot), 7.877 GFloat/s; Kepler K40 peak, 4,290 × 1 billion floating-point operations (GFLOPs), and 288 GB/s Kepler K40 memory

  2. Trade-off Analysis of Underwater Acoustic Sensor Networks

    NASA Astrophysics Data System (ADS)

    Tuna, G.; Das, R.

    2017-09-01

    In the last couple of decades, Underwater Acoustic Sensor Networks (UASNs) have come into use for various commercial and non-commercial purposes. However, underwater environments impose specific inherent constraints, such as high bit error rate, variable and large propagation delay, limited bandwidth capacity, and short-range communications, which severely degrade the performance of UASNs and limit the lifetime of underwater sensor nodes as well. Ensuring the reliability of UASN applications therefore poses a challenge. In this study, we try to balance the energy consumption of underwater acoustic sensor networks and minimize end-to-end delay using an efficient node placement strategy. Our simulation results reveal that reducing the number of hops reduces energy consumption but increases end-to-end delay. Hence, application-specific requirements must be taken into consideration when determining a strategy for node deployment.

  3. FPGA cluster for high-performance AO real-time control system

    NASA Astrophysics Data System (ADS)

    Geng, Deli; Goodsell, Stephen J.; Basden, Alastair G.; Dipper, Nigel A.; Myers, Richard M.; Saunter, Chris D.

    2006-06-01

    Whilst the high throughput and low latency requirements of next-generation AO real-time control systems have posed a significant challenge to von Neumann architecture processor systems, the Field Programmable Gate Array (FPGA) has emerged as a long-term solution with high throughput performance and excellent latency predictability. Moreover, FPGA devices have highly capable programmable interfacing, which leads to more highly integrated systems. Nevertheless, a single FPGA is still not enough: multiple FPGA devices need to be clustered to perform the required subaperture processing and the reconstruction computation. In an AO real-time control system, memory bandwidth is often the bottleneck of the system, simply because a vast amount of supporting data, e.g. pixel calibration maps and the reconstruction matrix, must be accessed within a short period. The cluster, as a general computing architecture, has excellent scalability in processing throughput, memory bandwidth, memory capacity, and communication bandwidth. Problems such as task distribution, node communication, and system verification are discussed.

  4. Low-power, transparent optical network interface for high bandwidth off-chip interconnects.

    PubMed

    Liboiron-Ladouceur, Odile; Wang, Howard; Garg, Ajay S; Bergman, Keren

    2009-04-13

    The recent emergence of multicore architectures and chip multiprocessors (CMPs) has accelerated the bandwidth requirements in high-performance processors for both on-chip and off-chip interconnects. For next generation computing clusters, the delivery of scalable power efficient off-chip communications to each compute node has emerged as a key bottleneck to realizing the full computational performance of these systems. The power dissipation is dominated by the off-chip interface and the necessity to drive high-speed signals over long distances. We present a scalable photonic network interface approach that fully exploits the bandwidth capacity offered by optical interconnects while offering significant power savings over traditional E/O and O/E approaches. The power-efficient interface optically aggregates electronic serial data streams into a multiple WDM channel packet structure at time-of-flight latencies. We demonstrate a scalable optical network interface with 70% improvement in power efficiency for a complete end-to-end PCI Express data transfer.

  5. Global synchronization of complex dynamical networks through digital communication with limited data rate.

    PubMed

    Wang, Yan-Wu; Bian, Tao; Xiao, Jiang-Wen; Wen, Changyun

    2015-10-01

    This paper studies the global synchronization of complex dynamical network (CDN) under digital communication with limited bandwidth. To realize the digital communication, the so-called uniform-quantizer-sets are introduced to quantize the states of nodes, which are then encoded and decoded by newly designed encoders and decoders. To meet the requirement of the bandwidth constraint, a scaling function is utilized to guarantee the quantizers having bounded inputs and thus achieving bounded real-time quantization levels. Moreover, a new type of vector norm is introduced to simplify the expression of the bandwidth limit. Through mathematical induction, a sufficient condition is derived to ensure global synchronization of the CDNs. The lower bound on the sum of the real-time quantization levels is analyzed for different cases. Optimization method is employed to relax the requirements on the network topology and to determine the minimum of such lower bound for each case, respectively. Simulation examples are also presented to illustrate the established results.
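A minimal sketch of the bounded-input idea from the abstract above: a uniform quantizer whose inputs are first clipped to a bound (standing in for the role of the paper's scaling function in keeping quantizer inputs bounded). The bound and level count below are arbitrary assumptions for illustration:

```python
def uniform_quantize(x: float, levels: int, bound: float) -> float:
    """Mid-tread uniform quantizer on [-bound, bound]. Clipping keeps the
    quantizer input bounded, so the number of quantization levels stays
    finite; quantization error is at most half a step for in-range inputs."""
    x = max(-bound, min(bound, x))  # bounded input
    step = bound / levels           # uniform step size
    return round(x / step) * step
```

With `levels = 8` and `bound = 1.0`, any in-range input is reproduced to within half a step (0.0625), which is the property that lets the encoder and decoder agree on a finite codebook.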

  6. Coding Local and Global Binary Visual Features Extracted From Video Sequences.

    PubMed

    Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2015-11-01

    Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the bag-of-visual word model. Several applications, including, for example, visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget while attaining a target level of efficiency. In this paper, we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can conveniently be adopted to support the analyze-then-compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs the visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the compress-then-analyze (CTA) paradigm. In this paper, we experimentally compare the ATC and the CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: 1) homography estimation and 2) content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with the CTA, especially in bandwidth limited scenarios.

  7. Coding Local and Global Binary Visual Features Extracted From Video Sequences

    NASA Astrophysics Data System (ADS)

    Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2015-11-01

    Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks, while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the Bag-of-Visual-Word (BoVW) model. Several applications, including for example visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget, while attaining a target level of efficiency. In this paper we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can be conveniently adopted to support the Analyze-Then-Compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the Compress-Then-Analyze (CTA) paradigm. In this paper we experimentally compare ATC and CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: homography estimation and content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with CTA, especially in bandwidth limited scenarios.

  8. Traffic placement policies for a multi-band network

    NASA Technical Reports Server (NTRS)

    Maly, Kurt J.; Foudriat, E. C.; Game, David; Mukkamala, R.; Overstreet, C. Michael

    1990-01-01

    Recently, protocols were introduced that enable the integration of synchronous traffic (voice or video) and asynchronous traffic (data) and extend the size of local area networks without loss in speed or capacity. One of these is DRAMA, a multiband protocol based on broadband technology. It provides dynamic allocation of bandwidth among clusters of nodes in the total network. A number of traffic placement policies for such networks are proposed and evaluated. Metrics used for performance evaluation include average network access delay, degree of fairness of access among the nodes, and network throughput. The feasibility of the DRAMA protocol is established through simulation studies. DRAMA provides effective integration of synchronous and asynchronous traffic due to its ability to separate traffic types. Under the suggested traffic placement policies, the DRAMA protocol is shown to handle diverse loads, mixes of traffic types, and numbers of nodes, as well as modifications to the network structure and momentary traffic overloads.

  9. SITRUS: Semantic Infrastructure for Wireless Sensor Networks

    PubMed Central

    Bispo, Kalil A.; Rosa, Nelson S.; Cunha, Paulo R. F.

    2015-01-01

    Wireless sensor networks (WSNs) are made up of nodes with limited resources, such as processing, bandwidth, memory and, most importantly, energy. For this reason, it is essential that WSNs always work to reduce power consumption as much as possible in order to maximize their lifetime. In this context, this paper presents SITRUS (semantic infrastructure for wireless sensor networks), which aims to reduce the power consumption of WSN nodes using ontologies. SITRUS consists of two major parts: a message-oriented middleware responsible for both a message-oriented communication service and a reconfiguration service; and a semantic information processing module whose purpose is to generate a semantic database that provides the basis to decide whether a WSN node needs to be reconfigured or not. In order to evaluate the proposed solution, we carried out an experimental evaluation to assess the power consumption and memory usage of WSN applications built atop SITRUS. PMID:26528974

  10. Broadband network selection issues

    NASA Astrophysics Data System (ADS)

    Leimer, Michael E.

    1996-01-01

    Selecting the best network for a given cable or telephone company provider is not as obvious as it appears. The cost and performance trades between Hybrid Fiber Coax (HFC), Fiber to the Curb (FTTC) and Asymmetric Digital Subscriber Line networks lead to very different choices based on the existing plant and the expected interactive subscriber usage model. This paper presents some of the issues and trades that drive network selection. The majority of the Interactive Television trials currently underway or planned are based on HFC networks. As a throwaway market trial or a short-term strategic incursion into a cable market, HFC may make sense. In the long run, if interactive services see high demand, HFC costs per node and an ever-shrinking neighborhood node size to service large numbers of subscribers make FTTC appear attractive. For example, thirty-three 64-QAM modulators are required to fill the 550 MHz to 750 MHz spectrum with compressed video streams in 6 MHz channels. This large amount of hardware at each node drives not only initial build-out costs, but operations and maintenance costs as well. FTTC, with its potential for digitally switching large amounts of bandwidth to a given home, offers the potential to grow with the interactive subscriber base with less downstream cost. Integrated telephony on these networks is an issue that appears to be an afterthought for most of the networks being selected at the present time. The major players seem to be videocentric and include telephony as a simple add-on later. This may be a reasonable viewpoint for the telephone companies that plan to leave their existing phone networks untouched. However, a phone company planning a network upgrade or a cable company jumping into the telephony business needs to carefully weigh the cost and performance issues of the various network choices. Each network type provides varying capability in both upstream and downstream bandwidth for voice channels. The noise characteristics vary as well. Cellular quality will not be tolerated by the home or business consumer. The network choices are not simple or obvious. Careful consideration of the cost and performance trades along with cable or telephone company strategic plans is required to ensure selecting the best network.

  11. Multi-input and binary reproducible, high bandwidth floating point adder in a collective network

    DOEpatents

    Chen, Dong; Eisley, Noel A.; Heidelberger, Philip; Steinmacher-Burow, Burkhard

    2016-11-15

    To add floating point numbers in a parallel computing system, a collective logic device receives the floating point numbers from computing nodes. The collective logic device converts the floating point numbers to integer numbers, adds the integer numbers to generate a summation, and converts the summation back to a floating point number. The collective logic device performs the receiving, the conversion of the floating point numbers, the addition, the generation of the summation, and the conversion of the summation in one pass. One pass indicates that the computing nodes send inputs only once to the collective logic device and receive outputs only once from the collective logic device.
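The reproducibility argument can be illustrated with fixed-point arithmetic: once each float is converted to an integer, integer addition is exact and associative, so the sum is bit-identical regardless of the order in which node contributions arrive. The scale factor below is an assumption for illustration; the actual device aligns operands to a common exponent rather than using a single global scale:

```python
SCALE = 1 << 40  # fixed-point scale factor (an illustrative assumption)

def collective_add(values):
    """Convert floats to integers, sum exactly (order-independent),
    then convert back. Unlike naive float summation, every arrival
    order of node inputs yields the same bits."""
    return sum(round(v * SCALE) for v in values) / SCALE
```

By contrast, summing the raw floats directly can give different last-bit results for different arrival orders, which is exactly the non-reproducibility this design avoids.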

  12. An Adaptive OFDMA-Based MAC Protocol for Underwater Acoustic Wireless Sensor Networks

    PubMed Central

    Khalil, Issa M.; Gadallah, Yasser; Hayajneh, Mohammad; Khreishah, Abdallah

    2012-01-01

    Underwater acoustic wireless sensor networks (UAWSNs) have many applications across various civilian and military domains. However, they suffer from the limited available bandwidth of acoustic signals and harsh underwater conditions. In this work, we present an Orthogonal Frequency Division Multiple Access (OFDMA)-based Media Access Control (MAC) protocol that is configurable to suit the operating requirements of the underwater sensor network. The protocol has three modes of operation, namely random, equal opportunity and energy-conscious modes of operation. Our MAC design approach exploits the multi-path characteristics of a fading acoustic channel to convert it into parallel independent acoustic sub-channels that undergo flat fading. Communication between node pairs within the network is done using subsets of these sub-channels, depending on the configurations of the active mode of operation. Thus, the available limited bandwidth gets fully utilized while completely avoiding interference. We derive the mathematical model for optimal power loading and subcarrier selection, which is used as basis for all modes of operation of the protocol. We also conduct many simulation experiments to evaluate and compare our protocol with other Code Division Multiple Access (CDMA)-based MAC protocols. PMID:23012517

  13. An adaptive OFDMA-based MAC protocol for underwater acoustic wireless sensor networks.

    PubMed

    Khalil, Issa M; Gadallah, Yasser; Hayajneh, Mohammad; Khreishah, Abdallah

    2012-01-01

    Underwater acoustic wireless sensor networks (UAWSNs) have many applications across various civilian and military domains. However, they suffer from the limited available bandwidth of acoustic signals and harsh underwater conditions. In this work, we present an Orthogonal Frequency Division Multiple Access (OFDMA)-based Media Access Control (MAC) protocol that is configurable to suit the operating requirements of the underwater sensor network. The protocol has three modes of operation, namely random, equal opportunity and energy-conscious modes of operation. Our MAC design approach exploits the multi-path characteristics of a fading acoustic channel to convert it into parallel independent acoustic sub-channels that undergo flat fading. Communication between node pairs within the network is done using subsets of these sub-channels, depending on the configurations of the active mode of operation. Thus, the available limited bandwidth gets fully utilized while completely avoiding interference. We derive the mathematical model for optimal power loading and subcarrier selection, which is used as basis for all modes of operation of the protocol. We also conduct many simulation experiments to evaluate and compare our protocol with other Code Division Multiple Access (CDMA)-based MAC protocols.

  14. Dual-Stack Single-Radio Communication Architecture for UAV Acting As a Mobile Node to Collect Data in WSNs

    PubMed Central

    Sayyed, Ali; Medeiros de Araújo, Gustavo; Bodanese, João Paulo; Buss Becker, Leandro

    2015-01-01

    The use of mobile nodes to collect data in a Wireless Sensor Network (WSN) has gained special attention over the last years. Some researchers explore the use of Unmanned Aerial Vehicles (UAVs) as mobile node for such data-collection purposes. Analyzing these works, it is apparent that mobile nodes used in such scenarios are typically equipped with at least two different radio interfaces. The present work presents a Dual-Stack Single-Radio Communication Architecture (DSSRCA), which allows a UAV to communicate in a bidirectional manner with a WSN and a Sink node. The proposed architecture was specifically designed to support different network QoS requirements, such as best-effort and more reliable communications, attending both UAV-to-WSN and UAV-to-Sink communications needs. DSSRCA was implemented and tested on a real UAV, as detailed in this paper. This paper also includes a simulation analysis that addresses bandwidth consumption in an environmental monitoring application scenario. It includes an analysis of the data gathering rate that can be achieved considering different UAV flight speeds. Obtained results show the viability of using a single radio transmitter for collecting data from the WSN and forwarding such data to the Sink node. PMID:26389911

  15. Dual-Stack Single-Radio Communication Architecture for UAV Acting As a Mobile Node to Collect Data in WSNs.

    PubMed

    Sayyed, Ali; de Araújo, Gustavo Medeiros; Bodanese, João Paulo; Becker, Leandro Buss

    2015-09-16

    The use of mobile nodes to collect data in a Wireless Sensor Network (WSN) has gained special attention over the last years. Some researchers explore the use of Unmanned Aerial Vehicles (UAVs) as mobile node for such data-collection purposes. Analyzing these works, it is apparent that mobile nodes used in such scenarios are typically equipped with at least two different radio interfaces. The present work presents a Dual-Stack Single-Radio Communication Architecture (DSSRCA), which allows a UAV to communicate in a bidirectional manner with a WSN and a Sink node. The proposed architecture was specifically designed to support different network QoS requirements, such as best-effort and more reliable communications, attending both UAV-to-WSN and UAV-to-Sink communications needs. DSSRCA was implemented and tested on a real UAV, as detailed in this paper. This paper also includes a simulation analysis that addresses bandwidth consumption in an environmental monitoring application scenario. It includes an analysis of the data gathering rate that can be achieved considering different UAV flight speeds. Obtained results show the viability of using a single radio transmitter for collecting data from the WSN and forwarding such data to the Sink node.

  16. Announcing Supercomputer Summit

    ScienceCinema

    Wells, Jack; Bland, Buddy; Nichols, Jeff; Hack, Jim; Foertter, Fernanda; Hagen, Gaute; Maier, Thomas; Ashfaq, Moetasim; Messer, Bronson; Parete-Koon, Suzanne

    2018-01-16

    Summit is the next leap in leadership-class computing systems for open science. With Summit we will be able to address, with greater complexity and higher fidelity, questions concerning who we are, our place on earth, and in our universe. Summit will deliver more than five times the computational performance of Titan’s 18,688 nodes, using only approximately 3,400 nodes when it arrives in 2017. Like Titan, Summit will have a hybrid architecture, and each node will contain multiple IBM POWER9 CPUs and NVIDIA Volta GPUs all connected together with NVIDIA’s high-speed NVLink. Each node will have over half a terabyte of coherent memory (high bandwidth memory + DDR4) addressable by all CPUs and GPUs plus 800GB of non-volatile RAM that can be used as a burst buffer or as extended memory. To provide a high rate of I/O throughput, the nodes will be connected in a non-blocking fat-tree using a dual-rail Mellanox EDR InfiniBand interconnect. Upon completion, Summit will allow researchers in all fields of science unprecedented access to solving some of the world’s most pressing challenges.

  17. An extended smart utilization medium access control (ESU-MAC) protocol for ad hoc wireless systems

    NASA Astrophysics Data System (ADS)

    Vashishtha, Jyoti; Sinha, Aakash

    2006-05-01

    The demand for spontaneous setup of a wireless communication system has increased in recent years for areas like battlefields, disaster relief operations etc., where a pre-deployment of network infrastructure is difficult or unavailable. A mobile ad-hoc network (MANET) is a promising solution, but poses a lot of challenges for all the design layers, specifically the medium access control (MAC) layer. Recent existing works have used the concepts of multi-channel and power control in designing MAC layer protocols. SU-MAC, developed by the same authors, efficiently uses the 'available' data and control bandwidth to send control information, resulting in increased throughput via decreased contention on the control channel. However, the SU-MAC protocol was limited to static ad-hoc networks and also faced the busy-receiver node problem. We present the Extended SU-MAC (ESU-MAC) protocol, which works with mobile nodes. Also, we significantly improve the scheme of control information exchange in ESU-MAC to overcome the busy-receiver node problem and thus further avoid the blockage of the control channel for longer periods of time. A power control scheme is used as before to reduce interference and to effectively re-use the available bandwidth. Simulation results show that the ESU-MAC protocol is promising for mobile ad-hoc networks in terms of reduced contention at the control channel and improved throughput because of channel re-use. Results show a considerable increase in throughput compared to SU-MAC, which can be attributed to increased accessibility of the control channel and improved utilization of data channels due to the superior control information exchange scheme.

  18. Adapting Wave-front Algorithms to Efficiently Utilize Systems with Deep Communication Hierarchies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerbyson, Darren J.; Lang, Michael; Pakin, Scott

    2011-09-30

    Large-scale systems increasingly exhibit a differential between intra-chip and inter-chip communication performance, especially in hybrid systems using accelerators. Processor cores on the same socket are able to communicate at lower latencies, and with higher bandwidths, than cores on different sockets either within the same node or between nodes. A key challenge is to efficiently use this communication hierarchy and hence optimize performance. We consider here the class of applications that contains wave-front processing. In these applications data can only be processed after their upstream neighbors have been processed. Similar dependencies result between processors, in which communication is required to pass boundary data downstream and whose cost is typically impacted by the slowest communication channel in use. In this work we develop a novel hierarchical wave-front approach that reduces the use of slower communications in the hierarchy, but at the cost of additional steps in the parallel computation and higher use of on-chip communications. This tradeoff is explored using a performance model. An implementation using the Reverse-acceleration programming model on the petascale Roadrunner system demonstrates a 27% performance improvement at full system-scale on a kernel application. The approach is generally applicable to large-scale multi-core and accelerated systems where a differential in system communication performance exists.
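The dependency pattern that drives this class of applications can be sketched in a few lines. The sketch below is illustrative (the grid, update rule, and sizes are assumptions, not from the paper): each cell depends on its north and west neighbours, so cells on the same anti-diagonal are independent and could run in parallel, while diagonals must be swept in order.

```python
# Minimal 2-D wave-front sweep: cell (i, j) needs (i-1, j) and (i, j-1).
# The update rule (1 + max of neighbours) is an illustrative placeholder.

def wavefront_sweep(n):
    """Fill an n x n grid where cell (i, j) = 1 + max(north, west)."""
    grid = [[0] * n for _ in range(n)]
    # Sweep anti-diagonals in order; cells within one diagonal are
    # independent and are where parallelism (and hierarchy-aware
    # grouping) would be exploited.
    for d in range(2 * n - 1):
        for i in range(max(0, d - n + 1), min(d + 1, n)):
            j = d - i
            north = grid[i - 1][j] if i > 0 else 0
            west = grid[i][j - 1] if j > 0 else 0
            grid[i][j] = 1 + max(north, west)
    return grid
```

The hierarchical variant in the paper changes how the diagonals are partitioned among cores and sockets, not this basic dependency structure.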

  19. A Cluster-Based Architecture to Structure the Topology of Parallel Wireless Sensor Networks

    PubMed Central

    Lloret, Jaime; Garcia, Miguel; Bri, Diana; Diaz, Juan R.

    2009-01-01

    A wireless sensor network is a self-configuring network of mobile nodes connected by wireless links where the nodes have limited capacity and energy. In many cases, the application environment requires the design of an exclusive network topology for a particular case. Cluster-based network developments and proposals in existence have been designed to build a network for just one type of node, where all nodes can communicate with any other nodes in their coverage area. Let us suppose a set of clusters of sensor nodes where each cluster is formed by different types of nodes (e.g., they could be classified by the sensed parameter using different transmitting interfaces, by the node profile or by the type of device: laptops, PDAs, sensors, etc.) and exclusive networks, as virtual networks, are needed with the same type of sensed data, or the same type of devices, or even the same type of profiles. In this paper, we propose an algorithm that is able to structure the topology of different wireless sensor networks to coexist in the same environment. It allows control and management of the topology of each network. The architecture operation and the protocol messages are described. Measurements from a real test-bench show that the designed protocol has low bandwidth consumption and also demonstrate the viability and the scalability of the proposed architecture. Our cluster-based algorithm is compared with other algorithms reported in the literature in terms of architecture and protocol measurements. PMID:22303185

  20. Cluster based architecture and network maintenance protocol for medical priority aware cognitive radio based hospital.

    PubMed

    Al Mamoon, Ishtiak; Muzahidul Islam, A K M; Baharun, Sabariah; Ahmed, Ashir; Komaki, Shozo

    2016-08-01

    Due to the rapid growth of wireless medical devices in the near future, wireless healthcare services may face some inescapable issues such as medical spectrum scarcity, electromagnetic interference (EMI), bandwidth constraints, security and, finally, the medical data communication model. To mitigate these issues, cognitive radio (CR) or opportunistic radio network enabled wireless technology is suitable for the upcoming wireless healthcare system. Up-to-date research on CR based healthcare has produced some developments on EMI and spectrum problems. However, investigations of system design and network models for CR enabled hospitals are rare. Thus, this research designs a hierarchy based hybrid network architecture and network maintenance protocols for the previously proposed CR hospital system, known as CogMed. In the previous study, the detailed architecture of CogMed and its maintenance protocols were not presented. The proposed architecture includes clustering concepts for cognitive base stations and non-medical devices. Two cluster head (CH) selector equations are formulated based on priority of location, device, mobility rate of devices and number of accessible channels. In order to maintain the integrity of the proposed network model, node joining and node leaving protocols are also proposed. Finally, the simulation results show that the proposed network maintenance time is very low for emergency medical devices (average maintenance period 9.5 ms) and the re-clustering effects for different mobility enabled non-medical devices are also balanced.
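A CH selector of the kind described above can be sketched as a weighted score over the four attributes the abstract names. The weights, attribute encodings, and the linear form are illustrative assumptions; the paper's actual CH selector equations are not reproduced here. Low mobility is assumed preferable, so it contributes inversely.

```python
# Hypothetical weighted cluster-head (CH) selection score over the four
# factors named in the abstract: location priority, device priority,
# mobility rate, and number of accessible channels.

def ch_score(loc_priority, dev_priority, mobility_rate, n_channels,
             w=(0.3, 0.3, 0.2, 0.2)):
    """Higher score -> better CH candidate. Weights w are assumptions."""
    return (w[0] * loc_priority
            + w[1] * dev_priority
            + w[2] * (1.0 / (1.0 + mobility_rate))  # prefer low mobility
            + w[3] * n_channels)                    # prefer more channels

def elect_ch(nodes):
    """nodes: list of (node_id, (loc, dev, mobility, channels)) pairs.
    Return the id of the highest-scoring candidate."""
    return max(nodes, key=lambda n: ch_score(*n[1]))[0]
```

In practice such a score would be recomputed when nodes join or leave, tying it to the maintenance protocols the paper proposes.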

  1. Resource Control in Large-Scale Mobile-Agents Systems

    DTIC Science & Technology

    2005-07-01

    wakeup node schedule , much energy can be conserved. We also designed several protocols for global clock synchronization. The most interesting one is...choice as to which remote hosts to visit and in which order. Scheduling mobile-agent migration in a way that minimizes bandwidth and other resource...use, therefore, is both feasible and attractive. Dartmouth considered several variations of the scheduling problem, and devel- oped an algorithm for

  2. Primary path reservation using enhanced slot assignment in TDMA for session admission.

    PubMed

    Koneri Chandrasekaran, Suresh; Savarimuthu, Prakash; Andi Elumalai, Priya; Ayyaswamy, Kathirvel

    2015-01-01

    A mobile ad hoc network (MANET) is a self-organized collection of nodes that communicates without any infrastructure. Providing quality of service (QoS) in such networks is a challenging task due to unreliable wireless links, mobility, lack of centralized coordination, and channel contention. The success of many real time applications is purely based on the QoS, which can be achieved by quality aware routing (QAR) and admission control (AC). Recently proposed QoS mechanisms focus exclusively on either reservation or admission control, and are not effective enough. In MANETs, high mobility causes frequent path breaks, forcing the source node to rediscover the route each time; in such cases the QoS session is affected. To admit a QoS session, admission control protocols must ensure the bandwidth of the relaying path before transmission starts; reservation of such bandwidth noticeably improves the admission control performance. Many TDMA based reservation mechanisms have been proposed but need some improvement over slot reservation procedures. In order to overcome this specific issue, we propose a framework, PRAC (primary path reservation admission control protocol), which achieves improved QoS by making use of a backup route combined with resource reservation. A network topology has been simulated and our approach proves to be a mechanism that admits the session effectively.
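The reservation-before-admission idea can be sketched as follows. This is a simplified assumption-laden model, not PRAC itself: each node tracks its occupied TDMA slots, and a hop can be reserved only in a slot free at both endpoints. Real TDMA admission must also account for interference among neighbouring links, which this sketch ignores.

```python
# Hypothetical slot reservation along a relaying path before admitting a
# QoS session. Slot model and conflict rules are deliberately simplified.

def reserve_path(path, busy, n_slots):
    """path: list of node ids; busy: dict node -> set of occupied slots.
    Reserve one slot per hop, free at both endpoints. Return the list of
    reserved slots, or None if any hop cannot be admitted."""
    reserved = []
    for u, v in zip(path, path[1:]):
        slot = next((s for s in range(n_slots)
                     if s not in busy[u] and s not in busy[v]), None)
        if slot is None:
            return None  # admission fails: no common free slot on this hop
        busy[u].add(slot)
        busy[v].add(slot)
        reserved.append(slot)
    return reserved
```

A backup route, as in PRAC, would simply be a second call to `reserve_path` over a disjoint path, kept in reserve against path breaks.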

  3. Reducing I/O variability using dynamic I/O path characterization in petascale storage systems

    DOE PAGES

    Son, Seung Woo; Sehrish, Saba; Liao, Wei-keng; ...

    2016-11-01

    In petascale systems with a million CPU cores, scalable and consistent I/O performance is becoming increasingly difficult to sustain, mainly because of I/O variability. This variability is caused by concurrently running processes/jobs competing for I/O, or by a RAID rebuild when a disk drive fails. In this paper, we propose a probing mechanism that enables application-level dynamic file striping to mitigate I/O variability: files are striped across a selected subset of I/O nodes with the lightest workload at runtime, to achieve the highest I/O bandwidth available in the system. We implement the proposed mechanism in the high-level I/O library that enables memory-to-file data layout transformation and allows transparent file partitioning using subfiling. Subfiling is a technique that partitions data into a set of smaller files and manages access to them, so that the data can be treated as a single, normal file by users. We demonstrate that our bandwidth probing mechanism can successfully identify temporally slower I/O nodes without noticeable runtime overhead. Experimental results on NERSC’s systems also show that our approach isolates I/O variability effectively on shared systems and improves overall collective I/O performance with less variation.
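The selection step can be sketched in a few lines. This is a minimal illustration under assumed inputs (probe measurements as a plain bandwidth dict, round-robin block placement); it is not the library's actual API or striping policy.

```python
# Hypothetical dynamic striping: pick the k least-loaded (highest probed
# bandwidth) I/O nodes, then round-robin data blocks over them (subfiling).

def select_stripe_targets(probe_bw, k):
    """probe_bw: dict node -> bandwidth measured by a short runtime probe.
    Return the k nodes with the highest observed bandwidth."""
    return sorted(probe_bw, key=probe_bw.get, reverse=True)[:k]

def assign_subfiles(blocks, targets):
    """Map each data block to an I/O node round-robin; each node's blocks
    form one subfile of the logically single file."""
    return {b: targets[i % len(targets)] for i, b in enumerate(blocks)}
```

Re-running the probe before each large collective write is what lets the scheme avoid temporally slow nodes rather than permanently excluding them.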

  4. Acoustic communications for cabled seafloor observatories

    NASA Astrophysics Data System (ADS)

    Freitag, L.; Stojanovic, M.

    2003-04-01

    Cabled seafloor observatories will provide scientists with a continuous presence in both deep and shallow water. In the deep ocean, connecting sensors to seafloor nodes for power and data transfer will require cables and a highly capable ROV, both of which are potentially expensive. For many applications where very high bandwidth is not required, and where a sensor is already designed to operate on battery power, the use of acoustic links should be considered. Acoustic links are particularly useful for large numbers of low-bandwidth sensors scattered over tens of square kilometers. Sensors used to monitor the chemistry and biology of vent fields are one example. Another important use for acoustic communication is monitoring of AUVs performing pre-programmed or adaptive sampling missions. A high data rate acoustic link with an AUV allows the observer on shore to direct the vehicle in real-time, providing for dynamic event response. Thus both fixed and mobile sensors motivate the development of observatory infrastructure that provides power-efficient, high bandwidth acoustic communication. A proposed system design that can provide the wireless infrastructure, and further examples of its use in networks such as NEPTUNE, are presented.

  5. Multi-input and binary reproducible, high bandwidth floating point adder in a collective network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Dong; Eisley, Noel A; Heidelberger, Philip

    To add floating point numbers in a parallel computing system, a collective logic device receives the floating point numbers from computing nodes. The collective logic device converts the floating point numbers to integer numbers, adds the integer numbers to generate a summation, and converts the summation back to a floating point number. The collective logic device performs the receiving, the conversions, and the adding in one pass. One pass indicates that the computing nodes send inputs only once to the collective logic device and receive outputs only once from it.
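The reason for the integer conversion is reproducibility: integer addition is associative, so the summation gives bit-identical results regardless of the order in which node contributions arrive, unlike naive floating point addition. A software sketch of the idea (the fixed-point scaling choice here is a simplification of the hardware's conversion, not the patented design):

```python
# Reproducible floating-point summation via a common fixed-point integer
# representation: scale to integers, add exactly, scale back.

def reproducible_sum(values, frac_bits=40):
    """Sum floats order-independently. frac_bits sets the fixed-point
    precision; values too large for the chosen scale would need a wider
    integer, which Python provides automatically."""
    scale = 1 << frac_bits
    total = sum(int(round(v * scale)) for v in values)  # exact integer add
    return total / scale
```

With naive float addition, `0.1 + 0.2 + 0.3` and `0.3 + 0.2 + 0.1` can differ in the last bit; after the integer conversion both orders produce identical bits, which is what "binary reproducible" demands of a collective network.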

  6. A novel unbalanced multiple description coder for robust video transmission over ad hoc wireless networks

    NASA Astrophysics Data System (ADS)

    Huang, Feng; Sun, Lifeng; Zhong, Yuzhuo

    2006-01-01

    Robust transmission of live video over ad hoc wireless networks presents new challenges: high bandwidth requirements are coupled with delay constraints; even a single packet loss causes error propagation until a complete video frame is coded in the intra-mode; ad hoc wireless networks suffer from bursty packet losses that drastically degrade the viewing experience. Accordingly, we propose a novel UMD coder capable of quickly recovering from losses and ensuring continuous playout. It uses 'peg' frames to prevent error propagation in the High-Resolution (HR) description and improve the robustness of key frames. The Low-Resolution (LR) coder works independently of the HR one, but they can also help each other recover from losses. Like many UMD coders, our UMD coder is drift-free, disruption-tolerant and able to make good use of the asymmetric available bandwidths of multiple paths. The simulation results under different conditions show that the proposed UMD coder has the highest decoded quality and lowest probability of pause when compared with concurrent UMDC techniques. The coder also has a comparable decoded quality, lower startup delay and lower probability of pause than a state-of-the-art FEC-based scheme. To provide robustness for video multicast applications, we propose non-end-to-end UMDC-based video distribution over a multi-tree multicast network. The multiplicity of parents decorrelates losses and the non-end-to-end feature increases the throughput of UMDC video data. We deploy an application-level service of LR description reconstruction in some intermediate nodes of the LR multicast tree. The principle behind this is to reconstruct the disrupted LR frames from the correctly received HR frames. As a result, the viewing experience at the downstream nodes benefits from the protection reconstruction at the upstream nodes.

  7. Evaluation Metrics for the Paragon XP/S-15

    NASA Technical Reports Server (NTRS)

    Traversat, Bernard; McNab, David; Nitzberg, Bill; Fineberg, Sam; Blaylock, Bruce T. (Technical Monitor)

    1993-01-01

    On February 17th 1993, the Numerical Aerodynamic Simulation (NAS) facility located at the NASA Ames Research Center installed a 224-node Intel Paragon XP/S-15 system. After its installation, the Paragon was found to be in a very immature state and was unable to support the NAS users' workload, composed of a wide range of development and production activities. As a first step towards addressing this problem, we implemented a set of metrics to objectively monitor the system as operating system and hardware upgrades were installed. The metrics were designed to measure four aspects of the system that we consider essential to support our workload: availability, utilization, functionality, and performance. This report presents the metrics collected from February 1993 to August 1993. Since its installation, the Paragon availability has improved from a low of 15% uptime to a high of 80%, while its utilization has remained low. Functionality and performance have improved from merely running one of the NAS Parallel Benchmarks to running all of them 1 to 2 times faster than on the iPSC/860. In spite of the progress accomplished, fundamental limitations of the Paragon operating system are restricting the Paragon from supporting the NAS workload. The maximum operating system message passing (NORMA IPC) bandwidth was measured at 11 Mbytes/s, well below the peak hardware bandwidth (175 Mbytes/s), limiting overall virtual memory and Unix services (i.e., disk and HiPPI I/O) performance. The high NX application message passing latency (184 microseconds), three times that of the iPSC/860, was found to significantly degrade performance of applications relying on small message sizes. The amount of memory available for an application was found to be approximately 10 Mbytes per node, indicating that the OS is taking more space than anticipated (6 Mbytes per node).

  8. An Energy Scaled and Expanded Vector-Based Forwarding Scheme for Industrial Underwater Acoustic Sensor Networks with Sink Mobility.

    PubMed

    Wadud, Zahid; Hussain, Sajjad; Javaid, Nadeem; Bouk, Safdar Hussain; Alrajeh, Nabil; Alabed, Mohamad Souheil; Guizani, Nadra

    2017-09-30

    Industrial Underwater Acoustic Sensor Networks (IUASNs) come with intrinsic challenges like long propagation delay, small bandwidth, large energy consumption, three-dimensional deployment, and high deployment and battery replacement cost. Any routing strategy proposed for IUASNs must take into account these constraints. The vector based forwarding schemes in the literature forward data packets to the sink using holding time and location information of the sender, forwarder, and sink nodes. Holding time suppresses data broadcasts; however, it fails to keep energy and delay fairness in the network. To achieve this, we propose an Energy Scaled and Expanded Vector-Based Forwarding (ESEVBF) scheme. ESEVBF uses the residual energy of the node to scale, and the vector pipeline distance ratio to expand, the holding time. The resulting scaled and expanded holding times of all forwarding nodes differ significantly, which avoids multiple forwarding and thereby reduces energy consumption and improves energy balance in the network. If a node has the minimum holding time among its neighbors, it shrinks the holding time and quickly forwards the data packets upstream. The performance of ESEVBF is analyzed through network scenarios with and without node mobility to ensure its effectiveness. Simulation results show that ESEVBF has low energy consumption, fewer forwarded data copies, and lower end-to-end delay.
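A holding-time rule in this spirit can be sketched as below. The exact scaling function is an illustrative assumption, not the ESEVBF equations: better candidates (more residual energy, closer to the forwarding vector) get a shorter holding time, so they forward first and suppress duplicate copies from slower neighbours.

```python
# Hypothetical ESEVBF-style holding time: scaled by residual energy and
# expanded by the node's distance from the routing vector.

def holding_time(residual_energy, initial_energy, dist_to_vector,
                 pipeline_radius, t_max=1.0):
    """Return a holding time in [0, t_max]; smaller -> forward sooner."""
    energy_scale = 1.0 - residual_energy / initial_energy   # 0 = full battery
    distance_expand = dist_to_vector / pipeline_radius      # 0 = on the vector
    return t_max * 0.5 * (energy_scale + distance_expand)
```

Because both terms spread candidates over the timer axis, two neighbours rarely expire together, which is the mechanism that reduces redundant forwarding.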

  9. Secure data aggregation in wireless sensor networks using homomorphic encryption

    NASA Astrophysics Data System (ADS)

    Kumar, Manish; Verma, Shekhar; Lata, Kusum

    2015-04-01

    In a Wireless Sensor Network (WSN), aggregation exploits the correlation between spatially and temporally proximate sensor data to reduce the total data volume to be transmitted to the sink. Mobile agents (MAs) fit into this paradigm, and data can be aggregated and collected by an MA from different sensor nodes using context specific codes. MA-based data collection suffers due to the large size of a typical WSN and is prone to security problems. In this article, homomorphic encryption in a clustered WSN has been proposed for secure and efficient data collection using MAs. The nodes keep encrypted data that are given to an MA for data aggregation tasks. The MA performs all the data aggregation operations upon encrypted data as it migrates between nodes in a tree-like structure in which the nodes are leaves and the cluster head is the root of the tree. It returns and deposits the encrypted aggregated data to the cluster head after traversing all the intra-cluster nodes over a shortest path route. The homomorphic encryption and aggregation processing in the encrypted domain make the data collection process secure. Simulation results confirm the effectiveness of the proposed secure data aggregation mechanism. In addition to security, the MA-based mechanism leads to lower delay and bandwidth requirements.
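Additively homomorphic aggregation can be illustrated with a lightweight keyed-masking scheme (in the style of Castelluccia-type WSN aggregation); this stands in for, and is not, the paper's scheme. Each node encrypts its reading as c = (m + k) mod M, the mobile agent sums ciphertexts without seeing any plaintext, and only the cluster head, which shares the keys, removes their sum.

```python
# Illustrative additively homomorphic aggregation via modular masking.
# M must exceed the largest possible aggregate so no wrap-around occurs.

M = 1 << 32

def encrypt(m, k):
    """Node-side: mask reading m with the node's secret key k."""
    return (m + k) % M

def aggregate(ciphertexts):
    """MA-side: add encrypted values while migrating node to node;
    no plaintext is ever exposed to the agent."""
    return sum(ciphertexts) % M

def decrypt_sum(agg, keys):
    """Cluster-head-side: subtract the sum of keys to recover the total."""
    return (agg - sum(keys)) % M
```

The homomorphic property is exactly what lets the MA aggregate in the encrypted domain as it walks the cluster tree.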

  10. National Test Bed Security and Communications Architecture Working Group Report

    DTIC Science & Technology

    1992-04-01

    computer systems via a physical medium. Most of those physical media are tappable or interceptable. This means that all the data that flows across the...provides the capability for NTBN nodes to support users operating in differing COIs to share the computing resources and communication media and for...representation. Again generally speaking, the NTBN must act as the high-speed, wide-bandwidth communications media that would provide the "near real-time

  11. Using IKAROS as a data transfer and management utility within the KM3NeT computing model

    NASA Astrophysics Data System (ADS)

    Filippidis, Christos; Cotronis, Yiannis; Markou, Christos

    2016-04-01

    KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that, located at the bottom of the Mediterranean Sea, will open a new window on the universe and answer fundamental questions in both particle physics and astrophysics. IKAROS is a framework that enables creating scalable storage formations on demand and helps address several limitations that current file systems face when dealing with very large scale infrastructures. It enables creating ad-hoc nearby storage formations and can use a huge number of I/O nodes in order to increase the available bandwidth (I/O and network). IKAROS unifies remote and local access in the overall data flow by permitting direct access to each I/O node. In this way we can handle the overall data flow at the network layer, limiting the interaction with the operating system. This approach allows virtually connecting, at the user level, the several different computing facilities used (Grids, Clouds, HPCs, data centers, local computing clusters and personal storage devices), on demand, based on need, by using well known standards and protocols like HTTP.

  12. Integration of communications and tracking data processing simulation for space station

    NASA Technical Reports Server (NTRS)

    Lacovara, Robert C.

    1987-01-01

    A simplified model of the communications network for the Communications and Tracking Data Processing System (CTDP) was developed. It was simulated by use of programs running on several on-site computers. These programs communicate with one another by means of both local area networks and direct serial connections. The domain of the model and its simulation is from the Orbital Replaceable Unit (ORU) interface to the Data Management System (DMS). The simulation was designed to allow status queries from remote entities across the DMS networks to be propagated through the model to several simulated ORUs. The ORU response is then propagated back to the remote entity which originated the request. Response times at the various levels were investigated in a multi-tasking, multi-user operating system environment. Results indicate that the effective bandwidth of the system may be too low to support expected data volume requirements under conventional operating systems. Instead, some form of embedded process control program may be required on the node computers.

  13. A practical model for pressure probe system response estimation (with review of existing models)

    NASA Astrophysics Data System (ADS)

    Hall, B. F.; Povey, T.

    2018-04-01

    The accurate estimation of the unsteady response (bandwidth) of pneumatic pressure probe systems (probe, line and transducer volume) is a common practical problem encountered in the design of aerodynamic experiments. Understanding the bandwidth of the probe system is necessary to capture unsteady flow features accurately. Where traversing probes are used, the desired traverse speed and spatial gradients in the flow dictate the minimum probe system bandwidth required to resolve the flow. Existing approaches for bandwidth estimation are either complex or inaccurate in implementation, so probes are often designed based on experience. Where probe system bandwidth is characterized, it is often done experimentally, requiring careful experimental set-up and analysis. There is a need for a relatively simple but accurate model for estimation of probe system bandwidth. A new model is presented for the accurate estimation of pressure probe bandwidth for simple probes commonly used in wind tunnel environments; experimental validation is provided. An additional, simple graphical method for air is included for convenience.
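As a point of reference for the kind of estimate involved, a classical first-order model treats the probe line terminating in the transducer cavity as a Helmholtz resonator, whose natural frequency bounds the usable bandwidth. This is a textbook formula offered for orientation only; it is not the model proposed in the paper.

```python
# Classical Helmholtz-resonator estimate for a pneumatic probe system:
# f = (c / 2*pi) * sqrt(A / (V * L)), with c the speed of sound (m/s),
# A the line cross-sectional area (m^2), L the line length (m), and
# V the transducer cavity volume (m^3).

import math

def helmholtz_frequency(c, line_area, line_length, cavity_volume):
    """Natural frequency (Hz) of a probe line + transducer cavity."""
    return (c / (2 * math.pi)) * math.sqrt(
        line_area / (cavity_volume * line_length))
```

The formula captures the qualitative design trade-offs the abstract alludes to: longer lines and larger transducer volumes both lower the natural frequency and hence the probe system bandwidth.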

  14. Unstructured P2P Network Load Balance Strategy Based on Multilevel Partitioning of Hypergraph

    NASA Astrophysics Data System (ADS)

    Feng, Lv; Chunlin, Gao; Kaiyang, Ma

    2017-05-01

    With the rapid development of computer performance and distributed technology, P2P-based resource sharing plays an important role in the Internet. As the number of P2P network users continues to increase, the highly dynamic characteristics of the system make it difficult for a node to obtain the load of other nodes. Therefore, a dynamic load balance strategy based on hypergraphs is proposed in this article. The scheme develops from the idea of multilevel partitioning in hypergraph theory. It adopts optimized multilevel partitioning algorithms to partition the P2P network into several small areas, and assigns each area a supernode for the management and load transferring of the nodes in that area. Where global scheduling is difficult to achieve, load balancing within a number of small areas can be prioritized first; through node load balance in each small area, the whole network can achieve relative load balance. The experiments indicate that the load distribution of network nodes in our scheme is noticeably more compact. It effectively solves the load imbalance problems in P2P networks and also improves the scalability and bandwidth utilization of the system.
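The per-area balancing step can be sketched as follows. The transfer rule (greedily move one unit of load from the busiest to the idlest node until their gap is within tolerance) is an illustrative assumption, not the paper's algorithm; the supernode would run something like this over only the nodes in its own area, avoiding any global view.

```python
# Hypothetical supernode-managed load transfer within one area.

def balance_area(loads, tolerance=1):
    """loads: dict node -> integer load units. Return a rebalanced copy
    in which the max-min load gap is within tolerance."""
    loads = dict(loads)
    while True:
        hi = max(loads, key=loads.get)
        lo = min(loads, key=loads.get)
        if loads[hi] - loads[lo] <= tolerance:
            return loads
        # Supernode directs one unit of load from hi to lo.
        loads[hi] -= 1
        loads[lo] += 1
```

Running this independently in every area is what yields the "relative load balance" of the whole network without global scheduling.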

  15. Game Theory-Based Cooperation for Underwater Acoustic Sensor Networks: Taxonomy, Review, Research Challenges and Directions.

    PubMed

    Muhammed, Dalhatu; Anisi, Mohammad Hossein; Zareei, Mahdi; Vargas-Rosales, Cesar; Khan, Anwar

    2018-02-01

    Exploring and monitoring the underwater world using underwater sensors is drawing a lot of attention these days. In this field, cooperation between acoustic sensor nodes has been a critical problem due to challenging features such as acoustic channel failure (sound signal), long propagation delay of acoustic signals, limited bandwidth and loss of connectivity. Several methods have been proposed to improve cooperation between the nodes by incorporating information/game theory in the nodes' cooperation. However, there is a need to classify the existing works and demonstrate their performance in addressing the cooperation issue. In this paper, we have conducted a review to investigate various factors affecting cooperation in underwater acoustic sensor networks. We study various cooperation techniques used for underwater acoustic sensor networks from different perspectives, with a concentration on communication reliability, energy consumption, and security, and present a taxonomy for underwater cooperation. Moreover, we further review how game theory can be applied to make the nodes cooperate with each other. We further analyze different cooperative game methods, where their performance on different metrics is compared. Finally, open issues and future research directions in underwater acoustic sensor networks are highlighted.

  16. Incremental Support Vector Machine Framework for Visual Sensor Networks

    NASA Astrophysics Data System (ADS)

    Awad, Mariette; Jiang, Xianhua; Motai, Yuichi

    2006-12-01

    Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM) technique as a new framework for action classification based on real-time multivideo collected by homogeneous sites. The technique is based on an adaptation of least square SVM (LS-SVM) formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by a visual behavior data acquisition and an online learning phase during which the cluster head performs an ensemble of model aggregations based on the sensor nodes inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single camera sensing especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows an adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system which makes it even more attractive for distributed sensor networks communication.

  17. An Application-Based Performance Characterization of the Columbia Supercluster

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Djomehri, Jahed M.; Hood, Robert; Jin, Hoaqiang; Kiris, Cetin; Saini, Subhash

    2005-01-01

    Columbia is a 10,240-processor supercluster consisting of 20 Altix nodes with 512 processors each, and currently ranked as the second-fastest computer in the world. In this paper, we present the performance characteristics of Columbia obtained on up to four computing nodes interconnected via the InfiniBand and/or NUMAlink4 communication fabrics. We evaluate floating-point performance, memory bandwidth, message passing communication speeds, and compilers using a subset of the HPC Challenge benchmarks, and some of the NAS Parallel Benchmarks including the multi-zone versions. We present detailed performance results for three scientific applications of interest to NASA, one from molecular dynamics, and two from computational fluid dynamics. Our results show that both the NUMAlink4 and the InfiniBand hold promise for application scaling to a large number of processors.

  18. Time-Efficient High-Rate Data Flooding in One-Dimensional Acoustic Underwater Sensor Networks

    PubMed Central

    Kwon, Jae Kyun; Seo, Bo-Min; Yun, Kyungsu; Cho, Ho-Shin

    2015-01-01

    Because underwater communication environments have poor characteristics, such as severe attenuation, large propagation delays and narrow bandwidths, data is normally transmitted at low rates through acoustic waves. On the other hand, as high traffic has recently been required in diverse areas, high rate transmission has become necessary. In this paper, transmission/reception timing schemes that maximize the time axis use efficiency to improve the resource efficiency for high rate transmission are proposed. The excellence of the proposed scheme is identified by examining the power distributions by node, rate bounds, power levels depending on the rates and number of nodes, and network split gains through mathematical analysis and numerical results. In addition, the simulation results show that the proposed scheme outperforms the existing packet train method. PMID:26528983

  19. High-port low-latency optical switch architecture with optical feed-forward buffering for 256-node disaggregated data centers.

    PubMed

    Terzenidis, Nikos; Moralis-Pegios, Miltiadis; Mourgias-Alexandris, George; Vyrsokinos, Konstantinos; Pleros, Nikos

    2018-04-02

    Departing from traditional server-centric data center architectures towards disaggregated systems that can offer increased resource utilization at reduced cost and energy envelopes, the use of high-port switching with highly stringent latency and bandwidth requirements becomes a necessity. We present an optical switch architecture exploiting a hybrid broadcast-and-select/wavelength routing scheme with small-scale optical feedforward buffering. The architecture is experimentally demonstrated at 10 Gb/s, reporting error-free performance with a power penalty of <2.5 dB. Moreover, network simulations for a 256-node system revealed low latency values of only 605 ns, at throughput values reaching 80% when employing 2-packet-size optical buffers, while multi-rack network performance was also investigated.

  20. A multiprocessor computer simulation model employing a feedback scheduler/allocator for memory space and bandwidth matching and TMR processing

    NASA Technical Reports Server (NTRS)

    Bradley, D. B.; Irwin, J. D.

    1974-01-01

    A computer simulation model for a multiprocessor computer is developed that is useful for studying the problem of matching a multiprocessor's memory space, memory bandwidth, and numbers and speeds of processors with aggregate job set characteristics. The model assumes an input work load of a set of recurrent jobs. The model includes a feedback scheduler/allocator which attempts to improve system performance through higher memory bandwidth utilization by matching individual job requirements for space and bandwidth with space availability and estimates of bandwidth availability at the times of memory allocation. The simulation model includes provisions for specifying precedence relations among the jobs in a job set, and provisions for specifying execution of TMR (Triple Modular Redundant) and SIMPLEX (non-redundant) jobs.

  1. Digital seismo-acoustic signal processing aboard a wireless sensor platform

    NASA Astrophysics Data System (ADS)

    Marcillo, O.; Johnson, J. B.; Lorincz, K.; Werner-Allen, G.; Welsh, M.

    2006-12-01

    We are developing a low-power, low-cost wireless sensor array to conduct real-time signal processing of earthquakes at active volcanoes. The sensor array, which integrates data from both seismic and acoustic sensors, is based on Moteiv TMote Sky wireless sensor nodes (www.moteiv.com). The nodes feature a Texas Instruments MSP430 microcontroller, 48 Kbytes of program memory, 10 Kbytes of static RAM, 1 Mbyte of external flash memory, and a 2.4-GHz Chipcon CC2420 IEEE 802.15.4 radio. The TMote Sky is programmed in TinyOS. Basic signal processing occurs on an array of three peripheral sensor nodes. These nodes are tied into a dedicated GPS receiver node, which is focused on time synchronization, and a central communications node, which handles data integration and additional processing. The sensor nodes incorporate dual 12-bit digitizers sampling a seismic sensor and a pressure transducer at 100 samples per second. The wireless capabilities of the system allow flexible array geometry, with a maximum aperture of 200 m. We have already developed the digital signal processing routines on board the Moteiv TMote sensor nodes. The developed routines accomplish Real-time Seismic-Amplitude Measurement (RSAM), Seismic Spectral-Amplitude Measurement (SSAM), and a user-configured short-term average/long-term average (STA/LTA) ratio, which is used to calculate first arrivals. The processed data from individual nodes are transmitted back to a central node, where additional processing may be performed. Such processing will include back-azimuth determination and other wave field analyses. Future on-board signal processing will focus on event characterization utilizing pattern recognition and spectral characterization. The processed data are intended as low-bandwidth information which can be transmitted periodically and at low cost through satellite telemetry to a web server. The processing is limited by the computational capabilities (RAM, ROM) of the nodes. Nevertheless, we envision this product to be a useful tool for assessing the state of unrest at remote volcanoes.
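
    The STA/LTA first-arrival trigger mentioned above can be sketched in a few lines; the window lengths and threshold below are illustrative placeholders, not the instrument's configured values.

```python
from collections import deque

def sta_lta(samples, sta_len=5, lta_len=50):
    """Running short-term-average / long-term-average ratio of |amplitude|.

    Returns one ratio per sample (0.0 until the long window has filled).
    """
    sta_win, lta_win = deque(maxlen=sta_len), deque(maxlen=lta_len)
    out = []
    for s in samples:
        a = abs(s)
        sta_win.append(a)
        lta_win.append(a)
        if len(lta_win) < lta_len:
            out.append(0.0)           # long window not yet full
        else:
            lta = sum(lta_win) / lta_len
            out.append((sum(sta_win) / sta_len) / lta if lta > 0 else 0.0)
    return out

def first_arrival(samples, threshold=3.0, **kw):
    """Index of the first sample whose STA/LTA exceeds the trigger threshold."""
    for i, r in enumerate(sta_lta(samples, **kw)):
        if r >= threshold:
            return i
    return None
```

    On a quiet trace followed by a sudden amplitude jump, the ratio spikes at the onset and the trigger fires there.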

  2. Polyhedral integrated and free space optical interconnection

    DOEpatents

    Erteza, I.A.

    1998-01-06

    An optical communication system uses holographic optical elements to provide guided wave and non-guided communication, resulting in high bandwidth, high connectivity optical communications. Holograms within holographic optical elements route optical signals between elements and between nodes connected to elements. Angular and wavelength multiplexing allow the elements to provide high connectivity. The combination of guided and non-guided communication allows compact polyhedral system geometries. Guided wave communications provided by multiplexed substrate-mode holographic optical elements eases system alignment. 7 figs.

  3. Polyhedral integrated and free space optical interconnection

    DOEpatents

    Erteza, Ireena A.

    1998-01-01

    An optical communication system uses holographic optical elements to provide guided wave and non-guided communication, resulting in high bandwidth, high connectivity optical communications. Holograms within holographic optical elements route optical signals between elements and between nodes connected to elements. Angular and wavelength multiplexing allow the elements to provide high connectivity. The combination of guided and non-guided communication allows compact polyhedral system geometries. Guided wave communications provided by multiplexed substrate-mode holographic optical elements eases system alignment.

  4. TRIO: Burst Buffer Based I/O Orchestration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Teng; Oral, H Sarp; Pritchard, Michael

    The growing computing power on leadership HPC systems is often accompanied by ever-escalating failure rates. Checkpointing is a common defensive mechanism used by scientific applications for failure recovery. However, directly writing the large and bursty checkpointing dataset to the parallel filesystem can incur significant I/O contention on storage servers. Such contention in turn degrades the raw bandwidth utilization of storage servers and prolongs the average job I/O time of concurrent applications. Recently, burst buffers have been proposed as an intermediate layer to absorb the bursty I/O traffic from compute nodes to the storage backend. But an I/O orchestration mechanism is still desired to efficiently move checkpointing data from burst buffers to the storage backend. In this paper, we propose a burst-buffer-based I/O orchestration framework, named TRIO, to intercept and reshape the bursty writes for better sequential write traffic to storage servers. Meanwhile, TRIO coordinates the flushing orders among concurrent burst buffers to alleviate the contention on storage server bandwidth. Our experimental results reveal that TRIO can deliver 30.5% higher bandwidth and reduce the average job I/O time by 37% on average for data-intensive applications in various checkpointing scenarios.
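
    The two mechanisms attributed to TRIO above, reshaping bursty writes into sequential traffic and coordinating flush order across burst buffers, can be sketched as follows; the buffer layout is a hypothetical stand-in, not TRIO's actual data structures.

```python
def orchestrate_flush(buffers):
    """Sketch of burst-buffer I/O orchestration (illustrative only).

    buffers: {node: [(file_offset, data), ...]} -- bursty, unordered writes
    captured in each node's burst buffer. Buffers are drained one node at a
    time (a coordinated flushing order), and each buffer's writes are sorted
    by offset so the storage server sees sequential traffic.
    """
    flush_log = []
    for node in sorted(buffers):                 # coordinated flush order
        for off, data in sorted(buffers[node]):  # reshape into sequential writes
            flush_log.append((node, off, data))
    return flush_log
```

    The log shows one node flushing at a time, each in ascending-offset order, which is the contention-avoiding pattern the abstract describes.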

  5. Experimental performance evaluation of software defined networking (SDN) based data communication networks for large scale flexi-grid optical networks.

    PubMed

    Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo

    2014-04-21

    Software defined networking (SDN) has become the focus in the current information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. The optical transport network is also regarded as an important application scenario for SDN, which is adopted as the enabling technology of data communication networks (DCN) instead of generalized multi-protocol label switching (GMPLS). However, the practical performance of SDN-based DCN for large-scale optical networks, which is very important for technology selection in future optical network deployment, has not been evaluated up to now. In this paper we have built a large-scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN-based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time have been demonstrated under various network environments, such as with different traffic loads and different DCN bandwidths. The demonstration in this work can be taken as a proof for future network deployment.

  6. Secured Hash Based Burst Header Authentication Design for Optical Burst Switched Networks

    NASA Astrophysics Data System (ADS)

    Balamurugan, A. M.; Sivasubramanian, A.; Parvathavarthini, B.

    2017-12-01

    Optical burst switching (OBS) is a promising technology that could meet the fast-growing network demand. It is featured with the ability to meet the bandwidth requirements of applications that demand intensive bandwidth. OBS proves to be a satisfactory technology to tackle the huge bandwidth constraints, but suffers from security vulnerabilities. The objective of this proposed work is to design a faster and more efficient burst header authentication algorithm for core nodes. There are two important key features in this work, viz., header encryption and authentication. Since the burst header is an important component of an optical burst switched network, it has to be encrypted; otherwise it is prone to attack. The proposed MD5&RC4-4S based burst header authentication algorithm runs 20.75 ns faster than the conventional algorithms. The modification suggested in the proposed RC4-4S algorithm gives better security and solves the correlation problems between the publicly known outputs during the key generation phase. The modified MD5 recommended in this work provides a 7.81% better avalanche effect than the conventional algorithm. The device utilization result also shows the suitability of the proposed algorithm for header authentication in real-time applications.
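
    For illustration, the header-encryption-plus-authentication idea can be shown with textbook RC4 and stock MD5; note these are the standard algorithms, not the paper's modified RC4-4S or modified MD5.

```python
import hashlib

def rc4_keystream_encrypt(key: bytes, data: bytes) -> bytes:
    """Textbook RC4 (not the paper's RC4-4S) -- illustration only.

    RC4 is symmetric: applying it twice with the same key recovers the data.
    """
    S = list(range(256))
    j = 0
    for i in range(256):                          # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for b in data:                                # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def protect_header(key: bytes, header: bytes):
    """Encrypt the burst header and attach an MD5 digest for authentication."""
    return rc4_keystream_encrypt(key, header), hashlib.md5(header).hexdigest()
```

    A core node holding the key decrypts the header and recomputes the digest to verify it was not tampered with in transit.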

  7. Network bandwidth utilization forecast model on high bandwidth networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Wucherl; Sim, Alex

    With the increasing number of geographically distributed scientific collaborations and the scale of the data size growth, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with traditional approaches such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt network usage changes. The accuracy of the forecast model is within the standard deviation of the monitored measurements.
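
    A stdlib-only sketch of the decomposition idea behind the STL+ARIMA model: split the series into a level plus per-phase seasonal offsets and project both forward. A real deployment would use, e.g., statsmodels' STL and ARIMA classes rather than this simplification.

```python
def seasonal_forecast(history, period, horizon):
    """Decompose-and-project sketch (seasonal means, not actual STL/ARIMA).

    history: past utilization samples; period: seasonal cycle length in
    samples; horizon: number of future samples to forecast.
    """
    level = sum(history) / len(history)           # overall level
    seasonal = []
    for phase in range(period):                   # mean offset per phase
        vals = history[phase::period]
        seasonal.append(sum(vals) / len(vals) - level)
    return [level + seasonal[(len(history) + h) % period]
            for h in range(horizon)]
```

    On a perfectly periodic series the projection reproduces the cycle exactly; real SNMP traces would also need the ARIMA residual model.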

  8. Network Bandwidth Utilization Forecast Model on High Bandwidth Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Wucherl; Sim, Alex

    With the increasing number of geographically distributed scientific collaborations and the scale of the data size growth, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and scheduling of data movements on high-bandwidth networks to accommodate the ever-increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with traditional approaches such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt network usage changes. The accuracy of the forecast model is within the standard deviation of the monitored measurements.

  9. Dynamic storage in resource-scarce browsing multimedia applications

    NASA Astrophysics Data System (ADS)

    Elenbaas, Herman; Dimitrova, Nevenka

    1998-10-01

    In the convergence of information and entertainment there is a conflict between the consumer's expectation of fast access to high-quality multimedia content through narrow-bandwidth channels and the size of this content. During the retrieval and information presentation of a multimedia application there are two problems that have to be solved: the limited bandwidth during transmission of the retrieved multimedia content and the limited memory for temporary caching. In this paper we propose an approach for latency optimization in information browsing applications. We propose a method for flattening hierarchically linked documents in a manner convenient for network transport over slow channels to minimize browsing latency. Flattening of the hierarchy involves linearization, compression and bundling of the document nodes. After the transfer, the compressed hierarchy is stored on a local device where it can be partly unbundled to fit the caching limits at the local site while giving the user access to the content.
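
    The flattening pipeline (linearization, then compression, then bundling) might look like the sketch below; the node schema with `name`/`content`/`children` keys is an assumption for illustration, not the paper's actual document format.

```python
import json
import zlib

def bundle(tree):
    """Linearize a document hierarchy depth-first, then compress the bundle.

    tree: {"name": ..., "content": ..., "children": [...]} (hypothetical).
    Returns a single compressed blob suitable for one slow-channel transfer.
    """
    order = []
    def walk(node, path):
        order.append({"path": path, "content": node["content"]})
        for child in node.get("children", []):
            walk(child, path + "/" + child["name"])
    walk(tree, tree["name"])
    return zlib.compress(json.dumps(order).encode())

def unbundle(blob):
    """Decompress and restore the linearized node list at the local site."""
    return json.loads(zlib.decompress(blob).decode())
```

    Partial unbundling, to fit local caching limits, would decompress and keep only a prefix of the linearized node list.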

  10. Designing a VMEbus FDDI adapter card

    NASA Astrophysics Data System (ADS)

    Venkataraman, Raman

    1992-03-01

    This paper presents a system architecture for a VMEbus FDDI adapter card containing a node core, FDDI block, frame buffer memory and system interface unit. Most of the functions of the PHY and MAC layers of FDDI are implemented with National's FDDI chip set, and the SMT implementation is simplified with a low-cost microcontroller. The factors that influence the system bus bandwidth utilization and FDDI bandwidth utilization are the data path and frame buffer memory architecture. The VRAM-based frame buffer memory has two sections: LLC frame memory and SMT frame memory. Each section with an independent serial access memory (SAM) port provides independent access after the initial data transfer cycle on the main port, and hence the throughput is maximized on each port of the memory. The SAM port simplifies the system bus master DMA design, and the VMEbus interface can be designed with low-cost off-the-shelf interface chips.

  11. Distributed reservation control protocols for random access broadcasting channels

    NASA Technical Reports Server (NTRS)

    Greene, E. P.; Ephremides, A.

    1981-01-01

    Attention is given to a communication network consisting of an arbitrary number of nodes which can communicate with each other via a time-division multiple access (TDMA) broadcast channel. The reported investigation is concerned with the development of efficient distributed multiple access protocols for traffic consisting primarily of single packet messages in a datagram mode of operation. The motivation for the design of the protocols came from the consideration of efficient multiple access utilization of moderate to high bandwidth (4-40 Mbit/s capacity) communication satellite channels used for the transmission of short (1000-10,000 bits) fixed length packets. Under these circumstances, the ratio of roundtrip propagation time to packet transmission time is between 100 to 10,000. It is shown how a TDMA channel can be adaptively shared by datagram traffic and constant bandwidth users such as in digital voice applications. The distributed reservation control protocols described are a hybrid between contention and reservation protocols.

  12. Interconnect Performance Evaluation of SGI Altix 3700 BX2, Cray X1, Cray Opteron Cluster, and Dell PowerEdge

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod; Saini, Subbash; Ciotti, Robert

    2006-01-01

    We study the performance of inter-process communication on four high-speed multiprocessor systems using a set of communication benchmarks. The goal is to identify certain limiting factors and bottlenecks with the interconnects of these systems, as well as to compare these interconnects. We measured network bandwidth using different numbers of communicating processors and communication patterns, such as point-to-point communication, collective communication, and dense communication patterns. The four platforms are: a 512-processor SGI Altix 3700 BX2 shared-memory machine with 3.2 GB/s links; a 64-processor (single-streaming) Cray X1 shared-memory machine with 32 1.6 GB/s links; a 128-processor Cray Opteron cluster using a Myrinet network; and a 1280-node Dell PowerEdge cluster with an InfiniBand network. Our results show the impact of the network bandwidth and topology on the overall performance of each interconnect.

  13. Interior Noise Predictions in the Preliminary Design of the Large Civil Tiltrotor (LCTR2)

    NASA Technical Reports Server (NTRS)

    Grosveld, Ferdinand W.; Cabell, Randolph H.; Boyd, David D.

    2013-01-01

    A prediction scheme was established to compute sound pressure levels in the interior of a simplified cabin model of the second generation Large Civil Tiltrotor (LCTR2) during cruise conditions, while being excited by turbulent boundary layer flow over the fuselage, or by tiltrotor blade loading and thickness noise. Finite element models of the cabin structure, interior acoustic space, and acoustically absorbent (poro-elastic) materials in the fuselage were generated and combined into a coupled structural-acoustic model. Fluctuating power spectral densities were computed according to the Efimtsov turbulent boundary layer excitation model. Noise associated with the tiltrotor blades was predicted in the time domain as fluctuating surface pressures and converted to power spectral densities at the fuselage skin finite element nodes. A hybrid finite element (FE) approach was used to compute the low frequency acoustic cabin response over the frequency range 6-141 Hz with a 1 Hz bandwidth, and the Statistical Energy Analysis (SEA) approach was used to predict the interior noise for the 125-8000 Hz one-third octave bands.

  14. New optimization model for routing and spectrum assignment with nodes insecurity

    NASA Astrophysics Data System (ADS)

    Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli

    2017-04-01

    By adopting the orthogonal frequency division multiplexing technology, elastic optical networks can provide flexible and variable bandwidth allocation to each connection request and achieve higher spectrum utilization. The routing and spectrum assignment problem in elastic optical networks is a well-known NP-hard problem. In addition, information security has received worldwide attention. We combine these two problems to investigate the routing and spectrum assignment problem with guaranteed security in elastic optical networks, and establish a new optimization model to minimize the maximum index of the used frequency slots, which is used to determine an optimal routing and spectrum assignment scheme. To solve the model effectively, a hybrid genetic algorithm framework integrating a heuristic algorithm into a genetic algorithm is proposed. The heuristic algorithm is first used to sort the connection requests, and then the genetic algorithm is designed to look for an optimal routing and spectrum assignment scheme. In the genetic algorithm, tailor-made crossover, mutation and local search operators are designed. Moreover, simulation experiments are conducted with three heuristic strategies, and the experimental results indicate the effectiveness of the proposed model and algorithm framework.
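
    The spectrum-assignment subproblem (keeping the maximum used frequency-slot index low) can be illustrated with a simple first-fit heuristic over precomputed routes; this is a stand-in for exposition only, as the paper's GA additionally searches over routes and applies crossover/mutation/local search.

```python
def first_fit_assign(requests, link_count, slot_count):
    """First-fit spectrum assignment sketch.

    requests: list of (route_links, n_slots), where route_links are the link
    indices of a precomputed path and n_slots contiguous slots are needed on
    every link of that path (spectrum-continuity constraint).
    Returns the chosen start slot per request, or None where blocked.
    """
    used = [[False] * slot_count for _ in range(link_count)]
    starts = []
    for links, n in requests:
        placed = None
        for s in range(slot_count - n + 1):      # lowest feasible index first
            if all(not used[l][s + k] for l in links for k in range(n)):
                placed = s
                for l in links:
                    for k in range(n):
                        used[l][s + k] = True
                break
        starts.append(placed)
    return starts
```

    Filling from the lowest index keeps the maximum used slot index small, which is the model's objective.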

  15. Communication analysis for feedback control of civil infrastructure using cochlea-inspired sensing nodes

    NASA Astrophysics Data System (ADS)

    Peckens, Courtney A.; Cook, Ireana; Lynch, Jerome P.

    2016-04-01

    Wireless sensor networks (WSNs) have emerged as a reliable, low-cost alternative to the traditional wired sensing paradigm. While such networks have made significant progress in the field of structural monitoring, significantly less development has occurred for feedback control applications. Previous work in WSNs for feedback control has highlighted many of the challenges of using this technology including latency in the wireless communication channel and computational inundation at the individual sensing nodes. This work seeks to overcome some of those challenges by drawing inspiration from the real-time sensing and control techniques employed by the biological central nervous system and in particular the mammalian cochlea. A novel bio-inspired wireless sensor node was developed that employs analog filtering techniques to perform time-frequency decomposition of a sensor signal, thus encompassing the functionality of the cochlea. The node then utilizes asynchronous sampling of the filtered signal to compress the signal prior to communication. This bio-inspired sensing architecture is extended to a feedback control application in order to overcome the traditional challenges currently faced by wireless control. In doing this, however, the network experiences high bandwidths of low-significance information exchange between nodes, resulting in some lost data. This study considers the impact of this lost data on the control capabilities of the bio-inspired control architecture and finds that it does not significantly impact the effectiveness of control.
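
    The asynchronous sampling used to compress the filtered signal can be sketched with a send-on-delta rule: a node transmits only when the signal has moved meaningfully since its last transmission. The delta threshold below is an illustrative parameter, not the sensor node's actual setting.

```python
def send_on_delta(samples, delta):
    """Send-on-delta (asynchronous) sampling sketch.

    A sample is transmitted only when it differs from the last transmitted
    value by more than `delta`. Returns the transmitted (index, value) events;
    everything else stays on the node, saving bandwidth.
    """
    events = []
    last = None
    for i, v in enumerate(samples):
        if last is None or abs(v - last) > delta:
            events.append((i, v))
            last = v
    return events
```

    Slowly varying stretches of the filtered signal produce almost no traffic, while sharp changes are reported immediately, which is the compression behavior the abstract describes.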

  16. Intra-Chip Free-Space Optical Interconnect: System, Device, Integration and Prototyping

    NASA Astrophysics Data System (ADS)

    Ciftcioglu, Berkehan

    Currently proposed on-chip optical interconnect schemes utilize circuit switching using wavelength division multiplexing (WDM) or all-optical packet switching, all based on planar optical waveguides and related photonic devices such as microrings. These proposed approaches pose significant challenges in latency, energy efficiency, integration, and scalability. This thesis presents a new alternative approach by utilizing free-space optics. This 3-D integrated intra-chip free-space optical interconnect (FSOI) leverages mature photonic devices such as integrated lasers, photodiodes, microlenses and mirrors. It takes full advantage of the latest developments in 3-D integration technologies. This interconnect system provides point-to-point free-space optical links between any two communication nodes to construct an all-to-all intra-chip communication network with little or no arbitration. Therefore, it has significant networking advantages over conventional electrical and waveguide-based optical interconnects. An FSOI system is evaluated based on real device parameters, predictive technology models and International Technology Roadmap for Semiconductors (ITRS) predictions. A single FSOI link achieves a 10-Gbps data rate with 0.5-pJ/bit energy efficiency and a bit-error rate (BER) below 10^-12. A system using this individual link can provide scalability up to 36 nodes, providing 10-Tbps aggregate bandwidth. A comparison analysis performed between a WDM-based waveguide interconnect system and the proposed FSOI system shows that FSOI achieves better energy efficiency than the WDM one as the technology scales. Similarly, network simulation on a 16-core microprocessor using the proposed FSOI system instead of mesh networks has been shown to speed up the system by 12% and reduce the energy consumption by 33%.
As a part of the development of a 3-D integrated FSOI system, operating at 850 nm with a 10-Gbps data rate per optical link, the photonic devices and optical components are individually designed and fabricated. The photodiodes (PDs) are designed to have large area for efficient light coupling and low capacitance to achieve large bandwidth, while achieving reasonably high responsivity. A metal-semiconductor-metal (MSM) structure is chosen over p-i-n ones to reduce parasitic capacitance per area and to allow less stringent microlens-to-PD alignment for efficient light coupling with a large bandwidth. A novel MSM germanium PD is implemented using an amorphous silicon (a-Si) layer on top of the undoped germanium substrate, serving as a barrier enhancement layer, mitigating the low Schottky barrier height for holes due to Fermi level pinning, and as a surface passivation layer, preventing charge accumulation and image force lowering of the barrier. Therefore, the dark current is reduced and low-frequency gain is eliminated. The PDs achieve a 13-GHz bandwidth with a 0.315-A/W responsivity and a 1.7-nA/μm² dark current density. The microlenses are fabricated on a fused silica substrate based on the photoresist melt-and-reflow technique, followed by dry etching into the fused silica substrate. The measured focal length of a 220-μm aperture size microlens is 350 μm away from the backside of the substrate. The vertical-cavity surface-emitting lasers (VCSELs) are fabricated on a commercial molecular beam epitaxy (MBE) grown GaAs wafer. The fabricated 8-μm aperture size VCSEL can achieve 0.65-mW optical power at a 1.5-mA forward bias current with a threshold current of 0.48 mA and a 0.67-A/W slope efficiency. Three prototypes are implemented via integrating the individually fabricated components using non-conductive epoxy and wirebonding. 
The first prototype, built on a printed circuit board (PCB) using commercial VCSEL arrays, achieves a 5-dB transmission loss and less than -30-dB crosstalk at 1-cm distance with a small-signal bandwidth of 10 GHz, limited by the VCSEL. The second board-level prototype uses all fabricated components integrated on a PCB. The prototype achieves a 9-dB transmission loss at 3-cm distance and a 4.4-GHz bandwidth. The chip-level prototype is built on a germanium carrier with integrated MSM Ge PDs, microlenses on fused silica and VCSEL chip on GaAs substrates. The prototype achieves 4-dB transmission loss at 1 cm and 3.3-GHz bandwidth, limited by commercial VCSEL bandwidth. (Abstract shortened by UMI.)

  17. The role of predicted solar activity in TOPEX/Poseidon orbit maintenance maneuver design

    NASA Technical Reports Server (NTRS)

    Frauenholz, Raymond B.; Shapiro, Bruce E.

    1992-01-01

    Following launch in June 1992, the TOPEX/Poseidon satellite will be placed in a near-circular frozen orbit at an altitude of about 1336 km. Orbit maintenance maneuvers are planned to assure that all nodes of the 127-orbit, 10-day repeat ground track remain within a 2-km equatorial longitude bandwidth. Orbit determination, maneuver execution, and atmospheric drag prediction errors limit overall targeting performance. This paper focuses on the effects of drag modeling errors, with primary emphasis on the role of SESC solar activity predictions, especially the 27-day outlook of the 10.7-cm solar flux and geomagnetic index used by a simplified version of the Jacchia-Roberts density model developed for this TOPEX/Poseidon application. For data evaluated from 1983-90, the SESC outlook performed better than a simpler persistence strategy, especially during the first 7-10 days. A targeting example illustrates the use of ground track biasing to compensate for expected orbit prediction errors, emphasizing the role of solar activity prediction errors.

  18. Event-triggered distributed filtering over sensor networks with deception attacks and partial measurements

    NASA Astrophysics Data System (ADS)

    Bu, Xianye; Dong, Hongli; Han, Fei; Li, Gongfa

    2018-07-01

    This paper is concerned with the distributed filtering problem for a class of time-varying systems subject to deception attacks and event-triggering protocols. Due to the bandwidth limitation, an event-triggered communication strategy is adopted to alleviate the data transmission pressure in the algorithm implementation process. The partial-nodes-based filtering problem is considered, where only a subset of the nodes can measure the information of the plant. Meanwhile, the measurement information may suffer deception attacks during transmission. Sufficient conditions are established such that the error dynamics satisfies the prescribed average H∞ performance constraints. The parameters of the designed filters can be calculated by solving a series of recursive linear matrix inequalities. A simulation example is presented to demonstrate the effectiveness of the proposed filtering method.
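
    A common quadratic trigger rule from the event-triggered filtering literature conveys the idea (the paper's exact triggering condition may differ): a node retransmits only when the deviation from its last transmitted value is large relative to the current measurement's magnitude.

```python
def event_triggered_stream(measurements, sigma=0.25):
    """Sketch of an event-triggered transmission rule (illustrative sigma).

    Retransmit measurement y_k only when (y_k - y_last)^2 > sigma * y_k^2,
    i.e. when the change since the last transmission is significant relative
    to the current signal; small fluctuations are suppressed to save bandwidth.
    """
    sent = []
    last = None
    for k, y in enumerate(measurements):
        if last is None or (y - last) ** 2 > sigma * y ** 2:
            sent.append((k, y))
            last = y
    return sent
```

    Larger sigma means fewer transmissions at the cost of stale data at the filter, which is the bandwidth/accuracy trade-off the protocol manages.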

  19. Meeting the future metro network challenges and requirements by adopting programmable S-BVT with direct-detection and PDM functionality

    NASA Astrophysics Data System (ADS)

    Nadal, Laia; Svaluto Moreolo, Michela; Fàbrega, Josep M.; Vílchez, F. Javier

    2017-07-01

    In this paper, we propose an advanced programmable sliceable-bandwidth variable transceiver (S-BVT) with polarization division multiplexing (PDM) capability as a key enabler to fulfill the requirements for future 5G networks. Thanks to its cost-effective optoelectronic front-end based on orthogonal frequency division multiplexing (OFDM) technology and direct-detection (DD), the proposed S-BVT becomes suitable for next generation highly flexible and scalable metro networks. Polarization beam splitters (PBSs) and controllers (PCs), available on-demand, are included at the transceivers and at the network nodes, further enhancing the system flexibility and promoting an efficient use of the spectrum. 40G-100G PDM transmission has been experimentally demonstrated, within a 4-node photonic mesh network (ADRENALINE testbed), implementing a simplified equalization process.

  20. Adapting wave-front algorithms to efficiently utilize systems with deep communication hierarchies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerbyson, Darren J; Lang, Michael; Pakin, Scott

    2009-01-01

    Large-scale systems increasingly exhibit a differential between intra-chip and inter-chip communication performance. Processor cores on the same socket are able to communicate at lower latencies, and with higher bandwidths, than cores on different sockets either within the same node or between nodes. A key challenge is to efficiently use this communication hierarchy and hence optimize performance. We consider here the class of applications that contain wave-front processing. In these applications data can only be processed after their upstream neighbors have been processed. Similar dependencies result between processors, in which communication is required to pass boundary data downstream and whose cost is typically impacted by the slowest communication channel in use. In this work we develop a novel hierarchical wave-front approach that reduces the use of slower communications in the hierarchy, but at the cost of additional computation and higher use of on-chip communications. This tradeoff is explored using a performance model, and an implementation on the Petascale Roadrunner system demonstrates a 27% performance improvement at full system-scale on a kernel application. The approach is generally applicable to large-scale multi-core and accelerated systems where a differential in system communication performance exists.
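
    The wave-front dependency structure can be made concrete on a 2-D grid: cell (i, j) depends on (i-1, j) and (i, j-1), so cells on the same anti-diagonal are independent and can execute in parallel. The hierarchical approach reorganizes exactly this parallelism across the communication hierarchy.

```python
def wavefront_order(rows, cols):
    """Enumerate a wave-front sweep over a rows x cols grid.

    Returns a list of parallel steps; step d holds every cell on
    anti-diagonal i + j == d, all of whose upstream neighbors lie on
    earlier anti-diagonals and are therefore already processed.
    """
    steps = []
    for d in range(rows + cols - 1):
        steps.append([(i, d - i)
                      for i in range(max(0, d - cols + 1), min(rows, d + 1))])
    return steps
```

    The number of steps, rows + cols - 1, is the sweep's critical path; the width of each step bounds the available parallelism.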

  1. Design of a network for concurrent message passing systems

    NASA Astrophysics Data System (ADS)

    Song, Paul Y.

    1988-08-01

    We describe the design of the network design frame (NDF), a self-timed routing chip for a message-passing concurrent computer. The NDF uses a partitioned data path, low-voltage output drivers, and a distributed token-passing arbiter to provide a bandwidth of 450 Mbits/sec into the network. Wormhole routing and bidirectional virtual channels are used to provide low-latency communications: less than 2-μs latency to deliver a 216-bit message across the diameter of a 1K-node mesh-connected machine. To support concurrent software systems, the NDF provides two logical networks, one for user messages and one for system messages. The two networks share the same set of physical wires. To facilitate the development of network nodes, the NDF is a design frame. The NDF circuitry is integrated into the pad frame of a chip, leaving the center of the chip uncommitted. We define an analytic framework in which to study the effects of network size, network buffering capacity, bidirectional channels, and traffic on this class of networks. The response of the network to various combinations of these parameters is obtained through extensive simulation of the network model. Through simulation, we are able to observe the macro behavior of the network as opposed to the micro behavior of the NDF routing controller.

  2. Feasibility of optically interconnected parallel processors using wavelength division multiplexing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deri, R.J.; De Groot, A.J.; Haigh, R.E.

    1996-03-01

    New national security demands require enhanced computing systems for nearly ab initio simulations of extremely complex systems and for analyzing unprecedented quantities of remote sensing data. This computational performance is being sought using parallel processing systems, in which many less powerful processors are ganged together to achieve high aggregate performance. Such systems require increased capability to communicate information between individual processor and memory elements. As it is likely that the limited performance of today's electronic interconnects will prevent the system from achieving its ultimate performance, there is great interest in using fiber optic technology to improve interconnect communication. However, little information is available to quantify the requirements on fiber optical hardware technology for this application. Furthermore, we have sought to explore interconnect architectures that use the complete communication richness of the optical domain rather than using optics as a simple replacement for electronic interconnects. These considerations have led us to study the performance of a moderate-size parallel processor with optical interconnects using multiple optical wavelengths. We quantify the bandwidth, latency, and concurrency requirements which allow a bus-type interconnect to achieve scalable computing performance using up to 256 nodes, each operating at GFLOP performance. Our key conclusion is that scalable performance, to approximately 150 GFLOPS, is achievable for several scientific codes using an optical bus with a small number of WDM channels (8 to 32), only one WDM channel received per node, and achievable optoelectronic bandwidth and latency requirements. 21 refs., 10 figs.

  3. Manycast routing, modulation level and spectrum assignment over elastic optical networks

    NASA Astrophysics Data System (ADS)

    Luo, Xiao; Zhao, Yang; Chen, Xue; Wang, Lei; Zhang, Min; Zhang, Jie; Ji, Yuefeng; Wang, Huitao; Wang, Taili

    2017-07-01

    Manycast is a point-to-multipoint transmission framework that requires only a subset of the destination nodes to be successfully reached. It is particularly applicable for dealing with large amounts of data simultaneously in bandwidth-hungry, dynamic, cloud-based applications. With traffic in these applications increasing rapidly, elastic optical networks (EONs) can be relied on to achieve high-throughput manycast. Thanks to their finer spectrum granularity, EONs allow flexible access to the network spectrum and can efficiently provision exactly the spectrum resources each demand requires. In this paper, we focus on the manycast routing, modulation level and spectrum assignment (MA-RMLSA) problem in EONs. Both EON planning with static manycast traffic and EON provisioning with dynamic manycast traffic are investigated. An integer linear programming (ILP) model is formulated to solve the MA-RMLSA problem in the static manycast scenario. A corresponding heuristic, the manycast routing, modulation level and spectrum assignment genetic algorithm (MA-RMLSA-GA), is then proposed to handle both static and dynamic manycast scenarios. MA-RMLSA-GA jointly optimizes destination node selection, routing light-tree construction, modulation level allocation and spectrum resource assignment, achieving an effective improvement in network performance. Simulation results reveal that the MA-RMLSA strategies produced by MA-RMLSA-GA differ only slightly from the optimal solutions provided by the ILP model in the static scenario. Moreover, the results demonstrate that MA-RMLSA-GA realizes a highly efficient MA-RMLSA strategy with the lowest blocking probability in the dynamic scenario compared with benchmark algorithms.

  4. Si photonics technology for future optical interconnection

    NASA Astrophysics Data System (ADS)

    Zheng, Xuezhe; Krishnamoorthy, Ashok V.

    2011-12-01

    Scaling of computing systems requires ultra-efficient interconnects with large bandwidth density. Silicon photonics offers a disruptive solution with advantages in reach, energy efficiency, and bandwidth density. We review our progress in developing building blocks for ultra-efficient WDM silicon photonic links. Employing microsolder-based hybrid integration with low parasitics and high density, we optimize photonic devices on SOI platforms and VLSI circuits on more advanced bulk CMOS technology nodes independently. We have successively demonstrated single-channel hybrid silicon photonic transceivers at 5 Gbps and 10 Gbps, and an 80 Gbps arrayed WDM silicon photonic transceiver using reverse-biased depletion ring modulators and Ge waveguide photodetectors. Record energy efficiencies of less than 100 fJ/bit and 385 fJ/bit were achieved for the hybrid integrated transmitter and receiver, respectively. Waveguide-grating-based optical proximity couplers were developed with low loss and large optical bandwidth to enable multi-layer intra/inter-chip optical interconnects. Through thermal engineering of WDM devices by selective substrate removal, together with a WDM link design using a synthetic wavelength comb, we significantly improved device tuning efficiency and reduced the required tuning range. Using these innovative techniques, two orders of magnitude of tuning power reduction was achieved, and a tuning cost of only a few tens of fJ/bit is expected for high-data-rate WDM silicon photonic links.

  5. Using architecture information and real-time resource state to reduce power consumption and communication costs in parallel applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandt, James M.; Devine, Karen Dragon; Gentile, Ann C.

    2014-09-01

    As computer systems grow in both size and complexity, the need for applications and run-time systems to adjust to their dynamic environment also grows. The goal of the RAAMP LDRD was to combine static architecture information and real-time system state with algorithms to conserve power, reduce communication costs, and avoid network contention. We developed new data collection and aggregation tools to extract static hardware information (e.g., node/core hierarchy, network routing) as well as real-time performance data (e.g., CPU utilization, power consumption, memory bandwidth saturation, percentage of used bandwidth, number of network stalls). We created application interfaces that allowed this data to be used easily by algorithms. Finally, we demonstrated the benefit of integrating system and application information for two use cases. The first used real-time power consumption and memory bandwidth saturation data to throttle concurrency to save power without increasing application execution time. The second used static or real-time network traffic information to reduce or avoid network congestion by remapping MPI tasks to allocated processors. Results from our work are summarized in this report; more details are available in our publications [2, 6, 14, 16, 22, 29, 38, 44, 51, 54].

  6. Mathematical Modeling and Analysis of a Wide Bandwidth Bipolar Power Supply for the Fast Correctors in the APS Upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Byeong M.; Wang, Ju

    This paper presents the mathematical modeling and analysis of a wide bandwidth bipolar power supply for the fast correctors in the APS Upgrade. A wide bandwidth current regulator with a combined PI and phase-lead compensator has been newly proposed, analyzed, and simulated through both a mathematical model and a physical electronic circuit model using MATLAB and PLECS. The proposed regulator achieves a bandwidth with a -1.23 dB attenuation and a 32.40° phase delay at 10 kHz for a small signal less than 1% of the DC scale. The mathematical modeling, design, and simulation results of a fast corrector power supply control system are presented in this paper.

  7. Asynchronous Incremental Stochastic Dual Descent Algorithm for Network Resource Allocation

    NASA Astrophysics Data System (ADS)

    Bedi, Amrit Singh; Rajawat, Ketan

    2018-05-01

    Stochastic network optimization problems entail finding resource allocation policies that are optimal on average but must be designed in an online fashion. Such problems are ubiquitous in communication networks, where resources such as energy and bandwidth are divided among nodes to satisfy certain long-term objectives. This paper proposes an asynchronous incremental dual descent resource allocation algorithm that utilizes delayed stochastic gradients for carrying out its updates. The proposed algorithm is well-suited to heterogeneous networks as it allows the computationally-challenged or energy-starved nodes to, at times, postpone the updates. The asymptotic analysis of the proposed algorithm is carried out, establishing dual convergence under both constant and diminishing step sizes. It is also shown that with a constant step size, the proposed resource allocation policy is asymptotically near-optimal. An application involving multi-cell coordinated beamforming is detailed, demonstrating the usefulness of the proposed algorithm.
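
The mechanics of dual descent with delayed stochastic gradients can be illustrated on a toy bandwidth-sharing problem. The sketch below maximizes a weighted log-utility subject to a bandwidth budget; the delay buffer mimics nodes that postpone their updates. All constants (weights, budget, step size, delay, noise level) are illustrative assumptions, not figures from the paper.

```python
import random

# Toy dual descent with delayed stochastic gradients:
# maximize sum_i w_i*log(x_i) subject to sum_i x_i <= B.
random.seed(0)
w = [1.0, 2.0, 3.0]          # node utility weights (illustrative)
B = 6.0                      # total bandwidth budget
lam = 1.0                    # dual variable (the "price" of bandwidth)
step = 0.05                  # constant step size
delay = 3                    # gradient staleness, in iterations
grad_buffer = [0.0] * delay  # holds not-yet-applied constraint violations

for t in range(2000):
    # primal step: each node best-responds to the current price
    x = [wi / max(lam, 1e-6) for wi in w]
    # noisy constraint violation = stochastic dual subgradient
    g = sum(x) - B + random.gauss(0.0, 0.1)
    grad_buffer.append(g)
    g_delayed = grad_buffer.pop(0)          # apply a stale gradient
    # dual ascent with the delayed gradient, projected onto lam >= 0
    lam = max(lam + step * g_delayed, 0.0)

# at the optimum lam* = sum(w)/B = 1.0 and the budget is met with equality
print(f"price={lam:.2f}, total allocation={sum(x):.2f}")
```

Despite the three-iteration staleness, the price hovers near its optimal value, which is the qualitative behavior the convergence analysis above formalizes.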

  8. Minimal-delay traffic grooming for WDM star networks

    NASA Astrophysics Data System (ADS)

    Choi, Hongsik; Garg, Nikhil; Choi, Hyeong-Ah

    2003-10-01

    All-optical networks face the challenge of reducing slower opto-electronic conversions by intelligently managing the assignment of traffic streams to wavelengths, while at the same time utilizing bandwidth resources to the maximum. This challenge becomes harder in networks closer to the end users, where individual traffic streams carry too little data to saturate a single wavelength yet outnumber the usable wavelengths, so that streams must be groomed together, which requires costly traffic analysis at the access nodes. We study the problem of traffic grooming that reduces the need to analyze traffic, for a class of network architecture widely used in metropolitan area networks: the star network. The problem being NP-complete, we provide an efficient greedy heuristic with a twice-optimal bound that can be used to intelligently groom traffic at the LANs to reduce latency at the access nodes. Simulation results show that our greedy heuristic achieves a near-optimal solution.
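
The grooming idea itself is easy to sketch. The first-fit-decreasing packer below assigns sub-wavelength traffic streams to wavelengths of fixed capacity; it is a generic illustration of grooming as bin packing, not the paper's exact twice-optimal-bound heuristic.

```python
# First-fit-decreasing sketch of grooming sub-wavelength traffic streams
# onto wavelengths of fixed capacity (a generic illustration).

def groom(demands, capacity):
    """Pack traffic demands (in bandwidth units) onto wavelengths,
    returning a list of per-wavelength demand lists."""
    wavelengths = []
    for d in sorted(demands, reverse=True):  # largest streams first
        for wl in wavelengths:
            if sum(wl) + d <= capacity:      # first wavelength with room
                wl.append(d)
                break
        else:
            wavelengths.append([d])          # open a new wavelength
    return wavelengths

# six streams groomed onto wavelengths of capacity 48 (e.g., demands in OC-1
# units on OC-48 wavelengths; numbers are illustrative)
plan = groom([30, 24, 24, 12, 12, 6], capacity=48)
print(len(plan), "wavelengths:", plan)
```

First-fit decreasing is itself within a constant factor of optimal for this kind of packing, which is the flavor of guarantee the heuristic above provides.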

  9. Enhancing the transmission efficiency by edge deletion in scale-free networks

    NASA Astrophysics Data System (ADS)

    Zhang, Guo-Qing; Wang, Di; Li, Guo-Jie

    2007-07-01

    How to improve the transmission efficiency of Internet-like packet switching networks is one of the most important problems in complex networks as well as for the Internet research community. In this paper we propose a convenient method to enhance the transmission efficiency of scale-free networks dramatically by removing the edges linking to nodes with large betweenness, which we call the "black sheep." Our method is simple to apply and of practical importance. Since the black-sheep edges are very costly due to their large bandwidth requirements, our method can decrease cost as well as achieve higher network throughput. Moreover, we analyze the curve of the largest betweenness as more and more black-sheep edges are deleted and find that there is a sharp transition at the critical point where the average degree of the nodes ⟨k⟩→2.
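
The "black sheep" selection step can be approximated with plain breadth-first search. The sketch below estimates edge betweenness by counting how often each edge lies on a BFS shortest path between node pairs and then identifies the busiest edge; this single-path count is a crude stand-in for exact betweenness centrality, and the toy graph is illustrative.

```python
from collections import deque, defaultdict

# Crude edge-betweenness estimate: count, over all source-target pairs,
# how often each edge appears on one BFS shortest path.
def edge_load(adj):
    load = defaultdict(int)
    for s in adj:                          # BFS from every source node
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent:
                    parent[v] = u
                    q.append(v)
        for t in parent:                   # walk one shortest path back to s
            while parent[t] is not None:
                edge = tuple(sorted((t, parent[t])))
                load[edge] += 1
                t = parent[t]
    return load

# two stars bridged by edge (0, 1): the bridge carries every cross-star path
adj = {0: {1, 2, 3, 4}, 1: {0, 5, 6, 7}, 2: {0}, 3: {0}, 4: {0},
       5: {1}, 6: {1}, 7: {1}}
load = edge_load(adj)
busiest = max(load, key=load.get)          # the current black-sheep edge
print("black-sheep edge:", busiest)
```

Deleting the busiest edge and recomputing the loads, as the paper's procedure does repeatedly, traces out the betweenness curve discussed above.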

  10. A novel biasing dependent circuit model of resonant cavity enhanced avalanche photodetectors (RCE-APDs)

    NASA Astrophysics Data System (ADS)

    Abdelhamid, Mostafa R.; El-Batawy, Yasser M.; Deen, M. Jamal

    2018-02-01

    Resonant cavity enhanced photodetectors (RCE-PDs) overcome the trade-off between bandwidth and quantum efficiency found in conventional photodetectors: large bandwidth can be achieved using a thin absorption layer, while the resonant cavity allows multiple passes of light through the absorption layer, which boosts the quantum efficiency. In this paper, a complete bias-dependent model for the Resonant Cavity Enhanced-Separated Absorption Graded Charge Multiplication-Avalanche Photodetector (RCE-SAGCM-APD) is presented. The proposed model takes into account drift velocities other than the saturation velocity, thus capturing the effect on the photodetector's key design parameters such as gain, bandwidth, and gain-bandwidth product.

  11. Video bandwidth compression system

    NASA Astrophysics Data System (ADS)

    Ludington, D.

    1980-08-01

    The objective of this program was the development of a Video Bandwidth Compression brassboard model for use by the Air Force Avionics Laboratory, Wright-Patterson Air Force Base, in the evaluation of bandwidth compression techniques for use in tactical weapons and to aid in the selection of particular operational modes to be implemented in an advanced flyable model. The bandwidth compression system is partitioned into two major divisions: the encoder, which processes the input video with a compression algorithm and transmits the most significant information; and the decoder, where the compressed data is reconstructed into a video image for display.

  12. mpiGraph

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, Adam

    2007-05-22

    MpiGraph consists of an MPI application called mpiGraph, written in C, to measure message bandwidth, and an associated crunch_mpiGraph script, written in Perl, to process the application output into an HTML report. The mpiGraph application is designed to inspect the health and scalability of a high-performance interconnect while under heavy load. This is useful for detecting hardware and software problems in a system, such as slow nodes, links, switches, or contention in switch routing. It is also useful for characterizing how interconnect performance changes with different settings, or how one interconnect type compares to another.
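
The kind of post-processing such a tool performs can be sketched simply: given a send-bandwidth matrix (row = sender, column = receiver), flag the nodes whose average link speed falls well below the cluster median. The matrix values and the 80% threshold below are illustrative assumptions, not mpiGraph's actual report logic.

```python
import statistics

# Flag suspect nodes from a pairwise bandwidth matrix (MB/s).
def slow_nodes(bw, threshold=0.8):
    n = len(bw)
    avg = [sum(bw[i][j] for j in range(n) if j != i) / (n - 1)
           for i in range(n)]                       # mean send bandwidth
    median = statistics.median(avg)
    return [i for i in range(n) if avg[i] < threshold * median]

bw = [[0, 950, 940, 930],
      [940, 0, 960, 955],
      [400, 410, 0, 390],   # node 2: a consistently slow sender
      [945, 935, 950, 0]]
print("suspect nodes:", slow_nodes(bw))
```

Comparing each node against the cluster median rather than an absolute number keeps the check meaningful across interconnect types, which is the point of the comparison use case mentioned above.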

  13. Design and Analysis of Underwater Acoustic Networks with Reflected Links

    NASA Astrophysics Data System (ADS)

    Emokpae, Lloyd

    Underwater acoustic networks (UWANs) have applications in environmental state monitoring, oceanic profile measurements, leak detection in oil fields, distributed surveillance, and navigation. For these applications, sets of nodes are employed to collaboratively monitor an area of interest and track certain events or phenomena. In addition, it is common to find autonomous underwater vehicles (AUVs) acting as mobile sensor nodes that perform search-and-rescue missions, reconnaissance in combat zones, and coastal patrol. These AUVs work cooperatively to achieve a desired goal and thus need to be able to establish and sustain communication links in an ad-hoc manner in order to ensure some desired level of quality of service. Therefore, each node is required to adapt to environmental changes and be able to overcome broken communication links caused by external noise affecting the communication channel due to node mobility. In addition, since radio waves are quickly absorbed in the water medium, most underwater applications rely on acoustic (sound) rather than radio channels for mid-to-long-range communications. However, acoustic channels pose multiple challenging issues, most notably high transmission delay due to slow signal propagation and limited channel bandwidth due to high frequency attenuation. Moreover, the inhomogeneity of the water medium affects the sound speed profile, while surface and bottom reflections of the signal lead to multipath effects. In this dissertation, we address these networking challenges by developing protocols that take into consideration the underwater physical-layer dynamics. We begin by introducing a novel surface-based reflection scheme (SBR), which takes advantage of the multipath effects of the acoustic channel. SBR works by using reflections from the water surface, and bottom, to establish non-line-of-sight (NLOS) communication links.
SBR makes it possible to incorporate both line-of-sight (LOS) and NLOS links by utilizing directional antennas, which boost the signal-to-noise ratio (SNR) at the receiver while promoting NLOS usage. In our model, we employ a directional underwater acoustic antenna composed of an array of hydrophones whose outputs can be summed at various phases and amplitudes, resulting in a beamformer. We have also adopted a practical multimodal directional transducer concept which generates both directional and omni-directional beam patterns by combining the fundamental vibration modes of a cylindrical acoustic radiator. This allows the transducer to be electrically controlled and steered by simply adjusting the electrical voltage weights. A prototype acoustic modem is then developed to utilize the multimodal directional transducer for both LOS and NLOS communication. The acoustic modem has also served as a platform for empirically validating our SBR communication model in a tank. Networking protocols have been developed to exploit the SBR communication model. These protocols include node discovery and localization, directional medium access control (D-MAC), and geographical routing. In node discovery and localization, each node utilizes SBR-based range measurements to its neighbors to determine their relative positions. The D-MAC protocol utilizes directional antennas to increase network throughput through the spatial efficiency of the antenna model. In the proposed reflection-enabled directional MAC protocol (RED-MAC), each source node is able to determine whether an obstacle is blocking the LOS link to the destination and switch to the best NLOS link by utilizing surface/bottom reflections. Finally, we have developed a geographical routing algorithm which aims to establish the best stable route from a source node to a destination node. The optimized route is selected to achieve maximum network throughput.
Extensive analysis of the network throughput when utilizing directional antennas is also presented to show the benefits of directional communication on the overall network throughput.

  14. Collective input/output under memory constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Yin; Chen, Yong; Zhuang, Yu

    2014-12-18

    Compared with current high-performance computing (HPC) systems, exascale systems are expected to have much less memory per node, which can significantly reduce achievable collective input/output (I/O) performance. In this study, we introduce a memory-conscious collective I/O strategy that takes into account memory capacity and bandwidth constraints. The new strategy restricts aggregation data traffic within disjoint subgroups, coordinates I/O accesses in intranode and internode layers, and determines I/O aggregators at run time considering memory consumption among processes. We have prototyped the design and evaluated it with commonly used benchmarks to verify its potential. The evaluation results demonstrate that this strategy holds promise in mitigating memory pressure, alleviating contention for memory bandwidth, and improving I/O performance for projected extreme-scale systems. Given the importance of supporting increasingly data-intensive workloads and projected memory constraints on increasingly larger-scale HPC systems, this new memory-conscious collective I/O can have a significant positive impact on scientific discovery productivity.
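
The aggregator-selection idea can be illustrated with a small sketch: split processes into disjoint subgroups (here, consecutive ranks sharing a node) and pick the process with the most free memory in each subgroup as that subgroup's I/O aggregator. The grouping rule and memory figures are illustrative assumptions, not the paper's exact run-time policy.

```python
# Memory-aware aggregator selection per disjoint subgroup (illustrative).
def pick_aggregators(procs, group_size):
    """procs: list of (rank, free_mem_MB). Returns one aggregator rank
    per disjoint subgroup of group_size consecutive ranks."""
    aggregators = []
    for start in range(0, len(procs), group_size):
        group = procs[start:start + group_size]
        rank, _ = max(group, key=lambda p: p[1])  # most free memory wins
        aggregators.append(rank)
    return aggregators

# 8 ranks, 4 per node; free memory varies across ranks
procs = [(0, 1200), (1, 800), (2, 1500), (3, 900),
         (4, 700), (5, 1100), (6, 650), (7, 1300)]
print("aggregators:", pick_aggregators(procs, group_size=4))
```

Keeping aggregation traffic within each subgroup is what bounds the memory footprint of any single aggregator, which is the constraint the strategy above is designed around.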

  15. Silicon photonics plasma-modulators with advanced transmission line design.

    PubMed

    Merget, Florian; Azadeh, Saeed Sharif; Mueller, Juliana; Shen, Bin; Nezhad, Maziar P; Hauck, Johannes; Witzens, Jeremy

    2013-08-26

    We have investigated two novel concepts for the design of transmission lines in travelling-wave Mach-Zehnder interferometer based silicon photonics depletion modulators, overcoming the analog bandwidth limitations arising from cross-talk between signal lines in push-pull modulators and reducing the linear losses of the transmission lines. We experimentally validate the concepts and demonstrate an E/O -3 dBe bandwidth of 16 GHz with a 4 V drive voltage (in dual-drive configuration) and 8.8 dB on-chip insertion losses. Significant bandwidth improvements result from the suppression of cross-talk. An additional bandwidth enhancement of ~11% results from a reduction of resistive transmission line losses. Frequency-dependent loss models for loaded transmission lines and E/O bandwidth modeling are fully verified.

  16. Characterizing the In-Phase Reflection Bandwidth Theoretical Limit of Artificial Magnetic Conductors With a Transmission Line Model

    NASA Technical Reports Server (NTRS)

    Xie, Yunsong; Fan, Xin; Chen, Yunpeng; Wilson, Jeffrey D.; Simons, Rainee N.; Xiao, John Q.

    2013-01-01

    We validate through simulation and experiment that artificial magnetic conductors (AMCs) can be well characterized by a transmission line model. The theoretical bandwidth limit of the in-phase reflection can be expressed in terms of the effective RLC parameters from the surface patch and the properties of the substrate. It is found that the existence of effective inductive components will reduce the in-phase reflection bandwidth of the AMC. Furthermore, we propose design strategies to optimize AMC structures with an in-phase reflection bandwidth closer to the theoretical limit.

  17. The effect of bandwidth on filter instrument total ozone accuracy

    NASA Technical Reports Server (NTRS)

    Basher, R. E.

    1977-01-01

    The effect of the width and shape of the New Zealand filter instrument's passbands on measured total-ozone accuracy is determined using a numerical model of the spectral measurement process. The model enables the calculation of corrections for the 'bandwidth-effect' error and shows that highly attenuating passband skirts and well-suppressed leakage bands are at least as important as narrow half-bandwidths. Over typical ranges of airmass and total ozone, the range in the bandwidth-effect correction is about 2% in total ozone for the filter instrument, compared with about 1% for the Dobson instrument.

  18. Architecture and design of optical path networks utilizing waveband virtual links

    NASA Astrophysics Data System (ADS)

    Ito, Yusaku; Mori, Yojiro; Hasegawa, Hiroshi; Sato, Ken-ichi

    2016-02-01

    We propose a novel optical network architecture that uses waveband virtual links, each of which can carry several optical paths, to directly bridge distant node pairs. Future photonic networks should not only transparently cover extended areas but also expand fiber capacity. However, the traversal of many ROADM nodes impairs the optical signal due to spectrum narrowing. To suppress the degradation, the bandwidth of guard bands needs to be increased, which degrades fiber frequency utilization. Waveband granular switching allows us to apply broader pass-band filtering at ROADMs and to insert sufficient guard bands between wavebands with minimum frequency utilization offset. The scheme resolves the severe spectrum narrowing effect. Moreover, the guard band between optical channels in a waveband can be minimized, which increases the number of paths that can be accommodated per fiber. In the network, wavelength path granular routing is done without utilizing waveband virtual links, and it still suffers from spectrum narrowing. A novel network design algorithm that can bound the spectrum narrowing effect by limiting the number of hops (traversed nodes that need wavelength path level routing) is proposed in this paper. This algorithm dynamically changes the waveband virtual link configuration according to the traffic distribution variation, where optical paths that need many node hops are effectively carried by virtual links. Numerical experiments demonstrate that the number of necessary fibers is reduced by 23% compared with conventional optical path networks.

  19. Stability and performance of propulsion control systems with distributed control architectures and failures

    NASA Astrophysics Data System (ADS)

    Belapurkar, Rohit K.

    Future aircraft engine control systems will be based on a distributed architecture, in which, the sensors and actuators will be connected to the Full Authority Digital Engine Control (FADEC) through an engine area network. Distributed engine control architecture will allow the implementation of advanced, active control techniques along with achieving weight reduction, improvement in performance and lower life cycle cost. The performance of a distributed engine control system is predominantly dependent on the performance of the communication network. Due to the serial data transmission policy, network-induced time delays and sampling jitter are introduced between the sensor/actuator nodes and the distributed FADEC. Communication network faults and transient node failures may result in data dropouts, which may not only degrade the control system performance but may even destabilize the engine control system. Three different architectures for a turbine engine control system based on a distributed framework are presented. A partially distributed control system for a turbo-shaft engine is designed based on ARINC 825 communication protocol. Stability conditions and control design methodology are developed for the proposed partially distributed turbo-shaft engine control system to guarantee the desired performance under the presence of network-induced time delay and random data loss due to transient sensor/actuator failures. A fault tolerant control design methodology is proposed to benefit from the availability of an additional system bandwidth and from the broadcast feature of the data network. It is shown that a reconfigurable fault tolerant control design can help to reduce the performance degradation in presence of node failures. A T-700 turbo-shaft engine model is used to validate the proposed control methodology based on both single input and multiple-input multiple-output control design techniques.

  20. Parallel Application Performance on Two Generations of Intel Xeon HPC Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Christopher H.; Long, Hai; Sides, Scott

    2015-10-15

    Two next-generation node configurations hosting the Haswell microarchitecture were tested with a suite of microbenchmarks and application examples, and compared with a current Ivy Bridge production node on NREL's Peregrine high-performance computing cluster. A primary conclusion from this study is that the additional cores are of little value to individual task performance: limitations to application parallelism, or resource contention among concurrently running but independent tasks, limit effective utilization of these added cores. Hyperthreading generally impacts throughput negatively, but can improve performance in the absence of detailed attention to runtime workflow configuration. The observations offer some guidance for procurement of future HPC systems at NREL. First, raw core count must be balanced with available resources, particularly memory bandwidth; balance of system will determine value more than processor capability alone. Second, hyperthreading continues to be largely irrelevant to the workloads that are commonly seen, and were tested here, at NREL. Finally, perhaps the most impactful enhancement to productivity might come from enabling multiple concurrent jobs per node. Given the right type and size of workload, more may be achieved by doing many slow things at once than fast things in sequence.

  1. Flexible-rate optical packet generation/detection and label swapping for optical label switching networks

    NASA Astrophysics Data System (ADS)

    Wu, Zhongying; Li, Juhao; Tian, Yu; Ge, Dawei; Zhu, Paikun; Chen, Yuanxiang; Chen, Zhangyuan; He, Yongqi

    2017-03-01

    In recent years, optical label switching (OLS) has gained much attention due to its intrinsic ability to implement packet switching transparent to protocol, bit rate, granularity, and data format. In this paper, we propose a novel scheme to realize flexible-rate optical packet switching for OLS networks. At the transmitter node, a flexible-rate packet is generated by modulating in parallel different combinations of optical carriers produced by an optical multi-carrier generator (OMCG), among which one carrier is occupied by the low-speed optical label. At the switching node, the label is extracted and regenerated in a label processing unit (LPU). The payloads are switched based on the routing information, and a new label is added after switching. At the receiver node, another OMCG serves as the local oscillators (LOs) for coherent detection of the optical payloads. The proposed scheme offers good flexibility for dynamic optical packet switching by adjusting the payload bandwidth and can also effectively reduce the number of lasers, modulators, and receivers needed for packet generation/detection. We present proof-of-concept demonstrations of flexible-rate packet generation/detection and label swapping on a 12.5 GHz grid. The influence of crosstalk on cascaded label swapping is also investigated.

  2. Performance of Low-Density Parity-Check Coded Modulation

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2011-02-01

    This article presents the simulated performance of a family of nine AR4JA low-density parity-check (LDPC) codes when used with each of five modulations. In each case, the decoder inputs are code-bit log-likelihood ratios computed from the received (noisy) modulation symbols using a general formula which applies to arbitrary modulations. Suboptimal soft-decision and hard-decision demodulators are also explored. Bit interleaving and various mappings of bits to modulation symbols are considered. A number of subtle decoder algorithm details are shown to affect performance, especially in the error floor region. Among these are quantization dynamic range and step size, clipping of degree-one variable nodes, "Jones clipping" of variable nodes, approximations of the min* function, and partial hard-limiting of messages from check nodes. Using these decoder optimizations, all coded modulations simulated here are free of error floors down to codeword error rates below 10^-6. The purpose of generating this performance data is to aid system engineers in determining an appropriate code and modulation to use under specific power and bandwidth constraints, and to provide information needed to design a variable/adaptive coded modulation (VCM/ACM) system using the AR4JA codes.
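
The min* function mentioned among the decoder details has a standard closed form: min*(a, b) = -ln(e^-a + e^-b) = min(a, b) - ln(1 + e^-|a-b|). The sketch below computes it exactly and with a simple offset approximation; the 0.375 offset and 2.0 threshold are illustrative values of the common offset-min-sum idea, not the article's specific choices.

```python
import math

def min_star(a, b):
    """Exact min*: min(a, b) minus the log-domain correction term."""
    return min(a, b) - math.log1p(math.exp(-abs(a - b)))

def min_star_approx(a, b):
    """Offset min-sum: a constant correction in place of the log term
    (offset and threshold are illustrative)."""
    return min(a, b) - (0.375 if abs(a - b) < 2.0 else 0.0)

exact = min_star(1.0, 1.5)
approx = min_star_approx(1.0, 1.5)
print(f"exact={exact:.3f}  approx={approx:.3f}")
```

Replacing the log correction with a constant (or dropping it entirely, giving plain min-sum) is exactly the kind of approximation whose effect on the error floor the article quantifies.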

  3. Game Theory-Based Cooperation for Underwater Acoustic Sensor Networks: Taxonomy, Review, Research Challenges and Directions

    PubMed Central

    Muhammed, Dalhatu; Anisi, Mohammad Hossein; Vargas-Rosales, Cesar; Khan, Anwar

    2018-01-01

    Exploring and monitoring the underwater world using underwater sensors is drawing a lot of attention these days. In this field, cooperation between acoustic sensor nodes has been a critical problem due to challenging features such as acoustic channel (sound signal) failure, the long propagation delay of acoustic signals, limited bandwidth, and loss of connectivity. Several methods have been proposed to improve cooperation between the nodes by incorporating information theory and game theory into the nodes' cooperation. However, there is a need to classify the existing works and demonstrate their performance in addressing the cooperation issue. In this paper, we have conducted a review to investigate various factors affecting cooperation in underwater acoustic sensor networks. We study various cooperation techniques used for underwater acoustic sensor networks from different perspectives, with a concentration on communication reliability, energy consumption, and security, and present a taxonomy for underwater cooperation. Moreover, we further review how game theory can be applied to make the nodes cooperate with each other. We further analyze different cooperative game methods, comparing their performance on different metrics. Finally, open issues and future research directions in underwater acoustic sensor networks are highlighted. PMID:29389874

  4. TreeMAC: Localized TDMA MAC protocol for real-time high-data-rate sensor networks

    USGS Publications Warehouse

    Song, W.-Z.; Huang, R.; Shirazi, B.; Husent, R.L.

    2009-01-01

    Earlier sensor network MAC protocols focus on energy conservation in low-duty-cycle applications, while some recent applications involve real-time high-data-rate signals. This motivates us to design an innovative localized TDMA MAC protocol to achieve high throughput and low congestion in data collection sensor networks, in addition to energy conservation. TreeMAC divides a time cycle into frames and each frame into slots. A parent determines its children's frame assignment based on their relative bandwidth demand, and each node calculates its own slot assignment based on its hop count to the sink. This innovative 2-dimensional frame-slot assignment algorithm has the following nice theoretical properties. Firstly, for any node, at any time slot, there is at most one active sender in its neighborhood (including itself). Secondly, the packet scheduling with TreeMAC is bufferless, which therefore minimizes the probability of network congestion. Thirdly, the data throughput to the gateway is at least 1/3 of the optimum assuming reliable links. Our experiments on a 24-node testbed demonstrate that the TreeMAC protocol significantly improves network throughput and energy efficiency compared to TinyOS's default CSMA MAC protocol and a recent TDMA MAC protocol, Funneling-MAC [8]. © 2009 IEEE.
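
The two-dimensional frame-slot assignment described above can be sketched as two small functions: a parent splits its frames among children in proportion to bandwidth demand, and a node's transmit slot follows from its hop count to the sink. The proportional-split rule, the 3-slot frame, and the example tree are illustrative simplifications of TreeMAC's actual scheme.

```python
# Toy sketch of a TreeMAC-style 2-D schedule (illustrative simplification).
SLOTS_PER_FRAME = 3

def assign_frames(frames, demands):
    """Split a contiguous range of frames among children in proportion
    to their bandwidth demand; the last child absorbs rounding error.
    Returns {child: (first_frame, end_frame)} half-open ranges."""
    total = sum(demands.values())
    shares, start = {}, 0
    children = list(demands)
    for i, child in enumerate(children):
        n = round(frames * demands[child] / total)
        if i == len(children) - 1:
            n = frames - start            # last child absorbs rounding
        shares[child] = (start, start + n)
        start += n
    return shares

def slot_of(hop_count):
    """A node transmits in the slot given by its hop count to the sink."""
    return hop_count % SLOTS_PER_FRAME

# sink has 12 frames per cycle; children A and B relay 2:1 traffic volumes
print(assign_frames(12, {"A": 8, "B": 4}))
print("slot for a node 4 hops from the sink:", slot_of(4))
```

The hop-count-modulo rule is what keeps at most one active sender per neighborhood per slot, and the demand-proportional frame split is what lets the schedule stay bufferless along the tree.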

  5. An Optimized Autonomous Space In-situ Sensorweb (OASIS) for Volcano Monitoring

    NASA Astrophysics Data System (ADS)

    Song, W.; Shirazi, B.; Lahusen, R.; Chien, S.; Kedar, S.; Webb, F.

    2006-12-01

    In response to NASA's announced requirement for Earth hazard monitoring sensor-web technology, we are developing a prototype real-time Optimized Autonomous Space In-situ Sensorweb. The prototype will be focused on volcano hazard monitoring at Mount St. Helens, which has been in continuous eruption since October 2004. The system is designed to be flexible and easily configurable for many other applications as well. The primary goals of the project are: 1) integrating complementary space (i.e., Earth Observing One (EO-1) satellite) and in-situ (ground-based) elements into an interactive, autonomous sensor-web; 2) advancing sensor-web power and communication resource management technology; and 3) enabling scalability for seamless infusion of future space and in-situ assets into the sensor-web. To meet these goals, we are developing: 1) a test-bed in-situ array with smart sensor nodes capable of making autonomous data acquisition decisions; 2) an efficient self-organization algorithm for the sensor-web topology to support efficient data communication and command control; 3) smart bandwidth allocation algorithms in which sensor nodes autonomously determine packet priorities based on mission needs and local bandwidth information in real-time; and 4) remote network management and reprogramming tools. The space and in-situ control components of the system will be integrated such that each element is capable of triggering the other. Sensor-web data acquisition and dissemination will be accomplished through the use of SensorML language standards for geospatial information. The three-year project will demonstrate end-to-end system performance with the in-situ test-bed at Mount St. Helens and NASA's EO-1 platform.

  6. Optimized Autonomous Space In-situ Sensor-Web for volcano monitoring

    USGS Publications Warehouse

    Song, W.-Z.; Shirazi, B.; Kedar, S.; Chien, S.; Webb, F.; Tran, D.; Davis, A.; Pieri, D.; LaHusen, R.; Pallister, J.; Dzurisin, D.; Moran, S.; Lisowski, M.

    2008-01-01

    In response to NASA's announced requirement for Earth hazard monitoring sensor-web technology, a multidisciplinary team involving sensor-network experts (Washington State University), space scientists (JPL), and Earth scientists (USGS Cascade Volcano Observatory (CVO)), is developing a prototype dynamic and scalable hazard monitoring sensor-web and applying it to volcano monitoring. The combined Optimized Autonomous Space In-situ Sensor-web (OASIS) will have two-way communication capability between ground and space assets, use both space and ground data for optimal allocation of limited power and bandwidth resources on the ground, and use smart management of competing demands for limited space assets. It will also enable scalability and seamless infusion of future space and in-situ assets into the sensor-web. The prototype will be focused on volcano hazard monitoring at Mount St. Helens, which has been active since October 2004. The system is designed to be flexible and easily configurable for many other applications as well. The primary goals of the project are: 1) integrating complementary space (i.e., Earth Observing One (EO-1) satellite) and in-situ (ground-based) elements into an interactive, autonomous sensor-web; 2) advancing sensor-web power and communication resource management technology; and 3) enabling scalability for seamless infusion of future space and in-situ assets into the sensor-web. To meet these goals, we are developing: 1) a test-bed in-situ array with smart sensor nodes capable of making autonomous data acquisition decisions; 2) an efficient self-organization algorithm for the sensor-web topology to support efficient data communication and command control; 3) smart bandwidth allocation algorithms in which sensor nodes autonomously determine packet priorities based on mission needs and local bandwidth information in real-time; and 4) remote network management and reprogramming tools. The space and in-situ control components of the system will be integrated such that each element is capable of autonomously tasking the other. Sensor-web data acquisition and dissemination will be accomplished through the use of the Open Geospatial Consortium Sensorweb Enablement protocols. The three-year project will demonstrate end-to-end system performance with the in-situ test-bed at Mount St. Helens and NASA's EO-1 platform. © 2008 IEEE.

  7. An Online Scheduling Algorithm with Advance Reservation for Large-Scale Data Transfers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balman, Mehmet; Kosar, Tevfik

    Scientific applications and experimental facilities generate massive data sets that need to be transferred to remote collaborating sites for sharing, processing, and long-term storage. In order to support increasingly data-intensive science, next-generation research networks have been deployed to provide high-speed on-demand data access between collaborating institutions. In this paper, we present a practical model for online data scheduling in which data movement operations are scheduled in advance for end-to-end high-performance transfers. In our model, the data scheduler interacts with reservation managers and data transfer nodes in order to reserve available bandwidth to guarantee completion of jobs that are accepted and confirmed to satisfy the preferred time constraint given by the user. Our methodology improves current systems by allowing researchers and higher-level meta-schedulers to use data placement as a service, where they can plan ahead and reserve scheduler time in advance for their data movement operations. We have implemented our algorithm and examined possible techniques for incorporation into current reservation frameworks. Performance measurements confirm that the proposed algorithm is efficient and scalable.
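    The admission logic behind advance reservation — accept a transfer only if the requested bandwidth fits under link capacity for its whole window and finishes by the user's deadline — can be sketched as follows. `earliest_start` and its event-time model are illustrative assumptions, not the paper's algorithm.

    ```python
    def usage(reservations, t):
        """Total bandwidth reserved on the link at time t; reservations are
        (start, end, bandwidth) triples over half-open intervals [start, end)."""
        return sum(bw for s, e, bw in reservations if s <= t < e)

    def earliest_start(reservations, capacity, need_bw, duration, deadline):
        """Earliest start at which a transfer needing `need_bw` for `duration`
        fits under `capacity` and finishes by `deadline`; None if the request
        cannot be confirmed.  Usage is piecewise-constant, so it suffices to
        test candidate starts and sample points at reservation boundaries."""
        events = sorted({0, deadline} | {t for s, e, _ in reservations for t in (s, e)})
        for start in events:
            if start + duration > deadline:
                break
            window = [t for t in events if start <= t < start + duration]
            if all(usage(reservations, t) + need_bw <= capacity for t in window + [start]):
                return start
        return None
    ```

    A scheduler built on such a check can confirm jobs up front, which is the "data placement as a service" planning guarantee the abstract describes.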

  8. Theoretical and Empirical Comparison of Big Data Image Processing with Apache Hadoop and Sun Grid Engine.

    PubMed

    Bao, Shunxing; Weitendorf, Frederick D; Plassard, Andrew J; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A

    2017-02-11

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging.

  9. Theoretical and empirical comparison of big data image processing with Apache Hadoop and Sun Grid Engine

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.

    2017-03-01

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and nonrelevant for medical imaging.

  10. OTACT: ONU Turning with Adaptive Cycle Times in Long-Reach PONs

    NASA Astrophysics Data System (ADS)

    Zare, Sajjad; Ghaffarpour Rahbar, Akbar

    2015-01-01

    With the expansion of PON networks into Long-Reach PON (LR-PON) networks, high propagation delay degrades the efficiency of centralized bandwidth allocation algorithms. This is because these algorithms rely on bandwidth negotiation messages frequently exchanged between the optical line terminal (OLT) in the central office and the optical network units (ONUs) near the users, which become seriously delayed when the network is extended. To solve this problem, some decentralized algorithms have been proposed based on bandwidth negotiation messages frequently exchanged between the Remote Node (RN)/Local Exchange (LX) and the ONUs near the users. These still incur relatively high delay, since the distances between the RN/LX and the ONUs are relatively large and control messages must travel twice between ONUs and the RN/LX in order to go from one ONU to another. In this paper, we propose a novel framework, called ONU Turning with Adaptive Cycle Times (OTACT), that uses Power Line Communication (PLC) to connect two adjacent ONUs. Since population density is high in urban areas, ONUs are close to each other, and thus the efficiency of the proposed method is high. We investigate the performance of the proposed scheme against other decentralized schemes under worst-case conditions. Simulation results show that the average upstream packet delay can be decreased under the proposed scheme.

  11. Service models and realization of differentiated services networks

    NASA Astrophysics Data System (ADS)

    Elizondo, Antonio J.; Garcia Osma, Maria L.; Einsiedler, Hans J.; Roth, Rudolf; Smirnov, Michael I.; Bartoli, Maurizio; Castelli, Paolo; Varga, Balazs; Krampell, Magnus

    2001-07-01

    Internet Service Providers need to offer Quality of Service (QoS) to fulfil the requirements of their customers' applications. Moreover, in a competitive market environment costs must be low. The selected service model must be effective and low in complexity, but it should still provide high quality and service differentiation, which the current Internet is not yet capable of supporting. The Differentiated Services (DiffServ) architecture has been proposed for enabling a range of different Classes of Service (CoS). In the EURESCOM project P1006, several European service providers co-operated to examine various aspects involved in the introduction of service differentiation using the DiffServ approach. The project explored a set of service models for Expedited Forwarding (EF) and Assured Forwarding (AF) and identified requirements for network nodes. We also addressed measurement, charging and accounting issues. Special attention has been devoted to the requirements of elastic traffic, which adapts its sending rate to the congestion state and available bandwidth. QoS mechanisms must be Transmission Control Protocol (TCP) friendly. TCP performance degrades under multiple losses. Since RED-based queue management may still cause multiple discards, a modified marking scheme called Capped Leaky Bucket is proposed to improve the performance of elastic applications.

  12. INTEGRATED MONITORING HARDWARE DEVELOPMENTS AT LOS ALAMOS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. PARKER; J. HALBIG; ET AL

    1999-09-01

    The hardware of the integrated monitoring system supports a family of instruments having a common internal architecture and firmware. Instruments can be easily configured from application-specific personality boards combined with common master-processor and high- and low-voltage power supply boards, and basic operating firmware. The instruments are designed to function autonomously to survive power and communication outages and to adapt to changing conditions. The personality boards allow measurement of gross gammas and neutrons, neutron coincidence and multiplicity, and gamma spectra. In addition, the Intelligent Local Node (ILON) provides a moderate-bandwidth network to tie together instruments, sensors, and computers.

  13. An Atmospheric General Circulation Model with Chemistry for the CRAY T3E: Design, Performance Optimization and Coupling to an Ocean Model

    NASA Technical Reports Server (NTRS)

    Farrara, John D.; Drummond, Leroy A.; Mechoso, Carlos R.; Spahr, Joseph A.

    1998-01-01

    The design, implementation and performance optimization on the CRAY T3E of an atmospheric general circulation model (AGCM) which includes the transport of, and chemical reactions among, an arbitrary number of constituents is reviewed. The parallel implementation is based on a two-dimensional (longitude and latitude) data domain decomposition. Initial optimization efforts centered on minimizing the impact of substantial static and weakly-dynamic load imbalances among processors through load redistribution schemes. Recent optimization efforts have centered on single-node optimization. Strategies employed include loop unrolling, both manually and through the compiler, the use of an optimized assembler-code library for special function calls, and restructuring of parts of the code to improve data locality. Data exchanges and synchronizations involved in coupling different data-distributed models can account for a significant fraction of the running time. Therefore, the required scattering and gathering of data must be optimized. In systems such as the T3E, there is much more aggregate bandwidth in the total system than in any particular processor. This suggests a distributed design. The design and implementation of such a distributed 'Data Broker' as a means to efficiently couple the components of our climate system model are described.

  14. Time-Series Forecast Modeling on High-Bandwidth Network Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Wucherl; Sim, Alex

    With the increasing number of geographically distributed scientific collaborations and the growing sizes of scientific data, it has become challenging for users to achieve the best possible network performance on a shared network. In this paper, we develop a model to forecast expected bandwidth utilization on high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and scheduling of data movements on high-bandwidth networks to accommodate ever-increasing data volumes for large-scale scientific data applications. A univariate time-series forecast model is developed with the Seasonal decomposition of Time series by Loess (STL) and the AutoRegressive Integrated Moving Average (ARIMA) on Simple Network Management Protocol (SNMP) path utilization measurement data. Compared with a traditional approach such as the Box-Jenkins methodology to train the ARIMA model, our forecast model reduces computation time by up to 92.6%. It also shows resilience against abrupt network usage changes. Finally, our forecast model conducts a large number of multi-step forecasts, and the forecast errors are within the mean absolute deviation (MAD) of the monitored measurements.

  15. Time-Series Forecast Modeling on High-Bandwidth Network Measurements

    DOE PAGES

    Yoo, Wucherl; Sim, Alex

    2016-06-24

    With the increasing number of geographically distributed scientific collaborations and the growing sizes of scientific data, it has become challenging for users to achieve the best possible network performance on a shared network. In this paper, we develop a model to forecast expected bandwidth utilization on high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and scheduling of data movements on high-bandwidth networks to accommodate ever-increasing data volumes for large-scale scientific data applications. A univariate time-series forecast model is developed with the Seasonal decomposition of Time series by Loess (STL) and the AutoRegressive Integrated Moving Average (ARIMA) on Simple Network Management Protocol (SNMP) path utilization measurement data. Compared with a traditional approach such as the Box-Jenkins methodology to train the ARIMA model, our forecast model reduces computation time by up to 92.6%. It also shows resilience against abrupt network usage changes. Finally, our forecast model conducts a large number of multi-step forecasts, and the forecast errors are within the mean absolute deviation (MAD) of the monitored measurements.
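    The decompose-then-forecast idea underlying the STL+ARIMA model can be illustrated in miniature. Production implementations of both components exist in `statsmodels` (`STL`, `ARIMA`); the dependency-free toy below substitutes a naive additive decomposition and a last-level forecast, purely to show the structure, and is not the paper's model.

    ```python
    def seasonal_decompose_naive(series, period):
        """Toy additive decomposition: per-phase seasonal means (zero-centred)
        plus a deseasonalized remainder.  Illustrative stand-in for STL."""
        seasonal = [0.0] * period
        counts = [0] * period
        for i, x in enumerate(series):
            seasonal[i % period] += x
            counts[i % period] += 1
        seasonal = [s / c for s, c in zip(seasonal, counts)]
        mean = sum(series) / len(series)
        seasonal = [s - mean for s in seasonal]      # zero-centre the seasonal part
        deseason = [x - seasonal[i % period] for i, x in enumerate(series)]
        return seasonal, deseason

    def forecast(series, period, steps):
        """Multi-step forecast = last deseasonalized level + seasonal component
        (where the paper would fit ARIMA to the deseasonalized series)."""
        seasonal, deseason = seasonal_decompose_naive(series, period)
        level = sum(deseason[-period:]) / period
        n = len(series)
        return [level + seasonal[(n + k) % period] for k in range(steps)]
    ```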

  16. Space information technologies: future agenda

    NASA Astrophysics Data System (ADS)

    Flournoy, Don M.

    2005-11-01

    Satellites will operate more like wide area broadband computer networks in the 21st Century. Space-based information and communication technologies will therefore be a lot more accessible and functional for the individual user. These developments are the result of earth-based telecommunication and computing innovations being extended to space. The author predicts that the broadband Internet will eventually be available on demand to users of terrestrial networks wherever they are. Earth and space communication assets will be managed as a single network. Space networks will assure that online access is ubiquitous. No matter whether users are located in cities or in remote locations, they will always be within reach of a node on the Internet. Even today, scalable bandwidth can be delivered to active users when moving around in vehicles on the ground, or aboard ships at sea or in the air. Discussion of the innovative technologies produced by NASA's Advanced Communications Technology Satellite (1993-2004) demonstrates future capabilities of satellites that make them uniquely suited to serve as nodes on the broadband Internet.

  17. Lightweight filter architecture for energy efficient mobile vehicle localization based on a distributed acoustic sensor network.

    PubMed

    Kim, Keonwook

    2013-08-23

    The generic properties of an acoustic signal provide numerous benefits for localization by applying energy-based methods over a deployed wireless sensor network (WSN). However, the signal generated by a stationary target utilizes a significant amount of bandwidth and power in the system without providing further position information. For vehicle localization, this paper proposes a novel proximity velocity vector estimator (PVVE) node architecture in order to capture the energy from a moving vehicle and reject the signal from motionless automobiles around the WSN node. A cascade structure between analog envelope detector and digital exponential smoothing filter presents the velocity vector-sensitive output with low analog circuit and digital computation complexity. The optimal parameters in the exponential smoothing filter are obtained by analytical and mathematical methods for maximum variation over the vehicle speed. For stationary targets, the derived simulation based on the acoustic field parameters demonstrates that the system significantly reduces the communication requirements with low complexity and can be expected to extend the operation time considerably.
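    The digital half of the cascade is a standard first-order exponential smoothing filter, y[n] = alpha*x[n] + (1 - alpha)*y[n-1]. A minimal sketch (the paper's analytical choice of the optimal alpha is not reproduced here):

    ```python
    def exp_smooth(samples, alpha):
        """First-order exponential smoothing of the envelope-detector output:
        y[n] = alpha * x[n] + (1 - alpha) * y[n-1], with y[-1] = 0."""
        y, out = 0.0, []
        for x in samples:
            y = alpha * x + (1 - alpha) * y
            out.append(y)
        return out
    ```

    A stationary source yields a settling, near-constant output, while a passing vehicle's rising-and-falling envelope keeps the output varying; the paper selects alpha to maximize that variation over vehicle speed.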

  18. Intelligent self-organization methods for wireless ad hoc sensor networks based on limited resources

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2006-05-01

    A wireless ad hoc sensor network (WSN) is a configuration for area surveillance that affords rapid, flexible deployment in arbitrary threat environments. There is no infrastructure support, and sensor nodes communicate with each other only when they are in transmission range. To a greater degree than the terminals found in mobile ad hoc networks (MANETs) for communications, sensor nodes are resource-constrained, with limited computational processing, bandwidth, memory, and power, and are typically unattended once in operation. Consequently, the level of information exchange among nodes needed to support complex adaptive algorithms for establishing network connectivity and optimizing throughput not only depletes those limited resources and creates high overhead in narrowband communications, but also increases network vulnerability to eavesdropping by malicious nodes. Cooperation among nodes, critical to the mission of sensor networks, can thus be disrupted by an inappropriate choice of self-organization method. Recent published contributions to the self-configuration of ad hoc sensor networks, e.g., self-organizing mapping and swarm intelligence techniques, have been based on the adaptive control of the cross-layer interactions found in MANET protocols to achieve one or more performance objectives: connectivity, intrusion resistance, power control, throughput, and delay. However, few studies have examined the performance of these algorithms when implemented with the limited resources of WSNs. In this paper, self-organization algorithms for the initiation, operation and maintenance of a network topology from a collection of wireless sensor nodes are proposed that improve the performance metrics significant to WSNs. The intelligent algorithm approach emphasizes low computational complexity, energy efficiency and robust adaptation to change, allowing distributed implementation with the actual limited resources of the cooperative nodes of the network. 
Extensions of the algorithms from flat topologies to two-tier hierarchies of sensor nodes are presented. Results from a few simulations of the proposed algorithms are compared to the published results of other approaches to sensor network self-organization in common scenarios. The estimated network lifetime and extent under static resource allocations are computed.

  19. A Reliable Data Transmission Model for IEEE 802.15.4e Enabled Wireless Sensor Network under WiFi Interference.

    PubMed

    Sahoo, Prasan Kumar; Pattanaik, Sudhir Ranjan; Wu, Shih-Lin

    2017-06-07

    The IEEE 802.15.4e standard proposes Medium Access Control (MAC) to support collision-free wireless channel access mechanisms for industrial, commercial and healthcare applications. However, unnecessary energy and bandwidth consumption occurs due to inefficient backoff management and collisions. In this paper, a new channel access mechanism is designed for buffer-constrained sensor devices to reduce the packet drop rate, energy consumption and collisions. In order to avoid collisions due to the hidden terminal problem, a new frame structure is designed for the data transmission. A new superframe structure is proposed to mitigate the problems due to WiFi and ZigBee interference. A modified superframe structure with a new retransmission opportunity for failed devices is proposed to reduce collisions and retransmission delay with high reliability. Performance evaluation and validation of our scheme indicate that the packet drop rate, throughput, reliability, energy consumption and average delay of the nodes can be improved significantly.

  20. A network-based training environment: a medical image processing paradigm.

    PubMed

    Costaridou, L; Panayiotakis, G; Sakellaropoulos, P; Cavouras, D; Dimopoulos, J

    1998-01-01

    The capability of interactive multimedia and Internet technologies is investigated with respect to the implementation of a distance learning environment. The system is built according to a client-server architecture, based on the Internet infrastructure, composed of server nodes conceptually modelled as WWW sites. Sites are implemented by customization of available components. The environment integrates network-delivered interactive multimedia courses, network-based tutoring, SIG support, information databases of professional interest, as well as course and tutoring management. This capability has been demonstrated by means of an implemented system, validated with digital image processing content, specifically image enhancement. Image enhancement methods are theoretically described and applied to mammograms. Emphasis is given to the interactive presentation of the effects of algorithm parameters on images. The system end-user access depends on available bandwidth, so high-speed access can be achieved via LAN or local ISDN connections. Network based training offers new means of improved access and sharing of learning resources and expertise, as promising supplements in training.

  1. DECHADE: DEtecting slight Changes with HArd DEcisions in Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Ciuonzo, D.; Salvo Rossi, P.

    2018-07-01

    This paper focuses on the problem of change detection through a Wireless Sensor Network (WSN) whose nodes report only binary decisions (on the presence/absence of a certain event to be monitored), due to bandwidth/energy constraints. The resulting problem can be modelled as testing the equality of samples drawn from independent Bernoulli probability mass functions, when the bit probabilities under both hypotheses are not known. Both One-Sided (OS) and Two-Sided (TS) tests are considered, with reference to: (i) identical bit probability (a homogeneous scenario), (ii) different per-sensor bit probabilities (a non-homogeneous scenario) and (iii) regions with identical bit probability (a block-homogeneous scenario) for the observed samples. The goal is to provide a systematic framework collecting a plethora of viable detectors (designed via theoretically founded criteria) which can be used for each instance of the problem. Finally, verification of the derived detectors in two relevant WSN-related problems is provided to show the appeal of the proposed framework.
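    The equality-of-Bernoulli-samples formulation above can be illustrated with a generalized likelihood-ratio statistic for two groups of hard decisions. This is a minimal sketch of the testing idea, not one of the paper's specific OS/TS detectors, whose exact forms are not given in the abstract.

    ```python
    from math import log

    def _llh(k, n, p):
        """Bernoulli log-likelihood of k successes in n trials; 0*log(0) := 0."""
        out = 0.0
        if k > 0:
            out += k * log(p)
        if k < n:
            out += (n - k) * log(1 - p)
        return out

    def bernoulli_glrt(k1, n1, k2, n2):
        """Two-sided generalized likelihood-ratio statistic for H0: p1 == p2
        against H1: p1 != p2, from per-group counts of 1-bit sensor decisions.
        Large values favour a change between the two groups."""
        p0 = (k1 + k2) / (n1 + n2)        # pooled MLE under H0
        p1, p2 = k1 / n1, k2 / n2         # per-group MLEs under H1
        return 2 * (_llh(k1, n1, p1) + _llh(k2, n2, p2)
                    - _llh(k1, n1, p0) - _llh(k2, n2, p0))
    ```

    Identical empirical bit probabilities give a statistic of zero; strongly differing ones give a large positive value, matching the change-detection intuition.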

  2. A Reliable Data Transmission Model for IEEE 802.15.4e Enabled Wireless Sensor Network under WiFi Interference

    PubMed Central

    Sahoo, Prasan Kumar; Pattanaik, Sudhir Ranjan; Wu, Shih-Lin

    2017-01-01

    The IEEE 802.15.4e standard proposes Medium Access Control (MAC) to support collision-free wireless channel access mechanisms for industrial, commercial and healthcare applications. However, unnecessary energy and bandwidth consumption occurs due to inefficient backoff management and collisions. In this paper, a new channel access mechanism is designed for buffer-constrained sensor devices to reduce the packet drop rate, energy consumption and collisions. In order to avoid collisions due to the hidden terminal problem, a new frame structure is designed for the data transmission. A new superframe structure is proposed to mitigate the problems due to WiFi and ZigBee interference. A modified superframe structure with a new retransmission opportunity for failed devices is proposed to reduce collisions and retransmission delay with high reliability. Performance evaluation and validation of our scheme indicate that the packet drop rate, throughput, reliability, energy consumption and average delay of the nodes can be improved significantly. PMID:28590434

  3. Data oriented job submission scheme for the PHENIX user analysis in CCJ

    NASA Astrophysics Data System (ADS)

    Nakamura, T.; En'yo, H.; Ichihara, T.; Watanabe, Y.; Yokkaichi, S.

    2011-12-01

    The RIKEN Computing Center in Japan (CCJ) has been developed to make it possible to analyze the huge amount of data collected by the PHENIX experiment at RHIC. The collected raw data or reconstructed data are transferred via SINET3 with 10 Gbps bandwidth from Brookhaven National Laboratory (BNL) using GridFTP. The transferred data are first stored in the High Performance Storage System (HPSS) prior to user analysis. Since the size of the data grows steadily year by year, concentration of access requests at the data servers has become one of the serious bottlenecks. To eliminate this I/O-bound problem, 18 calculating nodes with a total of 180 TB of local disk were introduced to store the data a priori. We added some setup to the batch job scheduler (LSF) so that users can specify the required data already distributed to the local disks. The locations of the data are automatically obtained from a database, and jobs are dispatched to the appropriate node holding the required data. To avoid multiple accesses to a local disk from several jobs on a node, techniques of lock files and access control lists are employed. As a result, each job can handle a local disk exclusively. Indeed, the total throughput was improved drastically compared to the preexisting nodes in CCJ, and users can analyze about 150 TB of data within 9 hours. We report this successful job submission scheme and the features of the PC cluster.
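    The lock-file technique mentioned for exclusive local-disk access can be sketched with an atomic create-if-absent. This is an illustrative stand-in: `acquire_lock`/`release_lock` are hypothetical names, and the actual CCJ setup also relies on access control lists and LSF configuration not shown here.

    ```python
    import errno
    import os

    def acquire_lock(path):
        """Atomically create a lock file; returns True if this job now owns
        the disk, False if another job already holds the lock.  O_CREAT|O_EXCL
        guarantees only one creator succeeds even under concurrent attempts."""
        try:
            fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return True
        except OSError as e:
            if e.errno == errno.EEXIST:
                return False
            raise

    def release_lock(path):
        """Remove the lock file so the next dispatched job can take the disk."""
        os.remove(path)
    ```

    A job would call `acquire_lock` before reading its local-disk data set and `release_lock` on completion, giving each job exclusive use of one disk at a time.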

  4. 93-133 GHz Band InP High-Electron-Mobility Transistor Amplifier with Gain-Enhanced Topology

    NASA Astrophysics Data System (ADS)

    Sato, Masaru; Shiba, Shoichi; Matsumura, Hiroshi; Takahashi, Tsuyoshi; Nakasha, Yasuhiro; Suzuki, Toshihide; Hara, Naoki

    2013-04-01

    In this study, we developed a new type of high-frequency amplifier topology using 75-nm-gate-length InP-based high-electron-mobility transistors (InP HEMTs). To enhance the gain over a wide frequency range, a common-source common-gate hybrid amplifier topology was proposed. A transformer-based balun placed at the input of the amplifier generates differential signals, which are fed to the gate and source terminals of the transistor. The amplified signal is output at the drain node. The simulation results show that the hybrid topology exhibits a higher gain from 90 to 140 GHz than that of the conventional common-source or common-gate amplifier. The two-stage amplifier fabricated using this topology exhibits a small-signal gain of 12 dB and a 3-dB bandwidth of 40 GHz (93-133 GHz), which is the largest bandwidth and the second highest gain reported among published 120-GHz-band amplifiers. In addition, the measured noise figure was 5 dB from 90 to 100 GHz.

  5. Real-Time Spaceborne Synthetic Aperture Radar Float-Point Imaging System Using Optimized Mapping Methodology and a Multi-Node Parallel Accelerating Technique

    PubMed Central

    Li, Bingyi; Chen, Liang; Yu, Wenyue; Xie, Yizhuang; Bian, Mingming; Zhang, Qingjun; Pang, Long

    2018-01-01

    With the development of satellite load technology and very large-scale integrated (VLSI) circuit technology, on-board real-time synthetic aperture radar (SAR) imaging systems have facilitated rapid response to disasters. A key goal of on-board SAR imaging system design is to achieve high real-time processing performance under severe size, weight, and power consumption constraints. This paper presents a multi-node prototype system for real-time SAR imaging processing. We decompose the commonly used chirp scaling (CS) SAR imaging algorithm into two parts according to their computing features. The linearization and logic-memory optimum allocation methods are adopted to realize the nonlinear part in a reconfigurable structure, and the two-part bandwidth balance method is used to realize the linear part. Thus, floating-point SAR imaging processing can be integrated into a single Field Programmable Gate Array (FPGA) chip instead of relying on distributed technologies. A single processing node requires 10.6 s and consumes 17 W to focus 25-km-swath-width, 5-m-resolution stripmap SAR raw data with a granularity of 16,384 × 16,384. The design methodology of the multi-FPGA parallel accelerating system under the real-time principle is introduced. As a proof of concept, a prototype with four processing nodes and one master node is implemented using a Xilinx xc6vlx315t FPGA. The weight and volume of a single machine are 10 kg and 32 cm × 24 cm × 20 cm, respectively, and the power consumption is under 100 W. The real-time performance of the proposed design is demonstrated on Chinese Gaofen-3 stripmap continuous imaging. PMID:29495637

  6. Accelerating DNA analysis applications on GPU clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Villa, Oreste

    DNA analysis is an emerging application of high-performance bioinformatics. Modern sequencing machines are able to provide, in a few hours, large input streams of data which need to be matched against exponentially growing databases of known fragments. The ability to recognize these patterns effectively and quickly may allow extending the scale and the reach of the investigations performed by biologists. Aho-Corasick is an exact, multiple-pattern matching algorithm often at the base of this application. High-performance systems are a promising platform to accelerate this algorithm, which is computationally intensive but also inherently parallel. Nowadays, high-performance systems also include heterogeneous processing elements, such as Graphics Processing Units (GPUs), to further accelerate parallel algorithms. Unfortunately, the Aho-Corasick algorithm exhibits large performance variability, depending on the size of the input streams, the number of patterns to search, and the number of matches, and poses significant challenges for current high-performance software and hardware implementations. An adequate mapping of the algorithm onto the target architecture, coping with the limits of the underlying hardware, is required to reach the desired high throughput. Load balancing also plays a crucial role when considering the limited bandwidth among the nodes of these systems. In this paper we present an efficient implementation of the Aho-Corasick algorithm for high-performance clusters accelerated with GPUs. We discuss how we partitioned and adapted the algorithm to fit the Tesla C1060 GPU and then present an MPI-based implementation for a heterogeneous high-performance cluster. We compare this implementation to MPI and MPI-with-pthreads implementations for a homogeneous cluster of x86 processors, discussing stability versus performance and the scaling of the solutions, taking into consideration aspects such as the bandwidth among the different nodes.
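    For reference, the Aho-Corasick automaton at the core of the application can be sketched in plain Python. This is a minimal single-threaded version of the textbook algorithm, not the paper's GPU/MPI implementation:

```python
from collections import deque

# Build the Aho-Corasick automaton: a trie over the patterns plus
# breadth-first failure links, with each state carrying the set of
# patterns that end there.
def build_automaton(patterns):
    goto, fail, out = [{}], [0], [set()]      # state 0 is the root
    for pat in patterns:
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    q = deque(goto[0].values())               # depth-1 states fail to root
    while q:
        s = q.popleft()
        for ch, t in goto[s].items():
            q.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]            # inherit shorter suffix matches
    return goto, fail, out

def search(text, patterns):
    """Return (start index, pattern) for every match, in one pass."""
    goto, fail, out = build_automaton(patterns)
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for pat in out[s]:
            hits.append((i - len(pat) + 1, pat))
    return hits
```

The failure links are what make the algorithm a single pass over the input regardless of how many patterns are loaded, which is also why its throughput depends so strongly on the pattern set and match density, as the abstract notes.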

  7. Routing and wavelength assignment based on normalized resource and constraints for all-optical network

    NASA Astrophysics Data System (ADS)

    Joo, Seong-Soon; Nam, Hyun-Soon; Lim, Chang-Kyu

    2003-08-01

    With the rapid growth of the Optical Internet, high-capacity pipes are finally destined to support end-to-end IP over the WDM optical network. The recent market launch of 2D MEMS optical switching modules supports expectations that transparent optical cross-connects are coming, and has encouraged field-applicable research on establishing a real all-optical transparent network. To open up customer-driven bandwidth services, design of the optical transport network becomes a more challenging task in terms of optimal network resource usage. This paper presents a practical approach to finding a route and wavelength assignment (RWA) for a wavelength-routed all-optical network, which has λ-plane OXC switches and wavelength converters, and supports optical paths that are randomly set up and released by dynamic wavelength provisioning to create bandwidth between end users on timescales on the order of seconds or milliseconds. We suggest three constraints that make the RWA problem more practical for deployment in a wavelength-routed all-optical network: a limit on the maximum hops of a route within bearable optical network impairments, a limit on the minimum hops to travel before converting a wavelength, and a limit on the calculation time to find all routes for connections requested at once. We design the NRCD (Normalized Resource and Constraints for All-Optical Network RWA Design) algorithm for the Tera OXC: the network resource for a route is calculated from the number of internal switching paths established in each OXC node on the route, normalized by the ratio of the number of paths established to the number of paths equipped in a node. We show that it fits the RWA needs of the wavelength-routed all-optical network through real experiments on a distributed-objects platform.
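    The normalized-resource idea can be illustrated with a toy cost function. The `MAX_HOPS` value and the traffic numbers are assumptions for the example; the paper's NRCD algorithm additionally constrains wavelength conversion and total calculation time.

```python
# Toy normalized-resource cost for a candidate route: each OXC node
# contributes (internal switching paths established) / (paths
# equipped), and routes longer than the impairment hop limit are
# rejected outright. MAX_HOPS and the numbers below are assumed.
MAX_HOPS = 5

def route_cost(route, established, equipped):
    """route: list of node ids; returns the cost, or None if over the hop limit."""
    if len(route) - 1 > MAX_HOPS:
        return None
    return sum(established[n] / equipped[n] for n in route)

established = {"A": 10, "B": 60, "C": 30}   # switching paths in use per node
equipped = {"A": 100, "B": 100, "C": 100}   # installed capacity per node
cost = route_cost(["A", "B", "C"], established, equipped)   # 0.1 + 0.6 + 0.3
```

A lower cost means the route passes through less-loaded cross-connects, so selecting the cheapest feasible route spreads load across the network.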

  8. A potassium Faraday anomalous dispersion optical filter

    NASA Technical Reports Server (NTRS)

    Yin, B.; Shay, T. M.

    1992-01-01

    The characteristics of a potassium Faraday anomalous dispersion optical filter operating on the blue and near-infrared transitions are calculated. The results show that the filter can be designed to provide high transmission, a very narrow pass bandwidth, and a low equivalent noise bandwidth. The Faraday anomalous dispersion optical filter (FADOF) provides a narrow-pass-bandwidth (on the order of gigahertz) optical filter for laser communications, remote sensing, and lidar. The general theoretical model for the FADOF was established in our previous paper. In this paper, we identify the optimum operational conditions for a potassium FADOF operating on the blue and infrared transitions. The signal transmission, bandwidth, and equivalent noise bandwidth (ENBW) are also calculated.

  9. Optimal cube-connected cube multiprocessors

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Wu, Jie

    1993-01-01

    Many CFD (computational fluid dynamics) and other scientific applications can be partitioned into subproblems. However, in general the partitioned subproblems are very large. They demand high-performance computing power themselves, and the solutions of the subproblems have to be combined at each time step. The cube-connected cube (CCCube) architecture is studied. The CCCube architecture is an extended hypercube structure with each node replaced by a cube. It requires fewer physical links between nodes than the hypercube, and provides the same communication support as the hypercube on many applications. The saved physical links can be used to enhance the bandwidth of the remaining links and, therefore, enhance the overall performance. The concept of and the method to obtain optimal CCCubes, which are the CCCubes with a minimum number of links for a given total number of nodes, are proposed. The superiority of optimal CCCubes over standard hypercubes is also shown in terms of link usage in the embedding of a binomial tree. A useful computation structure based on a semi-binomial tree for divide-and-conquer parallel algorithms is identified. It is shown that this structure can be implemented in optimal CCCubes without performance degradation compared with regular hypercubes. The results presented should provide a useful approach to the design of scientific parallel computers.
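    As a point of reference for the link savings claimed above, the link count of the standard hypercube baseline is easy to compute; the CCCube's own, reduced link count depends on the paper's construction and is not reproduced here.

```python
# Link count of the d-dimensional hypercube baseline: 2**d nodes with
# d links each, and every link is shared by two nodes.
def hypercube_links(d):
    return d * 2 ** d // 2
```

For example, a 10-dimensional hypercube connects 1,024 nodes with 5,120 bidirectional links; it is this budget that the optimal CCCube reduces for the same node count.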

  10. Purple L1 Milestone Review Panel GPFS Functionality and Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loewe, W E

    2006-12-01

    The GPFS deliverable for the Purple system requires the functionality and performance necessary for ASC I/O needs. The functionality includes POSIX and MPIIO compatibility, and multi-TB file capability across the entire machine. The bandwidth performance required is 122.15 GB/s, as necessary for productive and defensive I/O requirements, and the metadata performance requirement is 5,000 file stats per second. To determine success for this deliverable, several tools are employed. For functionality testing of POSIX, 10-TB files, and high node counts, the parallel file system bandwidth performance test IOR is used. IOR is an MPI-coordinated application that can write and then read to a single shared file or to an individual file per process and check the data integrity of the file(s). The MPIIO functionality is tested with the MPIIO test suite from the MPICH library. Bandwidth performance is tested using IOR for the required 122.15 GB/s sustained write. All IOR tests are performed with data checking enabled. Metadata performance is tested after "aging" the file system with 80% data block usage and 20% inode usage. The fdtree metadata test is expected to create and remove a large directory/file structure in under 20 minutes, akin to interactive metadata usage. Multiple (10) instances of "ls -lR", each performing over 100K stats, are run concurrently in different large directories to demonstrate 5,000 stats/sec.
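    The stat-rate demonstration can be sanity-checked with quick arithmetic: ten concurrent runs of 100K stats each, divided by the required rate, bound the acceptable wall time.

```python
# Arithmetic behind the metadata demonstration: ten concurrent
# "ls -lR" runs of 100,000 stats each must complete within
# 1,000,000 / 5,000 = 200 seconds to sustain 5,000 stats/s.
INSTANCES = 10
STATS_EACH = 100_000
REQUIRED_RATE = 5_000            # stats per second

max_seconds = INSTANCES * STATS_EACH / REQUIRED_RATE
```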

  11. Cross-phase modulation bandwidth in ultrafast fiber wavelength converters

    NASA Astrophysics Data System (ADS)

    Luís, Ruben S.; Monteiro, Paulo; Teixeira, António

    2006-12-01

    We propose a novel analytical model for the characterization of fiber cross-phase modulation (XPM) in ultrafast all-optical fiber wavelength converters operating at modulation frequencies higher than 1 THz. The model is used to compare the XPM frequency limitations of a conventional fiber, a highly nonlinear dispersion-shifted fiber (HN-DSF), and a bismuth oxide-based fiber, introducing the XPM bandwidth as a design parameter. It is shown that the HN-DSF presents the highest XPM bandwidth, above 1 THz, making it the most appropriate for ultrafast wavelength conversion.

  12. Underwater Communications for Video Surveillance Systems at 2.4 GHz

    PubMed Central

    Sendra, Sandra; Lloret, Jaime; Jimenez, Jose Miguel; Rodrigues, Joel J.P.C.

    2016-01-01

    Video surveillance is needed to control many activities performed in underwater environments. The use of wired media can be a problem, since material specially designed for underwater environments is very expensive. To transmit images and video wirelessly under water, three main technologies can be used: acoustic waves, which do not provide high bandwidth; optical signals, which offer high transfer rates, although light dispersion in water severely penalizes the transmitted signals and limits the maximum distance to very small values; and electromagnetic (EM) waves, which can provide enough bandwidth for video delivery. In cases where the distance between transmitter and receiver is short, EM waves are an interesting option, since they provide data transfer rates high enough to transmit video at high resolution. This paper presents a practical study of the behavior of EM waves at 2.4 GHz in freshwater underwater environments. First, we discuss the minimum requirements of a network to allow video delivery. From these results, we measure the maximum distance between nodes and the round-trip time (RTT) depending on several parameters such as data transfer rate, signal modulation, working frequency, and water temperature. The results are statistically analyzed to determine their relation. Finally, the EM waves’ behavior is modeled by a set of equations. The results show that some combinations of working frequency, modulation, transfer rate and temperature offer better results than others. Our work shows that short communication distances with high data transfer rates are feasible. PMID:27782095
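    A sketch of the kind of empirical modeling the abstract describes: fitting a simple line to RTT-versus-distance samples with ordinary least squares. The sample values below are invented for the example, not measured data, and the paper's actual model also involves modulation, data rate, frequency, and temperature.

```python
# Ordinary least-squares fit of y = slope * x + intercept, written out
# from the textbook formulas (covariance over variance).
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx           # slope, intercept

distance_cm = [5, 10, 15, 20]               # illustrative, not measured
rtt_ms = [2.0, 3.0, 4.0, 5.0]
slope, intercept = fit_line(distance_cm, rtt_ms)
```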

  13. Underwater Communications for Video Surveillance Systems at 2.4 GHz.

    PubMed

    Sendra, Sandra; Lloret, Jaime; Jimenez, Jose Miguel; Rodrigues, Joel J P C

    2016-10-23

    Video surveillance is needed to control many activities performed in underwater environments. The use of wired media can be a problem, since material specially designed for underwater environments is very expensive. To transmit images and video wirelessly under water, three main technologies can be used: acoustic waves, which do not provide high bandwidth; optical signals, which offer high transfer rates, although light dispersion in water severely penalizes the transmitted signals and limits the maximum distance to very small values; and electromagnetic (EM) waves, which can provide enough bandwidth for video delivery. In cases where the distance between transmitter and receiver is short, EM waves are an interesting option, since they provide data transfer rates high enough to transmit video at high resolution. This paper presents a practical study of the behavior of EM waves at 2.4 GHz in freshwater underwater environments. First, we discuss the minimum requirements of a network to allow video delivery. From these results, we measure the maximum distance between nodes and the round-trip time (RTT) depending on several parameters such as data transfer rate, signal modulation, working frequency, and water temperature. The results are statistically analyzed to determine their relation. Finally, the EM waves' behavior is modeled by a set of equations. The results show that some combinations of working frequency, modulation, transfer rate and temperature offer better results than others. Our work shows that short communication distances with high data transfer rates are feasible.

  14. Theoretical and Empirical Comparison of Big Data Image Processing with Apache Hadoop and Sun Grid Engine

    PubMed Central

    Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.

    2016-01-01

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., “short” processing times and/or “large” datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply “large scale” processing transitions into “big data” and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging. PMID:28736473
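    The shape of such a wall-clock comparison can be sketched with a deliberately simplified model. These are illustrative formulas, not the paper's validated models: with shared NFS storage every job pays a transfer cost on a contended link, while co-located storage removes most of that term.

```python
# Illustrative (assumed) wall-clock models. With a shared NFS, n_jobs
# contend for one storage link, so the batch's transfer time scales
# with the job count; with Hadoop-style co-location only the non-local
# fraction of the data crosses the network.
def sge_wall_time(data_gb, compute_s, n_jobs, nfs_bw_gb_s=1.0):
    return data_gb * n_jobs / nfs_bw_gb_s + compute_s

def hadoop_wall_time(data_gb, compute_s, locality=0.9, net_bw_gb_s=1.0):
    return data_gb * (1 - locality) / net_bw_gb_s + compute_s
```

The crossover the paper characterizes empirically falls out of even this crude model: short `compute_s` with large `data_gb` makes the transfer term dominate, which is exactly the "short jobs and/or large datasets" regime named above.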

  15. Wireless Data-Acquisition System for Testing Rocket Engines

    NASA Technical Reports Server (NTRS)

    Lin, Chujen; Lonske, Ben; Hou, Yalin; Xu, Yingjiu; Gang, Mei

    2007-01-01

    A prototype wireless data-acquisition system has been developed as a potential replacement for the wired data-acquisition system heretofore used in testing rocket engines. The traditional use of wires to connect sensors, signal-conditioning circuits, and data-acquisition circuitry is time-consuming and prone to error, especially when, as is often the case, many sensors are used in a test. The system includes one master and multiple slave nodes. The master node communicates with a computer via an Ethernet connection. The slave nodes are powered by rechargeable batteries and are packaged in weatherproof enclosures. The master unit and each of the slave units are equipped with a time-modulated ultra-wide-band (TM-UWB) radio transceiver, which spreads its RF energy over several gigahertz by transmitting extremely low-power and super-narrow pulses. In this prototype system, each slave node can be connected to as many as six sensors: two sensors can be connected directly to analog-to-digital converters (ADCs) in the slave node and four sensors can be connected indirectly to the ADCs via signal conditioners. The maximum sampling rate for streaming data from any given sensor is about 5 kHz. The bandwidth of one channel of the TM-UWB radio communication system is sufficient to accommodate streaming of data from five slave nodes when they are fully loaded with data collected through all possible sensor connections. TM-UWB radios have a much higher spatial capacity than traditional sinusoidal-wave-based radios. Hence, this TM-UWB wireless data-acquisition system can be scaled to cover denser sensor setups for rocket engine test stands. Another advantage of TM-UWB radios is that they will not interfere with existing wireless transmissions. The maximum radio-communication range between the master node and a slave node for this prototype system is about 50 ft (15 m) when the master and slave transceivers are equipped with small dipole antennas.
The range can be increased by changing to larger antennas and/or greater transmission power. The battery life of a slave node ranges from about six hours during operation at full capacity to as long as three days when the system is in a "sleep" mode used to conserve battery charge during times between setup and rocket-engine testing. Batteries can be added to prolong operational lifetimes. The radio transceiver dominates the power consumption.
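    The radio budget implied by the abstract is easy to check: five fully loaded slave nodes, six sensors each, streaming at the 5 kHz maximum. The 16-bit sample size is an assumption for the example; the abstract does not state the ADC resolution.

```python
# Back-of-the-envelope aggregate data rate for the fully loaded system.
# BYTES_PER_SAMPLE = 2 (a 16-bit ADC) is assumed, not stated above.
SLAVES = 5
SENSORS_PER_SLAVE = 6
RATE_HZ = 5_000
BYTES_PER_SAMPLE = 2

aggregate_bits_per_s = SLAVES * SENSORS_PER_SLAVE * RATE_HZ * BYTES_PER_SAMPLE * 8
# about 2.4 Mbit/s of raw sensor data for the whole system
```

Even with framing overhead, a rate of this order is modest for a multi-gigahertz-bandwidth TM-UWB channel, consistent with the claim that one channel carries all five fully loaded slaves.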

  16. Resource Optimization Scheme for Multimedia-Enabled Wireless Mesh Networks

    PubMed Central

    Ali, Amjad; Ahmed, Muhammad Ejaz; Piran, Md. Jalil; Suh, Doug Young

    2014-01-01

    Wireless mesh networking is a promising technology that can support numerous multimedia applications. Multimedia applications have stringent quality of service (QoS) requirements, i.e., bandwidth, delay, jitter, and packet loss ratio. Enabling such QoS-demanding applications over wireless mesh networks (WMNs) requires QoS-provisioning routing protocols, which lead to a network resource underutilization problem. Moreover, random topology deployment leaves some network resources unused. Therefore, resource optimization is one of the most critical design issues in multi-hop, multi-radio WMNs carrying multimedia applications. Resource optimization has been studied extensively in the literature for wireless ad hoc and sensor networks, but existing studies have not considered the resource underutilization caused by QoS-provisioning routing and random topology deployment. Finding a QoS-provisioned path in wireless mesh networks is an NP-complete problem. In this paper, we propose a novel Integer Linear Programming (ILP) optimization model to reconstruct an optimal connected mesh backbone topology with a minimum number of links and relay nodes that satisfies the given end-to-end QoS demands for multimedia traffic and identifies extra resources, while maintaining redundancy. We further propose a polynomial-time heuristic algorithm called Link and Node Removal Considering Residual Capacity and Traffic Demands (LNR-RCTD). Simulation studies show that our heuristic algorithm provides near-optimal results and saves about 20% of the resources that would otherwise be wasted by QoS-provisioning routing and random topology deployment. PMID:25111241
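    A toy heuristic in the spirit of link removal (invented for illustration; the paper's LNR-RCTD accounts for per-path QoS demands and also removes relay nodes) can be sketched as: drop a link only if the topology stays connected and the surviving aggregate capacity still covers the traffic demand.

```python
# Greedy link pruning over an undirected graph of (u, v, capacity)
# links. This is a hypothetical sketch, not the paper's algorithm.
def connected(nodes, links):
    """Depth-first reachability check over undirected (u, v, cap) links."""
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(v for u, v, _ in links if u == n)
        stack.extend(u for u, v, _ in links if v == n)
    return seen == set(nodes)

def prune_links(nodes, links, demand):
    links = list(links)
    for link in sorted(links, key=lambda l: l[2]):    # try weakest links first
        rest = [l for l in links if l != link]
        if connected(nodes, rest) and sum(c for _, _, c in rest) >= demand:
            links = rest
    return links

mesh = [("A", "B", 10), ("B", "C", 10), ("A", "C", 10)]
backbone = prune_links({"A", "B", "C"}, mesh, demand=15)   # one link removed
```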

  17. Resource optimization scheme for multimedia-enabled wireless mesh networks.

    PubMed

    Ali, Amjad; Ahmed, Muhammad Ejaz; Piran, Md Jalil; Suh, Doug Young

    2014-08-08

    Wireless mesh networking is a promising technology that can support numerous multimedia applications. Multimedia applications have stringent quality of service (QoS) requirements, i.e., bandwidth, delay, jitter, and packet loss ratio. Enabling such QoS-demanding applications over wireless mesh networks (WMNs) requires QoS-provisioning routing protocols, which lead to a network resource underutilization problem. Moreover, random topology deployment leaves some network resources unused. Therefore, resource optimization is one of the most critical design issues in multi-hop, multi-radio WMNs carrying multimedia applications. Resource optimization has been studied extensively in the literature for wireless ad hoc and sensor networks, but existing studies have not considered the resource underutilization caused by QoS-provisioning routing and random topology deployment. Finding a QoS-provisioned path in wireless mesh networks is an NP-complete problem. In this paper, we propose a novel Integer Linear Programming (ILP) optimization model to reconstruct an optimal connected mesh backbone topology with a minimum number of links and relay nodes that satisfies the given end-to-end QoS demands for multimedia traffic and identifies extra resources, while maintaining redundancy. We further propose a polynomial-time heuristic algorithm called Link and Node Removal Considering Residual Capacity and Traffic Demands (LNR-RCTD). Simulation studies show that our heuristic algorithm provides near-optimal results and saves about 20% of the resources that would otherwise be wasted by QoS-provisioning routing and random topology deployment.

  18. Security clustering algorithm based on reputation in hierarchical peer-to-peer network

    NASA Astrophysics Data System (ADS)

    Chen, Mei; Luo, Xin; Wu, Guowen; Tan, Yang; Kita, Kenji

    2013-03-01

    To address the security problems of hierarchical P2P networks (HPNs), this paper presents a security clustering algorithm based on reputation (CABR). In the algorithm, we adopt a reputation mechanism to ensure transaction security and use clusters to manage the reputation mechanism. To improve security, reduce the network cost of reputation management, and enhance cluster stability, we select reputation, historical average online time, and network bandwidth as the basic factors of a node's comprehensive performance. Simulation results showed that the proposed algorithm improved security, reduced network overhead, and enhanced cluster stability.
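    The comprehensive-performance idea can be illustrated with a weighted score over the three factors. The weights and the [0, 1] normalization are assumptions for the example, not CABR's actual parameters.

```python
# Hypothetical weighted combination of the three factors the abstract
# names; each factor is assumed pre-normalized to [0, 1].
WEIGHTS = {"reputation": 0.5, "avg_online": 0.3, "bandwidth": 0.2}

def node_score(node):
    return sum(w * node[k] for k, w in WEIGHTS.items())

nodes = [
    {"reputation": 0.9, "avg_online": 0.8, "bandwidth": 0.4},
    {"reputation": 0.5, "avg_online": 0.9, "bandwidth": 0.9},
]
cluster_head = max(nodes, key=node_score)   # highest comprehensive score
```

Weighting reputation most heavily, as here, matches the algorithm's emphasis on transaction security, while the online-time and bandwidth terms favor stable, well-connected cluster heads.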

  19. Scaling induced performance challenges/limitations of on-chip metal interconnects and comparisons with optical interconnects

    NASA Astrophysics Data System (ADS)

    Kapur, Pawan

    The miniaturization paradigm for silicon integrated circuits has resulted in a tremendous cost and performance advantage. Aggressive shrinking of devices provides faster transistors and a greater functionality for circuit design. However, scaling induced smaller wire cross-sections coupled with longer lengths owing to larger chip areas, result in a steady deterioration of interconnects. This degradation in interconnect trends threatens to slow down the rapid growth along Moore's law. This work predicts that the situation is worse than anticipated. It shows that in the light of technology and reliability constraints, scaling induced increase in electron surface scattering, fractional cross section area occupied by the highly resistive barrier, and realistic interconnect operation temperature will lead to a significant rise in effective resistivity of modern copper based interconnects. We start by discussing various technology factors affecting copper resistivity. We, next, develop simulation tools to model these effects. Using these tools, we quantify the increase in realistic copper resistivity as a function of future technology nodes, under various technology assumptions. Subsequently, we evaluate the impact of these technology effects on delay and power dissipation of global signaling interconnects. Modern long on-chip wires use repeaters, which dramatically improves their delay and bandwidth. We quantify the repeated wire delays and power dissipation using realistic resistance trends at future nodes. With the motivation of reducing power, we formalize a methodology, which trades power with delay very efficiently for repeated wires. Using this method, we find that although the repeater power comes down, the total power dissipation due to wires is still found to be very large at future nodes. Finally, we explore optical interconnects as a possible substitute, for specific interconnect applications. We model an optical receiver and waveguides. 
Using these models, we assess future optical system performance. Finally, we compare the delay and power of future metal interconnects with those of optical interconnects for the global signaling application. We also compare the power dissipation of the two approaches for an upper-level clock distribution application. We find that for long on-chip communication links, optical interconnects have lower latencies than future metal interconnects at comparable levels of power dissipation.

  20. Unleashing spatially distributed ecohydrology modeling using Big Data tools

    NASA Astrophysics Data System (ADS)

    Miles, B.; Idaszak, R.

    2015-12-01

    Physically based spatially distributed ecohydrology models are useful for answering science and management questions related to the hydrology and biogeochemistry of prairie, savanna, forested, as well as urbanized ecosystems. However, these models can produce hundreds of gigabytes of spatial output for a single model run over decadal time scales when run at regional spatial scales and moderate spatial resolutions (~100-km2+ at 30-m spatial resolution) or when run for small watersheds at high spatial resolutions (~1-km2 at 3-m spatial resolution). Numerical data formats such as HDF5 can store arbitrarily large datasets. However even in HPC environments, there are practical limits on the size of single files that can be stored and reliably backed up. Even when such large datasets can be stored, querying and analyzing these data can suffer from poor performance due to memory limitations and I/O bottlenecks, for example on single workstations where memory and bandwidth are limited, or in HPC environments where data are stored separately from computational nodes. The difficulty of storing and analyzing spatial data from ecohydrology models limits our ability to harness these powerful tools. Big Data tools such as distributed databases have the potential to surmount the data storage and analysis challenges inherent to large spatial datasets. Distributed databases solve these problems by storing data close to computational nodes while enabling horizontal scalability and fault tolerance. Here we present the architecture of and preliminary results from PatchDB, a distributed datastore for managing spatial output from the Regional Hydro-Ecological Simulation System (RHESSys). The initial version of PatchDB uses message queueing to asynchronously write RHESSys model output to an Apache Cassandra cluster. Once stored in the cluster, these data can be efficiently queried to quickly produce both spatial visualizations for a particular variable (e.g. 
maps and animations), as well as point time series of arbitrary variables at arbitrary points in space within a watershed or river basin. By treating ecohydrology modeling as a Big Data problem, we hope to provide a platform for answering transformative science and management questions related to water quantity and quality in a world of non-stationary climate.
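    The message-queueing pattern described above can be sketched with Python's standard library. A dict stands in for the Apache Cassandra cluster and a `queue.Queue` for the message broker; all names are invented for the example.

```python
import queue
import threading

# Toy stand-in for the PatchDB write path: model output is queued and a
# background worker drains it into the datastore, so the simulation
# never blocks on storage.
store = {}
q = queue.Queue()

def writer():
    while True:
        item = q.get()
        if item is None:          # sentinel: no more output, shut down
            break
        patch_id, values = item
        store[patch_id] = values  # the "cluster" write, off the model's path
        q.task_done()

t = threading.Thread(target=writer)
t.start()
for patch in range(3):            # the model keeps running while writes drain
    q.put((patch, {"streamflow": patch * 10}))
q.put(None)
t.join()
```

Decoupling the producer from the writer is what lets the datastore scale horizontally and absorb bursts of spatial output without stalling the model.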

  1. Innovative research of AD HOC network mobility model

    NASA Astrophysics Data System (ADS)

    Chen, Xin

    2017-08-01

    It is difficult for ad hoc network researchers to carry out actual deployments during the experimental stage, as the network topology is changeable and the locations of nodes are not fixed. Thus simulation remains the main research method for these networks. The mobility model is an important component of ad hoc network simulation. It describes the movement pattern of nodes (including location, velocity, etc.) and decides their movement trails, serving as an abstraction of node movement modes. Therefore, a mobility model that simulates node movement is an important foundation for simulation research, and it should reflect the movement laws of nodes as truly as possible. In this paper, a node generally refers to the wireless equipment a person carries. The main research contents include how nodes avoid obstacles during movement and the impact of obstacles on the relations among nodes, based on which a Node Self-Avoiding Obstacle (NASO) model is established for ad hoc networks.
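    A toy obstacle-avoiding step shows the basic mechanic: a node re-draws a proposed move while it would land inside an obstacle. This is a hypothetical sketch; the NASO model's actual avoidance rules are not specified in the abstract.

```python
import random

# Toy obstacle-avoiding mobility step. The obstacle is an axis-aligned
# rectangle (x_min, y_min, x_max, y_max); all values are invented.
OBSTACLE = (4.0, 4.0, 6.0, 6.0)

def inside(point, box):
    return box[0] <= point[0] <= box[2] and box[1] <= point[1] <= box[3]

def step(pos, speed=1.0, rng=random):
    """Propose random moves until one lands outside the obstacle."""
    while True:
        nxt = (pos[0] + rng.uniform(-speed, speed),
               pos[1] + rng.uniform(-speed, speed))
        if not inside(nxt, OBSTACLE):
            return nxt

rng = random.Random(1)            # seeded for reproducibility
path = [(0.0, 0.0)]
for _ in range(100):
    path.append(step(path[-1], rng=rng))
```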

  2. DOW-PR DOlphin and Whale Pods Routing Protocol for Underwater Wireless Sensor Networks (UWSNs).

    PubMed

    Wadud, Zahid; Ullah, Khadem; Hussain, Sajjad; Yang, Xiaodong; Qazi, Abdul Baseer

    2018-05-12

    Underwater Wireless Sensor Networks (UWSNs) have intrinsic challenges that include long propagation delays, high mobility of sensor nodes due to water currents, Doppler spread, delay variance, multipath, attenuation, and geometric spreading. The existing Weighting Depth and Forwarding Area Division Depth-Based Routing (WDFAD-DBR) protocol considers the weighting depth of the two hops in order to select the next Potential Forwarding Node (PFN). To improve the performance of WDFAD-DBR, we propose the DOlphin and Whale Pod Routing protocol (DOW-PR). In this scheme, we divide the transmission range into a number of transmission power levels and at the same time select the next PFNs from the forwarding and suppressed zones. In contrast to WDFAD-DBR, our scheme not only considers the packet's upward advancement, but also takes into account the number of suppressed nodes and the number of PFNs at the first and second hops. Consequently, a reasonable energy reduction is observed while receiving and transmitting packets. Moreover, our scheme also considers the hop count of the PFNs from the sink. In the absence of PFNs, the proposed scheme selects a node from the suppressed region for broadcasting and thus ensures minimum loss of data. Besides this, we also propose another routing scheme (whale pod) in which multiple sinks are placed at the water surface, but one sink is embedded inside the water and is physically connected with a surface sink through a high-bandwidth connection. Simulation results show that the proposed scheme achieves a high Packet Delivery Ratio (PDR), low energy tax, reduced Accumulated Propagation Distance (APD), and increased network lifetime.
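    An illustrative forwarder choice in the spirit of DOW-PR (assumed, not the paper's exact weighting): prefer the candidate with the greatest upward advancement, break ties with fewer suppressed neighbors and fewer hops to a sink, and fall back to the suppressed zone when no candidate is shallower than the sender.

```python
# Hypothetical PFN selection. Depth is measured from the surface, so a
# smaller depth means greater upward advancement toward the sinks.
def select_pfn(current_depth, candidates):
    """candidates: dicts with 'depth', 'suppressed', 'hops_to_sink'."""
    eligible = [c for c in candidates if c["depth"] < current_depth]
    if not eligible:
        return None               # caller falls back to the suppressed zone
    return min(eligible,
               key=lambda c: (c["depth"], c["suppressed"], c["hops_to_sink"]))
```

Returning `None` models the situation the abstract describes: when no forwarding-zone PFN exists, the protocol broadcasts via a suppressed-region node rather than dropping the packet.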

  3. DOW-PR DOlphin and Whale Pods Routing Protocol for Underwater Wireless Sensor Networks (UWSNs)

    PubMed Central

    Wadud, Zahid; Ullah, Khadem; Hussain, Sajjad; Yang, Xiaodong; Qazi, Abdul Baseer

    2018-01-01

    Underwater Wireless Sensor Networks (UWSNs) have intrinsic challenges that include long propagation delays, high mobility of sensor nodes due to water currents, Doppler spread, delay variance, multipath, attenuation and geometric spreading. The existing Weighting Depth and Forwarding Area Division Depth Based Routing (WDFAD-DBR) protocol considers the weighting depth of the two hops in order to select the next Potential Forwarding Node (PFN). To improve the performance of WDFAD-DBR, we propose the DOlphin and Whale Pod Routing protocol (DOW-PR). In this scheme, we divide the transmission range into a number of transmission power levels and at the same time select the next PFNs from the forwarding and suppressed zones. In contrast to WDFAD-DBR, our scheme not only considers the packet's upward advancement, but also takes into account the number of suppressed nodes and the number of PFNs at the first and second hops. Consequently, a reasonable energy reduction is observed while receiving and transmitting packets. Moreover, our scheme also considers the hop count of the PFNs from the sink. In the absence of PFNs, the proposed scheme will select a node from the suppressed region for broadcasting and thus ensures minimum loss of data. Besides this, we also propose another routing scheme (whale pod) in which multiple sinks are placed at the water surface, but one sink is embedded inside the water and is physically connected with the surface sink through a high-bandwidth connection. Simulation results show that the proposed scheme has a high Packet Delivery Ratio (PDR), low energy tax, reduced Accumulated Propagation Distance (APD) and increased network lifetime. PMID:29757208

  4. Amplifying modeling for broad bandwidth pulse in Nd:glass based on hybrid-broaden mechanism

    NASA Astrophysics Data System (ADS)

    Su, J.; Liu, L.; Luo, B.; Wang, W.; Jing, F.; Wei, X.; Zhang, X.

    2008-05-01

    In this paper, the cross relaxation time is proposed to combine the homogeneous and inhomogeneous broadening mechanisms in a model for broad-bandwidth pulse amplification. The corresponding rate equation, which describes the response of the population inversion on the upper and lower energy levels of the gain medium to different frequency components of the pulse, is also put forward. Gain saturation and energy relaxation effects are included in the rate equation. A code named CPAP has been developed to simulate the amplification of broad-bandwidth pulses in a multi-pass laser system. The amplifying capability of the multi-pass laser system is evaluated, and gain narrowing and temporal shape distortion are investigated for different pulse bandwidths and cross relaxation times of the gain medium. The results can benefit the design of high-energy PW laser systems at LFRC, CAEP.

  5. Quantifying the effect of finite spectral bandwidth on extinction coefficient of species in laser absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Singh, Manjeet; Singh, Jaswant; Singh, Baljit; Ghanshyam, C.

    2016-11-01

    The aim of this study is to quantify the effect of finite spectral bandwidth on laser absorption spectroscopy for a wide-band laser source. Experimental analysis reveals that the extinction coefficient of an analyte is affected by the bandwidth of the spectral source, which may result in erroneous conclusions. An approximate mathematical model has been developed for optical intensities having a Gaussian line shape, which includes the impact of the source's spectral bandwidth in the equation for spectroscopic absorption. This is done by introducing suitable first-order and second-order bandwidth approximations into the Beer-Lambert law equation for the finite-bandwidth case. The derived expressions were validated by spectroscopic analysis with a larger spectral bandwidth (SBW) on a test sample, Rhodamine B. The concentrations calculated using the proposed approximation were in close agreement with the true values when compared with those calculated with the conventional approach.
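    The underlying effect can be illustrated numerically: averaging the Beer-Lambert transmission over a source of finite spectral width lowers the apparent absorbance relative to the monochromatic value. The Gaussian source and Gaussian absorption line below are generic assumptions for illustration, not the paper's specific approximation.

```python
import math

def absorbance_finite_bw(alpha_peak, line_fwhm, src_fwhm, n=2001, span=5.0):
    """Apparent absorbance -ln(I/I0) when a Gaussian source of FWHM
    src_fwhm probes a Gaussian absorption line of FWHM line_fwhm,
    both centred at the same frequency. The monochromatic
    (Beer-Lambert) limit is recovered as src_fwhm -> 0."""
    s_line = line_fwhm / (2 * math.sqrt(2 * math.log(2)))
    s_src = src_fwhm / (2 * math.sqrt(2 * math.log(2)))
    half = span * max(s_src, s_line)
    dx = 2 * half / (n - 1)
    num = den = 0.0
    for i in range(n):
        x = -half + i * dx
        w = math.exp(-x * x / (2 * s_src * s_src))        # source spectrum
        t = math.exp(-alpha_peak * math.exp(-x * x / (2 * s_line * s_line)))
        num += w * t * dx                                  # transmitted power
        den += w * dx                                      # incident power
    return -math.log(num / den)
```

    For a peak absorbance of 1, a source as wide as the line reads noticeably less than 1, which is exactly the bias the record's bandwidth corrections address.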

  6. Overcoming the detection bandwidth limit in precision spectroscopy: The analytical apparatus function for a stepped frequency scan

    NASA Astrophysics Data System (ADS)

    Rohart, François

    2017-01-01

    In a previous paper [Rohart et al., Phys. Rev. A 2014;90:042506], the influence of detection-bandwidth properties on the line shapes observed in precision spectroscopy was modeled theoretically for the first time, using the basic assumption of a continuous sweep of the laser frequency. Specific experiments confirmed the general theoretical trends but also revealed several shortcomings of the model in the case of stepped frequency scans. Since up-to-date experiments use step-by-step frequency-swept lasers, a new model of the influence of the detection bandwidth is developed, including a realistic timing of signal sampling and frequency changes. Using Fourier-transform techniques, the resulting time-domain apparatus function takes a simple analytical form that can be implemented in line-shape fitting codes without significantly increasing computation time. This new model is then considered in detail for detection systems characterized by 1st- and 2nd-order bandwidths, underlining the importance of the ratio of the detection time constant to the frequency-step duration, notably for the measurement of line frequencies. It also allows a straightforward analysis of the corresponding systematic deviations in retrieved line frequencies and broadenings. Finally, special attention is paid to the consequences of a finite detection bandwidth in Doppler Broadening Thermometry, namely the experimental adjustments required for a spectroscopic determination of the Boltzmann constant at the 1-ppm level of accuracy. In this respect, the interest of implementing a Butterworth 2nd-order filter is emphasized.
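    The role of the detection time constant relative to the step duration can be illustrated with a bare first-order (RC-type) filter driven by a stepped scan. This is a generic sketch of the distortion mechanism, not the analytical apparatus function derived in the record.

```python
def stepped_scan_response(true_signal, step_duration, tau, samples_per_step=100):
    """First-order detection filter applied to a step-by-step scan.

    true_signal: ideal detector value at each frequency step.
    Returns the filtered value sampled at the END of each step
    (dy/dt = (x - y)/tau, integrated with small Euler steps)."""
    dt = step_duration / samples_per_step
    a = dt / tau
    y = true_signal[0]
    out = []
    for level in true_signal:
        for _ in range(samples_per_step):
            y += a * (level - y)
        out.append(y)
    return out
```

    When tau is comparable to the step duration, the sampled profile lags the true one (distorting retrieved line centers and widths); when tau is much shorter, the samples track the ideal values closely.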

  7. A mathematical prediction model incorporating molecular subtype for risk of non-sentinel lymph node metastasis in sentinel lymph node-positive breast cancer patients: a retrospective analysis and nomogram development.

    PubMed

    Wang, Na-Na; Yang, Zheng-Jun; Wang, Xue; Chen, Li-Xuan; Zhao, Hong-Meng; Cao, Wen-Feng; Zhang, Bin

    2018-04-25

    Molecular subtype of breast cancer is associated with sentinel lymph node status. We sought to establish a mathematical prediction model that included breast cancer molecular subtype for risk of positive non-sentinel lymph nodes in breast cancer patients with sentinel lymph node metastasis and further validate the model in a separate validation cohort. We reviewed the clinicopathologic data of breast cancer patients with sentinel lymph node metastasis who underwent axillary lymph node dissection between June 16, 2014 and November 16, 2017 at our hospital. Sentinel lymph node biopsy was performed, and patients with pathologically proven sentinel lymph node metastasis underwent axillary lymph node dissection. Independent risks for non-sentinel lymph node metastasis were assessed in a training cohort by multivariate analysis and incorporated into a mathematical prediction model. The model was further validated in a separate validation cohort, and a nomogram was developed and evaluated for diagnostic performance in predicting the risk of non-sentinel lymph node metastasis. Moreover, we assessed the performance of five different models in predicting non-sentinel lymph node metastasis in the training cohort. In total, 495 cases were eligible for the study, including 291 patients in the training cohort and 204 in the validation cohort. Non-sentinel lymph node metastasis was observed in 33.3% (97/291) of patients in the training cohort. The AUCs of the MSKCC, Tenon, MDA, Ljubljana, and Louisville models in the training cohort were 0.7613, 0.7142, 0.7076, 0.7483, and 0.671, respectively. 
Multivariate regression analysis indicated that tumor size (OR = 1.439; 95% CI 1.025-2.021; P = 0.036), sentinel lymph node macro-metastasis versus micro-metastasis (OR = 5.063; 95% CI 1.111-23.074; P = 0.036), the number of positive sentinel lymph nodes (OR = 2.583, 95% CI 1.714-3.892; P < 0.001), and the number of negative sentinel lymph nodes (OR = 0.686, 95% CI 0.575-0.817; P < 0.001) were independent, statistically significant predictors of non-sentinel lymph node metastasis. Furthermore, luminal B (OR = 3.311, 95% CI 1.593-6.884; P = 0.001) and HER2 overexpression (OR = 4.308, 95% CI 1.097-16.912; P = 0.036) were independent, statistically significant predictors of non-sentinel lymph node metastasis relative to luminal A. A regression model based on the results of the multivariate analysis was established to predict the risk of non-sentinel lymph node metastasis, with an AUC of 0.8188. The model was validated in the validation cohort and showed excellent diagnostic performance. The mathematical prediction model, which incorporates five variables including breast cancer molecular subtype, demonstrates excellent diagnostic performance in assessing the risk of non-sentinel lymph node metastasis in sentinel lymph node-positive patients. The prediction model could help surgeons evaluate the risk of non-sentinel lymph node involvement in breast cancer patients; however, it requires further validation in prospective studies.
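    Reported odds ratios from a multivariate logistic model translate to coefficients via beta = ln(OR), so the published ORs above determine the model up to its intercept. The sketch below uses that identity; the intercept and feature encodings are hypothetical placeholders (the abstract does not report them), so only relative comparisons are meaningful here.

```python
import math

# Coefficients are ln(OR) from the reported multivariate analysis.
COEF = {
    "tumor_size": math.log(1.439),
    "sln_macrometastasis": math.log(5.063),   # 1 = macro, 0 = micro
    "n_positive_sln": math.log(2.583),
    "n_negative_sln": math.log(0.686),
    "luminal_b": math.log(3.311),             # vs. luminal A baseline
    "her2_overexpression": math.log(4.308),   # vs. luminal A baseline
}
INTERCEPT = -3.0  # HYPOTHETICAL; would be calibrated on the training cohort

def nsln_risk(features):
    """Predicted probability of non-sentinel lymph node metastasis
    under a logistic model with the coefficients above."""
    z = INTERCEPT + sum(COEF[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))
```

    As expected from the signs of the ORs, adding a positive sentinel node raises the predicted risk, while adding a negative sentinel node lowers it.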

  8. Orientation masking and cross-orientation suppression (XOS): implications for estimates of filter bandwidth.

    PubMed

    Meese, Tim S; Holmes, David J

    2010-10-01

    Most contemporary models of spatial vision include a cross-oriented route to suppression (masking from a broadly tuned inhibitory pool), which is most potent at low spatial and high temporal frequencies (T. S. Meese & D. J. Holmes, 2007). The influence of this pathway can elevate orientation-masking functions without exciting the target mechanism, and because early psychophysical estimates of filter bandwidth did not accommodate this, it is likely that they have been overestimated for this corner of stimulus space. Here we show that a transient 40% contrast mask causes substantial binocular threshold elevation for a transient vertical target, and this declines from a mask orientation of 0° to about 40° (indicating tuning), and then more gently to 90°, where it remains at a factor of ∼4. We also confirm that cross-orientation masking is diminished or abolished at high spatial frequencies and for sustained temporal modulation. We fitted a simple model of pedestal masking and cross-orientation suppression (XOS) to our data and those of G. C. Phillips and H. R. Wilson (1984) and found the dependency of orientation bandwidth on spatial frequency to be much less than previously supposed. An extension of our linear spatial pooling model of contrast gain control and dilution masking (T. S. Meese & R. J. Summers, 2007) is also shown to be consistent with our results using filter bandwidths of ±20°. Both models include tightly and broadly tuned components of divisive suppression. More generally, because XOS and/or dilution masking can affect the shape of orientation-masking curves, we caution that variations in bandwidth estimates might reflect variations in processes that have nothing to do with filter bandwidth.

  9. Lightweight Filter Architecture for Energy Efficient Mobile Vehicle Localization Based on a Distributed Acoustic Sensor Network

    PubMed Central

    Kim, Keonwook

    2013-01-01

    The generic properties of an acoustic signal provide numerous benefits for localization by applying energy-based methods over a deployed wireless sensor network (WSN). However, the signal generated by a stationary target utilizes a significant amount of bandwidth and power in the system without providing further position information. For vehicle localization, this paper proposes a novel proximity velocity vector estimator (PVVE) node architecture in order to capture the energy from a moving vehicle and reject the signal from motionless automobiles around the WSN node. A cascade structure between analog envelope detector and digital exponential smoothing filter presents the velocity vector-sensitive output with low analog circuit and digital computation complexity. The optimal parameters in the exponential smoothing filter are obtained by analytical and mathematical methods for maximum variation over the vehicle speed. For stationary targets, the derived simulation based on the acoustic field parameters demonstrates that the system significantly reduces the communication requirements with low complexity and can be expected to extend the operation time considerably. PMID:23979482
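    The digital stage of the cascade described above is a standard exponential smoothing filter. A minimal sketch follows; the smoothing parameter here is illustrative, whereas the paper derives an optimal value analytically for maximum variation over vehicle speed.

```python
def exp_smooth(samples, alpha):
    """Exponential smoothing: y[n] = alpha * x[n] + (1 - alpha) * y[n-1].

    samples: envelope-detector outputs; alpha in (0, 1] trades
    responsiveness against noise rejection."""
    y = samples[0]
    out = [y]
    for x in samples[1:]:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out
```

    A stationary target produces a constant envelope, so the filtered output is flat and carries no new position information to transmit; a passing vehicle produces a changing envelope that the filter tracks with low computational cost.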

  10. Standards-Based Wireless Sensor Networking Protocols for Spaceflight Applications

    NASA Technical Reports Server (NTRS)

    Barton, Richard J.; Wagner, Raymond S.

    2009-01-01

    Wireless sensor networks (WSNs) have the capacity to revolutionize data gathering in both spaceflight and terrestrial applications. WSNs provide a huge advantage over traditional, wired instrumentation since they do not require wiring trunks to connect sensors to a central hub. This allows for easy sensor installation in hard to reach locations, easy expansion of the number of sensors or sensing modalities, and reduction in both system cost and weight. While this technology offers unprecedented flexibility and adaptability, implementing it in practice is not without its difficulties. Any practical WSN deployment must contend with a number of difficulties in its radio frequency (RF) environment. Multi-path reflections can distort signals, limit data rates, and cause signal fades that prevent nodes from having clear access to channels, especially in a closed environment such as a spacecraft. Other RF signal sources, such as wireless internet, voice, and data systems may contend with the sensor nodes for bandwidth. Finally, RF noise from electrical systems and periodic scattering from moving objects such as crew members will all combine to give an incredibly unpredictable, time-varying communication environment.

  11. Support Vector Machines Model of Computed Tomography for Assessing Lymph Node Metastasis in Esophageal Cancer with Neoadjuvant Chemotherapy.

    PubMed

    Wang, Zhi-Long; Zhou, Zhi-Guo; Chen, Ying; Li, Xiao-Ting; Sun, Ying-Shi

    The aim of this study was to diagnose lymph node metastasis of esophageal cancer with a support vector machines model based on computed tomography. A total of 131 esophageal cancer patients with preoperative chemotherapy and radical surgery were included. Various indicators (tumor thickness, tumor length, tumor CT value, total number of lymph nodes, and long-axis and short-axis sizes of the largest lymph node) on CT images before and after neoadjuvant chemotherapy were recorded. A support vector machines model based on these CT indicators was built to predict lymph node metastasis. The support vector machines model diagnosed lymph node metastasis better than the preoperative short-axis size of the largest lymph node on CT; the areas under the receiver operating characteristic curves were 0.887 and 0.705, respectively. The support vector machine model of CT images can help diagnose lymph node metastasis in esophageal cancer with preoperative chemotherapy.

  12. Two Hop Adaptive Vector Based Quality Forwarding for Void Hole Avoidance in Underwater WSNs

    PubMed Central

    Javaid, Nadeem; Ahmed, Farwa; Wadud, Zahid; Alrajeh, Nabil; Alabed, Mohamad Souheil; Ilahi, Manzoor

    2017-01-01

    Underwater wireless sensor networks (UWSNs) facilitate a wide range of aquatic applications in various domains. However, the harsh underwater environment poses challenges like low bandwidth, long propagation delay, high bit error rate, high deployment cost, irregular topological structure, etc. Node mobility and the uneven distribution of sensor nodes create void holes in UWSNs. Void hole creation has become a critical issue in UWSNs, as it severely affects the network performance. Avoiding void hole creation yields better coverage of an area, lower energy consumption in the network and higher throughput. This paper therefore focuses on minimizing the void hole probability, particularly in locally sparse regions. The two-hop adaptive hop by hop vector-based forwarding (2hop-AHH-VBF) protocol aims to avoid the void hole with the help of two-hop neighbor node information. The other protocol, quality forwarding adaptive hop by hop vector-based forwarding (QF-AHH-VBF), selects an optimal forwarder based on a composite priority function. QF-AHH-VBF improves network goodput because of optimal forwarder selection, and aims to reduce void hole probability by optimally selecting next-hop forwarders. To attain better network performance, a mathematical problem formulation based on linear programming is performed. Simulation results show that by adopting these mechanisms, a significant reduction in end-to-end delay and better throughput are achieved in the network. PMID:28763014

  13. Two Hop Adaptive Vector Based Quality Forwarding for Void Hole Avoidance in Underwater WSNs.

    PubMed

    Javaid, Nadeem; Ahmed, Farwa; Wadud, Zahid; Alrajeh, Nabil; Alabed, Mohamad Souheil; Ilahi, Manzoor

    2017-08-01

    Underwater wireless sensor networks (UWSNs) facilitate a wide range of aquatic applications in various domains. However, the harsh underwater environment poses challenges like low bandwidth, long propagation delay, high bit error rate, high deployment cost, irregular topological structure, etc. Node mobility and the uneven distribution of sensor nodes create void holes in UWSNs. Void hole creation has become a critical issue in UWSNs, as it severely affects the network performance. Avoiding void hole creation yields better coverage of an area, lower energy consumption in the network and higher throughput. This paper therefore focuses on minimizing the void hole probability, particularly in locally sparse regions. The two-hop adaptive hop by hop vector-based forwarding (2hop-AHH-VBF) protocol aims to avoid the void hole with the help of two-hop neighbor node information. The other protocol, quality forwarding adaptive hop by hop vector-based forwarding (QF-AHH-VBF), selects an optimal forwarder based on a composite priority function. QF-AHH-VBF improves network goodput because of optimal forwarder selection, and aims to reduce void hole probability by optimally selecting next-hop forwarders. To attain better network performance, a mathematical problem formulation based on linear programming is performed. Simulation results show that by adopting these mechanisms, a significant reduction in end-to-end delay and better throughput are achieved in the network.

  14. EDOVE: Energy and Depth Variance-Based Opportunistic Void Avoidance Scheme for Underwater Acoustic Sensor Networks.

    PubMed

    Bouk, Safdar Hussain; Ahmed, Syed Hassan; Park, Kyung-Joon; Eun, Yongsoon

    2017-09-26

    Underwater Acoustic Sensor Networks (UASNs) come with intrinsic constraints because they are deployed in the aquatic environment and use acoustic signals to communicate. Examples of those constraints are long propagation delay, very limited bandwidth, high energy cost for transmission, very high signal attenuation, and costly deployment and battery replacement. Therefore, routing schemes for UASNs must take those characteristics into account to achieve energy fairness, avoid energy holes, and improve the network lifetime. The depth-based forwarding schemes in the literature use a node's depth information to forward data towards the sink. They minimize data packet duplication by employing a holding-time strategy. However, to avoid void holes in the network, they use two-hop node proximity information. In this paper, we propose the Energy and Depth variance-based Opportunistic Void avoidance (EDOVE) scheme to achieve energy balancing and void avoidance in the network. EDOVE considers not only the depth parameter, but also the normalized residual energy of the one-hop nodes and the normalized depth variance of the second-hop neighbors. Hence, it avoids the void regions, balances the network energy, and increases the network lifetime. The simulation results show that EDOVE gains a more than 15% higher packet delivery ratio, propagates 50% fewer copies of the data packet, consumes less energy, and has a longer lifetime than state-of-the-art forwarding schemes.
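    A forwarder priority combining the three quantities EDOVE is described as using (depth advancement, normalized residual energy of the one-hop candidate, and normalized depth variance of its second-hop neighbours) can be sketched as below. The weights and normalizations are hypothetical; the real scheme's formula is not given in the abstract.

```python
import statistics

def edove_priority(depth_node, depth_candidate, residual_energy, max_energy,
                   second_hop_depths, w=(1.0, 1.0, 1.0)):
    """Toy forwarder priority; higher is better. w holds illustrative
    weights for advancement, energy, and depth-variance terms."""
    w_d, w_e, w_v = w
    advancement = max(depth_node - depth_candidate, 0.0)
    energy = residual_energy / max_energy          # normalized residual energy
    # Larger depth variance among second-hop neighbours suggests more
    # routing options below the candidate (helps avoid void regions).
    var = statistics.pvariance(second_hop_depths) if len(second_hop_depths) > 1 else 0.0
    var_norm = var / (1.0 + var)
    return w_d * advancement / depth_node + w_e * energy + w_v * var_norm
```

    Because the energy term rewards candidates with more residual charge, repeated selection spreads load across neighbours instead of draining one node, which is the energy-balancing behaviour the record reports.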

  15. Aspect Ratio of Receiver Node Geometry based Indoor WLAN Propagation Model

    NASA Astrophysics Data System (ADS)

    Naik, Udaykumar; Bapat, Vishram N.

    2017-08-01

    This paper presents the validation of an indoor wireless local area network (WLAN) propagation model for varying rectangular receiver-node geometry. The rectangular client-node configuration is a standard node arrangement in the computer laboratories of academic institutes and research organizations. The model assists in placing network nodes for better signal coverage. The proposed model is backed by wide-ranging real-time received-signal-strength measurements at 2.4 GHz. The shadow-fading component of signal propagation under a realistic indoor environment is modelled with a dependency on the varying aspect ratio of the client-node geometry. The new model is useful for predicting indoor path loss for IEEE 802.11b/g WLAN and performs better than the well-known International Telecommunication Union and free-space propagation models. The proposed model is simple, can serve as a useful tool for indoor WLAN node-deployment planning, and offers a quick method for the best utilisation of office space.
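    Indoor models of this family are typically built on the log-distance path loss law with a log-normal shadow-fading term. The sketch below shows that generic form; the parameter values are common 2.4 GHz indoor placeholders, not the fitted values of the aspect-ratio model in this record.

```python
import math
import random

def indoor_path_loss_db(d_m, d0_m=1.0, pl0_db=40.0, n=3.0, sigma_db=4.0,
                        rng=None):
    """Log-distance path loss with log-normal shadowing:
    PL(d) = PL(d0) + 10 * n * log10(d / d0) + X_sigma,
    where X_sigma ~ N(0, sigma_db^2) models shadow fading."""
    rng = rng or random.Random(0)
    shadowing = rng.gauss(0.0, sigma_db)
    return pl0_db + 10.0 * n * math.log10(d_m / d0_m) + shadowing
```

    The record's contribution can be read as making sigma_db (and possibly the exponent n) a function of the aspect ratio of the rectangular client-node layout rather than a single fixed constant.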

  16. Resource utilization model for the algorithm to architecture mapping model

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Patel, Rakesh R.

    1993-01-01

    The analytical model for resource utilization and the variable node time and conditional node model for the enhanced ATAMM model for a real-time data flow architecture are presented in this research. The Algorithm To Architecture Mapping Model, ATAMM, is a Petri net based graph theoretic model developed at Old Dominion University, and is capable of modeling the execution of large-grained algorithms on a real-time data flow architecture. Using the resource utilization model, the resource envelope may be obtained directly from a given graph and, consequently, the maximum number of required resources may be evaluated. The node timing diagram for one iteration period may be obtained using the analytical resource envelope. The variable node time model, which describes the change in resource requirement for the execution of an algorithm under node time variation, is useful to expand the applicability of the ATAMM model to heterogeneous architectures. The model also describes a method of detecting the presence of resource limited mode and its subsequent prevention. Graphs with conditional nodes are shown to be reduced to equivalent graphs with time varying nodes and, subsequently, may be analyzed using the variable node time model to determine resource requirements. Case studies are performed on three graphs for the illustration of applicability of the analytical theories.

  17. Optimal Bandwidth for High Efficiency Thermoelectrics

    NASA Astrophysics Data System (ADS)

    Zhou, Jun; Yang, Ronggui; Chen, Gang; Dresselhaus, Mildred S.

    2011-11-01

    The thermoelectric figure of merit (ZT) in narrow conduction bands of different material dimensionalities is investigated for different carrier scattering models. When the bandwidth is zero, the transport distribution function (TDF) is finite, not infinite as previously speculated by Mahan and Sofo [Proc. Natl. Acad. Sci. U.S.A. 93, 7436 (1996), doi:10.1073/pnas.93.15.7436], even though the carrier density of states goes to infinity. Such a finite TDF results in a zero electrical conductivity and thus a zero ZT. We point out that the optimal ZT cannot be found in an extremely narrow conduction band. The existence of an optimal bandwidth for a maximal ZT depends strongly on the scattering models and the dimensionality of the material. A nonzero optimal bandwidth for maximizing ZT also depends on the lattice thermal conductivity. A larger maximum ZT can be obtained for materials with a smaller lattice thermal conductivity.
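    For reference, the figure of merit discussed above has the standard definition (this is the textbook form, not a result specific to this record):

```latex
ZT = \frac{S^{2}\,\sigma\, T}{\kappa_{e} + \kappa_{L}}
```

    where $S$ is the Seebeck coefficient, $\sigma$ the electrical conductivity, $T$ the absolute temperature, and $\kappa_{e}$, $\kappa_{L}$ the electronic and lattice thermal conductivities. The abstract's argument follows directly from this form: if the zero-bandwidth limit forces $\sigma \to 0$, the numerator vanishes while $\kappa_{L}$ remains finite, so $ZT \to 0$ regardless of how large $S$ becomes.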

  18. Investigating the influence of chromatic aberration and optical illumination bandwidth on fundus imaging in rats

    NASA Astrophysics Data System (ADS)

    Li, Hao; Liu, Wenzhong; Zhang, Hao F.

    2015-10-01

    Rodent models are indispensable in studying various retinal diseases. Noninvasive, high-resolution retinal imaging of rodent models is highly desired for longitudinally investigating the pathogenesis and therapeutic strategies. However, due to severe aberrations, the retinal image quality in rodents can be much worse than that in humans. We numerically and experimentally investigated the influence of chromatic aberration and optical illumination bandwidth on retinal imaging. We confirmed that the rat retinal image quality decreased with increasing illumination bandwidth. We achieved the retinal image resolution of 10 μm using a 19 nm illumination bandwidth centered at 580 nm in a home-built fundus camera. Furthermore, we observed higher chromatic aberration in albino rat eyes than in pigmented rat eyes. This study provides a design guide for high-resolution fundus camera for rodents. Our method is also beneficial to dispersion compensation in multiwavelength retinal imaging applications.

  19. Multi-granularity Bandwidth Allocation for Large-Scale WDM/TDM PON

    NASA Astrophysics Data System (ADS)

    Gao, Ziyue; Gan, Chaoqin; Ni, Cuiping; Shi, Qiongling

    2017-12-01

    WDM (wavelength-division multiplexing)/TDM (time-division multiplexing) PON (passive optical network) is viewed as a promising solution for delivering multiple services and applications, such as high-definition video, video conferencing and data traffic. Considering real-time transmission, QoS (quality of service) requirements and a differentiated-services model, a multi-granularity dynamic bandwidth allocation (DBA) scheme in both the wavelength and time domains for large-scale hybrid WDM/TDM PON is proposed in this paper. The proposed scheme achieves load balance by using bandwidth prediction. Based on the bandwidth prediction, wavelength assignment can be realized fairly and effectively to satisfy the different demands of the various traffic classes. In particular, the allocation of residual bandwidth further augments the DBA and makes full use of the bandwidth resources in the network. To further improve network performance, two schemes named extending the cycle of one free wavelength (ECoFW) and large bandwidth shrinkage (LBS) are proposed, which can prevent transmission interruption when a user employs more than one wavelength. The simulation results show the effectiveness of the proposed scheme.
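    A common building block behind residual-bandwidth allocation in PON DBA schemes is a two-stage grant computation: satisfy each unit up to its guaranteed share, then distribute the leftover capacity among still-unsatisfied requesters in proportion to their remaining demand. The sketch below shows that generic pattern; it is not the paper's exact multi-granularity algorithm.

```python
def allocate_bandwidth(requests, guaranteed, capacity):
    """Two-stage grant sizing.

    requests, guaranteed: per-ONU requested and guaranteed bandwidth.
    capacity: total bandwidth available this cycle.
    Returns per-ONU grants; residual capacity is shared in proportion
    to each ONU's unmet demand."""
    grants = [min(r, g) for r, g in zip(requests, guaranteed)]
    residual = capacity - sum(grants)
    deficits = [r - gr for r, gr in zip(requests, grants)]
    total_deficit = sum(deficits)
    if residual > 0 and total_deficit > 0:
        share = min(residual, total_deficit)
        grants = [gr + share * d / total_deficit
                  for gr, d in zip(grants, deficits)]
    return grants
```

    Lightly loaded units release their unused guaranteed share, and the residual stage hands it to heavily loaded units, which is how such schemes make "full use of bandwidth resources" across the PON.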

  20. Mixed integer nonlinear programming model of wireless pricing scheme with QoS attribute of bandwidth and end-to-end delay

    NASA Astrophysics Data System (ADS)

    Irmeilyana, Puspita, Fitri Maya; Indrawati

    2016-02-01

    Pricing for wireless networks is developed by considering linearity factors, price elasticity, and price factors. A mixed-integer nonlinear programming wireless pricing model is proposed as a nonlinear programming problem that can be solved optimally using LINGO 13.0. The solutions are expected to give information about the connection between the acceptance factor and the price. A previous model focused on bandwidth as the sole QoS attribute. The models attempt to maximize the total price for a connection based on QoS parameters; here, the QoS attributes used are the bandwidth and the end-to-end delay, both of which affect the traffic. The maximum price is achieved when the provider determines the required increment or decrement of the price due to the change in QoS and the amount of the QoS value.

  1. Design and Implementation of Secure Area Expansion Scheme for Public Wireless LAN Services

    NASA Astrophysics Data System (ADS)

    Watanabe, Ryu; Tanaka, Toshiaki

    Recently, wireless LAN (WLAN) technology has become a major wireless communication method. The communication bandwidth is increasing, and speeds have attained rates exceeding 100 Mbps. Therefore, WLAN technology is regarded as one of the promising communication methods for future networks. In addition, public WLAN connection services can be used in many locations. However, the number of access points (APs) is insufficient for seamless communication, so users cannot yet use the service ubiquitously. An ad-hoc-network-style connection can be used to expand the coverage area of a public WLAN service: by relaying user messages among the user nodes, a node can obtain an Internet connection via an AP even though it is located outside the AP's direct wireless connection area. Such a coverage-extending technology has many advantages because no additional infrastructure is required, so there is strong demand for it as a cost-effective way to construct future networks. When a secure ad-hoc routing protocol is used for message exchange in the WLAN service, the message routes are protected from malicious behavior such as route forging and can be maintained appropriately. To do this, however, a new node that wants to join the WLAN service has to obtain information such as a public key certificate and an IP address in order to start secure ad-hoc routing. In other words, an initial setup is required for every network node to join the WLAN service properly. Ordinarily, such information would be assigned by the AP, but new nodes cannot always contact an AP directly. Therefore, there are problems with information delivery during the initial setup of a network node. These problems originate in the multi-hop connections of the ad-hoc routing protocols. 
In order to realize an expanded-area WLAN service, the authors propose in this paper a secure public key certificate and address provision scheme for the initial setup phase of mobile nodes. The proposed scheme also considers the protection of user privacy: accordingly, no user node has to reveal its unique and persistent information to other nodes. Instead of using such information, temporary values are sent by an AP to the mobile nodes and used for secure ad-hoc routing operations. Our proposed scheme therefore prevents tracking by malicious parties by avoiding the use of unique information. Moreover, a test bed was implemented based on the proposal and evaluated in order to confirm its performance. In addition, the authors describe a countermeasure against denial-of-service (DoS) attacks based on the privacy-protection approach of our proposal.

  2. Achieving increased bandwidth for 4 degree of freedom self-tuning energy harvester

    NASA Astrophysics Data System (ADS)

    Staaf, L. G. H.; Smith, A. D.; Köhler, E.; Lundgren, P.; Folkow, P. D.; Enoksson, P.

    2018-04-01

    The frequency response of a self-tuning energy harvester composed of two piezoelectric cantilevers connected by a middle beam with a sliding mass is investigated. Measurements show that incorporation of a free-sliding mass increases the bandwidth. Using an analytical model, the system is explained through close investigation of the resonance modes. Resonance mode behavior further suggests that, by breaking the symmetry of the system, even broader bandwidths are achievable.

  3. Proposal for optimal placement platform of bikes using queueing networks.

    PubMed

    Mizuno, Shinya; Iwamoto, Shogo; Seki, Mutsumi; Yamaki, Naokazu

    2016-01-01

    In recent social experiments, rental motorbikes and rental bicycles have been arranged at nodes, and environments where users can ride these bikes have been improved. When people borrow bikes, they return them to nearby nodes. Some experiments have been conducted using the models of Hamachari of Yokohama, the Niigata Rental Cycle, and Bicing. However, from these experiments, the effectiveness of distributing bikes was unclear, and many models were discontinued midway. Thus, we need to consider whether these models are effectively designed to represent the distribution system. Therefore, we construct a model to arrange the nodes for distributing bikes using a queueing network. To adopt realistic values for our model, we use the Google Maps application program interface, which lets us easily obtain distances and transit times between nodes in various places in the world. Moreover, we apply the population distribution to a gravity model and compute the effective transition probabilities for the queueing network. If the arrangement of the nodes and the number of bikes at each node are known, we can precisely design the system. We illustrate our system using convenience stores as nodes and optimize the node configuration. As a result, we can simultaneously optimize the number of nodes, their locations, and the number of bikes at each node, providing a basis for a rental cycle business to use our system.
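
The gravity-model transition probabilities that drive such a queueing network can be sketched as follows; the populations, distances, decay exponent, and the helper name `gravity_transition_matrix` are illustrative assumptions, not values from the paper.

```python
def gravity_transition_matrix(populations, distances, beta=2.0):
    """Row-stochastic transition matrix for a gravity model: the flow
    i -> j is taken proportional to population_j / distance_ij**beta."""
    n = len(populations)
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        weights = [0.0] * n
        for j in range(n):
            if i != j:
                weights[j] = populations[j] / distances[i][j] ** beta
        total = sum(weights)
        for j in range(n):
            P[i][j] = weights[j] / total
    return P

# Three hypothetical nodes with populations and pairwise distances (km).
pops = [1000, 500, 2000]
dist = [[0, 2.0, 5.0],
        [2.0, 0, 3.0],
        [5.0, 3.0, 0]]
P = gravity_transition_matrix(pops, dist)
```

Each row of the resulting matrix sums to 1, so it can serve directly as the routing matrix of a queueing network like the one described above.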

  4. 3D Model of Cytokinetic Contractile Ring Assembly: Node-Mediated and Backup Pathways

    NASA Astrophysics Data System (ADS)

    Bidone, Tamara; Vavylonis, Dimitrios

    Cytokinetic ring assembly in the model organism fission yeast is a dynamic process, involving condensation of a network of actin filaments and myosin motors bound to the cell membrane through cortical nodes. A 3D computational model of ring assembly illustrates how the combined activities of myosin motors, filament crosslinkers and actin turnover lead to robust ring formation [Bidone et al., Biophys. J., 2014]. We modeled the importance of the physical properties of node movement along the cell membrane and of myosin recruitment to nodes. Experiments by D. Zhang (Temasek Life Sciences) show that tethering of the cortical endoplasmic reticulum (ER) to the plasma membrane modulates the speed of node condensation and the degree of node clumping. We captured the trend observed in these experiments by changing the node drag coefficient and initial node distribution in simulations. The model predicted that reducing crosslinking activities in ER tethering mutants with faster node speed enhances actomyosin clumping. We developed a model of how tilted and/or misplaced rings assemble in cells that lack the node structural component anillin-like Mid1 and thus fail to recruit myosin II to nodes independently of actin. If actin-dependent binding of diffusive myosin to the cortex is incorporated into the model, it generates progressively elongating cortical actomyosin strands with fluctuating actin bundles at the tails. These strands often close into a ring, similar to observations by the group of J.Q. Wu (The Ohio State University). NIH R01GM098430.

  5. Bandwidth auction for SVC streaming in dynamic multi-overlay

    NASA Astrophysics Data System (ADS)

    Xiong, Yanting; Zou, Junni; Xiong, Hongkai

    2010-07-01

    In this paper, we study the optimal bandwidth allocation for scalable video coding (SVC) streaming in multiple overlays. We model the whole bandwidth request and distribution process as a set of decentralized auction games between the competing peers. For the upstream peer, a bandwidth allocation mechanism is introduced to maximize the aggregate revenue. For the downstream peer, a dynamic bidding strategy is proposed. It achieves maximum utility and efficient resource usage by collaborating with a content-aware layer dropping/adding strategy. Also, the convergence of the proposed auction games is theoretically proved. Experimental results show that the auction strategies can adapt to dynamic join of competing peers and video layers.
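
A heavily simplified sketch of the upstream peer's revenue-maximizing allocation might look like the following greedy assignment to the highest unit-price bids; the bid format and the name `allocate_bandwidth` are assumptions for illustration, and the paper's actual auction mechanism and bidding dynamics are more elaborate.

```python
def allocate_bandwidth(capacity, bids):
    """Upstream peer sells bandwidth units to the highest unit-price
    bids first (greedy revenue maximization).
    bids: {peer: (units_requested, unit_price)}"""
    alloc, revenue = {}, 0.0
    # Serve bids in descending order of offered unit price.
    for peer, (units, price) in sorted(bids.items(), key=lambda kv: -kv[1][1]):
        granted = min(units, capacity)
        if granted == 0:
            break                     # capacity exhausted
        alloc[peer] = granted
        revenue += granted * price
        capacity -= granted
    return alloc, revenue

# Hypothetical upstream capacity of 10 units and two competing downstream peers.
alloc, revenue = allocate_bandwidth(10, {"p1": (6, 3.0), "p2": (8, 2.0)})
```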

  6. Accelerating lattice QCD simulations with 2 flavors of staggered fermions on multiple GPUs using OpenACC-A first attempt

    NASA Astrophysics Data System (ADS)

    Gupta, Sourendu; Majumdar, Pushan

    2018-07-01

    We present the results of an effort to accelerate a Rational Hybrid Monte Carlo (RHMC) program for lattice quantum chromodynamics (QCD) simulation with 2 flavors of staggered fermions on multiple Kepler K20X GPUs distributed over different nodes of a Cray XC30. We do not use CUDA, but instead adopt a higher-level, directive-based programming approach using the OpenACC platform. The lattice QCD algorithm is known to be bandwidth bound; our timing results illustrate this clearly, and we discuss how this limits the parallelization gains. We achieve more than a factor of three speed-up compared to the CPU-only MPI program.

  7. Traffic engineering and regenerator placement in GMPLS networks with restoration

    NASA Astrophysics Data System (ADS)

    Yetginer, Emre; Karasan, Ezhan

    2002-07-01

    In this paper we study regenerator placement and traffic engineering of restorable paths in Generalized Multiprotocol Label Switching (GMPLS) networks. Regenerators are necessary in optical networks due to transmission impairments. We study a network architecture where there are regenerators at selected nodes, and we propose two heuristic algorithms for the regenerator placement problem. The performance of these algorithms in terms of required number of regenerators and computational complexity is evaluated. In this network architecture with sparse regeneration, offline computation of working and restoration paths is studied, with bandwidth reservation and path rerouting as the restoration scheme. We study two approaches for selecting working and restoration paths from a set of candidate paths and formulate each method as an Integer Linear Programming (ILP) problem. A traffic uncertainty model is developed in order to compare these methods based on their robustness with respect to changing traffic patterns. The traffic engineering methods are compared based on the number of additional demands, arising from traffic uncertainty, that can be carried. The regenerator placement algorithms are also evaluated from a traffic engineering point of view.

  8. Unified study of Quality of Service (QoS) in OPS/OBS networks

    NASA Astrophysics Data System (ADS)

    Hailu, Dawit Hadush; Lema, Gebrehiwet Gebrekrstos; Yekun, Ephrem Admasu; Kebede, Samrawit Haylu

    2017-07-01

    With the growth of Internet traffic, the use of optical networks, which provide large bandwidth, fast data transmission rates, and Quality of Service (QoS) support, becomes inevitable. Currently, Optical Burst Switched (OBS)/Optical Packet Switched (OPS) networks are under study as future solutions for addressing the increasing demand of Internet traffic. However, their industrial adoption has been delayed by the high blocking probability at intermediate nodes. Packet loss in OBS/OPS networks occurs mainly due to contention. Hence, the contribution of this study is to analyze, by simulation, the file loss ratio (FLR), packet overhead, number of disjoint paths, and processing delay of the Coded Packet Transport (CPT) scheme for OBS/OPS networks. The simulations show that the CPT scheme reduces the FLR in OBS/OPS networks for the evaluated scenarios, since the data packets are chopped into blocks for transmission over the network. Simulation results for secrecy and survivability are verified with the help of the analytical model to define the operational range of the CPT scheme.

  9. Characteristic analysis of diaphragm-type transducer that is thick relative to its size

    NASA Astrophysics Data System (ADS)

    Ishiguro, Yuya; Zhu, Jing; Tagawa, Norio; Okubo, Tsuyoshi; Okubo, Kan

    2017-07-01

    In recent years, high-performance piezoelectric micromachined ultrasonic transducers (PMUTs) have been fabricated by micro-electro-mechanical systems (MEMS) technology. For high-resolution imaging, it is important to broaden the frequency bandwidth. When the diaphragm size is reduced to increase the resonance frequency, the film thickness becomes relatively larger, and hence the transmitting and receiving characteristics may differ from those of a usual thin diaphragm. In this study, we examine the performance of a square-diaphragm-type lead zirconate titanate (PZT) transducer through simulations. To realize the desired resonance frequency of 20 MHz, first the diaphragm size and the thicknesses of the PZT and Si layers constituting a PMUT are examined, and then three PZT/Si models with different thicknesses are selected. Subsequently, using the models, we analyze the transmitting efficiency, transmitting bandwidth, receiving sensitivity (piezoelectric voltage/electric charge), and receiving bandwidth using an FEM simulator. It is found that the proposed models can transmit ultrasound independently of the diaphragm vibration and have a wide receiving bandwidth compared with that of a typical PMUT.

  10. Protocol for Communication Networking for Formation Flying

    NASA Technical Reports Server (NTRS)

    Jennings, Esther; Okino, Clayton; Gao, Jay; Clare, Loren

    2009-01-01

    An application-layer protocol and a network architecture have been proposed for data communications among multiple autonomous spacecraft that are required to fly in a precise formation in order to perform scientific observations. The protocol could also be applied to other autonomous vehicles operating in formation, including robotic aircraft, robotic land vehicles, and robotic underwater vehicles. A group of spacecraft or other vehicles to which the protocol applies could be characterized as a precision-formation-flying (PFF) network, and each vehicle could be characterized as a node in the PFF network. In order to support precise formation flying, it would be necessary to establish a corresponding communication network, through which the vehicles could exchange position and orientation data and formation-control commands. The communication network must enable communication during early phases of a mission, when little positional knowledge is available. Particularly during early mission phases, the distances among vehicles may be so large that communication could be achieved only by relaying across multiple links. The large distances and need for omnidirectional coverage would limit communication links to operation at low bandwidth during these mission phases. Once the vehicles were in formation and distances were shorter, the communication network would be required to provide high-bandwidth, low-jitter service to support tight formation-control loops. The proposed protocol and architecture, intended to satisfy the aforementioned and other requirements, are based on a standard layered-reference-model concept. The proposed application protocol would be used in conjunction with conventional network, data-link, and physical-layer protocols. The proposed protocol includes the ubiquitous Institute of Electrical and Electronics Engineers (IEEE) 802.11 medium access control (MAC) protocol to be used in the data-link layer.
In addition to its widespread and proven use in diverse local-area networks, this protocol offers both (1) a random- access mode needed for the early PFF deployment phase and (2) a time-bounded-services mode needed during PFF-maintenance operations. Switching between these two modes could be controlled by upper-layer entities using standard link-management mechanisms. Because the early deployment phase of a PFF mission can be expected to involve multihop relaying to achieve network connectivity (see figure), the proposed protocol includes the open shortest path first (OSPF) network protocol that is commonly used in the Internet. Each spacecraft in a PFF network would be in one of seven distinct states as the mission evolved from initial deployment, through coarse formation, and into precise formation. Reconfiguration of the formation to perform different scientific observations would also cause state changes among the network nodes. The application protocol provides for recognition and tracking of the seven states for each node and for protocol changes under specified conditions to adapt the network and satisfy communication requirements associated with the current PFF mission phase. Except during early deployment, when peer-to-peer random access discovery methods would be used, the application protocol provides for operation in a centralized manner.

  11. Central FPGA-based destination and load control in the LHCb MHz event readout

    NASA Astrophysics Data System (ADS)

    Jacobsson, R.

    2012-10-01

    The readout strategy of the LHCb experiment is based on complete event readout at 1 MHz. A set of 320 sub-detector readout boards transmit event fragments at a total rate of 24.6 MHz, with a bandwidth usage of up to 70 GB/s, over a commercial switching network based on Gigabit Ethernet to a distributed event building and high-level trigger processing farm with 1470 individual multi-core computer nodes. In the original specifications, the readout was based on a pure push protocol. This paper describes the proposal, implementation, and experience of a non-conventional mixture of a push and a pull protocol, akin to credit-based flow control. An FPGA-based central master module, operating partly at the LHC bunch clock frequency of 40.08 MHz and partly at double that clock speed, is in charge of the entire trigger and readout control from the front-end electronics up to the high-level trigger farm. One FPGA is dedicated to controlling the event fragment packing in the readout boards, the assignment of the farm node destination for each event, and the farm load, based on an asynchronous pull mechanism from each farm node. This dynamic readout scheme relies on generic event requests and the concept of node credit, allowing load control and trigger rate regulation as a function of the global farm load. It also allows the vital task of fast central monitoring and automatic in-flight recovery of failing nodes while keeping dead-time and event loss at a minimum. This paper demonstrates the strength and suitability of implementing this real-time task for a very large distributed system in an FPGA, where no random delays are introduced and where extreme reliability and accurate event accounting are fundamental requirements. It was in use during the entire commissioning phase of LHCb and has been in faultless operation during the first two years of physics luminosity data taking.
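
The credit-based pull mechanism can be illustrated with a minimal software sketch (the actual implementation is, of course, an FPGA); the class and method names are hypothetical, and real event requests carry more state than a bare counter.

```python
from collections import deque

class CreditDispatcher:
    """Master assigns each event to a farm node that has advertised
    credit (an outstanding event request); no credit, no dispatch."""
    def __init__(self, nodes):
        self.credits = {n: 0 for n in nodes}
        self.ready = deque()               # nodes holding at least one credit
        self.assigned = {n: [] for n in nodes}

    def request(self, node, n_events=1):
        """A farm node pulls work by granting the master credit."""
        if self.credits[node] == 0:
            self.ready.append(node)
        self.credits[node] += n_events

    def dispatch(self, event):
        """Send the event to the next credited node, round-robin."""
        if not self.ready:
            return None                    # back-pressure: hold the event
        node = self.ready.popleft()
        self.credits[node] -= 1
        if self.credits[node] > 0:
            self.ready.append(node)
        self.assigned[node].append(event)
        return node
```

Because a node only receives events it has asked for, a slow or failing node simply stops requesting and the master's load regulation follows automatically.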

  12. TreeMAC: Localized TDMA MAC protocol for real-time high-data-rate sensor networks

    USGS Publications Warehouse

    Song, W.-Z.; Huang, R.; Shirazi, B.; LaHusen, R.

    2009-01-01

    Earlier sensor network MAC protocols focus on energy conservation in low-duty-cycle applications, while some recent applications involve real-time high-data-rate signals. This motivates us to design an innovative localized TDMA MAC protocol to achieve high throughput and low congestion in data collection sensor networks, besides energy conservation. TreeMAC divides a time cycle into frames and each frame into slots. A parent node determines its children's frame assignment based on their relative bandwidth demand, and each node calculates its own slot assignment based on its hop count to the sink. This innovative two-dimensional frame-slot assignment algorithm has the following nice theoretical properties. First, given any node, at any time slot, there is at most one active sender in its neighborhood (including itself). Second, packet scheduling with TreeMAC is bufferless, which therefore minimizes the probability of network congestion. Third, the data throughput to the gateway is at least 1/3 of the optimum, assuming reliable links. Our experiments on a 24-node testbed show that the TreeMAC protocol significantly improves network throughput, fairness, and energy efficiency compared to TinyOS's default CSMA MAC protocol and a recent TDMA MAC protocol, Funneling-MAC. Partial results of this paper were published in Song, Huang, Shirazi and LaHusen [W.-Z. Song, R. Huang, B. Shirazi, and R. LaHusen, TreeMAC: Localized TDMA MAC protocol for high-throughput and fairness in sensor networks, in: The 7th Annual IEEE International Conference on Pervasive Computing and Communications, PerCom, March 2009]. Our new contributions include analyses of the performance of TreeMAC from various aspects, along with more implementation detail and further evaluation. © 2009 Elsevier B.V.
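
A minimal sketch of the two-dimensional frame-slot idea, assuming a proportional (largest-remainder) split of a parent's frames among its children and a simple hop-count-to-slot mapping; the demand values and helper names are illustrative, not taken from the TreeMAC paper.

```python
def assign_frames(frames_per_cycle, child_demands):
    """Parent splits its frames among children in proportion to their
    relative bandwidth demand, using largest-remainder rounding."""
    total = sum(child_demands.values())
    shares = {c: frames_per_cycle * d / total for c, d in child_demands.items()}
    alloc = {c: int(s) for c, s in shares.items()}
    leftover = frames_per_cycle - sum(alloc.values())
    # Hand remaining frames to the children with the largest fractional parts.
    for c in sorted(shares, key=lambda c: shares[c] - alloc[c], reverse=True)[:leftover]:
        alloc[c] += 1
    return alloc

def slot_in_frame(hop_count, slots_per_frame=3):
    """Each node derives its slot from its hop distance to the sink."""
    return hop_count % slots_per_frame

# Hypothetical parent with 10 frames per cycle and three children.
alloc = assign_frames(10, {"A": 3, "B": 1, "C": 1})
```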

  13. Tracking trade transactions in water resource systems: A node-arc optimization formulation

    NASA Astrophysics Data System (ADS)

    Erfani, Tohid; Huskova, Ivana; Harou, Julien J.

    2013-05-01

    We formulate and apply a multicommodity network flow node-arc optimization model capable of tracking trade transactions in complex water resource systems. The model uses a simple node to node network connectivity matrix and does not require preprocessing of all possible flow paths in the network. We compare the proposed node-arc formulation with an existing arc-path (flow path) formulation and explain the advantages and difficulties of both approaches. We verify the proposed formulation model on a hypothetical water distribution network. Results indicate the arc-path model solves the problem with fewer constraints, but the proposed formulation allows using a simple network connectivity matrix which simplifies modeling large or complex networks. The proposed algorithm allows converting existing node-arc hydroeconomic models that broadly represent water trading to ones that also track individual supplier-receiver relationships (trade transactions).

  14. VisIO: enabling interactive visualization of ultra-scale, time-series data via high-bandwidth distributed I/O systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Christopher J; Ahrens, James P; Wang, Jun

    2010-10-15

    Petascale simulations compute at resolutions ranging into billions of cells and write terabytes of data for visualization and analysis. Interactive visualization of this time series is a desired step before starting a new run. The I/O subsystem and associated network often are a significant impediment to interactive visualization of time-varying data, as they are not configured or provisioned to provide the necessary I/O read rates. In this paper, we propose a new I/O library for visualization applications: VisIO. Visualization applications commonly use N-to-N reads within their parallel-enabled readers, which provides an incentive for a shared-nothing approach to I/O, similar to other data-intensive approaches such as Hadoop. However, unlike other data-intensive applications, visualization requires: (1) interactive performance for large data volumes, (2) compatibility with MPI and POSIX file system semantics for compatibility with existing infrastructure, and (3) use of existing file formats and their stipulated data partitioning rules. VisIO provides a mechanism for using a non-POSIX distributed file system to provide linear scaling of I/O bandwidth. In addition, we introduce a novel scheduling algorithm that helps to co-locate visualization processes on nodes with the requested data. Testing using VisIO integrated into ParaView was conducted using the Hadoop Distributed File System (HDFS) on TACC's Longhorn cluster. A representative dataset, VPIC, across 128 nodes showed a 64.4% read performance improvement compared to the provided Lustre installation. Also tested was a dataset representing a global ocean salinity simulation, which showed a 51.4% improvement in read performance over Lustre when using our VisIO system. VisIO provides powerful high-performance I/O services to visualization applications, allowing for interactive performance with ultra-scale, time-series data.

  15. Analysis of a dynamic model of guard cell signaling reveals the stability of signal propagation

    NASA Astrophysics Data System (ADS)

    Gan, Xiao; Albert, Réka

    Analyzing the long-term behaviors (attractors) of dynamic models of biological systems can provide valuable insight into biological phenotypes and their stability. We identified the long-term behaviors of a multi-level, 70-node discrete dynamic model of the stomatal opening process in plants. We reduce the model's huge state space by reducing unregulated nodes and simple mediator nodes, and by simplifying the regulatory functions of selected nodes while keeping the model consistent with experimental observations. We perform attractor analysis on the resulting 32-node reduced model by two methods: 1. converting it into a Boolean model, then applying two attractor-finding algorithms; 2. theoretical analysis of the regulatory functions. We conclude that all nodes except two in the reduced model have a single attractor; and only two nodes can admit oscillations. The multistability or oscillations do not affect the stomatal opening level in any situation. This conclusion applies to the original model as well in all the biologically meaningful cases. We further demonstrate the robustness of signal propagation by showing that a large percentage of single-node knockouts does not affect the stomatal opening level. Thus, we conclude that the complex structure of this signal transduction network provides multiple information propagation pathways while not allowing extensive multistability or oscillations, resulting in robust signal propagation. Our innovative combination of methods offers a promising way to analyze multi-level models.
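
The brute-force side of attractor analysis for a small synchronous Boolean model can be sketched as follows; the 3-node update rule is a toy stand-in for the 32-node reduced model, and `find_attractors` is a hypothetical helper, not one of the algorithms the paper applies.

```python
from itertools import product

def find_attractors(update, n):
    """Exhaustively find the attractors of a synchronous Boolean network
    with n nodes by following every state until its trajectory cycles."""
    attractors = set()
    for state in product((0, 1), repeat=n):
        seen = []
        while state not in seen:
            seen.append(state)
            state = update(state)
        cycle = seen[seen.index(state):]
        # Rotate to a canonical form so each cycle is recorded only once.
        k = cycle.index(min(cycle))
        attractors.add(tuple(cycle[k:] + cycle[:k]))
    return attractors

# Toy 3-node network: a and b swap each step, c requires both.
def update(s):
    a, b, c = s
    return (b, a, a and b)

atts = find_attractors(update, 3)
# Yields two fixed points and one 2-cycle for this toy rule.
```

Exhaustive enumeration is only feasible for small n (2^n states), which is exactly why the paper reduces the 70-node model before analyzing it.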

  16. Analysis of blocking probability for OFDM-based variable bandwidth optical network

    NASA Astrophysics Data System (ADS)

    Gong, Lei; Zhang, Jie; Zhao, Yongli; Lin, Xuefeng; Wu, Yuyao; Gu, Wanyi

    2011-12-01

    Orthogonal Frequency Division Multiplexing (OFDM) has recently been proposed as a modulation technique for optical networks. Because of its good spectral efficiency, flexibility, and tolerance to impairments, optical OFDM is much more flexible than traditional WDM systems, enabling elastic bandwidth transmission, and optical networking is the future trend of development. In OFDM-based optical networks, the study of blocking probability is very important for network assessment. Current research for WDM networks is basically based on fixed bandwidth; to accommodate future business and the fast-changing development of optical networks, our study is based on variable-bandwidth OFDM-based optical networks. Applying mathematical analysis and theoretical derivation based on existing theory and algorithms, we study the blocking probability of variable-bandwidth optical networks and then build a model for the blocking probability.

  17. Amplified spontaneous emission in N2 lasers: Saturation and bandwidth study

    NASA Astrophysics Data System (ADS)

    Hariri, A.; Sarikhani, S.

    2014-05-01

    A complete ASE analysis of a 3-level laser system based on the model of the geometrically dependent gain coefficient (GDGC) is presented. For the study, the photon density/intensity rate equation in the saturated and unsaturated conditions was utilized, along with reported experimental measurements of the ASE output energy and spectral bandwidth of N2 lasers. It was found that the GDGC model is able to explain the ASE output energy behavior and gain profiles correctly. In addition, the model was used to predict the spontaneous emission bandwidth Δν0 and consequently the stimulated emission cross-section for the C→B transition of the nitrogen molecule at 337.1 nm. In this work, for example, Δν0 was found to be 766 GHz (2.9 Å), which is consistent with the earliest experimental observation of ASE bandwidth reduction in an N2 laser, reported to be ~3 Å. This is the first theoretical result that explains the spontaneous emission bandwidth, which differs from the commonly used value of ~1 Å obtained from measurements of N2-laser output spectra. The method was also applied to a filament N2 laser for the C→B transition produced in atmosphere, and good consistency between the laboratory and filament lasers was obtained. Details of the calculations for this study are presented. The results obtained from 3-level systems further confirm the potential of applying the GDGC model to ASE studies in different laser systems and unify lasers of the same active medium.

  18. A Computer Model of a Phase Lock Loop

    NASA Technical Reports Server (NTRS)

    Shelton, Ralph Paul

    1973-01-01

    A computer model of a PLL (phase-lock loop), preceded by a bandpass filter, is reported; the model is valid when the bandwidth of the bandpass filter is of the same order of magnitude as the natural frequency of the PLL. New results for the PLL natural frequency equal to the bandpass filter bandwidth are presented for a second-order PLL operating with carrier plus noise as the input; it is shown that extensions to higher-order loops, and to the case of a modulated carrier, are straightforward. The new results give the cycle-skipping rate of the PLL as a function of the input carrier-to-noise ratio when the PLL natural frequency is equal to the bandpass filter bandwidth. Preliminary results showing the variation of the output noise power and cycle-skipping rates of the PLL as a function of the loop damping ratio, for the PLL natural frequency equal to the bandpass filter bandwidth, are also included.

  19. Design of 2.5 GHz broad bandwidth microwave bandpass filter at operating frequency of 10 GHz using HFSS

    NASA Astrophysics Data System (ADS)

    Jasim, S. E.; Jusoh, M. A.; Mahmud, S. N. S.; Zamani, A. H.

    2018-04-01

    Development of low-loss, small-size, broad-bandwidth microwave bandpass filters operating at higher frequencies is an active area of research. This paper presents a new route to design and simulate a microwave bandpass filter using finite element modelling, realizing a broad-bandwidth, low-loss, small-dimension microwave bandpass filter operating at 10 GHz using the return loss method. The filter circuit was designed using Computer-Aided Design (CAD) with Ansoft HFSS software, with a four-parallel-coupled-line model and small dimensions (10 × 10 mm2) on a LaAlO3 substrate. The response of the microwave filter circuit showed a high return loss of -50 dB at an operating frequency of 10.4 GHz and a broad bandwidth of 2.5 GHz, from 9.5 to 12 GHz. The results indicate that the filter design and simulation using HFSS are reliable and have the potential to transfer from laboratory experiments to industry.

  20. Energy Efficient, Cross-Layer Enabled, Dynamic Aggregation Networks for Next Generation Internet

    NASA Astrophysics Data System (ADS)

    Wang, Michael S.

    Today, Internet traffic is growing at a near-exponential rate, driven predominantly by data center-based applications and Internet-of-Things services. This fast-paced growth in Internet traffic calls into question the ability of the existing optical network infrastructure to support continued growth. Overall optical networking equipment efficiency has not been able to keep up with the traffic growth, creating an energy gap that makes energy and cost expenditures scale linearly with traffic growth. The implication of this energy gap is that it is infeasible to continue using existing networking equipment to meet the growing bandwidth demand; a redesign of the optical networking platform is needed. The focus of this dissertation is the design and implementation of energy-efficient, cross-layer enabled, dynamic optical networking platforms, a promising approach to addressing the exponentially growing Internet bandwidth demand. Chapter 1 explains the motivation for this work by detailing the huge Internet traffic growth and the unsustainable energy growth of today's networking equipment. Chapter 2 describes the challenges and objectives of enabling agile, dynamic optical networking platforms and the vision of the Center for Integrated Access Networks (CIAN) to realize these objectives; the research objectives of this dissertation and the large body of related work in this field are also summarized. Chapter 3 details the design and implementation of dynamic networking platforms that support wavelength switching granularity. The main contribution of this work involves the experimental validation of deep cross-layer communication across the optical performance monitoring (OPM), data, and control planes. The first experiment shows QoS-aware video streaming over a metro-scale test-bed through optical power monitoring of the transmission wavelength and cross-layer feedback control of the power level.
The second experiment extends the performance monitoring capabilities to include real-time monitoring of OSNR and polarization mode dispersion (PMD) to enable dynamic wavelength switching and selective restoration. Chapter 4 explains the author's contributions in designing dynamic networking at the sub-wavelength switching granularity, which can provide greater network efficiency due to its finer granularity. To support dynamic switching, regeneration, adding/dropping, and control decisions on each individual packet, the cross-layer enabled node architecture is enhanced with an FPGA controller that brings much more precise timing and control to the switching, OPM, and control planes. Furthermore, QoS-aware packet protection and dynamic switching, dropping, and regeneration functionalities were experimentally demonstrated in a multi-node network. Chapter 5 describes a technique to perform optical grooming, a process of optically combining multiple incoming data streams into a single data stream, which can simultaneously achieve greater bandwidth utilization and increased spectral efficiency. In addition, an experimental demonstration highlighting a fully functioning multi-node, agile optical networking platform is detailed. Finally, a summary and discussion of future work is provided in Chapter 6. The future of the Internet is very exciting, filled with not-yet-invented applications and services driven by cloud computing and Internet-of-Things. The author is cautiously optimistic that agile, dynamically reconfigurable optical networking is the solution to realizing this future.

  1. Flexible embedding of networks

    NASA Astrophysics Data System (ADS)

    Fernandez-Gracia, Juan; Buckee, Caroline; Onnela, Jukka-Pekka

    We introduce a model for embedding one network into another, focusing on the case where network A is much bigger than network B. Nodes from network A are assigned to the nodes in network B using an algorithm where we control the extent of localization of node placement in network B using a single parameter. Starting from an unassigned node in network A, called the source node, we first map this node to a randomly chosen node in network B, called the target node. We then assign the neighbors of the source node to the neighborhood of the target node using a random-walk-based approach. To assign each neighbor of the source node to one of the nodes in network B, we perform a random walk starting from the target node with stopping probability α. We repeat this process until all nodes in network A have been mapped to the nodes of network B. The simplicity of the model allows us to calculate key quantities of interest in closed form. By varying the parameter α, we are able to produce embeddings from very local (α = 1) to very global (α → 0). We show how our calculations fit the simulated results, and we apply the model to study how social networks are embedded in geography and how the neurons of C. elegans are embedded in the surrounding volume.
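
The α-stopped random-walk assignment can be sketched directly; the adjacency list, seed, and function names below are toy assumptions. With α = 1 every neighbor of the source lands exactly on the target node (fully local), while small α lets the walk wander far across network B (global).

```python
import random

def walk_assign(adj_B, target, alpha, rng):
    """Random walk on network B from the target node: at each step stop
    with probability alpha, otherwise move to a uniform random neighbor."""
    node = target
    while rng.random() > alpha:
        node = rng.choice(adj_B[node])
    return node

def embed(neighbors_A, adj_B, target, alpha, seed=0):
    """Map each neighbor of the source node in A to the node of B reached
    by an independent alpha-stopped walk from the target node."""
    rng = random.Random(seed)
    return {v: walk_assign(adj_B, target, alpha, rng) for v in neighbors_A}

# Toy network B: a 4-cycle.
adj_B = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
placement = embed(["u", "v", "w"], adj_B, target=0, alpha=0.5)
```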

  2. A k-space method for acoustic propagation using coupled first-order equations in three dimensions.

    PubMed

    Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C

    2009-09-01

    A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
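    The spectral evaluation of spatial derivatives that the method relies on reduces, in one dimension, to multiplying by ik in Fourier space. A minimal NumPy illustration follows; the actual method is three-dimensional, staggered, and adds temporal correction and a perfectly matched layer, none of which is shown:

```python
import numpy as np

def spectral_derivative(f, dx):
    """Differentiate a periodic 1-D field spectrally, as k-space methods
    do: transform, multiply by i*k, transform back."""
    n = f.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)   # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))
```

    For smooth periodic fields this is accurate to machine precision, which is why spectral derivatives allow the coarse grids (a few points per wavelength) that make large-scale k-space calculations feasible.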

  3. Using Trust to Establish a Secure Routing Model in Cognitive Radio Network.

    PubMed

    Zhang, Guanghua; Chen, Zhenguo; Tian, Liqin; Zhang, Dongwen

    2015-01-01

    Specific to the selective forwarding attack on routing in cognitive radio networks, this paper proposes a trust-based secure routing model. By monitoring nodes' forwarding behaviors, trust values are constructed to identify malicious nodes. Because routing selection in this model must cooperate closely with spectrum allocation, a route request piggybacking the available spectrum opportunities is sent to non-malicious nodes. In the routing decision phase, node trusts are used to construct available path trusts, which are combined with delay measurements to make routing decisions. At the same time, according to the trust classification, different responses are made to nodes' service requests. By punishing malicious behaviors from non-trusted nodes more strictly, the cooperation of nodes in routing can be stimulated. Simulation results and analysis indicate that this model performs well in network throughput and end-to-end delay under the selective forwarding attack.
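    A hedged sketch of the two ingredients the abstract names: trust construction from observed forwarding behavior and trust/delay-combined route scoring. The Beta-style estimator, the weight `w`, and the scoring formula are assumptions for illustration, not the paper's equations:

```python
def node_trust(forwarded, received):
    """Direct trust from forwarding behavior: fraction of packets a node
    actually relayed, with Laplace smoothing (a hypothetical estimator)."""
    return (forwarded + 1) / (received + 2)

def best_path(candidates, trusts, w=0.7):
    """Pick a route by combining path trust with delay.

    candidates: list of (path, delay) pairs, path being a node list.
    Path trust is the product of node trusts; the linear combination
    with delay is an assumed scoring rule."""
    def score(item):
        path, delay = item
        t = 1.0
        for n in path:
            t *= trusts[n]
        return w * t - (1 - w) * delay
    return max(candidates, key=score)[0]
```

    With trusts {'a': 0.9, 'b': 0.9, 'm': 0.2}, the two-hop route through the well-behaved nodes beats a shorter route through the suspected selective forwarder, which is the qualitative behavior the model aims for.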

  4. Three-Dimensional Computer Model of the Right Atrium Including the Sinoatrial and Atrioventricular Nodes Predicts Classical Nodal Behaviours

    PubMed Central

    Li, Jue; Inada, Shin; Schneider, Jurgen E.; Zhang, Henggui; Dobrzynski, Halina; Boyett, Mark R.

    2014-01-01

    The aim of the study was to develop a three-dimensional (3D) anatomically-detailed model of the rabbit right atrium containing the sinoatrial and atrioventricular nodes to study the electrophysiology of the nodes. A model was generated based on 3D images of a rabbit heart (atria and part of ventricles), obtained using high-resolution magnetic resonance imaging. Segmentation was carried out semi-manually. A 3D right atrium array model (∼3.16 million elements), including eighteen objects, was constructed. For description of cellular electrophysiology, the Rogers-modified FitzHugh-Nagumo model was further modified to allow control of the major characteristics of the action potential with relatively low computational resource requirements. Model parameters were chosen to simulate the action potentials in the sinoatrial node, atrial muscle, inferior nodal extension and penetrating bundle. The block zone was simulated as passive tissue. The sinoatrial node, crista terminalis, main branch and roof bundle were considered as anisotropic. We have simulated normal and abnormal electrophysiology of the two nodes. In accordance with experimental findings: (i) during sinus rhythm, conduction occurs down the interatrial septum and into the atrioventricular node via the fast pathway (conduction down the crista terminalis and into the atrioventricular node via the slow pathway is slower); (ii) during atrial fibrillation, the sinoatrial node is protected from overdrive by its long refractory period; and (iii) during atrial fibrillation, the atrioventricular node reduces the frequency of action potentials reaching the ventricles. The model is able to simulate ventricular echo beats. In summary, a 3D anatomical model of the right atrium containing the cardiac conduction system is able to simulate a wide range of classical nodal behaviours. PMID:25380074

  5. On Channel-Discontinuity-Constraint Routing in Wireless Networks

    PubMed Central

    Sankararaman, Swaminathan; Efrat, Alon; Ramasubramanian, Srinivasan; Agarwal, Pankaj K.

    2011-01-01

    Multi-channel wireless networks are increasingly deployed as infrastructure networks, e.g. in metro areas. Network nodes frequently employ directional antennas to improve spatial throughput. In such networks, between two nodes, it is of interest to compute a path with a channel assignment for the links such that the path and link bandwidths are the same. This is achieved when any two consecutive links are assigned different channels, termed as “Channel-Discontinuity-Constraint” (CDC). CDC-paths are also useful in TDMA systems, where, preferably, consecutive links are assigned different time-slots. In the first part of this paper, we develop a t-spanner for CDC-paths using spatial properties; a sub-network containing O(n/θ) links, for any θ > 0, such that CDC-paths increase in cost by at most a factor t = (1−2 sin (θ/2))−2. We propose a novel distributed algorithm to compute the spanner using an expected number of O(n log n) fixed-size messages. In the second part, we present a distributed algorithm to find minimum-cost CDC-paths between two nodes using O(n2) fixed-size messages, by developing an extension of Edmonds’ algorithm for minimum-cost perfect matching. In a centralized implementation, our algorithm runs in O(n2) time improving the previous best algorithm which requires O(n3) running time. Moreover, this running time improves to O(n/θ) when used in conjunction with the spanner developed. PMID:24443646
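    Operationally, the CDC says a path is feasible only if consecutive links carry different channels. A minimum-cost CDC-path can be found centrally by running plain Dijkstra over (node, last-channel) states; this is only a didactic stand-in for the paper's distributed, matching-based algorithm:

```python
import heapq
import itertools

def min_cost_cdc_path(graph, src, dst):
    """Minimum-cost path under the Channel-Discontinuity-Constraint.

    graph: {u: [(v, channel, cost), ...]}.  States are (node,
    last_channel); an edge is usable only if its channel differs from
    the channel of the previous hop."""
    tie = itertools.count()                  # tiebreaker; avoids comparing paths
    pq = [(0.0, next(tie), src, None, [])]
    seen = set()
    while pq:
        cost, _, u, last_ch, path = heapq.heappop(pq)
        if (u, last_ch) in seen:
            continue
        seen.add((u, last_ch))
        if u == dst:
            return cost, path
        for v, ch, w in graph.get(u, []):
            if ch != last_ch:                # the CDC check
                heapq.heappush(
                    pq, (cost + w, next(tie), v, ch, path + [(u, v, ch)]))
    return None
```

    Note the state expansion multiplies the graph size by the number of channels, which is why the paper's O(n²)-message distributed algorithm and spanner are of interest for large networks.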

  6. An adaptive neural swarm approach for intrusion defense in ad hoc networks

    NASA Astrophysics Data System (ADS)

    Cannady, James

    2011-06-01

    Wireless sensor networks (WSN) and mobile ad hoc networks (MANET) are being increasingly deployed in critical applications due to the flexibility and extensibility of the technology. While these networks possess numerous advantages over traditional wireless systems in dynamic environments they are still vulnerable to many of the same types of host-based and distributed attacks common to those systems. Unfortunately, the limited power and bandwidth available in WSNs and MANETs, combined with the dynamic connectivity that is a defining characteristic of the technology, makes it extremely difficult to utilize traditional intrusion detection techniques. This paper describes an approach to accurately and efficiently detect potentially damaging activity in WSNs and MANETs. It enables the network as a whole to recognize attacks, anomalies, and potential vulnerabilities in a distributive manner that reflects the autonomic processes of biological systems. Each component of the network recognizes activity in its local environment and then contributes to the overall situational awareness of the entire system. The approach utilizes agent-based swarm intelligence to adaptively identify potential data sources on each node and on adjacent nodes throughout the network. The swarm agents then self-organize into modular neural networks that utilize a reinforcement learning algorithm to identify relevant behavior patterns in the data without supervision. Once the modular neural networks have established interconnectivity both locally and with neighboring nodes the analysis of events within the network can be conducted collectively in real-time. The approach has been shown to be extremely effective in identifying distributed network attacks.

  7. Sam2bam: High-Performance Framework for NGS Data Preprocessing Tools

    PubMed Central

    Cheng, Yinhe; Tzeng, Tzy-Hwa Kathy

    2016-01-01

    This paper introduces a high-throughput software tool framework called sam2bam that enables users to significantly speed up pre-processing for next-generation sequencing data. The sam2bam is especially efficient on single-node multi-core large-memory systems. It can reduce the runtime of data pre-processing in marking duplicate reads on a single node system by 156–186x compared with de facto standard tools. The sam2bam consists of parallel software components that can fully utilize multiple processors, available memory, high-bandwidth storage, and hardware compression accelerators, if available. The sam2bam provides file format conversion between well-known genome file formats, from SAM to BAM, as a basic feature. Additional features such as analyzing, filtering, and converting input data are provided by using plug-in tools, e.g., duplicate marking, which can be attached to sam2bam at runtime. We demonstrated that sam2bam could significantly reduce the runtime of next generation sequencing (NGS) data pre-processing from about two hours to about one minute for a whole-exome data set on a 16-core single-node system using up to 130 GB of memory. The sam2bam could reduce the runtime of NGS data pre-processing from about 20 hours to about nine minutes for a whole-genome sequencing data set on the same system using up to 711 GB of memory. PMID:27861637

  8. Computational lymphatic node models in pediatric and adult hybrid phantoms for radiation dosimetry

    NASA Astrophysics Data System (ADS)

    Lee, Choonsik; Lamart, Stephanie; Moroz, Brian E.

    2013-03-01

    We developed models of lymphatic nodes for six pediatric and two adult hybrid computational phantoms to calculate the lymphatic node dose estimates from external and internal radiation exposures. We derived the number of lymphatic nodes from the recommendations in International Commission on Radiological Protection (ICRP) Publications 23 and 89 at 16 cluster locations for the lymphatic nodes: extrathoracic, cervical, thoracic (upper and lower), breast (left and right), mesentery (left and right), axillary (left and right), cubital (left and right), inguinal (left and right) and popliteal (left and right), for different ages (newborn, 1-, 5-, 10-, 15-year-old and adult). We modeled each lymphatic node within the voxel format of the hybrid phantoms by assuming that all nodes have an identical size derived from published data, except at narrow cluster sites. The lymph nodes were generated by the following algorithm: (1) selection of the lymph node site among the 16 cluster sites; (2) random sampling of the location of the lymph node within a spherical space centered at the chosen cluster site; (3) creation of the sphere or ovoid of tissue representing the node based on lymphatic node characteristics defined in ICRP Publications 23 and 89. We created lymph nodes until the pre-defined number of lymphatic nodes at the selected cluster site was reached. This algorithm was applied to pediatric (newborn, 1-, 5-, 10- and 15-year-old male) and adult male and female ICRP-compliant hybrid phantoms after voxelization. To assess the performance of our models for internal dosimetry, we calculated dose conversion coefficients, called S values, for selected organs and tissues with Iodine-131 distributed in six lymphatic node cluster sites using MCNPX2.6, a well validated Monte Carlo radiation transport code.
Our analysis of the calculations indicates that the S values were significantly affected by the location of the lymph node clusters and that the values increased for smaller phantoms due to the shorter inter-organ distances compared to the bigger phantoms. By testing sensitivity of S values to random sampling and voxel resolution, we confirmed that the lymph node model is reasonably stable and consistent for different random samplings and voxel resolutions.
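    Step (2) of the algorithm above, uniform sampling of node centers within a spherical neighborhood of a cluster site, can be illustrated with simple rejection sampling. The function name and parameters are hypothetical, and step (3), node-shape creation and voxelization, is omitted:

```python
import random

def sample_node_centers(cluster_center, radius, count, seed=0):
    """Draw `count` lymph-node centres uniformly at random inside a
    sphere of `radius` around the chosen cluster site."""
    rng = random.Random(seed)
    cx, cy, cz = cluster_center
    centers = []
    while len(centers) < count:
        # rejection sampling from the bounding cube keeps density uniform
        x = rng.uniform(-radius, radius)
        y = rng.uniform(-radius, radius)
        z = rng.uniform(-radius, radius)
        if x * x + y * y + z * z <= radius * radius:
            centers.append((cx + x, cy + y, cz + z))
    return centers
```

    Sampling coordinates independently and rejecting points outside the sphere avoids the clustering toward the center that naive spherical-coordinate sampling would introduce.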

  9. Adaptive Broadcasting Mechanism for Bandwidth Allocation in Mobile Services

    PubMed Central

    Horng, Gwo-Jiun; Wang, Chi-Hsuan; Chou, Chih-Lun

    2014-01-01

    This paper proposes a tree-based adaptive broadcasting (TAB) algorithm for data dissemination to improve data access efficiency. The proposed TAB algorithm first constructs a broadcast tree to determine the broadcast frequency of each data item and splits the broadcast tree into some broadcast wood to generate the broadcast program. In addition, this paper develops an analytical model to derive the mean access latency of the generated broadcast program. In light of the derived results, both the index channel's bandwidth and the data channel's bandwidth can be optimally allocated to maximize bandwidth utilization. This paper presents experiments to help evaluate the effectiveness of the proposed strategy. From the experimental results, it can be seen that the proposed mechanism is feasible in practice. PMID:25057509

  10. Simulations of X-ray diffraction of shock-compressed single-crystal tantalum with synchrotron undulator sources.

    PubMed

    Tang, M X; Zhang, Y Y; E, J C; Luo, S N

    2018-05-01

    Polychromatic synchrotron undulator X-ray sources are useful for ultrafast single-crystal diffraction under shock compression. Here, simulations of X-ray diffraction of shock-compressed single-crystal tantalum with realistic undulator sources are reported, based on large-scale molecular dynamics simulations. Purely elastic deformation, elastic-plastic two-wave structure, and severe plastic deformation under different impact velocities are explored, as well as an edge release case. Transmission-mode diffraction simulations consider crystallographic orientation, loading direction, incident beam direction, X-ray spectrum bandwidth and realistic detector size. Diffraction patterns and reciprocal space nodes are obtained from atomic configurations for different loading (elastic and plastic) and detection conditions, and interpretation of the diffraction patterns is discussed.

  11. Simulations of X-ray diffraction of shock-compressed single-crystal tantalum with synchrotron undulator sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, M. X.; Zhang, Y. Y.; E, J. C.

    Polychromatic synchrotron undulator X-ray sources are useful for ultrafast single-crystal diffraction under shock compression. Here, simulations of X-ray diffraction of shock-compressed single-crystal tantalum with realistic undulator sources are reported, based on large-scale molecular dynamics simulations. Purely elastic deformation, elastic–plastic two-wave structure, and severe plastic deformation under different impact velocities are explored, as well as an edge release case. Transmission-mode diffraction simulations consider crystallographic orientation, loading direction, incident beam direction, X-ray spectrum bandwidth and realistic detector size. Diffraction patterns and reciprocal space nodes are obtained from atomic configurations for different loading (elastic and plastic) and detection conditions, and interpretation of themore » diffraction patterns is discussed.« less

  12. Design and Performance of the Acts Gigabit Satellite Network High Data-Rate Ground Station

    NASA Technical Reports Server (NTRS)

    Hoder, Doug; Kearney, Brian

    1995-01-01

    The ACTS High Data-Rate Ground Stations were built to support the ACTS Gigabit Satellite Network (GSN). The ACTS GSN was designed to provide fiber-compatible SONET service to remote nodes and networks through a wideband satellite system. The ACTS satellite is unique in its extremely wide bandwidth and electronically controlled spot beam antennas. This paper discusses the requirements, design and performance of the RF section of the ACTS High Data-Rate Ground Stations and constituent hardware. The ACTS transponder systems incorporate highly nonlinear hard limiting, which introduced a major complexity into the design and subsequent modification of the ground stations. A discussion of the peculiarities of the ACTS spacecraft transponder system and their impact is included.

  13. Use of High Frequency Ultrasound to Monitor Cervical Lymph Node Alterations in Mice

    PubMed Central

    Walk, Elyse L.; McLaughlin, Sarah; Coad, James; Weed, Scott A.

    2014-01-01

    Cervical lymph node evaluation by clinical ultrasound is a non-invasive procedure used in diagnosing nodal status, and when combined with fine-needle aspiration cytology (FNAC), provides an effective method to assess nodal pathologies. Development of high-frequency ultrasound (HF US) allows real-time monitoring of lymph node alterations in animal models. While HF US is frequently used in animal models of tumor biology, use of HF US for studying cervical lymph nodes alterations associated with murine models of head and neck cancer, or any other model of lymphadenopathy, is lacking. Here we utilize HF US to monitor cervical lymph nodes changes in mice following exposure to the oral cancer-inducing carcinogen 4-nitroquinoline-1-oxide (4-NQO) and in mice with systemic autoimmunity. 4-NQO induces tumors within the mouse oral cavity as early as 19 wks that recapitulate HNSCC. Monitoring of cervical (mandibular) lymph nodes by gray scale and power Doppler sonography revealed changes in lymph node size eight weeks after 4-NQO treatment, prior to tumor formation. 4-NQO causes changes in cervical node blood flow resulting from oral tumor progression. Histological evaluation indicated that the early 4-NQO induced changes in lymph node volume were due to specific hyperproliferation of T-cell enriched zones in the paracortex. We also show that HF US can be used to perform image-guided fine needle aspirate (FNA) biopsies on mice with enlarged mandibular lymph nodes due to genetic mutation of Fas ligand (Fasl). Collectively these studies indicate that HF US is an effective technique for the non-invasive study of cervical lymph node alterations in live mouse models of oral cancer and other mouse models containing cervical lymphadenopathy. PMID:24955984

  14. Use of high frequency ultrasound to monitor cervical lymph node alterations in mice.

    PubMed

    Walk, Elyse L; McLaughlin, Sarah; Coad, James; Weed, Scott A

    2014-01-01

    Cervical lymph node evaluation by clinical ultrasound is a non-invasive procedure used in diagnosing nodal status, and when combined with fine-needle aspiration cytology (FNAC), provides an effective method to assess nodal pathologies. Development of high-frequency ultrasound (HF US) allows real-time monitoring of lymph node alterations in animal models. While HF US is frequently used in animal models of tumor biology, use of HF US for studying cervical lymph nodes alterations associated with murine models of head and neck cancer, or any other model of lymphadenopathy, is lacking. Here we utilize HF US to monitor cervical lymph nodes changes in mice following exposure to the oral cancer-inducing carcinogen 4-nitroquinoline-1-oxide (4-NQO) and in mice with systemic autoimmunity. 4-NQO induces tumors within the mouse oral cavity as early as 19 wks that recapitulate HNSCC. Monitoring of cervical (mandibular) lymph nodes by gray scale and power Doppler sonography revealed changes in lymph node size eight weeks after 4-NQO treatment, prior to tumor formation. 4-NQO causes changes in cervical node blood flow resulting from oral tumor progression. Histological evaluation indicated that the early 4-NQO induced changes in lymph node volume were due to specific hyperproliferation of T-cell enriched zones in the paracortex. We also show that HF US can be used to perform image-guided fine needle aspirate (FNA) biopsies on mice with enlarged mandibular lymph nodes due to genetic mutation of Fas ligand (Fasl). Collectively these studies indicate that HF US is an effective technique for the non-invasive study of cervical lymph node alterations in live mouse models of oral cancer and other mouse models containing cervical lymphadenopathy.

  15. High Bandwidth Communications Links Between Heterogeneous Autonomous Vehicles Using Sensor Network Modeling and Extremum Control Approaches

    DTIC Science & Technology

    2008-12-01

    In future network-centric warfare environments, teams of autonomous vehicles will be deployed in a cooperative manner to conduct wide-area...of data back to the command station, autonomous vehicles configured with high bandwidth communication systems are positioned between the command

  16. Lymph node detection in IASLC-defined zones on PET/CT images

    NASA Astrophysics Data System (ADS)

    Song, Yihua; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Torigian, Drew A.

    2016-03-01

    Lymph node detection is challenging due to the low contrast between lymph nodes and surrounding soft tissues and the variation in nodal size and shape. In this paper, we propose several novel ideas which are combined into a system to operate on positron emission tomography/computed tomography (PET/CT) images to detect abnormal thoracic nodes. First, our previous Automatic Anatomy Recognition (AAR) approach is modified where lymph node zones predominantly following International Association for the Study of Lung Cancer (IASLC) specifications are modeled as objects arranged in a hierarchy along with key anatomic anchor objects. This fuzzy anatomy model built from diagnostic CT images is then deployed on PET/CT images for automatically recognizing the zones. A novel globular filter (g-filter) to detect blob-like objects over a specified range of sizes is designed to detect the most likely locations and sizes of diseased nodes. Abnormal nodes within each automatically localized zone are subsequently detected via combined use of different items of information at various scales: lymph node zone model poses found at recognition indicating the geographic layout at the global level of node clusters, the g-filter response which hones in on and carefully selects node-like globular objects at the node level, and CT and PET gray values within only the most plausible nodal regions for node presence at the voxel level. The models are built from 25 diagnostic CT scans and refined for an object hierarchy based on a separate set of 20 diagnostic CT scans. Node detection is tested on an additional set of 20 PET/CT scans. Our preliminary results indicate node detection sensitivity and specificity of around 90% and 85%, respectively.

  17. Design of a Single Channel Modulated Wideband Converter for Wideband Spectrum Sensing: Theory, Architecture and Hardware Implementation

    PubMed Central

    Liu, Weisong; Huang, Zhitao; Wang, Xiang; Sun, Weichao

    2017-01-01

    In a cognitive radio sensor network (CRSN), wideband spectrum sensing devices, which aim to exploit temporarily vacant spectrum intervals as soon as possible, are of great importance. However, the challenge of increasingly high signal frequencies and wide bandwidths requires an extremely high sampling rate, which may exceed the front-end bandwidth of today's best analog-to-digital converters (ADCs). The recently proposed architecture called the modulated wideband converter (MWC) is an attractive analog compressed sensing technique that can greatly reduce the sampling rate. However, the MWC has high hardware complexity owing to its parallel channel structure, especially when the number of signals increases. In this paper, we propose a single channel modulated wideband converter (SCMWC) scheme for spectrum sensing of band-limited wide-sense stationary (WSS) signals. With one antenna or sensor, this scheme saves not only sampling rate but also hardware complexity. We then present a new SCMWC-based single-node CR prototype system, on which the spectrum sensing algorithm was tested. Experiments on our hardware prototype show that the proposed architecture leads to successful spectrum sensing, while the total sampling rate as well as the hardware size is that of only one MWC channel. PMID:28471410

  18. Gbps wireless transceivers for high bandwidth interconnections in distributed cyber physical systems

    NASA Astrophysics Data System (ADS)

    Saponara, Sergio; Neri, Bruno

    2015-05-01

    In Cyber Physical Systems there is a growing use of high speed sensors such as photo and video cameras and radio and light detection and ranging (Radar/Lidar) sensors. Hence Cyber Physical Systems can benefit from the high communication data rates, of several Gbps, that mm-wave wireless transceivers can provide. At such high frequencies the wavelength is a few mm, and hence the whole transceiver, including the antenna, can be integrated in a single chip. To this aim, this paper presents the design of a 60 GHz transceiver architecture to ensure connection distances up to 10 m and data rates up to 4 Gbps. At 60 GHz there are more than 7 GHz of unlicensed bandwidth (available for free for the development of new services). By using a CMOS SOI technology, RF, analog and digital baseband circuitry can be integrated in the same chip, minimizing noise coupling. Even the antenna is integrated on chip, reducing cost and size vs. classic off-chip antenna solutions. Therefore the proposed transceiver can enable, at the physical layer, the implementation of low-cost nodes for a Cyber Physical System with data rates of several Gbps and with a communication distance suitable for home/office scenarios, or on-board vehicles such as cars, trains, ships and airplanes.

  19. Hybrid WDM/TDM PON Using the AWG FSR and Featuring Centralized Light Generation and Dynamic Bandwidth Allocation

    NASA Astrophysics Data System (ADS)

    Bock, Carlos; Prat, Josep; Walker, Stuart D.

    2005-12-01

    A novel time/space/wavelength division multiplexing (TDM/WDM) architecture using the free spectral range (FSR) periodicity of the arrayed waveguide grating (AWG) is presented. A shared tunable laser and a photoreceiver stack featuring dynamic bandwidth allocation (DBA) and remote modulation are used for transmission and reception. Transmission tests show correct operation at 2.5 Gb/s to a 30-km reach, and network performance calculations using queue modeling demonstrate that a high-bandwidth-demanding application could be deployed on this network.

  20. Multiport Circular Polarized RFID-Tag Antenna for UHF Sensor Applications.

    PubMed

    Zaid, Jamal; Abdulhadi, Abdulhadi; Kesavan, Arun; Belaizi, Yassin; Denidni, Tayeb A

    2017-07-05

    A circular polarized patch antenna for UHF RFID tag-based sensor applications is presented, with the circular polarization (CP) generated by a new antenna shape, an asymmetric stars shaped slotted microstrip patch antenna (CP-ASSSMP). Four stars etched on the patch allow the antenna's size to be reduced by close to 20%. The proposed antenna is matched with two RFID chips via inductive-loop matching. The first chip is connected to a resistive sensor and acts as a sensor node, and the second is used as a reference node. The proposed antenna is used for two targets, serving as both reference and sensor simultaneously, thereby eliminating the need for a second antenna. Its reader can read the RFID chips at any orientation of the tag due to the CP. The measured reading range is about 25 m with mismatch polarization. The operating frequency band is 902-929 MHz for the two ports, which is covered by the US RFID band, and the axial-ratio bandwidth is about 7 MHz. In addition, the reader can also detect temperature, based on the minimum difference in the power required by the reference and sensor.

  1. Multiport Circular Polarized RFID-Tag Antenna for UHF Sensor Applications

    PubMed Central

    Zaid, Jamal; Abdulhadi, Abdulhadi; Kesavan, Arun; Belaizi, Yassin; Denidni, Tayeb A.

    2017-01-01

    A circular polarized patch antenna for UHF RFID tag-based sensor applications is presented, with the circular polarization (CP) generated by a new antenna shape, an asymmetric stars shaped slotted microstrip patch antenna (CP-ASSSMP). Four stars etched on the patch allow the antenna’s size to be reduced by close to 20%. The proposed antenna is matched with two RFID chips via inductive-loop matching. The first chip is connected to a resistive sensor and acts as a sensor node, and the second is used as a reference node. The proposed antenna is used for two targets, serving as both reference and sensor simultaneously, thereby eliminating the need for a second antenna. Its reader can read the RFID chips at any orientation of the tag due to the CP. The measured reading range is about 25 m with mismatch polarization. The operating frequency band is 902–929 MHz for the two ports, which is covered by the US RFID band, and the axial-ratio bandwidth is about 7 MHz. In addition, the reader can also detect temperature, based on the minimum difference in the power required by the reference and sensor. PMID:28678178

  2. An optimization method of VON mapping for energy efficiency and routing in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Liu, Huanlin; Xiong, Cuilian; Chen, Yong; Li, Changping; Chen, Derun

    2018-03-01

    To improve resource utilization efficiency, network virtualization in elastic optical networks has been developed by sharing the same physical network among different users and applications. In the process of virtual node mapping, longer paths between physical nodes consume more spectrum resources and energy. To address this problem, we propose a virtual optical network mapping algorithm called the genetic multi-objective optimized virtual optical network mapping algorithm (GM-OVONM-AL), which jointly optimizes the energy consumption and spectrum resource consumption in the process of virtual optical network mapping. Firstly, a vector function is proposed to balance the energy consumption and spectrum resources by optimizing population classification and crowding distance sorting. Then, an adaptive crossover operator based on hierarchical comparison is proposed to improve search ability and convergence speed. In addition, the principle of the survival of the fittest is introduced to select better individuals according to their domination rank. Compared with the spectrum consecutiveness-opaque virtual optical network mapping algorithm and the baseline-opaque virtual optical network mapping algorithm, simulation results show that the proposed GM-OVONM-AL achieves the lowest bandwidth blocking probability and saves energy.
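    The "crowding distance sorting" used for population classification is a standard NSGA-II-style diversity measure over objective vectors such as (energy, spectrum slots). A generic sketch follows, not the GM-OVONM-AL code itself:

```python
def crowding_distance(front):
    """NSGA-II crowding distance for one non-dominated front.

    front: list of objective tuples, e.g. (energy, spectrum_slots),
    lower being better.  Boundary solutions get infinite distance so
    they are always kept; interior solutions are scored by how isolated
    they are along each objective."""
    n = len(front)
    if n == 0:
        return []
    dist = [0.0] * n
    n_obj = len(front[0])
    for m in range(n_obj):
        order = sorted(range(n), key=lambda i: front[i][m])
        dist[order[0]] = dist[order[-1]] = float('inf')
        span = front[order[-1]][m] - front[order[0]][m]
        if span == 0:
            continue                      # all equal on this objective
        for k in range(1, n - 1):
            dist[order[k]] += (front[order[k + 1]][m] -
                               front[order[k - 1]][m]) / span
    return dist
```

    Selecting individuals with larger crowding distance keeps the Pareto front spread out, so the algorithm does not collapse onto a single energy/spectrum trade-off point.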

  3. Mesh Network Architecture for Enabling Inter-Spacecraft Communication

    NASA Technical Reports Server (NTRS)

    Becker, Christopher; Merrill, Garrick

    2017-01-01

    To enable communication between spacecraft operating in a formation or small constellation, a mesh network architecture was developed and tested using a time division multiple access (TDMA) communication scheme. The network is designed to allow for the exchange of telemetry and other data between spacecraft to enable collaboration between small spacecraft. The system uses a peer-to-peer topology with no central router, so that it does not have a single point of failure. The mesh network is dynamically configurable to allow for addition and removal of spacecraft in the communication network. Flight testing was performed using an unmanned aerial system (UAS) formation acting as a spacecraft analogue and providing a stressing environment to prove mesh network performance. The mesh network was primarily devised to provide low latency, high frequency communication but is flexible and can also be configured to provide higher bandwidth for applications desiring high data throughput. The network includes a relay functionality that extends the maximum range between spacecraft in the network by relaying data from node to node. The mesh network control is implemented completely in software, making it hardware agnostic and thereby allowing it to function with a wide variety of existing radios and computing platforms.

  4. Algorithm for protecting light-trees in survivable mesh wavelength-division-multiplexing networks

    NASA Astrophysics Data System (ADS)

    Luo, Hongbin; Li, Lemin; Yu, Hongfang

    2006-12-01

    Wavelength-division-multiplexing (WDM) technology is expected to facilitate bandwidth-intensive multicast applications such as high-definition television. A single fiber cut in a WDM mesh network, however, can disrupt the dissemination of information to several destinations on a light-tree based multicast session. Thus it is imperative to protect multicast sessions by reserving redundant resources. We propose a novel and efficient algorithm for protecting light-trees in survivable WDM mesh networks. The algorithm is called segment-based protection with sister node first (SSNF), whose basic idea is to protect a light-tree using a set of backup segments with a higher priority to protect the segments from a branch point to its children (sister nodes). The SSNF algorithm differs from the segment protection scheme proposed in the literature in how the segments are identified and protected. Our objective is to minimize the network resources used for protecting each primary light-tree such that the blocking probability can be minimized. To verify the effectiveness of the SSNF algorithm, we conduct extensive simulation experiments. The simulation results demonstrate that the SSNF algorithm outperforms existing algorithms for the same problem.

  5. Capacity planning of link restorable optical networks under dynamic change of traffic

    NASA Astrophysics Data System (ADS)

    Ho, Kwok Shing; Cheung, Kwok Wai

    2005-11-01

    Future backbone networks will require full survivability and support for dynamic changes of traffic demands. Generalized Survivable Networks (GSN) were proposed to meet these challenges. GSN is fully survivable under dynamic traffic demand changes, so it offers a practical and guaranteed characterization framework for ASTN/ASON survivable network planning and bandwidth-on-demand resource allocation [4]. The basic idea of GSN is to incorporate the non-blocking network concept into survivable network models. In GSN, each network node must specify its I/O capacity bound, which is taken as a constraint on any allowable traffic demand matrix. In this paper, we consider the following generic GSN network design problem: given the I/O bounds of each network node, find a routing scheme (and the corresponding rerouting scheme under failure) and the link capacity assignment (both working and spare) which minimize the cost, such that any traffic matrix consistent with the given I/O bounds can be feasibly routed and is single-fault tolerant under the link restoration scheme. We first show how the initial, computationally infeasible mixed integer programming formulation can be transformed into a more tractable problem using the duality transformation of the linear program. Then we show how the problem can be simplified using the Lagrangian relaxation approach. Previous work has outlined a two-phase approach for solving this problem, where the first phase optimizes the working capacity assignment and the second phase optimizes the spare capacity assignment. In this paper, we present a jointly optimized framework for dimensioning the survivable optical network with the GSN model. Experimental results show that the jointly optimized GSN brings about an average of 3.8% cost savings compared with the separate two-phase approach. Finally, we perform a cost comparison and show that GSN can be deployed at a reasonable cost.
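
    The condition "any traffic matrix consistent with the given I/O bounds" is a hose-model constraint: a demand matrix is allowable when each node's total outgoing demand (row sum) and total incoming demand (column sum) stay within its declared bounds. A minimal feasibility check, with purely illustrative demands and bounds:

```python
def within_io_bounds(traffic, out_bound, in_bound):
    """Check that a traffic matrix respects per-node I/O capacity
    bounds (the hose model used by GSN): row i must sum to at most
    out_bound[i], column j to at most in_bound[j]."""
    n = len(traffic)
    rows_ok = all(sum(traffic[i]) <= out_bound[i] for i in range(n))
    cols_ok = all(sum(traffic[i][j] for i in range(n)) <= in_bound[j]
                  for j in range(n))
    return rows_ok and cols_ok

# 3-node example with symmetric bounds of 10 capacity units per node
demands = [[0, 4, 5],
           [3, 0, 4],
           [2, 5, 0]]
print(within_io_bounds(demands, [10, 10, 10], [10, 10, 10]))
```

    The design problem in the abstract asks for capacity assignments that can carry every matrix passing this check, not just one nominal matrix.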

  6. Roles of Formin Nodes and Myosin Motor Activity in Mid1p-dependent Contractile-Ring Assembly during Fission Yeast Cytokinesis

    PubMed Central

    Coffman, Valerie C.; Nile, Aaron H.; Lee, I-Ju; Liu, Huayang

    2009-01-01

    Two prevailing models have emerged to explain the mechanism of contractile-ring assembly during cytokinesis in the fission yeast Schizosaccharomyces pombe: the spot/leading cable model and the search, capture, pull, and release (SCPR) model. We tested some of the basic assumptions of the two models. Monte Carlo simulations of the SCPR model require that the formin Cdc12p is present in >30 nodes from which actin filaments are nucleated and captured by myosin-II in neighboring nodes. The force produced by myosin motors pulls the nodes together to form a compact contractile ring. Live microscopy of cells expressing Cdc12p fluorescent fusion proteins shows for the first time that Cdc12p localizes to a broad band of 30–50 dynamic nodes, where actin filaments are nucleated in random directions. The proposed progenitor spot, essential for the spot/leading cable model, usually disappears without nucleating actin filaments. α-Actinin ain1 deletion cells form a normal contractile ring through nodes in the absence of the spot. Myosin motor activity is required to condense the nodes into a contractile ring, based on slower or absent node condensation in myo2-E1 and UCS rng3-65 mutants. Taken together, these data provide strong support for the SCPR model of contractile-ring formation in cytokinesis. PMID:19864459

  7. Development of response models for the Earth Radiation Budget Experiment (ERBE) sensors. Part 1: Dynamic models and computer simulations for the ERBE nonscanner, scanner and solar monitor sensors

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Choi, Sang H.; Chrisman, Dan A., Jr.; Samms, Richard W.

    1987-01-01

    Dynamic models and computer simulations were developed for the radiometric sensors utilized in the Earth Radiation Budget Experiment (ERBE). The models were developed to understand performance, improve measurement accuracy by updating model parameters and provide the constants needed for the count conversion algorithms. Model simulations were compared with the sensor's actual responses demonstrated in the ground and inflight calibrations. The models consider thermal and radiative exchange effects, surface specularity, spectral dependence of a filter, radiative interactions among an enclosure's nodes, partial specular and diffuse enclosure surface characteristics and steady-state and transient sensor responses. Relatively few sensor nodes were chosen for the models since there is an accuracy tradeoff between increasing the number of nodes and approximating parameters such as the sensor's size, material properties, geometry, and enclosure surface characteristics. Given that the temperature gradients within a node and between nodes are small enough, approximating with only a few nodes does not jeopardize the accuracy required to perform the parameter estimates and error analyses.

  8. Network structure exploration in networks with node attributes

    NASA Astrophysics Data System (ADS)

    Chen, Yi; Wang, Xiaolong; Bu, Junzhao; Tang, Buzhou; Xiang, Xin

    2016-05-01

    Complex networks provide a powerful way to represent complex systems and have been widely studied during the past several years. One of the most important tasks of network analysis is to detect structures (also called structural regularities) embedded in networks by determining group number and group partition. Most network structure exploration models only consider network links. However, in real-world networks, nodes may have attributes that are useful for network structure exploration. In this paper, we propose a novel Bayesian nonparametric (BNP) model to explore structural regularities in networks with node attributes, called the Bayesian nonparametric attribute (BNPA) model. This model not only takes full advantage of both the links between nodes and the node attributes for group partition via shared hidden variables, but also determines the group number automatically via Bayesian nonparametric theory. Experiments conducted on a number of real and synthetic networks show that our BNPA model is able to automatically explore structural regularities in networks with node attributes and is competitive with other state-of-the-art models.

  9. Design and implementation of a hybrid MPI-CUDA model for the Smith-Waterman algorithm.

    PubMed

    Khaled, Heba; Faheem, Hossam El Deen Mostafa; El Gohary, Rania

    2015-01-01

    This paper provides a novel hybrid model for solving the multiple pair-wise sequence alignment problem, combining the message passing interface (MPI) and CUDA, the parallel computing platform and programming model invented by NVIDIA. The proposed model targets homogeneous cluster nodes equipped with similar Graphical Processing Unit (GPU) cards. The model consists of the Master Node Dispatcher (MND) and the Worker GPU Nodes (WGN). The MND distributes the workload among the cluster working nodes and then aggregates the results. The WGN performs the multiple pair-wise sequence alignments using the Smith-Waterman algorithm. We also propose a modified implementation of the Smith-Waterman algorithm based on computing the alignment matrices row-wise. The experimental results demonstrate a considerable reduction in the running time as the number of working GPU nodes increases. The proposed model achieved a performance of about 12 Giga cell updates per second when tested against the SWISS-PROT protein knowledge base running on four nodes.
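
    The Smith-Waterman recurrence that the worker nodes compute can be sketched in row-wise order, the traversal order the paper's modified implementation is built around. The scoring parameters below are illustrative defaults, not the paper's; a production version would use a substitution matrix and affine gaps:

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
    """Best local-alignment score between sequences a and b,
    filling the dynamic-programming matrix one row at a time and
    keeping only the previous row in memory."""
    prev = [0] * (len(b) + 1)
    best = 0
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            cur[j] = max(0, prev[j - 1] + s, prev[j] + gap, cur[j - 1] + gap)
            best = max(best, cur[j])
        prev = cur
    return best
```

    Because each row depends only on the row above it, rows are a natural unit for distributing cell updates across GPU threads, which is what makes the row-wise formulation attractive for a CUDA kernel.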

  10. A Very Large Area Network (VLAN) knowledge-base applied to space communication problems

    NASA Technical Reports Server (NTRS)

    Zander, Carol S.

    1988-01-01

    This paper first describes a hierarchical model for very large area networks (VLAN). Space communication problems whose solution could profit by the model are discussed and then an enhanced version of this model incorporating the knowledge needed for the missile detection-destruction problem is presented. A satellite network or VLAN is a network which includes at least one satellite. Due to the complexity, a compromise between fully centralized and fully distributed network management has been adopted. Network nodes are assigned to a physically localized group, called a partition. Partitions consist of groups of cell nodes with one cell node acting as the organizer or master, called the Group Master (GM). Coordinating the group masters is a Partition Master (PM). Knowledge is also distributed hierarchically existing in at least two nodes. Each satellite node has a back-up earth node. Knowledge must be distributed in such a way so as to minimize information loss when a node fails. Thus the model is hierarchical both physically and informationally.

  11. AHaH computing-from metastable switches to attractors to machine learning.

    PubMed

    Nugent, Michael Alexander; Molter, Timothy Wesley

    2014-01-01

    Modern computing architecture based on the separation of memory and processing leads to a well known problem called the von Neumann bottleneck, a restrictive limit on the data bandwidth between CPU and RAM. This paper introduces a new approach to computing we call AHaH computing where memory and processing are combined. The idea is based on the attractor dynamics of volatile dissipative electronics inspired by biological systems, presenting an attractive alternative architecture that is able to adapt, self-repair, and learn from interactions with the environment. We envision that both von Neumann and AHaH computing architectures will operate together on the same machine, but that the AHaH computing processor may reduce the power consumption and processing time for certain adaptive learning tasks by orders of magnitude. The paper begins by drawing a connection between the properties of volatility, thermodynamics, and Anti-Hebbian and Hebbian (AHaH) plasticity. We show how AHaH synaptic plasticity leads to attractor states that extract the independent components of applied data streams and how they form a computationally complete set of logic functions. After introducing a general memristive device model based on collections of metastable switches, we show how adaptive synaptic weights can be formed from differential pairs of incremental memristors. We also disclose how arrays of synaptic weights can be used to build a neural node circuit operating AHaH plasticity. By configuring the attractor states of the AHaH node in different ways, high level machine learning functions are demonstrated. This includes unsupervised clustering, supervised and unsupervised classification, complex signal prediction, unsupervised robotic actuation and combinatorial optimization of procedures-all key capabilities of biological nervous systems and modern machine learning algorithms with real world application.

  12. AHaH Computing–From Metastable Switches to Attractors to Machine Learning

    PubMed Central

    Nugent, Michael Alexander; Molter, Timothy Wesley

    2014-01-01

    Modern computing architecture based on the separation of memory and processing leads to a well known problem called the von Neumann bottleneck, a restrictive limit on the data bandwidth between CPU and RAM. This paper introduces a new approach to computing we call AHaH computing where memory and processing are combined. The idea is based on the attractor dynamics of volatile dissipative electronics inspired by biological systems, presenting an attractive alternative architecture that is able to adapt, self-repair, and learn from interactions with the environment. We envision that both von Neumann and AHaH computing architectures will operate together on the same machine, but that the AHaH computing processor may reduce the power consumption and processing time for certain adaptive learning tasks by orders of magnitude. The paper begins by drawing a connection between the properties of volatility, thermodynamics, and Anti-Hebbian and Hebbian (AHaH) plasticity. We show how AHaH synaptic plasticity leads to attractor states that extract the independent components of applied data streams and how they form a computationally complete set of logic functions. After introducing a general memristive device model based on collections of metastable switches, we show how adaptive synaptic weights can be formed from differential pairs of incremental memristors. We also disclose how arrays of synaptic weights can be used to build a neural node circuit operating AHaH plasticity. By configuring the attractor states of the AHaH node in different ways, high level machine learning functions are demonstrated. This includes unsupervised clustering, supervised and unsupervised classification, complex signal prediction, unsupervised robotic actuation and combinatorial optimization of procedures–all key capabilities of biological nervous systems and modern machine learning algorithms with real world application. PMID:24520315

  13. A comparative signaling cost analysis of Macro Mobility scheme in NEMO (MM-NEMO) with mobility management protocol

    NASA Astrophysics Data System (ADS)

    Islam, Shayla; Abdalla, Aisha H.; Habaebi, Mohamed H.; Latif, Suhaimi A.; Hassan, Wan H.; Hasan, Mohammad K.; Ramli, H. A. M.; Khalifa, Othman O.

    2013-12-01

    NEMO BSP is an upgraded addition to Mobile IPv6 (MIPv6). As MIPv6 and its enhancements (i.e. HMIPv6) possess some limitations, such as higher handoff latency and packet loss, NEMO BSP also faces all these shortcomings by inheritance. Network Mobility (NEMO) handles the movement of a Mobile Router (MR) and its Mobile Network Nodes (MNNs) during handoff. Hence it is essential to upgrade the performance of the mobility management protocol to obtain continuous session connectivity with lower delay and packet loss in a NEMO environment. The completion of the handoff process in NEMO BSP usually takes a long time, since the MR needs to register its single primary care-of address (CoA) with the home network, which may degrade the performance of the applications running on the Mobile Network Nodes. Moreover, when a change in the point of attachment of the mobile network is accompanied by a sudden burst of signaling messages, a "signaling storm" occurs, which eventually results in temporary congestion, packet delays or even packet loss. This effect is particularly significant in wireless environments, where a wireless link is not as steady as a wired link and bandwidth is relatively limited. Hence, providing continuous Internet connection without any interruption by applying multihoming techniques and route optimization mechanisms in NEMO is becoming the center of attention for current researchers. In this paper, we propose a handoff cost model to compare the signaling cost of MM-NEMO with the NEMO Basic Support Protocol (NEMO BSP) and HMIPv6. The numerical results show that the signaling cost of the MM-NEMO scheme is about 69.6% less than that of NEMO BSP and HMIPv6.

  14. Model validation of untethered, ultrasonic neural dust motes for cortical recording.

    PubMed

    Seo, Dongjin; Carmena, Jose M; Rabaey, Jan M; Maharbiz, Michel M; Alon, Elad

    2015-04-15

    A major hurdle in brain-machine interfaces (BMI) is the lack of an implantable neural interface system that remains viable for a substantial fraction of the user's lifetime. Recently, sub-mm implantable, wireless electromagnetic (EM) neural interfaces have been demonstrated in an effort to extend system longevity. However, EM systems do not scale down in size well due to the severe inefficiency of coupling radio-waves at those scales within tissue. This paper explores fundamental system design trade-offs as well as size, power, and bandwidth scaling limits of neural recording systems built from low-power electronics coupled with ultrasonic power delivery and backscatter communication. Such systems will require two fundamental technology innovations: (1) 10-100 μm scale, free-floating, independent sensor nodes, or neural dust, that detect and report local extracellular electrophysiological data via ultrasonic backscattering and (2) a sub-cranial ultrasonic interrogator that establishes power and communication links with the neural dust. We provide experimental verification that the predicted scaling effects follow theory; (127 μm)³ neural dust motes immersed in water 3 cm from the interrogator couple with 0.002064% power transfer efficiency and 0.04246 ppm backscatter, resulting in a maximum received power of ∼0.5 μW with ∼1 nW of change in backscatter power with neural activity. The high efficiency of ultrasonic transmission can enable scaling of the sensing nodes down to tens of micrometers. We conclude with a brief discussion of the application of neural dust for both central and peripheral nervous system recordings, and perspectives on future research directions. Copyright © 2014 Elsevier B.V. All rights reserved.
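
    The reported figures allow a simple link-budget sanity check. The interrogator output below is inferred from the abstract's numbers, not stated in it, and assumes the quoted efficiency applies end-to-end from interrogator to mote:

```python
# Link-budget arithmetic on the reported neural dust figures.
efficiency = 0.002064 / 100   # 0.002064 % power-transfer efficiency
received_w = 0.5e-6           # ~0.5 uW received at the (127 um)^3 mote
transmit_w = received_w / efficiency
print(f"implied interrogator output: {transmit_w * 1e3:.1f} mW")
```

    This works out to roughly 24 mW delivered by the interrogator, a level the paper's broader argument is that ultrasound can supply far more efficiently at these scales than electromagnetic coupling.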

  15. Linear and Nonlinear Analysis of Magnetic Bearing Bandwidth Due to Eddy Current Limitations

    NASA Technical Reports Server (NTRS)

    Kenny, Andrew; Palazzolo, Alan

    2000-01-01

    Finite element analysis was used to study the bandwidth of Hiperco 50A alloy and silicon-iron laminated rotors and stators in magnetic bearings. A three dimensional model was made of a heteropolar bearing in which all the flux circulated in the plane of the rotor and stator laminate. A three dimensional model of a plate similar to the region of a pole near the gap was also studied with a very fine mesh. Nonlinear time transient solutions for the net flux carried by the plate were compared to steady state time harmonic solutions. Both linear and quasi-nonlinear steady state time harmonic solutions were calculated and compared. The finite element solutions for power loss and flux bandwidth were compared to those determined from classical analytical solutions to Maxwell's equations.

  16. On the bandwidth of the plenoptic function.

    PubMed

    Do, Minh N; Marchand-Maillet, Davy; Vetterli, Martin

    2012-02-01

    The plenoptic function (POF) provides a powerful conceptual tool for describing a number of problems in image/video processing, vision, and graphics. For example, image-based rendering is shown as sampling and interpolation of the POF. In such applications, it is important to characterize the bandwidth of the POF. We study a simple but representative model of the scene where band-limited signals (e.g., texture images) are "painted" on smooth surfaces (e.g., of objects or walls). We show that, in general, the POF is not band limited unless the surfaces are flat. We then derive simple rules to estimate the essential bandwidth of the POF for this model. Our analysis reveals that, in addition to the maximum and minimum depths and the maximum frequency of painted signals, the bandwidth of the POF also depends on the maximum surface slope. With a unifying formalism based on multidimensional signal processing, we can verify several key results in POF processing, such as induced filtering in space and depth-corrected interpolation, and quantify the necessary sampling rates. © 2011 IEEE

  17. Information spreading in Delay Tolerant Networks based on nodes' behaviors

    NASA Astrophysics Data System (ADS)

    Wu, Yahui; Deng, Su; Huang, Hongbin

    2014-07-01

    Information spreading in DTNs (Delay Tolerant Networks) adopts a store-carry-forward method, in which nodes receive messages from other nodes directly. However, it is hard to judge whether the information is safe in this communication mode. In this case, a node may observe other nodes' behaviors. At present, there is no theoretical model that describes how a node's trust level varies. In addition, due to the uncertainty of connectivity in a DTN, it is hard for a node to obtain the global state of the network. Therefore, a rational model of a node's trust level should be a function of the node's own observations. For example, if a node finds k nodes carrying a message, it may trust the information with probability p(k). This paper does not explore the real distribution of p(k), but instead presents a unifying theoretical framework to evaluate the performance of information spreading in the above case. The framework is an extension of the traditional SI (susceptible-infected) model and is applicable when p(k) conforms to any distribution. Simulations based on both synthetic and real motion traces show the accuracy of the framework. Finally, we explore the impact of the nodes' behaviors under certain special distributions through numerical results.
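
    The extended SI idea can be illustrated with a small Monte Carlo sketch in which a susceptible node that has observed k message carriers accepts (trusts) the message with probability p(k) at its next contact. The pairwise contact model and the particular form of p(k) below are illustrative assumptions; the paper deliberately leaves the distribution of p(k) open:

```python
import random

def simulate_spread(n=200, p=lambda k: 1 - 0.5 ** k, steps=300, seed=7):
    """Discrete-time SI-style spread: at each step a random ordered
    pair of nodes meets; a susceptible node counts the carriers it
    has observed and turns carrier with probability p(k)."""
    rng = random.Random(seed)
    carrier = [False] * n
    carrier[0] = True            # one initial message carrier
    observed = [0] * n           # carriers each node has met so far
    for _ in range(steps):
        a, b = rng.randrange(n), rng.randrange(n)
        if carrier[a] and not carrier[b]:
            observed[b] += 1
            if rng.random() < p(observed[b]):
                carrier[b] = True
    return sum(carrier)
```

    Setting p(k) = 0 for all k reproduces a network where nobody trusts the message (only the source carries it), while p(k) = 1 recovers the classical SI model; intermediate choices let one probe how trusting behavior shapes the spreading curve.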

  18. Dual-window dual-bandwidth spectroscopic optical coherence tomography metric for qualitative scatterer size differentiation in tissues.

    PubMed

    Tay, Benjamin Chia-Meng; Chow, Tzu-Hao; Ng, Beng-Koon; Loh, Thomas Kwok-Seng

    2012-09-01

    This study investigates the autocorrelation bandwidths of the dual-window (DW) optical coherence tomography (OCT) k-space scattering profiles of different-sized microspheres and their correlation to scatterer size. A dual-bandwidth spectroscopic metric, defined as the ratio of the 10% to 90% autocorrelation bandwidths, is found to change monotonically with microsphere size and gives the best contrast enhancement for scatterer size differentiation in the resulting spectroscopic image. A simulation model supports the experimental results and reveals a tradeoff between the smallest detectable scatterer size and the maximum scatterer size in the linear range of the dual-window dual-bandwidth (DWDB) metric, which depends on the choice of the light source optical bandwidth. Spectroscopic OCT (SOCT) images of microspheres and tonsil tissue samples based on the proposed DWDB metric showed clear differentiation between different-sized scatterers as compared to those derived from conventional short-time Fourier transform metrics. The DWDB metric significantly improves the contrast in SOCT imaging and can aid the visualization and identification of dissimilar scatterer sizes in a sample. Potential applications include the early detection of cell nuclear changes in tissue carcinogenesis, the monitoring of healing tendons, and cell proliferation in tissue scaffolds.
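
    At its core the DWDB metric is a ratio of two threshold widths of the normalised autocorrelation of a sampled k-space profile. One plausible way to compute it is sketched below; the sampling, normalisation, and width definition are assumptions, not the paper's exact procedure:

```python
import numpy as np

def dwdb_metric(profile):
    """Ratio of the 10% to 90% autocorrelation bandwidths of a
    sampled k-space scattering profile: the widths (in samples)
    over which the normalised autocorrelation stays above 0.10
    and 0.90 of its peak, respectively."""
    ac = np.correlate(profile, profile, mode="full")
    ac = ac / ac.max()
    width = lambda level: np.count_nonzero(ac >= level)
    return width(0.10) / width(0.90)
```

    For a smooth, narrow profile the 10% width greatly exceeds the 90% width, so the metric is well above 1; as the profile broadens or develops oscillations from larger scatterers, the two widths change at different rates, which is what gives the ratio its size sensitivity.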

  19. Negative inductance circuits for metamaterial bandwidth enhancement

    NASA Astrophysics Data System (ADS)

    Avignon-Meseldzija, Emilie; Lepetit, Thomas; Ferreira, Pietro Maris; Boust, Fabrice

    2017-12-01

    Passive metamaterials have yet to be translated into applications on a large scale due in large part to their limited bandwidth. To overcome this limitation many authors have suggested coupling metamaterials to non-Foster circuits. However, up to now, the number of convincing demonstrations based on non-Foster metamaterials has been very limited. This paper intends to clarify why progress has been so slow, i.e., the fundamental difficulty in making a truly broadband and efficient non-Foster metamaterial. To this end, we consider two families of metamaterials, namely Artificial Magnetic Media and Artificial Magnetic Conductors. In both cases, it turns out that bandwidth enhancement requires negative inductance with almost zero resistance. To estimate bandwidth enhancement with actual non-Foster circuits, we consider two classes of such circuits, namely Linvill and gyrator. The issue of stability being critical, both metamaterial families are studied with equivalent circuits that include advanced models of these non-Foster circuits. Conclusions are different for Artificial Magnetic Media coupled to Linvill circuits and Artificial Magnetic Conductors coupled to gyrator circuits. In the first case, requirements for bandwidth enhancement and stability are very hard to meet simultaneously whereas, in the second case, an adjustment of the transistor gain does significantly increase bandwidth.

  20. Fuzzy Neural Network-Based Interacting Multiple Model for Multi-Node Target Tracking Algorithm

    PubMed Central

    Sun, Baoliang; Jiang, Chunlan; Li, Ming

    2016-01-01

    An interacting multiple model for multi-node target tracking algorithm was proposed based on a fuzzy neural network (FNN) to solve the multi-node target tracking problem of wireless sensor networks (WSNs). Measured error variance was adaptively adjusted during the multiple model interacting output stage using the difference between the theoretical and estimated values of the measured error covariance matrix. The FNN fusion system was established during multi-node fusion to integrate with the target state estimated data from different nodes and consequently obtain network target state estimation. The feasibility of the algorithm was verified based on a network of nine detection nodes. Experimental results indicated that the proposed algorithm could trace the maneuvering target effectively under sensor failure and unknown system measurement errors. The proposed algorithm exhibited great practicability in the multi-node target tracking of WSNs. PMID:27809271

  1. Directed Diffusion Modelling for Tesso Nilo National Parks Case Study

    NASA Astrophysics Data System (ADS)

    Yasri, Indra; Safrianti, Ery

    2018-01-01

    Directed Diffusion (DD) has the ability to achieve energy efficiency in Wireless Sensor Networks (WSN). This paper proposes Directed Diffusion (DD) modelling for the Tesso Nilo National Parks (TNNP) case study. The modelling involves four stages of scenarios. It starts by appointing a sampling area through GPS coordinates. The sampling area is determined by an optimization process from 500m x 500m up to 1000m x 1000m in 100m increments. The next stage is sensor node placement: sensor nodes are distributed in the sampling area in three different quantities, i.e. 20 nodes, 30 nodes and 40 nodes, and one of these quantities is chosen as the optimized sensor node placement. The third stage implements all the scenarios from stages 1 and 2 in the DD modelling. In the last stage, an evaluation process finds the most energy-efficient combination of sampling area and sensor node placement under the Directed Diffusion (DD) routing protocol. The result shows that the combination of a 500m x 500m sampling area and 20 nodes achieves the energy efficiency needed to support a forest fire prevention system at Tesso Nilo National Parks.

  2. Vulnerability of networks of interacting Markov chains.

    PubMed

    Kocarev, L; Zlatanov, N; Trajanov, D

    2010-05-13

    The concept of vulnerability is introduced for a model of random, dynamical interactions on networks. In this model, known as the influence model, the nodes are arranged in an arbitrary network, while the evolution of the status at a node is according to an internal Markov chain, but with transition probabilities that depend not only on the current status of that node but also on the statuses of the neighbouring nodes. Vulnerability is treated analytically and numerically for several networks with different topological structures, as well as for two real networks--the network of infrastructures and the EU power grid--identifying the most vulnerable nodes of these networks.

  3. Identification of flexible structures by frequency-domain observability range context

    NASA Astrophysics Data System (ADS)

    Hopkins, M. A.

    2013-04-01

    The well known frequency-domain observability range space extraction (FORSE) algorithm provides a powerful multivariable system-identification tool with inherent flexibility, to create state-space models from frequency-response data (FRD). This paper presents a method of using FORSE to create "context models" of a lightly damped system, from which models of individual resonant modes can be extracted. Further, it shows how to combine the extracted models of many individual modes into one large state-space model. Using this method, the author has created very high-order state-space models that accurately match measured FRD over very broad bandwidths, i.e., resonant peaks spread across five orders-of-magnitude of frequency bandwidth.

  4. Achieving Agreement in Three Rounds With Bounded-Byzantine Faults

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2015-01-01

    A three-round algorithm is presented that guarantees agreement in a system of K ≥ 3F + 1 nodes, provided each faulty node induces no more than F faults and each good node experiences no more than F faults, where F is the maximum number of simultaneous faults in the network. The algorithm is based on the Oral Message algorithm of Lamport et al., is scalable with respect to the number of nodes in the system, and applies equally to the traditional node-fault model and the link-fault model. We also present a mechanical verification of the algorithm, focusing on verifying the correctness of a bounded model of the algorithm as well as confirming claims of determinism.

  5. Content-Based Multi-Channel Network Coding Algorithm in the Millimeter-Wave Sensor Network

    PubMed Central

    Lin, Kai; Wang, Di; Hu, Long

    2016-01-01

    With the development of wireless technology, the widespread use of 5G is already an irreversible trend, and millimeter-wave sensor networks are becoming more and more common. However, due to the high degree of complexity and bandwidth bottlenecks, the millimeter-wave sensor network still faces numerous problems. In this paper, we propose a novel content-based multi-channel network coding algorithm, which uses the functions of data fusion, multi-channel and network coding to improve the data transmission; the algorithm is referred to as content-based multi-channel network coding (CMNC). The CMNC algorithm provides a fusion-driven model based on the Dempster-Shafer (D-S) evidence theory to classify the sensor nodes into different classes according to the data content. By using the result of the classification, the CMNC algorithm also provides the channel assignment strategy and uses network coding to further improve the quality of data transmission in the millimeter-wave sensor network. Extensive simulations are carried out and compared to other methods. Our simulation results show that the proposed CMNC algorithm can effectively improve the quality of data transmission and has better performance than the compared methods. PMID:27376302

  6. [Glossary of terms used by radiologists in image processing].

    PubMed

    Rolland, Y; Collorec, R; Bruno, A; Ramée, A; Morcet, N; Haigron, P

    1995-01-01

    We give the definition of 166 words used in image processing. Adaptivity, aliasing, analog-digital converter, analysis, approximation, arc, artifact, artificial intelligence, attribute, autocorrelation, bandwidth, boundary, brightness, calibration, class, classification, classify, centre, cluster, coding, color, compression, contrast, connectivity, convolution, correlation, data base, decision, decomposition, deconvolution, deduction, descriptor, detection, digitization, dilation, discontinuity, discretization, discrimination, disparity, display, distance, distortion, distribution, dynamic, edge, energy, enhancement, entropy, erosion, estimation, event, extrapolation, feature, file, filter, filter floaters, fitting, Fourier transform, frequency, fusion, fuzzy, Gaussian, gradient, graph, gray level, group, growing, histogram, Hough transform, Hounsfield, image, impulse response, inertia, intensity, interpolation, interpretation, invariance, isotropy, iterative, JPEG, knowledge base, label, Laplacian, learning, least squares, likelihood, matching, Markov field, mask, mathematical morphology, merge (to), MIP, median, minimization, model, moiré, moment, MPEG, neural network, neuron, node, noise, norm, normal, operator, optical system, optimization, orthogonal, parametric, pattern recognition, periodicity, photometry, pixel, polygon, polynomial, prediction, pulsation, pyramidal, quantization, raster, reconstruction, recursive, region, rendering, representation space, resolution, restoration, robustness, ROC, thinning, transform, sampling, saturation, scene analysis, segmentation, separable function, sequential, smoothing, spline, split (to), shape, threshold, tree, signal, speckle, spectrum, stationarity, statistical, stochastic, structuring element, support, syntactic, synthesis, texture, truncation, variance, vision, voxel, windowing.

  7. A multi-node model for transient heat transfer analysis of stratospheric airships

    NASA Astrophysics Data System (ADS)

    Alam, Mohammad Irfan; Pant, Rajkumar S.

    2017-06-01

    This paper describes a seven-node thermal model for transient heat transfer analysis of a solar powered stratospheric airship in floating condition. The solar array is modeled as a three-node system, viz., outer layer, solar cell and substrate. The envelope is also modeled as three nodes, and the contained gas is considered as the seventh node. The heat transfer equations involving radiative, infra-red and conductive heat are solved simultaneously using a fourth-order Runge-Kutta method. The model can be used to study the effect of solar radiation, ambient wind, altitude and location of deployment of the airship on the temperature of the solar array. The model has been validated against experimental data and numerical results quoted in the literature. The effect of changes in some operational parameters on the temperature of the solar array, and hence on its power output, is also discussed.
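    The simultaneous integration of the node heat-balance equations can be sketched with a generic fourth-order Runge-Kutta step. The derivative function f below is a placeholder for the seven coupled radiative, infra-red and conductive heat-balance equations, which are not given in the abstract.

```python
def rk4_step(f, T, t, dt):
    """One fourth-order Runge-Kutta step for the node temperature vector T.

    f(t, T) must return the list of temperature derivatives dT/dt,
    one per thermal node, evaluated with all couplings.
    """
    k1 = f(t, T)
    k2 = f(t + dt / 2, [Ti + dt / 2 * ki for Ti, ki in zip(T, k1)])
    k3 = f(t + dt / 2, [Ti + dt / 2 * ki for Ti, ki in zip(T, k2)])
    k4 = f(t + dt, [Ti + dt * ki for Ti, ki in zip(T, k3)])
    return [Ti + dt / 6 * (a + 2 * b + 2 * c + d)
            for Ti, a, b, c, d in zip(T, k1, k2, k3, k4)]
```

    With f returning the coupled heat balances, repeated calls to rk4_step advance all seven node temperatures together through a simulated day.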

  8. Heuristic approaches for energy-efficient shared restoration in WDM networks

    NASA Astrophysics Data System (ADS)

    Alilou, Shahab

    In recent years, there has been ongoing research on the design of energy-efficient Wavelength Division Multiplexing (WDM) networks. The explosive growth of Internet traffic has led to increased power consumption of network components. Network survivability has also been a relevant research topic, as it plays a crucial role in assuring continuity of service, with no disruption regardless of network component failure. Network survivability mechanisms tend to consume considerable resources, such as spare capacity, in order to protect and restore information. This thesis investigates techniques for reducing energy demand and enhancing energy efficiency in the context of network survivability. We propose two novel heuristic energy-efficient shared protection approaches for WDM networks. These approaches save energy by putting devices that are not in use into sleep mode while still providing shared backup paths to satisfy network survivability. The first approach exploits the properties of a mathematical series to assign weights to the network links. It aims at reducing power consumption in the network indirectly, by aggregating traffic on a set of nodes and links with a high traffic load. Routing traffic over links and nodes that are already utilized makes it possible to put links and nodes with no load into sleep mode. The second approach dynamically routes traffic through nodes and links with a high traffic load. Similar to the first approach, it computes a pair of paths for every newly arrived demand, by comparing the power consumption of nodes and links in the network before the demand arrives with their potential power consumption if they are chosen along the paths of the demand. Simulations of two different networks were used to compare the total network power consumption obtained using the proposed techniques against a standard shared-path restoration scheme.
    Shared-path restoration is a network survivability method in which a link-disjoint backup path and wavelength are reserved at the time of call setup for a working path. However, in order to reduce spare capacity consumption, this reserved backup path and wavelength may be shared with other backup paths. The Pool Sharing Scheme (PSS) is employed to implement the shared-path restoration scheme [1]. In an optical network, the failure of a single link leads to the failure of all the lightpaths that pass through that link. PSS ensures that the amount of backup bandwidth required on a link to restore the failed connections will not exceed the total amount of reserved backup bandwidth on that link. Simulation results indicate that the proposed approaches lead to up to 35% power savings in WDM networks when the traffic load is low; the power saving decreases to 14% at a high traffic load. Furthermore, in terms of capacity consumption for working paths, PSS outperforms the two proposed approaches, as expected, while in terms of total capacity consumption all the approaches behave similarly. In general, at a low traffic load, the two proposed approaches behave similarly to PSS in terms of average link load and the ratio of blocked demands. Nevertheless, at a high traffic load, the proposed approaches result in a higher ratio of blocked demands than PSS. They also lead to a higher average link load than PSS for an equal number of generated demands.
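    The idea of steering demands onto already-active resources so that idle ones can sleep can be sketched as a shortest-path search whose link weights penalize waking sleeping equipment. This is a hypothetical weighting for illustration, not the thesis's series-based weight assignment; the graph, the `wake_cost` penalty and the active-link set are all invented for the example.

```python
import heapq

def power_aware_path(graph, active, src, dst, wake_cost=10.0):
    """Dijkstra where links not in the active set carry an extra wake-up
    penalty, steering traffic onto already-loaded links so idle ones sleep.

    graph: dict mapping node -> list of (neighbor, base_weight).
    active: set of (u, v) links currently carrying traffic.
    """
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v, base in graph.get(u, []):
            w = base + (0.0 if (u, v) in active or (v, u) in active else wake_cost)
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path)), dist[dst]
```

    With a large enough penalty, a nominally longer but already-active route is preferred over waking a sleeping shortcut, which is the qualitative behavior both heuristics aim for.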

  9. Power and Efficiency Optimized in Traveling-Wave Tubes Over a Broad Frequency Bandwidth

    NASA Technical Reports Server (NTRS)

    Wilson, Jeffrey D.

    2001-01-01

    A traveling-wave tube (TWT) is an electron beam device that is used to amplify electromagnetic communication waves at radio and microwave frequencies. TWTs are critical components in deep space probes, communication satellites, and high-power radar systems. Power conversion efficiency is of paramount importance for TWTs employed in deep space probes and communication satellites. A previous effort was very successful in increasing efficiency and power at a single frequency (ref. 1). Such an algorithm is sufficient for narrow-bandwidth designs, but for optimal designs in applications that require high radiofrequency power over a wide bandwidth, such as high-density communications or high-resolution radar, the variation of the circuit response with frequency must be considered. This work at the NASA Glenn Research Center is the first to develop techniques for optimizing TWT efficiency and output power over a broad frequency bandwidth (ref. 2). The techniques are based on simulated annealing, which has the advantage over conventional optimization techniques of enabling the best possible solution to be obtained (ref. 3). Two new broadband simulated annealing algorithms were developed that optimize (1) minimum saturated power efficiency over a frequency bandwidth and (2) simultaneous bandwidth and minimum power efficiency over the frequency band with constant input power. The algorithms were incorporated into the NASA coupled-cavity TWT computer model (ref. 4) and used to design optimal phase velocity tapers, using the 59- to 64-GHz Hughes 961HA coupled-cavity TWT as a baseline model. In comparison to the baseline design, the computational results of the first broadband design algorithm show an improvement of 73.9 percent in minimum saturated efficiency (see the top graph). The second broadband design algorithm (see the bottom graph) improves minimum radiofrequency efficiency with constant input power drive by a factor of 2.7 at the high band edge (64 GHz) and increases simultaneous bandwidth by 500 MHz.
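    The simulated-annealing acceptance rule underlying both broadband algorithms can be sketched in a few lines. This is a generic one-dimensional minimizer with an illustrative quadratic cost, not the NASA coupled-cavity TWT optimizer; the step size, cooling schedule and iteration count are arbitrary choices.

```python
import math
import random

def anneal(cost, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000, seed=1):
    """Generic simulated-annealing minimizer.

    Worse candidates are accepted with probability exp(-delta/T), which
    decreases as the temperature T cools, so the search gradually shifts
    from exploration to refinement.
    """
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = cost(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest
```

    A broadband objective such as "maximize the minimum efficiency across the band" would be passed in as `cost` by negating the worst-case efficiency over a grid of frequencies.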

  10. Partially pre-calculated weights for the backpropagation learning regime and high accuracy function mapping using continuous input RAM-based sigma-pi nets.

    PubMed

    Neville, R S; Stonham, T J; Glover, R J

    2000-01-01

    In this article we present a methodology that partially pre-calculates the weight updates of the backpropagation learning regime and obtains high-accuracy function mapping. The paper shows how to implement neural units in a digital formulation which enables the weights to be quantised to 8 bits and the activations to 9 bits. A novel methodology is introduced to increase the accuracy of sigma-pi units by expanding their internal state space. We also introduce a novel means of implementing bit-streams in ring memories instead of utilising shift registers. The investigation utilises digital "Higher Order" sigma-pi nodes and studies continuous-input RAM-based sigma-pi units. The units are trained with the backpropagation learning regime to learn functions to a high accuracy. The neural model is the sigma-pi unit, which can be implemented in digital microelectronic technology. The ability to perform tasks that require the input of real-valued information is one of the central requirements of any cognitive system that utilises artificial neural network methodologies. In this article we present recent research which investigates a technique that can be used for mapping accurate real-valued functions to RAM-nets. One of our goals was to achieve accuracies of better than 1% for target output functions in the range Y ∈ [0, 1]; this is equivalent to an average Mean Square Error (MSE) over all training vectors of 0.0001, or an error modulus of 0.01. We present a development of the sigma-pi node which enables the provision of high-accuracy outputs. The sigma-pi neural model was initially developed by Gurney (Learning in nets of structured hypercubes. PhD Thesis, Department of Electrical Engineering, Brunel University, Middlesex, UK, 1989; available as Technical Memo CN/R/144). Gurney's neuron model, the Time Integration Node (TIN), utilises an activation that is derived from a bit-stream. In this article we present a new methodology for storing a sigma-pi node's activations as single averaged values. In the course of the article we state what we define as a real number, and how we represent real numbers and input continuous values in our neural system. We show how to utilise the bounded quantised site-values (weights) of sigma-pi nodes to make training of these neurocomputing systems simple, using pre-calculated look-up tables to train the nets. In order to meet our accuracy goal, we introduce a means of increasing the bandwidth capability of sigma-pi units by expanding their internal state-space. In our implementation we utilise bit-streams when we calculate the real-valued outputs of the net. To simplify the hardware implementation of bit-streams we present a method of mapping them to RAM-based hardware using 'ring memories'. Finally, we study the sigma-pi units' ability to generalise once they are trained to map real-valued, high-accuracy, continuous functions. We use sigma-pi units as they have been shown to have shorter training times than their analogue counterparts and can also overcome some of the drawbacks of semi-linear units (Gurney, 1992. Neural Networks, 5, 289-303).

  11. A novel PON based UMTS broadband wireless access network architecture with an algorithm to guarantee end to end QoS

    NASA Astrophysics Data System (ADS)

    Sana, Ajaz; Hussain, Shahab; Ali, Mohammed A.; Ahmed, Samir

    2007-09-01

    In this paper we propose a novel Passive Optical Network (PON) based broadband wireless access network architecture to provide multimedia services (video telephony, video streaming, mobile TV, mobile emails, etc.) to mobile users. In conventional wireless access networks, the base stations (Node B) and Radio Network Controllers (RNC) are connected by point-to-point T1/E1 lines (the Iub interface). The T1/E1 lines are expensive and add to operating costs. Also, the resources (transceivers and T1/E1) are designed for peak-hour traffic, so most of the time the dedicated resources are idle and wasted. Furthermore, the T1/E1 lines are not capable of supporting the bandwidth (BW) required by next-generation wireless multimedia services proposed by High Speed Packet Access (HSPA, Rel. 5) for the Universal Mobile Telecommunications System (UMTS) and Evolution-Data Optimized (EV-DO) for Code Division Multiple Access 2000 (CDMA2000). The proposed PON-based backhaul can provide gigabit data rates, and the Iub interface can be dynamically shared by Node Bs. The BW is dynamically allocated, and unused BW from lightly loaded Node Bs is assigned to heavily loaded Node Bs. We also propose a novel algorithm to provide end-to-end Quality of Service (QoS) between the RNC and the user equipment. The algorithm provides QoS bounds in the wired domain as well as in the wireless domain, with compensation for wireless link errors. Because of the air interface, there can be periods when the user equipment (UE) is unable to communicate with the Node B (usually referred to as a link error); such link errors are bursty and location dependent. In the proposed approach, the scheduler at the Node B maps QoS priorities and weights into the wireless MAC. Compensation for errored links is provided by swapping services between the active users, and the user data is divided into flows, with flows allowed to lag or lead. The algorithm guarantees (1) delay and throughput for error-free flows, (2) short-term fairness among error-free flows, (3) long-term fairness among errored and error-free flows, and (4) graceful degradation for leading flows and graceful compensation for lagging flows.

  12. Performance prediction of high Tc superconducting small antennas using a two-fluid-moment method model

    NASA Astrophysics Data System (ADS)

    Cook, G. G.; Khamas, S. K.; Kingsley, S. P.; Woods, R. C.

    1992-01-01

    The radar cross section and Q factors of electrically small dipole and loop antennas made with a YBCO high-Tc superconductor are predicted using a two-fluid-moment method model, in order to determine the effects of finite conductivity on the performance of such antennas. The results compare the useful operating bandwidths of YBCO antennas exhibiting varying degrees of impurity with their copper counterparts at 77 K, showing a linear relationship between bandwidth and impurity level.

  13. Intersatellite communications optoelectronics research at the Goddard Space Flight Center

    NASA Technical Reports Server (NTRS)

    Krainak, Michael A.

    1992-01-01

    A review is presented of current optoelectronics research and development at the NASA Goddard Space Flight Center for high-power, high-bandwidth laser transmitters; high-bandwidth, high-sensitivity optical receivers; pointing, acquisition, and tracking components; and experimental and theoretical system modeling. Program hardware and space flight opportunities are presented.

  14. Modelling Time-of-Arrival Ambiguities in a Combined Acousto-Optic and Crystal Video Receiver

    DTIC Science & Technology

    1995-11-01

    The probability of pulses overlapping in time being received by a combined acousto-optic/crystal video receiver is investigated. Theoretical analysis...number of pulses in that bandwidth. The number of frequency subbands with crystal detectors required to cover the acousto-optic receiver bandwidth is therefore a compromise between cost and complexity of implementation.

  15. VENI, video, VICI: The merging of computer and video technologies

    NASA Technical Reports Server (NTRS)

    Horowitz, Jay G.

    1993-01-01

    The topics covered include the following: High Definition Television (HDTV) milestones; visual information bandwidth; television frequency allocation and bandwidth; horizontal scanning; workstation RGB color domain; NTSC color domain; American HDTV time-table; HDTV image size; digital HDTV hierarchy; task force on digital image architecture; open architecture model; future displays; and the ULTIMATE imaging system.

  16. Service-Oriented Node Scheduling Scheme for Wireless Sensor Networks Using Markov Random Field Model

    PubMed Central

    Cheng, Hongju; Su, Zhihuang; Lloret, Jaime; Chen, Guolong

    2014-01-01

    Future wireless sensor networks are expected to provide various sensing services, and energy efficiency is one of the most important criteria. A node scheduling strategy aims to increase network lifetime by selecting a set of sensor nodes to provide the required sensing services in a periodic manner. In this paper, we are concerned with the service-oriented node scheduling problem of providing multiple sensing services while maximizing the network lifetime. We firstly introduce how to model the data correlation for different services by using the Markov Random Field (MRF) model. Secondly, we formulate the service-oriented node scheduling issue as three different problems, namely, the multi-service data denoising problem, which aims at minimizing the noise level of sensed data; the representative node selection problem, which concerns selecting a number of active nodes while determining the services they provide; and the multi-service node scheduling problem, which aims at maximizing the network lifetime. Thirdly, we propose a Multi-service Data Denoising (MDD) algorithm, a novel multi-service Representative node Selection and service Determination (RSD) algorithm, and a novel MRF-based Multi-service Node Scheduling (MMNS) scheme to solve these three problems, respectively. Finally, extensive experiments demonstrate that the proposed scheme efficiently extends the network lifetime. PMID:25384005

  17. Design and use of multisine signals for Li-ion battery equivalent circuit modelling. Part 1: Signal design

    NASA Astrophysics Data System (ADS)

    Widanage, W. D.; Barai, A.; Chouchelamane, G. H.; Uddin, K.; McGordon, A.; Marco, J.; Jennings, P.

    2016-08-01

    The Pulse Power Current (PPC) profile is often the signal of choice for obtaining the parameters of a Lithium-ion (Li-ion) battery Equivalent Circuit Model (ECM). Subsequently, a drive-cycle current profile is used as a validation signal. Such a profile, in contrast to a PPC, is more dynamic in both amplitude and frequency bandwidth. Modelling errors can occur when using PPC data for parametrisation, since the model is optimised over a narrower bandwidth than the validation profile. A signal more representative of a drive-cycle, while maintaining a degree of generality, is needed to reduce such modelling errors. In Part 1 of this 2-part paper, a signal design technique termed a pulse-multisine is presented. This superimposes a signal known as a multisine on a discharge, rest and charge base signal to achieve a profile that is more dynamic in amplitude and frequency bandwidth, and thus more similar to a drive-cycle. The signal improves modelling accuracy and reduces the experimentation time, per state-of-charge (SoC) and temperature, to several minutes, compared to several hours for a PPC experiment.
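    The pulse-multisine construction can be sketched as a sum of sinusoids added to a piecewise base current. The base profile, frequencies, amplitudes and phases below are invented for illustration; the paper's actual design procedure (including how phases are chosen to limit the peak current) is the subject of the article itself.

```python
import math

def multisine(t, freqs, amps, phases):
    """Sum of sinusoids; in practice phases are chosen (e.g. randomized or
    Schroeder-spaced) to keep the peak amplitude of the sum low."""
    return sum(a * math.sin(2 * math.pi * f * t + p)
               for f, a, p in zip(freqs, amps, phases))

def pulse_multisine(t, base, freqs, amps, phases):
    """Illustrative pulse-multisine: a discharge/rest/charge base current
    plus a multisine ripple that broadens the excitation bandwidth."""
    return base(t) + multisine(t, freqs, amps, phases)

# Hypothetical base profile: discharge 10 s, rest 10 s, then charge.
def base(t):
    return -1.0 if t < 10 else (0.0 if t < 20 else 1.0)
```

    Sampling `pulse_multisine` over the test duration yields the excitation current to program into the battery cycler.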

  18. Implementing Molecular Dynamics on Hybrid High Performance Computers - Three-Body Potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, W Michael; Yamada, Masako

    The use of coprocessors or accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, defined as machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. Although there has been extensive research into methods to efficiently use accelerators to improve the performance of molecular dynamics (MD) employing pairwise potential energy models, little is reported in the literature for models that include many-body effects. 3-body terms are required for many popular potentials such as MEAM, Tersoff, REBO, AIREBO, Stillinger-Weber, Bond-Order Potentials, and others. Because the per-atom simulation times are much higher for models incorporating 3-body terms, there is a clear need for efficient algorithms usable on hybrid high performance computers. Here, we report a shared-memory force-decomposition for 3-body potentials that avoids memory conflicts to allow for a deterministic code with substantial performance improvements on hybrid machines. We describe modifications necessary for use in distributed memory MD codes and show results for the simulation of water with Stillinger-Weber on the hybrid Titan supercomputer. We compare performance of the 3-body model to the SPC/E water model when using accelerators. Finally, we demonstrate that our approach can attain a speedup of 5.1 with acceleration on Titan for production simulations to study water droplet freezing on a surface.

  19. Tumor implantation model for rapid testing of lymphatic dye uptake from paw to node in small animals

    NASA Astrophysics Data System (ADS)

    DSouza, Alisha V.; Elliott, Jonathan T.; Gunn, Jason R.; Barth, Richard J.; Samkoe, Kimberley S.; Tichauer, Kenneth M.; Pogue, Brian W.

    2015-03-01

    The morbidity and complexity involved in lymph node staging via surgical resection and biopsy call for staging techniques that are less invasive. While visible blue dyes are commonly used in locating sentinel lymph nodes, since they follow tumor-draining lymphatic vessels, they do not provide a metric to evaluate the presence of cancer. An area of active research is the use of fluorescent dyes to assess the tumor burden of sentinel and secondary lymph nodes. The goal of this work was to deploy and test an intra-nodal cancer-cell injection model to enable planar fluorescence imaging of a clinically relevant blue dye, specifically methylene blue - used in the sentinel lymph node procedure - in normal and tumor-bearing animals, and subsequently segregate tumor-bearing from normal lymph nodes. This direct-injection-based tumor model was employed in athymic rats (6 normal, 4 controls, 6 cancer-bearing), where luciferase-expressing breast cancer cells were injected into axillary lymph nodes. Tumor presence in nodes was confirmed by bioluminescence imaging before and after fluorescence imaging. Lymphatic uptake from the injection site (intradermal on the forepaw) to the lymph node was imaged at approximately 2 frames/minute. Large variability was observed within each cohort.

  20. Model of myosin node aggregation into a contractile ring: the effect of local alignment

    NASA Astrophysics Data System (ADS)

    Ojkic, Nikola; Wu, Jian-Qiu; Vavylonis, Dimitrios

    2011-09-01

    Actomyosin bundles frequently form through aggregation of membrane-bound myosin clusters. One such example is the formation of the contractile ring in fission yeast from a broad band of cortical nodes. Nodes are macromolecular complexes containing several dozens of myosin-II molecules and a few formin dimers. The condensation of a broad band of nodes into the contractile ring has been previously described by a search, capture, pull and release (SCPR) model. In SCPR, a random search process mediated by actin filaments nucleated by formins leads to transient actomyosin connections among nodes that pull one another into a ring. The SCPR model reproduces the transport of nodes over long distances and predicts observed clump-formation instabilities in mutants. However, the model does not generate transient linear elements and meshwork structures as observed in some wild-type and mutant cells during ring assembly. As a minimal model of node alignment, we added short-range aligning forces to the SCPR model representing currently unresolved mechanisms that may involve structural components, cross-linking and bundling proteins. We studied the effect of the local node alignment mechanism on ring formation numerically. We varied the new parameters and found viable rings for a realistic range of values. Morphologically, transient structures that form during ring assembly resemble those observed in experiments with wild-type and cdc25-22 cells. Our work supports a hierarchical process of ring self-organization involving components drawn together from distant parts of the cell followed by progressive stabilization.

  1. On using multiple routing metrics with destination sequenced distance vector protocol for MultiHop wireless ad hoc networks

    NASA Astrophysics Data System (ADS)

    Mehic, M.; Fazio, P.; Voznak, M.; Partila, P.; Komosny, D.; Tovarek, J.; Chmelikova, Z.

    2016-05-01

    A mobile ad hoc network is a collection of mobile nodes which communicate without a fixed backbone or centralized infrastructure. Due to the frequent mobility of nodes, routes connecting two distant nodes may change. Therefore, it is not possible to establish a priori fixed paths for message delivery through the network. Because of its importance, routing is the most studied problem in mobile ad hoc networks. In addition, if Quality of Service (QoS) is demanded, one must guarantee the QoS not only over a single hop but over an entire wireless multi-hop path, which may not be a trivial task. In turn, this requires the propagation of QoS information within the network. The key to the support of QoS reporting is QoS routing, which provides path QoS information at each source. To support QoS for real-time traffic, one needs to know not only the minimum delay on the path to the destination but also the bandwidth available on it. Throughput, end-to-end delay, and routing overhead are therefore the traditional performance metrics used to evaluate the performance of a routing protocol. To obtain additional information about a link, most link-quality metrics are based on calculating link loss probabilities by broadcasting probe packets. In this paper, we address the problem of including multiple routing metrics in the existing routing packets that are broadcast through the network. We evaluate the efficiency of this approach with a modified version of the DSDV routing protocol in the ns-3 simulator.
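    A widely used example of such a probe-based link-quality metric is ETX (expected transmission count), computed from forward and reverse probe delivery ratios. The sketch below illustrates that standard metric only; the abstract does not say which metrics the modified DSDV actually carries.

```python
def delivery_ratio(received, sent):
    """Fraction of probe packets that survived the link in one direction."""
    return received / sent if sent else 0.0

def etx(df, dr):
    """Expected Transmission Count for a link.

    df: forward delivery ratio (data packet reaches the neighbor).
    dr: reverse delivery ratio (ACK comes back).
    A transmission succeeds with probability df * dr, so the expected
    number of attempts is 1 / (df * dr).
    """
    if df <= 0 or dr <= 0:
        return float('inf')  # link never delivers; treat as unusable
    return 1.0 / (df * dr)
```

    A path metric is then the sum of per-link ETX values, so routing prefers fewer, more reliable hops; several such metrics could be piggybacked in the periodically broadcast routing packets.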

  2. A study of IEEE 802.15.4 security framework for wireless body area networks.

    PubMed

    Saleem, Shahnaz; Ullah, Sana; Kwak, Kyung Sup

    2011-01-01

    A Wireless Body Area Network (WBAN) is a collection of low-power and lightweight wireless sensor nodes that are used to monitor the human body functions and the surrounding environment. It supports a number of innovative and interesting applications, including ubiquitous healthcare and Consumer Electronics (CE) applications. Since WBAN nodes are used to collect sensitive (life-critical) information and may operate in hostile environments, they require strict security mechanisms to prevent malicious interaction with the system. In this paper, we first highlight major security requirements and Denial of Service (DoS) attacks in WBAN at the Physical, Medium Access Control (MAC), Network, and Transport layers. Then we discuss the IEEE 802.15.4 security framework and identify the security vulnerabilities and major attacks in the context of WBAN. Different types of attacks on the Contention Access Period (CAP) and Contention Free Period (CFP) parts of the superframe are analyzed and discussed. It is observed that a smart attacker can successfully corrupt an increasing number of GTS slots in the CFP period and can considerably affect the Quality of Service (QoS) in WBAN (since most of the data is carried in the CFP period). As the number of smart attackers increases, the number of corrupted GTS slots also increases, which prevents the legitimate nodes from utilizing the bandwidth efficiently. This means that a direct adaptation of the IEEE 802.15.4 security framework is not totally secure for certain WBAN applications. New solutions are required to integrate high-level security in WBAN.

  3. A Study of IEEE 802.15.4 Security Framework for Wireless Body Area Networks

    PubMed Central

    Saleem, Shahnaz; Ullah, Sana; Kwak, Kyung Sup

    2011-01-01

    A Wireless Body Area Network (WBAN) is a collection of low-power and lightweight wireless sensor nodes that are used to monitor the human body functions and the surrounding environment. It supports a number of innovative and interesting applications, including ubiquitous healthcare and Consumer Electronics (CE) applications. Since WBAN nodes are used to collect sensitive (life-critical) information and may operate in hostile environments, they require strict security mechanisms to prevent malicious interaction with the system. In this paper, we first highlight major security requirements and Denial of Service (DoS) attacks in WBAN at the Physical, Medium Access Control (MAC), Network, and Transport layers. Then we discuss the IEEE 802.15.4 security framework and identify the security vulnerabilities and major attacks in the context of WBAN. Different types of attacks on the Contention Access Period (CAP) and Contention Free Period (CFP) parts of the superframe are analyzed and discussed. It is observed that a smart attacker can successfully corrupt an increasing number of GTS slots in the CFP period and can considerably affect the Quality of Service (QoS) in WBAN (since most of the data is carried in the CFP period). As the number of smart attackers increases, the number of corrupted GTS slots also increases, which prevents the legitimate nodes from utilizing the bandwidth efficiently. This means that a direct adaptation of the IEEE 802.15.4 security framework is not totally secure for certain WBAN applications. New solutions are required to integrate high-level security in WBAN. PMID:22319358

  4. Topology reduction in deep convolutional feature extraction networks

    NASA Astrophysics Data System (ADS)

    Wiatowski, Thomas; Grohs, Philipp; Bölcskei, Helmut

    2017-08-01

    Deep convolutional neural networks (CNNs) used in practice employ potentially hundreds of layers and tens of thousands of nodes. Such network sizes entail significant computational complexity due to the large number of convolutions that need to be carried out; in addition, a large number of parameters needs to be learned and stored. Very deep and wide CNNs may therefore not be well suited to applications operating under severe resource constraints, as is the case, e.g., in low-power embedded and mobile platforms. This paper aims at understanding the impact of CNN topology, specifically depth and width, on the network's feature extraction capabilities. We address this question for the class of scattering networks that employ either Weyl-Heisenberg filters or wavelets, the modulus non-linearity, and no pooling. The exponential feature map energy decay results in Wiatowski et al., 2017, are generalized to O(a^(-N)), where an arbitrary decay factor a > 1 can be realized through suitable choice of the Weyl-Heisenberg prototype function or the mother wavelet. We then show how networks of fixed (possibly small) depth N can be designed to guarantee that ((1 - ɛ) · 100)% of the input signal's energy is contained in the feature vector. Based on the notion of operationally significant nodes, we characterize, partly rigorously and partly heuristically, the topology-reducing effects of (effectively) band-limited input signals, band-limited filters, and feature map symmetries. Finally, for networks based on Weyl-Heisenberg filters, we determine the prototype function bandwidth that minimizes - for fixed network depth N - the average number of operationally significant nodes per layer.

  5. Opinion formation driven by PageRank node influence on directed networks

    NASA Astrophysics Data System (ADS)

    Eom, Young-Ho; Shepelyansky, Dima L.

    2015-10-01

    We study a two-state opinion formation model driven by PageRank node influence and report an extensive numerical study of how PageRank affects collective opinion formation in large-scale empirical directed networks. In our model, the opinion of a node can be updated at each step by the sum of its neighbor nodes' opinions weighted by the node influence of those neighbors. We consider the PageRank probability and its sublinear power as node influence measures and investigate the evolution of opinion under various conditions. First, we observe that all networks reach a steady-state opinion after a certain relaxation time. This time scale decreases with the heterogeneity of node influence in the networks. Second, we find that our model shows consensus and non-consensus behavior in the steady state depending on the type of network: the Web graph, a citation network of physics articles, and the LiveJournal social network show non-consensus behavior, while the Wikipedia article network shows consensus behavior. Third, we find that a more heterogeneous influence distribution leads to a more uniform opinion state in the cases of the Web graph, Wikipedia, and LiveJournal; however, the opposite behavior is observed in the citation network. Finally, we identify that a small number of influential nodes can impose their own opinion on a significant fraction of other nodes in all considered networks. Our study shows that the effects of heterogeneity of node influence on opinion formation can be significant and suggests further investigations of the interplay between node influence and collective opinion in networks.
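
The update rule described in this abstract can be sketched directly. The toy implementation below (all names are assumptions; the paper's exact update schedule and tie-breaking may differ) computes PageRank by power iteration and then synchronously updates +/-1 opinions by the sign of the influence-weighted sum of in-neighbor opinions:

```python
def pagerank(adj, d=0.85, iters=100):
    """Power-iteration PageRank on a directed graph given as
    {node: [out-neighbors]}; dangling mass is spread uniformly."""
    nodes = list(adj)
    n = len(nodes)
    pr = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - d) / n for v in nodes}
        for v in nodes:
            out = adj[v]
            if out:
                share = d * pr[v] / len(out)
                for w in out:
                    new[w] += share
            else:  # dangling node: spread its rank uniformly
                for w in nodes:
                    new[w] += d * pr[v] / n
        pr = new
    return pr

def evolve_opinions(adj, opinions, alpha=1.0, steps=50):
    """Synchronously update each node's +/-1 opinion by the sign of the
    PageRank**alpha-weighted sum of its in-neighbors' opinions."""
    pr = pagerank(adj)
    influence = {v: pr[v] ** alpha for v in adj}
    in_nbrs = {v: [u for u in adj if v in adj[u]] for v in adj}
    op = dict(opinions)
    for _ in range(steps):
        new = {}
        for v in adj:
            s = sum(influence[u] * op[u] for u in in_nbrs[v])
            new[v] = op[v] if s == 0 else (1 if s > 0 else -1)
        op = new
    return op

# Node 0 is cited by nodes 1 and 2 (so it has high PageRank)
# and broadcasts to follower nodes 3, 4, 5.
adj = {0: [3, 4, 5], 1: [0], 2: [0], 3: [], 4: [], 5: []}
opinions = {0: 1, 1: 1, 2: 1, 3: -1, 4: -1, 5: -1}
final = evolve_opinions(adj, opinions)
print(final)
```

On this small graph the influential node 0 propagates its opinion to its followers, echoing the finding that a few influential nodes can sway a significant fraction of the network.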

  6. Real Time Global Tests of the ALICE High Level Trigger Data Transport Framework

    NASA Astrophysics Data System (ADS)

    Becker, B.; Chattopadhyay, S.; Cicalo, C.; Cleymans, J.; de Vaux, G.; Fearick, R. W.; Lindenstruth, V.; Richter, M.; Rohrich, D.; Staley, F.; Steinbeck, T. M.; Szostak, A.; Tilsner, H.; Weis, R.; Vilakazi, Z. Z.

    2008-04-01

    The High Level Trigger (HLT) system of the ALICE experiment is an online event filter and trigger system designed for input bandwidths of up to 25 GB/s at event rates of up to 1 kHz. The system is designed as a scalable PC cluster comprising several hundred nodes. The transport of data in the system is handled by an object-oriented data flow framework operating on the publisher-subscriber principle, designed to be fully pipelined with minimal processing overhead and communication latency in the cluster. In this paper, we report the latest measurements, in which this framework was operated on five different sites over a global north-south link extending more than 10,000 km, processing a "real-time" data flow.

  7. A Study of an Optical Lunar Surface Communications Network with High Bandwidth Direct to Earth Link

    NASA Technical Reports Server (NTRS)

    Wilson, K.; Biswas, A.; Schoolcraft, J.

    2011-01-01

    Performed optical DTE (direct-to-Earth) and lunar relay satellite link analyses: greater than 200 Mbps downlink to a 1-m Earth receiver and greater than 1 Mbps uplink are achieved with a mobile 5-cm lunar transceiver, while greater than 1 Gbps downlink and greater than 10 Mbps uplink are achieved with a 10-cm stationary lunar transceiver; MITLL (MIT Lincoln Laboratory) plans to demonstrate a 622 Mbps downlink with a 20 Mbps uplink between a lunar orbiter and a ground station in the 2013 LLCD (Lunar Laser Communications Demonstration). Identified the top five technology challenges to deploying a lunar optical network and performed preliminary experiments on two of them: (i) lunar dust removal and (ii) DTN (delay-tolerant networking) over an optical carrier. Exploring opportunities to evaluate DTN over an optical link in a multi-node network, e.g., Desert RATS.

  8. PREOPERATIVE MRI IMPROVES PREDICTION OF EXTENSIVE OCCULT AXILLARY LYMPH NODE METASTASES IN BREAST CANCER PATIENTS WITH A POSITIVE SENTINEL LYMPH NODE BIOPSY

    PubMed Central

    Loiselle, Christopher; Eby, Peter R.; Kim, Janice N.; Calhoun, Kristine E.; Allison, Kimberly H.; Gadi, Vijayakrishna K.; Peacock, Sue; Storer, Barry; Mankoff, David A.; Partridge, Savannah C.; Lehman, Constance D.

    2014-01-01

    Rationale and Objectives To test the ability of quantitative measures from preoperative Dynamic Contrast Enhanced MRI (DCE-MRI) to predict, independently and/or with the Katz pathologic nomogram, which breast cancer patients with a positive sentinel lymph node biopsy will have ≥ 4 positive axillary lymph nodes upon completion axillary dissection. Methods and Materials A retrospective review was conducted to identify clinically node-negative invasive breast cancer patients who underwent preoperative DCE-MRI, followed by sentinel node biopsy with positive findings and complete axillary dissection (6/2005 – 1/2010). Clinical/pathologic factors, primary lesion size and quantitative DCE-MRI kinetics were collected from clinical records and prospective databases. DCE-MRI parameters with univariate significance (p < 0.05) for predicting ≥ 4 positive axillary nodes were modeled with stepwise regression and compared to the Katz nomogram alone and to a combined MRI-Katz nomogram model. Results Ninety-eight patients with 99 positive sentinel biopsies met the study criteria. Stepwise regression identified DCE-MRI total persistent enhancement and volume-adjusted peak enhancement as significant predictors of ≥ 4 metastatic nodes. Receiver operating characteristic (ROC) curves demonstrated an area under the curve (AUC) of 0.78 for the Katz nomogram, 0.79 for the DCE-MRI multivariate model, and 0.87 for the combined MRI-Katz model. The combined model was significantly more predictive than the Katz nomogram alone (p = 0.003). Conclusion Integration of DCE-MRI primary lesion kinetics significantly improved the accuracy of the Katz pathologic nomogram in predicting the presence of metastases in ≥ 4 nodes. DCE-MRI may help identify sentinel node-positive patients requiring further local-regional therapy. PMID:24331270

  9. Gain and power optimization of the wireless optical system with multilevel modulation.

    PubMed

    Liu, Xian

    2008-06-01

    When used in an outdoor environment to expedite networking access, the performance of wireless optical communication systems is affected by transmitter sway. In the design of such systems, much attention has been paid to developing power-efficient schemes. However, the bandwidth efficiency is also an important issue. One of the most natural approaches to promote bandwidth efficiency is to use multilevel modulation. This leads to multilevel pulse amplitude modulation in the context of intensity modulation and direct detection. We develop a model based on the four-level pulse amplitude modulation. We show that the model can be formulated as an optimization problem in terms of the transmitter power, bit error probability, transmitter gain, and receiver gain. The technical challenges raised by modeling and solving the problem include the analytical and numerical treatments for the improper integrals of the Gaussian functions coupled with the erfc function. The results demonstrate that, at the optimal points, the power penalty paid to the doubled bandwidth efficiency is around 3 dB.
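
The "around 3 dB" penalty reported here is consistent with a standard textbook estimate (not the paper's full gain/power optimization): at a fixed target error slope, M-PAM under intensity modulation needs roughly (M - 1)/log2(M) times the average optical power of on-off keying, and since detected electrical power scales as the square of received optical power, the dB figure doubles. A quick check of that rule of thumb:

```python
import math

def pam_optical_penalty_db(M: int) -> float:
    """Average-optical-power penalty of M-PAM relative to OOK at the same
    bit rate and level spacing: (M - 1) / log2(M), expressed in dB.
    Textbook estimate, not the paper's optimized figure."""
    return 10 * math.log10((M - 1) / math.log2(M))

optical_db = pam_optical_penalty_db(4)  # ~1.76 dB more average optical power
electrical_db = 2 * optical_db          # ~3.5 dB in the electrical domain
print(round(optical_db, 2), round(electrical_db, 2))
```

The ~3.5 dB electrical figure is in the same ballpark as the abstract's "around 3 dB"; the paper's exact value comes from jointly optimizing transmitter power and the two antenna gains.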

  10. Modeling and experimental parametric study of a tri-leg compliant orthoplanar spring based multi-mode piezoelectric energy harvester

    NASA Astrophysics Data System (ADS)

    Dhote, Sharvari; Yang, Zhengbao; Zu, Jean

    2018-01-01

    This paper presents the modeling and experimental parametric study of a nonlinear multi-frequency broad bandwidth piezoelectric vibration-based energy harvester. The proposed harvester consists of a tri-leg compliant orthoplanar spring (COPS) and multiple masses with piezoelectric plates attached at three different locations. The vibration modes, resonant frequencies, and strain distributions are studied using the finite element analysis. The prototype is manufactured and experimentally investigated to study the effect of single as well as multiple light-weight masses on the bandwidth. The dynamic behavior of the harvester with a mass at the center is modeled numerically and characterized experimentally. The simulation and experimental results are in good agreement. A wide bandwidth with three close nonlinear vibration modes is observed during the experiments when four masses are added to the proposed harvester. The current generator with four masses shows a significant performance improvement with multiple nonlinear peaks under both forward and reverse frequency sweeps.

  11. Fine pointing of the Solar Optical Telescope in the Space Shuttle environment

    NASA Astrophysics Data System (ADS)

    Gowrinathan, S.

    Instruments requiring fine (i.e., sub-arcsecond) pointing, such as the Solar Optical Telescope (SOT), must be equipped with two-stage pointing devices, coarse and fine. Coarse pointing will be performed by a gimbal system, such as the Instrument Pointing System, while the image motion compensation (IMC) will provide fine pointing. This paper describes work performed on the SOT concept design that illustrates IMC as applied to SOT. The SOT control system was modeled in the frequency domain to evaluate performance, stability, and bandwidth requirements. The two requirements of the pointing control, i.e., the 2 arcsecond reproducibility and 0.03 arcsecond rms pointing jitter, can be satisfied by use of IMC at about 20 Hz bandwidth. The need for this high bandwidth is related to Shuttle-induced disturbances that arise primarily from man push-offs and vernier thruster firings. A block diagram of SOT model/stability analysis, schematic illustrations of the SOT pointing system, and a structural model summary are included.

  12. What is an expert? A systems perspective on expertise.

    PubMed

    Caley, Michael Julian; O'Leary, Rebecca A; Fisher, Rebecca; Low-Choy, Samantha; Johnson, Sandra; Mengersen, Kerrie

    2014-02-01

    Expert knowledge is a valuable source of information with a wide range of research applications. Despite the recent advances in defining expert knowledge, little attention has been given to how to view expertise as a system of interacting contributory factors for quantifying an individual's expertise. We present a systems approach to expertise that accounts for many contributing factors and their inter-relationships and allows quantification of an individual's expertise. A Bayesian network (BN) was chosen for this purpose. For illustration, we focused on taxonomic expertise. The model structure was developed in consultation with taxonomists. The relative importance of the factors within the network was determined by a second set of taxonomists (supra-experts) who also provided validation of the model structure. Model performance was assessed by applying the model to hypothetical career states of taxonomists designed to incorporate known differences in career states for model testing. The resulting BN model consisted of 18 primary nodes feeding through one to three higher-order nodes before converging on the target node (Taxonomic Expert). There was strong consistency among node weights provided by the supra-experts for some nodes, but not others. The higher-order nodes, "Quality of work" and "Total productivity", had the greatest weights. Sensitivity analysis indicated that although some factors had stronger influence in the outer nodes of the network, there was relatively equal influence of the factors leading directly into the target node. Despite the differences in the node weights provided by our supra-experts, there was good agreement among assessments of our hypothetical experts that accurately reflected differences we had specified. This systems approach provides a way of assessing the overall level of expertise of individuals, accounting for multiple contributory factors, and their interactions. 
Our approach is adaptable to other situations where it is desirable to understand components of expertise.

  13. Numerical modelling of flow through foam's node.

    PubMed

    Anazadehsayed, Abdolhamid; Rezaee, Nastaran; Naser, Jamal

    2017-10-15

    In this work, for the first time, a three-dimensional model to describe the dynamics of flow through the geometric Plateau border and node components of foam is presented. The model involves a microscopic-scale structure of one interior node and four Plateau borders at an angle of 109.5° from each other. The majority of the surfaces in the model form a liquid-gas interface, where the boundary condition of stress balance between the surface and bulk is applied. The three-dimensional Navier-Stokes equations, along with the continuity equation, are solved using the finite volume approach. The numerical results are validated against the available experimental results for the flow velocity and resistance in the interior nodes and Plateau borders. A qualitative illustration of flow in a node in different orientations is shown. The scaled resistance against the flow for different liquid-gas interface mobilities is studied, and the geometrical characteristics of the node and Plateau border components of the system are compared to investigate the Plateau border and node dominated flow regimes numerically. The findings show the values of the resistance in each component, in addition to the exact point where the flow regimes switch. Furthermore, a more accurate effect of the liquid-gas interface on the foam flow, particularly in the presence of a node in the foam network, is obtained. The comparison of the available numerical results with our numerical results shows that the velocity of the node-PB system is lower than the velocity of a single-PB system for mobile interfaces. This is because, despite the more relaxed geometrical structure of the node, the constraining effect of merging and mixing of flow and the increased viscous damping in the node component result in the node-dominated regime. 
Moreover, we obtain an accurate updated correlation for the dependence of the scaled average velocity of the node-Plateau border system on the liquid-gas interface mobility described by Boussinesq number. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Applying simulation model to uniform field space charge distribution measurements by the PEA method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y.; Salama, M.M.A.

    1996-12-31

    Signals measured under uniform fields by the Pulsed Electroacoustic (PEA) method have been processed by a deconvolution procedure to obtain space charge distributions since 1988. To simplify data processing, a direct method has been proposed recently in which the deconvolution is eliminated. However, surface charge cannot be represented well by this method because surface charge has a bandwidth extending from zero to infinity. The bandwidth of the charge distribution must be much narrower than the bandwidth of the PEA system transfer function in order to apply the direct method properly. When surface charges cannot be distinguished from space charge distributions, the accuracy and the resolution of the obtained space charge distributions decrease. To overcome this difficulty, a simulation model is therefore proposed. This paper presents the authors' attempts to apply the simulation model to obtain space charge distributions under plane-plane electrode configurations. Due to the page limitation for the paper, the charge distribution generated by the simulation model is compared only to that obtained by the direct method with a set of simulated signals.

  15. Determination of the key parameters affecting historic communications satellite trends

    NASA Technical Reports Server (NTRS)

    Namkoong, D.

    1984-01-01

    Data representing 13 series of commercial communications satellites procured between 1968 and 1982 were analyzed to determine the factors that have contributed to the general reduction over time of the per-circuit cost of communications satellites. The model by which the data were analyzed was derived from a general telecommunications application and modified to be more directly applicable to communications satellites. In this model satellite mass, bandwidth-years, and technological change were the variable parameters. A linear, least-squares, multiple regression routine was used to obtain the measure of significance of the model. Correlation was measured by the coefficient of determination (R²) and the t-statistic. The results showed that no correlation could be established with satellite mass. Bandwidth-years, however, did show a significant correlation, and technological change was a significant factor in the bandwidth-year model. This analysis and the conclusions derived are based on mature technologies, i.e., satellite designs that are evolutions of earlier designs rather than the first of a new generation. The findings, therefore, are appropriate to future satellites only if they are a continuation of design evolution.
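
The regression setup described here (per-circuit cost against satellite mass, bandwidth-years, and a technology trend, scored by R² and t-statistics) can be sketched on synthetic data. Everything below is illustrative: the variable names, units, and coefficients are invented, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 13-satellite data set: per-circuit cost driven
# by bandwidth-years and a technology (year) trend, but NOT by mass.
n = 13
mass = rng.uniform(500, 2000, n)       # kg (no real effect on cost here)
bw_years = rng.uniform(1e3, 1e5, n)    # bandwidth-years
tech = rng.uniform(0, 14, n)           # years since 1968
cost = 50.0 - 3e-4 * bw_years - 1.5 * tech + rng.normal(0, 1, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), mass, bw_years, tech])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)

# Coefficient of determination R^2, the paper's measure of significance.
resid = cost - X @ beta
r2 = 1 - resid.var() / cost.var()
print(beta, round(r2, 3))
```

With cost generated independently of mass, the fitted mass coefficient comes out near zero while the bandwidth-years and technology-trend coefficients are recovered, mirroring the qualitative finding reported in the abstract.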

  16. Online Adaboost-Based Parameterized Methods for Dynamic Distributed Network Intrusion Detection.

    PubMed

    Hu, Weiming; Gao, Jun; Wang, Yanguo; Wu, Ou; Maybank, Stephen

    2014-01-01

    Current network intrusion detection systems lack adaptability to frequently changing network environments. Furthermore, intrusion detection in the new distributed architectures is now a major requirement. In this paper, we propose two online Adaboost-based intrusion detection algorithms. In the first algorithm, a traditional online Adaboost process is used with decision stumps as weak classifiers. In the second algorithm, an improved online Adaboost process is proposed, and online Gaussian mixture models (GMMs) are used as weak classifiers. We further propose a distributed intrusion detection framework, in which a local parameterized detection model is constructed in each node using the online Adaboost algorithm. A global detection model is constructed in each node by combining the local parametric models using a small number of samples in the node. This combination is achieved using an algorithm based on particle swarm optimization (PSO) and support vector machines. The global model in each node is used to detect intrusions. Experimental results show that the improved online Adaboost process with GMMs obtains a higher detection rate and a lower false alarm rate than the traditional online Adaboost process that uses decision stumps. Both algorithms outperform existing intrusion detection algorithms. It is also shown that our PSO- and SVM-based algorithm effectively combines the local detection models into the global model in each node; the global model in a node can handle the intrusion types that are found in other nodes, without sharing the samples of these intrusion types.
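
For readers unfamiliar with the weak learners mentioned, here is a minimal batch AdaBoost with one-dimensional threshold decision stumps. The paper's online variant and its GMM-based extension are more involved; this sketch only illustrates the stump-plus-reweighting idea, and all names are assumptions:

```python
import math

def train_adaboost(xs, ys, rounds=10):
    """Minimal batch AdaBoost over 1-D threshold decision stumps.
    xs are scalar features, ys are +/-1 labels."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []  # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        best = None  # (weighted error, threshold, polarity)
        for thr in sorted(set(xs)):
            for pol in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if (pol if x >= thr else -pol) != y)
                if best is None or err < best[0]:
                    best = (err, thr, pol)
        err, thr, pol = best
        if err >= 0.5:          # no weak learner better than chance
            break
        err = max(err, 1e-12)   # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thr, pol))
        # Re-weight: boost the misclassified samples, then normalize.
        w = [wi * math.exp(-alpha * y * (pol if x >= thr else -pol))
             for wi, x, y in zip(w, xs, ys)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (p if x >= t else -p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [-1, -1, -1, -1, 1, 1, 1, 1]   # separable at x >= 5
model = train_adaboost(xs, ys, rounds=5)
print([predict(model, x) for x in xs])
```

In the paper's setting the stumps are fit over network-traffic features and updated online as samples stream in, rather than in batch as above.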

  17. Quantum statistics in complex networks

    NASA Astrophysics Data System (ADS)

    Bianconi, Ginestra

    The Barabasi-Albert (BA) model for a complex network shows a characteristic power-law connectivity distribution typical of scale-free systems. The Ising model on the BA network shows that the ferromagnetic phase transition temperature depends logarithmically on its size. We have introduced a fitness parameter for the BA network which describes the different abilities of nodes to compete for links. This model predicts the formation of a scale-free network where each node increases its connectivity in time as a power law, with an exponent depending on its fitness. The model captures the fact that node connectivity and growth rate do not depend on node age alone, and it reproduces non-trivial correlation properties of the Internet. We have proposed a model of bosonic networks, a generalization of the BA model to which the properties of quantum statistics can be applied. We have introduced a fitness η_i = e^(-βε_i), where the temperature T = 1/β is determined by the noise in the system and the energy ε_i accounts for qualitative differences in each node's ability to acquire links. The results of this work show that a power-law network with exponent γ = 2 can give a Bose condensation, where a single node grabs a finite fraction of all the links. In order to address the connection with self-organized processes, we have introduced a model for a growing Cayley tree that generalizes the dynamics of invasion percolation. With each node we associate a parameter ε_i (called energy) such that the probability for each node to grow is given by Π_i ∝ e^(βε_i), where T = 1/β is a statistical parameter of the system determined by the noise, called the temperature. This model has been solved analytically with a mathematical technique similar to that used for the bosonic scale-free networks, and it shows the self-organization of the low-energy nodes at the interface. In the thermodynamic limit, the Fermi distribution describes the probability of the energy distribution at the interface.
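
The fitness-driven growth rule can be sketched as follows. This is a toy rendering of the fitness mechanism described above (seed size, energy distribution, and β are arbitrary choices, not from the original work): each new node attaches m links to existing nodes with probability proportional to η_i·k_i, with η_i = e^(-βε_i):

```python
import math
import random

def grow_fitness_network(n, m=2, beta=4.0, seed=42):
    """Toy fitness-network growth: node i has energy e_i and fitness
    eta_i = exp(-beta * e_i); each new node attaches its m links to an
    existing node i with probability proportional to eta_i * degree_i."""
    rng = random.Random(seed)
    energy = [rng.random() for _ in range(n)]
    eta = [math.exp(-beta * e) for e in energy]
    degree = [0] * n
    for i in range(m + 1):          # fully connected seed of m + 1 nodes
        for j in range(i):
            degree[i] += 1
            degree[j] += 1
    for t in range(m + 1, n):       # arrival of node t
        weights = [eta[i] * degree[i] for i in range(t)]
        targets = set()
        while len(targets) < m:     # draw m distinct attachment targets
            targets.add(rng.choices(range(t), weights=weights)[0])
        for i in targets:
            degree[i] += 1
        degree[t] = m
    return energy, degree

energy, degree = grow_fitness_network(300)
low = [d for e, d in zip(energy, degree) if e < 0.5]    # fitter half
high = [d for e, d in zip(energy, degree) if e >= 0.5]
# Low-energy (fit) nodes typically grab a disproportionate share of links.
print(sum(low) / len(low), sum(high) / len(high))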

  18. AISIM (Automated Interactive Simulation Modeling System) VAX Version Training Manual.

    DTIC Science & Technology

    1985-02-01

    node to which the link is to run, and (3) a user-given name of the link. To place a link called "LINK1" from NODE1 to NODE2, type CON NODE1,NODE2...example, to eliminate the connection between NODE1 and NODE2 type DELETE LINK1 The result on the screen would be that the link named "LINK1" would...the user should now enter the command: DEFINE PATH,NODE2,NODE4,LINK1,LINK4 not only would the path from NODE2 to NODE4 be established, but the path

  19. Load sharing in distributed real-time systems with state-change broadcasts

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Chang, Yi-Chieh

    1989-01-01

    A decentralized dynamic load-sharing (LS) method based on state-change broadcasts is proposed for a distributed real-time system. Whenever the state of a node changes from underloaded to fully loaded and vice versa, the node broadcasts this change to a set of nodes, called a buddy set, in the system. The performance of the method is evaluated with both analytic modeling and simulation. It is modeled first by an embedded Markov chain for which numerical solutions are derived. The model solutions are then used to calculate the distribution of queue lengths at the nodes and the probability of meeting task deadlines. The analytical results show that buddy sets of 10 nodes outperform those of less than 10 nodes, and the incremental benefit gained from increasing the buddy set size beyond 15 nodes is insignificant. These and other analytical results are verified by simulation. The proposed LS method is shown to meet task deadlines with a very high probability.
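
The state-change broadcast idea can be sketched compactly: each node tells its buddy set only when it crosses the underloaded/fully-loaded threshold, and a fully loaded node offloads an arriving task to the first buddy it believes is underloaded. The threshold value, node names, and all-to-all buddy sets below are illustrative assumptions, not the paper's parameters:

```python
class Node:
    """Sketch of the buddy-set protocol: a node broadcasts only its
    underloaded/fully-loaded state *transitions* to its buddy set, and
    offloads arriving tasks to the first buddy it believes is underloaded."""
    THRESHOLD = 3          # queue length at which a node counts as fully loaded

    def __init__(self, name):
        self.name = name
        self.queue = 0
        self.buddies = []  # other Node objects in this node's buddy set
        self.view = {}     # last broadcast state heard from each buddy

    def _state(self):
        return "full" if self.queue >= self.THRESHOLD else "under"

    def _set_queue(self, q):
        before = self._state()
        self.queue = q
        if self._state() != before:          # state change: broadcast it
            for b in self.buddies:
                b.view[self.name] = self._state()

    def arrive(self):
        """A task arrives; return the name of the node that accepted it."""
        if self._state() == "under":
            self._set_queue(self.queue + 1)
            return self.name
        for b in self.buddies:               # first believed-underloaded buddy
            if self.view.get(b.name, "under") == "under":
                b._set_queue(b.queue + 1)
                return b.name
        self._set_queue(self.queue + 1)      # all buddies full: keep the task
        return self.name

nodes = {n: Node(n) for n in "ABC"}
for v in nodes.values():
    v.buddies = [u for u in nodes.values() if u is not v]
placed = [nodes["A"].arrive() for _ in range(8)]
print(placed)  # tasks spill from A to B, then to C, as each node fills up
```

Because only state transitions are broadcast rather than every queue update, message traffic stays low: in this run a node broadcasts only at the moment it fills up.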

  20. An Integrated Intrusion Detection Model of Cluster-Based Wireless Sensor Network

    PubMed Central

    Sun, Xuemei; Yan, Bo; Zhang, Xinzhong; Rong, Chuitian

    2015-01-01

    Considering wireless sensor network characteristics, this paper combines anomaly and misuse detection and proposes an integrated detection model for cluster-based wireless sensor networks, aiming at enhancing the detection rate and reducing the false alarm rate. An Adaboost algorithm with hierarchical structures is used for anomaly detection at sensor nodes, cluster-head nodes and Sink nodes. A Back Propagation network optimized by a Cultural Algorithm and an Artificial Fish Swarm Algorithm is applied to misuse detection at the Sink node. Extensive simulations demonstrate that this integrated model delivers strong intrusion detection performance. PMID:26447696

  1. An Integrated Intrusion Detection Model of Cluster-Based Wireless Sensor Network.

    PubMed

    Sun, Xuemei; Yan, Bo; Zhang, Xinzhong; Rong, Chuitian

    2015-01-01

    Considering wireless sensor network characteristics, this paper combines anomaly and misuse detection and proposes an integrated detection model for cluster-based wireless sensor networks, aiming at enhancing the detection rate and reducing the false alarm rate. An Adaboost algorithm with hierarchical structures is used for anomaly detection at sensor nodes, cluster-head nodes and Sink nodes. A Back Propagation network optimized by a Cultural Algorithm and an Artificial Fish Swarm Algorithm is applied to misuse detection at the Sink node. Extensive simulations demonstrate that this integrated model delivers strong intrusion detection performance.

  2. An Interference Mitigation Scheme of Device-to-Device Communications for Sensor Networks Underlying LTE-A

    PubMed Central

    Kim, Jeehyeong; Karim, Nzabanita Abdoul; Cho, Sunghyun

    2017-01-01

    Device-to-Device (D2D) communication technology has become a key factor in wireless sensor networks to form autonomous communication links among sensor nodes. Many research results for D2D have been presented to resolve different technical issues of D2D. Nevertheless, the previous works have not resolved the shortage of data rate and limited coverage of wireless sensor networks. Due to bandwidth shortages and limited communication coverage, 3rd Generation Partnership Project (3GPP) has introduced a new Device-to-Device (D2D) communication technique underlying cellular networks, which can improve spectral efficiencies by enabling the direct communication of devices in proximity without passing through enhanced-NodeB (eNB). However, to enable D2D communication in a cellular network presents a challenge with regard to radio resource management since D2D links reuse the uplink radio resources of cellular users and it can cause interference to the receiving channels of D2D user equipment (DUE). In this paper, a hybrid mechanism is proposed that uses Fractional Frequency Reuse (FFR) and Almost Blank Sub-frame (ABS) schemes to handle inter-cell interference caused by cellular user equipments (CUEs) to D2D receivers (DUE-Rxs), reusing the same resources at the cell edge area. In our case, DUE-Rxs are considered as victim nodes and CUEs as aggressor nodes, since our primary target is to minimize inter-cell interference in order to increase the signal to interference and noise ratio (SINR) of the target DUE-Rx at the cell edge area. The numerical results show that the interference level of the target D2D receiver (DUE-Rx) decreases significantly compared to the conventional FFR at the cell edge. In addition, the system throughput of the proposed scheme can be increased up to 60% compared to the conventional FFR. PMID:28489064
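
The benefit of blanking the aggressors' subframes can be illustrated with a one-line SINR computation. The power values below are arbitrary toy numbers, not from the paper's system-level simulation:

```python
import math

def sinr_db(signal_mw, interferers_mw, noise_mw=1e-9):
    """SINR at a D2D receiver: desired power over the sum of
    co-channel interference plus noise (all powers in mW)."""
    return 10 * math.log10(signal_mw / (sum(interferers_mw) + noise_mw))

signal = 1e-6                    # received power of the desired D2D link
cue_interference = [4e-7, 2e-7]  # co-channel cell-edge CUE aggressors

before = sinr_db(signal, cue_interference)   # aggressors transmitting
after = sinr_db(signal, [])                  # their subframes blanked (ABS)
print(round(before, 1), round(after, 1))
```

Muting the aggressor CUEs in Almost Blank Subframes removes their terms from the denominator, which is exactly the SINR gain at the DUE-Rx that the proposed FFR+ABS hybrid targets at the cell edge.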

  3. An Interference Mitigation Scheme of Device-to-Device Communications for Sensor Networks Underlying LTE-A.

    PubMed

    Kim, Jeehyeong; Karim, Nzabanita Abdoul; Cho, Sunghyun

    2017-05-10

    Device-to-Device (D2D) communication technology has become a key factor in wireless sensor networks to form autonomous communication links among sensor nodes. Many research results for D2D have been presented to resolve different technical issues of D2D. Nevertheless, the previous works have not resolved the shortage of data rate and limited coverage of wireless sensor networks. Due to bandwidth shortages and limited communication coverage, 3rd Generation Partnership Project (3GPP) has introduced a new Device-to-Device (D2D) communication technique underlying cellular networks, which can improve spectral efficiencies by enabling the direct communication of devices in proximity without passing through enhanced-NodeB (eNB). However, to enable D2D communication in a cellular network presents a challenge with regard to radio resource management since D2D links reuse the uplink radio resources of cellular users and it can cause interference to the receiving channels of D2D user equipment (DUE). In this paper, a hybrid mechanism is proposed that uses Fractional Frequency Reuse (FFR) and Almost Blank Sub-frame (ABS) schemes to handle inter-cell interference caused by cellular user equipments (CUEs) to D2D receivers (DUE-Rxs), reusing the same resources at the cell edge area. In our case, DUE-Rxs are considered as victim nodes and CUEs as aggressor nodes, since our primary target is to minimize inter-cell interference in order to increase the signal to interference and noise ratio (SINR) of the target DUE-Rx at the cell edge area. The numerical results show that the interference level of the target D2D receiver (DUE-Rx) decreases significantly compared to the conventional FFR at the cell edge. In addition, the system throughput of the proposed scheme can be increased up to 60% compared to the conventional FFR.

  4. Static and dynamic protein impact on electronic properties of light-harvesting complex LH2.

    PubMed

    Zerlauskiene, O; Trinkunas, G; Gall, A; Robert, B; Urboniene, V; Valkunas, L

    2008-12-11

    A comparative analysis of the temperature dependence of the absorption spectra of the LH2 complexes from different species of photosynthetic bacteria, i.e., Rhodobacter sphaeroides, Rhodoblastus acidophilus, and Phaeospirillum molischianum, was performed in the temperature range from 4 to 300 K. Qualitatively, the temperature dependence is similar for all of the species studied. The spectral bandwidths of both the B800 and B850 bands increase with temperature, while the band positions shift in opposite directions: the B800 band shifts slightly to the red and the B850 band to the blue. These results were analyzed using the modified Redfield theory based on the exciton model. The main conclusion drawn from the analysis was that the spectral density function (SDF) is the main factor underlying the strength of the temperature dependence of the bandwidths for the B800 and B850 electronic transitions, while the bandwidths themselves are defined by the corresponding inhomogeneous distribution function (IDF). The slight variation between species in the slope of the temperature dependence of the bandwidths can be attributed to changes in the values of the reorganization energies and characteristic frequencies determining the SDF. To explain the shift of the B850 band position with temperature, which is unusual for the conventional exciton model, a temperature dependence of the IDF must be postulated. This dependence can be achieved within the framework of the modified (dichotomous) exciton model. The slope of the temperature dependence of the B850 bandwidth is then defined by the value of the reorganization energy and by the difference between the transition energies of the dichotomous states of the pigment molecules. The equilibration factor between these dichotomous states mainly determines the temperature dependence of the peak shift.

  5. Use of a Hybrid Edge Node-Centroid Node Approach to Thermal Modeling

    NASA Technical Reports Server (NTRS)

    Peabody, Hume L.

    2010-01-01

    A recent proposal submitted for an ESA mission required that models be delivered in ESARAD/ESATAN formats. ThermalDesktop was the preferable analysis code to be used for model development with a conversion done as the final step before delivery. However, due to some differences between the capabilities of the two codes, a unique approach was developed to take advantage of the edge node capability of ThermalDesktop while maintaining the centroid node approach used by ESARAD. In essence, two separate meshes were used: one for conduction and one for radiation. The conduction calculations were eliminated from the radiation surfaces and the capacitance and radiative calculations were eliminated from the conduction surfaces. The resulting conduction surface nodes were coincident with all nodes of the radiation surface and were subsequently merged, while the nodes along the edges remained free. Merging of nodes on the edges of adjacent surfaces provided the conductive links between surfaces. Lastly, all nodes along edges were placed into the subnetwork and the resulting supernetwork included only the nodes associated with radiation surfaces. This approach had both benefits and disadvantages. The use of centroid, surface-based radiation reduces the overall size of the radiation network, which is often the most computationally intensive part of the modeling process. Furthermore, using the conduction surfaces and allowing ThermalDesktop to calculate the conduction network can save significant time by not having to manually generate the couplings. Lastly, the resulting GMM/TMM models can be exported to formats which do not support edge nodes. One drawback, however, is the necessity to maintain two sets of surfaces. This requires additional care on the part of the analyst to ensure communication between the conductive and radiative surfaces in the resulting overall network. 
However, with more frequent use of this technique, the benefits of this approach can far outweigh the additional effort.
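
    The node-merging step at the heart of this hybrid approach can be sketched generically. The helper below is a hypothetical illustration (not ThermalDesktop's actual merge logic): it pairs conduction-mesh nodes with coincident radiation-mesh nodes by hashing coordinates rounded to a tolerance, leaving unmatched edge nodes free.

```python
# Hypothetical sketch: pair nodes of a conduction mesh with coincident nodes
# of a radiation mesh by hashing rounded coordinates. Edge nodes that have no
# coincident partner remain free, mirroring the hybrid approach described above.

def merge_coincident(cond_nodes, rad_nodes, tol=1e-6):
    """Return a map from conduction-node id to radiation-node id for every
    pair of nodes whose coordinates coincide within tol."""
    def key(p):
        return tuple(round(c / tol) for c in p)
    rad_index = {key(p): nid for nid, p in rad_nodes.items()}
    merged = {}
    for nid, p in cond_nodes.items():
        partner = rad_index.get(key(p))
        if partner is not None:
            merged[nid] = partner
    return merged

cond = {1: (0.0, 0.0), 2: (1.0, 0.0), 3: (0.5, 0.5)}   # node 3 is an edge node
rad  = {10: (0.0, 0.0), 11: (1.0, 0.0)}                 # centroidal surface nodes
print(merge_coincident(cond, rad))   # {1: 10, 2: 11}; node 3 stays free
```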

  7. Spin-torque diode with tunable sensitivity and bandwidth by out-of-plane magnetic field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, X.; Zheng, C.; Pong, Philip W. T.

    Spin-torque diodes based on nanosized magnetic tunnel junctions are novel microwave detectors with high sensitivity and wide frequency bandwidth. While previous reports mainly focus on improving the sensitivity, approaches to extend the bandwidth are limited. This work experimentally demonstrates that, by optimizing the orientation of the external magnetic field, wide bandwidth can be achieved while maintaining high sensitivity. The mechanism of the frequency and sensitivity tuning is investigated by analyzing the dependence of the resonant frequency and DC voltage on the magnitude and tilt angle of the hard-plane magnetic field. The frequency dependence is qualitatively explained by Kittel's ferromagnetic resonance model. The asymmetric resonant frequency at positive and negative magnetic field is verified by numerical simulation accounting for the in-plane anisotropy. The DC voltage dependence is interpreted by evaluating the misalignment angle between the magnetization of the free layer and the reference layer. The tunability of the detector performance by the magnetic field angle is evaluated by characterizing the sensitivity and bandwidth under a 3D magnetic field. A frequency bandwidth up to 9.8 GHz or a maximum sensitivity up to 154 mV/mW (after impedance mismatch correction) can be achieved by tuning the angle of the applied magnetic field. The results show that the bandwidth and sensitivity can be controlled and adjusted by optimizing the orientation of the magnetic field for various applications and requirements.
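
    As a rough illustration of the Kittel model invoked above, the sketch below evaluates the in-plane thin-film form f = (γμ₀/2π)·√(H(H + M_eff)). The field and magnetization values are generic examples, not the device parameters from this work.

```python
import math

# Illustrative use of Kittel's FMR formula for an in-plane magnetized thin film,
# f = (gamma*mu0 / 2*pi) * sqrt(H * (H + Meff)), with fields in A/m. The numbers
# below are generic examples, not the device parameters from the paper.
GAMMA_OVER_2PI = 28.0e9          # gyromagnetic ratio / 2*pi, Hz per tesla
MU0 = 4e-7 * math.pi             # vacuum permeability, T*m/A

def kittel_frequency(h_ext, m_eff):
    """Resonant frequency (Hz) for in-plane field h_ext and effective
    magnetization m_eff, both in A/m."""
    return GAMMA_OVER_2PI * MU0 * math.sqrt(h_ext * (h_ext + m_eff))

# Larger fields push the resonance (and thus the detection band) upward:
f1 = kittel_frequency(20e3, 800e3)
f2 = kittel_frequency(60e3, 800e3)
print(f1 / 1e9, f2 / 1e9)   # a few GHz, increasing with field
```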

  8. Longitudinal 3.0T MRI analysis of changes in lymph node volume and apparent diffusion coefficient in an experimental animal model of metastatic and hyperplastic lymph nodes.

    PubMed

    Klerkx, Wenche M; Geldof, Albert A; Heintz, A Peter; van Diest, Paul J; Visser, Fredy; Mali, Willem P; Veldhuis, Wouter B

    2011-05-01

    To perform a longitudinal analysis of changes in lymph node volume and apparent diffusion coefficient (ADC) in healthy, metastatic, and hyperplastic lymph nodes. Three groups of four female Copenhagen rats were studied. Metastasis was induced by injecting cells with a high metastatic potential into the left hind footpad. Reactive nodes were induced by injecting complete Freund's adjuvant (CFA). Imaging was performed at baseline and at 2, 5, 8, 11, and 14 days after tumor cell injection. Finally, lymph nodes were examined histopathologically. The model was highly efficient in inducing lymphadenopathy: subcutaneous cell or CFA inoculation resulted in ipsilateral metastatic or reactive popliteal lymph nodes in all rats. Metastatic nodal volumes increased exponentially from 5-7 mm³ at baseline to 25 mm³ at day 14, while the control node remained 5 mm³. The hyperplastic nodes showed a rapid volume increase, reaching a plateau at day 6. The ADC of metastatic nodes significantly decreased (range 13%-32%), but this decrease was also seen in reactive nodes. Metastatic and hyperplastic lymph nodes differed in terms of enlargement patterns and ADC changes. Enlarged reactive or malignant nodes could not be differentiated based on their ADC values.

  9. Target Control in Logical Models Using the Domain of Influence of Nodes.

    PubMed

    Yang, Gang; Gómez Tejeda Zañudo, Jorge; Albert, Réka

    2018-01-01

    Dynamical models of biomolecular networks are successfully used to understand the mechanisms underlying complex diseases and to design therapeutic strategies. Network control, and its special case of target control, is a promising avenue toward developing disease therapies. In target control it is assumed that a small subset of nodes is most relevant to the system's state, and the goal is to drive the target nodes into their desired states. An example of target control would be driving a cell to commit to apoptosis (programmed cell death). From the experimental perspective, gene knockout, pharmacological inhibition of proteins, and providing sustained external signals are among practical intervention techniques. We identify methodologies to use the stabilizing effect of sustained interventions for target control in Boolean network models of biomolecular networks. Specifically, we define the domain of influence (DOI) of a node (in a certain state) to be the nodes (and their corresponding states) that will be ultimately stabilized by the sustained state of this node, regardless of the initial state of the system. We also define the related concept of the logical domain of influence (LDOI) of a node, and develop an algorithm for its identification using an auxiliary network that incorporates the regulatory logic. In this way, a solution to the target control problem is a set of nodes whose DOI can cover the desired target node states. We perform a greedy randomized adaptive search in node state space to find such solutions. We apply our strategy to in silico biological network models of real systems to demonstrate its effectiveness.
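
    The LDOI idea can be illustrated with a minimal fixed-point sketch, assuming each Boolean rule is given in disjunctive normal form. This is an illustration of the concept only, not the authors' exact auxiliary-network algorithm.

```python
# Minimal sketch of the logical domain of influence (LDOI): each Boolean
# function is written in disjunctive normal form, and a virtual node (node,
# state) stabilizes once any clause of its function is fully stabilized.
# This closure loop illustrates the idea, not the paper's exact algorithm.

def ldoi(rules, seed):
    """rules maps (node, state) -> list of clauses; a clause is a frozenset of
    (node, state) pairs that jointly force that state. seed is a set of
    (node, state) pairs held fixed by sustained intervention."""
    stable = set(seed)
    changed = True
    while changed:
        changed = False
        for target, clauses in rules.items():
            if target in stable:
                continue
            if any(clause <= stable for clause in clauses):
                stable.add(target)
                changed = True
    return stable - set(seed)   # influence of the seed, excluding the seed itself

# Toy network: A activates B; B AND C activate D.
rules = {
    ("B", 1): [frozenset({("A", 1)})],
    ("D", 1): [frozenset({("B", 1), ("C", 1)})],
}
print(ldoi(rules, {("A", 1)}))              # {('B', 1)}: D also needs C
print(ldoi(rules, {("A", 1), ("C", 1)}))    # B and D both stabilize
```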

  10. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2008-04-01

    Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. 
The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at participating nodes. Therefore, the feature-extraction method based on the Haar DWT is presented that employs a maximum-entropy measure to determine significant wavelet coefficients. Features are formed by calculating the energy of coefficients grouped around the competing clusters. A DWT-based feature extraction algorithm used for vehicle classification in WSNs can be enhanced by an added rule for selecting the optimal number of resolution levels to improve the correct classification rate and reduce energy consumption expended in local algorithm computations. Published field trial data for vehicular ground targets, measured with multiple sensor types, are used to evaluate the wavelet-assisted algorithms. Extracted features are used in established target recognition routines, e.g., the Bayesian minimum-error-rate classifier, to compare the effects on the classification performance of the wavelet compression. Simulations of feature sets and recognition routines at different resolution levels in target scenarios indicate the impact on classification rates, while formulas are provided to estimate reduction in resource use due to distributed compression.
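
    A minimal sketch of DWT-based energy features follows, assuming a plain Haar transform and per-level energy sums in place of the paper's maximum-entropy coefficient selection.

```python
# Sketch of Haar-DWT feature extraction: decompose a signal, keep subband
# energies as features. The entropy-based coefficient selection described in
# the paper is replaced here by simple per-level energy sums for illustration.

def haar_step(x):
    """One Haar DWT level: (approximation, detail) with 1/sqrt(2) scaling."""
    s = 2 ** -0.5
    approx = [s * (x[i] + x[i + 1]) for i in range(0, len(x) - 1, 2)]
    detail = [s * (x[i] - x[i + 1]) for i in range(0, len(x) - 1, 2)]
    return approx, detail

def haar_energy_features(x, levels):
    """Energy of the detail subband at each level, plus the final approximation."""
    feats = []
    for _ in range(levels):
        x, d = haar_step(x)
        feats.append(sum(c * c for c in d))
    feats.append(sum(c * c for c in x))
    return feats

# A piecewise-constant signal: all its energy ends up in the approximation band.
sig = [1.0, 1.0, 1.0, 1.0, -1.0, -1.0, -1.0, -1.0]
print(haar_energy_features(sig, 2))   # detail energies 0, approximation ~8
```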

  11. Sentinel Node Biopsy for the Head and Neck Using Contrast-Enhanced Ultrasonography Combined with Indocyanine Green Fluorescence in Animal Models: A Feasibility Study.

    PubMed

    Kogashiwa, Yasunao; Sakurai, Hiroyuki; Akimoto, Yoshihiro; Sato, Dai; Ikeda, Tetsuya; Matsumoto, Yoshifumi; Moro, Yorihisa; Kimura, Toru; Hamanoue, Yasuhiro; Nakamura, Takehiro; Yamauchi, Koichi; Saito, Koichiro; Sugasawa, Masashi; Kohno, Naoyuki

    2015-01-01

    Sentinel node navigation surgery is gaining popularity in oral cancer. We assessed application of sentinel lymph node navigation surgery to pharyngeal and laryngeal cancers by evaluating the combination of contrast-enhanced ultrasonography and indocyanine green fluorescence in animal models. This was a prospective, nonrandomized, experimental study in rabbit and swine animal models. A mixture of indocyanine green and Sonazoid was used as the tracer. The tracer mixture was injected into the tongue, larynx, or pharynx. The sentinel lymph nodes were identified transcutaneously by infra-red camera and contrast-enhanced ultrasonography. Detection time and extraction time of the sentinel lymph nodes were measured. The safety of the tracer mixture in terms of mucosal reaction was evaluated macroscopically and microscopically. Sentinel lymph nodes were detected transcutaneously by contrast-enhanced ultrasonography alone. The number of sentinel lymph nodes detected was one or two. Despite observation of contrast enhancement of Sonazoid for at least 90 minutes, the number of sentinel lymph nodes detected did not change. The average extraction time of sentinel lymph nodes was 4.8 minutes. Indocyanine green fluorescence offered visual information during lymph node biopsy. The safety of the tracer was confirmed by absence of laryngeal edema both macro and microscopically. The combination method of indocyanine green fluorescence and contrast-enhanced ultrasonography for detecting sentinel lymph nodes during surgery for head and neck cancer seems promising, especially for pharyngeal and laryngeal cancer. Further clinical studies to confirm this are warranted.

  12. Complex networks under dynamic repair model

    NASA Astrophysics Data System (ADS)

    Chaoqi, Fu; Ying, Wang; Kun, Zhao; Yangjun, Gao

    2018-01-01

    Invulnerability is not the only factor of importance when considering complex networks' security. It is also critical to have an effective and reasonable repair strategy. Existing research on network repair is confined to the static model. The dynamic model makes better use of the redundant capacity of repaired nodes and repairs the damaged network more efficiently than the static model; however, the dynamic repair model is complex and polytropic. In this paper, we construct a dynamic repair model and systematically describe the energy-transfer relationships between nodes in the repair process of the failure network. Nodes are divided into three types, corresponding to three structures. We find that the strong coupling structure is responsible for secondary failure of the repaired nodes and propose an algorithm that can select the most suitable targets (nodes or links) to repair the failure network with minimal cost. Two types of repair strategies are identified, with different effects under the two energy-transfer rules. The research results enable a more flexible approach to network repair.

  13. Model-driven requirements engineering (MDRE) for real-time ultra-wide instantaneous bandwidth signal simulation

    NASA Astrophysics Data System (ADS)

    Chang, Daniel Y.; Rowe, Neil C.

    2013-05-01

    While conducting cutting-edge research in a specific domain, we realize that (1) requirements clarity and correctness are crucial to our success [1], (2) hardware is hard to change, so most work is in software requirements development, coding and testing [2], (3) requirements are constantly changing, so configurability, reusability, scalability, adaptability, modularity and testability are important non-functional attributes [3], (4) cross-domain knowledge is necessary for complex systems [4], and (5) if our research is successful, the results could be applied to other domains with similar problems. In this paper, we propose to use model-driven requirements engineering (MDRE) to model and guide our requirements development, since models are easy to understand, execute, and modify. The domain for our research is Electronic Warfare (EW) real-time ultra-wide instantaneous bandwidth (IBW) signal simulation. The proposed four MDRE models are (1) Switch-and-Filter architecture, (2) multiple parallel data bit streams alignment, (3) post-ADC and pre-DAC bit re-mapping, and (4) Discrete Fourier Transform (DFT) filter bank. This research is unique since the instantaneous bandwidth we are dealing with is in the gigahertz range instead of the conventional megahertz range.
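
    Model (4), the DFT filter bank, can be sketched in its simplest critically sampled form with a rectangular prototype filter; production channelizers use longer polyphase prototypes, so this is only a structural illustration.

```python
import cmath

# Toy critically-sampled DFT filter bank with a rectangular prototype filter:
# each block of M samples is split into M frequency channels via a length-M DFT.
# Real designs use longer polyphase prototypes; this shows the structure only.

def dft_filter_bank(x, m):
    """Split x into m channels; channel k holds the k-th DFT bin of each block."""
    channels = [[] for _ in range(m)]
    for start in range(0, len(x) - m + 1, m):
        block = x[start:start + m]
        for k in range(m):
            bin_k = sum(block[j] * cmath.exp(-2j * cmath.pi * k * j / m)
                        for j in range(m))
            channels[k].append(bin_k)
    return channels

# A complex tone at bin 1 of an 8-channel bank lands in channel 1 only.
M = 8
x = [cmath.exp(2j * cmath.pi * 1 * n / M) for n in range(4 * M)]
ch = dft_filter_bank(x, M)
energy = [sum(abs(v) ** 2 for v in c) for c in ch]
print([round(e, 6) for e in energy])   # energy concentrated in channel 1
```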

  14. Achieving Agreement in Three Rounds with Bounded-Byzantine Faults

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2017-01-01

    A three-round algorithm is presented that guarantees agreement in a system of K ≥ 3F+1 nodes, provided each faulty node induces no more than F faults and each good node experiences no more than F faults, where F is the maximum number of simultaneous faults in the network. The algorithm is based on the Oral Messages algorithm of Lamport, Shostak, and Pease; it is scalable with respect to the number of nodes in the system and applies equally to the traditional node-fault model and the link-fault model. We also present a mechanical verification of the algorithm, focusing on verifying the correctness of a bounded model of the algorithm as well as confirming claims of determinism.
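
    For context, the Oral Messages recursion OM(m) that the algorithm builds on can be sketched as follows. Faulty nodes here simply invert every value they send, which is a much weaker adversary than true Byzantine behavior, so this only illustrates the recursion and majority voting, not a proof of the bound.

```python
# Compact, simplified sketch of the Oral Messages algorithm OM(m) of Lamport,
# Shostak, and Pease. Faulty nodes deterministically invert every value they
# relay; real Byzantine faults can behave arbitrarily.

def majority(votes):
    return max(set(votes), key=votes.count)

def om(commander, lieutenants, value, m, faulty):
    """Return {lieutenant: decided value} after running OM(m)."""
    v = 1 - value if commander in faulty else value
    received = {p: v for p in lieutenants}
    if m == 0:
        return received
    decision = {}
    for p in lieutenants:
        votes = [received[p]]
        for q in lieutenants:
            if q == p:
                continue
            # q re-broadcasts its received value to the others via OM(m-1)
            sub = om(q, [r for r in lieutenants if r != q],
                     received[q], m - 1, faulty)
            votes.append(sub[p])
        decision[p] = majority(votes)
    return decision

# 4 nodes, at most 1 fault (K = 4 satisfies K >= 3F+1 with F = 1), one recursion:
print(om("C", ["L1", "L2", "L3"], 1, 1, faulty={"L3"}))
# loyal lieutenants L1 and L2 agree on the commander's value 1
```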

  15. Analytic Modeling of Pressurization and Cryogenic Propellant

    NASA Technical Reports Server (NTRS)

    Corpening, Jeremy H.

    2010-01-01

    An analytic model for pressurization and cryogenic propellant conditions during all mission phases of any liquid-rocket-based vehicle has been developed and validated. The model assumes the propellant tanks to be divided into five nodes and also implements an empirical correlation for liquid stratification if desired. The five nodes include a tank wall node exposed to ullage gas, an ullage gas node, a saturated propellant vapor node at the liquid-vapor interface, a liquid node, and a tank wall node exposed to liquid. The conservation equations of mass and energy are then applied across all the node boundaries and, with the use of perfect gas assumptions, explicit solutions for ullage and liquid conditions are derived. All fluid properties are updated in real time using NIST REFPROP. Further, mass transfer at the liquid-vapor interface is included in the form of evaporation, bulk boiling of liquid propellant, and condensation, given the appropriate conditions for each. Model validation has proven highly successful against previous analytic models and various Saturn-era test data, and reasonably successful against more recent LH2 tank self-pressurization ground test data. Finally, this model has been applied to numerous design iterations for the Altair Lunar Lander, Ares V Core Stage, and Ares V Earth Departure Stage in order to characterize helium and autogenous pressurant requirements, propellant lost to evaporation and thermodynamic venting to maintain propellant conditions, and non-uniform tank draining in configurations utilizing multiple LH2 or LO2 propellant tanks. In conclusion, this model provides an accurate and efficient means of analyzing multiple design configurations for any cryogenic propellant tank in launch, low-acceleration coast, or in-space maneuvering, and supplies the user with pressurization requirements, unusable propellants from evaporation and liquid stratification, and general ullage gas, liquid, and tank wall conditions as functions of time.
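
    A highly simplified, isothermal ideal-gas sketch of the ullage bookkeeping is shown below; the actual model solves coupled mass and energy balances across all five nodes, so this only illustrates why pressurant mass tracks the growing ullage volume.

```python
# Minimal ideal-gas sketch of the ullage-node bookkeeping: as liquid drains,
# the ullage volume grows, and pressurant mass is added each step to hold tank
# pressure. An isothermal ullage is assumed for simplicity; the actual model
# solves coupled mass and energy balances over five nodes.

R_HE = 2077.0      # helium specific gas constant, J/(kg*K)

def pressurant_required(p_tank, t_ullage, v0, drain_rate, dt, steps):
    """Total helium mass (kg) needed to hold p_tank (Pa) while liquid drains
    at drain_rate (m^3/s) from an initial ullage volume v0 (m^3)."""
    v = v0
    m = p_tank * v / (R_HE * t_ullage)       # initial ullage gas mass
    added = 0.0
    for _ in range(steps):
        v += drain_rate * dt                 # ullage grows as liquid leaves
        m_needed = p_tank * v / (R_HE * t_ullage)
        added += m_needed - m
        m = m_needed
    return added

# Hold 300 kPa at 250 K while draining 0.01 m^3/s for 100 s from 1 m^3 ullage:
print(pressurant_required(3.0e5, 250.0, 1.0, 0.01, 1.0, 100))  # ~0.58 kg
```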

  16. Uplink transmission of a 60-km-reach WDM/OCDM-PON using a spectrum-sliced pulse source

    NASA Astrophysics Data System (ADS)

    Choi, Yong-Kyu; Hanawa, Masanori; Park, Chang-Soo

    2014-02-01

    We propose and experimentally demonstrate the uplink transmission of a 60-km-reach wavelength division multiplexing/optical code division multiplexing (WDM/OCDM) passive optical network (PON) using a spectrum-sliced pulse source. As a single light source, a broadband pulse source with a bandwidth of 6.5 nm and a repetition rate of 1.25 GHz is generated at a central office and supplied to a remote node (RN) through a 50-km fiber link. At the RN, narrow-band pulses (as a source for uplink transmission) are obtained by spectrum slicing the broadband pulse source with a cyclic arrayed waveguide grating and are then supplied to all optical network units (ONUs) via 1×4 power splitters and 10-km drop fibers. Eight wavelengths are obtained with a 6.5-nm bandwidth of the broadband pulse source, and the qualities of the pulses with a repetition rate of 1.25 GHz and a pulse width of 45 ps for the eight wavelengths are sufficient for four-chip OCDM encoding at the ONUs. In our experiments, four signals are multiplexed by OCDM at one wavelength, and another encoded signal is also multiplexed by WDM. The bit error rates (BERs) of the signals exhibit error-free transmission (BER < 10⁻⁹) over a 60-km single-mode fiber at 1.25 Gb/s.

  17. 3-D integrated heterogeneous intra-chip free-space optical interconnect.

    PubMed

    Ciftcioglu, Berkehan; Berman, Rebecca; Wang, Shang; Hu, Jianyun; Savidis, Ioannis; Jain, Manish; Moore, Duncan; Huang, Michael; Friedman, Eby G; Wicks, Gary; Wu, Hui

    2012-02-13

    This paper presents the first chip-scale demonstration of an intra-chip free-space optical interconnect (FSOI) we recently proposed. This interconnect system provides point-to-point free-space optical links between any two communication nodes, and hence constructs an all-to-all intra-chip communication fabric, which can be extended for inter-chip communications as well. Unlike electrical and other waveguide-based optical interconnects, FSOI exhibits low latency, high energy efficiency, and large bandwidth density, and hence can significantly improve the performance of future many-core chips. In this paper, we evaluate the performance of the proposed FSOI interconnect and compare it to a waveguide-based optical interconnect with wavelength division multiplexing (WDM). The comparison shows that the FSOI system can achieve significantly lower loss and higher energy efficiency than the WDM system, even with optimistic assumptions for the latter. A 1×1-cm² chip prototype is fabricated on a germanium substrate with integrated photodetectors. Commercial 850-nm GaAs vertical-cavity surface-emitting lasers (VCSELs) and fabricated fused silica microlenses are 3-D integrated on top of the substrate. At 1.4-cm distance, the measured optical transmission loss is 5 dB, the crosstalk is less than -20 dB, and the electrical-to-electrical bandwidth is 3.3 GHz. The latter is mainly limited by the 5-GHz VCSEL.

  18. SDN control of optical nodes in metro networks for high capacity inter-datacentre links

    NASA Astrophysics Data System (ADS)

    Magalhães, Eduardo; Perry, Philip; Barry, Liam

    2017-11-01

    Worldwide demand for bandwidth has been growing fast for some years and continues to do so. To meet it, mega-datacentres need scalable, rich connectivity to handle the heavy traffic across them. Therefore, hardware infrastructures must be able to play different roles according to service and traffic requirements. In this context, software defined networking (SDN) decouples the network control and forwarding functions, enabling the network control to become directly programmable and the underlying infrastructure to be abstracted for applications and network services. In addition, elastic optical networking (EON) technologies enable efficient spectrum utilization by allocating variable bandwidth to each user according to their actual needs. In particular, flexible transponders and reconfigurable optical add/drop multiplexers (ROADMs) are key elements, since they offer degrees of freedom to self-adapt accordingly. Thus, it is crucial to design control methods that optimize hardware utilization and offer high reconfigurability, flexibility and adaptability. In this paper, we propose and analyze, using a simulation framework, a method of capacity maximization through optical power profile manipulation for inter-datacentre links that use existing metropolitan optical networks, by exploiting the global network view afforded by SDN. Results show that manipulating the loss profiles of the ROADMs in the metro network can yield optical signal-to-noise ratio (OSNR) improvements of up to 10 dB, leading to a 112% increase in total capacity.

  19. A nonlinear MEMS electrostatic kinetic energy harvester for human-powered biomedical devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Y.; Cottone, F.; Marty, F.

    This article proposes a silicon-based electrostatic kinetic energy harvester (e-KEH) with an ultra-wide operating frequency bandwidth from 1 Hz to 160 Hz. This large bandwidth is obtained thanks to a miniature tungsten ball impacting a movable silicon proof mass. The motion of the silicon proof mass is confined by nonlinear elastic stoppers on the fixed part standing against two protrusions of the proof mass. The electrostatic transducer is made of interdigitated combs with a gap-closing variable capacitance that includes vertical electrets obtained by corona discharge. Below 10 Hz, the e-KEH offers 30.6 nJ per mechanical oscillation at 2 g_rms, which makes it suitable for powering biomedical devices from human motion. Above 10 Hz and up to 162 Hz, the harvested power is more than 0.5 μW, with a maximum of 4.5 μW at 160 Hz. The highest power of 6.6 μW is obtained without the ball at 432 Hz, corresponding to a power density of 142 μW/cm³. We also demonstrate the charging of a 47-μF capacitor to 3.5 V, used to power a battery-less wireless temperature sensor node.

  20. Selection of bi-level image compression method for reduction of communication energy in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias

    2012-06-01

    Wireless Visual Sensor Network (WVSN) is an emerging field which combines an image sensor, an on-board computation unit, a communication component and an energy source. Compared to the traditional wireless sensor network, which operates on one-dimensional data such as temperature or pressure values, a WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where installation of wired solutions is not feasible. The energy budget in these networks is limited to the batteries because of the wireless nature of the application. Due to the limited availability of energy, the processing at Visual Sensor Nodes (VSNs) and the communication from VSN to server should consume as little energy as possible. Transmission of raw images wirelessly consumes a lot of energy and requires higher communication bandwidth. Data compression methods reduce data efficiently and hence are effective in reducing communication cost in WVSN. In this paper, we have compared the compression efficiency and complexity of six well-known bi-level image compression methods. The focus is to determine the compression algorithms which can efficiently compress bi-level images and whose computational complexity is suitable for the computational platform used in WVSNs. These results can be used as a road map for selection of compression methods for different sets of constraints in WVSN.
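
    As a concrete baseline for this kind of bi-level codec, a run-length encoder, one of the simplest candidates, is sketched below as an illustration; it is not claimed to be one of the six methods compared in the paper.

```python
# Toy bi-level run-length codec: a row of 0/1 pixels is stored as the first
# bit plus the lengths of alternating runs. Simple codecs like this trade
# compression ratio for very low computational cost on sensor-node hardware.

def rle_encode(bits):
    """Encode a bi-level row as (first_bit, run lengths)."""
    runs = []
    count = 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return bits[0], runs

def rle_decode(first, runs):
    out = []
    bit = first
    for r in runs:
        out.extend([bit] * r)
        bit = 1 - bit
    return out

row = [0] * 12 + [1] * 3 + [0] * 17
first, runs = rle_encode(row)
print(first, runs)                      # 0 [12, 3, 17]
assert rle_decode(first, runs) == row   # lossless round trip
```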

  1. Bandwidth Limitations in Characterization of High Intensity Focused Ultrasound Fields in the Presence of Shocks

    NASA Astrophysics Data System (ADS)

    Khokhlova, V. A.; Bessonova, O. V.; Soneson, J. E.; Canney, M. S.; Bailey, M. R.; Crum, L. A.

    2010-03-01

    Nonlinear propagation effects result in the formation of weak shocks in high intensity focused ultrasound (HIFU) fields. When shocks are present, the wave spectrum consists of hundreds of harmonics. In practice, shock waves are modeled using a finite number of harmonics and measured with hydrophones that have limited bandwidths. The goal of this work was to determine how many harmonics are necessary to model or measure peak pressures, intensity, and heat deposition rates of HIFU fields. Numerical solutions of the Khokhlov-Zabolotskaya-Kuznetsov-type (KZK) nonlinear parabolic equation were obtained using two independent algorithms, compared, and analyzed for nonlinear propagation in water, in gel phantom, and in tissue. Measurements were performed at the focus of the HIFU field in the same media using fiber optic probe hydrophones of various bandwidths. Experimental data were compared to the simulation results.
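
    Before shock formation (dimensionless distance σ ≤ 1), the Fubini series gives the harmonic amplitudes B_n = 2 J_n(nσ)/(nσ), so the fraction of intensity captured by a finite number of harmonics can be estimated directly. The sketch below uses a plain quadrature for the Bessel function J_n; it illustrates the band-limitation question, not the paper's KZK solvers.

```python
import math

# Pre-shock Fubini spectrum: B_n = 2*J_n(n*sigma)/(n*sigma) for an initially
# sinusoidal wave. Summing B_n^2 shows what fraction of the intensity a
# band-limited model or hydrophone captures. J_n is computed by quadrature.

def bessel_j(n, x, steps=2000):
    """J_n(x) via (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt (trapezoid)."""
    h = math.pi / steps
    total = 0.5 * (1.0 + math.cos(n * math.pi - x * math.sin(math.pi)))
    for i in range(1, steps):
        t = i * h
        total += math.cos(n * t - x * math.sin(t))
    return total * h / math.pi

def captured_intensity_fraction(sigma, n_harmonics):
    """Fraction of wave intensity contained in the first n_harmonics harmonics."""
    return sum((2.0 * bessel_j(n, n * sigma) / (n * sigma)) ** 2
               for n in range(1, n_harmonics + 1))

# At the shock-formation distance (sigma = 1) the pre-shock spectrum is widest:
for n in (3, 10, 30):
    print(n, round(captured_intensity_fraction(1.0, n), 4))
```

    The slow convergence toward 1 as harmonics are added is exactly why a truncated model or a band-limited hydrophone underestimates peak pressures and heating rates once shocks develop.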

  2. Discretization in time gives rise to noise-induced improvement of the signal-to-noise ratio in static nonlinearities.

    PubMed

    Davidović, A; Huntington, E H; Frater, M R

    2009-07-01

    For some nonlinear systems the performance can improve with an increasing noise level. Such noise-induced improvement in static nonlinearities is of great interest for practical applications since many systems can be modeled in that way (e.g., sensors, quantizers, limiters, etc.). We present experimental evidence that noise-induced performance improvement occurs in those systems as a consequence of discretization in time with the achievable signal-to-noise ratio (SNR) gain increasing with decreasing ratio of input noise bandwidth and total measurement bandwidth. By modifying the input noise bandwidth, noise-induced improvement with SNR gain larger than unity is demonstrated in a system where it was not previously thought possible. Our experimental results bring closer two different theoretical models for the same class of nonlinearities and shed light on the behavior of static nonlinear discrete-time systems.

  3. Automated Construction of Node Software Using Attributes in a Ubiquitous Sensor Network Environment

    PubMed Central

    Lee, Woojin; Kim, Juil; Kang, JangMook

    2010-01-01

    In sensor networks, nodes must often operate in a demanding environment facing restrictions such as restricted computing resources, unreliable wireless communication and power shortages. Such factors make the development of ubiquitous sensor network (USN) applications challenging. To help developers construct a large amount of node software for sensor network applications easily and rapidly, this paper proposes an approach to the automated construction of node software for USN applications using attributes. In the proposed technique, application construction proceeds by first developing a model for the sensor network and then designing node software by setting the values of the predefined attributes. After that, the sensor network model and the design of node software are verified. The final source codes of the node software are automatically generated from the sensor network model. We illustrate the efficiency of the proposed technique by using a gas/light monitoring application through a case study of a Gas and Light Monitoring System based on the Nano-Qplus operating system. We evaluate the technique using a quantitative metric: the memory size of execution code for node software. Using the proposed approach, developers are able to easily construct sensor network applications and rapidly generate a large amount of node software at a time in a ubiquitous sensor network environment. PMID:22163678

  5. Load Balancing in Structured P2P Networks

    NASA Astrophysics Data System (ADS)

    Zhu, Yingwu

    In this chapter we start by addressing the importance and necessity of load balancing in structured P2P networks, due to three main reasons. First, structured P2P networks assume uniform peer capacities while peer capacities are heterogeneous in deployed P2P networks. Second, resorting to pseudo-uniformity of the hash function used to generate node IDs and data item keys leads to imbalanced overlay address space and item distribution. Lastly, placement of data items cannot be randomized in some applications (e.g., range searching). We then present an overview of load aggregation and dissemination techniques that are required by many load balancing algorithms. Two techniques are discussed including tree structure-based approach and gossip-based approach. They make different tradeoffs between estimate/aggregate accuracy and failure resilience. To address the issue of load imbalance, three main solutions are described: virtual server-based approach, power of two choices, and address-space and item balancing. While different in their designs, they all aim to improve balance on the address space and data item distribution. As a case study, the chapter discusses a virtual server-based load balancing algorithm that strives to ensure fair load distribution among nodes and minimize load balancing cost in bandwidth. Finally, the chapter concludes with future research and a summary.
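
Of the balancing solutions surveyed, the power of two choices is the easiest to sketch. A minimal simulation, assuming uniform item arrivals and equal-capacity nodes (simplifications the chapter relaxes):

```python
import random

def assign_power_of_two(num_nodes, num_items, seed=0):
    """Place each item on the less-loaded of two randomly probed nodes
    (the 'power of two choices' heuristic)."""
    rng = random.Random(seed)
    load = [0] * num_nodes
    for _ in range(num_items):
        a, b = rng.randrange(num_nodes), rng.randrange(num_nodes)
        load[a if load[a] <= load[b] else b] += 1
    return load

def assign_single_choice(num_nodes, num_items, seed=0):
    """Baseline: hash each item to a single random node."""
    rng = random.Random(seed)
    load = [0] * num_nodes
    for _ in range(num_items):
        load[rng.randrange(num_nodes)] += 1
    return load

two = assign_power_of_two(100, 10_000)
one = assign_single_choice(100, 10_000)
# Probing two nodes yields a markedly tighter maximum load than one.
```

The second probe is what collapses the load imbalance from O(log n / log log n) to O(log log n) in the classical analysis.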

  6. Low-complex energy-aware image communication in visual sensor networks

    NASA Astrophysics Data System (ADS)

    Phamila, Yesudhas Asnath Victy; Amutha, Ramachandran

    2013-10-01

    A low-complexity, low-bit-rate, energy-efficient image compression algorithm is presented, explicitly designed for resource-constrained visual sensor networks used in surveillance, battlefield, and habitat-monitoring applications, where voluminous amounts of image data must be communicated over a bandwidth-limited wireless medium. The proposed method overcomes the energy limitation of individual nodes and is investigated in terms of image quality, entropy, processing time, overall energy consumption, and system lifetime. The algorithm is highly energy efficient and extremely fast, since it applies an energy-aware zonal binary discrete cosine transform (DCT) that computes only the few required significant coefficients and codes them using an enhanced complementary Golomb-Rice code without any floating-point operations. Experiments are performed using the Atmel ATmega128 and MSP430 processors to measure the resultant energy savings. Simulation results show that the proposed energy-aware fast zonal transform consumes only 0.3% of the energy needed by the conventional DCT. The algorithm consumes only 6% of the energy needed by the Independent JPEG Group (fast) version and suits embedded systems requiring low power consumption. The proposed scheme is unique in that it significantly enhances the lifetime of the camera sensor node and the network without the distributed processing traditionally required by existing algorithms.
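
The zonal idea (compute only the low-frequency coefficients and skip the rest) can be sketched with a floating-point DCT. The paper's transform is a binary/integer variant without floating point, so this is only a conceptual stand-in:

```python
import math

def zonal_dct_8x8(block, zone=4):
    """2-D DCT-II of an 8x8 block computing only coefficients with
    u + v < zone (a triangular zonal mask); high-frequency terms are
    skipped entirely, which is where the computational savings come from."""
    n = 8
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            if u + v >= zone:
                continue  # never computed, never coded
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            cu = math.sqrt((1 if u == 0 else 2) / n)
            cv = math.sqrt((1 if v == 0 else 2) / n)
            out[u][v] = cu * cv * s
    return out

flat = [[100] * 8 for _ in range(8)]   # a uniform 8x8 block
coeffs = zonal_dct_8x8(flat)
# All energy of a flat block lands in the DC term: coeffs[0][0] == 800.
```

Because natural-image energy concentrates in the low-frequency corner, discarding the rest of the zone costs little quality while avoiding most of the arithmetic.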

  7. Photonic quantum state transfer between a cold atomic gas and a crystal.

    PubMed

    Maring, Nicolas; Farrera, Pau; Kutluer, Kutlu; Mazzera, Margherita; Heinze, Georg; de Riedmatten, Hugues

    2017-11-22

    Interfacing fundamentally different quantum systems is key to building future hybrid quantum networks. Such heterogeneous networks offer capabilities superior to those of their homogeneous counterparts, as they merge the individual advantages of disparate quantum nodes in a single network architecture. However, few investigations of optical hybrid interconnections have been carried out, owing to fundamental and technological challenges such as wavelength and bandwidth matching of the interfacing photons. Here we report optical quantum interconnection of two disparate matter quantum systems with photon storage capabilities. We show that a quantum state can be transferred faithfully between a cold atomic ensemble and a rare-earth-doped crystal by means of a single photon at 1,552 nanometre telecommunication wavelength, using cascaded quantum frequency conversion. We demonstrate that quantum correlations between a photon and a single collective spin excitation in the cold atomic ensemble can be transferred to the solid-state system. We also show that single-photon time-bin qubits generated in the cold atomic ensemble can be converted, stored and retrieved from the crystal with a conditional qubit fidelity of more than 85 per cent. Our results open up the prospect of optically connecting quantum nodes with different capabilities and represent an important step towards the realization of large-scale hybrid quantum networks.

  8. Benchmarking NWP Kernels on Multi- and Many-core Processors

    NASA Astrophysics Data System (ADS)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc., (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare the effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.
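
Goal (1), characterizing kernels by computational intensity and memory bandwidth pressure, is the roofline idea. A minimal sketch with illustrative numbers (not WRF measurements):

```python
def attainable_gflops(intensity_flops_per_byte, peak_gflops, mem_bw_gbs):
    """Roofline estimate: delivered performance is capped by either the
    peak compute rate or memory bandwidth times arithmetic intensity."""
    return min(peak_gflops, intensity_flops_per_byte * mem_bw_gbs)

# A stencil-like kernel at 0.5 flop/byte on a node with 100 GFLOP/s peak
# and 25 GB/s of memory bandwidth is bandwidth-bound:
est = attainable_gflops(0.5, peak_gflops=100.0, mem_bw_gbs=25.0)
# est is 12.5 GFLOP/s, far below peak, so tuning should target memory traffic.
```

Kernels whose intensity places them left of the roofline knee gain more from data-layout and blocking optimizations than from wider SIMD.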

  9. Preoperative Prediction of Node-Negative Disease After Neoadjuvant Chemotherapy in Patients Presenting with Node-Negative or Node-Positive Breast Cancer.

    PubMed

    Murphy, Brittany L; Hoskin, Tanya L; Day, Courtney N; Habermann, Elizabeth B; Boughey, Judy C

    2017-09-01

    Axillary node status after neoadjuvant chemotherapy (NAC) influences the axillary surgical staging procedure as well as recommendations regarding reconstruction and radiation. Our aim was to construct a clinical preoperative prediction model to identify the likelihood of patients being node negative after NAC. Using the National Cancer Database (NCDB) from January 2010 to December 2012, we identified cT1-T4c, N0-N3 breast cancer patients treated with NAC. The effects of patient and tumor factors on pathologic node status were assessed by multivariable logistic regression separately for clinically node negative (cN0) and clinically node positive (cN+) disease, and two models were constructed. Model performance was validated in a cohort of NAC patients treated at our institution (January 2013-July 2016), and model discrimination was assessed by estimating the area under the curve (AUC). Of 16,153 NCDB patients, 6659 (41%) were cN0 and 9494 (59%) were cN+. Factors associated with pathologic nodal status and included in the models were patient age, tumor grade, biologic subtype, histology, clinical tumor category, and, in cN+ patients only, clinical nodal category. The validation dataset included 194 cN0 and 180 cN+ patients. The cN0 model demonstrated good discrimination, with an AUC of 0.73 (95% confidence interval [CI] 0.72-0.74) in the NCDB and 0.77 (95% CI 0.68-0.85) in the external validation, while the cN+ patient model AUC was 0.71 (95% CI 0.70-0.72) in the NCDB and 0.74 (95% CI 0.67-0.82) in the external validation. We constructed two models that showed good discrimination for predicting ypN0 status following NAC in cN0 and cN+ patients. These clinically useful models can guide surgical planning after NAC.
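
The abstract gives the model form (multivariable logistic regression) but not the fitted coefficients, so the sketch below uses invented coefficients purely to show how such a nomogram-style predictor is evaluated; the signs are plausible but are not from the paper.

```python
import math

# Hypothetical coefficients for illustration only; the NCDB-fitted
# values are not reported in the abstract.
COEFS = {"intercept": -0.4, "age_per_decade": -0.05,
         "her2_positive": 0.9, "triple_negative": 0.6,
         "ct_category": -0.35}

def prob_ypN0(age, her2_positive, triple_negative, ct_category):
    """Probability of pathologically node-negative (ypN0) status after
    NAC from a logistic model of the same *form* as the paper's models."""
    z = (COEFS["intercept"]
         + COEFS["age_per_decade"] * age / 10
         + COEFS["her2_positive"] * her2_positive
         + COEFS["triple_negative"] * triple_negative
         + COEFS["ct_category"] * ct_category)
    return 1 / (1 + math.exp(-z))

p = prob_ypN0(age=50, her2_positive=1, triple_negative=0, ct_category=2)
```

A nomogram is simply a graphical reading of the linear predictor `z`: each factor contributes points, and the total maps to a probability through the logistic curve.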

  10. Continuum Modeling and Control of Large Nonuniform Wireless Networks via Nonlinear Partial Differential Equations

    DOE PAGES

    Zhang, Yang; Chong, Edwin K. P.; Hannig, Jan; ...

    2013-01-01

    We introduce a continuum modeling method to approximate a class of large wireless networks by nonlinear partial differential equations (PDEs). This method is based on the convergence of a sequence of underlying Markov chains of the network indexed by N, the number of nodes in the network. As N goes to infinity, the sequence converges to a continuum limit, which is the solution of a certain nonlinear PDE. We first describe PDE models for networks with uniformly located nodes and then generalize to networks with nonuniformly located, and possibly mobile, nodes. Based on the PDE models, we develop a method to control the transmissions in nonuniform networks so that the continuum limit is invariant under perturbations in node locations. This enables the networks to maintain stable global characteristics in the presence of varying node locations.

  11. Modeling Citation Networks Based on Vigorousness and Dormancy

    NASA Astrophysics Data System (ADS)

    Wang, Xue-Wen; Zhang, Li-Jie; Yang, Guo-Hong; Xu, Xin-Jian

    2013-08-01

    In citation networks, the activity of papers usually decreases with age, and dormant papers may be rediscovered and become fashionable again. To model this phenomenon, a competition mechanism is suggested which incorporates two factors: vigorousness and dormancy. Based on this idea, a citation network model is proposed in which a node has two discrete states: vigorous and dormant. Vigorous nodes can be deactivated, and dormant nodes may be activated and become vigorous. The evolution of the network couples the addition of new nodes with state transitions of old ones. Both analytical calculation and numerical simulation show that the degree distribution of nodes in the generated networks displays clearly right-skewed behavior. In particular, scale-free networks are obtained when the deactivated vertex is selected by degree (targeted selection), while exponential networks are realized in the random-selection case. Moreover, measurements of four real-world citation networks agree well with the stochastic model.
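
A toy version of the vigorous/dormant mechanism can make the dynamics concrete. The single-citation rule and the transition rates below are simplifications for illustration, not the paper's model:

```python
import random

def grow_citation_network(n, seed=1, p_deactivate=0.3, p_wake=0.05):
    """Toy vigorous/dormant citation growth: each new paper cites one
    existing paper, only vigorous papers can be cited, vigorous papers
    may go dormant, and dormant papers may be reactivated."""
    rng = random.Random(seed)
    vigorous, dormant = [0], set()
    cites = {0: 0}                          # citations received per paper
    for new in range(1, n):
        target = rng.choice(vigorous)       # cite a random vigorous paper
        cites[target] += 1
        cites[new] = 0
        vigorous.append(new)                # new papers start vigorous
        # state transitions of old papers
        if len(vigorous) > 1 and rng.random() < p_deactivate:
            dormant.add(vigorous.pop(rng.randrange(len(vigorous) - 1)))
        if dormant and rng.random() < p_wake:
            vigorous.append(dormant.pop())  # a dormant paper is rediscovered
    return cites

cites = grow_citation_network(500)
```

Replacing the uniform choice of which vigorous node to deactivate with a degree-dependent (targeted) choice is what shifts the degree distribution from exponential toward scale-free in the paper's analysis.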

  12. Social relevance: toward understanding the impact of the individual in an information cascade

    NASA Astrophysics Data System (ADS)

    Hall, Robert T.; White, Joshua S.; Fields, Jeremy

    2016-05-01

    Information Cascades (IC) through a social network occur due to the decision of users to disseminate content. We define this decision process as User Diffusion (UD). IC models typically describe an information cascade by treating a user as a node within a social graph, where a node's reception of an idea is represented by some activation state. The probability of activation then becomes a function of a node's connectedness to other activated nodes as well as, potentially, the history of activation attempts. We enrich this Coarse-Grained User Diffusion (CGUD) model by applying actor type logics to the nodes of the graph. The resulting Fine-Grained User Diffusion (FGUD) model utilizes prior research in actor typing to generate a predictive model regarding the future influence a user will have on an Information Cascade. Furthermore, we introduce a measure of Information Resonance that is used to aid in predictions regarding user behavior.
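
A CGUD-style cascade can be sketched as an independent cascade in which activation depends only on connectivity; the FGUD refinement would condition the activation probability `p` on the receiving node's actor type. The graph and parameter below are illustrative:

```python
import random

def independent_cascade(graph, seeds, p=0.2, seed=42):
    """Coarse-grained cascade: each newly activated node gets one chance
    to activate each inactive neighbour with probability p."""
    rng = random.Random(seed)
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

chain = {i: [i + 1] for i in range(20)}   # a simple line graph
reached = independent_cascade(chain, seeds=[0], p=1.0)
# With p = 1.0, activation propagates down the entire chain.
```

In the FGUD view, `p` would become `p(type(v))`, so the predicted reach of a cascade depends on which actor types sit along its paths.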

  13. Strategic Implications of Cloud Computing for Modeling and Simulation (Briefing)

    DTIC Science & Technology

    2016-04-01

    of Promises with Cloud • Cost efficiency • Unlimited storage • Backup and recovery • Automatic software integration • Easy access to information... activities that wrap the actual exercise itself (e.g., travel for exercise support, data collection, integration, etc.). Cloud-based simulation would... requiring quick delivery rather than fewer large messages requiring high bandwidth. Cloud environments tend to be better at providing high-bandwidth

  14. Biomechanics Simulations Using Cubic Hermite Meshes with Extraordinary Nodes for Isogeometric Cardiac Modeling

    PubMed Central

    Gonzales, Matthew J.; Sturgeon, Gregory; Segars, W. Paul; McCulloch, Andrew D.

    2016-01-01

    Cubic Hermite hexahedral finite element meshes have some well-known advantages over linear tetrahedral finite element meshes in biomechanical and anatomic modeling using isogeometric analysis. These include faster convergence rates as well as the ability to easily model rule-based anatomic features such as cardiac fiber directions. However, it is not possible to create closed complex objects with only regular nodes; these objects require the presence of extraordinary nodes (nodes with 3 or >= 5 adjacent elements in 2D) in the mesh. The presence of extraordinary nodes requires new constraints on the derivatives of adjacent elements to maintain continuity. We have developed a new method that uses an ensemble coordinate frame at the nodes and a local-to-global mapping to maintain continuity. In this paper, we make use of this mapping to create cubic Hermite models of the human ventricles and a four-chamber heart. We also extend the methods to the finite element equations to perform biomechanics simulations using these meshes. The new methods are validated using simple test models and applied to anatomically accurate ventricular meshes with valve annuli to run complete cardiac-cycle simulations. PMID:27182096
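
The 1-D cubic Hermite basis underlying these elements is easy to write down (the 3-D element basis is the tensor product of these functions). Reproducing nodal values and derivatives exactly is what makes the derivative constraints at extraordinary nodes meaningful:

```python
def hermite_basis(t):
    """The four 1-D cubic Hermite basis functions on [0, 1]."""
    h00 = 2 * t**3 - 3 * t**2 + 1   # weights the value at node 0
    h10 = t**3 - 2 * t**2 + t       # weights the derivative at node 0
    h01 = -2 * t**3 + 3 * t**2      # weights the value at node 1
    h11 = t**3 - t**2               # weights the derivative at node 1
    return h00, h10, h01, h11

def hermite_interp(t, v0, d0, v1, d1):
    """Interpolate from nodal values (v0, v1) and derivatives (d0, d1)."""
    h00, h10, h01, h11 = hermite_basis(t)
    return h00 * v0 + h10 * d0 + h01 * v1 + h11 * d1

# The interpolant matches the nodal data exactly at t = 0 and t = 1,
# which is the basis of C1 continuity between adjacent elements.
mid = hermite_interp(0.5, v0=0.0, d0=0.0, v1=1.0, d1=0.0)
```

At an extraordinary node, the derivative weights of all adjacent elements must be expressed in one shared (ensemble) frame, which is the role of the local-to-global mapping described in the paper.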

  15. Macroscopic description of complex adaptive networks coevolving with dynamic node states

    NASA Astrophysics Data System (ADS)

    Wiedermann, Marc; Donges, Jonathan F.; Heitzig, Jobst; Lucht, Wolfgang; Kurths, Jürgen

    2015-05-01

    In many real-world complex systems, the time evolution of the network's structure and the dynamic state of its nodes are closely entangled. Here we study opinion formation and imitation on an adaptive complex network which is dependent on the individual dynamic state of each node and vice versa to model the coevolution of renewable resources with the dynamics of harvesting agents on a social network. The adaptive voter model is coupled to a set of identical logistic growth models and we mainly find that, in such systems, the rate of interactions between nodes as well as the adaptive rewiring probability are crucial parameters for controlling the sustainability of the system's equilibrium state. We derive a macroscopic description of the system in terms of ordinary differential equations which provides a general framework to model and quantify the influence of single node dynamics on the macroscopic state of the network. The thus obtained framework is applicable to many fields of study, such as epidemic spreading, opinion formation, or socioecological modeling.
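
A minimal coupled sketch in the spirit of the paper: an adaptive voter rule (imitate or rewire) on top of logistic resource growth with opinion-dependent harvesting. All rates and the tiny network below are illustrative, not the study's parameters.

```python
import random

def step(opinions, stocks, edges, rng, rewire_p=0.3, growth=0.2,
         cap=1.0, harvest=0.3):
    """One update: logistic resource regrowth with opinion-dependent
    harvesting, then one adaptive-voter interaction on a random edge."""
    for i, s in enumerate(stocks):
        s += growth * s * (1 - s / cap)                 # logistic regrowth
        s -= (harvest if opinions[i] else 0.5 * harvest) * s
        stocks[i] = max(s, 1e-6)
    i, j = rng.choice(edges)                            # interacting pair
    if opinions[i] != opinions[j]:
        if rng.random() < rewire_p:
            edges.remove((i, j))                        # cut discordant link
            edges.append((i, rng.randrange(len(opinions))))
        else:
            winner = i if stocks[i] > stocks[j] else j  # imitate better-off
            opinions[i] = opinions[j] = opinions[winner]

rng = random.Random(0)
opinions = [0, 1, 0, 1]          # 1 = high-harvest strategy
stocks = [0.5] * 4
edges = [(0, 1), (1, 2), (2, 3)]
for _ in range(50):
    step(opinions, stocks, edges, rng)
```

The interaction rate and `rewire_p` play the role of the paper's crucial control parameters: fast rewiring fragments the network into like-minded clusters before imitation can equilibrate the harvesting strategies.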

  16. Macroscopic description of complex adaptive networks coevolving with dynamic node states.

    PubMed

    Wiedermann, Marc; Donges, Jonathan F; Heitzig, Jobst; Lucht, Wolfgang; Kurths, Jürgen

    2015-05-01

    In many real-world complex systems, the time evolution of the network's structure and the dynamic state of its nodes are closely entangled. Here we study opinion formation and imitation on an adaptive complex network which is dependent on the individual dynamic state of each node and vice versa to model the coevolution of renewable resources with the dynamics of harvesting agents on a social network. The adaptive voter model is coupled to a set of identical logistic growth models and we mainly find that, in such systems, the rate of interactions between nodes as well as the adaptive rewiring probability are crucial parameters for controlling the sustainability of the system's equilibrium state. We derive a macroscopic description of the system in terms of ordinary differential equations which provides a general framework to model and quantify the influence of single node dynamics on the macroscopic state of the network. The thus obtained framework is applicable to many fields of study, such as epidemic spreading, opinion formation, or socioecological modeling.

  17. [Application of digital 3D technique combined with nanocarbon-aided navigation in endoscopic sentinel lymph node biopsy for breast cancer].

    PubMed

    Zhang, Pu-Sheng; Luo, Yun-Feng; Yu, Jin-Long; Fang, Chi-Hua; Shi, Fu-Jun; Deng, Jian-Wen

    2016-08-20

    To study the clinical value of digital 3D technique combined with nanocarbon-aided navigation in endoscopic sentinel lymph node biopsy for breast cancer. Thirty-nine female patients with stage I/II breast cancer admitted to our hospital between September 2014 and September 2015 were recruited. CT lymphography data of the patients were segmented to reconstruct digital 3D models, which were imported into the FreeForm Modeling Surgical System Platform for visual simulation surgery before operation. Endoscopic sentinel lymph node biopsy and endoscopic axillary lymph node dissection were then carried out, and the accuracy and clinical value of the digital 3D technique in endoscopic sentinel lymph node biopsy were analyzed. The 3D models faithfully represented the surgical anatomy of the patients and clearly displayed the 3D relationship among the sentinel lymph nodes, axillary lymph nodes, axillary vein, pectoralis major, pectoralis minor muscle and latissimus dorsi. In the biopsy, the detection rate of sentinel lymph nodes was 100% in the patients, with a coincidence rate of 87.18% (34/39), a sensitivity of 91.67% (11/12), and a false negative rate of 8.33% (1/12). Complications such as limb pain, swelling, wound infection, and subcutaneous seroma were not found in these patients 6 months after the operation. Endoscopic sentinel lymph node biopsy assisted by digital 3D technique and nanocarbon-aided navigation allows a high detection rate of sentinel lymph nodes with a high sensitivity and a low false negative rate and can serve as a new method for sentinel lymph node biopsy for breast cancer.

  18. [Establishment of lymph node metastasis of MDA-MB-231 breast cancer model in nude mice].

    PubMed

    Wang, Le; Mi, Chengrong; Wang, Wen

    2015-06-16

    To establish a lymph node metastasis model of breast cancer in nude mice using MDA-MB-231 cell lines or tumor masses. Twelve five-week-old female nude mice were randomly divided into group A (seven mice) and group B (five mice). Group A mice were injected with MDA-MB-231 cell suspension into the second right mammary fat pad. Two weeks after tumors emerged, the orthotopic tumors of two group A mice were dissected and implanted into the second right mammary fat pad of the group B mice; the remaining group A mice continued to be fed. Six weeks after inoculation, we excised the tumors and the swollen lymph nodes in the right axilla of all mice for pathological examination. ① Group A had a 7/7 tumor formation rate 7 days after implantation; group B had a 5/5 rate 5 days after implantation. ② The tumor volumes of the two groups differed significantly (P = 0.023), with group B tumors larger than those of group A. ③ Three group A mice each had one enlarged lymph node, for a lymph node enlargement rate of 3/5; only one group B mouse had an enlarged lymph node, for a rate of 1/5; the difference between the groups was not significant (P = 0.524). ④ Pathology confirmed that the tumors in both groups were invasive ductal carcinoma. The swollen lymph nodes in group A were reactive hyperplasia; the swollen lymph node in group B was metastatic. Orthotopic implantation of MDA-MB-231 tumor masses to establish a lymph node metastasis model of breast cancer in nude mice can provide a useful means to research the mechanism of lymph node metastasis in breast cancer.

  19. A model and nomogram to predict tumor site origin for squamous cell cancer confined to cervical lymph nodes.

    PubMed

    Ali, Arif N; Switchenko, Jeffrey M; Kim, Sungjin; Kowalski, Jeanne; El-Deiry, Mark W; Beitler, Jonathan J

    2014-11-15

    The current study was conducted to develop a multifactorial statistical model to predict the specific head and neck (H&N) tumor site origin in cases of squamous cell carcinoma confined to the cervical lymph nodes ("unknown primaries"). The Surveillance, Epidemiology, and End Results (SEER) database was analyzed for patients with an H&N tumor site who were diagnosed between 2004 and 2011. The SEER patients were identified according to their H&N primary tumor site and clinically positive cervical lymph node levels at the time of presentation. The SEER patient data set was randomly divided into 2 data sets for the purposes of internal split-sample validation. The effects of cervical lymph node levels, age, race, and sex on H&N primary tumor site were examined using univariate and multivariate analyses. Multivariate logistic regression models and an associated set of nomograms were developed based on relevant factors to provide probabilities of tumor site origin. Analysis of the SEER database identified 20,011 patients with H&N disease with both site-level and lymph node-level data. Sex, race, age, and lymph node levels were associated with primary H&N tumor site (nasopharynx, hypopharynx, oropharynx, and larynx) in the multivariate models. Internal validation techniques affirmed the accuracy of these models on separate data. The incorporation of epidemiologic and lymph node data into a predictive model has the potential to provide valuable guidance to clinicians in the treatment of patients with squamous cell carcinoma confined to the cervical lymph nodes. © 2014 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.

  20. Degree and wealth distribution in a network induced by wealth

    NASA Astrophysics Data System (ADS)

    Lee, Gyemin; Kim, Gwang Il

    2007-09-01

    A network induced by wealth is a social network model in which wealth induces individuals to participate as nodes, and every node in the network produces and accumulates wealth utilizing its links. More specifically, at every time step a new node is added to the network, and a link is created between one of the existing nodes and the new node. Innate wealth-producing ability is randomly assigned to every new node, and the node to be connected to the new node is chosen randomly, with odds proportional to the accumulated wealth of each existing node. Analyzing this network using the mean value and continuous flow approaches, we derive a relation between the conditional expectations of the degree and the accumulated wealth of each node. From this relation, we show that the degree distribution of the network induced by wealth is scale-free. We also show that the wealth distribution has a power-law tail and satisfies the 80/20 rule. We also show that, over the whole range, the cumulative wealth distribution exhibits the same topological characteristics as the wealth distributions of several networks based on the Bouchaud-Mézard model, even though the mechanism for producing wealth is quite different in our model. Further, we show that the cumulative wealth distribution for the poor and middle class seems likely to follow a log-normal distribution, while for the richest, the cumulative wealth distribution has a power-law behavior.
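
The growth rule (attach to an existing node with odds proportional to accumulated wealth) can be simulated directly. The production step below, where each new link lets both endpoints produce wealth once, is a simplified stand-in for the model's production mechanism:

```python
import random

def grow_wealth_network(n, seed=3):
    """Wealth-induced growth sketch: each new node links to an existing
    node chosen with probability proportional to accumulated wealth."""
    rng = random.Random(seed)
    ability = [rng.uniform(0.5, 1.5)]   # innate wealth-producing ability
    wealth = [ability[0]]
    degree = [0]
    for _ in range(1, n):
        # roulette-wheel choice with odds proportional to wealth
        r = rng.uniform(0, sum(wealth))
        acc, target = 0.0, 0
        for i, w in enumerate(wealth):
            acc += w
            if r <= acc:
                target = i
                break
        a = rng.uniform(0.5, 1.5)
        ability.append(a)
        wealth.append(a)
        degree.append(1)
        degree[target] += 1
        # the new link lets both endpoints produce wealth (simplified)
        wealth[target] += ability[target]
        wealth[-1] += a
    return degree, wealth

degree, wealth = grow_wealth_network(300)
```

Because wealth grows with degree and attachment odds grow with wealth, the loop reproduces the rich-get-richer feedback that yields the scale-free degree distribution derived in the paper.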

  1. Model Checking A Self-Stabilizing Synchronization Protocol for Arbitrary Digraphs

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2012-01-01

    This report presents the mechanical verification of a self-stabilizing distributed clock synchronization protocol for arbitrary digraphs in the absence of faults. This protocol does not rely on assumptions about the initial state of the system, other than the presence of at least one node, and no central clock or a centrally generated signal, pulse, or message is used. The system under study is an arbitrary, non-partitioned digraph ranging from fully connected to 1-connected networks of nodes while allowing for differences in the network elements. Nodes are anonymous, i.e., they do not have unique identities. There is no theoretical limit on the maximum number of participating nodes. The only constraint on the behavior of the node is that the interactions with other nodes are restricted to defined links and interfaces. This protocol deterministically converges within a time bound that is a linear function of the self-stabilization period. A bounded model of the protocol is verified using the Symbolic Model Verifier (SMV) for a subset of digraphs. Modeling challenges of the protocol and the system are addressed. The model checking effort is focused on verifying correctness of the bounded model of the protocol as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period.

  2. EMG-Torque Dynamics Change With Contraction Bandwidth.

    PubMed

    Golkar, Mahsa A; Jalaleddini, Kian; Kearney, Robert E

    2018-04-01

    An accurate model of ElectroMyoGram (EMG)-torque dynamics has many uses. One application that has gained considerable attention among researchers is estimating the muscle contraction level for the efficient control of prostheses. In this paper, the dynamic relationship between the surface EMG and torque during isometric contractions at the human ankle was studied using system identification techniques. Subjects voluntarily modulated their ankle torque in the dorsiflexion direction, by activating their tibialis anterior muscle, while tracking a pseudo-random binary sequence in a torque-matching task. The effects of contraction bandwidth, described by the torque spectrum, on EMG-torque dynamics were evaluated by varying the visual command switching time. Nonparametric impulse response functions (IRFs) were estimated between the processed surface EMG and torque. It was demonstrated that: 1) at low contraction bandwidths, the identified IRFs had unphysiological anticipatory (i.e., non-causal) components, whose amplitude decreased as the contraction bandwidth increased. We hypothesized that this non-causal behavior arose because the EMG input contained a component due to feedback from the output torque, i.e., it was recorded from within a closed loop. Vision was not the feedback source, since the non-causal behavior persisted when visual feedback was removed. Repeating the identification using a nonparametric closed-loop identification algorithm yielded causal IRFs at all bandwidths, supporting this hypothesis. 2) EMG-torque dynamics became faster and the bandwidth of the system increased as the contraction modulation rate increased. Thus, accurate prediction of torque from EMG signals must take into account the contraction-bandwidth sensitivity of this system.
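
For a white input, a nonparametric IRF can be estimated by cross-correlation, which makes the causal/non-causal distinction concrete: weight at negative lags signals feedback from output to input. The open-loop simulation below only illustrates the estimator; the paper's closed-loop identification requires more machinery.

```python
import random

def estimate_irf(u, y, length):
    """Cross-correlation IRF estimate for a zero-mean white input:
    h[k] ~ E[u[t] * y[t+k]] / var(u), over lags -length..+length.
    A causal system shows negligible weight at negative lags."""
    n = len(u)
    var_u = sum(x * x for x in u) / n
    irf = []
    for k in range(-length, length + 1):
        s = sum(u[t] * y[t + k] for t in range(max(0, -k), n - max(0, k)))
        irf.append(s / ((n - abs(k)) * var_u))
    return irf  # irf[length] is the zero-lag weight

rng = random.Random(0)
u = [rng.gauss(0, 1) for _ in range(5000)]
# simulated open-loop system: y[t] = 1.0*u[t] + 0.5*u[t-1]
y = [u[t] + 0.5 * (u[t - 1] if t > 0 else 0) for t in range(len(u))]
irf = estimate_irf(u, y, length=3)
# irf recovers ~1.0 at lag 0 and ~0.5 at lag 1, with ~0 at negative lags.
```

If `u` also contained a filtered copy of past `y` (a feedback path), the negative-lag entries would become systematically nonzero, which is the anticipatory component the authors observed.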

  3. Features and heterogeneities in growing network models

    NASA Astrophysics Data System (ADS)

    Ferretti, Luca; Cortelezzi, Michele; Yang, Bin; Marmorini, Giacomo; Bianconi, Ginestra

    2012-06-01

    Many complex networks, from the World Wide Web to biological networks, grow taking into account the heterogeneous features of the nodes. The feature of a node might be a discrete quantity, such as the classification of a URL document (personal page, thematic website, news, blog, search engine, social network, etc.) or the classification of a gene in a functional module. Moreover, the feature of a node can be a continuous variable, such as the position of the node in the embedding space. In order to account for these properties, in this paper we provide a generalization of growing network models with preferential attachment that includes the effect of heterogeneous features of the nodes. The main effect of heterogeneity is the emergence of an "effective fitness" for each class of nodes, determining the rate at which nodes acquire new links. The degree distribution exhibits a multiscaling behavior analogous to the fitness model. This property is robust with respect to variations in the model, as long as links are assigned through effective preferential attachment. Beyond the degree distribution, in this paper we give a full characterization of the other relevant properties of the model. We evaluate the clustering coefficient and show that it disappears for large network size, a property shared with the Barabási-Albert model. Negative degree correlations are also present in this class of models, along with nontrivial mixing patterns among features. We therefore conclude that both small clustering coefficients and disassortative mixing are outcomes of the preferential attachment mechanism in general growing networks.

  4. Time and Energy Efficient Relay Transmission for Multi-Hop Wireless Sensor Networks.

    PubMed

    Kim, Jin-Woo; Barrado, José Ramón Ramos; Jeon, Dong-Keun

    2016-06-27

    The IEEE 802.15.4 standard is widely recognized as one of the most successful enabling technologies for short range low rate wireless communications and it is used in IoT applications. It covers all the details related to the MAC and PHY layers of the IoT protocol stack. Due to the nature of IoT, the wireless sensor networks are autonomously self-organized networks without infrastructure support. One of the issues in IoT is the network scalability. To address this issue, it is necessary to support the multi-hop topology. The IEEE 802.15.4 network can support a star, peer-to-peer, or cluster-tree topology. One of the IEEE 802.15.4 topologies suited for the high predictability of performance guarantees and energy efficient behavior is a cluster-tree topology where sensor nodes can switch off their transceivers and go into a sleep state to save energy. However, the IEEE 802.15.4 cluster-tree topology may not be able to provide sufficient bandwidth for the increased traffic load and the additional information may not be delivered successfully. The common drawback of the existing approaches is that they do not address the poor bandwidth utilization problem in IEEE 802.15.4 cluster-tree networks, so it is difficult to increase the network performance. Therefore, to solve this problem in this paper we study a relay transmission protocol based on the standard protocol in the IEEE 802.15.4 MAC. In the proposed scheme, the coordinators can relay data frames to their parent devices or their children devices without contention and can provide bandwidth for the increased traffic load or the number of devices. We also evaluate the performance of the proposed scheme through simulation. The simulation results demonstrate that the proposed scheme can improve the reliability, the end-to-end delay, and the energy consumption.

  5. Time and Energy Efficient Relay Transmission for Multi-Hop Wireless Sensor Networks

    PubMed Central

    Kim, Jin-Woo; Barrado, José Ramón Ramos; Jeon, Dong-Keun

    2016-01-01

    The IEEE 802.15.4 standard is widely recognized as one of the most successful enabling technologies for short range low rate wireless communications and it is used in IoT applications. It covers all the details related to the MAC and PHY layers of the IoT protocol stack. Due to the nature of IoT, the wireless sensor networks are autonomously self-organized networks without infrastructure support. One of the issues in IoT is the network scalability. To address this issue, it is necessary to support the multi-hop topology. The IEEE 802.15.4 network can support a star, peer-to-peer, or cluster-tree topology. One of the IEEE 802.15.4 topologies suited for the high predictability of performance guarantees and energy efficient behavior is a cluster-tree topology where sensor nodes can switch off their transceivers and go into a sleep state to save energy. However, the IEEE 802.15.4 cluster-tree topology may not be able to provide sufficient bandwidth for the increased traffic load and the additional information may not be delivered successfully. The common drawback of the existing approaches is that they do not address the poor bandwidth utilization problem in IEEE 802.15.4 cluster-tree networks, so it is difficult to increase the network performance. Therefore, to solve this problem in this paper we study a relay transmission protocol based on the standard protocol in the IEEE 802.15.4 MAC. In the proposed scheme, the coordinators can relay data frames to their parent devices or their children devices without contention and can provide bandwidth for the increased traffic load or the number of devices. We also evaluate the performance of the proposed scheme through simulation. The simulation results demonstrate that the proposed scheme can improve the reliability, the end-to-end delay, and the energy consumption. PMID:27355952

  6. Scale-free behavior of networks with the copresence of preferential and uniform attachment rules

    NASA Astrophysics Data System (ADS)

    Pachon, Angelica; Sacerdote, Laura; Yang, Shuyi

    2018-05-01

    Complex networks in different areas exhibit degree distributions with a heavy upper tail. A preferential attachment mechanism in a growth process produces a graph with this feature. We herein investigate a variant of the simple preferential attachment model, whose modifications are interesting for two main reasons: to analyze more realistic models and to study the robustness of the scale-free behavior of the degree distribution. We introduce and study a model which takes into account two different attachment rules: a preferential attachment mechanism (with probability 1 - p) that reinforces the rich-get-richer effect, and a uniform choice (with probability p) among the most recent nodes, i.e., the nodes belonging to a window of size w to the left of the last-born node. The latter captures a tendency to select one of the last added nodes when no information is available. The recent nodes can be either a fixed number or a proportion (αn) of the total number of existing nodes. In the first case, we prove that this model exhibits an asymptotically power-law degree distribution. The same result is then illustrated through simulations in the second case. When the window of recent nodes has a constant size, we prove that the presence of the uniform rule delays the onset of the asymptotic regime. The mean number of nodes of degree k and the asymptotic degree distribution are also determined analytically. Finally, a sensitivity analysis on the parameters of the model is performed.
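
    The growth rule described above is straightforward to simulate. A minimal sketch (the parameter values are illustrative, not taken from the paper): with probability p the new node attaches uniformly to one of the w most recent nodes, otherwise preferentially, proportional to degree:

```python
import random
from collections import Counter

def grow(n, p, w, seed=0):
    """Grow a graph node by node. Each new node brings one edge, attached
    uniformly to one of the w most recent nodes with probability p, and
    preferentially (proportional to degree) with probability 1 - p."""
    rng = random.Random(seed)
    degree = [1, 1]      # start from a single edge between nodes 0 and 1
    targets = [0, 1]     # multiset of edge endpoints: sampling from it is
                         # sampling a node with probability ~ its degree
    for new in range(2, n):
        if rng.random() < p:
            t = rng.randrange(max(0, new - w), new)  # uniform, recent window
        else:
            t = rng.choice(targets)                  # preferential
        degree.append(1)
        degree[t] += 1
        targets.extend([t, new])
    return degree

deg = grow(20_000, p=0.2, w=50)
tail = Counter(deg)
# the degree distribution keeps a heavy upper tail despite the uniform rule
```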

  7. Improvement of modulation bandwidth in electroabsorption-modulated laser by utilizing the resonance property in bonding wire.

    PubMed

    Kwon, Oh Kee; Han, Young Tak; Baek, Yong Soon; Chung, Yun C

    2012-05-21

    We present and demonstrate a simple and cost-effective technique for improving the modulation bandwidth of an electroabsorption-modulated laser (EML). This technique utilizes the RF resonance caused by the EML chip (i.e., junction capacitance) and bonding wire (i.e., wire inductance). We analyze the effects of the lengths of the bonding wires on the frequency responses of the EML by using an equivalent circuit model. To verify this analysis, we package a lumped EML chip on the sub-mount and measure its frequency responses. The results show that, by using the proposed technique, we can increase the modulation bandwidth of the EML from ~16 GHz to ~28 GHz.
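
    The resonance exploited here is the L-C peak formed by the wire inductance and the junction capacitance, which re-peaks the frequency response near f_r = 1/(2π√(L_w·C_j)). A back-of-the-envelope check with assumed values (the abstract does not give the actual chip or wire parameters):

```python
import math

# Illustrative, order-of-magnitude values only.
C_j = 0.2e-12    # EML junction capacitance [F]
L_w = 0.3e-9     # bonding-wire inductance [H] (roughly 1 nH per mm of wire)

f_r = 1.0 / (2 * math.pi * math.sqrt(L_w * C_j))   # resonance frequency [Hz]
# f_r lands near 20 GHz here; a shorter wire (smaller L_w) pushes the peak
# higher, which is how the wire length tunes where the response is boosted.
```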

  8. A transmission power optimization with a minimum node degree for energy-efficient wireless sensor networks with full-reachability.

    PubMed

    Chen, Yi-Ting; Horng, Mong-Fong; Lo, Chih-Cheng; Chu, Shu-Chuan; Pan, Jeng-Shyang; Liao, Bin-Yih

    2013-03-20

    Transmission power optimization is the most significant factor in prolonging the lifetime and maintaining the connection quality of wireless sensor networks. Un-optimized transmission power of nodes either interferes with or fails to link neighboring nodes. The optimization of transmission power depends on the expected node degree and node distribution. In this study, an optimization approach to an energy-efficient and fully reachable wireless sensor network is proposed. In the proposed approach, an adjustment model of the transmission range with a minimum node degree is proposed that focuses on topology control and optimization of the transmission range according to node degree and node density. The model adjusts the tradeoff between energy efficiency and full reachability to obtain an ideal transmission range. In addition, connectivity and reachability are used as performance indices to evaluate the connection quality of a network. The two indices are compared to demonstrate the practicability of the framework through simulation results. Furthermore, the relationship between the indices under the conditions of various node degrees is analyzed to generalize the characteristics of node densities. The research results on the reliability and feasibility of the proposed approach will benefit future real-world deployments.
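
    One way to read the range/degree trade-off: for uniformly deployed nodes, the expected node degree within range r is density·πr², so the smallest range meeting a minimum expected degree follows directly. A sketch under that uniform-deployment assumption (not the paper's exact adjustment model; the numbers are illustrative):

```python
import math

def range_for_degree(d_min, density):
    """Smallest transmission range whose disc contains d_min neighbours in
    expectation, for uniformly deployed nodes:
    E[degree] = density * pi * r**2."""
    return math.sqrt(d_min / (density * math.pi))

# e.g. 100 nodes on a 100 m x 100 m field -> density 0.01 nodes/m^2;
# a minimum expected node degree of 6 then needs r of about 13.8 m.
r = range_for_degree(6, 0.01)
```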

  9. A Transmission Power Optimization with a Minimum Node Degree for Energy-Efficient Wireless Sensor Networks with Full-Reachability

    PubMed Central

    Chen, Yi-Ting; Horng, Mong-Fong; Lo, Chih-Cheng; Chu, Shu-Chuan; Pan, Jeng-Shyang; Liao, Bin-Yih

    2013-01-01

    Transmission power optimization is the most significant factor in prolonging the lifetime and maintaining the connection quality of wireless sensor networks. Un-optimized transmission power of nodes either interferes with or fails to link neighboring nodes. The optimization of transmission power depends on the expected node degree and node distribution. In this study, an optimization approach to an energy-efficient and fully reachable wireless sensor network is proposed. In the proposed approach, an adjustment model of the transmission range with a minimum node degree is proposed that focuses on topology control and optimization of the transmission range according to node degree and node density. The model adjusts the tradeoff between energy efficiency and full reachability to obtain an ideal transmission range. In addition, connectivity and reachability are used as performance indices to evaluate the connection quality of a network. The two indices are compared to demonstrate the practicability of the framework through simulation results. Furthermore, the relationship between the indices under the conditions of various node degrees is analyzed to generalize the characteristics of node densities. The research results on the reliability and feasibility of the proposed approach will benefit future real-world deployments. PMID:23519351

  10. Global epidemic invasion thresholds in directed cattle subpopulation networks having source, sink, and transit nodes.

    PubMed

    Schumm, Phillip; Scoglio, Caterina; Zhang, Qian; Balcan, Duygu

    2015-02-21

    Through the characterization of a metapopulation cattle disease model on a directed network having source, transit, and sink nodes, we derive two global epidemic invasion thresholds. The first threshold defines the conditions necessary for an epidemic to successfully spread at the global scale. The second threshold defines the criteria that permit an epidemic to move out of the giant strongly connected component and to invade the populations of the sink nodes. As each sink node represents a final waypoint for cattle before slaughter, the existence of an epidemic among the sink nodes is a serious threat to food security. We find that the relationship between these two thresholds depends on the relative proportions of transit and sink nodes in the system and the distributions of the in-degrees of both node types. These analytic results are verified through numerical realizations of the metapopulation cattle model. Published by Elsevier Ltd.

  11. Modeling pre-metastatic lymphvascular niche in the mouse ear sponge assay

    NASA Astrophysics Data System (ADS)

    García-Caballero, Melissa; van de Velde, Maureen; Blacher, Silvia; Lambert, Vincent; Balsat, Cédric; Erpicum, Charlotte; Durré, Tania; Kridelka, Frédéric; Noel, Agnès

    2017-01-01

    Lymphangiogenesis, the formation of new lymphatic vessels, occurs in primary tumors and in draining lymph nodes, leading to pre-metastatic niche formation. Reliable in vivo models are becoming instrumental for investigating alterations occurring in lymph nodes before tumor cell arrival. In this study, we demonstrate that B16F10 melanoma cell encapsulation in a biomaterial, and implantation in the mouse ear, prevents their rapid lymphatic spread observed when cells are directly injected in the ear. Vascular remodeling in lymph nodes was detected two weeks after sponge implantation, while their colonization by tumor cells occurred two weeks later. In this model, a marked lymphangiogenic response was induced in primary tumors and in pre-metastatic and metastatic lymph nodes. In control lymph nodes, lymphatic vessels were confined to the cortex. In contrast, an enlargement and expansion of lymphatic vessels towards paracortical and medullar areas occurred in pre-metastatic lymph nodes. We designed an original computer-assisted quantification method to examine lymphatic vessel structure and spatial distribution. This new reliable and accurate model is suitable for in vivo studies of lymphangiogenesis, holds promise for unraveling the mechanisms underlying lymphatic metastases and pre-metastatic niche formation in lymph nodes, and will provide new tools for drug testing.

  12. A Fully Implemented 12 × 12 Data Vortex Optical Packet Switching Interconnection Network

    NASA Astrophysics Data System (ADS)

    Shacham, Assaf; Small, Benjamin A.; Liboiron-Ladouceur, Odile; Bergman, Keren

    2005-10-01

    A fully functional optical packet switching (OPS) interconnection network based on the data vortex architecture is presented. The photonic switching fabric uniquely capitalizes on the enormous bandwidth advantage of wavelength division multiplexing (WDM) wavelength parallelism while delivering minimal packet transit latency. Utilizing semiconductor optical amplifier (SOA)-based switching nodes and conventional fiber-optic technology, the 12-port system exhibits a capacity of nearly 1 Tb/s. Optical packets containing an eight-wavelength WDM payload with 10 Gb/s per wavelength are routed successfully to all 12 ports while maintaining a bit error rate (BER) of 10^-12 or better. Median port-to-port latencies of 110 ns are achieved with a distributed deflection routing network that resolves packet contention on-the-fly without the use of optical buffers and maintains the entire payload path in the optical domain.
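
    The quoted capacity of "nearly 1 Tb/s" is simply the product of the port count, the WDM payload wavelengths, and the per-wavelength rate stated in the abstract:

```python
ports, wavelengths, gbps_per_wavelength = 12, 8, 10
capacity_gbps = ports * wavelengths * gbps_per_wavelength
# 12 ports x 8 wavelengths x 10 Gb/s = 960 Gb/s, i.e. nearly 1 Tb/s
```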

  13. Programmable on-chip and off-chip network architecture on demand for flexible optical intra-datacenters.

    PubMed

    Rofoee, Bijan Rahimzadeh; Zervas, Georgios; Yan, Yan; Amaya, Norberto; Qin, Yixuan; Simeonidou, Dimitra

    2013-03-11

    The paper presents a novel network architecture on demand approach using on-chip and off-chip implementations, enabling programmable, highly efficient and transparent networking well suited to intra-datacenter communications. The implemented FPGA-based adaptable line-card (on-chip) and the architecture on demand (AoD) based flexible switching node (off-chip) deliver single-chip dual L2-Packet/L1-time-shared optical network (TSON) server Network Interface Cards (NICs) interconnected through a transparent AoD-based switch. It enables hitless adaptation between Ethernet over wavelength-switched network (EoWSON) and TSON-based sub-wavelength switching, providing flexible bitrates while meeting strict bandwidth and QoS requirements. The on- and off-chip performance results show high throughput (9.86 Gbps Ethernet, 8.68 Gbps TSON), high QoS, as well as hitless switch-over.

  14. Low-Frequency MEMS Electrostatic Vibration Energy Harvester With Corona-Charged Vertical Electrets and Nonlinear Stoppers

    NASA Astrophysics Data System (ADS)

    Lu, Y.; Cottone, F.; Boisseau, S.; Galayko, D.; Marty, F.; Basset, P.

    2015-12-01

    This paper reports for the first time a MEMS electrostatic vibration energy harvester (e-VEH) with corona-charged vertical electrets on its electrodes. The bandwidth of the 1-cm2 device is extended at low and high frequencies by nonlinear elastic stoppers. With a bias voltage of 46 V (electret@21 V + DC external source@25 V) between the electrodes, the RMS power of the device reaches 0.89 μW at 33 Hz and 6.6 μW at 428 Hz. The -3 dB frequency band is 223∼432 Hz when the hysteresis is included and 88∼166 Hz when it is excluded. We also demonstrate the charging of a 47 μF capacitor used to power a wireless, autonomous temperature sensor node with data transmission beyond 10 m at 868 MHz.

  15. Data driven CAN node reliability assessment for manufacturing system

    NASA Astrophysics Data System (ADS)

    Zhang, Leiming; Yuan, Yong; Lei, Yong

    2017-01-01

    The reliability of the Controller Area Network (CAN) is critical to the performance and safety of the system. However, direct bus-off time assessment tools are lacking in practice due to inaccessibility of the node information and the complexity of the node interactions upon errors. In order to measure the mean time to bus-off (MTTB) of all the nodes, a novel data driven node bus-off time assessment method for CAN network is proposed by directly using network error information. First, the corresponding network error event sequence for each node is constructed using multiple-layer network error information. Then, the generalized zero inflated Poisson process (GZIP) model is established for each node based on the error event sequence. Finally, the stochastic model is constructed to predict the MTTB of the node. The accelerated case studies with different error injection rates are conducted on a laboratory network to demonstrate the proposed method, where the network errors are generated by a computer controlled error injection system. Experiment results show that the MTTB of nodes predicted by the proposed method agree well with observations in the case studies. The proposed data driven node time to bus-off assessment method for CAN networks can successfully predict the MTTB of nodes by directly using network error event data.
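
    The bus-off event being predicted follows the CAN fault-confinement rules: a transmitter's error counter (TEC) rises by 8 on each transmit error, falls by 1 on each successful transmission, and the node goes bus-off once the TEC exceeds 255. A Monte-Carlo sketch of mean time to bus-off under an assumed i.i.d. per-frame error probability (the paper instead fits a GZIP model to measured error sequences; the error rate here is illustrative):

```python
import random

def frames_to_busoff(p_err, rng):
    """Frames until bus-off under CAN fault confinement: the transmit error
    counter (TEC) rises by 8 per transmit error, falls by 1 per success,
    and the node enters bus-off once the TEC exceeds 255."""
    tec, frames = 0, 0
    while tec <= 255:
        frames += 1
        if rng.random() < p_err:
            tec += 8
        else:
            tec = max(0, tec - 1)
    return frames

rng = random.Random(1)
mttb = sum(frames_to_busoff(0.2, rng) for _ in range(200)) / 200
# with a 20% error rate the TEC drifts up by 0.2*8 - 0.8 = +0.8 per frame,
# so the mean time to bus-off is on the order of 256 / 0.8 ~ 320 frames
```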

  16. Modelling of a bridge-shaped nonlinear piezoelectric energy harvester

    NASA Astrophysics Data System (ADS)

    Gafforelli, G.; Xu, R.; Corigliano, A.; Kim, S. G.

    2013-12-01

    Piezoelectric MicroElectroMechanical Systems (MEMS) energy harvesting is an attractive technology for harvesting small magnitudes of energy from ambient vibrations. Increasing the operating frequency bandwidth of such devices is one of the major issues for real world applications. A MEMS-scale doubly clamped nonlinear beam resonator is designed and developed to demonstrate very wide bandwidth and high power density. In this paper a first complete theoretical discussion of nonlinear resonating piezoelectric energy harvesting is provided. The sectional behaviour of the beam is studied through the Classical Lamination Theory (CLT) specifically modified to introduce the piezoelectric coupling and nonlinear Green-Lagrange strain tensor. A lumped parameter model is built through Rayleigh-Ritz Method and the resulting nonlinear coupled equations are solved in the frequency domain through the Harmonic Balance Method (HBM). Finally, the influence of external load resistance on the dynamic behaviour is studied. The theoretical model shows that nonlinear resonant harvesters have much wider power bandwidth than that of linear resonators but their maximum power is still bounded by the mechanical damping as is the case for linear resonating harvesters.

  17. The human as a detector of changes in variance and bandwidth

    NASA Technical Reports Server (NTRS)

    Curry, R. E.; Govindaraj, T.

    1977-01-01

    The detection of changes in random process variance and bandwidth was studied. Psychophysical thresholds for these two parameters were determined using an adaptive staircase technique for second order random processes at two nominal periods (1 and 3 seconds) and damping ratios (0.2 and 0.707). Thresholds for bandwidth changes were approximately 9% of nominal except for the (3 sec, 0.2) process, which yielded thresholds of 12%. Variance thresholds averaged 17% of nominal except for the (3 sec, 0.2) process, in which they were 32%. Detection times for suprathreshold changes in the parameters may be roughly described by the changes in RMS velocity of the process. A more complex model is presented which consists of a Kalman filter designed for the nominal process using velocity as the input, and a modified Wald sequential test for changes in the variance of the residual. The model predictions agree moderately well with the experimental data. Models using heuristics, e.g., level-crossing counters, were also examined and found to be descriptive, but they do not afford the unification of the Kalman filter/sequential test model used for changes in mean.
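
    The sequential stage of the model can be sketched as a standard Wald sequential probability ratio test on the residual variance; the variances, error rates, and residual sample below are illustrative, not the paper's fitted values:

```python
import math
import random

def sprt_variance(xs, s0=1.0, s1=1.5, alpha=0.05, beta=0.05):
    """Wald sequential probability ratio test on Gaussian residuals:
    H0: std = s0 versus H1: std = s1. Returns (decision, samples_used);
    decision is None if neither boundary was crossed."""
    upper = math.log((1 - beta) / alpha)    # accept H1 above this
    lower = math.log(beta / (1 - alpha))    # accept H0 below this
    llr = 0.0
    for n, x in enumerate(xs, 1):
        llr += math.log(s0 / s1) + x * x * (1 / (2 * s0**2) - 1 / (2 * s1**2))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return None, len(xs)

rng = random.Random(0)
residuals = [rng.gauss(0.0, 1.5) for _ in range(2000)]   # variance has grown
decision, used = sprt_variance(residuals)
# the test usually flags the variance change after only a handful of samples
```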

  18. Coherence bandwidth loss in transionospheric radio propagation

    NASA Technical Reports Server (NTRS)

    Rino, C. L.; Gonzalez, V. H.; Hessing, A. R.

    1980-01-01

    In this report a theoretical model is developed that predicts the single-point, two-frequency coherence function for transionospheric radio waves. The theoretical model is compared to measured complex frequency correlation coefficients using data from the seven equispaced, phase-coherent UHF signals transmitted by the Wideband satellite. The theory and data are in excellent agreement. The theory is critically dependent upon the power-law index, and the frequency coherence data clearly favor the comparatively small spectral indices that have been consistently measured from the Wideband satellite phase data. A model for estimating the pulse delay jitter induced by the coherence bandwidth loss is also developed and compared with the actual delay jitter observed on synthesized pulses obtained from the Wideband UHF comb. The results are in good agreement with the theory. The results presented in this report, which are based on an asymptotic theory, are compared with the more commonly used quadratic theory. The model developed and validated in this report can be used to predict the effects of coherence bandwidth loss in disturbed nuclear environments. Simple formulas for the resultant pulse delay jitter are derived that can be used in predictive codes.

  19. Voltage-dependent K+ channels improve the energy efficiency of signalling in blowfly photoreceptors

    PubMed Central

    2017-01-01

    Voltage-dependent conductances in many spiking neurons are tuned to reduce action potential energy consumption, so improving the energy efficiency of spike coding. However, the contribution of voltage-dependent conductances to the energy efficiency of analogue coding, by graded potentials in dendrites and non-spiking neurons, remains unclear. We investigate the contribution of voltage-dependent conductances to the energy efficiency of analogue coding by modelling blowfly R1-6 photoreceptor membrane. Two voltage-dependent delayed rectifier K+ conductances (DRs) shape the membrane's voltage response and contribute to light adaptation. They make two types of energy saving. By reducing membrane resistance upon depolarization they convert the cheap, low bandwidth membrane needed in dim light to the expensive high bandwidth membrane needed in bright light. This investment of energy in bandwidth according to functional requirements can halve daily energy consumption. Second, DRs produce negative feedback that reduces membrane impedance and increases bandwidth. This negative feedback allows an active membrane with DRs to consume at least 30% less energy than a passive membrane with the same capacitance and bandwidth. Voltage-dependent conductances in other non-spiking neurons, and in dendrites, might be organized to make similar savings. PMID:28381642
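
    The bandwidth/cost trade-off described above follows from the membrane acting as an RC low-pass filter: the corner frequency is f_c = 1/(2πRC), so lowering membrane resistance widens bandwidth but raises the ionic current (and hence the pump energy) for a given driving force. Illustrative numbers, not the paper's measured values:

```python
import math

# Illustrative values, not measurements from the paper.
C_m = 300e-12            # membrane capacitance [F]

def cutoff_hz(R):
    """Corner frequency of the membrane RC low-pass filter."""
    return 1.0 / (2 * math.pi * R * C_m)

R_dim, R_bright = 20e6, 2e6      # higher resistance (cheaper) in dim light
bw_dim, bw_bright = cutoff_hz(R_dim), cutoff_hz(R_bright)
# 10x lower resistance -> 10x wider bandwidth, but the conductance (and the
# ionic current the pumps must pay for) is 10x higher at the same voltage.
```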

  20. Voltage-dependent K+ channels improve the energy efficiency of signalling in blowfly photoreceptors.

    PubMed

    Heras, Francisco J H; Anderson, John; Laughlin, Simon B; Niven, Jeremy E

    2017-04-01

    Voltage-dependent conductances in many spiking neurons are tuned to reduce action potential energy consumption, so improving the energy efficiency of spike coding. However, the contribution of voltage-dependent conductances to the energy efficiency of analogue coding, by graded potentials in dendrites and non-spiking neurons, remains unclear. We investigate the contribution of voltage-dependent conductances to the energy efficiency of analogue coding by modelling blowfly R1-6 photoreceptor membrane. Two voltage-dependent delayed rectifier K+ conductances (DRs) shape the membrane's voltage response and contribute to light adaptation. They make two types of energy saving. By reducing membrane resistance upon depolarization they convert the cheap, low bandwidth membrane needed in dim light to the expensive high bandwidth membrane needed in bright light. This investment of energy in bandwidth according to functional requirements can halve daily energy consumption. Second, DRs produce negative feedback that reduces membrane impedance and increases bandwidth. This negative feedback allows an active membrane with DRs to consume at least 30% less energy than a passive membrane with the same capacitance and bandwidth. Voltage-dependent conductances in other non-spiking neurons, and in dendrites, might be organized to make similar savings. © 2017 The Author(s).

  1. Trust recovery model of Ad Hoc network based on identity authentication scheme

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Huan, Shuiyuan

    2017-05-01

    Mobile Ad Hoc network trust models are widely used to address mobile Ad Hoc network security issues. To counter the reduction in network availability caused by malicious and selfish nodes in trust-model-based mobile Ad Hoc network routing, an identity-authentication-based mechanism for mobile Ad Hoc networks is proposed. The mechanism uses identity authentication to identify malicious nodes and recovers the trust of selfish nodes, with the aim of reducing network congestion and improving network quality. The simulation results show that the mechanism can effectively improve network availability and security.

  2. A Collaborative Secure Localization Algorithm Based on Trust Model in Underwater Wireless Sensor Networks

    PubMed Central

    Han, Guangjie; Liu, Li; Jiang, Jinfang; Shu, Lei; Rodrigues, Joel J.P.C.

    2016-01-01

    Localization is one of the hottest research topics in Underwater Wireless Sensor Networks (UWSNs), since many important applications of UWSNs, e.g., event sensing, target tracking and monitoring, require location information of sensor nodes. Nowadays, a large number of localization algorithms have been proposed for UWSNs, and how to improve location accuracy has been well studied. However, few of them take location reliability or security into consideration. In this paper, we propose a Collaborative Secure Localization algorithm based on Trust model (CSLT) for UWSNs to ensure location security. Based on the trust model, the secure localization process can be divided into the following five sub-processes: trust evaluation of anchor nodes, initial localization of unknown nodes, trust evaluation of reference nodes, selection of reference nodes, and secondary localization of unknown nodes. Simulation results demonstrate that the proposed CSLT algorithm performs better than the compared related works in terms of location security, average localization accuracy and localization ratio. PMID:26891300
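
    A minimal sketch of the general idea (hypothetical code, not the CSLT algorithm itself): discard anchors whose trust falls below a threshold, then localize from the remaining anchors by linearized least-squares trilateration:

```python
def localize(anchors, trust_min=0.7):
    """Hypothetical sketch: keep anchors with trust >= trust_min, then solve
    2-D trilateration by linear least squares.
    anchors: list of (x, y, measured_distance, trust)."""
    kept = [(x, y, d) for x, y, d, t in anchors if t >= trust_min]
    (x1, y1, d1), rest = kept[0], kept[1:]
    # Linearize by subtracting the first anchor's circle equation:
    # 2(xi-x1)X + 2(yi-y1)Y = d1^2 - di^2 + xi^2 - x1^2 + yi^2 - y1^2
    A = [(2 * (x - x1), 2 * (y - y1)) for x, y, _ in rest]
    b = [d1**2 - d**2 + x**2 - x1**2 + y**2 - y1**2 for x, y, d in rest]
    # Solve the 2x2 normal equations by hand to stay dependency-free.
    s11 = sum(ax * ax for ax, _ in A)
    s12 = sum(ax * ay for ax, ay in A)
    s22 = sum(ay * ay for _, ay in A)
    t1 = sum(ax * v for (ax, _), v in zip(A, b))
    t2 = sum(ay * v for (_, ay), v in zip(A, b))
    det = s11 * s22 - s12 * s12
    return ((s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det)

# True position (2, 3); the last, low-trust anchor reports a bogus distance
# and is filtered out before localization.
est = localize([(0, 0, 13**0.5, 0.9), (10, 0, 73**0.5, 0.9),
                (0, 10, 53**0.5, 0.9), (10, 10, 113**0.5, 0.9),
                (5, 5, 50.0, 0.2)])
```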

  3. Endometrial Stromal Cells and Immune Cell Populations Within Lymph Nodes in a Nonhuman Primate Model of Endometriosis

    PubMed Central

    Fazleabas, A. T.; Braundmeier, A. G.; Markham, R.; Fraser, I. S.; Berbic, M.

    2011-01-01

    Mounting evidence suggests that immunological responses may be altered in endometriosis. The baboon (Papio anubis) is generally considered the best model of endometriosis pathogenesis. The objective of the current study was to investigate for the first time immunological changes within uterine and peritoneal draining lymph nodes in a nonhuman primate baboon model of endometriosis. Paraffin-embedded femoral lymph nodes were obtained from 22 normally cycling female baboons (induced endometriosis n = 11; control n = 11). Immunohistochemical staining was performed with antibodies for endometrial stromal cells, T cells, immature and mature dendritic cells, and B cells. Lymph nodes were evaluated using an automated cellular imaging system. Endometrial stromal cells were significantly increased in lymph nodes from animals with induced endometriosis, compared to control animals (P = .033). In animals with induced endometriosis, some lymph node immune cell populations including T cells, dendritic cells and B cells were increased, suggesting an efficient early response or peritoneal drainage. PMID:21617251

  4. High-frequency Ultrasound Imaging of Mouse Cervical Lymph Nodes.

    PubMed

    Walk, Elyse L; McLaughlin, Sarah L; Weed, Scott A

    2015-07-25

    High-frequency ultrasound (HFUS) is widely employed as a non-invasive method for imaging internal anatomic structures in experimental small animal systems. HFUS has the ability to detect structures as small as 30 µm, a property that has been utilized for visualizing superficial lymph nodes in rodents in brightness (B)-mode. Combining power Doppler with B-mode imaging allows for measuring circulatory blood flow within lymph nodes and other organs. While HFUS has been utilized for lymph node imaging in a number of mouse model systems, a detailed protocol describing HFUS imaging and characterization of the cervical lymph nodes in mice has not been reported. Here, we show that HFUS can be adapted to detect and characterize cervical lymph nodes in mice. Combined B-mode and power Doppler imaging can be used to detect increases in blood flow in immunologically-enlarged cervical nodes. We also describe the use of B-mode imaging to conduct fine needle biopsies of cervical lymph nodes to retrieve lymph tissue for histological analysis. Finally, software-aided steps are described to calculate changes in lymph node volume and to visualize changes in lymph node morphology following image reconstruction. The ability to visually monitor changes in cervical lymph node biology over time provides a simple and powerful technique for the non-invasive monitoring of cervical lymph node alterations in preclinical mouse models of oral cavity disease.

  5. Implementation and Characterization of Three-Dimensional Particle-in-Cell Codes on Multiple-Instruction-Multiple-Data Massively Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Lyster, P. M.; Liewer, P. C.; Decyk, V. K.; Ferraro, R. D.

    1995-01-01

    A three-dimensional electrostatic particle-in-cell (PIC) plasma simulation code has been developed on coarse-grain distributed-memory massively parallel computers with message passing communications. Our implementation is the generalization to three dimensions of the general concurrent particle-in-cell (GCPIC) algorithm. In the GCPIC algorithm, the particle computation is divided among the processors using a domain decomposition of the simulation domain. In a three-dimensional simulation, the domain can be partitioned into one-, two-, or three-dimensional subdomains ("slabs," "rods," or "cubes") and we investigate the efficiency of the parallel implementation of the push for all three choices. The present implementation runs on the Intel Touchstone Delta machine at Caltech, a multiple-instruction-multiple-data (MIMD) parallel computer with 512 nodes. We find that the parallel efficiency of the push is very high, with the ratio of communication to computation time in the range 0.3%-10.0%. The highest efficiency (> 99%) occurs for a large, scaled problem with 64^3 particles per processing node (approximately 134 million particles on 512 nodes), which has a push time of about 250 ns per particle per time step. We have also developed expressions for the timing of the code which are a function of both code parameters (number of grid points, particles, etc.) and machine-dependent parameters (effective FLOP rate, and the effective interprocessor bandwidths for the communication of particles and grid points). These expressions can be used to estimate the performance of scaled problems--including those with inhomogeneous plasmas--on other parallel machines once the machine-dependent parameters are known.
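
    With no overlap of communication and computation, the reported communication-to-computation ratios map directly onto parallel push efficiency:

```python
def push_efficiency(comm_to_comp):
    """Efficiency of the particle push when communication is not overlapped
    with computation: eff = t_comp / (t_comp + t_comm)."""
    return 1.0 / (1.0 + comm_to_comp)

# The reported comm/comp ratios span 0.3% to 10.0%:
best, worst = push_efficiency(0.003), push_efficiency(0.10)
# best exceeds 99%, consistent with the highest-efficiency scaled case.
```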

  6. Portable real-time optical coherence tomography system for intraoperative imaging and staging of breast cancer

    NASA Astrophysics Data System (ADS)

    Nguyen, Freddy T.; Zysk, Adam M.; Kotynek, Jan G.; Bellafiore, Frank J.; Rowland, Kendrith M.; Johnson, Patricia A.; Chaney, J. Eric; Boppart, Stephen A.

    2007-02-01

    Breast cancer continues to be one of the most widely diagnosed cancers among women and the second leading cause of cancer death among women. The recurrence rate of breast cancer is highly dependent on several factors, including the complete removal of the primary tumor and the presence of cancer cells in involved lymph nodes. The metastatic spread and staging of breast cancer is also evaluated through nodal assessment of the regional lymphatic system. A portable real-time spectral domain optical coherence tomography (OCT) system is presented as a clinical diagnostic tool for the intraoperative delineation of tumor margins as well as for real-time lymph node assessment. The system employs a superluminescent diode centered at 1310 nm with a bandwidth of 92 nm. Using a spectral domain detection system, the data is acquired at a rate of 5 kHz per axial scan. The sample arm is a galvanometer scanning telecentric probe with an objective lens (f = 60 mm, confocal parameter = 1.5 mm) yielding an axial resolution of 8.3 μm and a transverse resolution of 35.0 μm. Images of tumor margins are acquired in the operating room ex vivo on freshly excised human tissue specimens. This data shows the potential of OCT for defining structural tumor margins in breast cancer. Images taken from ex vivo samples on the bench system clearly delineate the differences between clusters of tumor cells and nearby adipose cells. In addition, the data shows the potential for OCT as a diagnostic tool in the staging of cancer metastasis through locoregional lymph node assessment.
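
    The quoted axial resolution is consistent with the standard Gaussian-source coherence-length formula, δz = (2 ln 2 / π) · λ₀² / Δλ, applied to the stated source:

```python
import math

lam0 = 1310e-9    # centre wavelength [m]
dlam = 92e-9      # spectral bandwidth (FWHM) [m]

# Axial resolution for a Gaussian source spectrum:
dz = (2 * math.log(2) / math.pi) * lam0**2 / dlam
# dz evaluates to ~8.2e-6 m, matching the quoted 8.3 um axial resolution
```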

  7. Modular Seafloor and Water Column Systems for the Ocean Observatories Initiative Cabled Array

    NASA Astrophysics Data System (ADS)

    Delaney, J. R.; Manalang, D.; Harrington, M.; Tilley, J.; Dosher, J.; Cram, G.; Harkins, G.; McGuire, C.; Waite, P.; McRae, E.; McGinnis, T.; Kenney, M.; Siani, C.; Michel-Hart, N.; Denny, S.; Boget, E.; Kawka, O. E.; Daly, K. L.; Luther, D. S.; Kelley, D. S.; Milcic, M.

    2016-02-01

    Over the past decade, cabled ocean observatories have become an increasingly important way to collect continuous real-time data at remote subsea locations. This has led to the development of a class of subsea systems designed and built specifically to distribute power and bandwidth among sensing instrumentation on the seafloor and throughout the water column. Such systems are typically powered by shore-based infrastructure and involve networks of fiber optic and electrical cabling that provide real-time data access and control of remotely deployed instrumentation. Several subsea node types were developed and/or adapted for cabled use in order to complete the installation of the largest North American scientific cabled observatory in October 2014. The Ocean Observatories Initiative (OOI) Cabled Array, funded by the US National Science Foundation, consists of a core infrastructure that includes 900 km of fiber optic/electrical cables, seven primary nodes, 18 seafloor junction boxes, three mooring-mounted winched profiling systems, and three wire-crawling profiler systems. In aggregate, the installed infrastructure has 200 dedicated scientific instrument ports (of which 120 are currently assigned), and is capable of further expansion. The installed system has a 25-year design life for reliable, sustained monitoring; and all nodes, profilers and instrument packages are ROV-serviceable. Now in its second year of operation, the systems that comprise the Cabled Array are providing reliable, 24/7 real-time data collection from deployed instrumentation, and offer a modular and scalable class of subsea systems for ocean observing. This presentation will provide an overview of the observatory-class subsystems of the OOI Cabled Array, focusing on the junction boxes, moorings and profilers that power and communicate with deployed instrumentation.

  8. The use of interaural parameters during incoherence detection in reproducible noise

    NASA Astrophysics Data System (ADS)

    Goupell, Matthew Joseph

    Interaural incoherence is a measure of the dissimilarity of the signals in the left and right ears. It is important in a number of acoustical phenomena, such as a listener's sensation of envelopment and apparent source width in room acoustics, speech intelligibility, and binaural release from energetic masking. Humans are incredibly sensitive to the difference between perfectly coherent and slightly incoherent signals; however, the nature of this sensitivity is not well understood. The purpose of this dissertation is to understand what parameters are important to incoherence detection. Incoherence is perceived to have time-varying characteristics. It is conjectured that incoherence detection is performed by a process that takes this time dependency into account. Left-ear/right-ear noise-pairs were generated, all with a fixed value of interaural coherence, 0.9922. The noises had a center frequency of 500 Hz, a bandwidth of 14 Hz, and a duration of 500 ms. Listeners were required to discriminate between these slightly incoherent noises and diotic noises, with a coherence of 1.0. It was found that the value of interaural incoherence itself was an inadequate predictor of discrimination. Instead, incoherence was much more readily detected for those noise-pairs with the largest fluctuations in interaural phase and level differences (as measured by the standard deviation). Noise-pairs with the same value of coherence and a geometric mean frequency of 500 Hz were also generated for bandwidths of 108 Hz and 2394 Hz. It was found that with increasing bandwidth, fluctuations in interaural differences varied less between different noise-pairs, and detection performance varied less as well. The results suggest that incoherence detection is based on the size and the speed of interaural fluctuations, and that the value of coherence itself predicts performance only in the wide-band limit, where different particular noises with the same incoherence have similar fluctuations.
Noise-pairs with short durations of 100, 50, and 25 ms, and bandwidth of 14 Hz, and a coherence of 0.9922 were used to test if a short-term incoherence function is used in incoherence detection. It was found that listeners could significantly use fluctuations of phase and level to detect incoherence for all three of these short durations. Therefore, a short-term coherence function is not used to detect incoherence. For the smallest duration of 25 ms, listeners' detection cue sometimes changed from a "width" cue to a lateralization cue. Modeling of the data was performed. Ten different binaural models were tested against detection data for 14-Hz and 108-Hz bandwidths. These models included different types of binaural processing: independent interaural phase and level differences, lateral position, and short-term cross-correlation. Several preprocessing features were incorporated in the models: compression, temporal averaging, and envelope weighting. For the 14-Hz bandwidth data, the most successful model assumed independent centers for interaural phase and interaural level processing, and this model correlated with detectability at r = 0.87. That model also described the data best when it was assumed that interaural phase fluctuations and interaural level fluctuations contribute approximately equally to incoherence detection. For the 108-Hz bandwidth data, detection performance varied much less among different waveforms, and the data were less able to distinguish between models.
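    The fixed-coherence stimulus construction described above can be sketched numerically. The following is a minimal illustration (using broadband Gaussian noise rather than the 14-Hz narrowband stimuli of the study; the mixing coefficients follow the standard construction for a target correlation, and coherence is computed as the normalized zero-lag cross-correlation):

```python
import math
import random

def coherence(left, right):
    """Normalized zero-lag cross-correlation of the two ear signals."""
    num = sum(l * r for l, r in zip(left, right))
    den = math.sqrt(sum(l * l for l in left) * sum(r * r for r in right))
    return num / den

random.seed(0)
n = 200_000
common = [random.gauss(0, 1) for _ in range(n)]
# Mix a common component with independent noise in one ear; the target
# coherence 0.9922 matches the value used in the experiments above.
rho = 0.9922
a, b = rho, math.sqrt(1 - rho * rho)
left = common
right = [a * c + b * random.gauss(0, 1) for c in common]
print(round(coherence(left, right), 3))
```

    With mixing weights a = rho and b = sqrt(1 - rho^2), the expected coherence of the pair equals rho, here 0.9922.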

  9. Static and Dynamic Effects of Lateral Carrier Diffusion in Semiconductor Lasers

    NASA Technical Reports Server (NTRS)

    Li, Jian-Zhong; Cheung, Samson H.; Ning, C. Z.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Electron and hole diffusion in the plane of semiconductor quantum wells plays an important role in the static and dynamic operation of semiconductor lasers. It is well known that the value of the diffusion coefficient affects the threshold pumping current of a semiconductor laser. At the same time, the strength of the carrier diffusion process is expected to affect the modulation bandwidth of an AC-modulated laser. It is important not only to investigate the combined DC and AC effects of carrier diffusion, but also to separate the AC effects from the combined effects in order to provide design insights for high-speed modulation. In this presentation, we apply a hydrodynamic model recently derived by the present authors from the semiconductor Bloch equations. The model allows microscopic calculation of the lateral carrier diffusion coefficient, which is a nonlinear function of the carrier density and plasma temperature. We first studied the combined AC and DC effects of lateral carrier diffusion by examining the bandwidth dependence on the diffusion coefficient at a given DC current under small-signal modulation. The results show an increase in modulation bandwidth with decreasing diffusion coefficient. We simultaneously studied the effects of nonlinearity in the diffusion coefficient. To clearly identify how much of the bandwidth increase results from the decrease in threshold pumping current at smaller diffusion coefficients, and thus from an effective increase in DC pumping, we also studied the bandwidth dependence on the diffusion coefficient at a given relative pumping. A detailed comparison of the two cases will be presented.

  10. Distributed Time Synchronization Algorithms and Opinion Dynamics

    NASA Astrophysics Data System (ADS)

    Manita, Anatoly; Manita, Larisa

    2018-01-01

    We propose new deterministic and stochastic models for synchronization of clocks in nodes of distributed networks. An external accurate time server is used to ensure convergence of the node clocks to the exact time. These systems have much in common with mathematical models of opinion formation in multiagent systems. There is a direct analogy between the time server/node clocks pair in asynchronous networks and the leader/follower pair in the context of social network models.
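    The leader/follower analogy can be illustrated with a minimal deterministic update rule (the topology, gain and update form here are illustrative assumptions, not the paper's model): each node averages toward its neighbors, and nodes that can hear the time server are additionally pulled toward the exact time.

```python
# Minimal deterministic sketch of server-driven clock synchronization:
# each node nudges its clock toward the average of its neighbors, and
# nodes that hear the time server also nudge toward the exact time.
# Names and gains here are illustrative, not taken from the paper.

def step(clocks, neighbors, server_time, hears_server, gain=0.3):
    new = []
    for i, x in enumerate(clocks):
        avg = sum(clocks[j] for j in neighbors[i]) / len(neighbors[i])
        x = x + gain * (avg - x)
        if i in hears_server:
            x = x + gain * (server_time - x)
        new.append(x)
    return new

clocks = [0.0, 5.0, -3.0, 8.0]             # initial node clock offsets
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
hears_server = {0}                          # only node 0 reaches the server
for _ in range(500):
    clocks = step(clocks, neighbors, server_time=10.0,
                  hears_server=hears_server)
print([round(c, 2) for c in clocks])
```

    Even though only one node contacts the server, the averaging dynamics propagate the correction, so every clock converges to the exact time, mirroring leader-driven consensus in opinion dynamics.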

  11. On the motion of substance in a channel of a network and human migration

    NASA Astrophysics Data System (ADS)

    Vitanov, Nikolay K.; Vitanov, Kaloyan N.

    2018-01-01

    We model the motion of a substance in a channel of a network that consists of a chain of (i) nodes of the network and (ii) edges that connect the nodes and form the pathway for motion of the substance. The nodes of the channel can have different "leakage", i.e., some amount of the substance can leave the channel at a node, and the rate of leaving can differ among the nodes of the channel. The nodes close to the end of the channel may, for some (design or other) reason, be more "attractive" for the substance in comparison to the nodes around the incoming node of the channel. We discuss channels containing an infinite or a finite number of nodes. The main outcome of the model is the distribution of the substance along the nodes. Two regimes of functioning of the channels are studied: a stationary regime and a non-stationary regime. The distribution of the substance along the nodes of the channel in the stationary regime is a distribution with a very long tail that contains as a particular case the Waring distribution (for a channel with an infinite number of nodes) or the truncated Waring distribution (for a channel with a finite number of nodes). In the non-stationary regime of functioning of the channel, one observes an exponential increase or exponential decrease of the amount of substance in the nodes. However, the asymptotic distribution of the substance among the nodes of the channel in this regime remains stationary. The studied model is applied to the case of migration of humans through a migration channel consisting of a chain of countries. In this case the model accounts for the number of migrants entering the channel through the first country of the channel; the permeability of the borders between the countries; the possibly large attractiveness of some countries of the channel; and the possibility for migrants to obtain permission to reside in a country of the channel. The main outcome of the model is the distribution of migrants along the countries of the channel.
    We discuss the conditions for concentration of migrants in a selected country of the channel. Finally, two scenarios for changing the conditions of functioning of the channel are discussed. It is shown that, from the point of view of decreasing the number of migrants in the countries of the channel, it is more effective to concentrate efforts on preventing the entrance of migrants into the first country of the channel than on decreasing the permeability of the borders between the countries of the channel.
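    A minimal numerical sketch of the stationary regime for a finite channel (all rates below are invented for illustration; the paper derives the general truncated-Waring form): at stationarity, the flow arriving at each node balances its forward flow plus its leakage.

```python
# Illustrative sketch of the stationary substance distribution in a finite
# channel: substance enters node 0 at rate `inflow`, moves to the next node
# at rate `forward`, and leaks out of node i at rate `leak[i]`.  The last
# node's forward flow simply leaves the channel.

def stationary_amounts(inflow, forward, leak):
    amounts = []
    carry = inflow                       # flow arriving at the current node
    for g in leak:
        x = carry / (forward + g)        # balance: inflow = forward + leakage
        amounts.append(x)
        carry = forward * x              # what moves on to the next node
    return amounts

leak = [0.1, 0.2, 0.4, 0.8, 1.6]         # leakage grows toward the channel end
x = stationary_amounts(inflow=1.0, forward=1.0, leak=leak)
print([round(v, 3) for v in x])
```

    The amounts decay monotonically along the chain, and total inflow exactly equals total leakage plus the flow out of the last node, which is the mass-balance property the stationary regime requires.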

  12. Modelling high data rate communication network access protocol

    NASA Technical Reports Server (NTRS)

    Khanna, S.; Foudriat, E. C.; Paterra, Frank; Maly, Kurt J.; Overstreet, C. Michael

    1990-01-01

    Modeling of high data rate communication systems differs from that of low data rate systems. Three simulations were built during the development phase of Carrier Sensed Multiple Access/Ring Network (CSMA/RN) modeling. The first was a model written in SIMSCRIPT based upon the determination and processing of each event at each node. The second simulation was developed in C based upon isolating the distinct objects that can be identified: the ring, the message, the node, and the set of critical events. The third model further distilled the basic network functionality by creating a single object, the node, which includes the set of critical events that occur at the node; the ring structure is implicit in the node structure. This model was also built in C. Each model is discussed and their features compared. The language for each model was selected mainly on the basis of the developer's familiarity with it. Further, the models were not built with the intent to compare structure or language; rather, because the problem was complex and initial results contained obvious errors, alternative models were built to isolate, identify, and correct programming and modeling errors. The CSMA/RN protocol is discussed in sufficient detail to understand the modeling complexities. Each model is described along with its features and problems. The models are compared, and concluding observations and remarks are presented.

  13. Comparison of across-frequency integration strategies in a binaural detection model.

    PubMed

    Breebaart, Jeroen

    2013-11-01

    Breebaart et al. [J. Acoust. Soc. Am. 110, 1089-1104 (2001)] reported that the masker bandwidth dependence of detection thresholds for an out-of-phase signal and an in-phase noise masker (N0Sπ) can be explained by principles of integration of information across critical bands. In this paper, different methods for such across-frequency integration process are evaluated as a function of the bandwidth and notch width of the masker. The results indicate that an "optimal detector" model assuming independent internal noise in each critical band provides a better fit to experimental data than a best filter or a simple across-frequency integrator model. Furthermore, the exponent used to model peripheral compression influences the accuracy of predictions in notched conditions.
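    The two integration rules being compared can be stated compactly in d-prime terms: assuming independent internal noise per critical band, the optimal detector combines band sensitivities as a root-sum-of-squares, whereas a best-filter listener uses only the single most sensitive band (the per-band d' values below are invented for illustration).

```python
import math

# Sketch of the two across-frequency integration rules in d' terms.
# With independent internal noise in each critical band, the optimal
# detector's combined sensitivity is the root-sum-of-squares of the
# band-wise d' values; the "best filter" model keeps only the best band.

def optimal_detector(dprimes):
    return math.sqrt(sum(d * d for d in dprimes))

def best_filter(dprimes):
    return max(dprimes)

bands = [1.2, 0.9, 0.5]      # hypothetical per-band d' for a wideband masker
print(round(optimal_detector(bands), 3), best_filter(bands))
```

    Whenever more than one band carries any information, the optimal detector predicts better performance than the best single filter, which is the qualitative difference the bandwidth-dependence data can discriminate.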

  14. Fast and robust control of nanopositioning systems: Performance limits enabled by field programmable analog arrays.

    PubMed

    Baranwal, Mayank; Gorugantu, Ram S; Salapaka, Srinivasa M

    2015-08-01

    This paper addresses control design and its implementation for robust high-bandwidth precision (nanoscale) positioning systems. Even though modern model-based control designs for robust broadband high-resolution positioning have enabled orders-of-magnitude improvement in performance over existing model-independent designs, their scope is severely limited by the inefficiencies of digital implementation of the control laws. High-order control laws that result from model-based designs typically have to be approximated with reduced-order systems to facilitate digital implementation. Digital systems, even those with very high sampling frequencies, provide low effective control bandwidth when implementing high-order systems. In this context, field programmable analog arrays (FPAAs) provide a good alternative to digital-logic based processors, since they enable very high implementation speeds at lower cost. The superior flexibility of digital systems in terms of the implementable mathematical and logical functions does not give them a significant edge over FPAAs when implementing linear dynamic control laws. In this paper, we pose the control design objectives for positioning systems in different configurations as optimal control problems and demonstrate significant improvements in performance when the resulting control laws are applied using FPAAs as opposed to their digital counterparts. An improvement of over 200% in positioning bandwidth is achieved over an earlier digital signal processor (DSP) based implementation for the same system and the same control design, even though the sampling frequency of the DSP-based system is about 100 times the desired positioning bandwidth.

  15. An improved network model for railway traffic

    NASA Astrophysics Data System (ADS)

    Li, Keping; Ma, Xin; Shao, Fubo

    In railway traffic, safety analysis is a key issue for controlling train operation, and the identification and ordering of key factors are very important. In this paper, a new network model is constructed for analyzing railway safety, in which nodes represent causation factors and links represent possible relationships among those factors. Our aim is to rank these nodes by importance and to uncover the in-depth relationships among them, including how failures spread. Based on the constructed network model, we propose a control method that maintains the safe state by assigning each node a threshold. As a result, by protecting the hub nodes of the constructed network, the spread of railway accidents can be controlled well. The efficiency of the method is further tested with the help of a numerical example.

  16. Robustness of weighted networks

    NASA Astrophysics Data System (ADS)

    Bellingeri, Michele; Cassi, Davide

    2018-01-01

    Complex network response to node loss is a central question in different fields of network science, because node failure can cause the fragmentation of the network, thus compromising the system functioning. Previous studies considered binary networks, where the intensity (weight) of the links is not accounted for, i.e., a link is either present or absent. However, in real-world networks the weights of connections, and thus their importance for network functioning, can be widely different. Here, we analyzed the response of real-world and model networks to node loss, accounting for link intensity and the weighted structure of the network. We used both classic binary node properties and network functioning measures, introduced a weighted rank for node importance (node strength), and used a measure of network functioning that accounts for the weight of the links (weighted efficiency). We find that: (i) the efficiency of the attack strategies changed when using binary versus weighted network functioning measures, both for real-world and model networks; (ii) in some cases, removing nodes according to the weighted rank produced the highest damage when functioning was measured by the weighted efficiency; (iii) adopting a weighted measure of network damage changed the efficacy of the attack strategies with respect to the binary analyses. Our results show that if the weighted structure of complex networks is not taken into account, this may produce misleading models when forecasting the system response to node failure, i.e., considering only binary links may not unveil the real damage induced in the system. Last, once weighted measures are introduced, in order to discover the best attack strategy it is important to analyze the network response to node loss using node ranks that account for the intensity of the links attached to each node.
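    A minimal standard-library sketch of a strength-based attack evaluated with weighted efficiency (the graph and weights are invented; following the common convention, the length of a link is taken as the inverse of its weight, so stronger links are "shorter"):

```python
import heapq

# Node strength = sum of incident link weights; weighted efficiency =
# average of 1/d(i, j) over ordered node pairs, with Dijkstra distances
# computed on link lengths 1/weight.

def dijkstra(graph, src):
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + 1.0 / w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def weighted_efficiency(graph):
    nodes = list(graph)
    if len(nodes) < 2:
        return 0.0
    total = 0.0
    for u in nodes:
        dist = dijkstra(graph, u)
        total += sum(1.0 / dist[v] for v in nodes if v != u and v in dist)
    return total / (len(nodes) * (len(nodes) - 1))

def remove_node(graph, n):
    return {u: {v: w for v, w in nbrs.items() if v != n}
            for u, nbrs in graph.items() if u != n}

graph = {                                  # toy undirected weighted graph
    "a": {"b": 5.0, "c": 1.0},
    "b": {"a": 5.0, "c": 1.0, "d": 4.0},
    "c": {"a": 1.0, "b": 1.0, "d": 1.0},
    "d": {"b": 4.0, "c": 1.0},
}
strength = {u: sum(nbrs.values()) for u, nbrs in graph.items()}
target = max(strength, key=strength.get)   # attack the strongest node first
print(target, round(weighted_efficiency(graph), 3),
      round(weighted_efficiency(remove_node(graph, target)), 3))
```

    Removing the highest-strength node collapses the efficiency far more than its binary degree alone would suggest, which is the kind of gap between binary and weighted damage measures the abstract describes.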

  17. Potential for bias and low precision in molecular divergence time estimation of the Canopy of Life: an example from aquatic bird families

    PubMed Central

    van Tuinen, Marcel; Torres, Christopher R.

    2015-01-01

    Uncertainty in divergence time estimation is frequently studied from many angles but rarely from the perspective of phylogenetic node age. If appropriate molecular models and fossil priors are used, a multi-locus, partitioned analysis is expected to equally minimize error in accuracy and precision across all nodes of a given phylogeny. In contrast, if available models fail to completely account for rate heterogeneity, substitution saturation and incompleteness of the fossil record, uncertainty in divergence time estimation may increase with node age. While many studies have stressed this concern with regard to deep nodes in the Tree of Life, the inference that molecular divergence time estimation of shallow nodes is less sensitive to erroneous model choice has not been tested explicitly in a Bayesian framework. Given divergence time estimation methods that permit fossil priors at any phylogenetic node and the increasingly efficient, inexpensive collection of species-level genomic data, insight is needed into the performance of divergence time estimation at shallow (<10 MY) nodes. Here, we performed multiple sensitivity analyses on a multi-locus data set of aquatic birds with six fossil constraints. Comparison across divergence time analyses that varied taxon and locus sampling, the number and position of fossil constraints, and the shape of the prior distribution yielded several insights. Deviation from node ages obtained from a reference analysis was generally highest for the shallowest nodes but was determined more by the temporal placement than by the number of fossil constraints. Calibration with only the shallowest nodes significantly underestimated the aquatic bird fossil record, indicating the presence of saturation. Although joint calibration with all six priors yielded ages most consistent with the fossil record, the ages of shallow nodes were overestimated. This bias was found in both mtDNA and nDNA regions.
Thus, divergence time estimation of shallow nodes may suffer from bias and low precision, even when appropriate fossil priors and best available substitution models are chosen. Much care must be taken to address the possible ramifications of substitution saturation across the entire Tree of Life. PMID:26106406

  18. An Illustrative Guide to the Minerva Framework

    NASA Astrophysics Data System (ADS)

    Flom, Erik; Leonard, Patrick; Hoeffel, Udo; Kwak, Sehyun; Pavone, Andrea; Svensson, Jakob; Krychowiak, Maciej; Wendelstein 7-X Team Collaboration

    2017-10-01

    Modern physics experiments require tracking and modelling data and their associated uncertainties on a large scale, as well as the combined use of multiple independent data streams for sophisticated modelling and analysis. The Minerva Framework offers a centralized, user-friendly method for large-scale physics modelling and scientific inference. Currently used by teams at multiple large-scale fusion experiments, including the Joint European Torus (JET) and Wendelstein 7-X (W7-X), the Minerva framework provides a forward-model-friendly architecture for developing and implementing models of large-scale experiments. One aspect of the framework involves so-called data sources, which are nodes in the graphical model. These nodes are supplied with engineering and physics parameters. When end-user level code calls a node, it is checked network-wide against its dependent nodes for changes since its last evaluation, and returns version-specific data. Here, a filterscope data node is used as an illustrative example of the Minerva Framework's data management structure and its further application to Bayesian modelling of complex systems. This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under Grant Agreement No. 633053.

  19. Design, fabrication, test and delivery of a K-band antenna breadboard model

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The results of a research effort to develop a Ku-Band single channel monopulse antenna with significant improvements in efficiency and bandwidth are reported. A single aperture, multimode horn, utilized in a near field Cassegrainian configuration, was the technique selected for achieving the desired efficiency and bandwidth performance. In order to provide wide polarization flexibility, a wire grid, space filter polarizer was developed. A solid state switching network with appropriate driving electronics provides the receive channel sum and difference signal interface with an existing Apollo type tracking electronics subsystem. A full scale breadboard model of the antenna was fabricated and tested. Performance of the model was well within the requirements and goals of the contract.

  20. Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rossi, R; Gallagher, B; Neville, J

    Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable, computationally efficient, and natively support attributes. We applied our model for (a) identifying patterns and trends of nodes and network states based on temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.

  1. Photonic bandpass filter characteristics of multimode SOI waveguides integrated with submicron gratings.

    PubMed

    Sah, Parimal; Das, Bijoy Krishna

    2018-03-20

    It has been shown that a fundamental mode adiabatically launched into a multimode SOI waveguide with a submicron grating offers well-defined flat-top bandpass filter characteristics in transmission. The transmitted spectral bandwidth is controlled by adjusting both waveguide and grating design parameters. The bandwidth is further narrowed by cascading two gratings with detuned parameters. A semi-analytical model is used to analyze the filter characteristics (1500 nm≤λ≤1650 nm) of the device operating in transverse-electric polarization. The proposed devices were fabricated with an optimized set of design parameters in an SOI substrate with a device layer thickness of 250 nm. The pass bandwidth of waveguide devices integrated with single-stage gratings is measured to be ∼24 nm, whereas the device with two cascaded gratings with slightly detuned periods (ΔΛ=2 nm) exhibits a pass bandwidth down to ∼10 nm.

  2. Design optimization for 25 Gbit/s DML InGaAlAs/InGaAsP/InP SL-MQW laser diode incorporating temperature effect

    NASA Astrophysics Data System (ADS)

    Ke, Cheng; Li, Xun; Xi, Yanping; Yu, Yang

    2017-11-01

    In this paper, a detailed carrier dynamics model for quantum well lasers is used to study the modulation bandwidth of the directly modulated strained-layer multiple quantum well (SL-MQW) laser. The active region of the directly modulated laser (DML) is optimized in terms of the number of QWs and the barrier height. To balance the device's dynamic performance across different operating temperatures, we present an overall optimized design for a 25 Gbps DML over an ambient temperature range of 25 to 85°C. To further enhance the modulation bandwidth, we have also proposed a mixed-QW design that increases the 3 dB bandwidth by almost 44% compared to the unoptimized design. The experimental results show that the 3 dB bandwidth of the optimized DML can reach 19 GHz. A clear eye diagram at a bit rate of 25 Gbps was observed at 25°C.

  3. Preoperative ultrasound staging of the axilla makes peroperative examination of the sentinel node redundant in breast cancer: saving tissue, time and money.

    PubMed

    Van Berckelaer, Christophe; Huizing, Manon; Van Goethem, Mireille; Vervaecke, Andrew; Papadimitriou, Konstantinos; Verslegers, Inge; Trinh, Bich X; Van Dam, Peter; Altintas, Sevilay; Van den Wyngaert, Tim; Huyghe, Ivan; Siozopoulou, Vasiliki; Tjalma, Wiebren A A

    2016-11-01

    To evaluate the role of preoperative axillary staging with ultrasound (US) and fine needle aspiration cytology (FNAC): can preoperative staging avoid intraoperative sentinel lymph node (SLN) examination with an acceptable revision rate? This study is based on the retrospective data of 336 patients who underwent US evaluation of the axilla as part of their staging. An FNAC biopsy was performed when abnormal lymph nodes were visualized. Patients with normal-appearing nodes on US or a benign diagnostic biopsy had removal of the SLNs without intraoperative pathological examination. We calculated the sensitivity, specificity and accuracy of US/FNAC in predicting the necessity of an axillary lymphadenectomy. Subsequently, we looked at the total cost and the operating time of three models. Model A is our study protocol. Model B is a theoretical protocol based on the findings of the Z0011 trial, with only clinical preoperative staging; in Model C, preoperative staging and intraoperative pathological examination were both theoretically performed. The sensitivity, specificity and accuracy were, respectively, 0.75 (0.66-0.82), 1.00 (0.99-1.00) and 0.92 (0.88-0.94). Only 26 out of 317 (8.2%) patients who successfully underwent staging needed a revision. The total cost of Model A was 1.58% lower than that of Model C, with a decrease in operation time of 9.46%. The benefits compared with Model B were much smaller. Preoperative US/FNAC staging of the axillary lymph nodes can avoid intraoperative examination of the sentinel node with an acceptable revision rate. It saves tissue, reduces operating time and decreases healthcare costs in general. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. Automatic localization of IASLC-defined mediastinal lymph node stations on CT images using fuzzy models

    NASA Astrophysics Data System (ADS)

    Matsumoto, Monica M. S.; Beig, Niha G.; Udupa, Jayaram K.; Archer, Steven; Torigian, Drew A.

    2014-03-01

    Lung cancer is associated with the highest cancer mortality rates among men and women in the United States. The accurate and precise identification of the lymph node stations on computed tomography (CT) images is important for staging disease and potentially for prognosticating outcome in patients with lung cancer, as well as for pretreatment planning and response assessment purposes. To facilitate a standard means of referring to lymph nodes, the International Association for the Study of Lung Cancer (IASLC) has recently proposed a definition of the different lymph node stations and zones in the thorax. However, nodal station identification is typically performed manually by visual assessment in clinical radiology. This approach leaves room for error due to the subjective and potentially ambiguous nature of visual interpretation, and is labor intensive. We present a method of automatically recognizing the mediastinal IASLC-defined lymph node stations by modifying a hierarchical fuzzy modeling approach previously developed for body-wide automatic anatomy recognition (AAR) in medical imagery. Our AAR-lymph node (AAR-LN) system follows the AAR methodology and consists of two steps. In the first step, the various lymph node stations are manually delineated on a set of CT images following the IASLC definitions. These delineations are then used to build a fuzzy hierarchical model of the nodal stations which are considered as 3D objects. In the second step, the stations are automatically located on any given CT image of the thorax by using the hierarchical fuzzy model and object recognition algorithms. Based on 23 data sets used for model building, 22 independent data sets for testing, and 10 lymph node stations, a mean localization accuracy of within 1-6 voxels has been achieved by the AAR-LN system.

  5. GFSSP Training Course Lectures

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok K.

    2008-01-01

    GFSSP has been extended to model conjugate heat transfer. Fluid-solid network elements include: a) fluid nodes and flow branches; b) solid nodes and ambient nodes; c) conductors connecting fluid-solid, solid-solid and solid-ambient nodes. The heat conduction equations are solved simultaneously with the fluid conservation equations for mass, momentum, energy and the equation of state. The extended code was verified by comparison with the analytical solution of a simple conduction-convection problem. The code was applied to model: a) pressurization of a cryogenic tank; b) freezing and thawing of metal; c) chilldown of a cryogenic transfer line; d) boil-off from a cryogenic tank.

  6. Memory-induced mechanism for self-sustaining activity in networks

    NASA Astrophysics Data System (ADS)

    Allahverdyan, A. E.; Steeg, G. Ver; Galstyan, A.

    2015-12-01

    We study a mechanism of activity sustaining on networks inspired by a well-known model of neuronal dynamics. Our primary focus is the emergence of self-sustaining collective activity patterns, where no single node can stay active by itself, but the activity provided initially is sustained within the collective of interacting agents. In contrast to existing models of self-sustaining activity that are caused by (long) loops present in the network, here we focus on treelike structures and examine activation mechanisms that are due to temporal memory of the nodes. This approach is motivated by applications in social media, where long network loops are rare or absent. Our results suggest that under a weak behavioral noise, the nodes robustly split into several clusters, with partial synchronization of nodes within each cluster. We also study the randomly weighted version of the models where the nodes are allowed to change their connection strength (this can model attention redistribution) and show that it does facilitate the self-sustained activity.

  7. Method of and apparatus for modeling interactions

    DOEpatents

    Budge, Kent G.

    2004-01-13

    A method and apparatus for modeling interactions can accurately model tribological and other properties and accommodate topological disruptions. Two portions of a problem space are represented, a first with a Lagrangian mesh and a second with an ALE mesh. The ALE and Lagrangian meshes are constructed so that each node on the surface of the Lagrangian mesh is in a known correspondence with adjacent nodes in the ALE mesh. The interaction can be predicted for a time interval. Material flow within the ALE mesh can accurately model complex interactions such as bifurcation. After prediction, nodes in the ALE mesh in correspondence with nodes on the surface of the Lagrangian mesh can be mapped so that they are once again adjacent to their corresponding Lagrangian mesh nodes. The ALE mesh can then be smoothed to reduce mesh distortion that might reduce the accuracy or efficiency of subsequent prediction steps. The process, from prediction through mapping and smoothing, can be repeated until a terminal condition is reached.

  8. Improved knowledge diffusion model based on the collaboration hypernetwork

    NASA Astrophysics Data System (ADS)

    Wang, Jiang-Pan; Guo, Qiang; Yang, Guang-Yong; Liu, Jian-Guo

    2015-06-01

    The process of absorbing knowledge has become an essential element of innovation in firms and of adapting to changes in the competitive environment. In this paper, we present an improved knowledge diffusion hypernetwork (IKDH) model based on the idea that knowledge spreads from the target node to all its neighbors in terms of the hyperedge and knowledge stock. We apply the average knowledge stock V(t), the variance σ2(t), and the variance coefficient c(t) to evaluate the performance of knowledge diffusion. By analyzing different modes of knowledge diffusion, different ways of selecting the highly knowledgeable nodes, and different hypernetwork sizes and structures, we find that the diffusion speed of the IKDH model is 3.64 times faster than that of the traditional knowledge diffusion (TKDH) model. Besides, it is three times faster to diffuse knowledge by randomly selecting "expert" nodes than by selecting large-hyperdegree nodes as "expert" nodes. Furthermore, a more closely connected network structure or a smaller network size results in faster knowledge diffusion.
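    The three evaluation quantities are simple summary statistics of the nodes' knowledge stocks. A minimal sketch (the function name is invented, and the use of the population variance is an assumed choice):

```python
import statistics

def diffusion_metrics(stocks):
    """Mean knowledge stock V(t), variance sigma^2(t), and variance
    coefficient c(t) computed over the nodes' knowledge stocks."""
    v = statistics.fmean(stocks)
    var = statistics.pvariance(stocks, mu=v)
    c = (var ** 0.5) / v if v else float("inf")
    return v, var, c

v, var, c = diffusion_metrics([4.0, 6.0, 5.0, 5.0])
print(v, var, round(c, 4))
```

    A falling c(t) indicates that knowledge stocks are equalizing across the hypernetwork as diffusion proceeds.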

  9. Modelling the Energy Efficient Sensor Nodes for Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Dahiya, R.; Arora, A. K.; Singh, V. R.

    2015-09-01

    Energy is an important requirement of wireless sensor networks for better performance. A widely employed energy-saving technique is to place nodes in a sleep mode, which lowers power consumption but also reduces operational capability. In this paper, a Markov model of a sensor network is developed in which a node may enter a sleep mode. The model is used to investigate system performance in terms of energy consumption, network capacity and data delivery delay.
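    A minimal version of such a model is a two-state (active/sleep) Markov chain. The sketch below computes the stationary distribution and the mean power draw; the transition probabilities and power figures are invented for illustration, not taken from the paper.

```python
def stationary_power(p_sleep, p_wake, power_active, power_sleep):
    """Two-state Markov chain for one sensor node.
    p_sleep: P(active -> sleep) per step; p_wake: P(sleep -> active).
    Returns the stationary probabilities and the mean power consumption."""
    total = p_sleep + p_wake                 # assumed > 0
    pi_active = p_wake / total
    pi_sleep = p_sleep / total
    mean_power = pi_active * power_active + pi_sleep * power_sleep
    return pi_active, pi_sleep, mean_power

# A node that tends to sleep: 30% chance to doze off, 10% chance to wake.
pa, ps, mp = stationary_power(0.3, 0.1, power_active=20.0, power_sleep=1.0)
print(pa, ps, mp)
```

    Spending 75% of the time asleep cuts the mean power from 20 to 5.75 units, at the cost of reduced sensing capability while asleep.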

  10. Zealotry effects on opinion dynamics in the adaptive voter model

    NASA Astrophysics Data System (ADS)

    Klamser, Pascal P.; Wiedermann, Marc; Donges, Jonathan F.; Donner, Reik V.

    2017-11-01

    The adaptive voter model has been widely studied as a conceptual model for opinion formation processes on time-evolving social networks. Past studies on the effect of zealots, i.e., nodes aiming to spread their fixed opinion throughout the system, only considered the voter model on a static network. Here we extend the study of zealotry to the case of an adaptive network topology co-evolving with the state of the nodes and investigate opinion spreading induced by zealots depending on their initial density and connectedness. Numerical simulations reveal that below the fragmentation threshold a low density of zealots is sufficient to spread their opinion to the whole network. Beyond the transition point, zealots must exhibit an increased degree as compared to ordinary nodes for an efficient spreading of their opinion. We verify the numerical findings using a mean-field approximation of the model yielding a low-dimensional set of coupled ordinary differential equations. Our results imply that the spreading of the zealots' opinion in the adaptive voter model is strongly dependent on the link rewiring probability and the average degree of normal nodes in comparison with that of the zealots. In order to avoid a complete dominance of the zealots' opinion, there are two possible strategies for the remaining nodes: adjusting the probability of rewiring and/or the number of connections with other nodes.
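    The co-evolving dynamics can be sketched as follows: repeatedly pick a discordant link and either rewire it (with probability phi) or let a non-zealot endpoint adopt its neighbour's opinion. This is an illustrative reconstruction under assumed details (random initial graph, one-sided rewiring), not the exact update rule of the paper.

```python
import random

def adaptive_voter(n, zealots, phi, steps, seed=0):
    """Adaptive voter sketch with zealots: pick a random discordant edge;
    with probability phi rewire it to a like-minded node, otherwise a
    non-zealot endpoint adopts its neighbour's opinion."""
    rng = random.Random(seed)
    opinion = [1 if i in zealots else 0 for i in range(n)]
    edges = {(i, j) for i in range(n) for j in range(i + 1, n)
             if rng.random() < 0.3}
    for _ in range(steps):
        discordant = [e for e in edges if opinion[e[0]] != opinion[e[1]]]
        if not discordant:
            break                      # frozen: consensus or fragmentation
        i, j = rng.choice(discordant)
        if rng.random() < phi:         # rewire: i drops j, links a like-minded node
            same = [k for k in range(n)
                    if k != i and opinion[k] == opinion[i]
                    and tuple(sorted((i, k))) not in edges]
            if same:
                edges.remove((i, j))
                edges.add(tuple(sorted((i, rng.choice(same)))))
        elif i not in zealots:         # adopt: only non-zealots can flip
            opinion[i] = opinion[j]
        elif j not in zealots:
            opinion[j] = opinion[i]
    return opinion

final = adaptive_voter(n=30, zealots={0, 1, 2}, phi=0.1, steps=5000)
print(sum(final), "of 30 nodes ended up holding the zealots' opinion")
```

    Zealots (opinion 1) never change state; varying phi against the zealot density and degree reproduces the qualitative competition between rewiring and adoption described above.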

  11. Spatial network surrogates for disentangling complex system structure from spatial embedding of nodes

    NASA Astrophysics Data System (ADS)

    Wiedermann, Marc; Donges, Jonathan F.; Kurths, Jürgen; Donner, Reik V.

    2016-04-01

    Networks with nodes embedded in a metric space have gained increasing interest in recent years. The effects of spatial embedding on the networks' structural characteristics, however, are rarely taken into account when studying their macroscopic properties. Here, we propose a hierarchy of null models to generate random surrogates from a given spatially embedded network that can preserve certain global and local statistics associated with the nodes' embedding in a metric space. Comparing the original network's and the resulting surrogates' global characteristics allows one to quantify to what extent these characteristics are already predetermined by the spatial embedding of the nodes and links. We apply our framework to various real-world spatial networks and show that the proposed models capture macroscopic properties of the networks under study much better than standard random network models that do not account for the nodes' spatial embedding. Depending on the actual performance of the proposed null models, the networks are categorized into different classes. Since many real-world complex networks are in fact spatial networks, the proposed approach is relevant for disentangling the underlying complex system structure from spatial embedding of nodes in many fields, ranging from social systems over infrastructure and neurophysiology to climatology.

  12. Lymph node segmentation on CT images by a shape model guided deformable surface method

    NASA Astrophysics Data System (ADS)

    Maleike, Daniel; Fabel, Michael; Tetzlaff, Ralf; von Tengg-Kobligk, Hendrik; Heimann, Tobias; Meinzer, Hans-Peter; Wolf, Ivo

    2008-03-01

    With many tumor entities, quantitative assessment of lymph node growth over time is important to make therapy choices or to evaluate new therapies. The clinical standard is to document diameters on transversal slices, which is not the best measure for a volume. We present a new algorithm to segment (metastatic) lymph nodes and evaluate the algorithm with 29 lymph nodes in clinical CT images. The algorithm is based on a deformable surface search, which uses statistical shape models to restrict free deformation. To model lymph nodes, we construct an ellipsoid shape model, which strives for a surface with strong gradients and user-defined gray values. The algorithm is integrated into an application, which also allows interactive correction of the segmentation results. The evaluation shows that the algorithm gives good results in the majority of cases and is comparable to time-consuming manual segmentation. The median volume error was 10.1% of the reference volume before and 6.1% after manual correction. Integrated into an application, it is possible to perform lymph node volumetry for a whole patient within the 10- to 15-minute time limit imposed by clinical routine.

  13. From epidemics to information propagation: Striking differences in structurally similar adaptive network models

    NASA Astrophysics Data System (ADS)

    Trajanovski, Stojan; Guo, Dongchao; Van Mieghem, Piet

    2015-09-01

    The continuous-time adaptive susceptible-infected-susceptible (ASIS) epidemic model and the adaptive information diffusion (AID) model are two adaptive spreading processes on networks, in which a link in the network changes depending on the infectious state of its end nodes, but in opposite ways: (i) In the ASIS model a link is removed between two nodes if exactly one of the nodes is infected to suppress the epidemic, while a link is created in the AID model to speed up the information diffusion; (ii) a link is created between two susceptible nodes in the ASIS model to strengthen the healthy part of the network, while a link is broken in the AID model due to the lack of interest in informationless nodes. The ASIS and AID models may be considered as first-order models for cascades in real-world networks. While the ASIS model has been exploited in the literature, we show that the AID model is realistic by obtaining a good fit with Facebook data. Contrary to the common belief and intuition for such similar models, we show that the ASIS and AID models exhibit different but not opposite properties. Most remarkably, a unique metastable state always exists in the ASIS model, while there is an hourglass-shaped region of instability in the AID model. Moreover, the epidemic threshold is a linear function of the effective link-breaking rate in the AID model, while it is almost constant but noisy in the ASIS model.

  14. Connectivity disruption sparks explosive epidemic spreading.

    PubMed

    Böttcher, L; Woolley-Meza, O; Goles, E; Helbing, D; Herrmann, H J

    2016-04-01

    We investigate the spread of an infection or other malfunction of cascading nature when a system component can recover only if it remains reachable from a functioning central component. We consider the susceptible-infected-susceptible model, typical of mathematical epidemiology, on a network. Infection spreads from infected to healthy nodes, with the addition that infected nodes can only recover when they remain connected to a predefined central node, through a path that contains only healthy nodes. In this system, clusters of infected nodes will absorb their noninfected interior because no path exists between the central node and encapsulated nodes. This gives rise to the simultaneous infection of multiple nodes. Interestingly, the system converges to only one of two stationary states: either the whole population is healthy or it becomes completely infected. This simultaneous cluster infection can give rise to discontinuous jumps of different sizes in the number of failed nodes. Larger jumps emerge at lower infection rates. The network topology has an important effect on the nature of the transition: we observed hysteresis for networks with dominating local interactions. Our model shows how local spread can abruptly turn uncontrollable when it disrupts connectivity at a larger spatial scale.
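    The core bookkeeping step — healthy nodes with no all-healthy path to the central node are absorbed into the infected cluster simultaneously — can be sketched with a breadth-first search (an illustrative reconstruction, not the authors' code):

```python
from collections import deque

def absorb_unreachable(adj, infected, center):
    """Healthy nodes with no path to `center` through healthy nodes are
    absorbed into the infected set (simultaneous cluster infection)."""
    healthy = set(adj) - set(infected)
    reachable = set()
    if center in healthy:
        queue, reachable = deque([center]), {center}
        while queue:                       # BFS restricted to healthy nodes
            u = queue.popleft()
            for v in adj[u]:
                if v in healthy and v not in reachable:
                    reachable.add(v)
                    queue.append(v)
    return set(infected) | (healthy - reachable)

# A six-node ring with central node 0: infecting node 2 alone cuts nobody
# off, but infecting nodes 1 and 3 encapsulates node 2.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(absorb_unreachable(ring, {2}, center=0))
print(absorb_unreachable(ring, {1, 3}, center=0))
```

    Absorbing an encapsulated node together with its infected boundary in a single step is what produces the discontinuous jumps in the number of failed nodes described above.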

  15. 200-GHz and 50-GHz AWG channelized linewidth dependent transmission of weak-resonant-cavity FPLD injection-locked by spectrally sliced ASE.

    PubMed

    Lin, Gong-Ru; Cheng, Tzu-Kang; Chi, Yu-Chieh; Lin, Gong-Cheng; Wang, Hai-Lin; Lin, Yi-Hong

    2009-09-28

    In a weak-resonant-cavity Fabry-Perot laser diode (WRC-FPLD) based DWDM-PON system with an array-waveguide-grating (AWG) channelized amplified spontaneous emission (ASE) source located at the remote node, we study the effect of AWG filter bandwidth on the transmission performance of the 1.25-Gbit/s directly modulated WRC-FPLD transmitter under AWG channelized ASE injection-locking. With AWG filters of two different channel spacings at 50 and 200 GHz, several characteristic parameters, such as interfered reflection, relative intensity noise, crosstalk reduction, side-mode suppression ratio and BER power penalty of the data transmitted by the WRC-FPLD, are compared. The 200-GHz AWG filtered ASE injection minimizes the noise of the WRC-FPLD based ONU transmitter, improving the power penalty of upstream data by -1.6 dB at a BER of 10^-12. In contrast, the 50-GHz AWG channelized ASE injection fails to improve the BER and instead increases the power penalty by +1.5 dB under back-to-back transmission. Theoretical modeling elucidates that the BER degradation of up to 4 orders of magnitude between the two injection cases is mainly attributed to the reduction in ASE injection linewidth, which concurrently degrades the signal-to-noise and extinction ratios of the transmitted data stream.

  16. Implementing Molecular Dynamics on Hybrid High Performance Computers - Particle-Particle Particle-Mesh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, W Michael; Kohlmeyer, Axel; Plimpton, Steven J

    The use of accelerators such as graphics processing units (GPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high-performance computers, machines with nodes containing more than one type of floating-point processor (e.g. CPU and GPU), are now becoming more prevalent due to these advantages. In this paper, we present a continuation of previous work implementing algorithms for using accelerators into the LAMMPS molecular dynamics software for distributed memory parallel hybrid machines. In our previous work, we focused on acceleration for short-range models with an approach intended to harness the processing power of both the accelerator and (multi-core) CPUs. To augment the existing implementations, we present an efficient implementation of long-range electrostatic force calculation for molecular dynamics. Specifically, we present an implementation of the particle-particle particle-mesh method based on the work by Harvey and De Fabritiis. We present benchmark results on the Keeneland InfiniBand GPU cluster. We provide a performance comparison of the same kernels compiled with both CUDA and OpenCL. We discuss limitations to parallel efficiency and future directions for improving performance on hybrid or heterogeneous computers.

  17. Error Analysis of Magnetohydrodynamic Angular Rate Sensor Combining with Coriolis Effect at Low Frequency.

    PubMed

    Ji, Yue; Xu, Mengjie; Li, Xingfei; Wu, Tengfei; Tuo, Weixiao; Wu, Jun; Dong, Jiuzhi

    2018-06-13

    The magnetohydrodynamic (MHD) angular rate sensor (ARS) with a low noise level over an ultra-wide bandwidth is developed for lasing and imaging applications, especially the line-of-sight (LOS) system. A modified MHD ARS combined with the Coriolis effect was studied in this paper to expand the sensor’s bandwidth at low frequency (<1 Hz), which is essential for precision LOS pointing and wide-bandwidth LOS jitter suppression. The model and the simulation method were constructed, and a comprehensive solving method based on the magnetic and electric interaction methods was proposed. The numerical results on the Coriolis effect and the frequency response of the modified MHD ARS were detailed. In addition, with the experimental results of the designed sensor being consistent with the simulation results, an analysis of the model errors is discussed. Our study provides an error analysis method for an MHD ARS combined with the Coriolis effect and offers a framework for future studies to minimize the error.

  18. Mining Top K Spread Sources for a Specific Topic and a Given Node.

    PubMed

    Liu, Weiwei; Deng, Zhi-Hong; Cao, Longbing; Xu, Xiaoran; Liu, He; Gong, Xiuwen

    2015-11-01

    In social networks, nodes (or users) interested in specific topics are often influenced by others. The influence is usually associated with a set of nodes rather than a single one. An interesting but challenging task for any given topic and node is to find the set of nodes that represents the source or trigger for the topic and thus identify those nodes that have the greatest influence on the given node as the topic spreads. We find that this is an NP-hard problem. This paper proposes an effective framework to deal with this problem. First, the topic propagation is represented as a Bayesian network. We then construct the propagation model by a variant of the voter model. The probability transition matrix (PTM) algorithm is presented to conduct the probability inference with complexity O(θ^3 log2 θ), where θ is the number of nodes in the given graph. To evaluate the PTM algorithm, we conduct extensive experiments on real datasets. The experimental results show that the PTM algorithm is both effective and efficient.
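    The quoted complexity O(θ^3 log2 θ) is characteristic of computing matrix powers by repeated squaring, where each dense multiplication costs O(θ^3). The sketch below shows that inference pattern on a generic transition matrix; it is an assumed reading of the complexity, not the published PTM code.

```python
def mat_mult(a, b):
    """Dense matrix product, O(n^3)."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transition_power(p, t):
    """t-step transition probabilities via repeated squaring:
    O(n^3 log2 t) overall."""
    n = len(p)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    while t:
        if t & 1:
            result = mat_mult(result, p)
        p = mat_mult(p, p)
        t >>= 1
    return result

P = [[0.9, 0.1], [0.5, 0.5]]
P4 = transition_power(P, 4)
print(P4[0])
```

    Each squaring doubles the number of propagation steps covered, which is what yields the logarithmic factor in the cost.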

  19. Tool wear modeling using abductive networks

    NASA Astrophysics Data System (ADS)

    Masory, Oren

    1992-09-01

    A tool wear model based on Abductive Networks, which consists of a network of `polynomial' nodes, is described. The model relates the cutting parameters, components of the cutting force, and machining time to flank wear. Thus real-time measurements of the cutting force can be used to monitor the machining process. The model is obtained by a training process in which the connectivity between the network's nodes and the polynomial coefficients of each node are determined by optimizing a performance criterion. Actual wear measurements of coated and uncoated carbide inserts were used for training and evaluating the established model.

  20. Broadband locally resonant metamaterials with graded hierarchical architecture

    NASA Astrophysics Data System (ADS)

    Liu, Chenchen; Reina, Celia

    2018-03-01

    We investigate the effect of hierarchical designs on the bandgap structure of periodic lattice systems with inner resonators. A detailed parameter study reveals various interesting features of structures with two levels of hierarchy as compared with one level systems with identical static mass. In particular: (i) their overall bandwidth is approximately equal, yet bounded above by the bandwidth of the single-resonator system; (ii) the number of bandgaps increases with the level of hierarchy; and (iii) the spectrum of bandgap frequencies is also enlarged. Taking advantage of these features, we propose graded hierarchical structures with ultra-broadband properties. These designs are validated over analogous continuum models via finite element simulations, demonstrating their capability to overcome the bandwidth narrowness that is typical of resonant metamaterials.

  1. On the mixing time of geographical threshold graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bradonjic, Milan

    In this paper, we study the mixing time of random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). We specifically study the mixing times of random walks on 2-dimensional GTGs near the connectivity threshold. We provide a set of criteria on the distribution of vertex weights that guarantees that the mixing time is Θ(n log n).
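    A GTG is easy to sample directly. The sketch below uses one common form of the threshold rule, (w_u + w_v) / d(u, v)^α ≥ θ, with exponentially distributed weights; the paper's exact threshold function and weight distribution may differ.

```python
import math, random

def geographical_threshold_graph(n, theta, alpha=2.0, seed=0):
    """Sample a GTG in the unit square: connect u and v when
    (w_u + w_v) / d(u, v)^alpha >= theta (one common threshold form)."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    weight = [rng.expovariate(1.0) for _ in range(n)]
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(pos[i], pos[j])
            if d > 0 and (weight[i] + weight[j]) / d ** alpha >= theta:
                edges.append((i, j))
    return pos, weight, edges

pos, w, edges = geographical_threshold_graph(50, theta=20.0)
print(len(edges), "edges among 50 nodes")
```

    Raising θ sparsifies the graph toward the connectivity threshold near which the mixing-time result applies.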

  2. Modeling T1 and T2 relaxation in bovine white matter

    NASA Astrophysics Data System (ADS)

    Barta, R.; Kalantari, S.; Laule, C.; Vavasour, I. M.; MacKay, A. L.; Michal, C. A.

    2015-10-01

    The fundamental basis of T1 and T2 contrast in brain MRI is not well understood; recent literature contains conflicting views on the nature of relaxation in white matter (WM). We investigated the effects of inversion pulse bandwidth on measurements of T1 and T2 in WM. Hybrid inversion-recovery/Carr-Purcell-Meiboom-Gill experiments with broad or narrow bandwidth inversion pulses were applied to bovine WM in vitro. Data were analysed with the commonly used 1D-non-negative least squares (NNLS) algorithm, a 2D-NNLS algorithm, and a four-pool model which was based upon microscopically distinguishable WM compartments (myelin non-aqueous protons, myelin water, non-myelin non-aqueous protons and intra/extracellular water) and incorporated magnetization exchange between adjacent compartments. 1D-NNLS showed that different T2 components had different T1 behaviours and yielded dissimilar results for the two inversion conditions. 2D-NNLS revealed significantly more complicated T1/T2 distributions for narrow bandwidth than for broad bandwidth inversion pulses. The four-pool model fits allow physical interpretation of the parameters, fit better than the NNLS techniques, and fit the results from both inversion conditions using the same parameters. The results demonstrate that exchange cannot be neglected when analysing experimental inversion recovery data from WM, in part because it can introduce exponential components having negative amplitude coefficients that cannot be correctly modeled with nonnegative fitting techniques. While assignment of an individual T1 to one particular pool is not possible, the results suggest that under carefully controlled experimental conditions the amplitude of an apparent short T1 component might be used to quantify myelin water.

  3. Complex Network Simulation of Forest Network Spatial Pattern in Pearl River Delta

    NASA Astrophysics Data System (ADS)

    Zeng, Y.

    2017-09-01

    Forest network construction uses a method and model based on the scale-free features of complex network theory, which builds on random graph theory and on dynamic network nodes that show a power-law distribution. The model is suitable for the consistent recovery of larger ecological landscapes, such as the Pearl River Delta, from ecological disturbance. The latest forest patches are available as remote sensing and GIS spatial data. A standard scale-free network node-distribution model calculates the power-law distribution parameter for the areas of the forest network; the existing forest polygons, defined as nodes, are used to compute the decay index of the network's degree distribution. The parameters of the forest network are then extracted and spatially transferred to real-world GIS models, and connections between nearby nodes are generated automatically by minimizing ecological corridors under a least-cost rule. Based on the scale-free node-distribution requirements, a comparatively small number of large aggregation points are selected as the main nodes of the future forest planning network and compared with the existing node sequence. With this approach, forest ecological projects can avoid the fragmented and scattered disorder of the past, and the planting costs required by previous regular forest networks can be reduced. For the ecological restoration of tropical and subtropical areas in south China, the method provides effective guidance and demonstration for forest-entering-city projects, together with other ecological networks (water, climate, etc.), towards a networking standard and base datum.

  4. Learnable Models for Information Diffusion and its Associated User Behavior in Micro-blogosphere

    DTIC Science & Technology

    2012-08-30

    According to the work of Even-Dar and Shapira (2007), we recall the definition of the basic voter model on network G. In the model, each node of G...reason as follows. We started with the K distinct initial nodes and all the other nodes were neutral in the beginning. Recall that we set the average time... memory, running under Linux. Learning to predict opinion share and detect anti-majority opinionists in social networks 29 7 Conclusion Unlike the popular

  5. A Distributed and Energy-Efficient Algorithm for Event K-Coverage in Underwater Sensor Networks.

    PubMed

    Jiang, Peng; Xu, Yiming; Liu, Jun

    2017-01-19

    For event dynamic K-coverage algorithms, each management node selects its assistant node by using a greedy algorithm, without considering the residual energy or situations in which a node is selected by several events. This approach affects network energy consumption and balance. Therefore, this study proposes a distributed and energy-efficient event K-coverage algorithm (DEEKA). First, after the network achieves 1-coverage, the nodes that detect the same event compete for the role of event management node based on the number of candidate nodes, the average residual energy, and the distance to the event. Second, each management node estimates the probability of its neighbor nodes being selected by the event it manages, using the distance level, the residual energy level, and the number of dynamic-coverage events of these nodes. Third, each management node establishes an optimization model that takes as its objectives the expected energy consumption and residual-energy variance of its neighbor nodes and the detection performance of the events it manages. Finally, each management node uses a constrained non-dominated sorting genetic algorithm (NSGA-II) to obtain the Pareto set of the model and the best strategy via the technique for order preference by similarity to an ideal solution (TOPSIS). The algorithm first considers the effect of harsh underwater environments on information collection and transmission. It also considers the residual energy of a node and situations in which the node is selected by several other events. Simulation results show that, unlike the on-demand variable sensing K-coverage algorithm, DEEKA balances and reduces network energy consumption, thereby prolonging the network's best service quality and lifetime.
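    The final selection step — picking one strategy from the NSGA-II Pareto set — can be illustrated with a generic TOPSIS ranking (a standard textbook formulation; the weights, criteria, and data below are invented, not DEEKA's):

```python
import math

def topsis(matrix, weights, benefit):
    """Rank candidate strategies by TOPSIS. benefit[j] is True when
    criterion j should be maximised, False when it should be minimised."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)     # distance to the ideal point
        d_neg = math.dist(row, worst)     # distance to the worst point
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Three strategies scored on (residual energy: maximise, consumption: minimise).
scores = topsis([[8.0, 3.0], [6.0, 2.0], [9.0, 5.0]],
                weights=[0.5, 0.5], benefit=[True, False])
print(scores.index(max(scores)))
```

    The strategy closest to the ideal point and farthest from the worst point wins; here the second strategy (low consumption at moderate energy) is preferred.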

  6. Matching-centrality decomposition and the forecasting of new links in networks.

    PubMed

    Rohr, Rudolf P; Naisbit, Russell E; Mazza, Christian; Bersier, Louis-Félix

    2016-02-10

    Networks play a prominent role in the study of complex systems of interacting entities in biology, sociology, and economics. Despite this diversity, we demonstrate here that a statistical model decomposing networks into matching and centrality components provides a comprehensive and unifying quantification of their architecture. The matching term quantifies the assortative structure in which node makes links with which other node, whereas the centrality term quantifies the number of links that nodes make. We show, for a diverse set of networks, that this decomposition can provide a tight fit to observed networks. Then we provide three applications. First, we show that the model allows very accurate prediction of missing links in partially known networks. Second, when node characteristics are known, we show how the matching-centrality decomposition can be related to this external information. Consequently, it offers us a simple and versatile tool to explore how node characteristics explain network architecture. Finally, we demonstrate the efficiency and flexibility of the model to forecast the links that a novel node would create if it were to join an existing network. © 2016 The Author(s).

  8. Systemic risk in a unifying framework for cascading processes on networks

    NASA Astrophysics Data System (ADS)

    Lorenz, J.; Battiston, S.; Schweitzer, F.

    2009-10-01

    We introduce a general framework for models of cascade and contagion processes on networks, to identify their commonalities and differences. In particular, models of social and financial cascades, as well as the fiber bundle model, the voter model, and models of epidemic spreading are recovered as special cases. To unify their description, we define the net fragility of a node, which is the difference between its fragility and the threshold that determines its failure. Nodes fail if their net fragility grows above zero and their failure increases the fragility of neighbouring nodes, thus possibly triggering a cascade. In this framework, we identify three classes depending on the way the fragility of a node is increased by the failure of a neighbour. At the microscopic level, we illustrate with specific examples how the failure spreading pattern varies with the node triggering the cascade, depending on its position in the network and its degree. At the macroscopic level, systemic risk is measured as the final fraction of failed nodes, X*, and for each of the three classes we derive a recursive equation to compute its value. The phase diagram of X* as a function of the initial conditions thus allows for a prediction of the systemic risk as well as a comparison of the three different model classes. We identify which model class leads to a first-order phase transition in systemic risk, i.e. situations where small changes in the initial conditions determine a global failure. Finally, we generalize our framework to encompass stochastic contagion models. This indicates the potential for further generalizations.
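    The net-fragility rule can be sketched as a simple fixed-point iteration: a node fails once its fragility exceeds its threshold, and each failure raises the fragility of its neighbours by a fixed amount (an assumed, simplest instance of the coupling classes described above):

```python
def cascade_fraction(adj, fragility, threshold, impact=1.0):
    """Iterate to a fixed point: a node fails when fragility - threshold > 0,
    and each failure adds `impact` to every neighbour's fragility.
    Returns X*, the final fraction of failed nodes."""
    n = len(adj)
    phi = list(fragility)
    failed = set()
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i not in failed and phi[i] - threshold[i] > 0:
                failed.add(i)
                changed = True
                for j in adj[i]:
                    phi[j] += impact
    return len(failed) / n

# Chain 0-1-2-3: only node 0 starts over its threshold.
chain = [[1], [0, 2], [1, 3], [2]]
x_global = cascade_fraction(chain, [1.5, 0.5, 0.5, 0.5], [1.0] * 4)
x_local = cascade_fraction(chain, [1.5, 0.5, 0.5, 0.5], [1.0] * 4, impact=0.2)
print(x_global, x_local)  # 1.0 0.25
```

    With a large enough impact the single over-threshold node triggers a global cascade (X* = 1); with a small impact the failure stays local (X* = 0.25), illustrating how X* can change abruptly with the coupling.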

  9. Clustering model for transmission of the SARS virus: application to epidemic control and risk assessment

    NASA Astrophysics Data System (ADS)

    Small, Michael; Tse, C. K.

    2005-06-01

    We propose a new four state model for disease transmission and illustrate the model with data from the 2003 SARS epidemic in Hong Kong. The critical feature of this model is that the community is modelled as a small-world network of interconnected nodes. Each node is linked to a fixed number of immediate neighbors and a random number of geographically remote nodes. Transmission can only propagate between linked nodes. This model exhibits two features typical of SARS transmission: geographically localized outbreaks and “super-spreaders”. Neither of these features is evident in standard susceptible-infected-removed models of disease transmission. Our analysis indicates that “super-spreaders” may occur even if the infectiousness of all infected individuals is constant. Moreover, we find that nosocomial transmission in Hong Kong directly contributed to the severity of the outbreak and that by limiting individual exposure time to 3-5 days the extent of the SARS epidemic would have been minimal.
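    The community network described above — each node linked to a fixed number of immediate neighbours plus a random number of geographically remote nodes — can be sketched as a ring lattice with random shortcuts (the per-pair shortcut probability and all parameter values are assumptions for illustration):

```python
import random

def small_world(n, k, p_remote, seed=0):
    """Ring of n nodes, each linked to its k nearest neighbours, plus a
    random number of remote shortcuts (Binomial via per-pair coin flips)."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for step in range(1, k // 2 + 1):       # local ring links
            edges.add(tuple(sorted((i, (i + step) % n))))
        for j in range(n):                       # sparse remote links
            if j != i and rng.random() < p_remote:
                edges.add(tuple(sorted((i, j))))
    return edges

edges = small_world(100, k=4, p_remote=0.01)
print(len(edges), "links for 100 nodes")
```

    The local links produce geographically localized outbreaks, while the remote links supply the long-range seeding that makes "super-spreader"-like events possible even with constant infectiousness.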

  10. A novel game theoretic approach for modeling competitive information diffusion in social networks with heterogeneous nodes

    NASA Astrophysics Data System (ADS)

    Agha Mohammad Ali Kermani, Mehrdad; Fatemi Ardestani, Seyed Farshad; Aliahmadi, Alireza; Barzinpour, Farnaz

    2017-01-01

    Influence maximization deals with identification of the most influential nodes in a social network given an influence model. In this paper, a game theoretic framework is developed that models a competitive influence maximization problem. A novel competitive influence model is additionally proposed that incorporates user heterogeneity, message content, and network structure. The proposed game-theoretic model is solved using Nash Equilibrium in a real-world dataset. It is shown that none of the well-known strategies are stable and at least one player has the incentive to deviate from the proposed strategy. Moreover, violation of Nash equilibrium strategy by each player leads to their reduced payoff. Contrary to previous works, our results demonstrate that graph topology, as well as the nodes' sociability and initial tendency measures have an effect on the determination of the influential node in the network.

  11. Empirical Research of Micro-blog Information Transmission Range by Guard nodes

    NASA Astrophysics Data System (ADS)

    Chen, Shan; Ji, Ling; Li, Guang

    2018-03-01

    The prediction and evaluation of information transmission in online social networks is a challenge. Solving it matters for monitoring public opinion and for advertisement communication. First, the prediction process is described in set language. Then, with the Sina Microblog system as the case object, the relationship between node influence and coverage rate is analyzed using the topology structure of information nodes. A nonlinear model is built by statistical methods in a specific, bounded, and controlled Microblog network; it predicts message coverage rate from guard nodes. The experimental results show that the prediction model is more accurate for source nodes with lower influence in the social network, and that it has practical applicability.

  12. Removal of eye blink artifacts in wireless EEG sensor networks using reduced-bandwidth canonical correlation analysis.

    PubMed

    Somers, Ben; Bertrand, Alexander

    2016-12-01

    Chronic, 24/7 EEG monitoring requires the use of highly miniaturized EEG modules, which only measure a few EEG channels over a small area. For improved spatial coverage, a wireless EEG sensor network (WESN) can be deployed, consisting of multiple EEG modules, which interact through short-distance wireless communication. In this paper, we aim to remove eye blink artifacts in each EEG channel of a WESN by optimally exploiting the correlation between EEG signals from different modules, under stringent communication bandwidth constraints. We apply a distributed canonical correlation analysis (CCA)-based algorithm, in which each module only transmits an optimal linear combination of its local EEG channels to the other modules. The method is validated on both synthetic and real EEG data sets, with emulated wireless transmissions. While strongly reducing the amount of data that is shared between nodes, we demonstrate that the algorithm achieves the same eye blink artifact removal performance as the equivalent centralized CCA algorithm, which is at least as good as other state-of-the-art multi-channel algorithms that require a transmission of all channels. Due to their potential for extreme miniaturization, WESNs are viewed as an enabling technology for chronic EEG monitoring. However, multi-channel analysis is hampered in WESNs due to the high energy cost for wireless communication. This paper shows that multi-channel eye blink artifact removal is possible with a significantly reduced wireless communication between EEG modules.
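
    The core operation — finding maximally correlated linear combinations of two channel sets so a shared (artifact) component can be isolated — is canonical correlation analysis. The sketch below is the centralized CCA baseline, not the authors' distributed algorithm, and the synthetic blink waveform and channel mixing are assumptions:

```python
import numpy as np

def cca_leading(X, Y, reg=1e-6):
    """Leading canonical correlation between recordings X, Y of shape
    (samples, channels); returns the correlation and projection vectors."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])   # regularized covariances
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    Lx = np.linalg.cholesky(Sxx)
    Ly = np.linalg.cholesky(Syy)
    K = np.linalg.solve(Lx, Sxy) @ np.linalg.inv(Ly).T   # whitened cross-covariance
    U, s, Vt = np.linalg.svd(K)
    wx = np.linalg.solve(Lx.T, U[:, 0])   # projection for X's channels
    wy = np.linalg.solve(Ly.T, Vt[0])     # projection for Y's channels
    return s[0], wx, wy
```

    Projecting each module's channels onto `wx` and `wy` recovers the component shared between modules (e.g. an eye blink). In the paper's distributed setting, each module would transmit only such a linear combination rather than all of its channels.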

  13. SpaceFibre: The Standard and the Multi-Lane Layer

    NASA Astrophysics Data System (ADS)

    Parkes, Steve; McClements, Chris; McLaren, David; Florit, Albert Ferrer; Gonzalez Villafranca, Alberto

    2016-08-01

    SpaceFibre is a new standard for spacecraft on-board data-handling networks, initially designed to deliver multi-Gbit/s data rates for synthetic aperture radar and high-resolution, multi-spectral imaging instruments. The addition of quality of service (QoS) and fault detection, isolation and recovery (FDIR) capabilities to SpaceFibre has resulted in a unified network technology. SpaceFibre provides high bandwidth, low latency, and fault isolation and recovery suitable for space applications, together with a novel QoS scheme that combines priority, bandwidth reservation and scheduling, and provides babbling node protection. SpaceFibre is backwards compatible with the widely used SpaceWire standard at the network level, allowing simple interconnection of existing SpaceWire equipment to a SpaceFibre link or network. Developed by STAR-Dundee and the University of Dundee for the European Space Agency (ESA), SpaceFibre is able to operate over fibre-optic and electrical cable. A single lane of SpaceFibre comprises four signals (TX+/- and RX+/-) and supports data rates of 2 Gbit/s (2.5 Gbit/s data signalling rate), with data rates up to 5 Gbit/s already planned. Several lanes can operate together to provide a multi-lane link; multi-laning increases the data rate to well over 20 Gbit/s. This paper details the current state of SpaceFibre, which is now in the process of formal standardisation by the European Cooperation for Space Standardization (ECSS). The multi-lane layer of SpaceFibre is then described.

  14. An All-Optical Access Metro Interface for Hybrid WDM/TDM PON Based on OBS

    NASA Astrophysics Data System (ADS)

    Segarra, Josep; Sales, Vicent; Prat, Josep

    2007-04-01

    A new all-optical access metro network interface based on optical burst switching (OBS) is proposed. A hybrid wavelength-division multiplexing/time-division multiplexing (WDM/TDM) access architecture with reflective optical network units (ONUs), an arrayed-waveguide-grating outside plant, and a tunable laser stack at the optical line terminal (OLT) is presented as a solution for the passive optical network. By means of OBS and a dynamic bandwidth allocation (DBA) protocol, which polls the ONUs, the available access bandwidth is managed. All the network intelligence and costly equipment are located at the OLT, where the DBA module is centrally implemented, providing quality of service (QoS). To scale this access network, an optical cross connect (OXC) is then used to serve a large number of ONUs from the same OLT. The hybrid WDM/TDM structure is also extended toward the metropolitan area network (MAN) by introducing the concept of the OBS multiplexer (OBS-M). The network element OBS-M bridges the MAN and access networks by offering all-optical cross connection, wavelength conversion, and data signaling. The proposed innovative OBS-M node yields a full optical data network, interfacing access and metro with geographically distributed access control. The resulting novel access metro architectures are nonblocking and, with improved signaling, provide QoS, scalability, and very low latency. Finally, numerical analysis and simulations demonstrate the traffic performance of the proposed access scheme and all-optical access metro interface and architectures.

  15. High-resolution all-optical photoacoustic imaging system for remote interrogation of biological specimens

    NASA Astrophysics Data System (ADS)

    Sampathkumar, Ashwin

    2014-05-01

    Conventional photoacoustic imaging (PAI) employs light pulses to produce a photoacoustic (PA) effect and detects the resulting acoustic waves using an ultrasound transducer acoustically coupled to the target tissue. The resolution of conventional PAI is limited by the sensitivity and bandwidth of the ultrasound transducer. We have developed an all-optical, versatile PAI system for characterizing ex vivo and in vivo biological specimens. The system employs noncontact interferometric detection of the acoustic signals that overcomes limitations of conventional PAI. A 532-nm pump laser with a pulse duration of 5 ns excited the PA effect in tissue. The resulting acoustic waves produced surface displacements that were sensed using a 532-nm continuous-wave (CW) probe laser in a Michelson interferometer with a GHz bandwidth. The pump and probe beams were coaxially focused using a 50× objective, giving a diffraction-limited spot size of 0.48 μm. The phase-encoded probe beam was demodulated using a homodyne interferometer. The detected time-domain signal was time reversed using k-space wave-propagation methods to produce a spatial distribution of PA sources in the target tissue. Performance was assessed using PA images of ex vivo rabbit lymph node specimens and human tooth samples. A minimum peak surface displacement sensitivity of 0.19 pm was measured. The all-optical PAI (AOPAI) system is well suited for assessment of retinal diseases, detection of caries lesions, skin burns, sectionless histology, and pressure or friction ulcers.

  16. Removal of eye blink artifacts in wireless EEG sensor networks using reduced-bandwidth canonical correlation analysis

    NASA Astrophysics Data System (ADS)

    Somers, Ben; Bertrand, Alexander

    2016-12-01

    Objective. Chronic, 24/7 EEG monitoring requires the use of highly miniaturized EEG modules, which only measure a few EEG channels over a small area. For improved spatial coverage, a wireless EEG sensor network (WESN) can be deployed, consisting of multiple EEG modules, which interact through short-distance wireless communication. In this paper, we aim to remove eye blink artifacts in each EEG channel of a WESN by optimally exploiting the correlation between EEG signals from different modules, under stringent communication bandwidth constraints. Approach. We apply a distributed canonical correlation analysis (CCA)-based algorithm, in which each module only transmits an optimal linear combination of its local EEG channels to the other modules. The method is validated on both synthetic and real EEG data sets, with emulated wireless transmissions. Main results. While strongly reducing the amount of data that is shared between nodes, we demonstrate that the algorithm achieves the same eye blink artifact removal performance as the equivalent centralized CCA algorithm, which is at least as good as other state-of-the-art multi-channel algorithms that require a transmission of all channels. Significance. Due to their potential for extreme miniaturization, WESNs are viewed as an enabling technology for chronic EEG monitoring. However, multi-channel analysis is hampered in WESNs due to the high energy cost for wireless communication. This paper shows that multi-channel eye blink artifact removal is possible with a significantly reduced wireless communication between EEG modules.

  17. The N/Rev phenomenon in simulating a blade-element rotor system

    NASA Technical Reports Server (NTRS)

    Mcfarland, R. E.

    1983-01-01

    When a simulation model produces frequencies that are beyond the bandwidth of a discrete implementation, anomalous frequencies appear within the bandwidth. Such is the case with blade-element models of rotor systems, which are used in the real-time, man-in-the-loop simulation environment. Steady-state, high-frequency harmonics generated by these models, whether aliased or not, obscure piloted helicopter simulation responses. Since these harmonics are attenuated in actual rotorcraft (e.g., because of structural damping), a faithful environment representation for handling-qualities purposes may be created from the original model by using certain filtering techniques, as outlined here. These include harmonic consideration, conventional filtering, and decontamination. The process of decontamination is of special interest because frequencies of importance to simulation operation are not attenuated, whereas superimposed aliased harmonics are.
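
    The anomaly described above is aliasing: any harmonic above half the simulation frame rate folds back in-band. The folded frequency can be computed directly; the numbers below are illustrative, not the paper's rotor data:

```python
def aliased_frequency(f, fs):
    """Apparent frequency, in [0, fs/2], of a tone at f Hz sampled at fs Hz."""
    f = f % fs                 # fold into one sampling period
    return min(f, fs - f)      # reflect about the Nyquist frequency

# A 130 Hz harmonic in a simulation running at a 100 Hz frame rate appears
# at 30 Hz, inside the bandwidth relevant to piloted handling qualities.
```

    Decontamination, as described in the abstract, targets exactly these predictable fold locations rather than attenuating the whole band.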

  18. Direct Fault Tolerant RLV Altitude Control: A Singular Perturbation Approach

    NASA Technical Reports Server (NTRS)

    Zhu, J. J.; Lawrence, D. A.; Fisher, J.; Shtessel, Y. B.; Hodel, A. S.; Lu, P.; Jackson, Scott (Technical Monitor)

    2002-01-01

    In this paper, we present a direct fault tolerant control (DFTC) technique, where by "direct" we mean that no explicit fault identification is used. The technique will be presented for the attitude controller (autopilot) for a reusable launch vehicle (RLV), although in principle it can be applied to many other applications. Any partial or complete failure of control actuators and effectors will be inferred from saturation of one or more commanded control signals generated by the controller. The saturation causes a reduction in the effective gain, or bandwidth, of the feedback loop, which can be modeled as an increase in singular perturbation in the loop. In order to maintain stability, the bandwidth of the nominal (reduced-order) system will be reduced proportionally according to singular perturbation theory. The presented DFTC technique automatically handles momentary saturations and integrator windup caused by excessive disturbances, guidance commands or dispersions under normal vehicle conditions. For multi-input, multi-output (MIMO) systems with redundant control effectors, such as the RLV attitude control system, an algorithm is presented for determining the direction of bandwidth cutback using the method of minimum-time optimal control with constrained control, in order to maintain the best performance that is possible with the reduced control authority. Other bandwidth cutback logic, such as one that preserves the commanded direction of the bandwidth or favors a preferred direction when the commanded direction cannot be achieved, is also discussed. In this extended abstract, a simple example is provided to demonstrate the idea. In the final paper, test results on the high fidelity 6-DOF X-33 model with severe dispersions will be presented.

  19. An improved model to predict bandwidth enhancement in an inductively tuned common source amplifier.

    PubMed

    Reza, Ashif; Misra, Anuraag; Das, Parnika

    2016-05-01

    This paper presents an improved model for the prediction of the bandwidth enhancement factor (BWEF) in an inductively tuned common source amplifier. In this model, we have included the effect of the drain-source channel resistance of the field effect transistor, along with load inductance and output capacitance, on the BWEF of the amplifier. A frequency domain analysis of the model is performed and a closed-form expression is derived for the BWEF of the amplifier. A prototype common source amplifier is designed and tested. The BWEF of the amplifier is obtained from the measured frequency response as a function of drain current and load inductance. In the present work, we clearly demonstrate that including the drain-source channel resistance in the proposed model yields BWEF estimates accurate to within 5% of the measured results.

  20. Voter model with arbitrary degree dependence: clout, confidence and irreversibility

    NASA Astrophysics Data System (ADS)

    Fotouhi, Babak; Rabbat, Michael G.

    2014-03-01

    The voter model is widely used to model opinion dynamics in society. In this paper, we propose three modifications that incorporate heterogeneity into the model and address unrealistic oversimplifications of the conventional voter model. We first consider the voter model with popularity bias, in which the influence of each node on its neighbors depends on its degree. We find the consensus probabilities and expected consensus times for each of the states. We also find the fixation probability, which is the probability that a single node whose state differs from every other node imposes its state on the entire system, as well as the expected fixation time. Then two other extensions to the model are proposed and the motivations behind them are discussed. The first is confidence, where in addition to the states of neighbors, nodes take their own state into account at each update. We repeat the calculations for the augmented model and investigate the effects of adding confidence. The second proposed extension is irreversibility, where one of the states is given the property that once nodes adopt it, they cannot switch back. This is motivated by applications where agents take an irreversible action, such as seeing a movie, purchasing a music album online, or buying a new product. The dynamics of densities, fixation times and consensus times are obtained.
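
    The first modification — popularity bias, where a neighbor's pull scales with its degree — can be sketched as a simulation; the graph, bias exponent, and update cap below are hypothetical:

```python
import random

def voter_consensus(adj, state, rng, bias=1.0, max_updates=100000):
    """Popularity-biased voter model: at each update a random node copies
    a neighbor chosen with probability proportional to degree**bias."""
    nodes = list(adj)
    for updates in range(1, max_updates + 1):
        i = rng.choice(nodes)
        nbrs = list(adj[i])
        weights = [len(adj[j]) ** bias for j in nbrs]
        state[i] = state[rng.choices(nbrs, weights=weights)[0]]
        if len(set(state.values())) == 1:   # consensus reached
            return state, updates
    return state, max_updates
```

    Confidence could be added by including node `i` itself among the candidates, and irreversibility by skipping the update whenever `state[i]` is the absorbing opinion.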

  1. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    NASA Astrophysics Data System (ADS)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is a key tool in the geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Also, the spatial extent of the investigated domain may vary strongly in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers and therefore permit very high-resolution simulations. We propose an efficient approach to solve memory-bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamic and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated: the glacier flow solver achieves good accuracy in the relevant benchmarks, and the coupled poromechanical solver explains previously unresolvable focused fluid flow as a natural outcome of the porosity setup. In both cases, near-peak memory bandwidth transfer is achieved. Our approach allows us to get the best out of the current hardware.
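
    The numerical core described above — an iterative finite-difference scheme with damped (preconditioned) residual updates on a regular grid — can be illustrated on a 1-D Poisson problem. Grid size, damping factor, and pseudo-time step are illustrative choices, not the cluster code:

```python
import numpy as np

def solve_poisson_1d(f, dx, niter=20000, damp=0.9):
    """Pseudo-transient iteration for u'' = f with u = 0 at both ends.
    The damped residual stands in for the residual preconditioning."""
    u = np.zeros(len(f))
    dudt = np.zeros(len(f))
    dt = dx ** 2 / 2.1                      # stable explicit pseudo-time step
    for _ in range(niter):
        res = np.zeros(len(f))
        res[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2 - f[1:-1]
        dudt = damp * dudt + res            # damped (preconditioned) residual
        u += dt * dudt
    return u
```

    Because each update touches only nearest neighbors, a multi-GPU version needs only point-to-point halo exchanges, which is why such schemes scale linearly by construction.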

  2. A Component-Based Diffusion Model With Structural Diversity for Social Networks.

    PubMed

    Qing Bao; Cheung, William K; Yu Zhang; Jiming Liu

    2017-04-01

    Diffusion on social networks refers to the process where opinions are spread via the connected nodes. Given a set of observed information cascades, one can infer the underlying diffusion process for social network analysis. The independent cascade model (IC model) is a widely adopted diffusion model where a node is assumed to be activated independently by any one of its neighbors. In reality, how a node will be activated also depends on how its neighbors are connected and activated. For instance, the opinions from neighbors of the same social group are often similar and thus redundant. In this paper, we extend the IC model by considering that: 1) the information coming from connected neighbors is similar and 2) the underlying redundancy can be modeled using a dynamic structural diversity measure of the neighbors. Our proposed model assumes each node to be activated independently by different communities (or components) of its parent nodes, each weighted by its effective size. An expectation maximization algorithm is derived to infer the model parameters. We compare the performance of the proposed model with the basic IC model and its variants using both synthetic data sets and a real-world data set containing news stories and Web blogs. Our empirical results show that incorporating the community structure of neighbors and the structural diversity measure into the diffusion model significantly improves the accuracy of the model, at the expense of only a reasonable increase in run-time.
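
    The baseline the paper extends is the independent cascade model, in which each newly activated node gets a single chance to activate each inactive neighbor. A minimal sketch (graph and activation probability are illustrative):

```python
import random

def independent_cascade(adj, p, seeds, rng):
    """Basic IC model: returns the set of nodes activated by `seeds`.
    Each new activation gives one chance per inactive neighbor."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj.get(u, ()):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active
```

    The paper's component-based extension would replace the per-neighbor probability `p` with a per-community activation weighted by that component's effective size, so redundant neighbors from one social group count less.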

  3. EDOVE: Energy and Depth Variance-Based Opportunistic Void Avoidance Scheme for Underwater Acoustic Sensor Networks

    PubMed Central

    Eun, Yongsoon

    2017-01-01

    Underwater Acoustic Sensor Networks (UASNs) come with intrinsic constraints because they are deployed in the aquatic environment and use acoustic signals to communicate. Examples of those constraints are long propagation delay, very limited bandwidth, high energy cost for transmission, very high signal attenuation, and costly deployment and battery replacement. Therefore, routing schemes for UASNs must take those characteristics into account to achieve energy fairness, avoid energy holes, and improve the network lifetime. Depth-based forwarding schemes in the literature use nodes' depth information to forward data towards the sink. They minimize data packet duplication by employing a holding-time strategy. However, to avoid void holes in the network, they use two-hop node proximity information. In this paper, we propose the Energy and Depth variance-based Opportunistic Void avoidance (EDOVE) scheme to achieve energy balancing and void avoidance in the network. EDOVE considers not only the depth parameter, but also the normalized residual energy of the one-hop nodes and the normalized depth variance of the second-hop neighbors. Hence, it avoids void regions, balances the network energy, and increases the network lifetime. The simulation results show that EDOVE achieves a more than 15% higher packet delivery ratio, propagates 50% fewer copies of each data packet, consumes less energy, and has a longer lifetime than state-of-the-art forwarding schemes. PMID:28954395

  4. Evolution of a radio communication relay system

    NASA Astrophysics Data System (ADS)

    Nguyen, Hoa G.; Pezeshkian, Narek; Hart, Abraham; Burmeister, Aaron; Holz, Kevin; Neff, Joseph; Roth, Leif

    2013-05-01

    Providing long-distance non-line-of-sight control for unmanned ground robots has long been recognized as a problem, considering the nature of the required high-bandwidth radio links. In the early 2000s, the DARPA Mobile Autonomous Robot Software (MARS) program funded the Space and Naval Warfare Systems Center (SSC) Pacific to demonstrate a capability for autonomous mobile communication relaying on a number of Pioneer laboratory robots. This effort also resulted in the development of ad hoc networking radios and software that were later leveraged in the development of a more practical and logistically simpler system, the Automatically Deployed Communication Relays (ADCR). Funded by the Joint Ground Robotics Enterprise and internally by SSC Pacific, several generations of ADCR systems introduced increasingly more capable hardware and software for automatic maintenance of communication links through deployment of static relay nodes from mobile robots. This capability was finally tapped in 2010 to fulfill an urgent need from theater. 243 kits of ruggedized, robot-deployable communication relays were produced and sent to Afghanistan to extend the range of EOD and tactical ground robots in 2012. This paper provides a summary of the evolution of the radio relay technology at SSC Pacific, and then focuses on the latest two stages, the Manually-Deployed Communication Relays and the latest effort to automate the deployment of these ruggedized and fielded relay nodes.

  5. Monolithic composite “pressure + acceleration + temperature + infrared” sensor using a versatile single-sided “SiN/Poly-Si/Al” process-module.

    PubMed

    Ni, Zao; Yang, Chen; Xu, Dehui; Zhou, Hong; Zhou, Wei; Li, Tie; Xiong, Bin; Li, Xinxin

    2013-01-16

    We report a newly developed design/fabrication module with a low-cost single-sided "low-stress silicon nitride (LS-SiN)/polysilicon (poly-Si)/Al" process for monolithic integration of composite sensors for sensing-network-node applications. A front-side surface-/bulk-micromachining process on a conventional Si substrate is developed, featuring a multifunctional SiN/poly-Si/Al layer design for diverse sensing functions. The first "pressure + acceleration + temperature + infrared" (PATIR) composite sensor, with a chip size of 2.5 mm × 2.5 mm, is demonstrated. Systematic theoretical design and analysis methods are developed. The diverse sensing components include a piezoresistive absolute-pressure sensor (up to 700 kPa, with a sensitivity of 49 mV/MPa under a 3.3 V supply voltage), a piezoresistive accelerometer (±10 g, with a sensitivity of 66 μV/g under 3.3 V and a -3 dB bandwidth of 780 Hz), a thermoelectric infrared detector (with a responsivity of 45 V/W and a detectivity of 3.6 × 10^7 cm·Hz^(1/2)/W) and a thermistor (-25 °C to 120 °C). This design/fabrication module concept enables a low-cost, monolithically integrated "multifunctional-library" technique. It can be utilized as a customizable tool for versatile application-specific requirements, which is very useful for small-size, low-cost, large-scale sensing-network node developments.

  6. Compressed ECG biometric: a fast, secured and efficient method for identification of CVD patient.

    PubMed

    Sufi, Fahim; Khalil, Ibrahim; Mahmood, Abdun

    2011-12-01

    Adoption of compression technology is often required for wireless cardiovascular monitoring, due to the enormous size of electrocardiography (ECG) signals and the limited bandwidth of the Internet. However, compressed ECG must be decompressed before performing human identification with existing ECG-based biometric techniques. This additional decompression step creates a significant processing delay for the identification task, which becomes an obvious burden on a system if it must be performed for trillions of compressed ECGs per hour by a hospital. Even though the hospital might be able to build an expensive infrastructure to tame the exuberant processing load, for small intermediate nodes in a multihop network, identification preceded by decompression is daunting. In this paper, we report a technique by which a person can be identified directly from his or her compressed ECG. This technique completely obviates the decompression step and therefore makes biometric identification less demanding for the smaller nodes in a multihop network. The biometric template created by this new technique is smaller than existing ECG-based biometric templates, as well as other forms of biometric template such as face, finger, and retina (up to 8302 times smaller than a face template and 9 times smaller than an existing ECG-based biometric template). The smaller template substantially reduces the one-to-many matching time for biometric recognition, resulting in a faster biometric authentication mechanism.

  7. An analysis of a nonlinear instability in the implementation of a VTOL control system

    NASA Technical Reports Server (NTRS)

    Weber, J. M.

    1982-01-01

    The contributions to nonlinear behavior and unstable response of the model-following yaw control system of a VTOL aircraft during hover were determined. The system was designed as a state-rate-feedback implicit model follower that provided yaw rate command/heading-hold capability and used combined full-authority parallel and limited-authority series servo actuators to generate an input to the yaw reaction control system of the aircraft. Both linear and nonlinear system models, as well as describing-function linearization techniques, were used to determine the influence on the control system instability of input magnitude and bandwidth, series servo authority, and system bandwidth. Results of the analysis describe stability boundaries as a function of these system design characteristics.

  8. Network structures sustained by internal links and distributed lifetime of old nodes in stationary state of number of nodes

    NASA Astrophysics Data System (ADS)

    Ikeda, Nobutoshi

    2017-12-01

    In network models that take into account growth properties, deletion of old nodes has a serious impact on degree distributions, because old nodes tend to become hub nodes. In this study, we aim to provide a simple explanation for why hubs can exist even in conditions where the number of nodes is stationary due to the deletion of old nodes. We show that an exponential increase in the degree of nodes is a natural consequence of the balance between the deletion and addition of nodes as long as a preferential attachment mechanism holds. As a result, the largest degree is determined by the magnitude relationship between the time scale of the exponential growth of degrees and lifetime of old nodes. The degree distribution exhibits a power-law form ∼ k^(-γ) with exponent γ = 1 when the lifetime of nodes is constant. However, various values of γ can be realized by introducing distributed lifetime of nodes.
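
    The stationary regime analyzed above — preferential attachment balanced by deletion of nodes older than a fixed lifetime — can be simulated directly; the sizes and parameters below are illustrative:

```python
import random

def grow_with_deletion(steps, m, lifetime, rng):
    """Each step: add one node with m preferentially attached links,
    then delete every node older than `lifetime` steps."""
    adj = {i: set(range(m + 1)) - {i} for i in range(m + 1)}   # small seed clique
    born = {i: 0 for i in adj}
    next_id = m + 1
    for t in range(1, steps + 1):
        stubs = [u for u, nb in adj.items() for _ in nb]       # degree-weighted pool
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))                     # preferential choice
        adj[next_id] = set(targets)
        born[next_id] = t
        for u in targets:
            adj[u].add(next_id)
        next_id += 1
        for u in [u for u in adj if t - born[u] >= lifetime]:  # expire old nodes
            for v in adj.pop(u):
                adj[v].discard(u)
            del born[u]
    return adj
```

    With a constant lifetime, the node count settles at `lifetime` while the oldest survivors keep accumulating links, which is the balance behind the γ = 1 case described above.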

  9. 47 CFR 24.133 - Emission limits.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... outside the authorized bandwidth and removed from the edge of the authorized bandwidth by a displacement... the authorized bandwidth and removed from the edge of the authorized bandwidth by a displacement... outside the authorized bandwidth and removed from the edge of the authorized bandwidth by a displacement...

  10. 47 CFR 24.133 - Emission limits.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... outside the authorized bandwidth and removed from the edge of the authorized bandwidth by a displacement... the authorized bandwidth and removed from the edge of the authorized bandwidth by a displacement... outside the authorized bandwidth and removed from the edge of the authorized bandwidth by a displacement...

  11. 47 CFR 24.133 - Emission limits.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... outside the authorized bandwidth and removed from the edge of the authorized bandwidth by a displacement... the authorized bandwidth and removed from the edge of the authorized bandwidth by a displacement... outside the authorized bandwidth and removed from the edge of the authorized bandwidth by a displacement...

  12. 47 CFR 24.133 - Emission limits.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... outside the authorized bandwidth and removed from the edge of the authorized bandwidth by a displacement... the authorized bandwidth and removed from the edge of the authorized bandwidth by a displacement... outside the authorized bandwidth and removed from the edge of the authorized bandwidth by a displacement...

  13. 47 CFR 24.133 - Emission limits.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... outside the authorized bandwidth and removed from the edge of the authorized bandwidth by a displacement... the authorized bandwidth and removed from the edge of the authorized bandwidth by a displacement... outside the authorized bandwidth and removed from the edge of the authorized bandwidth by a displacement...

  14. Analysis of helicopter flight dynamics through modeling and simulation of primary flight control actuation system

    NASA Astrophysics Data System (ADS)

    Nelson, Hunter Barton

    A simplified second-order transfer-function actuator model, as used in most flight dynamics applications, cannot easily capture the effects of different actuator parameters. The present work integrates a nonlinear actuator model into a nonlinear state-space rotorcraft model to determine the effect of actuator parameters on key flight dynamics. The completed actuator model was integrated with swashplate kinematics, and step responses were generated over a range of key hydraulic parameters. The actuator-swashplate system was then introduced into a nonlinear state-space rotorcraft simulation, where flight dynamics quantities such as bandwidth and phase delay were analyzed. Frequency sweeps were simulated for unique actuator configurations using the coupled nonlinear actuator-rotorcraft system. The software package CIFER was used for system identification, and the results were compared directly to the linearized models. As the actuator became rate saturated, the effects on bandwidth and phase delay were apparent in the predicted handling qualities specifications.
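
    The key nonlinearity studied — actuator rate saturation, which a plain second-order transfer function cannot capture — can be sketched as a rate-limited first-order lag; the time constant and rate limit below are hypothetical, not values identified in the thesis:

```python
def rate_limited_lag(cmd, tau, rate_max, dt):
    """First-order lag actuator whose output rate saturates at rate_max."""
    y, out = 0.0, []
    for c in cmd:
        rate = (c - y) / tau                        # linear lag rate demand
        rate = max(-rate_max, min(rate_max, rate))  # hydraulic rate saturation
        y += rate * dt
        out.append(y)
    return out
```

    While saturated, the output ramps at `rate_max` regardless of command amplitude, adding amplitude-dependent phase delay — exactly the effect frequency sweeps and CIFER-style identification expose.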

  15. Utilization and Outcomes of Sentinel Lymph Node Biopsy for Vulvar Cancer.

    PubMed

    Cham, Stephanie; Chen, Ling; Burke, William M; Hou, June Y; Tergas, Ana I; Hu, Jim C; Ananth, Cande V; Neugut, Alfred I; Hershman, Dawn L; Wright, Jason D

    2016-10-01

    To examine the use and predictors of sentinel node biopsy in women with vulvar cancer. The Perspective database, an all-payer database that collects data from more than 500 hospitals, was used to perform a retrospective cohort study of women with vulvar cancer who underwent vulvectomy and lymph node assessment from 2006 to 2015. Multivariable models were used to determine factors associated with sentinel node biopsy. Length of stay and cost were compared between women who underwent sentinel node biopsy and lymphadenectomy. Among 2,273 women, sentinel node biopsy was utilized in 618 (27.2%) and 1,655 (72.8%) underwent inguinofemoral lymphadenectomy. Performance of sentinel node biopsy increased from 17.0% (95% confidence interval [CI] 12.0-22.0%) in 2006 to 39.1% (95% CI 27.1-51.0%) in 2015. In a multivariable model, women treated more recently were more likely to have undergone sentinel node biopsy, whereas women with more comorbidities and those treated at rural hospitals were less likely to have undergone the procedure. The median length of stay was shorter for those undergoing sentinel node biopsy (median 2 days, interquartile range 1-3) compared with women who underwent inguinofemoral lymphadenectomy (median 3 days, interquartile range 2-4). The cost of sentinel node biopsy was $7,599 (interquartile range $5,739-9,922) compared with $8,095 (interquartile range $5,917-11,281) for lymphadenectomy. The use of sentinel node biopsy for vulvar cancer has more than doubled since 2006. Sentinel lymph node biopsy is associated with a shorter hospital stay and decreased cost compared with inguinofemoral lymphadenectomy.

  16. Deep fiber networks: new ready-to-deploy architectures yield technical and economic benefits

    NASA Astrophysics Data System (ADS)

    Sipes, Donald L., Jr.; Loveless, Robert

    2001-07-01

    The advent of digital technology in HFC networks has opened up a myriad of opportunities for MSOs. The introduction of these advanced services comes at a cost: namely, the need for increased capacity, and especially increased reusable bandwidth. In HFC networks all services are ostensibly broadcast, the prime difference between services being the footprint over which they are broadcast. Channel lineups for broadcast video services typically cover the largest area. Advertising zones are typically second, usually on the order of a typical 20K-home hub. For initial penetrations of high-speed data services such as cable modems, a typical hub site will be divided into several sectors using a single 6 MHz channel. Telephony services are broadcast over the smallest area, typically a 6 MHz channel for each node. Naturally, as penetration of these services increases, the broadcast area for each will also decrease.

  17. Fiber-connected position localization sensor networks

    NASA Astrophysics Data System (ADS)

    Pan, Shilong; Zhu, Dan; Fu, Jianbin; Yao, Tingfeng

    2014-11-01

    Position localization has drawn great attention due to its wide applications in radars, sonars, electronic warfare, wireless communications and so on. Photonic approaches to position localization can achieve high resolution, and also make it possible to move the signal processing from each sensor node to the central station, thanks to the low loss, immunity to electromagnetic interference (EMI) and broad bandwidth of photonic technologies. In this paper, we present a review of recent work on position localization based on photonic technologies. A fiber-connected ultra-wideband (UWB) sensor network using optical time-division multiplexing (OTDM) is proposed to realize high-resolution localization while moving the signal processing to the central station. A high spatial resolution of 3.9 cm is achieved. A wavelength-division multiplexed (WDM) fiber-connected sensor network is also demonstrated to realize localization independent of the received signal format.

  18. The Light Node Communication Framework: A New Way to Communicate Inside Smart Homes.

    PubMed

    Plantevin, Valère; Bouzouane, Abdenour; Gaboury, Sebastien

    2017-10-20

    The Internet of Things has profoundly changed the way we imagine information science and architecture, and smart homes are an important part of this domain. Created a decade ago, the few existing prototypes use the technologies of their day, forcing designers to create centralized and costly architectures that raise issues of reliability, scalability, and ease of access which cannot be tolerated in the context of assistance. In this paper, we briefly introduce a new kind of architecture in which the focus is placed on distribution. More specifically, we respond to the first issue we encountered by proposing a lightweight and portable messaging protocol. After running several tests, we observed maximized bandwidth with no packet loss and effective encryption. These results suggest that our innovation may be employed in a real distributed context with small entities.

  19. High-speed zero-copy data transfer for DAQ applications

    NASA Astrophysics Data System (ADS)

    Pisani, Flavio; Cámpora Pérez, Daniel Hugo; Neufeld, Niko

    2015-05-01

    The LHCb Data Acquisition (DAQ) will be upgraded in 2020 to a trigger-free readout. In order to achieve this goal we will need to connect around 500 nodes with a total network capacity of 32 Tb/s. To reach such a high network capacity we are testing zero-copy technology in order to maximize the theoretical link throughput without adding excessive CPU and memory bandwidth overhead, leaving resources free for data processing and thus reducing the power, space and cost required for the same result. We developed a modular test application which can be used with different transport layers. For the zero-copy implementation we chose the OFED IBVerbs API because it provides low-level access and high throughput. We present throughput and CPU usage measurements of 40 GbE solutions using Remote Direct Memory Access (RDMA) for several network configurations, to test the scalability of the system.
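
    A back-of-envelope check using only the figures quoted above shows why per-link efficiency matters here: 32 Tb/s spread over about 500 nodes leaves each node sustaining more than a single 40 GbE link can carry.

```python
total_capacity_gbps = 32 * 1000  # 32 Tb/s aggregate capacity (from the text)
nodes = 500                      # approximate node count (from the text)

per_node_gbps = total_capacity_gbps / nodes
print(per_node_gbps)             # 64.0 -> more than one 40 GbE link per node
```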

  20. The Light Node Communication Framework: A New Way to Communicate Inside Smart Homes

    PubMed Central

    Bouzouane, Abdenour; Gaboury, Sebastien

    2017-01-01

    The Internet of things has profoundly changed the way we imagine information science and architecture, and smart homes are an important part of this domain. Created a decade ago, the few existing prototypes use technologies of the day, forcing designers to create centralized and costly architectures that raise some issues concerning reliability, scalability, and ease of access which cannot be tolerated in the context of assistance. In this paper, we briefly introduce a new kind of architecture where the focus is placed on distribution. More specifically, we respond to the first issue we encountered by proposing a lightweight and portable messaging protocol. After running several tests, we observed a maximized bandwidth, whereby no packets were lost and good encryption was obtained. These results tend to prove that our innovation may be employed in a real context of distribution with small entities. PMID:29053581

  1. Optimizing Excited-State Electronic-Structure Codes for Intel Knights Landing: A Case Study on the BerkeleyGW Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deslippe, Jack; da Jornada, Felipe H.; Vigil-Fowler, Derek

    2016-10-06

    We profile and optimize calculations performed with the BerkeleyGW code on the Xeon-Phi architecture. BerkeleyGW depends both on hand-tuned critical kernels as well as on BLAS and FFT libraries. We describe the optimization process and performance improvements achieved. We discuss a layered parallelization strategy to take advantage of vector, thread and node-level parallelism. We discuss locality changes (including the consequence of the lack of L3 cache) and effective use of the on-package high-bandwidth memory. We show preliminary results on Knights-Landing including a roofline study of code performance before and after a number of optimizations. We find that the GW methodmore » is particularly well-suited for many-core architectures due to the ability to exploit a large amount of parallelism over plane-wave components, band-pairs, and frequencies.« less

  2. Feasibility of Using Distributed Wireless Mesh Networks for Medical Emergency Response

    PubMed Central

    Braunstein, Brian; Trimble, Troy; Mishra, Rajesh; Manoj, B. S.; Rao, Ramesh; Lenert, Leslie

    2006-01-01

    Achieving reliable, efficient data communications networks at a disaster site is a difficult task. Network paradigms, such as Wireless Mesh Network (WMN) architectures, form one exemplar for providing high-bandwidth, scalable data communication for medical emergency response activity. WMNs are created by self-organized wireless nodes that use multi-hop wireless relaying for data transfer. In this paper, we describe our experience using a mesh network architecture we developed for homeland security and medical emergency applications. We briefly discuss the architecture and present the traffic behavioral observations made by a client-server medical emergency application tested during a large-scale homeland security drill. We present our traffic measurements, describe lessons learned, and offer functional requirements (based on field testing) for practical 802.11 mesh medical emergency response networks. With certain caveats, the results suggest that 802.11 mesh networks are feasible and scalable systems for field communications in disaster settings. PMID:17238308

  3. Forming an ad-hoc nearby storage, based on IKAROS and social networking services

    NASA Astrophysics Data System (ADS)

    Filippidis, Christos; Cotronis, Yiannis; Markou, Christos

    2014-06-01

    We present an ad-hoc "nearby" storage, based on IKAROS and social networking services such as Facebook. By design, IKAROS is capable of increasing or decreasing the number of nodes of the I/O system instance on the fly, without bringing everything down or losing data. IKAROS can decide the file partition distribution schema by taking into account requests from the user or an application, as well as a domain or Virtual Organization policy. In this way, it is possible to form multiple instances of smaller-capacity, higher-bandwidth storage utilities capable of responding in an ad-hoc manner. This approach, focusing on flexibility, can scale both up and down and so can provide more cost-effective infrastructures for both large-scale and smaller-size systems. A set of experiments is performed comparing IKAROS with PVFS2, using multiple client requests under the HPC IOR benchmark and MPICH2.

  4. Modeling and Performance Evaluation of Backoff Misbehaving Nodes in CSMA/CA Networks

    DTIC Science & Technology

    2012-08-01

    Modeling and Performance Evaluation of Backoff Misbehaving Nodes in CSMA/CA Networks Zhuo Lu, Student Member, IEEE, Wenye Wang, Senior Member, IEEE... misbehaving nodes can obtain, we define and study two general classes of backoff misbehavior: continuous misbehavior, which keeps manipulating the backoff...misbehavior sporadically. Our approach is to introduce a new performance metric, namely order gain, to characterize the performance benefits of misbehaving

  5. A Distributed and Energy-Efficient Algorithm for Event K-Coverage in Underwater Sensor Networks

    PubMed Central

    Jiang, Peng; Xu, Yiming; Liu, Jun

    2017-01-01

    For event dynamic K-coverage algorithms, each management node selects its assistant node by using a greedy algorithm without considering the residual energy and situations in which a node is selected by several events. This approach affects network energy consumption and balance. Therefore, this study proposes a distributed and energy-efficient event K-coverage algorithm (DEEKA). After the network achieves 1-coverage, the nodes that detect the same event compete for the role of event management node based on the number of candidate nodes, the average residual energy, and the distance to the event. Second, each management node estimates the probability of its neighbor nodes being selected by the event it manages, using the distance level, the residual energy level, and the number of dynamic coverage events of these nodes. Third, each management node establishes an optimization model that takes the expected energy consumption, the residual energy variance of its neighbor nodes, and the detection performance for the events it manages as objectives. Finally, each management node uses a constrained non-dominated sorting genetic algorithm (NSGA-II) to obtain the Pareto set of the model and selects the best strategy via the technique for order preference by similarity to an ideal solution (TOPSIS). The algorithm first considers the effect of harsh underwater environments on information collection and transmission. It also considers the residual energy of a node and situations in which the node is selected by several other events. Simulation results show that, unlike the on-demand variable sensing K-coverage algorithm, DEEKA balances and reduces network energy consumption, thereby prolonging the network's best service quality and lifetime. PMID:28106837
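
    The final selection step named above, TOPSIS, can be sketched generically (a textbook TOPSIS implementation, not DEEKA's code; the example criteria, weights, and scores in the usage below are assumptions for illustration):

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution (TOPSIS).
    matrix[i][j]: score of alternative i on criterion j
    benefit[j]:  True if larger-is-better for criterion j, else cost."""
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each criterion column, then apply weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # ideal best and ideal worst values per criterion
    best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_best = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, best)))
        d_worst = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_worst / (d_best + d_worst))
    return scores

# e.g. two criteria: detection performance (benefit), energy variance (cost)
scores = topsis([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]], [0.5, 0.5], [True, False])
```

    An alternative that dominates on every criterion scores closest to 1, so the management node would pick it from the Pareto set.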

  6. High vertical resolution crosswell seismic imaging

    DOEpatents

    Lazaratos, Spyridon K.

    1999-12-07

    A method for producing high vertical resolution seismic images from crosswell data is disclosed. In accordance with one aspect of the disclosure, a set of vertically spaced, generally horizontally extending continuous layers and associated nodes are defined within a region between two boreholes. The specific number of nodes is selected such that the value of a particular characteristic of the subterranean region at each of the nodes is one which can be determined from the seismic data. Once values are established at the nodes, values of the particular characteristic are assigned to positions between the node points of each layer based on the values at nodes within that layer and without regard to the values at node points within any other layer. A seismic map is produced using the node values and the assigned values therebetween. In accordance with another aspect of the disclosure, an approximate model of the region is established using direct arrival traveltime data. Thereafter, the approximate model is adjusted using reflected arrival data. In accordance with still another aspect of the disclosure, correction is provided for well deviation. An associated technique which provides improvements in ray tracing is also disclosed.
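
    The per-layer assignment idea can be sketched as follows (linear interpolation is an assumed choice here for illustration; the disclosure only requires that in-layer values be computed without regard to other layers):

```python
def fill_layer(node_x, node_vals, xs):
    """Assign a value at each position in xs using only this layer's own
    nodes: linear interpolation between the bracketing node points."""
    pairs = sorted(zip(node_x, node_vals))
    out = []
    for x in xs:
        for (x0, v0), (x1, v1) in zip(pairs, pairs[1:]):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)
                out.append(v0 + t * (v1 - v0))
                break
    return out
```

    Each layer is filled independently, so a value halfway between two nodes depends only on those two nodes, never on adjacent layers.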

  7. Fluorescence imaging to study cancer burden on lymph nodes

    NASA Astrophysics Data System (ADS)

    D'Souza, Alisha V.; Elliott, Jonathan T.; Gunn, Jason R.; Samkoe, Kimberley S.; Tichauer, Kenneth M.; Pogue, Brian W.

    2015-03-01

    Morbidity and complexity involved in lymph node staging via surgical resection and biopsy call for staging techniques that are less invasive. While visible blue dyes are commonly used in locating sentinel lymph nodes, since they follow tumor-draining lymphatic vessels, they do not provide a metric to evaluate the presence of cancer. An area of active research is to use fluorescent dyes to assess the tumor burden of sentinel and secondary lymph nodes. The goal of this work was to deploy and test an intra-nodal cancer-cell injection model to enable planar fluorescence imaging of a clinically relevant blue dye, specifically methylene blue, along with a cancer-targeting tracer, Affibody labeled with IRDye800CW, and to subsequently segregate tumor-bearing from normal lymph nodes. This direct-injection-based tumor model was employed in athymic rats (6 normal, 4 controls, 6 cancer-bearing), where luciferase-expressing breast cancer cells were injected into axillary lymph nodes. Tumor presence in nodes was confirmed by bioluminescence imaging before and after fluorescence imaging. Lymphatic uptake from the injection site (intradermal on forepaw) to lymph node was imaged at approximately 2 frames/minute. Large variability was observed within each cohort.

  8. Glide dislocation nucleation from dislocation nodes at semi-coherent {111} Cu–Ni interfaces

    DOE PAGES

    Shao, Shuai; Wang, Jian; Beyerlein, Irene J.; ...

    2015-07-23

    Using atomistic simulations and dislocation theory on a model system of semi-coherent {1 1 1} interfaces, we show that misfit dislocation nodes adopt multiple atomic arrangements corresponding to the creation and redistribution of excess volume at the nodes. We identified four distinctive node structures: volume-smeared nodes with (i) spiral or (ii) straight dislocation patterns, and volume-condensed nodes with (iii) triangular or (iv) hexagonal dislocation patterns. Volume-smeared nodes contain interfacial dislocations lying in the Cu–Ni interface but volume-condensed nodes contain two sets of interfacial dislocations in the two adjacent interfaces and jogs across the atomic layer between the two adjacent interfaces.more » Finally, under biaxial tension/compression applied parallel to the interface, we show that the nucleation of lattice dislocations is preferred at the nodes and is correlated with the reduction of excess volume at the nodes.« less

  9. Competitive game theoretic optimal routing in optical networks

    NASA Astrophysics Data System (ADS)

    Yassine, Abdulsalam; Kabranov, Ognian; Makrakis, Dimitrios

    2002-09-01

    Optical transport service providers need control and optimization strategies for wavelength management, network provisioning, restoration and protection, allowing them to define and deploy new services and maintain competitiveness. In this paper, we investigate a game theory based model for wavelength and flow assignment in multi-wavelength optical networks, consisting of several backbone long-haul optical network transport service providers (TSPs) who offer their services, in terms of bandwidth, to Internet service providers (ISPs). The ISPs act as brokers or agents between the TSP and the end user. The agent (ISP) buys services (bandwidth) from the TSP. The TSPs compete among themselves to sell their services and maintain profitability. We present a case study demonstrating the impact of different bandwidth broker demands on the supplier's profit and the price paid by the network broker.

  10. Extreme events and event size fluctuations in biased random walks on networks.

    PubMed

    Kishore, Vimal; Santhanam, M S; Amritkar, R E

    2012-05-01

    Random walks on discrete lattice models are important for understanding various types of transport processes. Extreme events, defined as exceedances of the flux of walkers above a prescribed threshold, have been studied recently in the context of complex networks. This was motivated by the occurrence of rare events such as traffic jams, floods, and power blackouts which take place on networks. In this work, we study extreme events in a generalized random walk model in which the walk is preferentially biased by the network topology. The walkers preferentially choose to hop toward the hubs or toward small-degree nodes. In this setting, we show that extremely large fluctuations in event sizes are possible on small-degree nodes when the walkers are biased toward the hubs. In particular, we obtain the distribution of event sizes on the network. Further, the probability of the occurrence of extreme events on any node in the network depends on its "generalized strength," a measure of the ability of a node to attract walkers. The generalized strength is a function of the degree of the node and that of its nearest neighbors. We obtain analytical and simulation results for the probability of occurrence of extreme events on the nodes of a network using a generalized random walk model. The results reveal that nodes with a larger value of generalized strength, on average, display a lower probability of extreme-event occurrence than nodes with smaller values of generalized strength.
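
    A minimal sketch of a degree-biased walk on a toy graph (the deg(j)**alpha hop weighting and the hub-plus-ring graph below are illustrative assumptions in the spirit of the model, not the paper's networks):

```python
import random
from collections import Counter

# toy graph: hub node 0 connected to a ring of nodes 1..5
edges = [(0, i) for i in range(1, 6)] + [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
adj = {}
for a, b in edges:
    adj.setdefault(a, []).append(b)
    adj.setdefault(b, []).append(a)

def biased_walk(alpha=1.0, steps=50000, seed=1):
    """Walker hops to neighbor j with probability proportional to deg(j)**alpha.
    alpha > 0 biases toward hubs, alpha < 0 toward small-degree nodes."""
    rng = random.Random(seed)
    node = 0
    visits = Counter()
    for _ in range(steps):
        nbrs = adj[node]
        w = [len(adj[j]) ** alpha for j in nbrs]
        r = rng.random() * sum(w)
        node = nbrs[-1]           # fallback guards float rounding
        for j, wj in zip(nbrs, w):
            r -= wj
            if r <= 0:
                node = j
                break
        visits[node] += 1
    return visits

v = biased_walk(alpha=1.0)
```

    With alpha > 0 the hub accumulates the most flux; the abstract's extreme fluctuations then appear on the lightly visited small-degree nodes.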

  11. Efficiently sphere-decodable physical layer transmission schemes for wireless storage networks

    NASA Astrophysics Data System (ADS)

    Lu, Hsiao-Feng Francis; Barreal, Amaro; Karpuk, David; Hollanti, Camilla

    2016-12-01

    Three transmission schemes over a new type of multiple-access channel (MAC) model with inter-source communication links are proposed and investigated in this paper. This new channel model is well motivated by, e.g., wireless distributed storage networks, where communication to repair a lost node takes place from helper nodes to a repairing node over a wireless channel. Since in many wireless networks nodes can come and go in an arbitrary manner, there must be an inherent capability of inter-node communication between every pair of nodes. Assuming that communication is possible between every pair of helper nodes, the newly proposed schemes are based on various smart time-sharing and relaying strategies. In other words, certain helper nodes will be regarded as relays, thereby converting the conventional uncooperative multiple-access channel to a multiple-access relay channel (MARC). The diversity-multiplexing gain tradeoff (DMT) of the system, together with efficient sphere-decodability and low structural complexity in terms of the number of antennas required at each end, is used as the main design objective. While the optimal DMT for the new channel model remains fully open, it is shown that the proposed schemes outperform the DMT of the simple time-sharing protocol and, in some cases, even the optimal uncooperative MAC DMT. While using a wireless distributed storage network as a motivating example throughout the paper, the MAC transmission techniques proposed here are completely general and as such applicable to any MAC communication with inter-source communication links.

  12. Revisiting node-based SIR models in complex networks with degree correlations

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Cao, Jinde; Alofi, Abdulaziz; AL-Mazrooei, Abdullah; Elaiw, Ahmed

    2015-11-01

    In this paper, we consider two growing networks which will lead to the degree-degree correlations between two nearest neighbors in the network. When the network grows to some certain size, we introduce an SIR-like disease such as pandemic influenza H1N1/09 to the population. Due to its rapid spread, the population size changes slowly, and thus the disease spreads on correlated networks with approximately fixed size. To predict the disease evolution on correlated networks, we first review two node-based SIR models incorporating degree correlations and an edge-based SIR model without considering degree correlation, and then compare the predictions of these models with stochastic SIR simulations, respectively. We find that the edge-based model, even without considering degree correlations, agrees much better than the node-based models incorporating degree correlations with stochastic SIR simulations in many respects. Moreover, simulation results show that for networks with positive correlation, the edge-based model provides a better upper bound of the cumulative incidence than the node-based SIR models, whereas for networks with negative correlation, it provides a lower bound of the cumulative incidence.
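
    The stochastic SIR simulations used as the benchmark above can be sketched as a simple discrete-time process on a fixed contact network (a generic sketch; the paper's growing-network construction and degree-correlation analysis are more involved):

```python
import random

def sir_simulation(adj, beta=0.3, gamma=0.1, seed=2, patient_zero=0):
    """Discrete-time stochastic SIR on a contact network: each step, every
    infected node transmits along each S-neighbor edge with prob. beta,
    then recovers with prob. gamma. Returns the cumulative incidence."""
    rng = random.Random(seed)
    state = {n: 'S' for n in adj}
    state[patient_zero] = 'I'
    infected = {patient_zero}
    while infected:
        new_inf, recovered = set(), set()
        for i in infected:
            for j in adj[i]:
                if state[j] == 'S' and rng.random() < beta:
                    new_inf.add(j)
            if rng.random() < gamma:
                recovered.add(i)
        for j in new_inf:
            state[j] = 'I'
        for i in recovered:
            state[i] = 'R'
        infected = (infected - recovered) | new_inf
    return sum(1 for s in state.values() if s == 'R')

# example contact network: a complete graph on 20 nodes (an assumption)
k20 = {i: [j for j in range(20) if j != i] for i in range(20)}
```

    Averaging such runs over many seeds gives the stochastic cumulative-incidence curves that the node-based and edge-based models are compared against.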

  13. An MPI-based MoSST core dynamics model

    NASA Astrophysics Data System (ADS)

    Jiang, Weiyuan; Kuang, Weijia

    2008-09-01

    Distributed systems are among the main cost-effective and expandable platforms for high-end scientific computing. Therefore scalable numerical models are important for effective use of such systems. In this paper, we present an MPI-based numerical core dynamics model for simulation of geodynamo and planetary dynamos, and for simulation of core-mantle interactions. The model is developed based on MPI libraries. Two algorithms are used for node-node communication: a "master-slave" architecture and a "divide-and-conquer" architecture. The former is easy to implement but not scalable in communication. The latter is scalable in both computation and communication. The model scalability is tested on Linux PC clusters with up to 128 nodes. This model is also benchmarked with a published numerical dynamo model solution.
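
    The communication-scalability contrast between the two architectures can be made concrete with a rough round count (a simplifying abstraction assuming one message per node per round and uniform latency, not the model's actual MPI code):

```python
import math

def rounds_master_slave(p):
    """Master gathers one message from each of the p - 1 workers in turn:
    communication grows linearly with the node count."""
    return p - 1

def rounds_divide_and_conquer(p):
    """Pairwise recursive exchange halves the active set each round:
    communication grows logarithmically with the node count."""
    return math.ceil(math.log2(p))

print(rounds_master_slave(128), rounds_divide_and_conquer(128))  # 127 7
```

    At the 128-node scale tested in the paper, the gap (127 rounds versus 7) illustrates why only the divide-and-conquer architecture is scalable in communication.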

  14. Assembly Mechanism of the Contractile Ring for Cytokinesis by Fission Yeast

    NASA Astrophysics Data System (ADS)

    Vavylonis, Dimitrios; Wu, Jian-Qiu; Huang, Xiaolei; O'Shaughnessy, Ben; Pollard, Thomas

    2008-03-01

    Animals and fungi assemble a contractile ring of actin filaments and the motor protein myosin to separate into individual daughter cells during cytokinesis. We studied the mechanism of contractile ring assembly in fission yeast with high-time-resolution confocal microscopy, computational image analysis methods, and numerical simulations. Approximately 63 nodes containing myosin, broadly distributed around the cell equator, assembled into a ring through stochastic motions, making many starts, stops, and changes of direction as they condensed into a ring. Estimates of node friction coefficients from the mean square displacement of stationary nodes imply that forces for node movement are greater than ~4 pN, similar to the forces exerted by a few molecular motors. Skeletonization and topology analysis of images of cells expressing fluorescent actin filament markers showed transient linear elements extending in all directions from myosin nodes and establishing connections among them. We propose a model with traction between nodes depending on transient connections established by stochastic search and capture ("search, capture, pull and release"). Numerical simulations of the model using parameter values obtained from experiment successfully condense the nodes into a continuous ring.

  15. Measures of node centrality in mobile social networks

    NASA Astrophysics Data System (ADS)

    Gao, Zhenxiang; Shi, Yan; Chen, Shanzhi

    2015-02-01

    Mobile social networks exploit human mobility and consequent device-to-device contact to opportunistically create data paths over time. While links in mobile social networks are time-varied and strongly impacted by human mobility, discovering influential nodes is one of the important issues for efficient information propagation in mobile social networks. Although traditional centrality definitions give metrics to identify the nodes with central positions in static binary networks, they cannot effectively identify the influential nodes for information propagation in mobile social networks. In this paper, we address the problems of discovering the influential nodes in mobile social networks. We first use the temporal evolution graph model which can more accurately capture the topology dynamics of the mobile social network over time. Based on the model, we explore human social relations and mobility patterns to redefine three common centrality metrics: degree centrality, closeness centrality and betweenness centrality. We then employ empirical traces to evaluate the benefits of the proposed centrality metrics, and discuss the predictability of nodes' global centrality ranking by nodes' local centrality ranking. Results demonstrate the efficiency of the proposed centrality metrics.
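
    For reference, two of the three static metrics being redefined can be computed on a toy graph as follows (standard definitions on a static unweighted graph, not the paper's temporal redefinitions; the star graph is an illustrative assumption):

```python
from collections import deque

def degree_centrality(adj):
    """Fraction of other nodes each node is directly connected to."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj):
    """Closeness = (n - 1) / sum of BFS shortest-path distances."""
    n = len(adj)
    out = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        out[s] = (n - 1) / sum(dist[v] for v in dist if v != s)
    return out

# star graph: the center is maximally central by both metrics
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
```

    The paper's contribution is to replace the static adjacency used here with a temporal evolution graph capturing contacts over time.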

  16. Simulation for Prediction of Entry Article Demise (SPEAD): an Analysis Tool for Spacecraft Safety Analysis and Ascent/Reentry Risk Assessment

    NASA Technical Reports Server (NTRS)

    Ling, Lisa

    2014-01-01

    For the purpose of performing safety analysis and risk assessment for a probable off-nominal suborbital/orbital atmospheric reentry resulting in vehicle breakup, a synthesis of trajectory propagation coupled with thermal analysis and the evaluation of node failure is required to predict the sequence of events, the timeline, and the progressive demise of spacecraft components. To provide this capability, the Simulation for Prediction of Entry Article Demise (SPEAD) analysis tool was developed. This report discusses the capabilities, modeling, and validation of the SPEAD analysis tool. SPEAD is applicable for Earth or Mars, with the option for 3 or 6 degrees-of-freedom (DOF) trajectory propagation. The atmosphere and aerodynamics data are supplied in tables, for linear interpolation of up to 4 independent variables. The gravitation model can include up to 20 zonal harmonic coefficients. The modeling of a single motor is available and can be adapted to multiple motors. For thermal analysis, the aerodynamic radiative and free-molecular/continuum convective heating, black-body radiative cooling, conductive heat transfer between adjacent nodes, and node ablation are modeled. In a 6-DOF simulation, the local convective heating on a node is a function of Mach, angle-of-attack, and sideslip angle, and is dependent on 1) the location of the node in the spacecraft and its orientation to the flow, modeled by an exposure factor, and 2) the geometries of the spacecraft and the node, modeled by a heating factor and convective area. Node failure is evaluated using criteria based on melting temperature, reference heat load, g-load, or a combination of the above. The failure of a liquid propellant tank is evaluated based on burnout flux from nucleate boiling or excess internal pressure. Following a component failure, updates are made as needed to the spacecraft mass and aerodynamic properties, nodal exposure and heating factors, and nodal convective and conductive areas.
This allows the trajectory to be propagated seamlessly in a single run, inclusive of the trajectories of components that have separated from the spacecraft. The node ablation simulates the decreasing mass and convective/reference areas, and variable heating factor. A built-in database provides the thermo-mechanical properties of

  17. Suppressing epidemics on networks by exploiting observer nodes.

    PubMed

    Takaguchi, Taro; Hasegawa, Takehisa; Yoshida, Yuichi

    2014-07-01

    To control infection spreading on networks, we investigate the effect of observer nodes that recognize infection in a neighboring node and make the rest of the neighbor nodes immune. We numerically show that random placement of observer nodes works better on networks with clustering than on locally treelike networks, implying that our model is promising for realistic social networks. The efficiency of several heuristic schemes for observer placement is also examined for synthetic and empirical networks. In parallel with numerical simulations of epidemic dynamics, we also show that the effect of observer placement can be assessed by the size of the largest connected component of networks remaining after removing observer nodes and links between their neighboring nodes.
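
    The assessment criterion in the last sentence can be sketched directly: remove the observers and cut the links between their neighbors, then measure the largest remaining connected component (the graph and observer placement in the test are illustrative assumptions):

```python
def largest_component_after_observers(adj, observers):
    """Remove observer nodes, cut links between pairs of an observer's
    neighbors, then return the size of the largest remaining component."""
    observers = set(observers)
    cut = set()
    for o in observers:
        nbrs = [v for v in adj[o] if v not in observers]
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                if b in adj[a]:               # only cut links that exist
                    cut.add(frozenset((a, b)))
    nodes = set(adj) - observers
    seen, best = set(), 0
    for s in nodes:                           # DFS over surviving subgraph
        if s in seen:
            continue
        stack, comp = [s], 0
        seen.add(s)
        while stack:
            u = stack.pop()
            comp += 1
            for w in adj[u]:
                if w in nodes and w not in seen and frozenset((u, w)) not in cut:
                    seen.add(w)
                    stack.append(w)
        best = max(best, comp)
    return best
```

    A small surviving largest component indicates an observer placement that fragments the network and so suppresses large outbreaks.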

  18. Suppressing epidemics on networks by exploiting observer nodes

    NASA Astrophysics Data System (ADS)

    Takaguchi, Taro; Hasegawa, Takehisa; Yoshida, Yuichi

    2014-07-01

    To control infection spreading on networks, we investigate the effect of observer nodes that recognize infection in a neighboring node and make the rest of the neighbor nodes immune. We numerically show that random placement of observer nodes works better on networks with clustering than on locally treelike networks, implying that our model is promising for realistic social networks. The efficiency of several heuristic schemes for observer placement is also examined for synthetic and empirical networks. In parallel with numerical simulations of epidemic dynamics, we also show that the effect of observer placement can be assessed by the size of the largest connected component of networks remaining after removing observer nodes and links between their neighboring nodes.

  19. Diffusion in Colocation Contact Networks: The Impact of Nodal Spatiotemporal Dynamics.

    PubMed

    Thomas, Bryce; Jurdak, Raja; Zhao, Kun; Atkinson, Ian

    2016-01-01

    Temporal contact networks are studied to understand dynamic spreading phenomena such as communicable diseases or information dissemination. To establish how spatiotemporal dynamics of nodes impact spreading potential in colocation contact networks, we propose "inducement-shuffling" null models which break one or more correlations between times, locations and nodes. By reconfiguring the time and/or location of each node's presence in the network, these models induce alternative sets of colocation events giving rise to contact networks with varying spreading potential. This enables second-order causal reasoning about how correlations in nodes' spatiotemporal preferences not only lead to a given contact network but ultimately influence the network's spreading potential. We find the correlation between nodes and times to be the greatest impediment to spreading, while the correlation between times and locations slightly catalyzes spreading. Under each of the presented null models we measure both the number of contacts and infection prevalence as a function of time, with the surprising finding that the two have no direct causality.
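The core idea of an inducement-shuffling null model, breaking a chosen correlation among nodes, times, and locations and then recomputing the induced colocation contacts, can be sketched as follows. This is an illustrative reconstruction under assumed data structures (presence records as `(node, time, location)` tuples), not the paper's implementation, and the single shuffle shown here (reassigning times while fixing node-location pairs) is just one of the possible shufflings:

```python
import random
from collections import defaultdict

def colocation_contacts(records):
    """records: iterable of (node, time, location) presence events.
    Two nodes are in contact when present at the same (time, location)."""
    cell = defaultdict(set)
    for node, t, loc in records:
        cell[(t, loc)].add(node)
    contacts = set()
    for nodes in cell.values():
        ns = sorted(nodes)
        for i in range(len(ns)):
            for j in range(i + 1, len(ns)):
                contacts.add((ns[i], ns[j]))
    return contacts

def shuffle_times(records, rng=random):
    """Null model breaking node-time (and time-location) correlation:
    redistribute the pool of observation times uniformly at random
    across records while keeping each record's node and location."""
    times = [t for _, t, _ in records]
    rng.shuffle(times)
    return [(n, t, loc) for (n, _, loc), t in zip(records, times)]
```

Comparing spreading simulations on the empirical contacts against contacts induced by such shuffled records is what isolates the effect of each spatiotemporal correlation.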

  20. The Role of Energy Reservoirs in Distributed Computing: Manufacturing, Implementing, and Optimizing Energy Storage in Energy-Autonomous Sensor Nodes

    NASA Astrophysics Data System (ADS)

    Cowell, Martin Andrew

The world already hosts more internet-connected devices than people, and that ratio is only increasing. These devices seamlessly integrate with people's lives to collect rich data and give immediate feedback about complex systems in business, health care, transportation, and security. As every aspect of the global economy integrates distributed computing into its industrial systems, those systems benefit from rich datasets. Managing the power demands of these distributed computers will be paramount to ensure the continued operation of these networks, and is elegantly addressed by including local energy harvesting and storage on a per-node basis. By replacing non-rechargeable batteries with energy harvesting, wireless sensor nodes will increase their lifetimes by an order of magnitude. This work investigates the coupling of high-power energy storage with energy harvesting technologies to power wireless sensor nodes, with sections covering device manufacturing, system integration, and mathematical modeling. First we consider the energy storage mechanisms of supercapacitors and batteries, and identify favorable characteristics in both reservoir types. We then discuss experimental methods used to manufacture high-power supercapacitors in our labs. We go on to detail the integration of our fabricated devices with collaborating labs to create functional sensor node demonstrations. With the practical knowledge gained through in-lab manufacturing and system integration, we build mathematical models to aid in device and system design. First, we model the mechanism of energy storage in porous graphene supercapacitors to aid in component architecture optimization. We then model the operation of entire sensor nodes for the purpose of optimally sizing the energy harvesting and energy reservoir components. In consideration of deploying these sensor nodes in real-world environments, we model the operation of our energy harvesting and power management systems subject to spatially and temporally varying energy availability in order to understand sensor node reliability. Looking to the future, we see an opportunity for further research to implement machine learning algorithms to control the energy resources of distributed computing networks.
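The reservoir-sizing problem described above reduces to a simple energy-balance walk over harvest and load traces: the minimum capacity is the worst-case cumulative deficit when starting from a full reservoir. This is a minimal sketch of that idea, not the dissertation's model; the function name, the per-interval trace format, and the single charging-efficiency factor are assumptions:

```python
def min_reservoir_capacity(harvest, load, efficiency=0.9):
    """Smallest reservoir capacity (J) that keeps a node alive, given
    per-interval harvested energy and load energy (J), assuming the
    node starts with a full reservoir.

    We track the energy deficit relative to a full reservoir; it can
    never go below zero (a full reservoir cannot store surplus), and
    its maximum over the trace is the required capacity."""
    deficit = 0.0
    worst = 0.0
    for h, l in zip(harvest, load):
        deficit += l - h * efficiency
        deficit = max(deficit, 0.0)
        worst = max(worst, deficit)
    return worst
```

Running this over traces of spatially and temporally varying energy availability, as the abstract describes, yields the capacity margin needed for a target reliability.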
